[ { "data": "1e1e1f 424143 67666a 807f83 cbc9cfe74d1a f26628 ff8337 febb25 feec2d\nRoboto Lightfeec2dApache Beam\nProject Palette " } ]
{ "category": "App Definition and Development", "file_name": "palette.pdf", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "Iterator Archetype\nAuthor : David Abrahams, Jeremy Siek, Thomas Witt\nContact : dave@boost-consulting.com ,jsiek@osl.iu.edu ,witt@styleadvisor.com\nOrganization :Boost Consulting , Indiana University Open Systems Lab ,Zephyr Asso-\nciates, Inc.\nDate : 2004-11-01\nCopyright : Copyright David Abrahams, Jeremy Siek, and Thomas Witt 2004.\nabstract: The iterator_archetype class constructs a minimal implementation of one of\nthe iterator access concepts and one of the iterator traversal concepts. This is used for\ndoing a compile-time check to see if a the type requirements of a template are really\nenough to cover the implementation of the template. For further information see the\ndocumentation for the boost::concept_check library.\nTable of Contents\nReference\niterator_archetype Synopsis\nAccess Category Tags\niterator_archetype Requirements\niterator_archetype Models\nTraits\nReference\niterator_archetype Synopsis\nnamespace iterator_archetypes\n{\n// Access categories\ntypedef /*implementation defined*/ readable_iterator_t;\ntypedef /*implementation defined*/ writable_iterator_t;\ntypedef /*implementation defined*/ readable_writable_iterator_t;\ntypedef /*implementation defined*/ readable_lvalue_iterator_t;\ntypedef /*implementation defined*/ writable_lvalue_iterator_t;\n}\ntemplate <\nclass Value\n, class AccessCategory\n1, class TraversalCategory\n>\nclass iterator_archetype\n{\ntypedef /* see below */ value_type;\ntypedef /* see below */ reference;\ntypedef /* see below */ pointer;\ntypedef /* see below */ difference_type;\ntypedef /* see below */ iterator_category;\n};\nAccess Category Tags\nThe access category types provided correspond to the following standard iterator access concept com-\nbinations:\nreadable_iterator_t :=\nReadable Iterator\nwritable_iterator_t :=\nWriteable Iterator\nreadable_writable_iterator_t :=\nReadable Iterator & Writeable Iterator & Swappable Iterator\nreadable_lvalue_iterator_t :=\nReadable Iterator & Lvalue Iterator\nwriteable_lvalue_iterator_t :=\nReadable Iterator & Writeable Iterator & Swappable Iterator & Lvalue Iter-\nator\niterator_archetype Requirements\nThe AccessCategory argument must be one of the predefined access category tags. The Traversal-\nCategory must be one of the standard traversal tags. The Value type must satisfy the requirements\nof the iterator concept specified by AccessCategory andTraversalCategory as implied by the nested\ntraits types.\niterator_archetype Models\niterator_archetype models the iterator concepts specified by the AccessCategory and Traversal-\nCategory arguments. iterator_archetype does not model any other access concepts or any more\nderived traversal concepts.\n2Traits\nThe nested trait types are defined as follows:\nif (AccessCategory == readable_iterator_t)\nvalue_type = Value\nreference = Value\npointer = Value*\nelse if (AccessCategory == writable_iterator_t)\nvalue_type = void\nreference = void\npointer = void\nelse if (AccessCategory == readable_writable_iterator_t)\nvalue_type = Value\nreference :=\nA type X that is convertible to Value for which the following\nexpression is valid. 
Given an object x of type X and v of type\nValue.\nx = v\npointer = Value*\nelse if (AccessCategory == readable_lvalue_iterator_t)\nvalue_type = Value\nreference = Value const&\npointer = Value const*\nelse if (AccessCategory == writable_lvalue_iterator_t)\nvalue_type = Value\nreference = Value&\npointer = Value*\nif ( TraversalCategory is convertible to forward_traversal_tag )\ndifference_type := ptrdiff_t\nelse\ndifference_type := unspecified type\niterator_category :=\nA type X satisfying the following two constraints:\n31. X is convertible to X1, and not to any more-derived\ntype, where X1 is defined by:\nif (reference is a reference type\n&& TraversalCategory is convertible to forward_traversal_tag)\n{\nif (TraversalCategory is convertible to ran-\ndom_access_traversal_tag)\nX1 = random_access_iterator_tag\nelse if (TraversalCategory is convertible to bidirec-\ntional_traversal_tag)\nX1 = bidirectional_iterator_tag\nelse\nX1 = forward_iterator_tag\n}\nelse\n{\nif (TraversalCategory is convertible to sin-\ngle_pass_traversal_tag\n&& reference != void)\nX1 = input_iterator_tag\nelse\nX1 = output_iterator_tag\n}\n2. X is convertible to TraversalCategory\n4" } ]
{ "category": "App Definition and Development", "file_name": "iterator_archetypes.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "Druid\nA Real-time Analytical Data Store\nFangjin Y ang\nMetamarkets Group, Inc.\nfangjin@metamarkets.comEric Tschetter\necheddar@gmail.comXavier Léauté\nMetamarkets Group, Inc.\nxavier@metamarkets.com\nNelson Ray\nncray86@gmail.comGian Merlino\nMetamarkets Group, Inc.\ngian@metamarkets.comDeep Ganguli\nMetamarkets Group, Inc.\ndeep@metamarkets.com\nABSTRACT\nDruidisanopensource1datastoredesignedforreal-timeexploratory\nanalyticsonlargedatasets. Thesystemcombinesacolumn-oriented\nstorage layout, a distributed, shared-nothing architecture, and an\nadvanced indexing structure to allow for the arbitrary exploration\nof billion-row tables with sub-second latencies. In this paper, we\ndescribeDruid’sarchitecture,anddetailhowitsupportsfastaggre-\ngations, flexible filters, and low latency data ingestion.\nCategories and Subject Descriptors\nH.2.4[DatabaseManagement ]: Systems— Distributeddatabases\nKeywords\ndistributed;real-time;fault-tolerant;highlyavailable;opensource;\nanalytics; column-oriented; OLAP\n1. INTRODUCTION\nIn recent years, the proliferation of internet technology has cre-\natedasurgeinmachine-generatedevents. Individually,theseevents\ncontainminimalusefulinformationandareoflowvalue. Giventhe\ntime and resources required to extract meaning from large collec-\ntionsofevents,manycompanieswerewillingtodiscardthisdatain-\nstead. Althoughinfrastructurehasbeenbuilttohandleevent-based\ndata (e.g. IBM’s Netezza[37], HP’s Vertica[5], and EMC’s Green-\nplum[29]), they are largely sold at high price points and are only\ntargetedtowards those companies who can affordthe offering.\nA few years ago, Google introduced MapReduce [11] as their\nmechanism of leveraging commodity hardware to index the inter-\nnet and analyze logs. The Hadoop [36] project soon followed and\nwaslargelypatternedaftertheinsightsthatcameoutoftheoriginal\nMapReduce paper. Hadoop is currently deployed in many orga-\nnizations to store and analyze large amounts of log data. Hadoop\nhascontributedmuchtohelpingcompaniesconverttheirlow-value\n1http://druid.io/ https://github.com/metamx/druid\nPermission to make digital or hard copies of all or part of this work for personal or\nclassroom use is granted without fee provided that copies are not made or distributed\nfor profit or commercial advantage and that copies bear this notice and the full citation\non the first page. Copyrights for components of this work owned by others than the\nauthor(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or\nrepublish, to post on servers or to redistribute to lists, requires prior specific permission\nand/or a fee. Request permissions from permissions@acm.org.\nSIGMOD’14, June 22–27, 2014, Snowbird, UT, USA.\nCopyright is held by the owner/author(s). Publication rights licensed to ACM.\nACM 978-1-4503-2376-5/14/06 ...$15.00.\nhttp://dx.doi.org/10.1145/2588555.2595631.event streams into high-value aggregates for a variety of applica-\ntions such as business intelligence and A-B testing.\nAs with many great systems, Hadoop has opened our eyes to\na new space of problems. Specifically, Hadoop excels at storing\nandprovidingaccesstolargeamountsofdata,however,itdoesnot\nmakeanyperformanceguaranteesaroundhowquicklythatdatacan\nbe accessed. Furthermore, although Hadoop is a highly available\nsystem,performancedegradesunderheavyconcurrentload. 
Lastly,\nwhileHadoopworkswellforstoringdata,itisnotoptimizedforin-\ngesting data and making that data immediately readable.\nEarlyoninthedevelopmentoftheMetamarketsproduct,weran\nintoeachoftheseissuesandcametotherealizationthatHadoopis\na great back-office, batch processing, and data warehousing sys-\ntem. However, as a company that has product-level guarantees\naroundqueryperformanceanddataavailabilityinahighlyconcur-\nrent environment (1000+ users), Hadoop wasn’t going to meet our\nneeds. Weexploreddifferentsolutionsinthespace,andaftertrying\nbothRelationalDatabaseManagementSystemsandNoSQLarchi-\ntectures, we came to the conclusion that there was nothing in the\nopen source world that could be fully leveraged for our require-\nments. We ended up creating Druid, an open source, distributed,\ncolumn-oriented, real-time analytical data store. In many ways,\nDruidsharessimilaritieswithotherOLAPsystems[30,35,22],in-\nteractivequerysystems[28],main-memorydatabases[14],aswell\naswidelyknowndistributeddatastores[7,12,23]. Thedistribution\nand query model also borrow ideas from current generation search\ninfrastructure [25, 3, 4].\nThispaperdescribesthearchitectureofDruid,exploresthevari-\nousdesigndecisionsmadeincreatinganalways-onproductionsys-\ntem that powers a hosted service, and attempts to help inform any-\nonewhofacesasimilarproblemaboutapotentialmethodofsolving\nit. Druid is deployed in production at several technology compa-\nnies2. The structure of the paper is as follows: we first describe\ntheprobleminSection2. Next,wedetailsystemarchitecturefrom\nthe point of view of how data flows through the system in Section\n3. We then discuss how and why data gets converted into a binary\nformatinSection4. WebrieflydescribethequeryAPIinSection5\nand present performance results in Section 6. Lastly, we leave off\nwithourlessonsfromrunningDruidinproductioninSection7,and\nrelated work in Section 8.\n2. PROBLEM DEFINITION\nDruid was originally designed to solve problems around ingest-\ningandexploringlargequantitiesoftransactionalevents(logdata).\nThis form of timeseries data is commonly found in OLAP work-\n2http://druid.io/druid.htmlTimestamp Page Username Gender City Characters Added Characters Removed\n2011-01-01T01:00:00Z Justin Bieber Boxer Male San Francisco 1800 25\n2011-01-01T01:00:00Z Justin Bieber Reach Male Waterloo 2912 42\n2011-01-01T02:00:00Z Ke$ha Helz Male Calgary 1953 17\n2011-01-01T02:00:00Z Ke$ha Xeno Male Taiyuan 3194 170\nTable1: Sample Druid data for edits that have occurredon Wikipedia.\nflowsandthenatureofthedatatendstobeveryappendheavy. For\nexample,considerthedatashowninTable1. Table1containsdata\nfor edits that have occurred on Wikipedia. Each time a user edits\na page in Wikipedia, an event is generated that contains metadata\nabout the edit. This metadata is comprised of 3 distinct compo-\nnents. First, there is a timestamp column indicating when the edit\nwasmade. Next,thereareasetdimensioncolumnsindicatingvar-\nious attributes about the edit such as the page that was edited, the\nuser who made the edit, and the location of the user. Finally, there\nare a set of metric columns that contain values (usually numeric)\nthat can be aggregated, such as the number of characters added or\nremoved in an edit.\nOur goal is to rapidly compute drill-downs and aggregates over\nthisdata. Wewanttoanswerquestionslike“Howmanyeditswere\nmadeonthepageJustinBieberfrommalesinSanFrancisco?”and\n“Whatistheaveragenumberofcharactersthatwereaddedbypeo-\nplefromCalgaryoverthespanofamonth?”. 
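As a rough illustration of this data model (not Druid code; the events list below simply mirrors Table 1), the first question can be answered by filtering on two dimension values and counting rows:

# Each event mirrors a row of Table 1: one timestamp, several dimension
# columns, and numeric metric columns that can be aggregated.
events = [
    {"timestamp": "2011-01-01T01:00:00Z", "page": "Justin Bieber", "user": "Boxer",
     "gender": "Male", "city": "San Francisco", "added": 1800, "removed": 25},
    {"timestamp": "2011-01-01T01:00:00Z", "page": "Justin Bieber", "user": "Reach",
     "gender": "Male", "city": "Waterloo", "added": 2912, "removed": 42},
    {"timestamp": "2011-01-01T02:00:00Z", "page": "Ke$ha", "user": "Helz",
     "gender": "Male", "city": "Calgary", "added": 1953, "removed": 17},
    {"timestamp": "2011-01-01T02:00:00Z", "page": "Ke$ha", "user": "Xeno",
     "gender": "Male", "city": "Taiyuan", "added": 3194, "removed": 170},
]

# "How many edits were made on the page Justin Bieber from males in San Francisco?"
edits = sum(1 for e in events
            if e["page"] == "Justin Bieber"
            and e["gender"] == "Male"
            and e["city"] == "San Francisco")
print(edits)  # -> 1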
Wealsowantqueries\nover any arbitrary combination of dimensions to return with sub-\nsecond latencies.\nThe need for Druid was facilitated by the fact that existing open\nsource Relational Database Management Systems (RDBMS) and\nNoSQLkey/valuestoreswereunabletoprovidealowlatencydata\ningestionandqueryplatformforinteractiveapplications[40]. Inthe\nearly days of Metamarkets, we were focused on building a hosted\ndashboardthatwouldallowuserstoarbitrarilyexploreandvisualize\nevent streams. The data store powering the dashboard needed to\nreturn queries fast enough that the data visualizations built on top\nof it could provide users with an interactive experience.\nInadditiontothequerylatencyneeds,thesystemhadtobemulti-\ntenant and highly available. The Metamarkets product is used in a\nhighlyconcurrentenvironment. Downtimeiscostlyandmanybusi-\nnessescannotaffordtowaitifasystemisunavailableinthefaceof\nsoftware upgrades or network failure. Downtime for startups, who\noften lack proper internal operations management, can determine\nbusiness success or failure.\nFinally, another challenge that Metamarkets faced in its early\ndays was to allow users and alerting systems to be able to make\nbusiness decisions in “real-time”. The time from when an event is\ncreated to when that event is queryable determines how fast inter-\nestedpartiesareabletoreacttopotentiallycatastrophicsituationsin\ntheirsystems. Popularopensourcedatawarehousingsystemssuch\nas Hadoop were unable to provide the sub-second data ingestion\nlatencies we required.\nTheproblemsofdataexploration,ingestion,andavailabilityspan\nmultipleindustries. SinceDruidwasopensourcedinOctober2012,\nit been deployed as a video, network monitoring, operations mon-\nitoring, and online advertising analytics platform at multiple com-\npanies.\n3. ARCHITECTURE\nADruidclusterconsistsofdifferenttypesofnodesandeachnode\ntype is designed to perform a specific set of things. We believe\nthis design separates concerns and simplifies the complexity of the\noverallsystem. Thedifferentnodetypesoperatefairlyindependentofeachotherandthereisminimalinteractionamongthem. Hence,\nintra-cluster communication failures have minimal impact on data\navailability.\nTosolvecomplexdataanalysisproblems,thedifferentnodetypes\ncome together to form a fully working system. The name Druid\ncomes from the Druid class in many role-playing games: it is a\nshape-shifter, capable of taking on many different forms to fulfill\nvarious different roles in a group. The composition of and flow of\ndata in a Druid cluster are shown in Figure 1.\n3.1 Real-time Nodes\nReal-timenodesencapsulatethefunctionalitytoingestandquery\nevent streams. Events indexed via these nodes are immediately\navailable for querying. The nodes are only concerned with events\nfor some small time range and periodically hand off immutable\nbatches of events they have collected over this small time range to\nothernodesintheDruidclusterthatarespecializedindealingwith\nbatchesofimmutableevents. Real-timenodesleverageZooKeeper\n[19] for coordination with the rest of the Druid cluster. The nodes\nannounce their online state and the data they serve in ZooKeeper.\nReal-time nodes maintain an in-memory index buffer for all in-\ncomingevents. Theseindexesareincrementallypopulatedasevents\nare ingested and the indexes are also directly queryable. Druid be-\nhaves as a row store for queries on events that exist in this JVM\nheap-based buffer. 
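A minimal sketch of this behaviour, using a hypothetical RealtimeBuffer class rather than Druid's actual API: events become queryable the moment they are appended to the in-memory index.

class RealtimeBuffer:
    """Hypothetical stand-in for a real-time node's heap-based index."""

    def __init__(self):
        self.rows = []  # incrementally populated, directly queryable

    def ingest(self, event):
        self.rows.append(event)  # queryable as soon as it is indexed

    def query(self, predicate, metric):
        # Row-store style scan over the in-memory buffer.
        return sum(row[metric] for row in self.rows if predicate(row))

buf = RealtimeBuffer()
buf.ingest({"page": "Ke$ha", "city": "Calgary", "added": 1953})
print(buf.query(lambda r: r["city"] == "Calgary", "added"))  # -> 1953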
To avoid heap overflow problems, real-time\nnodes persist their in-memory indexes to disk either periodically\nor after some maximum row limit is reached. This persist process\nconverts data stored in the in-memory buffer to a column oriented\nstorage format described in Section 4. Each persisted index is im-\nmutable and real-time nodes load persisted indexes into off-heap\nmemory such that they can still be queried. This process is de-\nscribed in detail in [33] and is illustrated in Figure 2.\nOn a periodic basis, each real-time node will schedule a back-\ngroundtaskthatsearchesforalllocallypersistedindexes. Thetask\nmerges these indexes together and builds an immutable block of\ndata that contains all the events that have been ingested by a real-\ntime node for some span of time. We refer to this block of data as\na “segment”. During the handoff stage, a real-time node uploads\nthissegmenttoapermanentbackupstorage,typicallyadistributed\nfile system such as S3 [12] or HDFS [36], which Druid refers to as\n“deep storage”. The ingest, persist, merge, and handoff steps are\nfluid; there is no data loss during any of the processes.\nFigure 3 illustrates the operations of a real-time node. The node\nstartsat13:37andwillonlyaccepteventsforthecurrenthourorthe\nnext hour. When events are ingested, the node announces that it is\nservingasegmentofdataforanintervalfrom13:00to14:00. Every\n10 minutes (the persist period is configurable), the node will flush\nand persist its in-memory buffer to disk. Near the end of the hour,\nthenodewilllikelyseeeventsfor14:00to15:00. Whenthisoccurs,\nthe node prepares to serve data for the next hour and creates a new\nin-memory index. The node then announces that it is also serving\na segment from 14:00 to 15:00. The node does not immediately\nmerge persisted indexes from 13:00 to 14:00, instead it waits for\na configurable window period for straggling events from 13:00 toReal-time \nNodes\nCoordinator \nNodesBroker Nodes\nHistorical \nNodesMySQL\nZookeeper\nDeep \nStorage\nStreaming \nData\nBatch\nDataClient \nQueries\nQueries\nMetadata\nData/SegmentsDruid Nodes\nExternal DependenciesFigure1: An overviewof a Druid cluster andthe flow of data throughthe cluster.\nevent_23312\nevent_23481\nevent_23593\n...\nevent_1234\nevent_2345\n...event_3456\nevent_4567\n...\nevent_5678\nevent_6789\n...event_7890\nevent_8901\n...Disk and persisted indexesHeap and in-memory index\nPersistevent_34982\nevent_35789\nevent_36791\n...\nevent_1234\nevent_2345\n...event_3456\nevent_4567\n...\nevent_5678\nevent_6789\n...event_7890\nevent_8901\n...Off-heap memory and \npersisted indexes\nLoadQueries\nFigure2: Real-timenodesbuffereventstoanin-memoryindex,\nwhich is regularly persisted to disk. On a periodic basis, per-\nsisted indexes are then merged together before getting handed\noff. Queries will hit both the in-memory and persisted indexes.\n14:00toarrive. Thiswindowperiodminimizestheriskofdataloss\nfromdelaysineventdelivery. Attheendofthewindowperiod,the\nnodemergesallpersistedindexesfrom13:00to14:00intoasingle\nimmutable segment and hands the segment off. Once this segment\nis loaded and queryable somewhere else in the Druid cluster, the\nreal-timenodeflushesallinformationaboutthedataitcollectedfor\n13:00 to 14:00 and unannounces it is serving this data.\n3.1.1 Availability and Scalability\nReal-timenodesareaconsumerofdataandrequireacorrespond-\ningproducertoprovidethedatastream. 
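The ingest, persist, merge, and handoff cycle described above can be sketched as follows; the row limit, persist period, and the disk/deep-storage stand-ins are illustrative values, not Druid's defaults.

import time

MAX_ROWS = 3            # tiny limit so the example runs instantly; real values are much larger
PERSIST_PERIOD_S = 600  # persist every 10 minutes, or when the row limit is hit

disk, deep_storage = [], {}   # stand-ins for local disk and S3/HDFS

def maybe_persist(buffer, last_persist):
    """Persist the heap buffer to 'disk' on a row limit or a timer."""
    now = time.time()
    if len(buffer) >= MAX_ROWS or now - last_persist >= PERSIST_PERIOD_S:
        disk.append(list(buffer))   # immutable, column-oriented in real Druid
        buffer.clear()
        return now
    return last_persist

def hand_off(interval):
    """Merge all persisted indexes for an interval into one segment and upload it."""
    segment = [row for index in disk for row in index]
    deep_storage[interval] = segment
    disk.clear()

buf, last = [], time.time()
for i in range(7):
    buf.append({"event": i})
    last = maybe_persist(buf, last)
hand_off("2011-01-01T13/2011-01-01T14")
print(len(deep_storage["2011-01-01T13/2011-01-01T14"]))  # 6 rows persisted, 1 still in the buffer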
Commonly,fordatadura-\nbility purposes, a message bus such as Kafka [21] sits between the\nproducer and the real-time node as shown in Figure 4. Real-time\nnodes ingest data by reading events from the message bus. The\ntime from event creation to event consumption is ordinarily on the\norder of hundreds of milliseconds.\nThepurposeofthemessagebusinFigure4istwo-fold. First,the\nmessage bus acts as a buffer for incoming events. A message bus\nsuchasKafkamaintainspositionaloffsetsindicatinghowfaracon-\nsumer (a real-time node) has read in an event stream. Consumers\ncanprogrammaticallyupdatetheseoffsets. Real-timenodesupdatethisoffseteachtimetheypersisttheirin-memorybufferstodisk. In\nafailandrecoverscenario,ifanodehasnotlostdisk,itcanreload\nall persisted indexes from disk and continue reading events from\nthe last offset it committed. Ingesting events from a recently com-\nmitted offset greatly reduces a node’s recovery time. In practice,\nweseenodesrecoverfromsuchfailurescenariosinafewseconds.\nThe second purpose of the message bus is to act as a single end-\npoint from which multiple real-time nodes can read events. Multi-\nple real-time nodes can ingest the same set of events from the bus,\ncreating a replication of events. In a scenario where a node com-\npletely fails and loses disk, replicated streams ensure that no data\nis lost. A single ingestion endpoint also allows for data streams\nto be partitioned such that multiple real-time nodes each ingest a\nportion of a stream. This allows additional real-time nodes to be\nseamlessly added. In practice, this model has allowed one of the\nlargestproductionDruidclusterstobeabletoconsumerawdataat\napproximately 500 MB/s (150,000 events/s or 2 TB/hour).\n3.2 Historical Nodes\nHistorical nodes encapsulate the functionality to load and serve\ntheimmutableblocksofdata(segments)createdbyreal-timenodes.\nIn many real-world workflows, most of the data loaded in a Druid\ncluster is immutable and hence, historical nodes are typically the\nmain workers of a Druid cluster. Historical nodes follow a shared-\nnothingarchitectureandthereisnosinglepointofcontentionamong\nthe nodes. The nodes have no knowledge of one another and are\noperationally simple; they only know how to load, drop, and serve\nimmutable segments.\nSimilartoreal-timenodes,historicalnodesannouncetheironline\nstate and the data they are serving in ZooKeeper. Instructions to\nloadanddropsegmentsaresentoverZooKeeperandcontaininfor-\nmationaboutwherethesegmentislocatedindeepstorageandhow\nto decompress and process the segment. Before a historical node\ndownloadsaparticularsegmentfromdeepstorage,itfirstchecksa\nlocalcachethatmaintainsinformationaboutwhatsegmentsalready\nexist on the node. If information about a segment is not present in\nthecache,thehistoricalnodewillproceedtodownloadthesegment\nfrom deep storage. This process is shown in Figure 5. Once pro-\ncessing is complete, the segment is announced in ZooKeeper. At\nthis point, the segment is queryable. The local cache also allows\nforhistoricalnodestobequicklyupdatedandrestarted. 
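The load path a historical node follows (check the local cache, otherwise download from deep storage, then announce the segment) might be sketched like this; the cache, deep-storage, and announce calls are stand-ins, not Druid's actual interfaces.

local_cache = {}                     # segment id -> locally available data
deep_storage = {"wiki_2013-01-01_v1": b"...column data..."}

def load_segment(segment_id, zk_announce):
    if segment_id not in local_cache:                        # check the local cache first
        local_cache[segment_id] = deep_storage[segment_id]   # otherwise download from deep storage
    zk_announce(segment_id)                                  # the segment is now queryable
    return local_cache[segment_id]

load_segment("wiki_2013-01-01_v1", zk_announce=lambda s: print("announced", s))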
Onstartup,\nthenodeexaminesitscacheandimmediatelyserveswhateverdata\nit finds.13:00 14:00 15:00\n13:37\n- node starts\n- announce segment \nfor data 13:00-14:0013:47\npersist data for 13:00-14:00\n~14:00\n- announce segment \nfor data 14:00-15:0014:10\n- merge and handoff for data 13:00-14:00\n- persist data for 14:00-15:00~14:11\n- unannounce segment \nfor data 13:00-14:00\n13:57\npersist data for 13:00-14:0014:07\npersist data for 13:00-14:00Figure 3: The node starts, ingests data, persists, and periodically hands data off. This process repeats indefinitely. The time periods\nbetween differentreal-time node operations areconfigurable.\nevent_12345\nevent_23456\nevent_34567\nevent_35582\nevent_37193\nevent_78901\nevent_79902\nevent_79932\nevent_89012event_2849219\nevent_120202\n…\nevent_90192\nReal-time\nNode 1\nReal-time\nNode 2\noffset 1\noffset 2\neventseventseventsKafka\nStreaming events\nFigure4: Multiplereal-timenodescanreadfromthesamemes-\nsage bus. Each node maintains its own offset.\nDeep Storage\nSegmentMemory\nDisk\nCache \nEntriesSegmentSegment\ndownload\ncreate keyLoad\nFigure5: Historicalnodesdownloadimmutablesegmentsfrom\ndeep storage. Segments must be loaded in memory before they\ncan be queried.Historicalnodescansupportreadconsistencybecausetheyonly\ndealwithimmutabledata. Immutabledatablocksalsoenableasim-\nple parallelization model: historical nodes can concurrently scan\nand aggregate immutable blocks without blocking.\n3.2.1 Tiers\nHistoricalnodescanbegroupedindifferenttiers,whereallnodes\ninagiventierareidenticallyconfigured. Differentperformanceand\nfault-tolerance parameters can be set for each tier. The purpose of\ntierednodesistoenablehigherorlowerprioritysegmentstobedis-\ntributed according to their importance. For example, it is possible\nto spin up a “hot” tier of historical nodes that have a high num-\nber of cores and large memory capacity. The “hot” cluster can be\nconfigured to download more frequently accessed data. A parallel\n“cold”clustercanalsobecreatedwithmuchlesspowerfulbacking\nhardware. The “cold” cluster would only contain less frequently\naccessed segments.\n3.2.2 Availability\nHistoricalnodesdependonZooKeeperforsegmentloadandun-\nload instructions. Should ZooKeeper become unavailable, histor-\nical nodes are no longer able to serve new data or drop outdated\ndata, however, because the queries are served over HTTP, histori-\ncalnodesarestillabletorespondtoqueryrequestsforthedatathey\nare currently serving. This means that ZooKeeper outages do not\nimpact current data availability on historical nodes.\n3.3 Broker Nodes\nBrokernodesactasqueryrouterstohistoricalandreal-timenodes.\nThey understand the metadata published in ZooKeeper about what\nsegmentsarequeryableandwherethosesegmentsarelocated. Bro-\nkernodesrouteincomingqueriessuchthatthequerieshittheright\nhistorical or real-time nodes. Broker nodes also merge partial re-\nsults from historical and real-time nodes before returning a final\nconsolidated result to the caller.\n3.3.1 Caching\nBroker nodes contain a cache with a LRU [31, 20] invalidation\nstrategy. The cache can use local heap memory or an external dis-tributedkey/valuestoresuchasMemcached[16]. Eachtimeabro-\nker node receives a query, it first maps the query to a set of seg-\nments. Results for certain segments may already exist in the cache\nand there is no need to recompute them. For any results that do\nnotexistinthecache,thebrokernodewillforwardthequerytothe\ncorrecthistoricalandreal-timenodes. 
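A minimal sketch of the broker's per-segment caching: cached segment results are reused, only the misses are forwarded, and everything is merged into one consolidated result. The plain dict below stands in for the LRU or Memcached cache.

cache = {}

def execute(query, segments, compute_on_nodes):
    results, misses = {}, []
    for seg in segments:
        key = (query, seg)
        if key in cache:
            results[seg] = cache[key]          # reuse the per-segment result
        else:
            misses.append(seg)                 # forward only uncached segments
    for seg, res in compute_on_nodes(query, misses).items():
        cache[(query, seg)] = res              # historical results are cached for future use
        results[seg] = res
    return merge(results)

def merge(per_segment):
    return sum(per_segment.values())           # final consolidated result returned to the caller

print(execute("count:Ke$ha", ["2013-01-01", "2013-01-02"],
              lambda q, segs: {s: 100 for s in segs}))      # -> 200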
Oncehistoricalnodesreturn\ntheirresults,thebrokerwillcachetheseresultsonapersegmentba-\nsis for future use. This process is illustrated in Figure 6. Real-time\ndata is never cached and hence requests for real-time data will al-\nwaysbeforwardedtoreal-timenodes. Real-timedataisperpetually\nchanging and caching the results is unreliable.\nThe cache also acts as an additional level of data durability. In\nthe event that all historical nodes fail, it is still possible to query\nresults if those results already exist in the cache.\n3.3.2 Availability\nIn the event of a total ZooKeeper outage, data is still queryable.\nIfbrokernodesareunabletocommunicatetoZooKeeper,theyuse\ntheirlastknownviewoftheclusterandcontinuetoforwardqueries\nto real-time and historical nodes. Broker nodes make the assump-\ntion that the structure of the cluster is the same as it was before the\noutage. In practice, this availability model has allowed our Druid\ncluster to continue serving queries for a significant period of time\nwhile we diagnosed ZooKeeper outages.\n3.4 Coordinator Nodes\nDruidcoordinatornodesareprimarilyinchargeofdatamanage-\nment and distribution on historical nodes. The coordinator nodes\ntell historical nodes to load new data, drop outdated data, replicate\ndata, and move data to load balance. Druid uses a multi-version\nconcurrency control swapping protocol for managing immutable\nsegments in order to maintain stable views. If any immutable seg-\nmentcontainsdatathatiswhollyobsoletedbynewersegments,the\noutdated segment is dropped from the cluster. Coordinator nodes\nundergoaleader-electionprocessthatdeterminesasinglenodethat\nrunsthecoordinatorfunctionality. Theremainingcoordinatornodes\nact as redundant backups.\nA coordinator node runs periodically to determine the current\nstate of the cluster. It makes decisions by comparing the expected\nstate of the cluster with the actual state of the cluster at the time\nof the run. As with all Druid nodes, coordinator nodes maintain\na ZooKeeper connection for current cluster information. Coordi-\nnator nodes also maintain a connection to a MySQL database that\ncontainsadditionaloperationalparametersandconfigurations. One\nof the key pieces of information located in the MySQL database is\na table that contains a list of all segments that should be served by\nhistoricalnodes. Thistablecanbeupdatedbyanyservicethatcre-\natessegments,forexample,real-timenodes. TheMySQLdatabase\nalso contains a rule table that governs how segments are created,\ndestroyed, and replicated in the cluster.\n3.4.1 Rules\nRules govern how historical segments are loaded and dropped\nfromthecluster. Rulesindicatehowsegmentsshouldbeassignedto\ndifferenthistoricalnodetiersandhowmanyreplicatesofasegment\nshould exist in each tier. Rules may also indicate when segments\nshould be dropped entirely from the cluster. Rules are usually set\nfor a period of time. For example, a user may use rules to load the\nmostrecentonemonth’sworthofsegmentsintoa“hot”cluster,the\nmostrecentoneyear’sworthofsegmentsintoa“cold”cluster,and\ndrop any segments that are older.\nThecoordinatornodesloadasetofrulesfromaruletableinthe\nMySQL database. Rules may be specific to a certain data sourceand/or a default set of rules may be configured. The coordinator\nnodewillcyclethroughallavailablesegmentsandmatcheachseg-\nment with the first rule that applies to it.\n3.4.2 Load Balancing\nIn a typical production environment, queries often hit dozens or\neven hundreds of segments. 
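Returning to the rule matching of Section 3.4.1, the example policy there (most recent month in a "hot" tier, most recent year in "cold", drop anything older) can be expressed as a first-match scan; the rule fields below are illustrative, not Druid's rule schema.

from datetime import datetime, timedelta

now = datetime(2014, 2, 1)
rules = [
    {"within": timedelta(days=30),  "action": ("load", "hot",  2)},   # (action, tier, replicas)
    {"within": timedelta(days=365), "action": ("load", "cold", 1)},
    {"within": None,                "action": ("drop", None,  0)},
]

def first_matching_rule(segment_end):
    """Each segment is matched with the first rule that applies to it."""
    for rule in rules:
        if rule["within"] is None or now - segment_end <= rule["within"]:
            return rule["action"]

print(first_matching_rule(datetime(2014, 1, 20)))   # ('load', 'hot', 2)
print(first_matching_rule(datetime(2012, 6, 1)))    # ('drop', None, 0)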
Since each historical node has limited\nresources, segments must be distributed among the cluster to en-\nsure that the cluster load is not too imbalanced. Determining opti-\nmalloaddistributionrequiressomeknowledgeaboutquerypatterns\nandspeeds. Typically,queriescoverrecentsegmentsspanningcon-\ntiguoustimeintervalsforasingledatasource. Onaverage,queries\nthat access smaller segments are faster.\nThese query patterns suggest replicating recent historical seg-\nments at a higher rate, spreading out large segments that are close\nintimetodifferenthistoricalnodes,andco-locatingsegmentsfrom\ndifferent data sources. To optimally distribute and balance seg-\nments among the cluster, we developed a cost-based optimization\nprocedurethattakesintoaccountthesegmentdatasource,recency,\nandsize. Theexactdetailsofthealgorithmarebeyondthescopeof\nthis paper and may be discussed in future literature.\n3.4.3 Replication\nCoordinator nodes may tell different historical nodes to load a\ncopy of the same segment. The number of replicates in each tier\nof the historical compute cluster is fully configurable. Setups that\nrequire high levels of fault tolerance can be configured to have a\nhigh number of replicas. Replicated segments are treated the same\nastheoriginalsandfollowthesameloaddistributionalgorithm. By\nreplicatingsegments,singlehistoricalnodefailuresaretransparent\nin the Druid cluster. We use this property for software upgrades.\nWe can seamlessly take a historical node offline, update it, bring it\nbackup,andrepeattheprocessforeveryhistoricalnodeinacluster.\nOverthelasttwoyears,wehavenevertakendowntimeinourDruid\ncluster for software upgrades.\n3.4.4 Availability\nDruid coordinator nodes have ZooKeeper and MySQL as exter-\nnal dependencies. Coordinator nodes rely on ZooKeeper to deter-\nminewhathistoricalnodesalreadyexistinthecluster. IfZooKeeper\nbecomesunavailable,thecoordinatorwillnolongerbeabletosend\ninstructionstoassign,balance,anddropsegments. However,these\noperations do not affectdata availability at all.\nThe design principle for responding to MySQL and ZooKeeper\nfailures is the same: if an external dependency responsible for co-\nordination fails, the cluster maintains the status quo. Druid uses\nMySQLtostoreoperationalmanagementinformationandsegment\nmetadatainformationaboutwhatsegmentsshouldexistintheclus-\nter. IfMySQLgoesdown,thisinformationbecomesunavailableto\ncoordinator nodes. However, this does not mean data itself is un-\navailable. If coordinator nodes cannot communicate to MySQL,\nthey will cease to assign new segments and drop outdated ones.\nBroker, historical, and real-time nodes are still queryable during\nMySQLoutages.\n4. STORAGE FORMAT\nDatatablesinDruid(called datasources )arecollectionsoftimes-\ntamped events and partitioned into a set of segments, where each\nsegmentistypically5–10millionrows. Formally,wedefineaseg-\nment as a collection of rows of data that span some period of time.\nSegmentsrepresentthefundamentalstorageunitinDruidandrepli-\ncation and distribution are done at a segment level.Query for data \nfrom 2013-01-01 \nto 2013-01-08results for segment 2013-01-01/2013-01-02\nresults for segment 2013-01-02/2013-01-03\nresults for segment 2013-01-07/2013-01-08Cache (on broker nodes)\nsegment for data 2013-01-03/2013-01-04\nsegment for data 2013-01-04/2013-01-05\nsegment for data 2013-01-05/2013-01-06\nsegment for data 2013-01-06/2013-01-07Historical and real-time nodes\nQuery for data \nnot in cacheFigure6: Results arecached per segment. 
Queriescombine cached resultswith resultscomputed on historical and real-timenodes.\nDruid always requires a timestamp column as a method of sim-\nplifyingdatadistributionpolicies,dataretentionpolicies,andfirst-\nlevel query pruning. Druid partitions its data sources into well-\ndefined time intervals, typically an hour or a day, and may further\npartition on values from other columns to achieve the desired seg-\nment size. The time granularity to partition segments is a function\nof data volume and time range. A data set with timestamps spread\nover a year is better partitioned by day, and a data set with times-\ntamps spread over a day is better partitioned by hour.\nSegments are uniquely identified by a data source identifer, the\ntime interval of the data, and a version string that increases when-\never a new segment is created. The version string indicates the\nfreshnessofsegmentdata;segmentswithlaterversionshavenewer\nviewsofdata(oversometimerange)thansegmentswitholderver-\nsions. This segment metadata is used by the system for concur-\nrency control; read operations always access data in a particular\ntime range from the segments with the latest version identifiers for\nthat time range.\nDruid segments are stored in a column orientation. Given that\nDruidisbestusedforaggregatingeventstreams(alldatagoinginto\nDruidmusthaveatimestamp),theadvantagesofstoringaggregate\ninformation as columns rather than rows are well documented [1].\nColumn storage allows for more efficient CPU usage as only what\nis needed is actually loaded and scanned. In a row oriented data\nstore,allcolumnsassociatedwitharowmustbescannedaspartof\nan aggregation. The additional scan time can introduce signficant\nperformance degradations [1].\nDruid has multiple column types to represent various data for-\nmats. Depending on the column type, different compression meth-\nods are used to reduce the cost of storing a column in memory and\non disk. In the example given in Table 1, the page, user, gender,\nand city columns only contain strings. Storing strings directly is\nunnecessarily costly and string columns can be dictionary encoded\ninstead. Dictionaryencodingisacommonmethodtocompressdata\nand has been used in other data stores such as PowerDrill [17]. In\nthe example in Table 1, we can map each page to a unique integer\nidentifier.\nJustin Bieber -> 0\nKe$ha -> 1\nThis mapping allows us to represent the page column as an in-\nteger array where the array indices correspond to the rows of the\noriginaldataset. Forthepagecolumn,wecanrepresenttheunique\npages as follows:\n[0, 0, 1, 1]\nThe resulting integer array lends itself very well to compression\nmethods. Generic compression algorithms on top of encodings are\nextremelycommonincolumn-stores. 
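The dictionary encoding of the page column can be reproduced in a few lines; this is only an illustration of the encoding, not Druid's column writer.

pages = ["Justin Bieber", "Justin Bieber", "Ke$ha", "Ke$ha"]

dictionary, encoded = {}, []
for page in pages:
    code = dictionary.setdefault(page, len(dictionary))  # Justin Bieber -> 0, Ke$ha -> 1
    encoded.append(code)

print(dictionary)   # {'Justin Bieber': 0, 'Ke$ha': 1}
print(encoded)      # [0, 0, 1, 1] -- array indices correspond to rows of the original data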
DruidusestheLZF[24]com-\npression algorithm.\nSimilarcompressionmethodscanbeappliedtonumericcolumns.\nForexample,thecharactersaddedandcharactersremovedcolumns\nin Table1 can also be expressed as individual arrays.\nCharacters Added -> [1800, 2912, 1953, 3194]\nCharacters Removed -> [25, 42, 17, 170]\nInthiscase,wecompresstherawvaluesasopposedtotheirdic-\ntionary representations.\nInteger array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)\n1e+041e+06\n1e+02 1e+05 1e+08\nCardinalityConcise compressed size (bytes)sorted\nsorted\nunsortedFigure7: Integer array size versus Concise set size.\n4.1 Indices for Filtering Data\nIn many real world OLAP workflows, queries are issued for the\naggregated results of some set of metrics where some set of di-\nmension specifications are met. An example query is: “How many\nWikipedia edits were done by users in San Francisco who are also\nmale?”ThisqueryisfilteringtheWikipediadatasetinTable1based\non a Boolean expression of dimension values. In many real world\ndata sets, dimension columns contain strings and metric columns\ncontainnumericvalues. Druidcreatesadditionallookupindicesfor\nstringcolumnssuchthatonlythoserowsthatpertaintoaparticular\nquery filter are ever scanned.\nLetusconsiderthepagecolumninTable1. Foreachuniquepage\nin Table 1, we can form some representation indicating in which\ntable rows a particular page is seen. We can store this information\nin a binary array where the array indices represent our rows. If a\nparticular page is seen in a certain row, that array index is marked\nas1. For example:\nJustin Bieber -> rows [0, 1] -> [1][1][0][0]\nKe$ha -> rows [2, 3] -> [0][0][1][1]\nJustin Bieber is seen in rows 0and1. This mapping of col-\numn values to row indices forms an inverted index [39]. To know\nwhichrowscontain Justin Bieber orKe$ha,wecanORtogether\nthe two arrays.\n[0][1][0][1] OR [1][0][1][0] = [1][1][1][1]\nThisapproachofperformingBooleanoperationsonlargebitmap\nsetsiscommonlyusedinsearchengines. BitmapindicesforOLAP\nworkloads is described in detail in [32]. Bitmap compression al-\ngorithms are a well-defined area of research [2, 44, 42] and often\nutilize run-length encoding. Druid opted to use the Concise algo-\nrithm [10]. Figure 7 illustrates the number of bytes using Concise\ncompression versus using an integer array. The results were gen-\nerated on a cc2.8xlarge system with a single thread, 2G heap,\n512m young gen, and a forced GC between each run. The data set\nis a single day’s worth of data collected from the Twitter garden\nhose [41] data stream. The data set contains 2,272,295 rows and12dimensionsofvaryingcardinality. Asanadditionalcomparison,\nwe also resorted the data set rows to maximize compression.\nIntheunsortedcase,thetotalConcisesizewas53,451,144bytes\nand the total integer array size was 127,248,520 bytes. 
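The inverted index for the page column and the OR of its bitmaps, as described above, look like this in sketch form (plain integer lists stand in for Concise-compressed bitmaps):

pages = ["Justin Bieber", "Justin Bieber", "Ke$ha", "Ke$ha"]

index = {}                                    # value -> bitmap over row indices
for row, page in enumerate(pages):
    index.setdefault(page, [0] * len(pages))[row] = 1

print(index["Justin Bieber"])                 # [1, 1, 0, 0]
print(index["Ke$ha"])                         # [0, 0, 1, 1]

either = [a | b for a, b in zip(index["Justin Bieber"], index["Ke$ha"])]
print(either)                                 # [1, 1, 1, 1] -- rows matching either page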
Overall,\nConcise compressed sets are about 42% smaller than integer ar-\nrays. In the sorted case, the total Concise compressed size was\n43,832,884 bytes and the total integer array size was 127,248,520\nbytes. What is interesting to note is that after sorting, global com-\npression only increased minimally.\n4.2 Storage Engine\nDruid’s persistence components allows for different storage en-\ngines to be plugged in, similar to Dynamo [12]. These storage en-\nginesmaystoredatainanentirelyin-memorystructuresuchasthe\nJVM heap or in memory-mapped structures. The ability to swap\nstorage engines allows for Druid to be configured depending on a\nparticular application’s specifications. An in-memory storage en-\nginemaybeoperationallymoreexpensivethanamemory-mapped\nstorage engine but could be a better alternative if performance is\ncritical. By default, a memory-mapped storage engine is used.\nWhen using a memory-mapped storage engine, Druid relies on\ntheoperatingsystemtopagesegmentsinandoutofmemory. Given\nthat segments can only be scanned if they are loaded in memory, a\nmemory-mapped storage engine allows recent segments to retain\ninmemorywhereassegmentsthatareneverqueriedarepagedout.\nThemaindrawbackwithusingthememory-mappedstorageengine\nis when a query requires more segments to be paged into memory\nthanagivennodehascapacityfor. Inthiscase,queryperformance\nwillsufferfromthecostofpagingsegmentsinandoutofmemory.\n5. QUERY API\nDruid has its own query language and accepts queries as POST\nrequests. Broker, historical, and real-time nodes all share the same\nquery API.\nThe body of the POST request is a JSON object containing key-\nvalue pairs specifying various query parameters. A typical query\nwillcontainthedatasourcename,thegranularityoftheresultdata,\ntime range of interest, the type of request, and the metrics to ag-\ngregate over. The result will also be a JSON object containing the\naggregated metrics over the time period.\nMost query types will also support a filter set. A filter set is a\nBooleanexpressionofdimensionnameandvaluepairs. Anynum-\nber and combination of dimensions and values may be specified.\nWhen a filter set is provided, only the subset of the data that per-\ntainstothefiltersetwillbescanned. Theabilitytohandlecomplex\nnestedfiltersetsiswhatenablesDruidtodrillintodataatanydepth.\nTheexactquerysyntaxdependsonthequerytypeandtheinfor-\nmation requested. A sample count query over a week of data is as\nfollows:\n{\n\"queryType\" : \"timeseries\",\n\"dataSource\" : \"wikipedia\",\n\"intervals\" : \"2013-01-01/2013-01-08\",\n\"filter\" : {\n\"type\" : \"selector\",\n\"dimension\" : \"page\",\n\"value\" : \"Ke$ha\"\n},\n\"granularity\" : \"day\",\n\"aggregations\" : [{\"type\":\"count\", \"name\":\"rows\"}]\n}\nThequeryshownabovewillreturnacountofthenumberofrows\nin the Wikipedia data source from 2013-01-01 to 2013-01-08, fil-\ntered for only those rows where the value of the “page” dimension\nis equal to “Ke$ha”. The results will be bucketed by day and will\nbe a JSON array of the following form:[ {\n\"timestamp\": \"2012-01-01T00:00:00.000Z\",\n\"result\": {\"rows\":393298}\n},\n{\n\"timestamp\": \"2012-01-02T00:00:00.000Z\",\n\"result\": {\"rows\":382932}\n},\n...\n{\n\"timestamp\": \"2012-01-07T00:00:00.000Z\",\n\"result\": {\"rows\": 1337}\n} ]\nDruid supports many types of aggregations including sums on\nfloating-point and integer types, minimums, maximums, and com-\nplex aggregations such as cardinality estimation and approximate\nquantile estimation. 
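The sample query above can be issued as a POST request; the broker address and the /druid/v2 path below are assumptions for illustration and are not specified in the paper.

import json
from urllib import request

query = {
    "queryType": "timeseries",
    "dataSource": "wikipedia",
    "intervals": "2013-01-01/2013-01-08",
    "filter": {"type": "selector", "dimension": "page", "value": "Ke$ha"},
    "granularity": "day",
    "aggregations": [{"type": "count", "name": "rows"}],
}

req = request.Request("http://localhost:8082/druid/v2/",          # assumed broker endpoint
                      data=json.dumps(query).encode(),
                      headers={"Content-Type": "application/json"})
with request.urlopen(req) as resp:
    for bucket in json.loads(resp.read()):
        print(bucket["timestamp"], bucket["result"]["rows"])       # one row count per day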
The results of aggregations can be combined\nin mathematical expressions to form other aggregations. It is be-\nyond the scope of this paper to fully describe the query API but\nmore information can be found online3.\nAsofthiswriting,ajoinqueryforDruidisnotyetimplemented.\nThishasbeenafunctionofengineeringresourceallocationanduse\ncase decisions more than a decision driven by technical merit. In-\ndeed, Druid’s storage format would allow for the implementation\nof joins (there is no loss of fidelity for columns included as dimen-\nsions)andtheimplementationofthemhasbeenaconversationthat\nwe have every few months. To date, we have made the choice that\ntheimplementationcostisnotworththeinvestmentforourorgani-\nzation. The reasons for this decision are generally two-fold.\n1. Scalingjoinquerieshasbeen,inourprofessionalexperience,\na constant bottleneck of working with distributed databases.\n2. The incremental gains in functionality are perceived to be\nof less value than the anticipated problems with managing\nhighly concurrent, join-heavy workloads.\nAjoinqueryisessentiallythemergingoftwoormorestreamsof\ndata based on a shared set of keys. The primary high-level strate-\ngies for join queries we are aware of are a hash-based strategy or a\nsorted-mergestrategy. Thehash-basedstrategyrequiresthatallbut\none data set be available as something that looks like a hash table,\na lookup operation is then performed on this hash table for every\nrow in the “primary” stream. The sorted-merge strategy assumes\nthateachstreamissortedbythejoinkeyandthusallowsforthein-\ncrementaljoiningofthestreams. Eachofthesestrategies,however,\nrequiresthematerializationofsomenumberofthestreamseitherin\nsorted order or in a hash table form.\nWhen all sides of the join are significantly large tables (> 1 bil-\nlion records), materializing the pre-join streams requires complex\ndistributed memory management. The complexity of the memory\nmanagementisonlyamplifiedbythefactthatwearetargetinghighly\nconcurrent, multitenant workloads. This is, as far as we are aware,\nan active academic research problem that we would be willing to\nhelp resolve in a scalable manner.\n6. PERFORMANCE\nDruidrunsinproductionatseveralorganizations,andtodemon-\nstrate its performance, we have chosen to share some real world\nnumbersforthemainproductionclusterrunningatMetamarketsas\nofearly2014. Forcomparisonwithotherdatabaseswealsoinclude\nresults from synthetic workloads on TPC-H data.\n3http://druid.io/docs/latest/Querying.htmlDataSource Dimensions Metrics\na 25 21\nb 30 26\nc 71 35\nd 60 19\ne 29 8\nf 30 16\ng 26 18\nh 78 14\nTable2: Characteristics of productiondata sources.\n6.1 Query Performance in Production\nDruidqueryperformancecanvarysignficantlydependingonthe\nquerybeingissued. Forexample,sortingthevaluesofahighcardi-\nnality dimension based on a given metric is much more expensive\nthan a simple count over a time range. To showcase the average\nquery latencies in a production Druid cluster, we selected 8 of our\nmost queried data sources, described in Table2.\nApproximately30%ofqueriesarestandardaggregatesinvolving\ndifferent types of metrics and filters, 60% of queries are ordered\ngroup bys over one or more dimensions with aggregates, and 10%\nof queries are search queries and metadata retrieval queries. The\nnumber of columns scanned in aggregate queries roughly follows\nan exponential distribution. 
Queries involving a single column are\nvery frequent, and queries involving all columns are very rare.\nAfewnotes about our results:\n\u000fTheresultsarefroma“hot”tierinourproductioncluster. There\nwere approximately 50 data sources in the tier and several hun-\ndred users issuing queries.\n\u000fTherewasapproximately10.5TBofRAMavailableinthe“hot”\ntier and approximately 10TB of segments loaded. Collectively,\nthereareabout50billionDruidrowsinthistier. Resultsforevery\ndata source are not shown.\n\u000fThe hot tier uses Intel®Xeon®E5-2670 processors and consists\nof 1302 processing threads and 672 total cores (hyperthreaded).\n\u000fA memory-mapped storage engine was used (the machine was\nconfiguredtomemorymapthedatainsteadofloadingitintothe\nJava heap.)\nQuerylatenciesareshowninFigure8andthequeriesperminute\nare shown in Figure 9. Across all the various data sources, aver-\nage query latency is approximately 550 milliseconds, with 90% of\nqueries returning in less than 1 second, 95% in under 2 seconds,\nand99%ofqueriesreturninginlessthan10seconds. Occasionally\nwe observe spikes in latency, as observed on February 19, where\nnetwork issues on the Memcached instances were compounded by\nvery high query load on one of our largestdata sources.\n6.2 Query Benchmarks on TPC-H Data\nWealsopresentDruidbenchmarksonTPC-Hdata. MostTPC-H\nqueriesdonotdirectlyapplytoDruid,soweselectedqueriesmore\ntypicalofDruid’sworkloadtodemonstratequeryperformance. As\na comparison, we also provide the results of the same queries us-\ningMySQLusingtheMyISAMengine(InnoDBwasslowerinour\nexperiments).\nWeselectedMySQLtobenchmarkagainstbecauseofitsuniver-\nsal popularity. We chose not to select another open source column\nstore because we were not confident we could correctly tune it for\noptimal performance.\nOur Druid setup used Amazon EC2 m3.2xlarge instance types\n(Intel®Xeon®E5-2680 v2 @ 2.80GHz) for historical nodes and\nc3.2xlarge instances(Intel®Xeon®E5-2670v2@2.50GHz)for\n0.00.51.0\nFeb 03 Feb 10 Feb 17 Feb 24\ntimequery time (s)datasource\na\nb\nc\nd\ne\nf\ng\nhMean query latency\n0.00.51.01.5\n01234\n0510152090%ile 95%ile 99%ile\nFeb 03 Feb 10 Feb 17 Feb 24\ntimequery time (seconds)datasource\na\nb\nc\nd\ne\nf\ng\nhQuery latency percentilesFigure8: Query latencies of productiondata sources.\n050010001500\nFeb 03 Feb 10 Feb 17 Feb 24\ntimequeries / minutedatasource\na\nb\nc\nd\ne\nf\ng\nhQueries per minute\nFigure9: Queriesper minute of productiondata sources.01234\ncount_star_interval\nsum_all\nsum_all_filter\nsum_all_year\nsum_price\ntop_100_commitdate\ntop_100_parts\ntop_100_parts_details\ntop_100_parts_filter\nQueryTime (seconds)engine\nDruid\nMySQLMedian query time (100 runs) − 1GB data − single nodeFigure10: Druid & MySQL benchmarks – 1GB TPC-H data.\naggregation top−n\n0200400600\n02500500075001000012500count_star_interval\nsum_all\nsum_all_filter\nsum_all_year\nsum_price\ntop_100_commitdate\ntop_100_parts\ntop_100_parts_details\ntop_100_parts_filter\nQueryTime (seconds)engine\nDruid\nMySQLMedian Query Time (3+ runs) − 100GB data − single node\nFigure11: Druid&MySQLbenchmarks–100GBTPC-Hdata.\nbroker nodes. 
Our MySQL setup was an Amazon RDS instance\nthat ran on the same m3.2xlarge instance type.\nThe results for the 1 GB TPC-H data set are shown in Figure 10\nand the results of the 100 GB data set are shown in Figure 11.\nWebenchmarkedDruid’sscanrateat53,539,211rows/second/core\nforselect count(*) equivalentqueryoveragiventimeinterval\nand36,246,530rows/second/corefora select sum(float) type\nquery.\nFinally,wepresentourresultsofscalingDruidtomeetincreasing\ndata volumes with the TPC-H 100 GB data set. We observe that\nwhenweincreasedthenumberofcoresfrom8to48,notalltypesof\nqueries achieve linear scaling, but the simpler aggregation queries\ndo, as shown in Figure 12.\nTheincreaseinspeedofaparallelcomputingsystemisoftenlim-\nitedbythetimeneededforthesequentialoperationsofthesystem.\nIn this case, queries requiring a substantial amount of work at the\nbroker level do not parallelize as well.\n6.3 Data Ingestion Performance\nTo showcase Druid’s data ingestion latency, we selected several\nproduction datasources of varying dimensions, metrics, and event\nvolumes. Our production ingestion setup consists of 6 nodes, to-\ntalling 360GB of RAM and 96 cores (12 x Intel®Xeon®E5-2670).\ncount_star_intervalsum_allsum_all_filtersum_all_yearsum_pricetop_100_commitdatetop_100_partstop_100_parts_detailstop_100_parts_filter\n0\n50\n100 150Time (seconds)QueryDruid Scaling − 100GB\ncount_star_intervalsum_allsum_all_filtersum_all_yearsum_pricetop_100_commitdatetop_100_partstop_100_parts_detailstop_100_parts_filter\n1 2 3 4 5 6Speedup FactorQuery\nCores 8 (1 node) 48 (6 nodes)Druid Scaling ... 100GBFigure12: Druid scalingbenchmarks – 100GB TPC-H data.\nDataSource Dimensions Metrics Peak events/s\ns 7 228334.60\nt 10 768808.70\nu 5 149933.93\nv 30 1022240.45\nw 35 14135763.17\nx 28 646525.85\ny 33 24162462.41\nz 33 2495747.74\nTable3: Ingestion characteristics of various data sources.\nNote that in this setup, several other data sources were being in-\ngested and many other Druid related ingestion tasks were running\nconcurrently on the machines.\nDruid’s data ingestion latency is heavily dependent on the com-\nplexity of the data set being ingested. The data complexity is de-\ntermined by the number of dimensions in each event, the number\nof metrics in each event, and the types of aggregations we want to\nperform on those metrics. With the most basic data set (one that\nonlyhasatimestampcolumn),oursetupcaningestdataatarateof\n800,000 events/second/core, which is really just a measurement of\nhow fast we can deserialize events. Real world data sets are never\nthis simple. Table 3 shows a selection of data sources and their\ncharacteristics.\nWe can see that, based on the descriptions in Table 3, latencies\nvarysignificantlyandtheingestionlatencyisnotalwaysafactorof\nthenumberofdimensionsandmetrics. Weseesomelowerlatencies\nonsimpledatasetsbecausethatwastheratethatthedataproducer\nwas delivering data. The results are shown in Figure 13.\nWe define throughput as the number of events a real-time node\ncan ingest and also make queryable. If too many events are sent\nto the real-time node, those events are blocked until the real-time\nnode has capacity to accept them. The peak ingestion latency we\nmeasuredinproductionwas22914.43events/second/coreonadata-\nsource with 30 dimensions and 19 metrics, running an Amazon\ncc2.8xlarge instance.050,000100,000150,000200,000250,000\nDec 15 Jan 01 Jan 15 Feb 01 Feb 15 Mar 01\ntimeevents / sdatasource\ns\nt\nu\nv\nw\nx\ny\nzEvents per second ... 
24h moving averageFigure13: Combined cluster ingestion rates.\nThelatencymeasurementswepresentedaresufficienttoaddress\nthestatedproblemsofinteractivity. Wewouldpreferthevariability\nin the latencies to be less. It is still possible to decrease latencies\nby adding additional hardware, but we have not chosen to do so\nbecause infrastructure costs are still a consideration for us.\n7. DRUID IN PRODUCTION\nOver the last few years, we have gained tremendous knowledge\nabout handling production workloads with Druid and have made a\ncouple of interesting observations.\nQuery Patterns.\nDruid is often used to explore data and generate reports on data.\nIn the explore use case, the number of queries issued by a single\nuser are much higher than in the reporting use case. Exploratory\nqueriesofteninvolveprogressivelyaddingfiltersforthesametime\nrange to narrow down results. Users tend to explore short time in-\ntervals of recent data. In the generate report use case, users query\nfor much longer data intervals, but those queries are generally few\nand pre-determined.\nMultitenancy.\nExpensiveconcurrentqueriescanbeproblematicinamultitenant\nenvironment. Queriesforlargedatasourcesmayenduphittingev-\nery historical node in a cluster and consume all cluster resources.\nSmaller, cheaper queries may be blocked from executing in such\ncases. We introduced query prioritization to address these issues.\nEach historical node is able to prioritize which segments it needs\ntoscan. Properqueryplanningiscriticalforproductionworkloads.\nThankfully, queries for a significant amount of data tend to be for\nreporting use cases and can be deprioritized. Users do not expect\nthe same level of interactivity in this use case as when they are ex-\nploring data.\nNode failures.\nSinglenodefailuresarecommonindistributedenvironments,but\nmany nodes failing at once are not. If historical nodes completely\nfailanddonotrecover,theirsegmentsneedtobereassigned,which\nmeansweneedexcessclustercapacitytoloadthisdata. Theamount\nof additional capacity to have at any time contributes to the cost\nof running a cluster. From our experiences, it is extremely rare to\nsee more than 2 nodes completely fail at once and hence, we leave\nenoughcapacityinourclustertocompletelyreassignthedatafrom\n2 historical nodes.Data Center Outages.\nComplete cluster failures are possible, but extremely rare. If\nDruid is only deployed in a single data center, it is possible for\nthe entire data center to fail. In such cases, new machines need\nto be provisioned. As long as deep storage is still available, clus-\nterrecoverytimeisnetworkbound,ashistoricalnodessimplyneed\nto redownload every segment from deep storage. We have experi-\nenced such failures in the past, and the recovery time was several\nhoursintheAmazonAWSecosystemforseveralterabytesofdata.\n7.1 Operational Monitoring\nPropermonitoringiscriticaltorunalargescaledistributedclus-\nter. Each Druid node is designed to periodically emit a set of oper-\national metrics. These metrics may include system level data such\nasCPUusage,availablememory,anddiskcapacity,JVMstatistics\nsuch as garbage collection time, and heap usage, or node specific\nmetrics such as segment scan time, cache hit rates, and data inges-\ntion latencies. Druid also emits per query metrics.\nWe emit metrics from a production Druid cluster and load them\ninto a dedicated metrics Druid cluster. The metrics Druid cluster\nis used to explore the performance and stability of the production\ncluster. 
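The shape of such periodically emitted metric events might look like the following sketch; the metric names and the emit() helper are illustrative, not Druid's built-in emitter.

import json, time

def emit(metric, value, node="historical-01"):
    event = {"timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
             "node": node, "metric": metric, "value": value}
    print(json.dumps(event))          # in production this would be shipped to the metrics cluster

emit("segment/scan/time", 42)         # node-specific metric (milliseconds)
emit("cache/hit/rate", 0.87)
emit("jvm/gc/time", 13)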
This dedicated metrics cluster has allowed us to find nu-\nmerousproductionproblems,suchasgradualqueryspeeddegrega-\ntions,lessthanoptimallytunedhardware,andvariousothersystem\nbottlenecks. We also use a metrics cluster to analyze what queries\naremadeinproductionandwhataspectsofthedatausersaremost\ninterested in.\n7.2 Pairing Druid with a Stream Processor\nCurrently, Druid can only understand fully denormalized data\nstreams. Inordertoprovidefullbusinesslogicinproduction,Druid\ncan be paired with a stream processor such as Apache Storm [27].\nA Storm topology consumes events from a data stream, retains\nonly those that are “on-time”, and applies any relevant business\nlogic. This could range from simple transformations, such as id\ntonamelookups,tocomplexoperationssuchasmulti-streamjoins.\nThe Storm topology forwards the processed event stream to Druid\nin real-time. Storm handles the streaming data processing work,\nand Druid is used for responding to queries for both real-time and\nhistorical data.\n7.3 Multiple Data Center Distribution\nLargescaleproductionoutagesmaynotonlyaffectsinglenodes,\nbut entire data centers as well. The tier configuration in Druid co-\nordinatornodesallowforsegmentstobereplicatedacrossmultiple\ntiers. Hence, segments can be exactly replicated across historical\nnodes in multiple data centers. Similarily, query preference can be\nassigned to different tiers. It is possible to have nodes in one data\ncenter act as a primary cluster (and receive all queries) and have a\nredundant cluster in another data center. Such a setup may be de-\nsired if one data center is situated much closer to users.\n8. RELATED WORK\nCattell [6] maintains a great summary about existing Scalable\nSQL and NoSQL data stores. Hu [18] contributed another great\nsummary for streaming databases. Druid, feature-wise, sits some-\nwhere between Google’s Dremel [28] and PowerDrill [17]. Druid\nhas most of the features implemented in Dremel (Dremel handles\narbitrarynesteddatastructureswhileDruidonlyallowsforasingle\nlevel of array-based nesting) and many of the interesting compres-\nsion algorithms mentioned in PowerDrill.\nAlthough Druid builds on many of the same principles as other\ndistributedcolumnardatastores[15],manyofthesedatastoresaredesigned to be more generic key-value stores [23] and do not sup-\nport computation directly in the storage layer. There are also other\ndata stores designed for some of the same data warehousing issues\nthat Druid is meant to solve. These systems include in-memory\ndatabases such as SAP’s HANA [14] and VoltDB [43]. These data\nstoreslackDruid’slowlatencyingestioncharacteristics. Druidalso\nhas native analytical features baked in, similar to ParAccel [34],\nhowever, Druid allows system wide rolling software updates with\nno downtime.\nDruid is similiar to C-Store [38] and LazyBase [8] in that it has\ntwosubsystems,aread-optimizedsubsysteminthehistoricalnodes\nand a write-optimized subsystem in the real-time nodes. Real-time\nnodes are designed to ingest a high volume of append heavy data,\nand do not support data updates. Unlike the two aforementioned\nsystems,DruidismeantforOLAPtransactionsandnotOLTPtrans-\nactions.\nDruid’s low latency data ingestion features share some similar-\nities with Trident/Storm [27] and Spark Streaming [45], however,\nboth systems are focused on stream processing whereas Druid is\nfocused on ingestion and aggregation. 
Stream processors are great complements to Druid as a means of pre-processing the data before the data enters Druid.\nThere is a class of systems that specialize in queries on top of cluster computing frameworks. Shark [13] is such a system for queries on top of Spark, and Cloudera’s Impala [9] is another system focused on optimizing query performance on top of HDFS. Druid historical nodes download data locally and only work with native Druid indexes. We believe this setup allows for faster query latencies.\nDruid leverages a unique combination of algorithms in its architecture. Although we believe no other data store has the same set of functionality as Druid, some of Druid’s optimization techniques, such as using inverted indices to perform fast filters, are also used in other data stores [26].\n9. CONCLUSIONS\nIn this paper we presented Druid, a distributed, column-oriented, real-time analytical data store. Druid is designed to power high performance applications and is optimized for low query latencies. Druid supports streaming data ingestion and is fault-tolerant. We discussed Druid benchmarks and summarized key architecture aspects such as the storage format, query language, and general execution.\n10. ACKNOWLEDGEMENTS\nDruid could not have been built without the help of many great engineers at Metamarkets and in the community. We want to thank everyone that has contributed to the Druid codebase for their invaluable support.\n11. REFERENCES\n[1] D. J. Abadi, S. R. Madden, and N. Hachem. Column-stores vs. row-stores: How different are they really? In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 967–980. ACM, 2008.\n[2] G. Antoshenkov. Byte-aligned bitmap compression. In Data Compression Conference, 1995. DCC ’95. Proceedings, page 476. IEEE, 1995.\n[3] Apache. Apache solr. http://lucene.apache.org/solr/, February 2013.\n[4] S. Banon. Elasticsearch. http://www.elasticseach.com/, July 2013.\n[5] C. Bear, A. Lamb, and N. Tran. The vertica database: Sql rdbms for managing big data. In Proceedings of the 2012 workshop on Management of big data systems, pages 37–38. ACM, 2012.\n[6] R. Cattell. Scalable sql and nosql data stores. ACM SIGMOD Record, 39(4):12–27, 2011.\n[7] F. Chang, J. Dean, S. Ghemawat, W. C. Hsieh, D. A. Wallach, M. Burrows, T. Chandra, A. Fikes, and R. E. Gruber. Bigtable: A distributed storage system for structured data. ACM Transactions on Computer Systems (TOCS), 26(2):4, 2008.\n[8] J. Cipar, G. Ganger, K. Keeton, C. B. Morrey III, C. A. Soules, and A. Veitch. Lazybase: trading freshness for performance in a scalable database. In Proceedings of the 7th ACM European conference on Computer Systems, pages 169–182. ACM, 2012.\n[9] Cloudera impala. http://blog.cloudera.com/blog, March 2013.\n[10] A. Colantonio and R. Di Pietro. Concise: Compressed ’n’ composable integer set. Information Processing Letters, 110(16):644–650, 2010.\n[11] J. Dean and S. Ghemawat. Mapreduce: simplified data processing on large clusters. Communications of the ACM, 51(1):107–113, 2008.\n[12] G. DeCandia, D. Hastorun, M. Jampani, G. Kakulapati, A. Lakshman, A. Pilchin, S. Sivasubramanian, P. Vosshall, and W. Vogels. Dynamo: amazon’s highly available key-value store. In ACM SIGOPS Operating Systems Review, volume 41, pages 205–220. ACM, 2007.\n[13] C. Engle, A. Lupher, R. Xin, M. Zaharia, M. J. Franklin, S. Shenker, and I. Stoica. Shark: fast data analysis using coarse-grained distributed memory. In Proceedings of the 2012 international conference on Management of Data, pages 689–692. 
ACM, 2012.\n[14] F.Färber,S. K. Cha, J. Primsch, C. Bornhövd, S. Sigg, and\nW.Lehner.Sap hana database: data management for modern\nbusiness applications. ACM Sigmod Record , 40(4):45–51,\n2012.\n[15] B. Fink. Distributed computation on dynamo-style\ndistributed storage: riak pipe. In Proceedingsof the eleventh\nACM SIGPLAN workshop on Erlang workshop , pages\n43–50. ACM,2012.\n[16] B. Fitzpatrick. Distributed caching with memcached. Linux\njournal, (124):72–74, 2004.\n[17] A. Hall, O. Bachmann, R. Büssow,S. Gănceanu, and\nM. Nunkesser.Processing a trillion cells per mouse click.\nProceedingsof the VLDB Endowment , 5(11):1436–1446,\n2012.\n[18] B. Hu. Stream database survey.2011.\n[19] P.Hunt, M. Konar,F.P. Junqueira, and B. Reed. Zookeeper:\nWait-freecoordinationforinternet-scalesystems.In USENIX\nATC, volume 10, 2010.\n[20] C. S. Kim. Lrfu: A spectrum of policies that subsumes the\nleast recently used and least frequently used policies. IEEE\nTransactionson Computers , 50(12), 2001.\n[21] J. Kreps, N.Narkhede, and J. Rao. Kafka: A distributed\nmessaging system for log processing. In Proceedingsof 6th\nInternational Workshopon Networking Meets Databases\n(NetDB), Athens, Greece , 2011.\n[22] T.Lachev. Applied MicrosoftAnalysis Services 2005: And\nMicrosoftBusiness Intelligence Platform . Prologika Press,\n2005.[23] A. Lakshman and P.Malik. Cassandra—a decentralized\nstructured storage system. Operating systems review ,\n44(2):35, 2010.\n[24] Liblzf. http://freecode.com/projects/liblzf , March\n2013.\n[25] LinkedIn. Senseidb. http://www.senseidb.com/ , July\n2013.\n[26] R. MacNicol and B. French. Sybase iq multiplex-designed\nfor analytics. In Proceedingsof the Thirtieth international\nconferenceon Verylargedata bases-Volume30 , pages\n1227–1230.VLDBEndowment, 2004.\n[27] N. Marz. Storm: Distributed and fault-tolerant realtime\ncomputation. http://storm-project.net/ , February\n2013.\n[28] S. Melnik, A. Gubarev,J. J. Long, G. Romer,S. Shivakumar,\nM.Tolton,and T.Vassilakis.Dremel: interactive analysis of\nweb-scale datasets. Proceedingsof the VLDB Endowment ,\n3(1-2):330–339, 2010.\n[29] D. Miner.Unified analytics platform for big data. In\nProceedingsof the WICSA/ECSA 2012 Companion Volume ,\npages 176–176. ACM,2012.\n[30] K.Oehler,J.Gruenes,C.Ilacqua,andM.Perez. IBMCognos\nTM1: The Official Guide . McGraw-Hill, 2012.\n[31] E. J. O’neil, P.E. O’neil, and G. Weikum.The lru-k page\nreplacement algorithm for database disk buffering.In ACM\nSIGMODRecord , volume 22, pages 297–306. ACM, 1993.\n[32] P.O’Neil and D. Quass. Improved query performance with\nvariant indexes. In ACM Sigmod Record , volume 26, pages\n38–49. ACM,1997.\n[33] P.O’Neil, E. Cheng, D. Gawlick, and E. O’Neil. The\nlog-structured merge-tree(lsm-tree). Acta Informatica ,\n33(4):351–385, 1996.\n[34] Paraccel analytic database.\nhttp://www.paraccel.com/resources/Datasheets/\nParAccel-Core-Analytic-Database.pdf , March 2013.\n[35] M. Schrader,D. Vlamis, M. Nader,C. Claterbos, D. Collins,\nM.Campbell, and F.Conrad. Oracle Essbase & Oracle\nOLAP.McGraw-Hill, Inc., 2009.[36] K. Shvachko, H. Kuang, S. Radia, and R. Chansler.The\nhadoop distributed file system. In Mass Storage Systems and\nTechnologies(MSST), 2010 IEEE 26th Symposium on , pages\n1–10. IEEE, 2010.\n[37] M. Singh and B. Leonhardi. Introduction to the ibm netezza\nwarehouse appliance. In Proceedingsof the 2011Conference\nof the Center for Advanced Studies on Collaborative\nResearch, pages 385–386. IBM Corp., 2011.\n[38] M. Stonebraker,D. J. Abadi, A. Batkin, X. 
Chen,\nM. Cherniack, M. Ferreira, E. Lau, A. Lin, S. Madden,\nE. O’Neil, et al. C-store: a column-oriented dbms. In\nProceedingsof the 31st international conferenceon Very\nlargedata bases , pages 553–564. VLDB Endowment, 2005.\n[39] A. Tomasicand H. Garcia-Molina. Performance of inverted\nindices in shared-nothing distributed text document\ninformation retrieval systems. In Parallel and Distributed\nInformation Systems, 1993., Proceedingsof the Second\nInternational Conferenceon ,pages 8–17. IEEE, 1993.\n[40] E. Tschetter.Introducing druid: Real-time analytics at a\nbillion rows per second. http://druid.io/blog/2011/\n04/30/introducing-druid.html , April 2011.\n[41] Twitterpublic streams. https://dev.twitter.com/\ndocs/streaming-apis/streams/public , March 2013.\n[42] S. J.van Schaik and O. de Moor.A memory efficient\nreachability data structure through bit vector compression.In\nProceedingsof the 2011international conferenceon\nManagement of data , pages 913–924. ACM, 2011.\n[43] L. VoltDB.Voltdbtechnical overview.\nhttps://voltdb.com/ , 2010.\n[44] K. Wu,E. J. Otoo, and A. Shoshani. Optimizing bitmap\nindices with efficient compression. ACM Transactionson\nDatabase Systems (TODS) , 31(1):1–38, 2006.\n[45] M. Zaharia, T.Das, H. Li, S. Shenker, and I. Stoica.\nDiscretized streams: an efficient and fault-tolerant model for\nstreamprocessingonlargeclusters.In Proceedingsofthe4th\nUSENIX conferenceon Hot Topicsin Cloud Computing ,\npages 10–10. USENIX Association, 2012." } ]
{ "category": "App Definition and Development", "file_name": "druid.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "Welcome to VoltDB\nA TutorialWelcome to VoltDB: A Tutorial\nCopyright © 2013-2022 Volt Active Data, Inc.Table of Contents\nPreface ............................................................................................................................. iv\nHow to Use This Tutorial ............................................................................................ iv\n1. Creating the Database ...................................................................................................... 1\nStarting the Database and Loading the Schema ................................................................ 1\nUsing SQL Queries ..................................................................................................... 2\n2. Loading and Managing Data ............................................................................................. 3\nRestarting the Database ................................................................................................ 3\nLoading the Data ........................................................................................................ 4\nQuerying the Database ................................................................................................. 4\n3. Partitioning ..................................................................................................................... 7\nPartitioned Tables ....................................................................................................... 7\nReplicated Tables ........................................................................................................ 8\n4. Schema Updates and Durability ........................................................................................ 10\nPreserving the Database .............................................................................................. 10\nAdding and Removing Tables ...................................................................................... 11\nUpdating Existing Tables ............................................................................................ 12\n5. Stored Procedures .......................................................................................................... 13\nSimple Stored Procedures ........................................................................................... 13\nWriting More Powerful Stored Procedures ..................................................................... 14\nCompiling Java Stored Procedures ................................................................................ 16\nPutting it All Together ............................................................................................... 17\n6. Client Applications ........................................................................................................ 19\nMaking the Sample Application Interactive .................................................................... 19\nDesigning the Solution ............................................................................................... 19\nDesigning the Stored Procedures for Data Access ........................................................... 20\nCreating the LoadWeather Client Application ................................................................. 21\nRunning the LoadWeather Application .......................................................................... 
23\nCreating the GetWeather Application ............................................................................ 23\nVoltDB in User Applications ....................................................................................... 23\nVoltDB in High Performance Applications .................................................................... 24\nRunning the GetWeather Application ............................................................................ 26\nIn Conclusion ........................................................................................................... 27\n7. Next Steps .................................................................................................................... 28\niiiPreface\nVoltDB is designed to help you achieve world-class performance in terms of throughput and scalability.\nAt the same time, at its core, VoltDB is a relational database and can do all the things a traditional rela-\ntional database can do. So before we go into the fine points of tuning a database application for maximum\nperformance, let's just review the basics.\nThe following tutorial familiarizes you with the features and capabilities of VoltDB step by step, starting\nwith its roots in SQL and walking through the individual features that help VoltDB excel in both flexibility\nand performance, including:\n•Schema Definition\n•SQL Queries\n•Partitioning\n•Schema Updates\n•Stored Procedures\nFor the purposes of this tutorial, let's assume we want to learn more about the places where we live. How\nmany cities and towns are there? Where are they? How many people live there? What other interesting\nfacts can we find out?\nOur study will take advantage of data that is freely available from government web sites. But for now,\nlet's start with the basic structure.\nHow to Use This Tutorial\nOf course, you can just read the tutorial to get a feel for how VoltDB works. But we encourage you to\nfollow along using your own copy of VoltDB if you wish.\nThe data files used for the tutorial are freely available from public web sites; links are provided in the\ntext. However, the initial data is quite large. So we have created a subset of source files and pre-processed\ndata files that is available from the VoltDB web site at the following address for those who wish to try\nit themselves:\nhttp://downloads.voltdb.com/technologies/other/tutorial_files_50.zip\nFor each section of the tutorial, there is a subfolder containing the necessary source files, plus one sub-\nfolder, data, containing the data files. To follow along with the tutorial, do the following:\n1.Create a folder to work in.\n2.Unpack the tutorial files into the folder.\n3.At the beginning of each section of the tutorial:\na.Set default to your tutorial folder.\nb.Copy the sources for that current tutorial into the your working directory, like so:\n$ cp -r tutorial1/* ./\nivPreface\nThe tutorial also uses the VoltDB command line commands. Be sure to set up your environment so the\ncommands are available to you, as described in the installation chapter of Using VoltDB .\nvPart 1: Creang the Database\nIn VoltDB you define your database schema using SQL data definition language (DDL) statements just\nlike other SQL databases. So, if we want to create a database table for the places where we live, the DDL\nschema might look like the following:\nCREATE TABLE towns (\n town VARCHAR(64),\n county VARCHAR(64),\n state VARCHAR(2)\n);\nThe preceding schema defines a single table with three columns: town, county, and state. 
We could also\nset options, such as default values and primary keys. But for now we will keep it as simple as possible.\nStarting the Database and Loading the Schema\nOnce you have the schema defined, you can initialize and start the database, then load your schema. There\nare several options available when initializing a VoltDB database, which we will discuss later. But for\nnow, we can use the simplest init and start commands to initialize and start the database using the default\noptions on the current machine:\n$ voltdb init\n$ voltdb start\nThe voltdb init command initializes a root directory that VoltDB uses to store its configuration, logs, and\nother disk-based information. You only need to initialize the root directory once for a production database.\nWhen doing development, where you often want to start over with new settings or a completely different\nschema, you can reuse the same root by using the voltdb init --force command between runs. We will use\nboth methods — starting a fresh database and restarting an existing database — in this tutorial. To start\nwith, we will re-initialize the database root each time.\nThe voltdb start command tells VoltDB to start the database process. Once startup completes, the server\nreports the following message:\nServer completed initialization.\nNow you are ready to load your schema. To do that you use the VoltDB interactive command line utility,\nsqlcmd. Create a new terminal window and issue the sqlcmd command from the shell prompt:\n$ sqlcmd\nSQL Command :: localhost:21212\n1> \nThe VoltDB interactive SQL command line first reports what database it has connected to and then puts\nup a numbered prompt. At the prompt, you can enter DDL statements and SQL queries, execute stored\nprocedures, or type \"exit\" to end the program and return to the shell prompt.\nTo load the schema, you can either type the DDL schema statements by hand or, if you have them in a\nfile, you can use the FILE directive to process all of the DDL statements with a single command. Since\nwe only have one table definition, we can type or cut & paste the DDL directly into sqlcmd:\n1Creating the Database\n1> CREATE TABLE towns (\n2> town VARCHAR(64),\n3> county VARCHAR(64),\n4> state VARCHAR(2)\n5> );\nUsing SQL Queries\nCongratulations! You have created your first VoltDB database. Of course, an empty database is not terribly\nuseful. So the first thing you will want to do is create and retrieve a few records to prove to yourself that\nthe database is running as you expect.\nVoltDB supports all of the standard SQL query statements, such as INSERT, UPDATE, DELETE, and\nSELECT. You can invoke queries programmatically, through standard interfaces such as JDBC and JSON,\nor you can include them in stored procedures that are compiled and loaded into the database.\nBut for now, we'll just try some ad hoc queries using sqlcmd. Let's start by creating records using the\nINSERT statement. The following example creates three records, for the towns of Billerica, Buffalo, and\nBay View. Be sure to include the semi-colon after each statement.\n1> insert into towns values ('Billerica','Middlesex','MA');\n2> insert into towns values ('Buffalo','Erie','NY');\n3> insert into towns values ('Bay View','Erie','OH');\nWe can also use ad hoc queries to verify that our inserts worked as expected. 
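Ad hoc queries do not have to go through sqlcmd, either. As mentioned above, you can reach the database programmatically through interfaces such as JDBC and JSON. Purely as an illustration (this assumes the HTTP/JSON interface is listening on its default port of 8080 and uses the @AdHoc system procedure, so confirm the details against the documentation for your release), the same kind of check could be made from the shell with curl:\n$ curl --data 'Procedure=@AdHoc&Parameters=[\"select count(*) from towns\"]' \\\n    http://localhost:8080/api/1.0/\nThe result comes back as a JSON document containing the same rows sqlcmd would display. 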
The following queries use\nthe SELECT statement to retrieve information about the database records.\n4> select count(*) as total from towns;\nTOTAL \n------\n 3\n(1 row(s) affected)\n5> select town, state from towns ORDER BY town;\nTOWN STATE \n------------ ------\nBay View OH \nBillerica MA \nBuffalo NY \n(3 row(s) affected)\nWhen you are done working with the database, you can type \"exit\" to end the sqlcmd session and return\nto the shell command prompt. Then switch back to the terminal session where you started the database\nand press CTRL-C to end the database process.\nThis ends Part One of the tutorial.\n2Part 2: Loading and Managing Data\nAs useful as ad hoc queries are, typing in data by hand is not very efficient. Fortunately, VoltDB provides\nseveral features to help automate this process.\nWhen you define tables using the CREATE TABLE statement, VoltDB automatically creates stored pro-\ncedures to insert records for each table. There is also a command line tool that uses these default stored\nprocedures so you can load data files into your database with a single command. The csvloader command\nreads a data file, such as a comma-separated value (CSV) file, and writes each entry as a record in the\nspecified database table using the default insert procedure.\nIt just so happens that there is data readily available for towns and other landmarks in the United States.\nThe Geographic Names Information Service (GNIS), part of the U.S. Geological Survey, provides data\nfiles of officially named locations throughout the United States. In particular, we are interested in the\ndata file for populated places. This data is available as a text file from their web site , http://geonames.us-\ngs.gov/domestic/download_data.htm\nThe information provided by GNIS not only includes the name, county and state, it includes each location's\nposition (latitude and longitude) and elevation. Since we don't need all of the information provided, we\ncan reduce the number of columns to only the information we want.\nFor our purposes, let's use information about the name, county, state, and elevation of any populated places.\nThis means we can go back and edit our schema file, towns.sql , to add the new columns:\nCREATE TABLE towns (\n town VARCHAR(64),\n state VARCHAR(2),\n state_num TINYINT NOT NULL,\n county VARCHAR(64),\n county_num SMALLINT NOT NULL,\n elevation INTEGER\n);\nNote that the GNIS data includes both names and numbers for identifying counties and states. We re-\narranged the columns to match the order in which they appear in the data file. This makes loading the\ndata easier since, by default, the csvloader command assumes the data columns are in the same order as\nspecified in the schema.\nFinally, we can use some shell magic to trim out the unwanted columns and rows from the data file. The\nfollowing script selects the desired columns and removes any records with empty fields:\n$ cut --delimiter=\"|\" --fields=2,4-7,16 POP_PLACES_20120801.txt \\\n | grep -v \"|$\" \\\n | grep -v \"||\" > data/towns.txt\nTo save time and space, the resulting file containing only the data we need is included with the tutorial\nfiles in a subfolder as data/towns.txt .\nRestarting the Database\nBecause we changed the schema and reordered the columns of the Towns table, we want to start over with\nan empty database and reload the schema. 
Alternately, if the database is still running you could do a DROP\nTABLE and CREATE TABLE to delete any existing data and replace the table definition.\n3Loading and Managing Data\nLater we'll learn how to restart an existing database and recover its schema and content. But for now, we'll\nuse the same commands you learned earlier to re-initialize the root directory and start an empty database in\none terminal session and load the schema using sqlcmd in another. You'll want to add the --force argument\nto the voltdb init command to indicate you don't need any of the old command logs or snapshots from\nyour previous session. And this time we will load the schema from our DDL file using the FILE directive:\n [terminal 1]\n$ voltdb init --force\n$ voltdb start\n [terminal 2]\n$ sqlcmd\n1> FILE towns.sql;\nCommand succeeded.\n2> exit\nLoading the Data\nOnce the database is running and the schema loaded, we are ready to insert our new data. To do this, set\ndefault to the /data subfolder in your tutorial directory, and use the csvloader command to load the\ndata file:\n$ cd data\n$ csvloader --separator \"|\" --skip 1 \\\n --file towns.txt towns\nIn the preceding commands:\nThe --separator flag lets you specify the character separating the individual data entries. Since\nthe GNIS data is not a standard CSV, we use --separator to identify the correct delimiter.\nThe data file includes a line with column headings. The --skip 1 flag tells csvloader to skip the\nfirst line.\nThe --file flag tells csvloader what file to use as input. If you do not specify a file, csvloader uses\nstandard input as the source for the data.\nThe argument, towns, tells csvloader which database table to load the data into.\nThe csvloader loads all of the records into the database and it generates three log files: one listing any\nerrors that occurred, one listing any records it could not load from the data file, and a summary report\nincluding statistics on how long the loading process took and how many records were loaded.\nQuerying the Database\nNow that we have real data, we can perform more interesting queries. For example, which towns are at\nthe highest elevation, or how many locations in the United States have the same name?\n4Loading and Managing Data\n$ sqlcmd\n1> SELECT town,state,elevation from towns order by elevation desc limit 5;\nTOWN STATE ELEVATION \n------------------------- ------ ----------\nCorona (historical) CO 3573\nQuartzville (historical) CO 3529\nLogtown (historical) CO 3524\nTomboy (historical) CO 3508\nRexford (historical) CO 3484\n(5 row(s) affected)\n2> select town, count(town) as duplicates from towns \n3> group by town order by duplicates desc limit 5;\nTOWN DUPLICATES \n--------------- -----------\nMidway 215\nFairview 213\nOak Grove 167\nFive Points 150\nRiverside 130\n(5 row(s) affected)\nAs we can see, the five highest towns are all what appear to be abandoned mining towns in the Rocky\nMountains. And Springfield, as common as it is, doesn't make it into the top five named places in the\nUnited States.\nWe can make even more interesting discoveries when we combine data. We already have information about\nlocations and elevation. The US Census Bureau can also provide us with information about population\ndensity. Population data for individual towns and counties in the United States can be downloaded from\ntheir web site , http://www.census.gov/popest/data/index.html .\nTo add the new data, we must add a new table to the database. 
So let's edit our schema to add a table for\npopulation that we will call people. While we are at it, we can create indexes for both tables, using the\ncolumns that will be used most frequently for searching and sorting, state_num and county_num.\nCREATE TABLE towns (\n town VARCHAR(64),\n state VARCHAR(2),\n state_num TINYINT NOT NULL,\n county VARCHAR(64),\n county_num SMALLINT NOT NULL,\n elevation INTEGER\n);\nCREATE TABLE people (\n state_num TINYINT NOT NULL,\n county_num SMALLINT NOT NULL,\n state VARCHAR(20),\n county VARCHAR(64),\n population INTEGER\n);\nCREATE INDEX town_idx ON towns (state_num, county_num);\nCREATE INDEX people_idx ON people (state_num, county_num);\n5Loading and Managing Data\nOnce again, we put the columns in the same order as they appear in the data file. We also need to trim\nthe data file to remove extraneous columns. The census bureau data includes both measured and estimated\nvalues. For the tutorial, we will focus on one population number.\nThe shell command to trim the data file is the following. (Again, the resulting data file is available as part\nof the downloadable tutorial package.)\n$ grep -v \"^040,\" CO-EST2011-Alldata.csv \\\n | cut --delimiter=\",\" --fields=4-8 > people.txt\nOnce we have the data and the new DDL statements, we can update the database schema. We could stop\nand restart the database, load the new schema from our text file and reload the data. But we don't have\nto. Since we are not changing the Towns table or adding a unique index, we can make our changes to the\nrunning database by simply cutting and pasting the new DDL statements into sqlcmd:\n$ sqlcmd\n1> CREATE TABLE people (\n2> state_num TINYINT NOT NULL,\n3> county_num SMALLINT NOT NULL,\n4> state VARCHAR(20),\n5> county VARCHAR(64),\n6> population INTEGER\n7> );\n8> CREATE INDEX town_idx ON towns (state_num, county_num);\n9> CREATE INDEX people_idx ON people (state_num, county_num);\nOnce we create the new table and indexes, we can load the accompanying data file:\n$ cd data\n$ csvloader --file people.txt --skip 1 people\nAt this point we now have two tables loaded with data. Now we can join the tables to look for correlations\nbetween elevation and population, like so:\n$ sqlcmd\n1> select top 5 min(t.elevation) as height, \n2> t.state,t.county, max(p.population) \n3> from towns as t, people as p \n4> where t.state_num=p.state_num and t.county_num=p.county_num \n5> group by t.state, t.county order by height desc;\nHEIGHT STATE COUNTY C4 \n------- ------ --------- ------\n 2754 CO Lake 7310\n 2640 CO Hinsdale 843\n 2609 CO Mineral 712\n 2523 CO San Juan 699\n 2454 CO Summit 27994\n(5 row(s) affected)\nIt turns out that, even discounting ghost towns that have no population, the five inhabited counties with\nhighest elevation are all in Colorado. In fact, if we reverse the select expression to find the lowest inhabited\ncounties (by changing the sort order from descending to ascending), the lowest is Inyo county in California\n— the home of Death Valley!\nThis ends Part Two of the tutorial.\n6Part 3: Paroning\nNow you have the hang of the basic features of VoltDB as a relational database, it's time to start looking\nat what makes VoltDB unique. One of the most important features of VoltDB is partitioning.\nPartitioning organizes the contents of a database table into separate autonomous units. Similar to sharding,\nVoltDB partitioning is unique because:\n•VoltDB partitions the database tables automatically, based on a partitioning column you specify. 
You\ndo not have to manually manage the partitions.\n•You can have multiple partitions, or sites, on a single server. In other words, partitioning is not just for\nscaling the data volume, it helps performance as well.\n•VoltDB partitions both the data and the processing that accesses that data, which is how VoltDB lever-\nages the throughput improvements parallelism provides.\nPartitioned Tables\nYou partition a table by specifying the partitioning column as part of your schema. If a table is partitioned,\neach time you insert a row into that table, VoltDB decides which partition the row goes into based on the\nvalue of the partitioning column. So, for example, if you partition the Towns table on the column Name,\nthe records for all towns with the same name end up in the same partition.\nHowever, although partitioning by name may be reasonable in terms of evenly distributing the records, the\ngoal of partitioning is to distribute both the data and the processing. We don't often compare information\nabout towns with the same name. Whereas, comparing towns within a given geographic region is very\ncommon. So let's partition the records by state so we can quickly do things like finding the largest or\nhighest town within a given state.\nBoth the Towns and the People tables have columns for the state name. However, they are slightly different;\none uses the state abbreviation and one uses the full name. To be consistent, we can use the State_num\ncolumn instead, which is common to both tables.\nTo partition the tables, we simply add a PARTITION TABLE statement to the database schema. Here are\nthe statements we can add to our schema to partition both tables by the State_num column:\nPARTITION TABLE towns ON COLUMN state_num;\nPARTITION TABLE people ON COLUMN state_num;\nHaving added partitioning information, we can stop the database, re-initialize, restart and reload the schema\nand data. This time, rather than using CTRL-C to kill the database process, we can use the voltadmin\nshutdown command. The voltadmin commands perform administrative functions for a database cluster\nand shutdown performs an orderly shutdown of the database whether a single node or a 15 node cluster.\nSo go to the second terminal session and use voltadmin shutdown to stop the database:\n$ voltadmin shutdown\nThen you can re-initialize and start the database and load the new schema and data files:\n7Partitioning\n [terminal 1]\n$ voltdb init --force\n$ voltdb start\n [terminal 2]\n$ sqlcmd\n1> FILE towns.sql;\nCommand succeeded.\n2> exit\n$ cd data\n$ csvloader --separator \"|\" --skip 1 \\\n --file towns.txt towns\n$ csvloader --file people.txt --skip 1 people\nThe first thing you might notice, without doing any other queries, is that loading the data files is faster.\nIn fact, when csvloader runs, it creates three log files summarizing the results of the loading process. One\nof these files, csvloader_TABLE-NAME_insert_report.log , describes how long the process\ntook and the average transactions per second (TPS). Comparing the load times before and after adding\npartitioning shows that adding partitioning increases the ingestion rate for the Towns table from approxi-\nmately 5,000 to 16,000 TPS — more than three times as fast! This performance improvement is a result of\nparallelizing the stored procedure calls across eight sites per host. 
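Eight sites per host is the default setting. If you want to experiment with a different value, the number of sites per host is set in the configuration (deployment) file passed to voltdb init. The fragment below is only a sketch; the element name and the --config flag follow the standard VoltDB configuration format, but verify them against the documentation for your release:\n<?xml version=\"1.0\"?>\n<deployment>\n   <cluster sitesperhost=\"12\" />\n</deployment>\n$ voltdb init --force --config=deployment.xml\n$ voltdb start\n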
Increasing the number of sites per host\ncan provide additional improvements, assuming the server has the core processors necessary to manage\nthe additional threads.\nReplicated Tables\nAs mentioned earlier, the two tables Towns and People both have a VARCHAR column for the state name,\nbut its use is not consistent. Instead we use the State_num column to do partitioning and joining of the\ntwo tables.\nThe State_num column contains the FIPS number. That is, a federal standardized identifier assigned to\neach state. The FIPS number ensures unique and consistent identification of the state. However, as useful\nas the FIPS number is for computation, most people think of their location by name, not number. So it\nwould be useful to have a consistent name to go along with the number.\nInstead of attempting to modify the fields in the individual tables, we can normalize our schema and\ncreate a separate table that provides an authoritative state name for each state number. Again, the federal\ngovernment makes this information freely available from the U.S. Environmental Protection Agency web\nsite, http://www.epa.gov/enviro/html/codes/state.html . Although it is not directly downloadable as a data\nfile, a copy of the FIPS numbers and names for all of the states is included in the tutorial files in the data\nsubfolder as data/state.txt .\nSo let's go and add a table definition for this data to our schema:\nCREATE TABLE states (\n abbreviation VARCHAR(20),\n state_num TINYINT,\n name VARCHAR(20),\n PRIMARY KEY (state_num)\n);\nThis sort of lookup table is very common in relational databases. They reduce redundancy and ensure data\nconsistency. Two of the most common attributes of lookup tables are that they are relatively small in size\nand they are static. That is, they are primarily read-only.\n8Partitioning\nIt would be possible to partition the States table on the State_num column, like we do the Towns and\nPeople tables. However, when a table is relatively small and not updated frequently, it is better to replicate\nit to all partitions. This way, even if another table is partitioned (such as a customer table partitioned on\nlast name), stored procedures can join the two tables, no matter what partition the procedure executes in.\nTables where all the records are available to all the partitions are called replicated tables . Note that tables\nare replicated by default. So to make the States table a replicated table, we simply include the CREATE\nTABLE statement without an accompanying PARTITION TABLE statement.\nOne last caveat concerning replicated tables: the benefits of having the data replicated is that it can be read\nfrom any individual partition. However, the deficit is that any updates or inserts to a replicated table must\nbe executed for all partitions at once. This sort of multi-partition procedure reduces the benefits of paral-\nlel processing and impacts throughput. Which is why you should not replicate tables that are frequently\nupdated.\nThis ends Part Three of the tutorial.\n9Part 4: Schema Updates and Durability\nThus far in the tutorial we have re-initialized and restarted the database from scratch and reloaded the\nschema and data manually each time we changed the schema. This is sometimes the easiest way to make\nchanges when you are first developing your application and making frequent changes. 
However, as your\napplication — and the data it uses — becomes more complex, you want to maintain your database state\nacross sessions.\nYou may have noticed that in the previous section of the tutorial we defined the States table but did not\nadd it to the running database yet. That is because we want to demonstrate ways of modifying the database\nwithout having to start from scratch each time.\nPreserving the Database\nFirst let's talk about durability. VoltDB is an in-memory database. Each time you re-initialize and start the\ndatabase with the init and start commands, it starts a new, empty database. Obviously, in real business\nsituations you want the data to persist. VoltDB has several features that preserve the database contents\nacross sessions.\nThe easiest way to preserve the database is to use command logging , which is enabled by default for the\nVoltDB Enterprise Edition. Command logging logs all of the database activity, including schema and data\nchanges, to disk. If the database ever stops, you can recover the command log simply by restarting the\ndatabase with the voltdb start command.\nIf you are using the Enterprise Edition, try it now. Stop the database process with voltadmin shutdown ,\nthen use voltdb start (without voltdb init ) to restore the database to its previous state:\n$ voltadmin shutdown\n$ voltdb start\nCommand logging makes saving and restoring your database easy and automatic. Alternately, if you are\nusing the Community Edition or are not using command logging, you can also save and restore your\ndatabase using snapshots .\nSnapshots are a complete disk-based representation of a VoltDB database, including everything needed\nto reproduce the database after a shutdown. You can create a snapshot of a running VoltDB database at\nanytime using the voltadmin save command. For example:\n voltadmin save\nBy default, the snapshot is saved to a subfolder of the database root directory. Alternately, you can specify\nthe location and name of the snapshot files as arguments to the voltadmin save command. But there is\nan advantage to saving the snapshot to the default location. Because if there are any snapshots in the root\ndirectory, the voltdb start command automatically restores the most recent snapshot when the database\nrestarts.\nTo make it even easier, VoltDB let's you create a final snapshot when you shutdown the database simply\nby adding the --save argument to the shutdown command. This is the recommended way to shutdown the\ndatabase when using the community edition (that is, not using command logging). Let's try it:\n$ voltadmin shutdown --save\n$ voltdb start\nWe can verify that the database was restored by doing some simple SQL queries in our other terminal\nsession:\n10Schema Updates and Durability\n$ sqlcmd\nSQL Command :: localhost:21212\n1> select count(*) from towns;\nC1 \n-------\n 193297\n(1 row(s) affected)\n2> select count(*) from people;\nC1 \n------\n 81691\n(1 row(s) affected)\nAdding and Removing Tables\nNow that we know how to save and restore the database, we can add the States table we defined in Part\nThree. Adding and dropping tables can be done \"on the fly\", while the database is running, using the\nsqlcmd utility. To add tables, you simply use the CREATE TABLE statement, like we did before. When\nmodifying existing tables you can use the ALTER TABLE statement. 
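For example, a column can be added to an existing table from the sqlcmd prompt while the database is running. The statement below is purely illustrative; the founded_year column is not part of the tutorial schema, and the exact ADD COLUMN syntax should be confirmed against the ALTER TABLE reference for your release:\n1> ALTER TABLE towns ADD COLUMN founded_year INTEGER;\n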
Alternately, if you are not concerned\nwith preserving existing data in the table, you can do a DROP TABLE followed by CREATE TABLE to\nreplace the table definition.\nIn the case of the States table we are adding a new table so we can simply type (or copy & paste) the\nCREATE TABLE statement into the sqlcmd prompt. We can also use the show tables directive to verify\nthat our new table has been added.\n$ sqlcmd\nSQL Command :: localhost:21212\n1> CREATE TABLE states (\n2> abbreviation VARCHAR(20),\n3> state_num TINYINT,\n4> name VARCHAR(20),\n5> PRIMARY KEY (state_num)\n6> );\nCommand successful\n7> show tables;\n--- User Tables --------------------------------------------\nPEOPLE\nSTATES\nTOWNS\n--- User Views --------------------------------------------\n--- User Export Streams --------------------------------------------\n8> exit\nNext we can load the state information from the data file. Finally, we can use the voltadmin save command\nto save a complete copy of the database.\n11Schema Updates and Durability\n$ csvloader --skip 1 -f data/states.csv states\n$ voltadmin save\nUpdating Existing Tables\nNow that we have a definitive lookup table for information about the states, we no longer need the redun-\ndant columns in the Towns and People tables. We want to keep the FIPS column, State_num, but can\nremove the State column from each table. Our updated schema for the two tables looks like this:\nCREATE TABLE towns (\n town VARCHAR(64),\n-- state VARCHAR(2),\n state_num TINYINT NOT NULL,\n county VARCHAR(64),\n county_num SMALLINT NOT NULL,\n elevation INTEGER\n);\nCREATE TABLE people (\n state_num TINYINT NOT NULL,\n county_num SMALLINT NOT NULL,\n-- state VARCHAR(20),\n town VARCHAR(64),\n population INTEGER\n);\nIt is good to have the complete schema on file in case we want to restart from scratch. (Or if we want to\nrecreate the database on another server.) However, to modify an existing schema under development it is\noften easier just to use ALTER TABLE statements. So to modify the running database to match our new\nschema, we can use ALTER TABLE with the DROP COLUMN clause from the sqlcmd prompt:\n$ sqlcmd\nSQL Command :: localhost:21212\n1> ALTER TABLE towns DROP COLUMN state;\nCommand successful\n2> ALTER TABLE people DROP COLUMN state;\nCommand successful\nMany schema changes, including adding, removing, and modifying tables, columns, and indexes can be\ndone on the fly. However, there are a few limitations. For example, you cannot add new unique constraints\nto a table that already has data in it. In this case, you can DROP and CREATE the table with the new\nconstraints if you do not need to save the data. Or, if you need to preserve the data and you know that\nit will not violate the new constraint, you can save the data to a snapshot using an explicit directory, re-\ninitialize the database root directory, restart, and reload the new schema, then restore the data from the\nsnapshot into the updated schema using the voltadmin restore command.\nThis ends Part Four of the tutorial.\n12Part 5: Stored Procedures\nWe now have a complete database that we can interact with using SQL queries. 
For example, we can find\nthe least populous county for any given state (California, for example) with the following SQL query:\n$ sqlcmd\n1> SELECT TOP 1 county, abbreviation, population \n2> FROM people, states WHERE people.state_num=6\n3> AND people.state_num=states.state_num\n4> ORDER BY population ASC;\nHowever, typing in the same query with a different state number over and over again gets tiring very\nquickly. The situation gets worse as the queries get more complex.\nSimple Stored Procedures\nFor queries you run frequently, only changing the input, you can create a simple stored procedure. Stored\nprocedures let you define the query once and change the input values when you execute the procedure.\nStored procedures have an additional benefit; because they are pre-compiled, the queries do not need to\nbe planned at runtime, reducing the time it takes for each query to execute.\nTo create simple stored procedures — that is, procedures consisting of a single SQL query — you can\ndefine the entire procedure in your database schema using the CREATE PROCEDURE AS statement. So,\nfor example to turn our previous query into a stored procedure, we might add the following statement to\nour schema:\nCREATE PROCEDURE leastpopulated AS \n SELECT TOP 1 county, abbreviation, population\n FROM people, states WHERE people.state_num=?\n AND people.state_num=states.state_num\n ORDER BY population ASC;\nIn the CREATE PROCEDURE AS statement:\nThe label, in this case leastpopulated , is the name given to the stored procedure.\nQuestion marks are used as placeholders for values that will be input at runtime.\nIn addition to creating the stored procedure, we can also specify if it is single-partitioned or not. When you\npartition a stored procedure, you associate it with a specific partition based on the table that it accesses.\nFor example, the preceding query accesses the People table and, more importantly, narrows the focus to\na specific value of the partitioning column, State_num.\nNote that you can access more than one table in a single-partition procedure, as we do in the preceding\nexample. However, all of the data you access must be in that partition. In other words, all the tables you\naccess must be partitioned on the same key value or, for read-only SELECT statements, you can also\ninclude replicated tables.\nSo we can partition our new procedure on the People table by adding a PARTITION ON clause and\nspecifying the table and partitioning column. The complete statement is as follows:\n13Stored Procedures\nCREATE PROCEDURE leastpopulated \n PARTITION ON TABLE people COLUMN state_num\nAS \n SELECT TOP 1 county, abbreviation, population\n FROM people, states WHERE people.state_num=?\n AND people.state_num=states.state_num\n ORDER BY population ASC;\nNow when we invoke the stored procedure, it is executed only in the partition where the State_num column\nmatches the first argument to the procedure, leaving the other partitions free to process other requests.\nOf course, before we can use the procedure we need to add it to the database. Modifying stored procedures\ncan be done on the fly, like adding and removing tables. 
So we do not need to restart the database, just\ntype the CREATE PROCEDURE statement at the sqlcmd prompt:\nsqlcmd>\n1> CREATE PROCEDURE leastpopulated \n2> PARTITION ON TABLE people COLUMN state_num\n3> AS \n4> SELECT TOP 1 county, abbreviation, population\n5> FROM people, states WHERE people.state_num=?\n6> AND people.state_num=states.state_num\n7> ORDER BY population ASC;\nOnce we update the schema, the new procedure becomes available. So we can now execute the query\nmultiple times for different states simply by changing the argument to the procedure:\n1> exec leastpopulated 6;\nCOUNTY ABBREVIATION POPULATION \n-------------- ------------- -----------\nAlpine County CA 1175\n(1 row(s) affected)\n2> exec leastpopulated 48;\nCOUNTY ABBREVIATION POPULATION \n-------------- ------------- -----------\nLoving County TX 82\n(1 row(s) affected)\nWriting More Powerful Stored Procedures\nSimple stored procedures written purely in SQL are very handy as short cuts. However, some procedures\nare more complex, requiring multiple queries and additional computation based on query results. For more\ninvolved procedures, VoltDB supports writing stored procedures in Java.\nIt isn't necessary to be a Java programming wizard to write VoltDB stored procedures. All VoltDB stored\nprocedures have the same basic structure. For example, the following code reproduces the simple stored\nprocedure leastpopulated we wrote in the previous section using Java:\nimport org.voltdb.*;\n14Stored Procedures\npublic class LeastPopulated extends VoltProcedure {\n public final SQLStmt getLeast = new SQLStmt(\n \" SELECT TOP 1 county, abbreviation, population \"\n + \" FROM people, states WHERE people.state_num=?\"\n + \" AND people.state_num=states.state_num\"\n + \" ORDER BY population ASC;\" );\n public VoltTable[] run(int state_num)\n throws VoltAbortException {\n voltQueueSQL( getLeast, state_num );\n return voltExecuteSQL();\n }\n}\nIn this example:\nWe start by importing the necessary VoltDB classes and methods.\nThe procedure itself is defined as a Java class. The Java class name is the name we use at runtime to\ninvoke the procedure. In this case, the procedure name is LeastPopulated .\nAt the beginning of the class, you declare the SQL queries that the stored procedure will use. Here\nwe use the same SQL query from the simple stored procedure, including the use of a question mark\nas a placeholder.\nThe body of the procedure is a single run method. The arguments to the run method are the arguments\nthat must be provided when invoking the procedure at runtime.\nWithin the run method, the procedure queues one or more queries, specifying the SQL query name,\ndeclared in step 3, and the arguments to be used for the placeholders. (Here we only have the one\nquery with one argument, the state number.)\nFinally, a call executes all of the queued queries and the results of those queries are returned to the\ncalling application.\nNow, writing a Java stored procedure to execute a single SQL query is overkill. But it does illustrate the\nbasic structure of the procedure.\nJava stored procedures become important when designing more complex interactions with the database.\nOne of the most important aspects of VoltDB stored procedures is that each stored procedure is executed\nas a complete unit, a transaction, that either succeeds or fails as a whole. 
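One way to see what \"succeeds or fails as a whole\" means in practice is to abort a transaction deliberately. A procedure can do this by throwing VoltAbortException, the same exception class declared on the run method above. The class below is a hypothetical sketch, not part of the tutorial: it deletes a county's population row but refuses obviously bad input, and the throw rolls back anything the procedure has queued or executed so far.\nimport org.voltdb.*;\npublic class DeletePeople extends VoltProcedure {\n    public final SQLStmt deleteRow = new SQLStmt(\n        \"DELETE FROM people WHERE state_num=? AND county_num=?;\");\n    public VoltTable[] run(byte state_num, short county_num)\n        throws VoltAbortException {\n        if (state_num < 0 || county_num < 0) {\n            // Aborting rolls back the entire transaction\n            throw new VoltAbortException(\"invalid FIPS identifiers\");\n        }\n        voltQueueSQL( deleteRow, state_num, county_num );\n        return voltExecuteSQL();\n    }\n}\n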
If any errors occur during the\ntransaction, earlier queries in the transaction are rolled back before a response is returned to the calling\napplication, or any further work is done by the partition.\nOne such transaction might be updating the database. It just so happens that the population data from the\nU.S. Census Bureau contains both actual census results and estimated population numbers for following\nyears. If we want to update the database to replace the 2010 results with the 2011 estimated statistics (or\nsome future estimates), we would need a procedure to:\n1.Check to see if a record already exists for the specified state and county.\n2.If so, use the SQL UPDATE statement to update the record.\n3.If not, use an INSERT statement to create a new record.\nWe can do that by extending our original sample Java stored procedure. We can start be giving the Java\nclass a descriptive name, UpdatePeople . Next we include the three SQL statements we will use (SELECT,\n15Stored Procedures\nUPDATE, and INSERT). We also need to add more arguments to the procedure to provide data for all of\nthe columns in the People table. Finally, we add the query invocations and conditional logic needed. Note\nthat we queue and execute the SELECT statement first, then evaluate its results (that is, whether there is\nat least one record or not) before queuing either the UPDATE or INSERT statement.\nThe following is the completed stored procedure source code.\nimport org.voltdb.*;\npublic class UpdatePeople extends VoltProcedure {\n public final SQLStmt findCurrent = new SQLStmt(\n \" SELECT population FROM people WHERE state_num=? AND county_num=?\" \n + \" ORDER BY population;\");\n public final SQLStmt updateExisting = new SQLStmt(\n \" UPDATE people SET population=?\"\n + \" WHERE state_num=? AND county_num=?;\");\n public final SQLStmt addNew = new SQLStmt(\n \" INSERT INTO people VALUES (?,?,?,?);\");\n public VoltTable[] run( byte state_num,\n short county_num,\n String county,\n long population )\n throws VoltAbortException {\n voltQueueSQL( findCurrent, state_num, county_num );\n VoltTable[] results = voltExecuteSQL();\n if (results[0].getRowCount() > 0) { // found a record\n voltQueueSQL( updateExisting, population, \n state_num, \n county_num );\n } else { // no existing record\n voltQueueSQL( addNew, state_num, \n county_num,\n county,\n population);\n }\n return voltExecuteSQL();\n }\n}\nCompiling Java Stored Procedures\nOnce we write the Java stored procedure, we need to load it into the database and then declare it in DDL\nthe same way we do with simple stored procedures. But first, the Java class itself needs compiling. We\nuse the Java compiler, javac, to compile the procedure the same way we would any other Java program.\nWhen compiling stored procedures, the Java compiler must be able to find the VoltDB classes and methods\nimported at the beginning of the procedure. To do that, we must include the VoltDB libraries in the Java\nclasspath . The libraries are in the subfolder /voltdb where you installed VoltDB. For example, if you\n16Stored Procedures\ninstalled VoltDB in the directory /opt/voltdb , the command to compile the UpdatePeople procedure\nis the following:\n$ javac -cp \"$CLASSPATH:/opt/voltdb/voltdb/*\" UpdatePeople.java\nOnce we compile the source code into a Java class, we need to package it (and any other Java stored pro-\ncedures and classes the database uses) into a Jar file and load it into the database. 
Jar files are a standard\nformat for compressing and packaging Java files. You use the jar command to create a Jar file, specifying\nthe name of the Jar file to create and the files to include. For example, you can package the Update-\nPeople.class file you created in the previous step into a Jar file named storedprocs.jar with\nthe following command:\n$ jar cvf storedprocs.jar *.class\nOnce you package the stored procedures into a Jar file, you can then load them into the database using the\nsqlcmd load classes directive. For example:\n$ sqlcmd\n1> load classes storedprocs.jar;\nFinally, we can declare the stored procedure in our schema, in much the same way simple stored procedures\nare declared. But this time we use the CREATE PROCEDURE FROM CLASS statement, specifying the\nclass name rather than the SQL query. We can also partition the procedure on the People table, since all of\nthe queries are constrained to a specific value of State_num, the partitioning column. Here is the statement\nwe add to the schema.\nCREATE PROCEDURE \n PARTITION ON TABLE people COLUMN state_num\n FROM CLASS UpdatePeople;\nNotice that you do not need to specify the name of the procedure after \"CREATE PROCEDURE\" because,\nunlike simple stored procedures, the CREATE PROCEDURE FROM CLASS statement takes the proce-\ndure name from the name of the class; in this case, UpdatePeople .\nGo ahead and enter the CREATE PROCEDURE FROM CLASS statement at the sqlcmd prompt to bring\nyour database up to date:\n$ sqlcmd\n1> CREATE PROCEDURE \n2> PARTITION ON TABLE people COLUMN state_num\n3> FROM CLASS UpdatePeople;\nPung it All Together\nOK. Now we have a Java stored procedure and an updated schema. We are ready to try them out.\nObviously, we don't want to invoke our new procedure manually for each record in the People table. We\ncould write a program to do it for us. Fortunately, there is a program already available that we can use.\nThe csvloader command normally uses the default INSERT procedures to load data into a table. However,\nyou can specify an different procedure if you wish. So we can use csvloader to invoke our new procedure\nto update the database with every record in the data file.\nFirst we must filter the data to the columns we need. We use the same shell commands we used to create\nthe initial input file, except we switch to selecting the column with data for the 2011 estimate rather than\n17Stored Procedures\nthe actual census results. 
We can save this file as data/people2011.txt (which is included with\nthe source files):\n$ grep -v \"^040,\" data/CO-EST2011-Alldata.csv \\\n| cut --delimiter=\",\" --fields=4,5,7,11 > data/people2011.txt\nBefore we update the database, let's just check to see which are the two counties with the smallest pop-\nulation:\n$ sqlcmd\nSQL Command :: localhost:21212\n1> SELECT TOP 2 county, abbreviation, population\n2> FROM people,states WHERE people.state_num=states.state_num\n3> ORDER BY population ASC;\nCOUNTY ABBREVIATION POPULATION \n--------------- ------------- -----------\nLoving County TX 82\nKalawao County HI 90\n(2 row(s) affected)\nNow we can run csvloader to update the database, using the -p flag indicating that we are specifying a\nstored procedure name rather than a table name:\n$ csvloader --skip 1 --file data/people2011.txt \\\n -p UpdatePeople\nAnd finally, we can check to see the results of the update by repeating our earlier query:\n$ sqlcmd\nSQL Command :: localhost:21212\n1> SELECT TOP 2 county, abbreviation, population\n2> FROM people,states WHERE people.state_num=states.state_num\n3> ORDER BY population ASC;\nCOUNTY ABBREVIATION POPULATION \n--------------- ------------- -----------\nKalawao County HI 90\nLoving County TX 94\n(2 row(s) affected)\nAha! In fact, the estimates show that Loving County, Texas is growing and is no longer the smallest!\n18Part 6: Client Applicaons\nWe now have a working sample database with data. We even wrote a stored procedure demonstrating how\nto update the data. To run the stored procedure we used the pre-existing csvloader utility. However, most\napplications require more logic than a single stored procedure. Understanding how to integrate calls to the\ndatabase into your client applications is key to producing a complete business solution, In this lesson, we\nexplain how to interact with VoltDB from client applications.\nVoltDB provides client libraries in a number of different programming languages, each with their own\nunique syntax, supported datatypes, and capabilities. However, the general process for calling VoltDB\nfrom client applications is the same no matter what programming language you use:\n1.Create a client connection to the database.\n2.Make one of more calls to stored procedures and interpret their results.\n3.Close the connection when you are done.\nThis lesson will show you how to perform these steps in several different languages.\nMaking the Sample Application Interactive\nAs interesting as the population and location information is, it isn't terribly dynamic. Population does not\nchange that quickly and locations even less so. Creating an interactive application around this data alone\nis difficult. However, if we add just one more layer of data things get interesting.\nThe United States National Weather Service (part of the Department of Commerce) issues notices describ-\ning dangerous weather conditions. These alerts are available online in XML format and include the state\nand county FIPS numbers of the areas affected by each weather advisory. This means it is possible to load\nweather advisories correlated to the same locations for which we have population and elevation data. 
Not only is it possible to list the weather alerts for a given state and county, we could also determine which events have the highest impact, in terms of population affected.

Designing the Solution

To make use of this new data, we can build a solution composed of two separate applications:

• One to load the weather advisory data
• Another to fetch the alerts for a specific location

This matches the natural breakdown of activities, since loading the data can be repeated periodically — every five or ten minutes, say — to ensure the database has the latest information. Whereas fetching the alerts would normally be triggered by a user request.

At any given time, there are only a few hundred weather alerts, and the alerts are updated only every 5-10 minutes on the NWS web site. Because it is a small data set updated infrequently, the alerts would normally be a good candidate for a replicated table. However, in this case, there can be — and usually are — multiple state/county pairs associated with each alert. Also, performance of user requests to look up alerts for a specific state and county could be critically important depending on the volume and use of that function within the business solution.

So we can normalize the data into two separate tables: nws_event for storing general information about the alerts and local_event which correlates each alert (identified by a unique ID) to the state and county it applies to. This second table can be partitioned on the same column, state_num, as the towns and people tables. The new tables and associated indexes look like this:

CREATE TABLE nws_event (
   id VARCHAR(256) NOT NULL,
   type VARCHAR(128),
   severity VARCHAR(128),
   summary VARCHAR(1024),
   starttime TIMESTAMP,
   endtime TIMESTAMP,
   updated TIMESTAMP,
   PRIMARY KEY (id)
);
CREATE TABLE local_event (
   state_num TINYINT NOT NULL,
   county_num SMALLINT NOT NULL,
   id VARCHAR(256) NOT NULL
);
CREATE INDEX local_event_idx ON local_event (state_num, county_num);
CREATE INDEX nws_event_idx ON nws_event (id);

PARTITION TABLE local_event ON COLUMN state_num;

It is possible to add the new table declarations to the existing schema file. However, it is also possible to load multiple separate schema files into the database using sqlcmd, as long as the DDL statements don't overlap or conflict. So to help organize our source files, we can create the new table declarations in a separate schema file, weather.sql. We will also need some new stored procedures, so we won't load the new DDL statements right now. But you can view the weather.sql file in your tutorial directory to see the new table declarations.

Designing the Stored Procedures for Data Access

Having defined the schema, we can now define the stored procedures that the client applications need. The first application, which loads the weather alerts, needs two stored procedures:

• FindAlert — to determine if a given alert already exists in the database
• LoadAlert — to insert the information into both the nws_event and local_event tables

The first stored procedure is a simple SQL query based on the id column and can be defined in the schema. The second procedure needs to create a record in the replicated table nws_event and then as many records in local_event as needed. Additionally, the input file lists the state and county FIPS numbers as a string of six-digit values separated by spaces rather than as separate fields. As a result, the second procedure must be written in Java so it can queue multiple queries and decipher the input values before using them as query arguments. You can find the code for this stored procedure in the file LoadAlert.java in the tutorial directory.
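The tutorial ships the real implementation, but a rough sketch of what such a multi-statement Java procedure can look like is shown here. The class body, the TO_TIMESTAMP conversion, and the way each six-digit FIPS code is split into state and county numbers are illustrative assumptions, not the tutorial's actual LoadAlert code:

import org.voltdb.*;

// Hypothetical sketch in the spirit of LoadAlert; details differ from the real file.
public class LoadAlertSketch extends VoltProcedure {

    // One insert for the replicated summary table...
    public final SQLStmt insertEvent = new SQLStmt(
        "INSERT INTO nws_event VALUES (?, ?, ?, ?, " +
        "TO_TIMESTAMP(MILLISECOND, ?), TO_TIMESTAMP(MILLISECOND, ?), " +
        "TO_TIMESTAMP(MILLISECOND, ?));");

    // ...and one per state/county pair for the partitioned table.
    public final SQLStmt insertLocal = new SQLStmt(
        "INSERT INTO local_event (state_num, county_num, id) VALUES (?, ?, ?);");

    public long run(String id, String type, String severity, String summary,
                    long start, long end, long updated, String fipsList)
            throws VoltAbortException {

        voltQueueSQL(insertEvent, id, type, severity, summary, start, end, updated);

        // The FIPS column is a space-separated list of six-digit codes; the digit
        // layout assumed here (state digits then county digits) is for illustration only.
        for (String fips : fipsList.trim().split("\\s+")) {
            byte stateNum   = Byte.parseByte(fips.substring(1, 3));
            short countyNum = Short.parseShort(fips.substring(3, 6));
            voltQueueSQL(insertLocal, stateNum, countyNum, id);
        }
        voltExecuteSQL(true);   // run the queued batch within the procedure's transaction
        return 0;
    }
}

The key pattern is that voltQueueSQL() can queue any number of statements and voltExecuteSQL() runs the whole batch within the procedure's single transaction.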
These procedures are not partitioned because they access the replicated table nws_event and — in the case of the second procedure — must insert records into the partitioned table local_event using multiple different partitioning column values.

Finally, we also need a stored procedure to retrieve the alerts associated with a specific state and county. In this case, we can partition the procedure based on the state_num field. This last procedure is called GetAlertsByLocation.

The following procedure declarations complete the weather.sql schema file:

CREATE PROCEDURE FindAlert AS
   SELECT id, updated FROM nws_event
   WHERE id = ?;
CREATE PROCEDURE FROM CLASS LoadAlert;
CREATE PROCEDURE GetAlertsByLocation 
   PARTITION ON TABLE local_event COLUMN state_num
   AS SELECT w.id, w.summary, w.type, w.severity,
         w.starttime, w.endtime 
      FROM nws_event as w, local_event as l
      WHERE l.id=w.id and 
         l.state_num=? and l.county_num = ? and
         w.endtime > TO_TIMESTAMP(MILLISECOND,?)
      ORDER BY w.endtime;

Now that the stored procedures are written and the additional schema file created, we can compile the Java stored procedure, package both it and the UpdatePeople procedure from Part Five into a Jar file, and load both the procedures and schema into the database. Note that you must load the stored procedures first, so the database can find the class file when it processes the CREATE PROCEDURE FROM CLASS statement in the schema:

$ javac -cp "$CLASSPATH:/opt/voltdb/voltdb/*" LoadAlert.java
$ jar cvf storedprocs.jar *.class
$ sqlcmd
1> load classes storedprocs.jar;
2> file weather.sql;

Creating the LoadWeather Client Application

The goal of the first client application, LoadWeather, is to read the weather alerts from the National Weather Service and load them into the database. The basic program logic is:

1. Read and parse the NWS alerts feed.
2. For each alert, first check if it already exists in the database using the FindAlert procedure.
   • If yes, move on.
   • If no, insert the alert using the LoadAlert procedure.

Since this application will be run periodically, we should write it in a programming language that allows for easy parsing of XML and can be run from the command line. Python meets these requirements, so we will use it for the example application.

The first task for the client application is to include all the libraries we need. In this case we need the VoltDB client library and standard Python libraries for input/output and parsing XML. The start of our Python program looks like this:

import sys
from xml.dom.minidom import parseString
from voltdbclient import *

The beginning of the program also contains code to read and parse the XML from standard input and define some useful functions. You can find this in the program LoadWeather.py in the tutorial directory.

More importantly, we must, as mentioned before, create a client connection. In Python this is done by creating an instance of the FastSerializer:

client = FastSerializer("localhost", 21212)

In Python, we must also declare any stored procedures we intend to use.
In this case, we must declare FindAlert and LoadAlert:

finder = VoltProcedure( client, "FindAlert", [
    FastSerializer.VOLTTYPE_STRING,
] )
loader = VoltProcedure( client, "LoadAlert", [
    FastSerializer.VOLTTYPE_STRING,
    FastSerializer.VOLTTYPE_STRING,
    FastSerializer.VOLTTYPE_STRING,
    FastSerializer.VOLTTYPE_STRING,
    FastSerializer.VOLTTYPE_STRING,
    FastSerializer.VOLTTYPE_STRING,
    FastSerializer.VOLTTYPE_STRING,
    FastSerializer.VOLTTYPE_STRING
] )

The bulk of the work of the application is a set of loops that walk through each alert in the XML structure, checking if it already exists in the database and, if not, adding it. Again, the code for parsing the XML can be found in the tutorial directory if you are interested. But the code for calling the VoltDB stored procedures is the following:

# Check to see if the alert is already in the database.
response = finder.call([ id ])
if (response.tables):
    if (response.tables[0].tuples):
        # Existing alert
        cOld += 1
    else:
        # New alert
        response = loader.call([ id, wtype, severity, summary,
                                 starttime, endtime, updated, fips])
        if response.status == 1:
            cLoaded += 1

Note how the application uses the response from the procedures in two different ways:

• The response from FindAlert (finder) is used to check if any records were returned. If so, the alert already exists in the database.
• The response from LoadAlert (loader) is used to verify the status of the call. If the return status is one, or success, then we know the alert was successfully added to the database.

There is additional information in the procedure response besides just a status code and the data returned by the queries. But LoadWeather shows two of the most commonly used components.

The last step, once all the alerts are processed, is to close the connection to the database:

client.close()

Running the LoadWeather Application

Because Python is a scripting language, you do not need to compile your code before running it. However, you do need to tell Python where to find any custom libraries, such as the VoltDB client. Simply add the location of the VoltDB client library to the environment variable PYTHONPATH. For example, if VoltDB is installed in your home directory as the folder ~/voltdb, the command to use is:

$ export PYTHONPATH="$HOME/voltdb/lib/python/"

Once you define PYTHONPATH, you are ready to run LoadWeather. Of course, you will also need weather alerts data to load. A sample file of weather data is included in the data directory of the tutorial files:

$ python LoadWeather.py < data/alerts.xml

Or you can pipe the most recent alerts directly from the NWS web site:

$ curl https://alerts.weather.gov/cap/us.php?x=0 | python LoadWeather.py

Creating the GetWeather Application

Now that the database contains weather data, we can write the second half of the solution — an application to retrieve all of the alerts associated with a specific location. In a real world example, the GetWeather application is relatively simple, consisting of a single call to the GetAlertsByLocation stored procedure for the user's current location. Run manually, one query at a time, this is not much of a test of VoltDB — or any database.
But in practice, where there can be hundreds or thousands of users running the application simultaneously, VoltDB's performance profile excels.

To demonstrate both aspects of our hypothetical solution, we can write two versions of the GetWeather application:

• A user interface showing what it looks like to the user and how easy it is to integrate VoltDB into such applications.
• A high-performance application to emulate real world loads on the database.

VoltDB in User Applications

The first example adds VoltDB access to a user application; in this case, a web interface implemented in HTML and Javascript. You can find the complete application in the /MyWeather folder in the tutorial directory. Run the application by opening the file GetWeather.html in a web browser. If you use the sample alert data included in the tutorial directory, you can look up alerts for Androscoggin County in Maine to see what the warnings look like to the user.

Most of the application revolves around the user interface, including HTML, CSS, and Javascript code to display the initial form and format the results. Only a very small part of the code is related to VoltDB access.

In fact, for applications like this VoltDB has simplified programming interfaces that do not require the explicit setup and tear down of a normal database application. In this case, we can use the JSON interface, which does not require you to open and close an explicit connection. Instead, you simply call the database with your query and it returns the results in standard JavaScript Object Notation (JSON). VoltDB takes care of managing the connection, pooling queries, and so on.

So the actual database call only takes up two statements in the file GetAlerts.js:

• One to construct the URL that will be invoked, identifying the database server, the stored procedure (GetAlertsByLocation), and the parameters.
• Another to do the actual invocation and specify a callback routine.

   var url = "http://localhost:8080/api/2.0/" +
       "?Procedure=GetAlertsByLocation&Parameters=" +
       "[" + statenum + "," + countynum + "," + currenttime + "]";
   callJSON(url,"loadAlertsCallback");

Once the stored procedure completes, the callback routine is invoked. The callback routine uses the procedure response, this time in JSON format, much the same way the LoadWeather application does. First it checks the status to make sure the procedure succeeded and then it parses the results and formats them for display to the user.

function loadAlertsCallback(data) {
    if (data.status == 1) {
        var output = "";
        var results = data.results[0];
        if (results.length > 0) {
            var datarow = null;
            for (var i = 0; i < results.length; i++) {
                datarow = results[i];
                var link = datarow['ID'];
                var descr = datarow['SUMMARY'];
                var type = datarow['TYPE'];
                var severity = datarow['SEVERITY'];
                var starttime = datarow['STARTTIME']/1000;
                var endtime = datarow['ENDTIME']/1000;
                output += '<p><a href="' + link + '">' + type + '</a> '
                       + severity + '<br/>' + descr + '</p>';
            }
        } else {
            output = "<p>No current weather alerts for this location.</p>";
        }
        var panel = document.getElementById('alerts');
        panel.innerHTML = "<h3>Current Alerts</h3>" + output;
    } else {
        alert("Failure: " + data.statusstring);
    }
}

VoltDB in High Performance Applications

The second example of GetWeather emulates many users accessing the database at the same time.
It is very similar to the voter sample application that comes with the VoltDB software.

In this case we can write the application in Java. As we did before with LoadWeather, we need to import the VoltDB libraries and open a connection to the database. The code to do that in Java looks like this:

import org.voltdb.*;
import org.voltdb.client.*;

    [ . . . ]

    /*
     * Instantiate a client and connect to the database.
     */
    org.voltdb.client.Client client;
    client = ClientFactory.createClient();
    client.createConnection("localhost");

The program then does one ad hoc query. You can add ad hoc queries to your applications by calling the @AdHoc system procedure with the SQL statement you want to execute as the only argument to the call. Normally, it is best for performance to always use stored procedures, since they are precompiled and can be partitioned. However, in this case, where the query is run only once at the beginning of the program to get a list of valid county numbers per state, there is little or no negative impact.

You use system procedures such as @AdHoc just as you would your own stored procedures, identifying the procedure and any arguments in the callProcedure method. Again, we use the status in the procedure response to verify that the procedure completed successfully.

ClientResponse response = client.callProcedure("@AdHoc",
    "Select state_num, max(county_num) from people " +
    "group by state_num order by state_num;");
if (response.getStatus() != ClientResponse.SUCCESS){
    System.err.println(response.getStatusString());
    System.exit(-1);
}

The bulk of the application is a program loop that randomly assigns a state and county number and looks up weather alerts for that location using the GetAlertsByLocation stored procedure. The major difference here is that rather than calling the procedure synchronously and waiting for the results, we call the procedure asynchronously and move immediately on to the next call.

while ( currenttime - starttime < timelimit) {
    // Pick a state and county
    int s = 1 + (int)(Math.random() * maxstate);
    int c = 1 + (int)(Math.random() * states[s]);
    // Get the alerts
    client.callProcedure(new AlertCallback(),
                         "GetAlertsByLocation",
                         s, c, new Date().getTime());

    currenttime = System.currentTimeMillis();
    if (currenttime > lastreport + reportdelta) {
        DisplayInfo(currenttime-lastreport);
        lastreport = currenttime;
    }
}

Asynchronous procedure calls are very useful for high velocity applications because they ensure that the database always has queries to process. If you call stored procedures synchronously, one at a time, the database can only process queries as quickly as your application can send them. Between each stored procedure call, while your application is processing the results and setting up the next procedure call, the database is idle. Essentially, all of the parallelism and partitioning of the database are wasted while your application does other work.

By calling stored procedures asynchronously, the database can queue the queries and process multiple single-partitioned procedures in parallel, while your application sets up the next procedure invocation. In other words, both your application and the database can run at top speed.
This is also a good way to emulate multiple synchronous clients accessing the database simultaneously.

Once an asynchronous procedure call completes, the application is notified by invoking the callback procedure identified in the first argument to the callProcedure method; in this case, the AlertCallback procedure. The callback procedure then processes the procedure response, which it receives as an argument, just as your application would after a synchronous call.

static class AlertCallback implements ProcedureCallback {
    @Override
    public void clientCallback(ClientResponse response) throws Exception {
        if (response.getStatus() == ClientResponse.SUCCESS) {
            VoltTable tuples = response.getResults()[0];
            // Could do something with the results.
            // For now we throw them away since we are
            // demonstrating load on the database
            tuples.resetRowPosition();
            while (tuples.advanceRow()) {
                String id = tuples.getString(0);
                String summary = tuples.getString(1);
                String type = tuples.getString(2);
                String severity = tuples.getString(3);
                long starttime = tuples.getTimestampAsLong(4);
                long endtime = tuples.getTimestampAsLong(5);
            }
        }
        txns++;
        if ( (txns % 50000) == 0) System.out.print(".");
    }
}

Finally, once the application has run for the predefined time period (by default, five minutes) it prints out one final report and closes the connection.

// one final report
if (txns > 0 && currenttime > lastreport)
    DisplayInfo(currenttime - lastreport);
client.close();

Running the GetWeather Application

How you compile and run your client applications depends on the programming language they are written in. For Java programs, like the sample GetWeather application, you need to include the VoltDB JAR file in the class path. If you installed VoltDB as ~/voltdb, a subdirectory of your home directory, you can add the VoltDB and associated JAR files and your current working directory to the Java classpath like so:

$ export CLASSPATH="$CLASSPATH:$HOME/voltdb/voltdb/*:$HOME/voltdb/lib/*:./"

You can then compile and run the application using standard Java commands:

$ javac GetWeather.java
$ java GetWeather
Emulating read queries of weather alerts by location...
............................
1403551 Transactions in 30 seconds (46783 TPS)
...............................
1550652 Transactions in 30 seconds (51674 TPS)

As the application runs, it periodically shows metrics on the number of transactions processed. These values will vary based on the type of server, the configuration of the VoltDB cluster (sites per host, nodes in the cluster, etc.) and other environmental factors. But the numbers give you a rough feel for the performance of your database under load.

In Conclusion

How your database schema is structured, how tables and procedures are partitioned, and how your client application is designed will all impact performance. When writing client applications, although the specifics vary for each programming language, the basics are:

• Create a connection to the database.
• Call stored procedures and interpret the results. Use asynchronous calls where possible to maximize throughput.
• Close the connection when done.

Much more information about how to design your client application and how to tune it and the database for maximum performance can be found online in the Using VoltDB manual and VoltDB Performance Guide.

Part 7: Next Steps

This tutorial has introduced you to the basic features of VoltDB.
There are many more capabilities within VoltDB to optimize and enhance your database applications. To see more examples of VoltDB in action, see the sample applications that are included in the VoltDB kit in the /examples subfolder. To learn more about individual features and capabilities, see the Using VoltDB manual.
{ "category": "App Definition and Development", "file_name": "tutorial.pdf", "project_name": "VoltDB", "subcategory": "Database" }
[ { "data": "ZLIB(3) ZLIB(3)\nNAME\nzlib − compression/decompression libr ar y\nSYNOPSIS\n[see zlib.h forfull description]\nDESCRIPTION\nThezliblibrar yis a general purpose data compression libr ar y.The code is thread saf e, assuming\nthat the standard libr ar yfunctions used are thread saf e, such as memor yallocation routines .It\nprovides in-memor ycompression and decompression functions ,including integ rity checks of the\nuncompressed data. This version of the libr ar ysuppor ts only one compression method (defla-\ntion) but other algorithms ma yb ea dded later with the same stream interface.\nCompression can be done in a single step if the b uffers are large enough or can be done b y\nrepeated calls of the compression function. In the latter case ,the application must provide more\ninput and/or consume the output (providing more output space) before each call.\nThe libr ar yalso supports reading and writing files in gzip(1) (.gz) f or mat with an interface similar\nto that of stdio.\nThe libr ar ydoes not install an ysignal handler .The decoder checks the consistency of the com-\npressed data, so the libr ar yshould ne vercrash eveni nt he case of corrupted input.\nAll functions of the compression libr ar yare documented in the file zlib.h .The distr ibution source\nincludes examples of use of the libr ar yin the files test/example.c andtest/minigzip.c, as well as\nother examples in the examples/ director y .\nChanges to this version are documented in the file ChangeLog that accompanies the source.\nzlibis built in to man ylanguages and operating systems ,including but not limited to J ava, Python,\n.NET ,PHP,Per l,Ruby, Swift, and Go.\nAn exper imental package to read and write files in the .zip f or mat, wr itten on top of zlibby G illes\nVollant (info@winimage.com), is a vailable at:\nhttp://www.winimage.com/zLibDll/minizip .html and also in the contr ib/minizip director y of\nthe main zlibsource distribution.\nSEE ALSO\nThezlibwebsite can be found at:\nhttp://zlib.net/\nThe data f or mat used b ythezliblibrar yis described b yRFC (Request for Comments) 1950 to\n1952 in the files:\nhttp://tools.ietf.org/html/rfc1950 (for the zlib header and trailer f or mat)\nhttp://tools.ietf.org/html/rfc1951 (for the deflate compressed data f or mat)\nhttp://tools.ietf.org/html/rfc1952 (for the gzip header and trailer f or mat)\nMar k Nelson wrote an article about zlibforthe Jan. 1997 issue of Dr.Dobb’sJour nal; acopyof\nthe article is a vailable at:\nhttp://mar knelson.us/1997/01/01/zlib-engine/\nREPORTING PROBLEMS\nBefore reporting a problem, please chec kthezlibwebsite to v er ify that you ha ve the latest v er-\nsion of zlib;otherwise ,obtain the latest version and see if the problem still e xists.Please read\nthezlibFA Q at:\nhttp://zlib.net/zlib_faq.html\nbefore asking for help .Send questions and/or comments to zlib@gzip .org, or (for the Windo ws\nDLL version) to Gilles Vollant (info@winimage.com).\n13 Oct 2022 1ZLIB(3) ZLIB(3)\nAUTHORS AND LICENSE\nVersion 1.2.13\nCopyr ight (C) 1995-2022 Jean-loup Gailly and Mar kAdler\nThis software is provided ’as-is’, without an yexpress or implied w arranty .Inn oe vent will the\nauthors be held liable for an ydamages arising from the use of this software.\nPermission is g ranted to an yone to use this software for an ypur pose ,including commercial appli-\ncations ,and to alter it and redistribute it freely ,subject to the following restrictions:\n1. 
The or igin of this software must not be misrepresented; you must not claim that you wrote the\nor iginal software .I fyou use this software in a product, an ac knowledgment in the product doc-\numentation would be appreciated but is not required.\n2. Altered source versions must be plainly mar keda ss uch, and must not be misrepresented as\nbeing the original software.\n3. This notice ma ynot be remo vedo ra ltered from an ysource distribution.\nJean-loup Gailly Mar k Adler\njloup@gzip .org madler@alumni.caltech.edu\nThe deflate f or mat used b yzlibwasdefined b yPhil Katz. The deflate and zlibspecifications\nwere written b yL .P eter Deutsch. Thanks to all the people who reported problems and suggested\nvarious impro vements in zlib;who are too numerous to cite here.\nUNIX manual page b yR .P .C .R odgers ,U .S.N ational Libr ar y of Medicine\n(rodgers@nlm.nih.gov).\n13 Oct 2022 2" } ]
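As the DESCRIPTION section above notes, compression can be done in a single step when the buffers are large enough. The following is a minimal sketch of that one-shot usage, relying only on the documented compress(), compressBound(), and uncompress() entry points from zlib.h (build with something like g++ example.cpp -lz; the sample text and buffer sizes are arbitrary):

#include <cstring>
#include <iostream>
#include <vector>
#include <zlib.h>

int main()
{
    const char *text = "hello, hello, hello, zlib!";
    uLong srcLen = static_cast<uLong>(std::strlen(text)) + 1;  // include the NUL

    // compressBound() reports the worst-case size of the compressed output.
    std::vector<Bytef> comp(compressBound(srcLen));
    uLongf compLen = comp.size();
    if (compress(comp.data(), &compLen,
                 reinterpret_cast<const Bytef *>(text), srcLen) != Z_OK) {
        std::cerr << "compress failed\n";
        return 1;
    }

    std::vector<Bytef> decomp(srcLen);
    uLongf decompLen = decomp.size();
    if (uncompress(decomp.data(), &decompLen, comp.data(), compLen) != Z_OK) {
        std::cerr << "uncompress failed\n";
        return 1;
    }

    std::cout << srcLen << " bytes -> " << compLen << " bytes -> \""
              << reinterpret_cast<char *>(decomp.data()) << "\"\n";
    return 0;
}

Repeated-call (streaming) usage goes through zlib's deflate()/inflate() functions instead, which zlib.h documents in full.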
{ "category": "App Definition and Development", "file_name": "zlib.3.pdf", "project_name": "MySQL", "subcategory": "Database" }
[ { "data": " A Simple PDF File \n This is a small demonstration .pdf file - \n just for use in the Virtual Mechanics tutorials. More text. And more \n text. And more text. And more text. And more text. \n And more text. And more text. And more text. And more text. And more \n text. And more text. Boring, zzzzz. And more text. And more text. And \n more text. And more text. And more text. And more text. And more text. \n And more text. And more text. \n And more text. And more text. And more text. And more text. And more \n text. And more text. And more text. Even more. Continued on page 2 ... Simple PDF File 2 \n ...continued from page 1. Yet more text. And more text. And more text. \n And more text. And more text. And more text. And more text. And more \n text. Oh, how boring typing this stuff. But not as boring as watching \n paint dry. And more text. And more text. And more text. And more text. \n Boring. More, a little more text. The end, and just as well. " } ]
{ "category": "App Definition and Development", "file_name": "simple.pdf", "project_name": "Apache NiFi", "subcategory": "Streaming & Messaging" }
[ { "data": "The MPL Reference Manual2\nCopyright : Copyright c/circlecopyrtAleksey Gurtovoy and David Abrahams, 2001-2005.\nLicense: Distributed under the Boost Software License, Version 1.0. (See accompanying file\nLICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)\nRevision Date: 15th November 2004Contents\nContents 3\n1 Sequences 9\n1.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9\n1.1.1 Forward Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9\n1.1.2 Bidirectional Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10\n1.1.3 Random Access Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11\n1.1.4 Extensible Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12\n1.1.5 Front Extensible Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13\n1.1.6 Back Extensible Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14\n1.1.7 Associative Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14\n1.1.8 Extensible Associative Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16\n1.1.9 Integral Sequence Wrapper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17\n1.1.10 Variadic Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18\n1.2 Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19\n1.2.1 vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19\n1.2.2 list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20\n1.2.3 deque . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22\n1.2.4 set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22\n1.2.5 map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24\n1.2.6 range_c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25\n1.2.7 vector_c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27\n1.2.8 list_c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28\n1.2.9 set_c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29\n1.3 Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30\n1.3.1 empty_sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30\n1.3.2 filter_view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31\n1.3.3 iterator_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32\n1.3.4 joint_view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33\n1.3.5 single_view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35\n1.3.6 transform_view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36\n1.3.7 zip_view . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . 37\n1.4 Intrinsic Metafunctions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38\n1.4.1 at . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39\n1.4.2 at_c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41\n1.4.3 back . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42\n1.4.4 begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43\n1.4.5 clear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44\n1.4.6 empty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45\n1.4.7 end . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46CONTENTS CONTENTS 4\n1.4.8 erase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47\n1.4.9 erase_key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49\n1.4.10 front . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50\n1.4.11 has_key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52\n1.4.12 insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53\n1.4.13 insert_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55\n1.4.14 is_sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56\n1.4.15 key_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57\n1.4.16 order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59\n1.4.17 pop_back . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60\n1.4.18 pop_front . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61\n1.4.19 push_back . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62\n1.4.20 push_front . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64\n1.4.21 sequence_tag . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65\n1.4.22 size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66\n1.4.23 value_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67\n2 Iterators 69\n2.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69\n2.1.1 Forward Iterator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69\n2.1.2 Bidirectional Iterator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70\n2.1.3 Random Access Iterator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71\n2.2 Iterator Metafunctions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72\n2.2.1 advance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72\n2.2.2 distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . 74\n2.2.3 next . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75\n2.2.4 prior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76\n2.2.5 deref . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77\n2.2.6 iterator_category . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78\n3 Algorithms 81\n3.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81\n3.1.1 Inserter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81\n3.1.2 Reversible Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82\n3.2 Inserters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84\n3.2.1 back_inserter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84\n3.2.2 front_inserter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85\n3.2.3 inserter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86\n3.3 Iteration Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87\n3.3.1 fold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88\n3.3.2 iter_fold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89\n3.3.3 reverse_fold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90\n3.3.4 reverse_iter_fold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92\n3.3.5 accumulate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94\n3.4 Querying Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95\n3.4.1 find . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95\n3.4.2 find_if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96\n3.4.3 contains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97\n3.4.4 count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98\n3.4.5 count_if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99\n3.4.6 lower_bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100\n3.4.7 upper_bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101\n3.4.8 min_element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103\nRevision Date: 15th November 20045 CONTENTS CONTENTS\n3.4.9 max_element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104\n3.4.10 equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105\n3.5 Transformation Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106\n3.5.1 copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106\n3.5.2 copy_if . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . 107\n3.5.3 transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109\n3.5.4 replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111\n3.5.5 replace_if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112\n3.5.6 remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114\n3.5.7 remove_if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115\n3.5.8 unique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116\n3.5.9 partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118\n3.5.10 stable_partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119\n3.5.11 sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121\n3.5.12 reverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123\n3.5.13 reverse_copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124\n3.5.14 reverse_copy_if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125\n3.5.15 reverse_transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127\n3.5.16 reverse_replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129\n3.5.17 reverse_replace_if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130\n3.5.18 reverse_remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132\n3.5.19 reverse_remove_if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133\n3.5.20 reverse_unique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134\n3.5.21 reverse_partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136\n3.5.22 reverse_stable_partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138\n4 Metafunctions 141\n4.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141\n4.1.1 Metafunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141\n4.1.2 Metafunction Class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142\n4.1.3 Lambda Expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143\n4.1.4 Placeholder Expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144\n4.1.5 Tag Dispatched Metafunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144\n4.1.6 Numeric Metafunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147\n4.1.7 Trivial Metafunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148\n4.2 Type Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149\n4.2.1 if_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149\n4.2.2 if_c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . 151\n4.2.3 eval_if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152\n4.2.4 eval_if_c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153\n4.3 Invocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154\n4.3.1 apply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154\n4.3.2 apply_wrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156\n4.3.3 unpack_args . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158\n4.4 Composition and Argument Binding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159\n4.4.1 Placeholders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159\n4.4.2 lambda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160\n4.4.3 bind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161\n4.4.4 quote . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164\n4.4.5 arg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166\n4.4.6 protect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167\n4.5 Arithmetic Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169\n4.5.1 plus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169\nRevision Date: 15th November 2004CONTENTS CONTENTS 6\n4.5.2 minus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170\n4.5.3 times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172\n4.5.4 divides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173\n4.5.5 modulus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175\n4.5.6 negate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176\n4.6 Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177\n4.6.1 less . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177\n4.6.2 less_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179\n4.6.3 greater . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180\n4.6.4 greater_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181\n4.6.5 equal_to . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183\n4.6.6 not_equal_to . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184\n4.7 Logical Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185\n4.7.1 and_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185\n4.7.2 or_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186\n4.7.3 not_ . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188\n4.8 Bitwise Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189\n4.8.1 bitand_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189\n4.8.2 bitor_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190\n4.8.3 bitxor_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192\n4.8.4 shift_left . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194\n4.8.5 shift_right . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195\n4.9 Trivial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197\n4.9.1 Trivial MetafunctionsSummary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197\n4.10 Miscellaneous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197\n4.10.1 identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197\n4.10.2 always . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198\n4.10.3 inherit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199\n4.10.4 inherit_linearly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202\n4.10.5 numeric_cast . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203\n4.10.6 min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205\n4.10.7 max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206\n4.10.8 sizeof_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207\n5 Data Types 209\n5.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209\n5.1.1 Integral Constant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209\n5.2 Numeric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210\n5.2.1 bool_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210\n5.2.2 int_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211\n5.2.3 long_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212\n5.2.4 size_t . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213\n5.2.5 integral_c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214\n5.3 Miscellaneous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216\n5.3.1 pair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216\n5.3.2 empty_base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217\n5.3.3 void_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218\n6 Macros 219\n6.1 Asserts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
219
6.1.1 BOOST_MPL_ASSERT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
6.1.2 BOOST_MPL_ASSERT_MSG . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
6.1.3 BOOST_MPL_ASSERT_NOT . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
6.1.4 BOOST_MPL_ASSERT_RELATION . . . . . . . . . . . . . . . . . . . . . . . . 223
6.2 Introspection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.2.1 BOOST_MPL_HAS_XXX_TRAIT_DEF . . . . . . . . . . . . . . . . . . . . . . . 224
6.2.2 BOOST_MPL_HAS_XXX_TRAIT_NAMED_DEF . . . . . . . . . . . . . . . . . . . . 225
6.3 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
6.3.1 BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS . . . . . . . . . . . . . . . . . . 227
6.3.2 BOOST_MPL_CFG_NO_HAS_XXX . . . . . . . . . . . . . . . . . . . . . . . . . 228
6.3.3 BOOST_MPL_LIMIT_METAFUNCTION_ARITY . . . . . . . . . . . . . . . . . . . . 228
6.3.4 BOOST_MPL_LIMIT_VECTOR_SIZE . . . . . . . . . . . . . . . . . . . . . . . . 229
6.3.5 BOOST_MPL_LIMIT_LIST_SIZE . . . . . . . . . . . . . . . . . . . . . . . . . 229
6.3.6 BOOST_MPL_LIMIT_SET_SIZE . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6.3.7 BOOST_MPL_LIMIT_MAP_SIZE . . . . . . . . . . . . . . . . . . . . . . . . . . 231
6.3.8 BOOST_MPL_LIMIT_UNROLLING . . . . . . . . . . . . . . . . . . . . . . . . . 232
6.4 Broken Compiler Workarounds . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
6.4.1 BOOST_MPL_AUX_LAMBDA_SUPPORT . . . . . . . . . . . . . . . . . . . . . . . 232
7 Terminology 235
8 Categorized Index 237
8.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
8.2 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
9 Acknowledgements 243
Bibliography 245

Chapter 1  Sequences

Compile-time sequences of types are one of the basic concepts of C++ template metaprogramming. Differences in the types of objects being manipulated are the most common point of variability of similar, but not identical, designs, and these are a direct target for metaprogramming. Templates were originally designed to address this exact problem. However, without predefined mechanisms for representing and manipulating sequences of types as opposed to standalone template parameters, high-level template metaprogramming is severely limited in its capabilities.

The MPL recognizes the importance of type sequences as a fundamental building block of many higher-level metaprogramming designs by providing us with a conceptual framework for formal reasoning and understanding of sequence properties, guarantees and characteristics, as well as a first-class implementation of that framework — a wealth of tools for concise, convenient, conceptually precise and efficient sequence manipulation.

1.1 Concepts

The taxonomy of sequence concepts in MPL parallels the taxonomy of the MPL Iterators, with two additional classification dimensions: extensibility and associativeness.

1.1.1 Forward Sequence

Description

A Forward Sequence is an MPL concept representing a compile-time sequence of elements. Sequence elements are types, and are accessible through Iterators. The begin and end metafunctions provide iterators delimiting the range of the sequence elements. A sequence guarantees that its elements are arranged in a definite, but possibly unspecified, order. Every MPL sequence is a Forward Sequence.

Definitions

— The size of a sequence is the number of elements it contains. The size is a nonnegative number.
— A sequence is empty if its size is zero.

Expression requirements

For any Forward Sequence s the following expressions must be valid:

Expression        Type                        Complexity
begin<s>::type    Forward Iterator            Amortized constant time
end<s>::type      Forward Iterator            Amortized constant time
size<s>::type     Integral Constant           Unspecified
empty<s>::type    Boolean Integral Constant   Constant time
front<s>::type    Any type                    Amortized constant time

Expression semantics

Expression        Semantics
begin<s>::type    An iterator to the first element of the sequence; see begin.
end<s>::type      A past-the-end iterator to the sequence; see end.
size<s>::type     The size of the sequence; see size.
empty<s>::type    A boolean Integral Constant c such that c::value == true if and only if the sequence is empty; see empty.
front<s>::type    The first element in the sequence; see front.

Invariants

For any Forward Sequence s the following invariants always hold:

— [begin<s>::type, end<s>::type) is always a valid range.
— An algorithm that iterates through the range [begin<s>::type, end<s>::type) will pass through every element of s exactly once.
— begin<s>::type is identical to end<s>::type if and only if s is empty.
— Two different iterations through s will access its elements in the same order.

Models

— vector
— map
— range_c
— iterator_range
— filter_view

See also

Sequences, Bidirectional Sequence, Forward Iterator, begin/end, size, empty, front

1.1.2 Bidirectional Sequence

Description

A Bidirectional Sequence is a Forward Sequence whose iterators model Bidirectional Iterator.

Refinement of

Forward Sequence

Expression requirements

In addition to the requirements defined in Forward Sequence, for any Bidirectional Sequence s the following must be met:

Expression        Type                     Complexity
begin<s>::type    Bidirectional Iterator   Amortized constant time
end<s>::type      Bidirectional Iterator   Amortized constant time
back<s>::type     Any type                 Amortized constant time

Expression semantics

The semantics of an expression are defined only where they differ from, or are not defined in, Forward Sequence.

Expression        Semantics
back<s>::type     The last element in the sequence; see back.
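As a concrete illustration of these expressions, here is a small compile-time sketch (an added example, not text from the manual) using mpl::vector, which is listed under Models below; it assumes a C++11 compiler for static_assert and the usual one-metafunction-per-header includes:

#include <boost/mpl/vector.hpp>
#include <boost/mpl/begin_end.hpp>
#include <boost/mpl/next_prior.hpp>
#include <boost/mpl/deref.hpp>
#include <boost/mpl/front.hpp>
#include <boost/mpl/back.hpp>
#include <boost/mpl/size.hpp>
#include <type_traits>

namespace mpl = boost::mpl;

// A three-element Bidirectional (in fact Random Access) Sequence.
typedef mpl::vector<int, char, long> seq;

// front<s>::type and back<s>::type name the first and last elements.
static_assert(std::is_same<mpl::front<seq>::type, int>::value,  "front is int");
static_assert(std::is_same<mpl::back<seq>::type,  long>::value, "back is long");

// end<s>::type is a Bidirectional Iterator, so prior<> can step backwards
// from it; dereferencing the result yields the last element again.
typedef mpl::prior<mpl::end<seq>::type>::type last_pos;
static_assert(std::is_same<mpl::deref<last_pos>::type, long>::value,
              "deref(prior(end)) is the back element");

static_assert(mpl::size<seq>::value == 3, "three elements");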
Models

— vector
— range_c

See also

Sequences, Forward Sequence, Random Access Sequence, Bidirectional Iterator, begin/end, back

1.1.3 Random Access Sequence

Description

A Random Access Sequence is a Bidirectional Sequence whose iterators model Random Access Iterator. A random access sequence guarantees amortized constant time access to an arbitrary sequence element.

Refinement of

Bidirectional Sequence

Expression requirements

In addition to the requirements defined in Bidirectional Sequence, for any Random Access Sequence s the following must be met:

Expression        Type                     Complexity
begin<s>::type    Random Access Iterator   Amortized constant time
end<s>::type      Random Access Iterator   Amortized constant time
at<s,n>::type     Any type                 Amortized constant time

Expression semantics

Semantics of an expression is defined only where it differs from, or is not defined in, Bidirectional Sequence.

Expression        Semantics
at<s,n>::type     The nth element from the beginning of the sequence; see at.

Models

— vector
— range_c

See also

Sequences, Bidirectional Sequence, Extensible Sequence, Random Access Iterator, begin/end, at

1.1.4 Extensible Sequence

Description

An Extensible Sequence is a sequence that supports insertion and removal of elements. Extensibility is orthogonal to sequence traversal characteristics.

Expression requirements

For any Extensible Sequence s, its iterators pos and last, Forward Sequence r, and any type x, the following expressions must be valid:

Expression                     Type                  Complexity
insert<s,pos,x>::type          Extensible Sequence   Unspecified
insert_range<s,pos,r>::type    Extensible Sequence   Unspecified
erase<s,pos>::type             Extensible Sequence   Unspecified
erase<s,pos,last>::type        Extensible Sequence   Unspecified
clear<s>::type                 Extensible Sequence   Constant time

Expression semantics

Expression                     Semantics
insert<s,pos,x>::type          A new sequence, concept-identical to s, of the following elements: [begin<s>::type, pos), x, [pos, end<s>::type); see insert.
insert_range<s,pos,r>::type    A new sequence, concept-identical to s, of the following elements: [begin<s>::type, pos), [begin<r>::type, end<r>::type), [pos, end<s>::type); see insert_range.
erase<s,pos>::type             A new sequence, concept-identical to s, of the following elements: [begin<s>::type, pos), [next<pos>::type, end<s>::type); see erase.
erase<s,pos,last>::type        A new sequence, concept-identical to s, of the following elements: [begin<s>::type, pos), [last, end<s>::type); see erase.
clear<s>::type                 An empty sequence concept-identical to s; see clear.

Models

— vector
— list

See also

Sequences, Back Extensible Sequence, insert, insert_range, erase, clear

1.1.5 Front Extensible Sequence

Description

A Front Extensible Sequence is an Extensible Sequence that supports amortized constant time insertion and removal operations at the beginning.

Refinement of

Extensible Sequence

Expression requirements

In addition to the requirements defined in Extensible Sequence, for any Front Extensible Sequence s the following must be met:

Expression               Type                        Complexity
push_front<s,x>::type    Front Extensible Sequence   Amortized constant time
pop_front<s>::type       Front Extensible Sequence   Amortized constant time
front<s>::type           Any type                    Amortized constant time

Expression semantics

The semantics of an expression are defined only where they differ from, or are not defined in, Extensible Sequence.

Expression               Semantics
push_front<s,x>::type    Equivalent to insert<s,begin<s>::type,x>::type; see push_front.
pop_front<v>::type       Equivalent to erase<s,begin<s>::type>::type; see pop_front.
front<s>::type           The first element in the sequence; see front.

Models

— vector
— list

See also

Sequences, Extensible Sequence, Back Extensible Sequence, push_front, pop_front, front

1.1.6 Back Extensible Sequence

Description

A Back Extensible Sequence is an Extensible Sequence that supports amortized constant time insertion and removal operations at the end.

Refinement of

Extensible Sequence

Expression requirements

In addition to the requirements defined in Extensible Sequence, for any Back Extensible Sequence s the following must be met:

Expression              Type                       Complexity
push_back<s,x>::type    Back Extensible Sequence   Amortized constant time
pop_back<s>::type       Back Extensible Sequence   Amortized constant time
back<s>::type           Any type                   Amortized constant time

Expression semantics

The semantics of an expression are defined only where they differ from, or are not defined in, Extensible Sequence.

Expression              Semantics
push_back<s,x>::type    Equivalent to insert<s,end<s>::type,x>::type; see push_back.
pop_back<v>::type       Equivalent to erase<s,end<s>::type>::type; see pop_back.
back<s>::type           The last element in the sequence; see back.

Models

— vector
— deque

See also

Sequences, Extensible Sequence, Front Extensible Sequence, push_back, pop_back, back

1.1.7 Associative Sequence

Description

An Associative Sequence is a Forward Sequence that allows efficient retrieval of elements based on keys. Unlike associative containers in the C++ Standard Library, MPL associative sequences have no associated ordering relation. Instead, type identity is used to impose an equivalence relation on keys, and the order in which sequence elements are traversed during iteration is left unspecified.

Definitions

— A key is a part of the element type used to identify and retrieve the element within the sequence.
— A value is a part of the element type retrieved from the sequence by its key.

Expression requirements

In the following table and subsequent specifications, s is an Associative Sequence, x is a sequence element, and k and def are arbitrary types.

In addition to the requirements defined in Forward Sequence, the following must be met:

Expression                 Type                        Complexity
has_key<s,k>::type         Boolean Integral Constant   Amortized constant time
count<s,k>::type           Integral Constant           Amortized constant time
order<s,k>::type           Integral Constant or void_  Amortized constant time
at<s,k>::type              Any type                    Amortized constant time
at<s,k,def>::type          Any type                    Amortized constant time
key_type<s,x>::type        Any type                    Amortized constant time
value_type<s,x>::type      Any type                    Amortized constant time

Expression semantics

The semantics of an expression are defined only where they differ from, or are not defined in, Forward Sequence.

Expression                 Semantics
has_key<s,k>::type         A boolean Integral Constant c such that c::value == true if and only if there is one or more elements with the key k in s; see has_key.
count<s,k>::type           The number of elements with the key k in s; see count.
order<s,k>::type           A unique unsigned Integral Constant associated with the key k in the sequence s; see order.
at<s,k>::type
at<s,k,def>::type          The first element associated with the key k in the sequence s; see at.
key_type<s,x>::type        The key part of the element x that would be
used to identify xins; see\nkey_type .\nvalue_type<s,x>::type Thevaluepartoftheelement xthatwouldbeusedfor xins;see value_-\ntype.\nModels\n—set\n—map\nSee also\nSequences ,Extensible Associative Sequence ,has_key,count,order,at,key_type ,value_type\nRevision Date: 15th November 20041.1 Concepts Sequences 16\n1.1.8 Extensible Associative Sequence\nDescription\nAnExtensibleAssociativeSequence isanAssociativeSequence thatsupportsinsertionandremovalofelements. Incon-\ntrast toExtensible Sequence ,Extensible Associative Sequence does not provide a mechanism for inserting an element\nat a specific position.\nExpression requirements\nInthefollowingtableandsubsequentspecifications, sisanAssociativeSequence ,posisaniteratorinto s,and xandk\nare arbitrary types.\nIn addition to the Associative Sequence requirements, the followingmust be met:\nExpression Type Complexity\ninsert<s,x>::type Extensible Associative Sequence Amortized constant time\ninsert<s,pos,x>::type Extensible Associative Sequence Amortized constant time\nerase_key<s,k>::type Extensible Associative Sequence Amortized constant time\nerase<s,pos>::type Extensible Associative Sequence Amortized constant time\nclear<s>::type Extensible Associative Sequence Amortized constant time\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Associative Sequence .\nExpression Semantics\ninsert<s,x>::type Inserts xintos; the resulting sequence ris equivalent to sexcept that\nat< r, key_type<s,x>::type >::type\nis identical to value_type<s,x>::type ; see insert.\ninsert<s,pos,x>::type Equivalent to insert<s,x>::type ;posis ignored; see insert.\nerase_key<s,k>::type Erases elements in sassociated with the key k; the resulting sequence\nris equivalent to sexcept that has_key<r,k>::value == false ; see\nerase_key .\nerase<s,pos>::type Erases the element at a specific position; equivalent to erase_key<s,\nderef<pos>::type >::type ; see erase.\nclear<s>::type An empty sequence concept-identical to s; see clear.\nModels\n—set\n—map\nSee also\nSequences ,Associative Sequence ,insert,erase,clear\nRevision Date: 15th November 200417 Sequences 1.1 Concepts\n1.1.9 Integral Sequence Wrapper\nDescription\nAnIntegralSequenceWrapper isaclasstemplatethatprovidesaconciseinterfaceforcreatingacorrespondingsequence\nofIntegral Constant s. In particular, assuming that seqis a name of the wrapper’s underlying sequence and c1,c2,...cn\nare integral constants of an integral type Tto be stored in the sequence, the wrapper provides us with the following\nnotation:\nseq_c<T,c1,c2,...cn>\nIfseqis aVariadic Sequence ,numbered wrapper forms are also avaialable:\nseqn_c<T,c1,c2,...cn>\nExpression requirements\nIn the following table and subsequent specifications, seqis a placeholder token for the Integral Sequence Wrapper ’s\nunderlying sequence’s name.\nExpression Type Complexity\nseq_c<T,c1,c2,...cn> Forward Sequence Amortized constant time.\nseq_c<T,c1,c2,...cn>::type Forward Sequence Amortized constant time.\nseq_c<T,c1,c2,...cn>::value_type An integral type Amortized constant time.\nseqn_c<T,c1,c2,...cn> Forward Sequence Amortized constant time.\nseqn_c<T,c1,c2,...cn>::type Forward Sequence Amortized constant time.\nseqn_c<T,c1,c2,...cn>::value_type An integral type Amortized constant time.\nExpression semantics\ntypedef seq_c<T, c1,c2,... cn> s;\ntypedef seq n_c<T, c1,c2,... 
cn> s;\nSemantics: sis a sequence seqof integral constant wrappers integral_c<T, c1>,integral_c<T, c2>,\n...integral_c<T, cn>.\nPostcondition: size<s>::value == n .\ntypedef seq_c<T, c1,c2,... cn>::type s;\ntypedef seq n_c<T, c1,c2,... cn>::type s;\nSemantics: sis identical to seqn<integral_c<T, c1>,integral_c<T, c2>, ...integral_c<T, cn> >.\ntypedef seq_c<T, c1,c2,... cn>::value_type t;\ntypedef seq n_c<T, c1,c2,... cn>::value_type t;\nSemantics: is_same<t,T>::value == true .\nModels\n—vector_c\n—list_c\n—set_c\nRevision Date: 15th November 20041.1 Concepts Sequences 18\nSee also\nSequences ,Variadic Sequence ,Integral Constant\n1.1.10 Variadic Sequence\nDescription\nAVariadic Sequence is a member of a family of sequence classes with both variadicandnumbered forms. If seqis a\ngenericnameforsome VariadicSequence ,itsvariadicform allowsustospecifyasequenceof nelements t1,t2,...tn,for\nanynfrom 0 up to a preprocessor-configurable limit BOOST_MPL_LIMIT_ seq_SIZE, using the following notation:\nseq<t1,t2,...tn>\nBy contrast, each numbered sequence form accepts the exact number of elements that is encoded in the name of the\ncorresponding class template:\nseqn<t1,t2,...tn>\nFor numbered forms, there is no predefined top limit for n, aside from compiler limitations on the number of template\nparameters.\nExpression requirements\nIn the following table and subsequent specifications, seqis a placeholder token for the actual Variadic Sequence name.\nExpression Type Complexity\nseq<t1,t2,...tn> Forward Sequence Amortized constant time\nseq<t1,t2,...tn>::type Forward Sequence Amortized constant time\nseqn<t1,t2,...tn> Forward Sequence Amortized constant time\nseqn<t1,t2,...tn>::type Forward Sequence Amortized constant time\nExpression semantics\ntypedef seq< t1,t2,... tn> s;\ntypedef seq n<t1,t2,... tn> s;\nSemantics: sis a sequence of elements t1,t2,...tn.\nPostcondition: size<s>::value == n .\ntypedef seq< t1,t2,... tn>::type s;\ntypedef seq n<t1,t2,... tn>::type s;\nSemantics: sis identical to seqn<t1,t2,...tn>.\nPostcondition: size<s>::value == n .\nModels\n—vector\n—list\n—map\nRevision Date: 15th November 200419 Sequences 1.2 Classes\nSee also\nSequences ,Configuration ,Integral Sequence Wrapper\n1.2 Classes\nTheMPLprovidesalargenumberofpredefinedgeneral-purposesequenceclassescoveringmostofthetypicalmetapro-\ngramming needs out-of-box.\n1.2.1 vector\nDescription\nvectoris avariadic,random access ,extensible sequence of types that supports constant-time insertion and removal of\nelements at both ends, and linear-time insertion and removal of elements in the middle. On compilers that support the\ntypeofextension, vectoris the simplest and in many cases the most efficient sequence.\nHeader\nSequence form Header\nVariadic #include <boost/mpl/vector.hpp>\nNumbered #include <boost/mpl/vector/vector n.hpp>\nModel of\n—Variadic Sequence\n—Random Access Sequence\n—Extensible Sequence\n—Back Extensible Sequence\n—Front Extensible Sequence\nExpression semantics\nIn the following table, vis an instance of vector,posandlastare iterators into v,ris aForward Sequence ,nis an\nIntegral Constant , and xandt1,t2,...tnare arbitrary types.\nExpression Semantics\nvector< t1,t2,... tn>\nvector n<t1,t2,... tn>vectorof elements t1,t2,...tn; seeVariadic Sequence .\nvector< t1,t2,... tn>::type\nvector n<t1,t2,... 
tn>::typeIdentical to vectorn<t1,t2,...tn>; seeVariadic Sequence .\nbegin<v>::type An iterator pointing to the beginning of v; seeRandom Access Se-\nquence.\nend<v>::type Aniteratorpointingtotheendof v;seeRandomAccessSequence .\nsize<v>::type The size of v; seeRandom Access Sequence .\nempty<v>::type Aboolean IntegralConstant csuchthat c::value == true ifand\nonly if the sequence is empty; see Random Access Sequence .\nRevision Date: 15th November 20041.2 Classes Sequences 20\nExpression Semantics\nfront<v>::type The first element in v; seeRandom Access Sequence .\nback<v>::type The last element in v; seeRandom Access Sequence .\nat<v,n>::type Thenth element from the beginning of v; seeRandom Access Se-\nquence.\ninsert<v,pos,x>::type Anew vectoroffollowingelements: [ begin<v>::type ,pos),x,\n[pos,end<v>::type ); seeExtensible Sequence .\ninsert_range<v,pos,r>::type A new vectorof following elements: [ begin<v>::type ,pos),\n[begin<r>::type ,end<r>::type ) [pos,end<v>::type ); see\nExtensible Sequence .\nerase<v,pos>::type A new vectorof following elements: [ begin<v>::type ,pos),\n[next<pos>::type ,end<v>::type ); seeExtensible Sequence .\nerase<v,pos,last>::type A new vectorof following elements: [ begin<v>::type ,pos),\n[last,end<v>::type ); seeExtensible Sequence .\nclear<v>::type An empty vector; seeExtensible Sequence .\npush_back<v,x>::type A new vector of following elements: [ begin<v>::type ,\nend<v>::type ),x; seeBack Extensible Sequence .\npop_back<v>::type Anew vectoroffollowingelements: [ begin<v>::type ,prior<\nend<v>::type >::type ); seeBack Extensible Sequence .\npush_front<v,x>::type A new vector of following elements: [ begin<v>::type ,\nend<v>::type ),x; seeFront Extensible Sequence .\npop_front<v>::type A new vectorof following elements: [ next< begin<v>::type\n>::type,end<v>::type ); seeFront Extensible Sequence .\nExample\ntypedef vector<float,double,long double> floats;\ntypedef push_back<floats,int>::type types;\nBOOST_MPL_ASSERT(( is_same< at_c<types,3>::type, int > ));\nSee also\nSequences ,Variadic Sequence ,Random Access Sequence ,Extensible Sequence ,vector_c ,list\n1.2.2 list\nDescription\nAlistisavariadic,forward,extensible sequenceoftypesthatsupportsconstant-timeinsertionandremovalofelements\nat the beginning, and linear-time insertion and removal of elements at the end and in the middle.\nHeader\nSequence form Header\nVariadic #include <boost/mpl/list.hpp>\nNumbered #include <boost/mpl/list/list n.hpp>\nRevision Date: 15th November 200421 Sequences 1.2 Classes\nModel of\n—Variadic Sequence\n—Forward Sequence\n—Extensible Sequence\n—Front Extensible Sequence\nExpression semantics\nIn the following table, lis alist,posandlastare iterators into l,ris aForward Sequence , andt1,t2,...tnandxare\narbitrary types.\nExpression Semantics\nlist< t1,t2,... tn>\nlist n<t1,t2,... tn>listof elements t1,t2,...tn; seeVariadic Sequence .\nlist< t1,t2,... tn>::type\nlist n<t1,t2,... 
tn>::typeIdentical to listn<t1,t2,...tn>; seeVariadic Sequence .\nbegin<l>::type An iterator to the beginningof l; seeForward Sequence .\nend<l>::type An iterator to the end of l; seeForward Sequence .\nsize<l>::type The size of l; seeForward Sequence .\nempty<l>::type Aboolean IntegralConstant csuchthat c::value == true ifand\nonly if lis empty; see Forward Sequence .\nfront<l>::type The first element in l; seeForward Sequence .\ninsert<l,pos,x>::type A new listof following elements: [ begin<l>::type ,pos),x,\n[pos,end<l>::type ); seeExtensible Sequence .\ninsert_range<l,pos,r>::type A new listof following elements: [ begin<l>::type ,pos),\n[begin<r>::type ,end<r>::type ) [pos,end<l>::type ); see\nExtensible Sequence .\nerase<l,pos>::type A new listof following elements: [ begin<l>::type ,pos),\n[next<pos>::type ,end<l>::type ); seeExtensible Sequence .\nerase<l,pos,last>::type A new listof following elements: [ begin<l>::type ,pos),\n[last,end<l>::type ); seeExtensible Sequence .\nclear<l>::type An empty list; seeExtensible Sequence .\npush_front<l,x>::type A new listcontaining xas its first element; see Front Extensible\nSequence .\npop_front<l>::type A new listcontaining all but the first elements of lin the same\norder; see Front Extensible Sequence .\nExample\ntypedef list<float,double,long double> floats;\ntypedef push_front<floating_types,int>::type types;\nBOOST_MPL_ASSERT(( is_same< front<types>::type, int > ));\nRevision Date: 15th November 20041.2 Classes Sequences 22\nSee also\nSequences ,Variadic Sequence ,Forward Sequence ,Extensible Sequence ,vector,list_c\n1.2.3 deque\nDescription\ndequeis avariadic,random access ,extensible sequence of types that supports constant-time insertion and removal of\nelements at both ends, and linear-time insertion and removal of elements in the middle. In this implementation of the\nlibrary, dequeis a synonym for vector.\nHeader\n#include <boost/mpl/deque.hpp>\nModel of\n—Variadic Sequence\n—Random Access Sequence\n—Extensible Sequence\n—Back Extensible Sequence\n—Front Extensible Sequence\nExpression semantics\nSeevectorspecification.\nExample\ntypedef deque<float,double,long double> floats;\ntypedef push_back<floats,int>::type types;\nBOOST_MPL_ASSERT(( is_same< at_c<types,3>::type, int > ));\nSee also\nSequences ,vector,list,set\n1.2.4 set\nDescription\nsetisavariadic,associative ,extensible sequenceoftypesthatsupportsconstant-timeinsertionandremovalofelements,\nand testing for membership. A setmay contain at most one element for each key.\nHeader\nRevision Date: 15th November 200423 Sequences 1.2 Classes\nSequence form Header\nVariadic #include <boost/mpl/set.hpp>\nNumbered #include <boost/mpl/set/set n.hpp>\nModel of\n—Variadic Sequence\n—Associative Sequence\n—Extensible Associative Sequence\nExpression semantics\nIn the following table, sis an instance of set,posis an iterator into s, and x,k, andt1,t2,...tnare arbitrary types.\nExpression Semantics\nset< t1,t2,... tn>\nsetn<t1,t2,... tn>setof elements t1,t2,...tn; seeVariadic Sequence .\nset< t1,t2,... tn>::type\nsetn<t1,t2,... 
tn>::typeIdentical to setn<t1,t2,...tn>; seeVariadic Sequence .\nbegin<s>::type An iterator pointing to the beginning of s; seeAssociative Se-\nquence.\nend<s>::type An iterator pointing to the end of s; seeAssociative Sequence .\nsize<s>::type The size of s; seeAssociative Sequence .\nempty<s>::type Aboolean IntegralConstant csuchthat c::value == true ifand\nonly if sis empty; see Associative Sequence .\nfront<s>::type The first element in s; seeAssociative Sequence .\nhas_key<s,k>::type Aboolean IntegralConstant csuchthat c::value == true ifand\nonlyifthereisoneormoreelementswiththekey kins;seeAsso-\nciative Sequence .\ncount<s,k>::type The number of elements with the key kins; seeAssociative Se-\nquence.\norder<s,k>::type A unique unsigned Integral Constant associated with the key kin\ns; seeAssociative Sequence .\nat<s,k>::type\nat<s,k,def>::typeThe element associated with the key kins; seeAssociative Se-\nquence.\nkey_type<s,x>::type Identical to x; seeAssociative Sequence .\nvalue_type<s,x>::type Identical to x; seeAssociative Sequence .\ninsert<s,x>::type A new setequivalent to sexcept that\nat< t, key_type<s,x>::type >::type\nis identical to value_type<s,x>::type .\ninsert<s,pos,x>::type Equivalent to insert<s,x>::type ;posis ignored.\nerase_key<s,k>::type A new setequivalent to sexcept that has_key<t, k>::value\n== false .\nerase<s,pos>::type Equivalent to erase<s, deref<pos>::type >::type .\nclear<s>::type An empty set; see clear.\nRevision Date: 15th November 20041.2 Classes Sequences 24\nExample\ntypedef set< int,long,double,int_<5> > s;\nBOOST_MPL_ASSERT_RELATION( size<s>::value, ==, 4 );\nBOOST_MPL_ASSERT_NOT(( empty<s> ));\nBOOST_MPL_ASSERT(( is_same< at<s,int>::type, int > ));\nBOOST_MPL_ASSERT(( is_same< at<s,long>::type, long > ));\nBOOST_MPL_ASSERT(( is_same< at<s,int_<5> >::type, int_<5> > ));\nBOOST_MPL_ASSERT(( is_same< at<s,char>::type, void_ > ));\nSee also\nSequences ,Variadic Sequence ,Associative Sequence ,Extensible Associative Sequence ,set_c,map,vector\n1.2.5 map\nDescription\nmapis avariadic,associative ,extensible sequence of type pairs that supports constant-time insertion and removal of\nelements, and testing for membership. A mapmay contain at most one element for each key.\nHeader\nSequence form Header\nVariadic #include <boost/mpl/map.hpp>\nNumbered #include <boost/mpl/map/map n.hpp>\nModel of\n—Variadic Sequence\n—Associative Sequence\n—Extensible Associative Sequence\nExpression semantics\nIn the following table and subsequent specifications, mis an instance of map,posis an iterator into m,xandp1,p2,...pn\narepairs, and kis an arbitrary type.\nExpression Semantics\nmap< p1,p2,... pn>\nmapn<p1,p2,... pn>mapof elements p1,p2,...pn; seeVariadic Sequence .\nmap< p1,p2,... pn>::type\nmapn<p1,p2,... 
pn>::typeIdentical to mapn<p1,p2,...pn>; seeVariadic Sequence .\nbegin<m>::type An iterator pointing to the beginning of m; seeAssociative Se-\nquence.\nend<m>::type An iterator pointing to the end of m; seeAssociative Sequence .\nsize<m>::type The size of m; seeAssociative Sequence .\nRevision Date: 15th November 200425 Sequences 1.2 Classes\nExpression Semantics\nempty<m>::type Aboolean IntegralConstant csuchthat c::value == true ifand\nonly if mis empty; see Associative Sequence .\nfront<m>::type The first element in m; seeAssociative Sequence .\nhas_key<m,k>::type Queries the presence of elements with the key kinm; seeAssocia-\ntive Sequence .\ncount<m,k>::type The number of elements with the key kinm; seeAssociative Se-\nquence.\norder<m,k>::type A unique unsigned Integral Constant associated with the key kin\nm; seeAssociative Sequence .\nat<m,k>::type\nat<m,k,default>::typeThe element associated with the key kinm; seeAssociative Se-\nquence.\nkey_type<m,x>::type Identical to x::first ; seeAssociative Sequence .\nvalue_type<m,x>::type Identical to x::second ; seeAssociative Sequence .\ninsert<m,x>::type A new mapequivalent to mexcept that\nat< t, key_type<m,x>::type >::type\nis identical to value_type<m,x>::type .\ninsert<m,pos,x>::type Equivalent to insert<m,x>::type ;posis ignored.\nerase_key<m,k>::type A new mapequivalent to mexcept that has_key<t, k>::value\n== false .\nerase<m,pos>::type Equivalent to erase<m, deref<pos>::type >::type .\nclear<m>::type An empty map; see clear.\nExample\ntypedef map<\npair<int,unsigned>\n, pair<char,unsigned char>\n, pair<long_<5>,char[17]>\n, pair<int[42],bool>\n> m;\nBOOST_MPL_ASSERT_RELATION( size<m>::value, ==, 4 );\nBOOST_MPL_ASSERT_NOT(( empty<m> ));\nBOOST_MPL_ASSERT(( is_same< at<m,int>::type, unsigned > ));\nBOOST_MPL_ASSERT(( is_same< at<m,long_<5> >::type, char[17] > ));\nBOOST_MPL_ASSERT(( is_same< at<m,int[42]>::type, bool > ));\nBOOST_MPL_ASSERT(( is_same< at<m,long>::type, void_ > ));\nSee also\nSequences ,Variadic Sequence ,Associative Sequence ,Extensible Associative Sequence ,set,vector\n1.2.6 range_c\nSynopsis\ntemplate<\nRevision Date: 15th November 20041.2 Classes Sequences 26\ntypename T\n, T Start\n, T Finish\n>\nstruct range_c\n{\ntypedef integral_c<T,Start> start;\ntypedef integral_c<T,Finish> finish;\n//unspecified\n//...\n};\nDescription\nrange_c isasorted RandomAccessSequence ofIntegralConstant s. 
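Since it models only Random Access Sequence, the usual idiom for feeding its elements to sequence-building operations is to copy them into an extensible sequence first. A minimal sketch, assuming the library's copy algorithm and back_inserter (documented outside this chapter) and the headers used by the other examples:

typedef range_c<int,0,5> r;
typedef copy< r, back_inserter< vector<> > >::type v;
// v now holds integral_c<int,0> ... integral_c<int,4> and is extensible
typedef push_back< v, integral_c<int,5> >::type v1;
BOOST_MPL_ASSERT(( equal< v1, range_c<int,0,6> > ));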
Notethatbecauseitisnotan ExtensibleSequence ,\nsequence-buildingintrinsicmetafunctionssuchas push_front andtransformationalgorithmssuchas replace arenot\ndirectly applicable — to be able to use them, you’d first need to copy the content of the range into a more suitable\nsequence.\nHeader\n#include <boost/mpl/range_c.hpp>\nModel of\nRandom Access Sequence\nExpression semantics\nInthefollowingtable, risaninstanceof range_c,nisanIntegralConstant ,Tisanarbitraryintegraltype,and nandm\nare integral constant valuesof type T.\nExpression Semantics\nrange_c<T,n,m>\nrange_c<T,n,m>::typeA sorted Random Access Sequence of integral constant wrappers for\nthe half-open range of values [ n,m):integral_c<T,n> ,integral_-\nc<T,n+1> ,...integral_c<T,m-1> .\nbegin<r>::type Aniteratorpointingtothebeginningof r;seeRandomAccessSequence .\nend<r>::type An iterator pointing to the endof r; seeRandom Access Sequence .\nsize<r>::type The size of r; seeRandom Access Sequence .\nempty<r>::type Aboolean IntegralConstant csuchthat c::value == true ifandonly\nifris empty; see Random Access Sequence .\nfront<r>::type The first element in r; seeRandom Access Sequence .\nback<r>::type The last element in r; seeRandom Access Sequence .\nat<r,n>::type Thenthelementfromthebeginningof r;seeRandomAccessSequence .\nExample\ntypedef range_c<int,0,0> range0;\ntypedef range_c<int,0,1> range1;\nRevision Date: 15th November 200427 Sequences 1.2 Classes\ntypedef range_c<int,0,10> range10;\nBOOST_MPL_ASSERT_RELATION( size<range0>::value, ==, 0 );\nBOOST_MPL_ASSERT_RELATION( size<range1>::value, ==, 1 );\nBOOST_MPL_ASSERT_RELATION( size<range10>::value, ==, 10 );\nBOOST_MPL_ASSERT(( empty<range0> ));\nBOOST_MPL_ASSERT_NOT(( empty<range1> ));\nBOOST_MPL_ASSERT_NOT(( empty<range10> ));\nBOOST_MPL_ASSERT(( is_same< begin<range0>::type, end<range0>::type > ));\nBOOST_MPL_ASSERT_NOT(( is_same< begin<range1>::type, end<range1>::type > ));\nBOOST_MPL_ASSERT_NOT(( is_same< begin<range10>::type, end<range10>::type > ));\nBOOST_MPL_ASSERT_RELATION( front<range1>::type::value, ==, 0 );\nBOOST_MPL_ASSERT_RELATION( back<range1>::type::value, ==, 0 );\nBOOST_MPL_ASSERT_RELATION( front<range10>::type::value, ==, 0 );\nBOOST_MPL_ASSERT_RELATION( back<range10>::type::value, ==, 9 );\nSee also\nSequences ,Random Access Sequence ,vector_c ,set_c,list_c\n1.2.7 vector_c\nDescription\nvector_c isanIntegralSequenceWrapper forvector. Assuch,itsharesall vectorcharacteristicsandrequirements,\nand differs only in the way theoriginal sequence content is specified.\nHeader\nSequence form Header\nVariadic #include <boost/mpl/vector_c.hpp>\nNumbered #include <boost/mpl/vector/vector n_c.hpp>\nModel of\n—Integral Sequence Wrapper\n—Variadic Sequence\n—Random Access Sequence\n—Extensible Sequence\n—Back Extensible Sequence\n—Front Extensible Sequence\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin vector.\nRevision Date: 15th November 20041.2 Classes Sequences 28\nExpression Semantics\nvector_c<T, c1,c2,... cn>\nvector n_c<T, c1,c2,... cn>Avectorof integral constant wrappers integral_-\nc<T,c1>,integral_c<T, c2>, ... integral_c<T, cn>;\nseeIntegral Sequence Wrapper .\nvector_c<T, c1,c2,... cn>::type\nvector n_c<T, c1,c2,... cn>::typeIdentical to vectorn< integral_c<T, c1>,integral_-\nc<T,c2>, ...integral_c<T, cn> >; seeIntegral Sequence\nWrapper.\nvector_c<T, c1,c2,... cn>::value_type\nvector n_c<T, c1,c2,... 
cn>::value_typeIdentical to T; seeIntegral Sequence Wrapper .\nExample\ntypedef vector_c<int,1,2,3,5,7,12,19,31> fibonacci;\ntypedef push_back<fibonacci,int_<50> >::type fibonacci2;\nBOOST_MPL_ASSERT_RELATION( front<fibonacci2>::type::value, ==, 1 );\nBOOST_MPL_ASSERT_RELATION( back<fibonacci2>::type::value, ==, 50 );\nSee also\nSequences ,Integral Sequence Wrapper ,vector,integral_c ,set_c,list_c,range_c\n1.2.8 list_c\nDescription\nlist_cis anIntegral Sequence Wrapper forlist. As such, it shares all listcharacteristics and requirements, and\ndiffers only in the way the original sequence content is specified.\nHeader\nSequence form Header\nVariadic #include <boost/mpl/list_c.hpp>\nNumbered #include <boost/mpl/list/list n_c.hpp>\nModel of\n—Integral Sequence Wrapper\n—Variadic Sequence\n—Forward Sequence\n—Extensible Sequence\n—Front Extensible Sequence\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin list.\nRevision Date: 15th November 200429 Sequences 1.2 Classes\nExpression Semantics\nlist_c<T, c1,c2,... cn>\nlist n_c<T, c1,c2,... cn>Alistof integral constant wrappers integral_c<T, c1>,\nintegral_c<T, c2>, ... integral_c<T, cn>; seeIntegral\nSequence Wrapper .\nlist_c<T, c1,c2,... cn>::type\nlist n_c<T, c1,c2,... cn>::typeIdentical to listn< integral_c<T, c1>,integral_-\nc<T,c2>, ... integral_c<T, cn> >; seeIntegral Sequence\nWrapper.\nlist_c<T, c1,c2,... cn>::value_type\nlist n_c<T, c1,c2,... cn>::value_typeIdentical to T; seeIntegral Sequence Wrapper .\nExample\ntypedef list_c<int,1,2,3,5,7,12,19,31> fibonacci;\ntypedef push_front<fibonacci,int_<1> >::type fibonacci2;\nBOOST_MPL_ASSERT_RELATION( front<fibonacci2>::type::value, ==, 1 );\nSee also\nSequences ,Integral Sequence Wrapper ,list,integral_c ,vector_c ,set_c,range_c\n1.2.9 set_c\nDescription\nset_cis anIntegral Sequence Wrapper forset. As such, it shares all setcharacteristics and requirements, and differs\nonly in the way the original sequence content is specified.\nHeader\nSequence form Header\nVariadic #include <boost/mpl/set_c.hpp>\nNumbered #include <boost/mpl/set/set n_c.hpp>\nModel of\n—Variadic Sequence\n—Associative Sequence\n—Extensible Associative Sequence\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin set.\nExpression Semantics\nset_c<T, c1,c2,... cn>\nsetn_c<T, c1,c2,... cn>Asetof integral constant wrappers integral_c<T, c1>,\nintegral_c<T, c2>, ... integral_c<T, cn>; seeIntegral\nSequence Wrapper .\nRevision Date: 15th November 20041.3 Views Sequences 30\nExpression Semantics\nset_c<T, c1,c2,... cn>::type\nsetn_c<T, c1,c2,... cn>::typeIdentical to setn< integral_c<T, c1>,integral_-\nc<T,c2>, ... integral_c<T, cn> >; seeIntegral Sequence\nWrapper.\nset_c<T, c1,c2,... cn>::value_type\nsetn_c<T, c1,c2,... cn>::value_typeIdentical to T; seeIntegral Sequence Wrapper .\nExample\ntypedef set_c< int,1,3,5,7,9 > odds;\nBOOST_MPL_ASSERT_RELATION( size<odds>::value, ==, 5 );\nBOOST_MPL_ASSERT_NOT(( empty<odds> ));\nBOOST_MPL_ASSERT(( has_key< odds, integral_c<int,5> > ));\nBOOST_MPL_ASSERT_NOT(( has_key< odds, integral_c<int,4> > ));\nBOOST_MPL_ASSERT_NOT(( has_key< odds, integral_c<int,15> > ));\nSee also\nSequences ,Integral Sequence Wrapper ,set,integral_c ,vector_c ,list_c,range_c\n1.3 Views\nAviewis a sequence adaptor delivering an altered presentation of one or more underlying sequences. Views are lazy,\nmeaningthattheirelementsareonlycomputedondemand. 
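For instance, a transformation wrapped in a view is never instantiated for elements that a preceding filter rejects. A minimal sketch, assuming the filter_view and transform_view class templates described below, together with Boost type traits used purely for illustration:

typedef vector<int,int*,char,long*> types;
typedef transform_view<
      filter_view< types, boost::is_pointer<_> >
    , boost::remove_pointer<_>
    > pointees;
// remove_pointer is applied only to int* and long*; the non-pointer
// elements are rejected before the transformation is ever computed
BOOST_MPL_ASSERT(( equal< pointees, vector<int,long> > ));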
Similarlytotheshort-circuit logicaloperations andeval_-\nif, views make it possible to avoid premature errors and inefficiencies from computations whose results will never be\nused. When approached with views in mind, many algorithmic problems can be solved in a simpler, more conceptually\nprecise, more expressive way.\n1.3.1 empty_sequence\nSynopsis\nstruct empty_sequence\n{\n//unspecified\n//...\n};\nDescription\nRepresents a sequence containing no elements.\nHeader\n#include <boost/mpl/empty_sequence.hpp>\nRevision Date: 15th November 200431 Sequences 1.3 Views\nExpression semantics\nThesemanticsofanexpressionaredefinedonlywheretheydifferfrom,orarenotdefinedin RandomAccessSequence .\nIn the following table, sis an instance of empty_sequence .\nExpression Semantics\nempty_sequence An empty Random Access Sequence .\nsize<s>::type size<s>::value == 0 ; seeRandom Access Sequence .\nExample\ntypedef begin<empty_sequence>::type first;\ntypedef end<empty_sequence>::type last;\nBOOST_MPL_ASSERT(( is_same<first,last> ));\nBOOST_MPL_ASSERT_RELATION( size<empty_sequence>::value, ==, 0 );\ntypedef transform_view<\nempty_sequence\n, add_pointer<_>\n> empty_view;\nBOOST_MPL_ASSERT_RELATION( size<empty_sequence>::value, ==, 0 );\nSee also\nSequences ,Views,vector,list,single_view\n1.3.2 filter_view\nSynopsis\ntemplate<\ntypename Sequence\n, typename Pred\n>\nstruct filter_view\n{\n//unspecified\n//...\n};\nDescription\nA view into a subset of Sequence ’s elements satisfying the predicate Pred.\nHeader\n#include <boost/mpl/filter_view.hpp>\nRevision Date: 15th November 20041.3 Views Sequences 32\nModel of\n—Forward Sequence\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to wrap.\nPred UnaryLambda Expression A filtering predicate.\nExpression semantics\nSemantics of an expression is defined only where it differs from, or is not defined in Forward Sequence .\nIn the following table, vis an instance of filter_view ,sis an arbitrary Forward Sequence ,predis an unary Lambda\nExpression .\nExpression Semantics\nfilter_view<s,pred>\nfilter_view<s,pred>::typeA lazyForward Sequence sequence of all the elements in the\nrange [ begin<s>::type ,end<s>::type ) that satisfy the predi-\ncatepred.\nsize<v>::type The size of v; size<v>::value == count_-\nif<s,pred>::value ; linear complexity; see Forward Sequence .\nExample\nFind the largest floating typein a sequence.\ntypedef vector<int,float,long,float,char[50],long double,char> types;\ntypedef max_element<\ntransform_view< filter_view< types,boost::is_float<_> >, size_of<_> >\n>::type iter;\nBOOST_MPL_ASSERT(( is_same< deref<iter::base>::type, long double > ));\nSee also\nSequences ,Views,transform_view ,joint_view ,zip_view ,iterator_range\n1.3.3 iterator_range\nSynopsis\ntemplate<\ntypename First\n, typename Last\n>\nstruct iterator_range\n{\n//unspecified\n//...\nRevision Date: 15th November 200433 Sequences 1.3 Views\n};\nDescription\nA view into subset of sequence elements identified by a pair of iterators.\nHeader\n#include <boost/mpl/fold.hpp>\nModel of\n—Forward,Bidirectional , orRandom Access Sequence , depending on the category of the underlaying iterators.\nParameters\nParameter Requirement Description\nFirst,Last Forward Iterator Iterators identifying the view’s boundaries.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Forward Sequence .\nIn the following table, vis an instance of iterator_range ,firstandlastare iterators into a Forward Sequence 
,\nand [ first,last) form a valid range.\nExpression Semantics\niterator_range<first,last>\niterator_range<first,last>::typeA lazy sequence all the elements in the range [ first,last).\nExample\ntypedef range_c<int,0,100> r;\ntypedef advance_c< begin<r>::type,10 >::type first;\ntypedef advance_c< end<r>::type,-10 >::type last;\nBOOST_MPL_ASSERT(( equal<\niterator_range<first,last>\n, range_c<int,10,90>\n> ));\nSee also\nSequences ,Views,filter_view ,transform_view ,joint_view ,zip_view ,max_element\n1.3.4 joint_view\nSynopsis\ntemplate<\nRevision Date: 15th November 20041.3 Views Sequences 34\ntypename Sequence1\n, typename Sequence2\n>\nstruct joint_view\n{\n//unspecified\n//...\n};\nDescription\nA view into the sequence of elements formed by concatenating Sequence1 andSequence2 elements.\nHeader\n#include <boost/mpl/joint_view.hpp>\nModel of\n—Forward Sequence\nParameters\nParameter Requirement Description\nSequence1 ,Sequence2 Forward Sequence Sequences to create a view on.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Forward Sequence .\nIn the following table, vis an instance of joint_view ,s1ands2are arbitrary Forward Sequence s.\nExpression Semantics\njoint_view<s1,s2>\njoint_view<s1,s2>::typeA lazy Forward Sequence of all the elements in the ranges\n[begin<s1>::type , end<s1>::type ), [ begin<s2>::type ,\nend<s2>::type ).\nsize<v>::type The size of v; size<v>::value == size<s1>::value +\nsize<s2>::value ; linear complexity; see Forward Sequence .\nExample\ntypedef joint_view<\nrange_c<int,0,10>\n, range_c<int,10,15>\n> numbers;\nBOOST_MPL_ASSERT(( equal< numbers, range_c<int,0,15> > ));\nRevision Date: 15th November 200435 Sequences 1.3 Views\nSee also\nSequences ,Views,filter_view ,transform_view ,zip_view ,iterator_range\n1.3.5 single_view\nSynopsis\ntemplate<\ntypename T\n>\nstruct single_view\n{\n//unspecified\n//...\n};\nDescription\nA view onto an arbitrary type Tas on a single-element sequence.\nHeader\n#include <boost/mpl/single_view.hpp>\nModel of\n—Random Access Sequence\nParameters\nParameter Requirement Description\nT Any type The type to be wrapped in a sequence.\nExpression semantics\nThesemanticsofanexpressionaredefinedonlywheretheydifferfrom,orarenotdefinedin RandomAccessSequence .\nIn the following table, vis an instance of single_view ,xis an arbitrary type.\nExpression Semantics\nsingle_view<x>\nsingle_view<x>::typeA single-element Random Access Sequence vsuch that\nfront<v>::type is identical to x.\nsize<v>::type The size of v;size<v>::value == 1 ; seeRandom Access Sequence .\nExample\ntypedef single_view<int> view;\ntypedef begin<view>::type first;\ntypedef end<view>::type last;\nRevision Date: 15th November 20041.3 Views Sequences 36\nBOOST_MPL_ASSERT(( is_same< deref<first>::type,int > ));\nBOOST_MPL_ASSERT(( is_same< next<first>::type,last > ));\nBOOST_MPL_ASSERT(( is_same< prior<last>::type,first > ));\nBOOST_MPL_ASSERT_RELATION( size<view>::value, ==, 1 );\nSee also\nSequences ,Views,iterator_range ,filter_view ,transform_view ,joint_view ,zip_view\n1.3.6 transform_view\nSynopsis\ntemplate<\ntypename Sequence\n, typename F\n>\nstruct transform_view\n{\n//unspecified\n//...\n};\nDescription\nA view the full range of Sequence ’s transformed elements.\nHeader\n#include <boost/mpl/transform_view.hpp>\nModel of\n—Forward Sequence\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to wrap.\nF UnaryLambda Expression A transformation.\nExpression semantics\nThe 
semantics of an expression are defined only where they differ from, or are not definedin Forward Sequence .\nIn the following table, vis an instance of transform_view ,sis an arbitrary Forward Sequence , and fis an unary\nLambda Expression .\nRevision Date: 15th November 200437 Sequences 1.3 Views\nExpression Semantics\ntransform_view<s,f>\ntransform_view<s,f>::typeA lazyForward Sequence such that for each iin the range\n[begin<v>::type ,end<v>::type ) and each jin for in the range\n[begin<s>::type ,end<s>::type )deref<i>::type is identical to\napply< f, deref<j>::type >::type .\nsize<v>::type The size of v;size<v>::value == size<s>::value ; linear com-\nplexity; see Forward Sequence .\nExample\nFind the largest type in a sequence.\ntypedef vector<int,long,char,char[50],double> types;\ntypedef max_element<\ntransform_view< types, size_of<_> >\n>::type iter;\nBOOST_MPL_ASSERT_RELATION( deref<iter>::type::value, ==, 50 );\nSee also\nSequences ,Views,filter_view ,joint_view ,zip_view ,iterator_range\n1.3.7 zip_view\nSynopsis\ntemplate<\ntypename Sequences\n>\nstruct zip_view\n{\n//unspecified\n//...\n};\nDescription\nProvides a “zipped” view onto several sequences; that is, represents several sequences as a single sequence of elements\neach of which, in turn, is a sequence of the corresponding Sequences ’ elements.\nHeader\n#include <boost/mpl/zip_view.hpp>\nModel of\n—Forward Sequence\nParameters\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 38\nParameter Requirement Description\nSequences AForward Sequence ofForward Sequence sSequences to be “zipped”.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Forward Sequence .\nIn the following table, vis an instance of zip_view ,seqaForward Sequence ofnForward Sequence s.\nExpression Semantics\nzip_view<seq>\nzip_view<seq>::typeA lazyForward Sequence vsuch that for each iin [begin<v>::type ,\nend<v>::type ) and for each jin [ begin<seq>::type ,\nend<seq>::type )deref<i>::type is identical to transform<\nderef<j>::type, deref<_1> >::type .\nsize<v>::type The size of v;size<v>::value is equal to\nderef< min_element<\ntransform_view< seq, size<_1> >\n>::type >::type::value;\nlinear complexity; see Forward Sequence .\nExample\nElement-wise sum of threevectors.\ntypedef vector_c<int,1,2,3,4,5> v1;\ntypedef vector_c<int,5,4,3,2,1> v2;\ntypedef vector_c<int,1,1,1,1,1> v3;\ntypedef transform_view<\nzip_view< vector<v1,v2,v3> >\n, unpack_args< plus<_1,_2,_3> >\n> sum;\nBOOST_MPL_ASSERT(( equal< sum, vector_c<int,7,7,7,7,7> > ));\nSee also\nSequences ,Views,filter_view ,transform_view ,joint_view ,single_view ,iterator_range\n1.4 Intrinsic Metafunctions\nThe metafunctions that form the essential interface of sequence classesdocumented in the corresponding sequence\nconcepts are known as intrinsic sequence operations . 
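To make the distinction concrete, a minimal sketch contrasting an intrinsic operation with a generic algorithm (reverse, a transformation algorithm documented outside this chapter) applied to the same sequence:

typedef vector<int,char> v;
typedef push_back<v,long>::type v1;   // intrinsic operation, part of the sequence's interface
typedef reverse<v1>::type v2;         // generic algorithm, built on top of that interface
BOOST_MPL_ASSERT(( equal< v2, vector<long,char,int> > ));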
They differ from generic sequence algorithms in that, in general,\nthey need to be implementedfrom scratch for each new sequence class1).\nIt’s worth noting that STL counterparts of these metafunctions are usually implemented as member functions.\n1)Inpractice,manyofintrinsicmetafunctionsofferadefaultimplementationthatwillworkinmajorityofcases,giventhatyou’veimplementedthe\ncore functionality they rely on (such as begin/end).\nRevision Date: 15th November 200439 Sequences 1.4 Intrinsic Metafunctions\n1.4.1 at\nSynopsis\ntemplate<\ntypename Sequence\n, typename N\n>\nstruct at\n{\ntypedef unspecified type;\n};\ntemplate<\ntypename AssocSeq\n, typename Key\n, typename Default = unspecified\n>\nstruct at\n{\ntypedef unspecified type;\n};\nDescription\natis anoverloaded name :\n—at<Sequence,N> returns the N-th element from the beginning of the Forward Sequence Sequence .\n—at<AssocSeq,Key,Default> returnsthefirstelementassociatedwith KeyintheAssociativeSequence Assoc-\nSeq, orDefault if no such element exists.\nHeader\n#include <boost/mpl/at.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to be examined.\nAssocSeq Associative Sequence A sequence to be examined.\nN Integral Constant An offset from the beginning of the sequence specifying\nthe element to be retrieved.\nKey Any type A key for the element to beretrieved.\nDefault Any type A default value to return ifthe element is not found.\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 40\nExpression semantics\nFor anyForward Sequence s, andIntegral Constant n:\ntypedef at<s,n>::type t;\nReturn type: A type.\nPrecondition: 0 <= n::value < size<s>::value .\nSemantics: Equivalent to\ntypedef deref< advance< begin<s>::type,n >::type >::type t;\nFor anyAssociative Sequence s, and arbitrary types keyandx:\ntypedef at<s,key,x>::type t;\nReturn type: A type.\nSemantics: Ifhas_key<s,key>::value == true ,tis the value type associated with key; otherwise t\nis identical to x.\ntypedef at<s,key>::type t;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef at<s,key,void_>::type t;\nComplexity\nSequence archetype Complexity\nForward Sequence Linear.\nRandom Access Sequence Amortized constant time.\nAssociative Sequence Amortized constant time.\nExample\ntypedef range_c<long,10,50> range;\nBOOST_MPL_ASSERT_RELATION( (at< range, int_<0> >::value), ==, 10 );\nBOOST_MPL_ASSERT_RELATION( (at< range, int_<10> >::value), ==, 20 );\nBOOST_MPL_ASSERT_RELATION( (at< range, int_<40> >::value), ==, 50 );\ntypedef set< int const,long*,double > s;\nBOOST_MPL_ASSERT(( is_same< at<s,char>::type, void_ > ));\nBOOST_MPL_ASSERT(( is_same< at<s,int>::type, int > ));\nSee also\nForward Sequence ,Random Access Sequence ,Associative Sequence ,at_c,front,back\nRevision Date: 15th November 200441 Sequences 1.4 Intrinsic Metafunctions\n1.4.2 at_c\nSynopsis\ntemplate<\ntypename Sequence\n, long n\n>\nstruct at_c\n{\ntypedef unspecified type;\n};\nDescription\nReturnsatypeidenticaltothe nthelementfromthebeginningofthesequence. 
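For instance, a minimal sketch on a vector, complementing the range_c example below:

typedef vector<char,short,int,long> types;
BOOST_MPL_ASSERT(( is_same< at_c<types,0>::type, char > ));
BOOST_MPL_ASSERT(( is_same< at_c<types,3>::type, long > ));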
at_c<Sequence,n>::type isashorcut\nnotation for at< Sequence, long_<n> >::type .\nHeader\n#include <boost/mpl/at.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to be examined.\nn A compile-time integral constant Anoffsetfromthebeginningofthesequencespecify-\ning the element to be retrieved.\nExpression semantics\ntypedef at_c<Sequence,n>::type t;\nReturn type: A type\nPrecondition: 0 <= n < size<Sequence>::value\nSemantics: Equivalent to\ntypedef at< Sequence, long_<n> >::type t;\nComplexity\nSequence archetype Complexity\nForward Sequence Linear.\nRandom Access Sequence Amortized constant time.\nExample\ntypedef range_c<long,10,50> range;\nBOOST_MPL_ASSERT_RELATION( (at_c< range,0 >::value), ==, 10 );\nBOOST_MPL_ASSERT_RELATION( (at_c< range,10 >::value), ==, 20 );\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 42\nBOOST_MPL_ASSERT_RELATION( (at_c< range,40 >::value), ==, 50 );\nSee also\nForward Sequence ,Random Access Sequence ,at,front,back\n1.4.3 back\nSynopsis\ntemplate<\ntypename Sequence\n>\nstruct back\n{\ntypedef unspecified type;\n};\nDescription\nReturns the last element inthe sequence.\nHeader\n#include <boost/mpl/back.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Bidirectional Sequence A sequence to be examined.\nExpression semantics\nFor anyBidirectional Sequence s:\ntypedef back<s>::type t;\nReturn type: A type.\nPrecondition: empty<s>::value == false .\nSemantics: Equivalent to\ntypedef deref< prior< end<s>::type >::type >::type t;\nComplexity\nAmortized constant time.\nRevision Date: 15th November 200443 Sequences 1.4 Intrinsic Metafunctions\nExample\ntypedef range_c<int,0,1> range1;\ntypedef range_c<int,0,10> range2;\ntypedef range_c<int,-10,0> range3;\nBOOST_MPL_ASSERT_RELATION( back<range1>::value, ==, 0 );\nBOOST_MPL_ASSERT_RELATION( back<range2>::value, ==, 9 );\nBOOST_MPL_ASSERT_RELATION( back<range3>::value, ==, -1 );\nSee also\nBidirectional Sequence ,front,push_back ,end,deref,at\n1.4.4 begin\nSynopsis\ntemplate<\ntypename X\n>\nstruct begin\n{\ntypedef unspecified type;\n};\nDescription\nReturns an iterator that points to the first element of the sequence. 
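The returned iterator can be dereferenced and advanced like any other Forward Iterator; a minimal sketch of stepping through a small vector by hand, using the library's next and deref metafunctions:

typedef vector<char,int,long> types;
typedef begin<types>::type i0;   // points to char
typedef next<i0>::type i1;       // points to int
typedef next<i1>::type i2;       // points to long
BOOST_MPL_ASSERT(( is_same< deref<i1>::type, int > ));
BOOST_MPL_ASSERT(( is_same< next<i2>::type, end<types>::type > ));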
If the argument is not a Forward Sequence , returns\nvoid_.\nHeader\n#include <boost/mpl/begin_end.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nX Any type A type whose begin iterator, if any, will be returned.\nExpression semantics\nFor any arbitrary type x:\ntypedef begin<x>::type first;\nReturn type: Forward Iterator orvoid_.\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 44\nSemantics: Ifxis aForward Sequence ,firstis an iterator pointing to the first element of s; otherwise\nfirstisvoid_.\nPostcondition: Iffirstis an iterator, it is either dereferenceable or past-the-end; it is past-the-end if and\nonly if size<x>::value == 0 .\nComplexity\nAmortized constant time.\nExample\ntypedef vector< unsigned char,unsigned short,\nunsigned int,unsigned long > unsigned_types;\ntypedef begin<unsigned_types>::type iter;\nBOOST_MPL_ASSERT(( is_same< deref<iter>::type, unsigned char > ));\nBOOST_MPL_ASSERT(( is_same< begin<int>::type, void_ > ));\nSee also\nIterators,Forward Sequence ,end,size,empty\n1.4.5 clear\nSynopsis\ntemplate<\ntypename Sequence\n>\nstruct clear\n{\ntypedef unspecified type;\n};\nDescription\nReturns an empty sequence concept-identical toSequence .\nHeader\n#include <boost/mpl/clear.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nRevision Date: 15th November 200445 Sequences 1.4 Intrinsic Metafunctions\nParameter Requirement Description\nSequence Extensible Sequence orExtensible Associa-\ntive SequenceA sequence to get an empty “copy” of.\nExpression semantics\nFor anyExtensible Sequence orExtensible Associative Sequence s:\ntypedef clear<s>::type t;\nReturn type: Extensible Sequence orExtensible Associative Sequence .\nSemantics: Equivalent to\ntypedef erase< s, begin<s>::type, end<s>::type >::type t;\nPostcondition: empty<s>::value == true .\nComplexity\nAmortized constant time.\nExample\ntypedef vector_c<int,1,3,5,7,9,11> odds;\ntypedef clear<odds>::type nothing;\nBOOST_MPL_ASSERT(( empty<nothing> ));\nSee also\nExtensible Sequence ,Extensible Associative Sequence ,erase,empty,begin,end\n1.4.6 empty\nSynopsis\ntemplate<\ntypename Sequence\n>\nstruct empty\n{\ntypedef unspecified type;\n};\nDescription\nReturns an Integral Constant csuch that c::value == true if and only if the sequence is empty.\nHeader\n#include <boost/mpl/empty.hpp>\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 46\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to test.\nExpression semantics\nFor anyForward Sequence s:\ntypedef empty<s>::type c;\nReturn type: BooleanIntegral Constant .\nSemantics: Equivalent to typedef is_same< begin<s>::type,end<s>::type >::type c; .\nPostcondition: empty<s>::value == ( size<s>::value == 0 ) .\nComplexity\nAmortized constant time.\nExample\ntypedef range_c<int,0,0> empty_range;\ntypedef vector<long,float,double> types;\nBOOST_MPL_ASSERT( empty<empty_range> );\nBOOST_MPL_ASSERT_NOT( empty<types> );\nSee also\nForward Sequence ,Integral Constant ,size,begin/end\n1.4.7 end\nSynopsis\ntemplate<\ntypename X\n>\nstruct end\n{\ntypedef unspecified type;\n};\nDescription\nReturns the sequence’s past-the-end iterator. 
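Together with begin it delimits the sequence's elements: the two iterators coincide exactly when the sequence is empty, and the distance between them equals the sequence's size. A minimal sketch, assuming the library's distance metafunction:

typedef vector<> empty_v;
typedef vector<int,long> two;
BOOST_MPL_ASSERT(( is_same< begin<empty_v>::type, end<empty_v>::type > ));
BOOST_MPL_ASSERT_RELATION(
      ( distance< begin<two>::type, end<two>::type >::value ), ==, 2 );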
If the argument is not a Forward Sequence , returns void_.\nRevision Date: 15th November 200447 Sequences 1.4 Intrinsic Metafunctions\nHeader\n#include <boost/mpl/begin_end.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nX Any type A type whose end iterator, if any, will be returned.\nExpression semantics\nFor any arbitrary type x:\ntypedef end<x>::type last;\nReturn type: Forward Iterator orvoid_.\nSemantics: IfxisForwardSequence ,lastisaniteratorpointingonepastthelastelementin s;otherwise\nlastisvoid_.\nPostcondition: Iflastis an iterator, it is past-the-end.\nComplexity\nAmortized constant time.\nExample\ntypedef vector<long> v;\ntypedef begin<v>::type first;\ntypedef end<v>::type last;\nBOOST_MPL_ASSERT(( is_same< next<first>::type, last > ));\nSee also\nIterators,Forward Sequence ,begin,end,next\n1.4.8 erase\nSynopsis\ntemplate<\ntypename Sequence\n, typename First\n, typename Last = unspecified\n>\nstruct erase\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 48\n{\ntypedef unspecified type;\n};\nDescription\neraseperforms a removal of one ormore adjacent elements in the sequence starting from an arbitrary position.\nHeader\n#include <boost/mpl/erase.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence ExtensibleSequence orExtensibleAsso-\nciative SequenceA sequence to erase from.\nFirst Forward Iterator An iterator to the beginning ofthe range to be erased.\nLast Forward Iterator An iterator past-the-end of the range to be erased.\nExpression semantics\nFor anyExtensible Sequence s, and iterators pos,firstandlastintos:\ntypedef erase<s,first,last>::type r;\nReturn type: Extensible Sequence .\nPrecondition: [first,last) is a valid range in s.\nSemantics: ris a new sequence, concept-identical tos, of the following elements: [ begin<s>::type ,\npos), [last,end<s>::type ).\nPostcondition: The relative order of the elements in ris the same as in s;\nsize<r>::value == size<s>::value - distance<first,last>::value\ntypedef erase<s,pos>::type r;\nReturn type: Extensible Sequence .\nPrecondition: posis a dereferenceable iterator in s.\nSemantics: Equivalent to\ntypedef erase< s,pos,next<pos>::type >::type r;\nFor anyExtensible Associative Sequence s, and iterator posintos:\ntypedef erase<s,pos>::type r;\nReturn type: Extensible Sequence .\nRevision Date: 15th November 200449 Sequences 1.4 Intrinsic Metafunctions\nPrecondition: posis a dereferenceable iterator to s.\nSemantics: Erases the element at a specific position pos; equivalent to erase_key<s,\nderef<pos>::type >::type .\nPostcondition: size<r>::value == size<s>::value - 1 .\nComplexity\nSequence archetype Complexity (the range form)\nExtensible Associative Sequence Amortized constant time.\nExtensible Sequence Quadratic in the worst case, linear at best.\nExample\ntypedef vector_c<int,1,0,5,1,7,5,0,5> values;\ntypedef find< values, integral_c<int,7> >::type pos;\ntypedef erase<values,pos>::type result;\nBOOST_MPL_ASSERT_RELATION( size<result>::value, ==, 7 );\ntypedef find<result, integral_c<int,7> >::type iter;\nBOOST_MPL_ASSERT(( is_same< iter, end<result>::type > ));\nSee also\nExtensible Sequence ,Extensible Associative Sequence ,erase_key ,pop_front ,pop_back ,insert\n1.4.9 erase_key\nSynopsis\ntemplate<\ntypename AssocSeq\n, typename Key\n>\nstruct erase_key\n{\ntypedef unspecified type;\n};\nDescription\nErases elements associated with the key Keyin theExtensible Associative Sequence AssocSeq .\nHeader\n#include 
<boost/mpl/erase_key.hpp>\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 50\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nAssocSeq Extensible Associative Sequence A sequence to erase elements from.\nKey Any type A key for the elements to be removed.\nExpression semantics\nFor anyExtensible Associative Sequence s, and arbitrary type key:\ntypedef erase_key<s,key>::type r;\nReturn type: Extensible Associative Sequence .\nSemantics: risconcept-identical and equivalent to sexcept that has_key<r,k>::value == false .\nPostcondition: size<r>::value == size<s>::value - 1 .\nComplexity\nAmortized constant time.\nExample\ntypedef map< pair<int,unsigned>, pair<char,long> > m;\ntypedef erase_key<m,char>::type m1;\nBOOST_MPL_ASSERT_RELATION( size<m1>::type::value, ==, 1 );\nBOOST_MPL_ASSERT(( is_same< at<m1,char>::type,void_ > ));\nBOOST_MPL_ASSERT(( is_same< at<m1,int>::type,unsigned > ));\nSee also\nExtensible Associative Sequence ,erase,has_key,insert\n1.4.10 front\nSynopsis\ntemplate<\ntypename Sequence\n>\nstruct front\n{\ntypedef unspecified type;\n};\nRevision Date: 15th November 200451 Sequences 1.4 Intrinsic Metafunctions\nDescription\nReturns the first element inthe sequence.\nHeader\n#include <boost/mpl/front.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to be examined.\nExpression semantics\nFor anyForward Sequence s:\ntypedef front<s>::type t;\nReturn type: A type.\nPrecondition: empty<s>::value == false .\nSemantics: Equivalent to\ntypedef deref< begin<s>::type >::type t;\nComplexity\nAmortized constant time.\nExample\ntypedef list<long>::type types1;\ntypedef list<int,long>::type types2;\ntypedef list<char,int,long>::type types3;\nBOOST_MPL_ASSERT(( is_same< front<types1>::type, long > ));\nBOOST_MPL_ASSERT(( is_same< front<types2>::type, int> ));\nBOOST_MPL_ASSERT(( is_same< front<types3>::type, char> ));\nSee also\nForward Sequence ,back,push_front ,begin,deref,at\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 52\n1.4.11 has_key\nSynopsis\ntemplate<\ntypename Sequence\n, typename Key\n>\nstruct has_key\n{\ntypedef unspecified type;\n};\nDescription\nReturns a true-valued Integral Constant ifSequence contains an element with key Key.\nHeader\n#include <boost/mpl/has_key.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Associative Sequence A sequence to query.\nKey Any type The queried key.\nExpression semantics\nFor anyAssociative Sequence s, and arbitrary type key:\ntypedef has_key<s,key>::type c;\nReturn type: BooleanIntegral Constant .\nSemantics: c::value == true ifkeyis ins’s set of keys; otherwise c::value == false .\nComplexity\nAmortized constant time.\nExample\ntypedef map< pair<int,unsigned>, pair<char,long> > m;\nBOOST_MPL_ASSERT_NOT(( has_key<m,long> ));\ntypedef insert< m, pair<long,unsigned long> > m1;\nBOOST_MPL_ASSERT(( has_key<m1,long> ));\nRevision Date: 15th November 200453 Sequences 1.4 Intrinsic Metafunctions\nSee also\nAssociative Sequence ,count,insert,erase_key\n1.4.12 insert\nSynopsis\ntemplate<\ntypename Sequence\n, typename Pos\n, typename T\n>\nstruct insert\n{\ntypedef unspecified type;\n};\ntemplate<\ntypename Sequence\n, typename T\n>\nstruct insert\n{\ntypedef unspecified type;\n};\nDescription\ninsertis anoverloaded name :\n—insert<Sequence,Pos,T> performs an insertion of type Tat an arbitrary position PosinSequence .Posis\nignored 
is Sequence is a model of Extensible Associative Sequence .\n—insert<Sequence,T> is a shortcut notation for insert<Sequence,Pos,T> for the case when Sequence is a\nmodel of Extensible Associative Sequence .\nHeader\n#include <boost/mpl/insert.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence ExtensibleSequence orExtensibleAsso-\nciative SequenceA sequence to insert into.\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 54\nParameter Requirement Description\nPos Forward Iterator An iterator in Sequence specifying the insertion po-\nsition.\nT Any type The element to be inserted.\nExpression semantics\nFor anyExtensible Sequence s, iterator posins, and arbitrary type x:\ntypedef insert<s,pos,x>::type r;\nReturn type: Extensible Sequence\nPrecondition: posis an iterator in s.\nSemantics: ris a sequence, concept-identical tos, of the following elements: [ begin<s>::type ,pos),\nx, [pos,end<s>::type ).\nPostcondition: The relative order of the elements in ris the same as in s.\nat< r, distance< begin<s>::type,pos >::type >::type\nis identical to x;\nsize<r>::value == size<s>::value + 1;\nFor anyExtensible Associative Sequence s, iterator posins, and arbitrary type x:\ntypedef insert<s,x>::type r;\nReturn type: Extensible Associative Sequence\nSemantics: risconcept-identical and equivalent to s, except that at< r, key_type<s,x>::type\n>::type is identical to value_type<s,x>::type .\nPostcondition: size<r>::value == size<s>::value + 1 .\ntypedef insert<s,pos,x>::type r;\nReturn type: Extensible Associative Sequence\nPrecondition: posis an iterator in s.\nSemantics: Equivalent to typedef insert<s,x>::type r ;posis ignored.\nComplexity\nSequence archetype Complexity\nExtensible Associative Sequence Amortized constant time.\nExtensible Sequence Linear in the worst case, or amortized constant time.\nExample\ntypedef vector_c<int,0,1,3,4,5,6,7,8,9> numbers;\ntypedef find< numbers,integral_c<int,3> >::type pos;\ntypedef insert< numbers,pos,integral_c<int,2> >::type range;\nBOOST_MPL_ASSERT_RELATION( size<range>::value, ==, 10 );\nBOOST_MPL_ASSERT(( equal< range,range_c<int,0,10> > ));\nRevision Date: 15th November 200455 Sequences 1.4 Intrinsic Metafunctions\ntypedef map< mpl::pair<int,unsigned> > m;\ntypedef insert<m,mpl::pair<char,long> >::type m1;\nBOOST_MPL_ASSERT_RELATION( size<m1>::value, ==, 2 );\nBOOST_MPL_ASSERT(( is_same< at<m1,int>::type,unsigned > ));\nBOOST_MPL_ASSERT(( is_same< at<m1,char>::type,long > ));\nSee also\nExtensible Sequence ,Extensible Associative Sequence ,insert_range ,push_front ,push_back ,erase\n1.4.13 insert_range\nSynopsis\ntemplate<\ntypename Sequence\n, typename Pos\n, typename Range\n>\nstruct insert_range\n{\ntypedef unspecified type;\n};\nDescription\ninsert_range performs an insertion of a range of elements at an arbitrary position in the sequence.\nHeader\n#include <boost/mpl/insert_range.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence ExtensibleSequence orExtensibleAsso-\nciative SequenceA sequence to insert into.\nPos Forward Iterator An iterator in Sequence specifying the insertion po-\nsition.\nRange Forward Sequence The range of elements to be inserted.\nExpression semantics\nFor anyExtensible Sequence s, iterator posins, andForward Sequence range:\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 56\ntypedef insert<s,pos,range>::type r;\nReturn type: Extensible Sequence .\nPrecondition: posis an iterator into 
s.\nSemantics: ris a sequence, concept-identical tos, of the following elements: [ begin<s>::type ,pos),\n[begin<r>::type ,end<r>::type ), [pos,end<s>::type ).\nPostcondition: The relative order of the elements in ris the same as in s;\nsize<r>::value == size<s>::value + size<range>::value\nComplexity\nSequence dependent. Quadratic in the worst case, linear at best; see the particular sequence class’ specification for\ndetails.\nExample\ntypedef vector_c<int,0,1,7,8,9> numbers;\ntypedef find< numbers,integral_c<int,7> >::type pos;\ntypedef insert_range< numbers,pos,range_c<int,2,7> >::type range;\nBOOST_MPL_ASSERT_RELATION( size<range>::value, ==, 10 );\nBOOST_MPL_ASSERT(( equal< range,range_c<int,0,10> > ));\ntypedef insert_range<\nlist0<>\n, end< list0<> >::type\n, list<int>\n>::type result2;\nBOOST_MPL_ASSERT_RELATION( size<result2>::value, ==, 1 );\nSee also\nExtensible Sequence ,insert,push_front ,push_back ,erase\n1.4.14 is_sequence\nSynopsis\ntemplate<\ntypename X\n>\nstruct is_sequence\n{\ntypedef unspecified type;\n};\nDescription\nReturns a boolean Integral Constant csuch that c::value == true if and only if Xis a model of Forward Sequence .\nRevision Date: 15th November 200457 Sequences 1.4 Intrinsic Metafunctions\nHeader\n#include <boost/mpl/is_sequence.hpp>\nParameters\nParameter Requirement Description\nX Any type The type to query.\nExpression semantics\ntypedef is_sequence<X>::type c;\nReturn type: BooleanIntegral Constant .\nSemantics: Equivalent to\ntypedef not_< is_same< begin<T>::type,void_ > >::type c;\nComplexity\nAmortized constant time.\nExample\nstruct UDT {};\nBOOST_MPL_ASSERT_NOT(( is_sequence< std::vector<int> > ));\nBOOST_MPL_ASSERT_NOT(( is_sequence< int > ));\nBOOST_MPL_ASSERT_NOT(( is_sequence< int& > ));\nBOOST_MPL_ASSERT_NOT(( is_sequence< UDT > ));\nBOOST_MPL_ASSERT_NOT(( is_sequence< UDT* > ));\nBOOST_MPL_ASSERT(( is_sequence< range_c<int,0,0> > ));\nBOOST_MPL_ASSERT(( is_sequence< list<> > ));\nBOOST_MPL_ASSERT(( is_sequence< list<int> > ));\nBOOST_MPL_ASSERT(( is_sequence< vector<> > ));\nBOOST_MPL_ASSERT(( is_sequence< vector<int> > ));\nSee also\nForward Sequence ,begin,end,vector,list,range_c\n1.4.15 key_type\nSynopsis\ntemplate<\ntypename Sequence\n, typename X\n>\nstruct key_type\n{\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 58\ntypedef unspecified type;\n};\nDescription\nReturns the keythat would be used to identify XinSequence .\nHeader\n#include <boost/mpl/key_type.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Associative Sequence A sequence to query.\nX Any type The type to get the keyfor.\nExpression semantics\nFor anyAssociative Sequence s, iterators pos1andpos2ins, and an artibrary type x:\ntypedef key_type<s,x>::type k;\nReturn type: A type.\nPrecondition: xcan be put in s.\nSemantics: kis thekeythat would be used to identify xins.\nPostcondition: Ifkey_type< s,deref<pos1>::type >::type is identical to key_type<\ns,deref<pos2>::type >::type then pos1is identical to pos2.\nComplexity\nAmortized constant time.\nExample\ntypedef key_type< map<>,pair<int,unsigned> >::type k1;\ntypedef key_type< set<>,pair<int,unsigned> >::type k2;\nBOOST_MPL_ASSERT(( is_same< k1,int > ));\nBOOST_MPL_ASSERT(( is_same< k2,pair<int,unsigned> > ));\nSee also\nAssociative Sequence ,value_type ,has_key,set,map\nRevision Date: 15th November 200459 Sequences 1.4 Intrinsic Metafunctions\n1.4.16 order\nSynopsis\ntemplate<\ntypename Sequence\n, typename Key\n>\nstruct order\n{\ntypedef 
unspecified type;\n};\nDescription\nReturns a unique unsigned Integral Constant associated with the key KeyinSequence .\nHeader\n#include <boost/mpl/order.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Associative Sequence A sequence to query.\nKey Any type The queried key.\nExpression semantics\nFor anyAssociative Sequence s, and arbitrary type key:\ntypedef order<s,key>::type n;\nReturn type: Unsigned Integral Constant .\nSemantics: Ifhas_key<s,key>::value == true ,nis a unique unsigned Integral Constant associated\nwith keyins; otherwise, nis identical to void_.\nComplexity\nAmortized constant time.\nExample\ntypedef map< pair<int,unsigned>, pair<char,long> > m;\nBOOST_MPL_ASSERT_NOT(( is_same< order<m,int>::type, void_ > ));\nBOOST_MPL_ASSERT(( is_same< order<m,long>::type,void_ > ));\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 60\nSee also\nAssociative Sequence ,has_key,count,map\n1.4.17 pop_back\nSynopsis\ntemplate<\ntypename Sequence\n>\nstruct pop_back\n{\ntypedef unspecified type;\n};\nDescription\npop_back performs a removal at the endof the sequence with guaranteed O(1)complexity.\nHeader\n#include <boost/mpl/pop_back.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Back Extensible Sequence A sequence to erase the last element from.\nExpression semantics\nFor anyBack Extensible Sequence s:\ntypedef pop_back<s>::type r;\nReturn type: Back Extensible Sequence .\nPrecondition: empty<s>::value == false .\nSemantics: Equivalent to erase<s,end<s>::type>::type; .\nPostcondition: size<r>::value == size<s>::value - 1 .\nComplexity\nAmortized constant time.\nRevision Date: 15th November 200461 Sequences 1.4 Intrinsic Metafunctions\nExample\ntypedef vector<long>::type types1;\ntypedef vector<long,int>::type types2;\ntypedef vector<long,int,char>::type types3;\ntypedef pop_back<types1>::type result1;\ntypedef pop_back<types2>::type result2;\ntypedef pop_back<types3>::type result3;\nBOOST_MPL_ASSERT_RELATION( size<result1>::value, ==, 0 );\nBOOST_MPL_ASSERT_RELATION( size<result2>::value, ==, 1 );\nBOOST_MPL_ASSERT_RELATION( size<result3>::value, ==, 2 );\nBOOST_MPL_ASSERT(( is_same< back<result2>::type, long> ));\nBOOST_MPL_ASSERT(( is_same< back<result3>::type, int > ));\nSee also\nBack Extensible Sequence ,erase,push_back ,back,pop_front\n1.4.18 pop_front\nSynopsis\ntemplate<\ntypename Sequence\n>\nstruct pop_front\n{\ntypedef unspecified type;\n};\nDescription\npop_front performs a removal at the beginning of the sequence with guaranteed O(1)complexity.\nHeader\n#include <boost/mpl/pop_front.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Front Extensible Sequence A sequence to erase the firstelement from.\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 62\nExpression semantics\nFor anyFront Extensible Sequence s:\ntypedef pop_front<s>::type r;\nReturn type: Front Extensible Sequence .\nPrecondition: empty<s>::value == false .\nSemantics: Equivalent to erase<s,begin<s>::type>::type; .\nPostcondition: size<r>::value == size<s>::value - 1 .\nComplexity\nAmortized constant time.\nExample\ntypedef vector<long>::type types1;\ntypedef vector<int,long>::type types2;\ntypedef vector<char,int,long>::type types3;\ntypedef pop_front<types1>::type result1;\ntypedef pop_front<types2>::type result2;\ntypedef pop_front<types3>::type result3;\nBOOST_MPL_ASSERT_RELATION( size<result1>::value, ==, 
0 );\nBOOST_MPL_ASSERT_RELATION( size<result2>::value, ==, 1 );\nBOOST_MPL_ASSERT_RELATION( size<result3>::value, ==, 2 );\nBOOST_MPL_ASSERT(( is_same< front<result2>::type, long > ));\nBOOST_MPL_ASSERT(( is_same< front<result3>::type, int > ));\nSee also\nFront Extensible Sequence ,erase,push_front ,front,pop_back\n1.4.19 push_back\nSynopsis\ntemplate<\ntypename Sequence\n, typename T\n>\nstruct push_back\n{\ntypedef unspecified type;\n};\nRevision Date: 15th November 200463 Sequences 1.4 Intrinsic Metafunctions\nDescription\npush_back performs an insertion at the end of the sequence with guaranteed O(1)complexity.\nHeader\n#include <boost/mpl/push_back.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Back Extensible Sequence A sequence to insert into.\nT Any type The element to be inserted.\nExpression semantics\nFor anyBack Extensible Sequence sand arbitrary type x:\ntypedef push_back<s,x>::type r;\nReturn type: Back Extensible Sequence .\nSemantics: Equivalent to\ntypedef insert< s,end<s>::type,x >::type r;\nPostcondition: back<r>::type is identical to x;\nsize<r>::value == size<s>::value + 1\nComplexity\nAmortized constant time.\nExample\ntypedef vector_c<bool,false,false,false,\ntrue,true,true,false,false> bools;\ntypedef push_back<bools,false_>::type message;\nBOOST_MPL_ASSERT_RELATION( back<message>::type::value, ==, false );\nBOOST_MPL_ASSERT_RELATION(\n( count_if<message, equal_to<_1,false_> >::value ), ==, 6\n);\nSee also\nBack Extensible Sequence ,insert,pop_back ,back,push_front\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 64\n1.4.20 push_front\nSynopsis\ntemplate<\ntypename Sequence\n, typename T\n>\nstruct push_front\n{\ntypedef unspecified type;\n};\nDescription\npush_front performs an insertion at thebeginning of the sequence with guaranteed O(1)complexity.\nHeader\n#include <boost/mpl/push_front.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Front Extensible Sequence A sequence to insert into.\nT Any type The element to be inserted.\nExpression semantics\nFor anyFront Extensible Sequence sand arbitrary type x:\ntypedef push_front<s,x>::type r;\nReturn type: Front Extensible Sequence .\nSemantics: Equivalent to\ntypedef insert< s,begin<s>::type,x >::type r;\nPostcondition: size<r>::value == size<s>::value + 1 ;front<r>::type is identical to x.\nComplexity\nAmortized constant time.\nExample\ntypedef vector_c<int,1,2,3,5,8,13,21> v;\nBOOST_MPL_ASSERT_RELATION( size<v>::value, ==, 7 );\nRevision Date: 15th November 200465 Sequences 1.4 Intrinsic Metafunctions\ntypedef push_front< v,integral_c<int,1> >::type fibonacci;\nBOOST_MPL_ASSERT_RELATION( size<fibonacci>::value, ==, 8 );\nBOOST_MPL_ASSERT(( equal<\nfibonacci\n, vector_c<int,1,1,2,3,5,8,13,21>\n, equal_to<_,_>\n> ));\nSee also\nFront Extensible Sequence ,insert,pop_front ,front,push_back\n1.4.21 sequence_tag\nSynopsis\ntemplate<\ntypename X\n>\nstruct sequence_tag\n{\ntypedef unspecified type;\n};\nDescription\nsequence_tag is atag metafunction for alltag dispatched intrinsic sequence operations .\nHeader\n#include <boost/mpl/sequence_tag.hpp>\nParameters\nParameter Requirement Description\nX Any type A type to obtain a sequence tag for.\nExpression semantics\nFor any arbitrary type x:\ntypedef sequence_tag<x>::type tag;\nReturn type: A type.\nSemantics: tagis an unspecified tag type for x.\nComplexity\nAmortized constant time.\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions 
Sequences 66\nSee also\nIntrinsic Metafunctions ,Tag Dispatched Metafunction\n1.4.22 size\nSynopsis\ntemplate<\ntypename Sequence\n>\nstruct size\n{\ntypedef unspecified type;\n};\nDescription\nsizereturns the number of elements in the sequence, that is, the number of elements in the range\n[begin<Sequence>::type ,end<Sequence>::type ).\nHeader\n#include <boost/mpl/size.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to query.\nExpression semantics\nFor anyForward Sequence s:\ntypedef size<s>::type n;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef distance< begin<s>::type,end<s>::type >::type n;\nPostcondition: n::value >= 0 .\nComplexity\nThecomplexityofthe sizemetafunctiondirectlydependsontheimplementationoftheparticularsequenceitisapplied\nto. In the worst case, sizeguarantees a linear complexity.\nRevision Date: 15th November 200467 Sequences 1.4 Intrinsic Metafunctions\nIf the sis aRandom Access Sequence ,size<s>::type is anO(1)operation. The opposite is not necessarily true —\nfor example, a sequence classthat models Forward Sequence might still give us an O(1) sizeimplementation.\nExample\ntypedef list0<> empty_list;\ntypedef vector_c<int,0,1,2,3,4,5> numbers;\ntypedef range_c<int,0,100> more_numbers;\nBOOST_MPL_ASSERT_RELATION( size<list>::value, ==, 0 );\nBOOST_MPL_ASSERT_RELATION( size<numbers>::value, ==, 5 );\nBOOST_MPL_ASSERT_RELATION( size<more_numbers>::value, ==, 100 );\nSee also\nForward Sequence ,Random Access Sequence ,empty,begin,end,distance\n1.4.23 value_type\nSynopsis\ntemplate<\ntypename Sequence\n, typename X\n>\nstruct value_type\n{\ntypedef unspecified type;\n};\nDescription\nReturns the valuethat would be used for element XinSequence .\nHeader\n#include <boost/mpl/value_type.hpp>\nModel of\nTag Dispatched Metafunction\nParameters\nParameter Requirement Description\nSequence Associative Sequence A sequence to query.\nX Any type The type to get the valuefor.\nRevision Date: 15th November 20041.4 Intrinsic Metafunctions Sequences 68\nExpression semantics\nFor anyAssociative Sequence s, and an artibrary type x:\ntypedef value_type<s,x>::type v;\nReturn type: A type.\nPrecondition: xcan be put in s.\nSemantics: vis thevaluethat would be used for xins.\nPostcondition: If .. parsed-literal:\nhas_key< s,key_type<s,x>::type >::type\nthen .. parsed-literal:\nat< s,key_type<s,x>::type >::type\nis identical to value_type<s,x>::type .\nComplexity\nAmortized constant time.\nExample\ntypedef value_type< map<>,pair<int,unsigned> >::type v1;\ntypedef value_type< set<>,pair<int,unsigned> >::type v2;\nBOOST_MPL_ASSERT(( is_same< v1,unsigned > ));\nBOOST_MPL_ASSERT(( is_same< v2,pair<int,unsigned> > ));\nSee also\nAssociative Sequence ,key_type ,at,set,map\nRevision Date: 15th November 2004Chapter 2 Iterators\nIteratorsaregenericmeansofaddressingaparticularelementorarangeofsequentialelementsinasequence. Theyare\nalsoamechanismthatmakesitpossibletodecouple algorithms fromconcretecompile-time sequenceimplementations .\nUnder the hood, all MPL sequence algorithms are implemented in terms of iterators. In particular, that means that they\nwill work on any custom compile-time sequence, given that the appropriate iterator inteface is provided.\n2.1 Concepts\nAll iterators in MPL are classified into three iterator concepts, or categories , named according to the type of traversal\nprovided. The categories are: Forward Iterator ,Bidirectional Iterator , andRandom Access Iterator . 
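To make the decoupling mentioned above concrete, here is a minimal sketch — not part of the library; simple_find, types and pos are purely illustrative names — of a linear search written only in terms of the iterator interface ( begin, end, next and deref ), so that it works with any conforming compile-time sequence:

#include <boost/mpl/begin_end.hpp>
#include <boost/mpl/next_prior.hpp>
#include <boost/mpl/deref.hpp>
#include <boost/mpl/eval_if.hpp>
#include <boost/mpl/identity.hpp>
#include <boost/mpl/vector.hpp>
#include <boost/mpl/assert.hpp>
#include <boost/type_traits/is_same.hpp>

namespace mpl = boost::mpl;

// primary template: test the current element, otherwise recurse via next<>
template< typename First, typename Last, typename T >
struct simple_find
    : mpl::eval_if<
          boost::is_same< typename mpl::deref<First>::type, T >
        , mpl::identity<First>
        , simple_find< typename mpl::next<First>::type, Last, T >
        >
{
};

// termination: return the past-the-end iterator when T is absent
template< typename Last, typename T >
struct simple_find< Last, Last, T >
{
    typedef Last type;
};

typedef mpl::vector<char,int,long> types;
typedef simple_find<
      mpl::begin<types>::type
    , mpl::end<types>::type
    , int
    >::type pos;

BOOST_MPL_ASSERT(( boost::is_same< mpl::deref<pos>::type, int > ));

Nothing in simple_find refers to vector itself; substituting a list or a user-defined sequence that provides the same iterator interface would leave the search unchanged.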
The concepts are\nhierarchical: Random Access Iterator is a refinement of Bidirectional Iterator , which, in its turn, is a refinement of\nForward Iterator .\nBecauseoftheinherentlyimmutablenatureofthevalueaccess,MPLiteratorsescapetheproblemsofthetraversal-only\ncategorization discussed atlength in [ n1550].\n2.1.1 Forward Iterator\nDescription\nAForward Iterator iis a type that represents a positional reference to an element of a Forward Sequence . It allows to\naccess the element through a dereference operation, and provides a way to obtain an iterator to the next element in a\nsequence.\nDefinitions\n—An iterator can be dereferenceable , meaning that deref<i>::type is a well-defined expression.\n—An iterator is past-the-end if it points beyond the last element of a sequence; past-the-end iterators are non-\ndereferenceable.\n—An iterator iisincrementable if there is a “next” iterator, that is, if next<i>::type expression is well-defined;\npast-the-end iterators arenot incrementable.\n—Two iterators into the same sequence are equivalent if they have the same type.\n—An iterator jisreachable from an iterator iif , after recursive application of nextmetafunction to ia finite\nnumber of times, iis equivalent to j.\n—The notation [ i,j) refers to a rangeof iterators beginning with iand up to but not including j.\n—The range [ i,j) is a valid range ifjis reachable from i.\nExpression requirements2.1 Concepts Iterators 70\nExpression Type Complexity\nderef<i>::type Any type Amortized constant time\nnext<i>::type Forward Iterator Amortized constant time\ni::category Integral Constant , convertible to forward_itera-\ntor_tagConstant time\nExpression semantics\ntypedef deref<i>::type j;\nPrecondition: iis dereferenceable\nSemantics: jis identical to the type of thepointed element\ntypedef next<i>::type j;\nPrecondition: iis incrementable\nSemantics: jis the next iterator in a sequence\nPostcondition: jis dereferenceable or past-the-end\ntypedef i::category c;\nSemantics: cis identical to the iterator’scategory tag\nInvariants\nFor any forward iterators iandjthe following invariants always hold:\n—iandjare equivalent if and only if theyare pointing to the same element.\n—Ifiis dereferenceable, and jis equivalent to i, then jis dereferenceable as well.\n—Ifiandjare equivalent and dereferenceable, then deref<i>::type andderef<j>::type are identical.\n—Ifiis incrementable, and jis equivalent to i, then jis incrementable as well.\n—Ifiandjare equivalent and incrementable, then next<i>::type andnext<j>::type are equivalent.\nSee also\nIterators,Bidirectional Iterator ,Forward Sequence ,deref,next\n2.1.2 Bidirectional Iterator\nDescription\nABidirectional Iterator is aForward Iterator that provides a way to obtain an iterator to the previous element in a\nsequence.\nRefinement of\nForward Iterator\nRevision Date: 15th November 200471 Iterators 2.1 Concepts\nDefinitions\n—abidirectionaliterator iisdecrementable ifthereisa“previous”iterator,thatis,if prior<i>::type expression\nis well-defined; iteratorspointing to the first element of the sequence are not decrementable.\nExpression requirements\nIn addition to the requirements defined in Forward Iterator , the following requirementsmust be met.\nExpression Type Complexity\nnext<i>::type Bidirectional Iterator Amortized constant time\nprior<i>::type Bidirectional Iterator Amortized constant time\ni::category Integral Constant , convertible to bidirec-\ntional_iterator_tagConstant time\nExpression semantics\ntypedef prior<i>::type j;\nPrecondition: iis 
decrementable\nSemantics: jis an iterator pointing to theprevious element of the sequence\nPostcondition: jis dereferenceable and incrementable\nInvariants\nFor any bidirectional iterators iandjthe following invariants always hold:\n—Ifiis incrementable, then prior< next<i>::type >::type is a null operation; similarly, if iis decre-\nmentable, next< prior<i>::type >::type is a null operation.\nSee also\nIterators,Forward Iterator ,Random Access Iterator ,Bidirectional Sequence ,prior\n2.1.3 Random Access Iterator\nDescription\nARandom Access Iterator is aBidirectional Iterator that provides constant-time guarantees on moving the iterator\nan arbitrary number of positions forward or backward and for measuring the distance to another iterator in the same\nsequence.\nRefinement of\nBidirectional Iterator\nExpression requirements\nIn addition to the requirements defined in Bidirectional Iterator , the following requirementsmust be met.\nRevision Date: 15th November 20042.2 Iterator Metafunctions Iterators 72\nExpression Type Complexity\nnext<i>::type Random Access Iterator Amortized constant time\nprior<i>::type Random Access Iterator Amortized constant time\ni::category Integral Constant , convertible to random_ac-\ncess_iterator_tagConstant time\nadvance<i,n>::type Random Access Iterator Amortized constant time\ndistance<i,j>::type Integral Constant Amortized constant time\nExpression semantics\ntypedef advance<i,n>::type j;\nSemantics: Seeadvance specification\ntypedef distance<i,j>::type n;\nSemantics: Seedistance specification\nInvariants\nFor any random access iterators iandjthe following invariants always hold:\n—Ifadvance<i,n>::type is well-defined, then advance< advance<i,n>::type, negate<n>::type\n>::type is a null operation.\nSee also\nIterators,Bidirectional Iterator ,Random Access Sequence ,advance,distance\n2.2 Iterator Metafunctions\n2.2.1 advance\nSynopsis\ntemplate<\ntypename Iterator\n, typename N\n>\nstruct advance\n{\ntypedef unspecified type;\n};\nDescription\nMoves Iterator by the distance N. 
Forbidirectional andrandom access iterators, the distance may be negative.\nHeader\n#include <boost/mpl/advance.hpp>\nRevision Date: 15th November 200473 Iterators 2.2 Iterator Metafunctions\nParameters\nParameter Requirement Description\nIterator Forward Iterator An iterator to advance.\nN Integral Constant A distance.\nModel Of\nTag Dispatched Metafunction\nExpression semantics\nFor aForward Iterator iterand arbitrary Integral Constant n:\ntypedef advance<iter,n>::type j;\nReturn type: Forward Iterator .\nPrecondition: IfIterator is aForward Iterator ,n::value must be nonnegative.\nSemantics: Equivalent to:\ntypedef iter i0;\ntypedef next<i0>::type i1;\n...\ntypedef next<i n-1>::type j;\nifn::value > 0 , and\ntypedef iter i0;\ntypedef prior<i0>::type i1;\n...\ntypedef prior<i n-1>::type j;\notherwise.\nPostcondition: jis dereferenceable or past-the-end; distance<iter,j>::value == n::value if\nn::value > 0 , and distance<j,iter>::value == n::value otherwise.\nComplexity\nAmortized constant time if iteris a model of Random Access Iterator , otherwise linear time.\nExample\ntypedef range_c<int,0,10> numbers;\ntypedef begin<numbers>::type first;\ntypedef end<numbers>::type last;\ntypedef advance<first,int_<10> >::type i1;\ntypedef advance<last,int_<-10> >::type i2;\nBOOST_MPL_ASSERT(( boost::is_same<i1,last> ));\nBOOST_MPL_ASSERT(( boost::is_same<i2,first> ));\nRevision Date: 15th November 20042.2 Iterator Metafunctions Iterators 74\nSee also\nIterators,Tag Dispatched Metafunction ,distance ,next\n2.2.2 distance\nSynopsis\ntemplate<\ntypename First\n, typename Last\n>\nstruct distance\n{\ntypedef unspecified type;\n};\nDescription\nReturns the distance between Firstand Lastiterators, that is, an Integral Constant nsuch that ad-\nvance<First,n>::type is identical to Last.\nHeader\n#include <boost/mpl/distance.hpp>\nParameters\nParameter Requirement Description\nFirst,Last Forward Iterator Iterators to compute a distance between.\nModel Of\nTag Dispatched Metafunction\nExpression semantics\nFor anyForward Iterator sfirstandlast:\ntypedef distance<first,last>::type n;\nReturn type: Integral Constant .\nPrecondition: [first,last) is a valid range.\nSemantics: Equivalent to\ntypedef iter_fold<\niterator_range<first,last>\n, long_<0>\n, next<_1>\n>::type n;\nPostcondition: is_same< advance<first,n>::type, last >::value == true .\nRevision Date: 15th November 200475 Iterators 2.2 Iterator Metafunctions\nComplexity\nAmortized constant time if firstandlastareRandom Access Iterator s, otherwise linear time.\nExample\ntypedef range_c<int,0,10>::type range;\ntypedef begin<range>::type first;\ntypedef end<range>::type last;\nBOOST_MPL_ASSERT_RELATION( (distance<first,last>::value), ==, 10);\nSee also\nIterators,Tag Dispatched Metafunction ,advance,next,prior\n2.2.3 next\nSynopsis\ntemplate<\ntypename Iterator\n>\nstruct next\n{\ntypedef unspecified type;\n};\nDescription\nReturns the next iterator in the sequence. [ Note: nexthas a number of overloaded meanings, depending on the type of\nits argument. For instance, if Xis anIntegral Constant ,next<X> returns an incremented Integral Constant of the same\ntype. The following specification is iterator-specific. Please refer to the corresponding concept’s documentation for the\ndetails of the alternative semantics — end note].\nHeader\n#include <boost/mpl/next_prior.hpp>\nParameters\nParameter Requirement Description\nIterator Forward Iterator . 
An iterator to increment.\nExpression semantics\nFor anyForward Iterator siter:\ntypedef next<iter>::type j;\nReturn type: Forward Iterator .\nRevision Date: 15th November 20042.2 Iterator Metafunctions Iterators 76\nPrecondition: iteris incrementable.\nSemantics: jis an iterator pointing to the next element in the sequence, or is past-the-end. If iteris a\nuser-defined iterator, the library-provided default implementation is equivalent to\ntypedef iter::next j;\nComplexity\nAmortized constant time.\nExample\ntypedef vector_c<int,1> v;\ntypedef begin<v>::type first;\ntypedef end<v>::type last;\nBOOST_MPL_ASSERT(( is_same< next<first>::type, last > ));\nSee also\nIterators,begin/end,prior,deref\n2.2.4 prior\nSynopsis\ntemplate<\ntypename Iterator\n>\nstruct prior\n{\ntypedef unspecified type;\n};\nDescription\nReturns the previous iterator in the sequence. [ Note: priorhas a number of overloaded meanings, depending on the\ntypeofitsargument. Forinstance,if XisanIntegralConstant ,prior<X> returnsandecremented IntegralConstant ofthe\nsame type. The following specification is iterator-specific. Please refer to the corresponding concept’s documentation\nfor the details of the alternative semantics — end note].\nHeader\n#include <boost/mpl/next_prior.hpp>\nParameters\nParameter Requirement Description\nIterator Forward Iterator . An iterator to decrement.\nRevision Date: 15th November 200477 Iterators 2.2 Iterator Metafunctions\nExpression semantics\nFor anyForward Iterator siter:\ntypedef prior<iter>::type j;\nReturn type: Forward Iterator .\nPrecondition: iteris decrementable.\nSemantics: jis an iterator pointing to the previous element in the sequence. If iteris a user-defined\niterator, the library-provided default implementation is equivalent to\ntypedef iter::prior j;\nComplexity\nAmortized constant time.\nExample\ntypedef vector_c<int,1> v;\ntypedef begin<v>::type first;\ntypedef end<v>::type last;\nBOOST_MPL_ASSERT(( is_same< prior<last>::type, first > ));\nSee also\nIterators,begin/end,next,deref\n2.2.5 deref\nSynopsis\ntemplate<\ntypename Iterator\n>\nstruct deref\n{\ntypedef unspecified type;\n};\nDescription\nDereferences an iterator.\nHeader\n#include <boost/mpl/deref.hpp>\nParameters\nRevision Date: 15th November 20042.2 Iterator Metafunctions Iterators 78\nParameter Requirement Description\nIterator Forward Iterator The iterator to dereference.\nExpression semantics\nFor anyForward Iterator siter:\ntypedef deref<iter>::type t;\nReturn type: A type.\nPrecondition: iteris dereferenceable.\nSemantics: tisidentical totheelement referencedby iter. 
Ifiterisa user-definediterator,the library-\nprovided default implementation is equivalent to\ntypedef iter::type t;\nComplexity\nAmortized constant time.\nExample\ntypedef vector<char,short,int,long> types;\ntypedef begin<types>::type iter;\nBOOST_MPL_ASSERT(( is_same< deref<iter>::type, char > ));\nSee also\nIterators,begin/end,next\n2.2.6 iterator_category\nSynopsis\ntemplate<\ntypename Iterator\n>\nstruct iterator_category\n{\ntypedef typename Iterator::category type;\n};\nDescription\nReturns one of the following iterator category tags:\n—forward_iterator_tag\n—bidirectional_iterator_tag\n—random_access_iterator_tag\nRevision Date: 15th November 200479 Iterators 2.2 Iterator Metafunctions\nHeader\n#include <boost/mpl/iterator_category.hpp>\n#include <boost/mpl/iterator_tags.hpp>\nParameters\nParameter Requirement Description\nIterator Forward Iterator The iterator to obtain a category for.\nExpression semantics\nFor anyForward Iterator siter:\ntypedef iterator_category<iter>::type tag;\nReturn type: Integral Constant .\nSemantics: tagisforward_iterator_tag ifiteris a model of Forward Iterator ,bidirectional_-\niterator_tag ifiteris a model of Bidirectional Iterator , orrandom_access_iterator_tag if\niteris a model of Random Access Iterator ;\nPostcondition: forward_iterator_tag::value < bidirectional_iterator_tag::value ,\nbidirectional_iterator_tag::value < random_access_iterator_tag::value .\nComplexity\nAmortized constant time.\nExample\ntemplate< typename Tag, typename Iterator >\nstruct algorithm_impl\n{\n//O(n) implementation\n};\ntemplate< typename Iterator >\nstruct algorithm_impl<random_access_iterator_tag,Iterator>\n{\n//O(1) implementation\n};\ntemplate< typename Iterator >\nstruct algorithm\n: algorithm_impl<\niterator_category<Iterator>::type\n, Iterator\n>\n{\n};\nRevision Date: 15th November 20042.2 Iterator Metafunctions Iterators 80\nSee also\nIterators,begin/end,advance,distance ,next\nRevision Date: 15th November 2004Chapter 3 Algorithms\nThe MPL provides a broad range of fundamental algorithms aimed to satisfy the majority of sequential compile-time\ndata processing needs. The algorithms include compile-time counterparts of many of the STL algorithms, iteration\nalgorithms borrowed from functional programming languages, and more.\nUnlikethealgorithmsintheC++StandardLibrary,whichoperateonimplict iteratorranges ,themajorityofMPLcoun-\nterpartstakeandreturn sequences . ThisderivationisnotdictatedbythefunctionalnatureofC++compile-timecompu-\ntationsperse,butratherbyadesiretoimprovegeneralusabilityofthelibrary,makingprogrammingwithcompile-time\ndata structures as enjoyableas possible.\nIn the spirit of the STL, MPL algorithms are generic, meaning that they are not tied to particular sequence class imple-\nmentations, and can operate on a wide range of arguments as long as they satisfy the documented requirements. The\nrequirements are formulated in terms of concepts. Under the hood, algorithms are decoupled from concrete sequence\nimplementations by operating on Iterators.\nAll MPL algorithms can be sorted into three major categories: iteration algorithms, querying algorithms, and transfor-\nmation algorithms. The transformation algorithms introduce an associated Inserterconcept, a rough equivalent for the\nnotionof OutputIterator intheStandardLibrary. 
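As a small sketch of this sequences-in, sequences-out style (numbers and odds below are illustrative names, not part of the library), the algorithm consumes a whole sequence and yields a new one rather than operating on an iterator range:

#include <boost/mpl/remove_if.hpp>
#include <boost/mpl/vector_c.hpp>
#include <boost/mpl/modulus.hpp>
#include <boost/mpl/equal_to.hpp>
#include <boost/mpl/int.hpp>
#include <boost/mpl/equal.hpp>
#include <boost/mpl/placeholders.hpp>
#include <boost/mpl/assert.hpp>

using namespace boost::mpl;

typedef vector_c<int,0,1,2,3,4,5> numbers;

// drop the even elements; both the input and the result are sequences
typedef remove_if<
      numbers
    , equal_to< modulus< _1, int_<2> >, int_<0> >
    >::type odds;

BOOST_MPL_ASSERT(( equal< odds, vector_c<int,1,3,5> > ));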
Moreover,everytransformationalgorithmprovidesa reverse_ coun-\nterpart,allowingforawiderrangeofefficienttransformations—acommonfunctionalitydocumentedbythe Reversible\nAlgorithm concept.\n3.1 Concepts\n3.1.1 Inserter\nDescription\nAnInserterisacompile-timesubstituteforSTL OutputIterator . Underthehood,it’ssimplyatypeholdingtwoentities:\nastateandanoperation . Whenpassedtoa transformationalgorithm ,theinserter’sbinaryoperationisinvokedforevery\nelementthatwouldnormallybewrittenintotheoutputiterator,withtheelementitself(asthesecondargument)andthe\nresult of the previous operation’s invocation — or, for the very first element, the inserter’s initial state.\nTechnically, instead of taking a single inserter parameter, transformation algorithms could accept the state and the\n“output” operation separately. Grouping these in a single parameter entity, however, brings the algorithms semantically\nand syntactically closer to their STL counterparts, significantly simplifying many ofthe common use cases.\nValid expressions\nIn the following table and subsequent specifications, inis a model of Inserter.\nExpression Type\nin::state Any type\nin::operation BinaryLambda Expression3.1 Concepts Algorithms 82\nExpression semantics\nExpression Semantics\nin::state The inserter’s initial state.\nin::operation The inserter’s “output” operation.\nExample\ntypedef transform<\nrange_c<int,0,10>\n, plus<_1,_1>\n, back_inserter< vector0<> >\n>::type result;\nModels\n—inserter\n—front_inserter\n—back_inserter\nSee also\nAlgorithms ,Transformation Algorithms ,inserter ,front_inserter ,back_inserter\n3.1.2 Reversible Algorithm\nDescription\nAReversible Algorithm is a member of a pair of transformation algorithms that iterate over their input sequence(s) in\nopposite directions. For each reversible algorithm xthere exists a counterpart algorithm reverse_x , that exhibits the\nexact semantics of xexcept that the elements of its input sequence argument(s) are processed in the reverseorder.\nExpression requirements\nInthefollowingtableandsubsequentspecifications, xisaplaceholdertokenfortheactual ReversibleAlgorithm ’sname,\ns1,s2,...snareForward Sequence s, and inis anInserter.\nExpression Type Complexity\nx<s1,s2,...sn, ...>::type Forward Sequence Unspecified.\nx<s1,s2,...sn, ...in>::type Any type Unspecified.\nreverse_x< s1,s2,...sn, ...>::type Forward Sequence Unspecified.\nreverse_x< s1,s2,...sn, ...in>::type Any type Unspecified.\nExpression semantics\ntypedef x< s1,s2,... sn,...>::type t;\nPrecondition: s1is anExtensible Sequence .\nRevision Date: 15th November 200483 Algorithms 3.1 Concepts\nSemantics: tis equivalent to\nx<\ns1,s2,... sn,...\n, back_inserter< clear< s1>::type >\n>::type\nifhas_push_back< s1>::value == true and\nreverse_x<\ns1,s2,... sn,...\n, front_inserter< clear< s1>::type >\n>::type\notherwise.\ntypedef x< s1,s2,... sn,...in>::type t;\nSemantics: tis the result of an xinvocation with arguments s1,s2,...sn,...in.\ntypedef reverse_x< s1,s2,... sn,... >::type t;\nPrecondition: s1is anExtensible Sequence .\nSemantics: tis equivalent to\nx<\ns1,s2,... sn,...\n, front_inserter< clear< s1>::type >\n>::type\nifhas_push_front< s1>::value == true and\nreverse_x<\ns1,s2,... sn,...\n, back_inserter< clear< s1>::type >\n>::type\notherwise.\ntypedef reverse_x< s1,s2,... sn,... 
in>::type t;\nSemantics: tis the result of a reverse_x invocation with arguments s1,s2,...sn,...in.\nExample\ntypedef transform<\nrange_c<int,0,10>\n, plus<_1,int_<7> >\n, back_inserter< vector0<> >\n>::type r1;\ntypedef transform< r1, minus<_1,int_<2> > >::type r2;\ntypedef reverse_transform<\nr2\n, minus<_1,5>\n, front_inserter< vector0<> >\n>::type r3;\nBOOST_MPL_ASSERT(( equal<r1, range_c<int,7,17> > ));\nRevision Date: 15th November 20043.2 Inserters Algorithms 84\nBOOST_MPL_ASSERT(( equal<r2, range_c<int,5,15> > ));\nBOOST_MPL_ASSERT(( equal<r3, range_c<int,0,10> > ));\nModels\n—transform\n—remove\n—replace\nSee also\nTransformation Algorithms ,Inserter\n3.2 Inserters\n3.2.1 back_inserter\nSynopsis\ntemplate<\ntypename Seq\n>\nstruct back_inserter\n{\n//unspecified\n//...\n};\nDescription\nInserts elements at the endof the sequence.\nHeader\n#include <boost/mpl/back_inserter.hpp>\nModel of\nInserter\nParameters\nParameter Requirement Description\nSeq Back Extensible Sequence A sequence to bind the inserter to.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Inserter.\nFor anyBack Extensible Sequence s:\nRevision Date: 15th November 200485 Algorithms 3.2 Inserters\nExpression Semantics\nback_inserter<s> AnInserter in, equivalent to\nstruct in : inserter<s,push_back<_1,_2> > {};\nComplexity\nAmortized constant time.\nExample\ntypedef copy<\nrange_c<int,5,10>\n, back_inserter< vector_c<int,0,1,2,3,4> >\n>::type range;\nBOOST_MPL_ASSERT(( equal< range, range_c<int,0,10> > ));\nSee also\nAlgorithms ,Inserter,Reversible Algorithm ,inserter ,front_inserter ,push_back\n3.2.2 front_inserter\nSynopsis\ntemplate<\ntypename Seq\n>\nstruct front_inserter\n{\n//unspecified\n//...\n};\nDescription\nInserts elements at the beginning of the sequence.\nHeader\n#include <boost/mpl/front_inserter.hpp>\nModel of\nInserter\nParameters\nRevision Date: 15th November 20043.2 Inserters Algorithms 86\nParameter Requirement Description\nSeq Front Extensible Sequence A sequence to bind the inserter to.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Inserter.\nFor anyFront Extensible Sequence s:\nExpression Semantics\nfront_inserter<s> AnInserter in, equivalent to\nstruct in : inserter<s,push_front<_1,_2> > {};\nComplexity\nAmortized constant time.\nExample\ntypedef reverse_copy<\nrange_c<int,0,5>\n, front_inserter< vector_c<int,5,6,7,8,9> >\n>::type range;\nBOOST_MPL_ASSERT(( equal< range, range_c<int,0,10> > ));\nSee also\nAlgorithms ,Inserter,Reversible Algorithm ,inserter ,back_inserter ,push_front\n3.2.3 inserter\nSynopsis\ntemplate<\ntypename State\n, typename Operation\n>\nstruct inserter\n{\ntypedef State state;\ntypedef Operation operation;\n};\nDescription\nA general-purpose model of the Inserterconcept.\nRevision Date: 15th November 200487 Algorithms 3.3 Iteration Algorithms\nHeader\n#include <boost/mpl/inserter.hpp>\nModel of\nInserter\nParameters\nParameter Requirement Description\nState Any type A initial state.\nOperation BinaryLambda Expression An output operation.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Inserter.\nFor any binary Lambda Expression opand arbitrary type state:\nExpression Semantics\ninserter<op,state> AnInserter.\nComplexity\nAmortized constant time.\nExample\ntemplate< typename N > struct is_odd : bool_< ( N::value % 2 ) > {};\ntypedef copy<\nrange_c<int,0,10>\n, inserter< 
// a filtering ’push_back’ inserter\nvector<>\n, if_< is_odd<_2>, push_back<_1,_2>, _1 >\n>\n>::type odds;\nBOOST_MPL_ASSERT(( equal< odds, vector_c<int,1,3,5,7,9>, equal_to<_,_> > ));\nSee also\nAlgorithms ,Inserter,Reversible Algorithm ,front_inserter ,back_inserter\n3.3 Iteration Algorithms\nIteration algorithms are the basic building blocks behind many of the MPL’s algorithms, and are usually the first place\nto look at when starting to build a new one. Abstracting away the details of sequence iteration and employing various\noptimizations such as recursion unrolling, they provide significant advantages over a hand-coded approach.\nRevision Date: 15th November 20043.3 Iteration Algorithms Algorithms 88\n3.3.1 fold\nSynopsis\ntemplate<\ntypename Sequence\n, typename State\n, typename ForwardOp\n>\nstruct fold\n{\ntypedef unspecified type;\n};\nDescription\nReturns the result of the successive application of binary ForwardOp to the result of the previous ForwardOp in-\nvocation ( Stateif it’s the first call) and every element of the sequence in the range [ begin<Sequence>::type ,\nend<Sequence>::type ) in order.\nHeader\n#include <boost/mpl/fold.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to iterate.\nState Any type The initial state for the first ForwardOp application.\nForwardOp BinaryLambda Expression The operation to be executed onforward traversal.\nExpression semantics\nFor anyForward Sequence s, binaryLambda Expression op, and arbitrary type state:\ntypedef fold<s,state,op>::type t;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef iter_fold< s,state,apply<op,_1,deref<_2> > >::type t;\nComplexity\nLinear. Exactly size<s>::value applications of op.\nExample\ntypedef vector<long,float,short,double,float,long,long double> types;\ntypedef fold<\ntypes\nRevision Date: 15th November 200489 Algorithms 3.3 Iteration Algorithms\n, int_<0>\n, if_< is_float<_2>,next<_1>,_1 >\n>::type number_of_floats;\nBOOST_MPL_ASSERT_RELATION( number_of_floats::value, ==, 4 );\nSee also\nAlgorithms ,accumulate ,reverse_fold ,iter_fold ,reverse_iter_fold ,copy,copy_if\n3.3.2 iter_fold\nSynopsis\ntemplate<\ntypename Sequence\n, typename State\n, typename ForwardOp\n>\nstruct iter_fold\n{\ntypedef unspecified type;\n};\nDescription\nReturnstheresultofthesuccessiveapplicationofbinary ForwardOp totheresultoftheprevious ForwardOp invocation\n(Stateifit’sthefirstcall)andeachiteratorintherange[ begin<Sequence>::type ,end<Sequence>::type )inorder.\nHeader\n#include <boost/mpl/iter_fold.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to iterate.\nState Any type The initial state for the first ForwardOp application.\nForwardOp BinaryLambda Expression The operation to be executed onforward traversal.\nExpression semantics\nFor anyForward Sequence s, binaryLambda Expression op, and an arbitrary type state:\ntypedef iter_fold<s,state,op>::type t;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef begin<Sequence>::type i 1;\ntypedef apply<op,state,i 1>::type state 1;\nRevision Date: 15th November 20043.3 Iteration Algorithms Algorithms 90\ntypedef next<i 1>::type i 2;\ntypedef apply<op,state 1,i2>::type state 2;\n...\ntypedef apply<op,state n−1,in>::type state n;\ntypedef next<i n>::type last;\ntypedef state nt;\nwhere n == size<s>::value andlastis identical to end<s>::type ; equivalent to typedef\nstate t; ifempty<s>::value == true .\nComplexity\nLinear. 
Exactly size<s>::value applications of op.\nExample\ntypedef vector_c<int,5,-1,0,7,2,0,-5,4> numbers;\ntypedef iter_fold<\nnumbers\n, begin<numbers>::type\n, if_< less< deref<_1>, deref<_2> >,_2,_1 >\n>::type max_element_iter;\nBOOST_MPL_ASSERT_RELATION( deref<max_element_iter>::type::value, ==, 7 );\nSee also\nAlgorithms ,reverse_iter_fold ,fold,reverse_fold ,copy\n3.3.3 reverse_fold\nSynopsis\ntemplate<\ntypename Sequence\n, typename State\n, typename BackwardOp\n, typename ForwardOp = _1\n>\nstruct reverse_fold\n{\ntypedef unspecified type;\n};\nDescription\nReturnstheresultofthesuccessiveapplicationofbinary BackwardOp totheresultoftheprevious BackwardOp invoca-\ntion ( Stateif it’s the first call) and every element in the range [ begin<Sequence>::type ,end<Sequence>::type )\ninreverseorder. If ForwardOp isprovided,thenitisappliedonforwardtraversaltoformtheresultthatispassedtothe\nfirstBackwardOp call.\nRevision Date: 15th November 200491 Algorithms 3.3 Iteration Algorithms\nHeader\n#include <boost/mpl/reverse_fold.hpp>\nParameters\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to iterate.\nState Any type The initial state for the first BackwardOp /ForwardOp\napplication.\nBackwardOp BinaryLambda Expression The operation to be executed on backward traversal.\nForwardOp BinaryLambda Expression The operation to be executed on forward traversal.\nExpression semantics\nFor anyForward Sequence s, binaryLambda Expression backward_op andforward_op , and arbitrary type state:\ntypedef reverse_fold< s,state,backward_op >::type t;\nReturn type: A type\nSemantics: Equivalent to\ntypedef reverse_iter_fold<\ns\n, state\n, apply<backward_op,_1,deref<_2> >\n>::type t;\ntypedef reverse_fold< s,state,backward_op,forward_op >::type t;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef reverse_fold<\nSequence\n, fold<s,state,forward_op>::type\n, backward_op\n>::type t;\nComplexity\nLinear. Exactly size<s>::value applications of backward_op andforward_op .\nExample\nRemove negative elements froma sequence2).\ntypedef list_c<int,5,-1,0,-7,-2,0,-5,4> numbers;\ntypedef list_c<int,-1,-7,-2,-5> negatives;\ntypedef reverse_fold<\nnumbers\nRevision Date: 15th November 20043.3 Iteration Algorithms Algorithms 92\n, list_c<int>\n, if_< less< _2,int_<0> >, push_front<_1,_2,>, _1 >\n>::type result;\nBOOST_MPL_ASSERT(( equal< negatives,result > ));\nSee also\nAlgorithms ,fold,reverse_iter_fold ,iter_fold\n3.3.4 reverse_iter_fold\nSynopsis\ntemplate<\ntypename Sequence\n, typename State\n, typename BackwardOp\n, typename ForwardOp = _1\n>\nstruct reverse_iter_fold\n{\ntypedef unspecified type;\n};\nDescription\nReturns the result of the successive application of binary BackwardOp to the result of the previous BackwardOp invo-\ncation ( Stateif it’s the first call) and each iterator in the range [ begin<Sequence>::type ,end<Sequence>::type )\nin reverse order. 
If ForwardOp is provided, then it’s applied on forward traversal to form the result which is passed to\nthe first BackwardOp call.\nHeader\n#include <boost/mpl/reverse_iter_fold.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to iterate.\nState Any type The initial state for the first BackwardOp /ForwardOp\napplication.\nBackwardOp BinaryLambda Expression The operation to be executed on backward traversal.\nForwardOp BinaryLambda Expression The operation to be executed on forward traversal.\nExpression semantics\nFor anyForward Sequence s, binaryLambda Expression backward_op andforward_op , and arbitrary type state:\n2)Seeremove_if for a more compact way to do this.\nRevision Date: 15th November 200493 Algorithms 3.3 Iteration Algorithms\ntypedef reverse_iter_fold< s,state,backward_op >::type t;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef begin<s>::type i 1;\ntypedef next<i 1>::type i 2;\n...\ntypedef next<i n>::type last;\ntypedef apply<backward_op,state,i n>::type state n;\ntypedef apply<backward_op,state n,in−1>::type state n−1;\n...\ntypedef apply<backward_op,state 2,i1>::type state 1;\ntypedef state 1t;\nwhere n == size<s>::value andlastis identical to end<s>::type ; equivalent to typedef\nstate t; ifempty<s>::value == true .\ntypedef reverse_iter_fold< s,state,backward_op,forward_op >::type t;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef reverse_iter_fold<\nSequence\n, iter_fold<s,state,forward_op>::type\n, backward_op\n>::type t;\nComplexity\nLinear. Exactly size<s>::value applications of backward_op andforward_op .\nExample\nBuild a list of iterators to the negative elements in a sequence.\ntypedef vector_c<int,5,-1,0,-7,-2,0,-5,4> numbers;\ntypedef list_c<int,-1,-7,-2,-5> negatives;\ntypedef reverse_iter_fold<\nnumbers\n, list<>\n, if_< less< deref<_2>,int_<0> >, push_front<_1,_2>, _1 >\n>::type iters;\nBOOST_MPL_ASSERT(( equal<\nnegatives\n, transform_view< iters,deref<_1> >\n> ));\nSee also\nAlgorithms ,iter_fold ,reverse_fold ,fold\nRevision Date: 15th November 20043.3 Iteration Algorithms Algorithms 94\n3.3.5 accumulate\nSynopsis\ntemplate<\ntypename Sequence\n, typename State\n, typename ForwardOp\n>\nstruct accumulate\n{\ntypedef unspecified type;\n};\nDescription\nReturns the result of the successive application of binary ForwardOp to the result of the previous ForwardOp in-\nvocation ( Stateif it’s the first call) and every element of the sequence in the range [ begin<Sequence>::type ,\nend<Sequence>::type ) in order. [ Note: accumulate is a synonym for fold—end note]\nHeader\n#include <boost/mpl/accumulate.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to iterate.\nState Any type The initial state for the first ForwardOp application.\nForwardOp BinaryLambda Expression The operation to be executed onforward traversal.\nExpression semantics\nFor anyForward Sequence s, binaryLambda Expression op, and arbitrary type state:\ntypedef accumulate<s,state,op>::type t;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef fold<s,state,op>::type t;\nComplexity\nLinear. 
Exactly size<s>::value applications of op.\nExample\ntypedef vector<long,float,short,double,float,long,long double> types;\ntypedef accumulate<\ntypes\nRevision Date: 15th November 200495 Algorithms 3.4 Querying Algorithms\n, int_<0>\n, if_< is_float<_2>,next<_1>,_1 >\n>::type number_of_floats;\nBOOST_MPL_ASSERT_RELATION( number_of_floats::value, ==, 4 );\nSee also\nAlgorithms ,fold,reverse_fold ,iter_fold ,reverse_iter_fold ,copy,copy_if\n3.4 Querying Algorithms\n3.4.1 find\nSynopsis\ntemplate<\ntypename Sequence\n, typename T\n>\nstruct find\n{\ntypedef unspecified type;\n};\nDescription\nReturns an iterator to the first occurrence of type Tin aSequence .\nHeader\n#include <boost/mpl/find.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to search in.\nT Any type A type to search for.\nExpression semantics\nFor anyForward Sequence sand arbitrary type t:\ntypedef find<s,t>::type i;\nReturn type: Forward Iterator .\nSemantics: Equivalent to\ntypedef find_if<s, is_same<_,t> >::type i;\nRevision Date: 15th November 20043.4 Querying Algorithms Algorithms 96\nComplexity\nLinear. At most size<s>::value comparisons for identity.\nExample\ntypedef vector<char,int,unsigned,long,unsigned long> types;\ntypedef find<types,unsigned>::type iter;\nBOOST_MPL_ASSERT(( is_same< deref<iter>::type, unsigned > ));\nBOOST_MPL_ASSERT_RELATION( iter::pos::value, ==, 2 );\nSee also\nQuerying Algorithms ,contains ,find_if,count,lower_bound\n3.4.2 find_if\nSynopsis\ntemplate<\ntypename Sequence\n, typename Pred\n>\nstruct find_if\n{\ntypedef unspecified type;\n};\nDescription\nReturns an iterator to the first element in Sequence that satisfies the predicate Pred.\nHeader\n#include <boost/mpl/find_if.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to search in.\nPred UnaryLambda Expression A search condition.\nExpression semantics\nFor anyForward Sequence sand unary Lambda Expression pred:\ntypedef find_if<s,pred>::type i;\nReturn type: Forward Iterator .\nSemantics: iis the first iterator in the range [ begin<s>::type ,end<s>::type ) such that\nRevision Date: 15th November 200497 Algorithms 3.4 Querying Algorithms\napply< pred,deref<i>::type >::type::value == true\nIf no such iterator exists, iis identical to end<s>::type .\nComplexity\nLinear. At most size<s>::value applications of pred.\nExample\ntypedef vector<char,int,unsigned,long,unsigned long> types;\ntypedef find_if<types, is_same<_1,unsigned> >::type iter;\nBOOST_MPL_ASSERT(( is_same< deref<iter>::type, unsigned > ));\nBOOST_MPL_ASSERT_RELATION( iter::pos::value, ==, 2 );\nSee also\nQuerying Algorithms ,find,count_if ,lower_bound\n3.4.3 contains\nSynopsis\ntemplate<\ntypename Sequence\n, typename T\n>\nstruct contains\n{\ntypedef unspecified type;\n};\nDescription\nReturns a true-valued Integral Constant if one or more elements in Sequence are identical to T.\nHeader\n#include <boost/mpl/contains.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to be examined.\nT Any type A type to search for.\nExpression semantics\nFor anyForward Sequence sand arbitrary type t:\nRevision Date: 15th November 20043.4 Querying Algorithms Algorithms 98\ntypedef contains<s,t>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef not_< is_same<\nfind<s,t>::type\n, end<s>::type\n> >::type r;\nComplexity\nLinear. 
At most size<s>::value comparisons for identity.\nExample\ntypedef vector<char,int,unsigned,long,unsigned long> types;\nBOOST_MPL_ASSERT_NOT(( contains<types,bool> ));\nSee also\nQuerying Algorithms ,find,find_if,count,lower_bound\n3.4.4 count\nSynopsis\ntemplate<\ntypename Sequence\n, typename T\n>\nstruct count\n{\ntypedef unspecified type;\n};\nDescription\nReturns the number of elements in a Sequence that are identical to T.\nHeader\n#include <boost/mpl/count.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to be examined.\nT Any type A type to search for.\nRevision Date: 15th November 200499 Algorithms 3.4 Querying Algorithms\nExpression semantics\nFor anyForward Sequence sand arbitrary type t:\ntypedef count<s,t>::type n;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef count_if< s,is_same<_,T> >::type n;\nComplexity\nLinear. Exactly size<s>::value comparisons for identity.\nExample\ntypedef vector<int,char,long,short,char,short,double,long> types;\ntypedef count<types, short>::type n;\nBOOST_MPL_ASSERT_RELATION( n::value, ==, 2 );\nSee also\nQuerying Algorithms ,count_if ,find,find_if,contains ,lower_bound\n3.4.5 count_if\nSynopsis\ntemplate<\ntypename Sequence\n, typename Pred\n>\nstruct count_if\n{\ntypedef unspecified type;\n};\nDescription\nReturns the number of elements in Sequence that satisfy the predicate Pred.\nHeader\n#include <boost/mpl/count_if.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to be examined.\nPred UnaryLambda Expression A count condition.\nRevision Date: 15th November 20043.4 Querying Algorithms Algorithms 100\nExpression semantics\nFor anyForward Sequence sand unary Lambda Expression pred:\ntypedef count_if<s,pred>::type n;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef lambda<pred>::type p;\ntypedef fold<\ns\n, long_<0>\n, if_< apply_wrap1<p,_2>, next<_1>, _1 >\n>::type n;\nComplexity\nLinear. Exactly size<s>::value applications of pred.\nExample\ntypedef vector<int,char,long,short,char,long,double,long> types;\nBOOST_MPL_ASSERT_RELATION( (count_if< types, is_float<_> >::value), ==, 1 );\nBOOST_MPL_ASSERT_RELATION( (count_if< types, is_same<_,char> >::value), ==, 2 );\nBOOST_MPL_ASSERT_RELATION( (count_if< types, is_same<_,void> >::value), ==, 0 );\nSee also\nQuerying Algorithms ,count,find,find_if,contains\n3.4.6 lower_bound\nSynopsis\ntemplate<\ntypename Sequence\n, typename T\n, typename Pred = less<_1,_2>\n>\nstruct lower_bound\n{\ntypedef unspecified type;\n};\nDescription\nReturns the first position inthe sorted Sequence where Tcould be inserted without violating the ordering.\nRevision Date: 15th November 2004101 Algorithms 3.4 Querying Algorithms\nHeader\n#include <boost/mpl/lower_bound.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sorted sequence to search in.\nT Any type A type to search a position for.\nPred BinaryLambda Expression A search criteria.\nExpression semantics\nFor any sorted Forward Sequence s, binaryLambda Expression pred, and arbitrary type x:\ntypedef lower_bound< s,x,pred >::type i;\nReturn type: Forward Iterator .\nSemantics: iisthefurthermostiteratorin[ begin<s>::type ,end<s>::type )suchthat,foreveryiterator\njin [begin<s>::type ,i),\napply< pred, deref<j>::type, x >::type::value == true\nComplexity\nThe number of comparisons is logarithmic: at most log 2(size<s>::value ) + 1. 
If sis aRandom Access Se-\nquencethenthenumberofstepsthroughtherangeisalsologarithmic;otherwise,thenumberofstepsisproportionalto\nsize<s>::value .\nExample\ntypedef vector_c<int,1,2,3,3,3,5,8> numbers;\ntypedef lower_bound< numbers, int_<3> >::type iter;\nBOOST_MPL_ASSERT_RELATION(\n(distance< begin<numbers>::type,iter >::value), ==, 2\n);\nBOOST_MPL_ASSERT_RELATION( deref<iter>::type::value, ==, 3 );\nSee also\nQuerying Algorithms ,upper_bound ,find,find_if,min_element\n3.4.7 upper_bound\nSynopsis\ntemplate<\ntypename Sequence\n, typename T\nRevision Date: 15th November 20043.4 Querying Algorithms Algorithms 102\n, typename Pred = less<_1,_2>\n>\nstruct upper_bound\n{\ntypedef unspecified type;\n};\nDescription\nReturns the last position inthe sorted Sequence where Tcould be inserted without violating the ordering.\nHeader\n#include <boost/mpl/upper_bound.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sorted sequence to search in.\nT Any type A type to search a position for.\nPred BinaryLambda Expression A search criteria.\nExpression semantics\nFor any sorted Forward Sequence s, binaryLambda Expression pred, and arbitrary type x:\ntypedef upper_bound< s,x,pred >::type i;\nReturn type: Forward Iterator\nSemantics: iisthefurthermostiteratorin[ begin<s>::type ,end<s>::type )suchthat,foreveryiterator\njin[begin<s>::type, i) ,\napply< pred, x, deref<j>::type >::type::value == false\nComplexity\nThe number of comparisons is logarithmic: at most log 2(size<s>::value ) + 1. If sis aRandom Access Se-\nquencethenthenumberofstepsthroughtherangeisalsologarithmic;otherwise,thenumberofstepsisproportionalto\nsize<s>::value .\nExample\ntypedef vector_c<int,1,2,3,3,3,5,8> numbers;\ntypedef upper_bound< numbers, int_<3> >::type iter;\nBOOST_MPL_ASSERT_RELATION(\n(distance< begin<numbers>::type,iter >::value), ==, 5\n);\nBOOST_MPL_ASSERT_RELATION( deref<iter>::type::value, ==, 5 );\nRevision Date: 15th November 2004103 Algorithms 3.4 Querying Algorithms\nSee also\nQuerying Algorithms ,lower_bound ,find,find_if,min_element\n3.4.8 min_element\nSynopsis\ntemplate<\ntypename Sequence\n, typename Pred = less<_1,_2>\n>\nstruct min_element\n{\ntypedef unspecified type;\n};\nDescription\nReturns an iterator to the smallest element in Sequence .\nHeader\n#include <boost/mpl/min_element.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to be searched.\nPred BinaryLambda Expression A comparison criteria.\nExpression semantics\nFor anyForward Sequence sand binary Lambda Expression pred:\ntypedef min_element<s,pred>::type i;\nReturn type: Forward Iterator .\nSemantics: iis the first iterator in [ begin<s>::type ,end<s>::type ) such that for every iterator jin\n[begin<s>::type ,end<s>::type ),\napply< pred, deref<j>::type, deref<i>::type >::type::value == false\nComplexity\nLinear. 
Zero comparisons if sis empty, otherwise exactly size<s>::value - 1 comparisons.\nExample\ntypedef vector<bool,char[50],long,double> types;\ntypedef min_element<\nRevision Date: 15th November 20043.4 Querying Algorithms Algorithms 104\ntransform_view< types,sizeof_<_1> >\n>::type iter;\nBOOST_MPL_ASSERT(( is_same< deref<iter::base>::type, bool> ));\nSee also\nQuerying Algorithms ,max_element ,find_if,upper_bound ,find\n3.4.9 max_element\nSynopsis\ntemplate<\ntypename Sequence\n, typename Pred = less<_1,_2>\n>\nstruct max_element\n{\ntypedef unspecified type;\n};\nDescription\nReturns an iterator to the largest element in Sequence .\nHeader\n#include <boost/mpl/max_element.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to be searched.\nPred BinaryLambda Expression A comparison criteria.\nExpression semantics\nFor anyForward Sequence sand binary Lambda Expression pred:\ntypedef max_element<s,pred>::type i;\nReturn type: Forward Iterator .\nSemantics: iis the first iterator in [ begin<s>::type ,end<s>::type ) such that for every iterator jin\n[begin<s>::type ,end<s>::type ),\napply< pred, deref<i>::type, deref<j>::type >::type::value == false\nComplexity\nLinear. Zero comparisons if sis empty, otherwise exactly size<s>::value - 1 comparisons.\nRevision Date: 15th November 2004105 Algorithms 3.4 Querying Algorithms\nExample\ntypedef vector<bool,char[50],long,double> types;\ntypedef max_element<\ntransform_view< types,sizeof_<_1> >\n>::type iter;\nBOOST_MPL_ASSERT(( is_same< deref<iter::base>::type, char[50]> ));\nSee also\nQuerying Algorithms ,min_element ,find_if,upper_bound ,find\n3.4.10 equal\nSynopsis\ntemplate<\ntypename Seq1\n, typename Seq2\n, typename Pred = is_same<_1,_2>\n>\nstruct equal\n{\ntypedef unspecified type;\n};\nDescription\nReturns a true-valued Integral Constant if the two sequences Seq1andSeq2are identical when compared element-by-\nelement.\nHeader\n#include <boost/mpl/equal.hpp>\nParameters\nParameter Requirement Description\nSeq1,Seq2 Forward Sequence Sequences to compare.\nPred BinaryLambda Expression A comparison criterion.\nExpression semantics\nFor anyForward Sequence ss1ands2and a binary Lambda Expression pred:\ntypedef equal<s1,s2,pred>::type c;\nReturn type: Integral Constant\nSemantics: c::value == true is and only if size<s1>::value == size<s2>::value and for every\niterator iin [begin<s1>::type ,end<s1>::type )deref<i>::type is identical to\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 106\nadvance< begin<s2>::type, distance< begin<s1>::type,i >::type >::type\nComplexity\nLinear. At most size<s1>::value comparisons.\nExample\ntypedef vector<char,int,unsigned,long,unsigned long> s1;\ntypedef list<char,int,unsigned,long,unsigned long> s2;\nBOOST_MPL_ASSERT(( equal<s1,s2> ));\nSee also\nQuerying Algorithms ,find,find_if\n3.5 Transformation Algorithms\nAccording to their name, MPL’s transformation , orsequence-building algorithms provide the tools for building new\nsequences from the existing ones by performing some kind of transformation. A typical transformation alogrithm takes\noneormoreinputsequencesandatransformationmetafunction/predicate,andreturnsanewsequencebuiltaccordingto\nthe algorithm’s semantics through the means of its Inserterargument, which plays a role similar to the role of run-time\nOutput Iterator .\nEvery transformation algorithm is a Reversible Algorithm , providing an accordingly named reverse_ counterpart\ncarrying the transformation in the reverse order. 
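For instance — a hedged sketch, with result1 and result2 as illustrative names — the same range can be copied forwards into a back-extensible vector or in reverse into a front-extensible list, producing identical element order; which member of the pair is cheaper depends on which end of the output sequence is extensible:

#include <boost/mpl/copy.hpp>
#include <boost/mpl/vector_c.hpp>
#include <boost/mpl/list_c.hpp>
#include <boost/mpl/back_inserter.hpp>
#include <boost/mpl/front_inserter.hpp>
#include <boost/mpl/range_c.hpp>
#include <boost/mpl/equal.hpp>
#include <boost/mpl/assert.hpp>

using namespace boost::mpl;

// vector is back-extensible: forward traversal + back_inserter
typedef copy<
      range_c<int,0,5>
    , back_inserter< vector_c<int> >
    >::type result1;

// list is front-extensible: reverse traversal + front_inserter
typedef reverse_copy<
      range_c<int,0,5>
    , front_inserter< list_c<int> >
    >::type result2;

BOOST_MPL_ASSERT(( equal< result1, range_c<int,0,5> > ));
BOOST_MPL_ASSERT(( equal< result2, range_c<int,0,5> > ));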
Thus, all sequence-building algorithms come in pairs, for instance\nreplace /reverse_replace . In presence of variability of the output sequence’s properties such as front or backward\nextensibility, the existence of the bidirectional algorithms allows for the most efficient way to perform the required\ntransformation.\n3.5.1 copy\nSynopsis\ntemplate<\ntypename Sequence\n, typename In = unspecified\n>\nstruct copy\n{\ntypedef unspecified type;\n};\nDescription\nReturns a copy of the originalsequence.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nRevision Date: 15th November 2004107 Algorithms 3.5 Transformation Algorithms\nHeader\n#include <boost/mpl/copy.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to copy.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, and anInserter in:\ntypedef copy<s,in>::type r;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef fold< s,in::state,in::operation >::type r;\nComplexity\nLinear. Exactly size<s>::value applications of in::operation .\nExample\ntypedef vector_c<int,0,1,2,3,4,5,6,7,8,9> numbers;\ntypedef copy<\nrange_c<int,10,20>\n, back_inserter< numbers >\n>::type result;\nBOOST_MPL_ASSERT_RELATION( size<result>::value, ==, 20 );\nBOOST_MPL_ASSERT(( equal< result,range_c<int,0,20> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,reverse_copy ,copy_if,transform\n3.5.2 copy_if\nSynopsis\ntemplate<\ntypename Sequence\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 108\n, typename Pred\n, typename In = unspecified\n>\nstruct copy_if\n{\ntypedef unspecified type;\n};\nDescription\nReturns a filtered copy of theoriginal sequence containing the elements that satisfythe predicate Pred.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/copy_if.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to copy.\nPred UnaryLambda Expression A copying condition.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, an unary Lambda Expression pred, and anInserter in:\ntypedef copy_if<s,pred,in>::type r;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef lambda<pred>::type p;\ntypedef lambda<in::operation>::type op;\ntypedef fold<\ns\n, in::state\n, eval_if<\napply_wrap1<p,_2>\n, apply_wrap2<op,_1,_2>\n, identity<_1>\n>\nRevision Date: 15th November 2004109 Algorithms 3.5 Transformation Algorithms\n>::type r;\nComplexity\nLinear. 
Exactly size<s>::value applications of pred, and at most size<s>::value applications of\nin::operation .\nExample\ntypedef copy_if<\nrange_c<int,0,10>\n, less< _1, int_<5> >\n, back_inserter< vector<> >\n>::type result;\nBOOST_MPL_ASSERT_RELATION( size<result>::value, ==, 5 );\nBOOST_MPL_ASSERT(( equal<result,range_c<int,0,5> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,reverse_copy_if ,copy,remove_if ,replace_if\n3.5.3 transform\nSynopsis\ntemplate<\ntypename Seq\n, typename Op\n, typename In = unspecified\n>\nstruct transform\n{\ntypedef unspecified type;\n};\ntemplate<\ntypename Seq1\n, typename Seq2\n, typename BinaryOp\n, typename In = unspecified\n>\nstruct transform\n{\ntypedef unspecified type;\n};\nDescription\ntransform is anoverloaded name :\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 110\n—transform<Seq,Op> returns a transformed copy of the original sequence produced by applying an unary trans-\nformation Opto every element in the [ begin<Sequence>::type ,end<Sequence>::type ) range.\n—transform<Seq1,Seq2,Op> returns a new sequence produced by applying a binary transformation Bina-\nryOpto a pair of elements (e 1, e21) from the corresponding [ begin<Seq1>::type ,end<Seq1>::type ) and\n[begin<Seq2>::type ,end<Seq2>::type ) ranges.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/transform.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence ,Seq1,Seq2 Forward Sequence Sequences to transform.\nOp,BinaryOp Lambda Expression A transformation.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence ss,s1ands2,Lambda Expression sopandop2, and anInserter in:\ntypedef transform<s,op,in>::type r;\nReturn type: A type.\nPostcondition: Equivalent to\ntypedef lambda<op>::type f;\ntypedef lambda<in::operation>::type in_op;\ntypedef fold<\ns\n, in::state\n, bind< in_op, _1, bind<f, _2> >\n>::type r;\ntypedef transform<s1,s2,op,in>::type r;\nReturn type: A type.\nPostcondition: Equivalent to\ntypedef lambda<op2>::type f;\ntypedef lambda<in::operation>::type in_op;\ntypedef fold<\nRevision Date: 15th November 2004111 Algorithms 3.5 Transformation Algorithms\npair_view<s1,s2>\n, in::state\n, bind<\nin_op\n, _1\n, bind<f, bind<first<>,_2>, bind<second<>,_2> >\n>\n>::type r;\nComplexity\nLinear. Exactly size<s>::value /size<s1>::value applications of op/op2andin::operation .\nExample\ntypedef vector<char,short,int,long,float,double> types;\ntypedef vector<char*,short*,int*,long*,float*,double*> pointers;\ntypedef transform< types,boost::add_pointer<_1> >::type result;\nBOOST_MPL_ASSERT(( equal<result,pointers> ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,reverse_transform ,copy,replace_if\n3.5.4 replace\nSynopsis\ntemplate<\ntypename Sequence\n, typename OldType\n, typename NewType\n, typename In = unspecified\n>\nstruct replace\n{\ntypedef unspecified type;\n};\nDescription\nReturns a copy of the originalsequence where every type identical to OldType has been replaced with NewType.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. 
See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/replace.hpp>\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 112\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence Forward Sequence A original sequence.\nOldType Any type A type to be replaced.\nNewType Any type A type to replace with.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, anInserter in, and arbitrary types xandy:\ntypedef replace<s,x,y,in>::type r;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef replace_if< s,y,is_same<_,x>,in >::type r;\nComplexity\nLinear. Performs exactly size<s>::value comparisons for identity / insertions.\nExample\ntypedef vector<int,float,char,float,float,double> types;\ntypedef vector<int,double,char,double,double,double> expected;\ntypedef replace< types,float,double >::type result;\nBOOST_MPL_ASSERT(( equal< result,expected > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,reverse_replace ,replace_if ,remove,transform\n3.5.5 replace_if\nSynopsis\ntemplate<\ntypename Sequence\n, typename Pred\n, typename In = unspecified\n>\nstruct replace_if\n{\nRevision Date: 15th November 2004113 Algorithms 3.5 Transformation Algorithms\ntypedef unspecified type;\n};\nDescription\nReturns a copy of the original sequence where every type that satisfies the predicate Predhas been replaced with\nNewType.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/replace_if.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence Forward Sequence An original sequence.\nPred UnaryLambda Expression A replacement condition.\nNewType Any type A type to replace with.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, an unary Lambda Expression pred, anInserter in, and arbitrary type x:\ntypedef replace_if<s,pred,x,in>::type r;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef lambda<pred>::type p;\ntypedef transform< s, if_< apply_wrap1<p,_1>,x,_1>, in >::type r;\nComplexity\nLinear. Performs exactly size<s>::value applications of pred, and at most size<s>::value insertions.\nExample\ntypedef vector_c<int,1,4,5,2,7,5,3,5> numbers;\ntypedef vector_c<int,1,4,0,2,0,0,3,0> expected;\ntypedef replace_if< numbers, greater<_,int_<4> >, int_<0> >::type result;\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 114\nBOOST_MPL_ASSERT(( equal< result,expected, equal_to<_,_> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,reverse_replace_if ,replace,remove_if ,transform\n3.5.6 remove\nSynopsis\ntemplate<\ntypename Sequence\n, typename T\n, typename In = unspecified\n>\nstruct remove\n{\ntypedef unspecified type;\n};\nDescription\nReturns a new sequence that contains all elements from [ begin<Sequence>::type ,end<Sequence>::type ) range\nexcept those that are identical to T.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. 
See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/remove.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence Forward Sequence An original sequence.\nT Any type A type to be removed.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, anInserter in, and arbitrary type x:\ntypedef remove<s,x,in>::type r;\nReturn type: A type.\nRevision Date: 15th November 2004115 Algorithms 3.5 Transformation Algorithms\nSemantics: Equivalent to\ntypedef remove_if< s,is_same<_,x>,in >::type r;\nComplexity\nLinear. Performs exactly size<s>::value comparisons for equality, and at most size<s>::value insertions.\nExample\ntypedef vector<int,float,char,float,float,double>::type types;\ntypedef remove< types,float >::type result;\nBOOST_MPL_ASSERT(( equal< result, vector<int,char,double> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,reverse_remove ,remove_if ,copy,replace\n3.5.7 remove_if\nSynopsis\ntemplate<\ntypename Sequence\n, typename Pred\n, typename In = unspecified\n>\nstruct remove_if\n{\ntypedef unspecified type;\n};\nDescription\nReturns a new sequence that contains all the elements from [ begin<Sequence>::type ,end<Sequence>::type )\nrange except those that satisfy the predicate Pred.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/remove_if.hpp>\nModel of\nReversible Algorithm\nParameters\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 116\nParameter Requirement Description\nSequence Forward Sequence An original sequence.\nPred UnaryLambda Expression A removal condition.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, and anInserter in, and an unary Lambda Expression pred:\ntypedef remove_if<s,pred,in>::type r;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef lambda<pred>::type p;\ntypedef lambda<in::operation>::type op;\ntypedef fold<\ns\n, in::state\n, eval_if<\napply_wrap1<p,_2>\n, identity<_1>\n, apply_wrap2<op,_1,_2>\n>\n>::type r;\nComplexity\nLinear. Performs exactly size<s>::value applications of pred, and at most size<s>::value insertions.\nExample\ntypedef vector_c<int,1,4,5,2,7,5,3,5>::type numbers;\ntypedef remove_if< numbers, greater<_,int_<4> > >::type result;\nBOOST_MPL_ASSERT(( equal< result,vector_c<int,1,4,2,3>,equal_to<_,_> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,reverse_remove_if ,remove,copy_if,replace_if\n3.5.8 unique\nSynopsis\ntemplate<\ntypename Seq\n, typename Pred\n, typename In = unspecified\nRevision Date: 15th November 2004117 Algorithms 3.5 Transformation Algorithms\n>\nstruct unique\n{\ntypedef unspecified type;\n};\nDescription\nReturns a sequence of the initial elements of every subrange of the original sequence Seqwhose elements are all the\nsame.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. 
See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/unique.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence Forward Sequence An original sequence.\nPred BinaryLambda Expression An equivalence relation.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, a binary Lambda Expression pred, and anInserter in:\ntypedef unique<s,pred,in>::type r;\nReturn type: A type.\nSemantics: Ifsize<s>::value <= 1 , then equivalent to\ntypedef copy<s,in>::type r;\notherwise equivalent to\ntypedef lambda<pred>::type p;\ntypedef lambda<in::operation>::type in_op;\ntypedef apply_wrap2<\nin_op\n, in::state\n, front<types>::type\n>::type in_state;\ntypedef fold<\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 118\ns\n, pair< in_state, front<s>::type >\n, eval_if<\napply_wrap2<p, second<_1>, _2>\n, identity< first<_1> >\n, apply_wrap2<in_op, first<_1>, _2>\n>\n>::type::first r;\nComplexity\nLinear. Performs exactly size<s>::value - 1 applications of pred, and at most size<s>::value insertions.\nExample\ntypedef vector<int,float,float,char,int,int,int,double> types;\ntypedef vector<int,float,char,int,double> expected;\ntypedef unique< types, is_same<_1,_2> >::type result;\nBOOST_MPL_ASSERT(( equal< result,expected > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,reverse_unique ,remove,copy_if,replace_if\n3.5.9 partition\nSynopsis\ntemplate<\ntypename Seq\n, typename Pred\n, typename In1 = unspecified\n, typename In2 = unspecified\n>\nstruct partition\n{\ntypedef unspecified type;\n};\nDescription\nReturnsapairofsequencestogethercontainingallelementsintherange[ begin<Seq>::type ,end<Seq>::type )split\ninto two groups based on the predicate Pred.partition is a synonym for stable_partition .\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/partition.hpp>\nRevision Date: 15th November 2004119 Algorithms 3.5 Transformation Algorithms\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSeq Forward Sequence An original sequence.\nPred UnaryLambda Expression A partitioning predicate.\nIn1,In2 Inserter Output inserters.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, an unary Lambda Expression pred, andInsertersin1andin2:\ntypedef partition<s,pred,in1,in2>::type r;\nReturn type: Apair.\nSemantics: Equivalent to\ntypedef stable_partition<s,pred,in1,in2>::type r;\nComplexity\nLinear. 
Exactly size<s>::value applications of pred, and size<s>::value of summarized in1::operation /\nin2::operation applications.\nExample\ntemplate< typename N > struct is_odd : bool_<(N::value % 2)> {};\ntypedef partition<\nrange_c<int,0,10>\n, is_odd<_1>\n, back_inserter< vector<> >\n, back_inserter< vector<> >\n>::type r;\nBOOST_MPL_ASSERT(( equal< r::first, vector_c<int,1,3,5,7,9> > ));\nBOOST_MPL_ASSERT(( equal< r::second, vector_c<int,0,2,4,6,8> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,reverse_partition ,stable_partition ,sort\n3.5.10 stable_partition\nSynopsis\ntemplate<\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 120\ntypename Seq\n, typename Pred\n, typename In1 = unspecified\n, typename In2 = unspecified\n>\nstruct stable_partition\n{\ntypedef unspecified type;\n};\nDescription\nReturnsapairofsequencestogethercontainingallelementsintherange[ begin<Seq>::type ,end<Seq>::type )split\ninto two groups based on the predicate Pred.stable_partition is guaranteed to preserve the relative order of the\nelements in the resulting sequences.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/stable_partition.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSeq Forward Sequence An original sequence.\nPred UnaryLambda Expression A partitioning predicate.\nIn1,In2 Inserter Output inserters.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, an unary Lambda Expression pred, andInsertersin1andin2:\ntypedef stable_partition<s,pred,in1,in2>::type r;\nReturn type: Apair.\nSemantics: Equivalent to\ntypedef lambda<pred>::type p;\ntypedef lambda<in1::operation>::type in1_op;\ntypedef lambda<in2::operation>::type in2_op;\ntypedef fold<\ns\n, pair< in1::state, in2::state >\nRevision Date: 15th November 2004121 Algorithms 3.5 Transformation Algorithms\n, if_<\napply_wrap1<p,_2>\n, pair< apply_wrap2<in1_op,first<_1>,_2>, second<_1> >\n, pair< first<_1>, apply_wrap2<in2_op,second<_1>,_2> >\n>\n>::type r;\nComplexity\nLinear. Exactly size<s>::value applications of pred, and size<s>::value of summarized in1::operation /\nin2::operation applications.\nExample\ntemplate< typename N > struct is_odd : bool_<(N::value % 2)> {};\ntypedef stable_partition<\nrange_c<int,0,10>\n, is_odd<_1>\n, back_inserter< vector<> >\n, back_inserter< vector<> >\n>::type r;\nBOOST_MPL_ASSERT(( equal< r::first, vector_c<int,1,3,5,7,9> > ));\nBOOST_MPL_ASSERT(( equal< r::second, vector_c<int,0,2,4,6,8> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,reverse_stable_partition ,partition ,sort,transform\n3.5.11 sort\nSynopsis\ntemplate<\ntypename Seq\n, typename Pred = less<_1,_2>\n, typename In = unspecified\n>\nstruct sort\n{\ntypedef unspecified type;\n};\nDescription\nReturns a new sequence of all elements in the range [ begin<Seq>::type ,end<Seq>::type ) sorted according to the\nordering relation Pred.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. 
See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 122\nHeader\n#include <boost/mpl/sort.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSeq Forward Sequence An original sequence.\nPred BinaryLambda Expression An ordering relation.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, a binary Lambda Expression pred, and anInserter in:\ntypedef sort<s,pred,in>::type r;\nReturn type: A type.\nSemantics: Ifsize<s>::value <= 1 , equivalent to\ntypedef copy<s,in>::type r;\notherwise equivalent to\ntypedef back_inserter< vector<> > aux_in;\ntypedef lambda<pred>::type p;\ntypedef begin<s>::type pivot;\ntypedef partition<\niterator_range< next<pivot>::type, end<s>::type >\n, apply_wrap2<p,_1,deref<pivot>::type>\n, aux_in\n, aux_in\n>::type partitioned;\ntypedef sort<partitioned::first,p,aux_in >::type part1;\ntypedef sort<partitioned::second,p,aux_in >::type part2;\ntypedef copy<\njoint_view<\njoint_view<part1,single_view< deref<pivot>::type > >\n, part2\n>\n, in\n>::type r;\nRevision Date: 15th November 2004123 Algorithms 3.5 Transformation Algorithms\nComplexity\nAverageO(n log(n)) wheren==size<s>::value , quadratic at worst.\nExample\ntypedef vector_c<int,3,4,0,-5,8,-1,7> numbers;\ntypedef vector_c<int,-5,-1,0,3,4,7,8> expected;\ntypedef sort<numbers>::type result;\nBOOST_MPL_ASSERT(( equal< result, expected, equal_to<_,_> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,partition\n3.5.12 reverse\nSynopsis\ntemplate<\ntypename Sequence\n, typename In = unspecified\n>\nstruct reverse\n{\ntypedef unspecified type;\n};\nDescription\nReturns a reversed copy of theoriginal sequence. reverse is a synonym for reverse_copy .\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/reverse.hpp>\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to reverse.\nIn Inserter An inserter.\nExpression semantics\nFor anyForward Sequence s, and anInserter in:\ntypedef reverse<s,in>::type r;\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 124\nReturn type: A type.\nSemantics: Equivalent to\ntypedef reverse_copy<s,in>::type r;\nComplexity\nLinear.\nExample\ntypedef vector_c<int,9,8,7,6,5,4,3,2,1,0> numbers;\ntypedef reverse< numbers >::type result;\nBOOST_MPL_ASSERT(( equal< result, range_c<int,0,10> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,reverse_copy ,copy,copy_if\n3.5.13 reverse_copy\nSynopsis\ntemplate<\ntypename Sequence\n, typename In = unspecified\n>\nstruct reverse_copy\n{\ntypedef unspecified type;\n};\nDescription\nReturns a reversed copy of theoriginal sequence.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. 
See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/copy.hpp>\nModel of\nReversible Algorithm\nParameters\nRevision Date: 15th November 2004125 Algorithms 3.5 Transformation Algorithms\nParameter Requirement Description\nSequence Forward Sequence A sequence to copy.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, and anInserter in:\ntypedef reverse_copy<s,in>::type r;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef reverse_fold< s,in::state,in::operation >::type r;\nComplexity\nLinear. Exactly size<s>::value applications of in::operation .\nExample\ntypedef list_c<int,10,11,12,13,14,15,16,17,18,19>::type numbers;\ntypedef reverse_copy<\nrange_c<int,0,10>\n, front_inserter< numbers >\n>::type result;\nBOOST_MPL_ASSERT_RELATION( size<result>::value, ==, 20 );\nBOOST_MPL_ASSERT(( equal< result,range_c<int,0,20> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,copy,reverse_copy_if ,reverse_transform\n3.5.14 reverse_copy_if\nSynopsis\ntemplate<\ntypename Sequence\n, typename Pred\n, typename In = unspecified\n>\nstruct reverse_copy_if\n{\ntypedef unspecified type;\n};\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 126\nDescription\nReturns a reversed, filtered copy of the original sequence containing the elements thatsatisfy the predicate Pred.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/copy_if.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence Forward Sequence A sequence to copy.\nPred UnaryLambda Expression A copying condition.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, an unary Lambda Expression pred, and anInserter in:\ntypedef reverse_copy_if<s,pred,in>::type r;\nReturn type: A type\nSemantics: Equivalent to\ntypedef lambda<pred>::type p;\ntypedef lambda<in::operation>::type op;\ntypedef reverse_fold<\ns\n, in::state\n, eval_if<\napply_wrap1<p,_2>\n, apply_wrap2<op,_1,_2>\n, identity<_1>\n>\n>::type r;\nComplexity\nLinear. 
Exactly size<s>::value applications of pred, and at most size<s>::value applications of\nin::operation .\nRevision Date: 15th November 2004127 Algorithms 3.5 Transformation Algorithms\nExample\ntypedef reverse_copy_if<\nrange_c<int,0,10>\n, less< _1, int_<5> >\n, front_inserter< vector<> >\n>::type result;\nBOOST_MPL_ASSERT_RELATION( size<result>::value, ==, 5 );\nBOOST_MPL_ASSERT(( equal<result,range_c<int,0,5> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,copy_if,reverse_copy ,remove_if ,replace_if\n3.5.15 reverse_transform\nSynopsis\ntemplate<\ntypename Seq\n, typename Op\n, typename In = unspecified\n>\nstruct reverse_transform\n{\ntypedef unspecified type;\n};\ntemplate<\ntypename Seq1\n, typename Seq2\n, typename BinaryOp\n, typename In = unspecified\n>\nstruct reverse_transform\n{\ntypedef unspecified type;\n};\nDescription\nreverse_transform is anoverloaded name :\n—reverse_transform<Seq,Op> returns a reversed, transformed copy of the original sequence produced by ap-\nplyinganunarytransformation Optoeveryelementinthe[ begin<Sequence>::type ,end<Sequence>::type )\nrange.\n—reverse_transform<Seq1,Seq2,Op> returns a new sequence produced by applying a binary transformation\nBinaryOp to a pair of elements (e 1, e21) from the corresponding [ begin<Seq1>::type ,end<Seq1>::type )\nand [ begin<Seq2>::type ,end<Seq2>::type ) ranges in reverse order.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 128\nHeader\n#include <boost/mpl/transform.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence ,Seq1,Seq2 Forward Sequence Sequences to transform.\nOp,BinaryOp Lambda Expression A transformation.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence ss,s1ands2,Lambda Expression sopandop2, and anInserter in:\ntypedef reverse_transform<s,op,in>::type r;\nReturn type: A type.\nPostcondition: Equivalent to\ntypedef lambda<op>::type f;\ntypedef lambda<in::operation>::type in_op;\ntypedef reverse_fold<\ns\n, in::state\n, bind< in_op, _1, bind<f, _2> >\n>::type r;\ntypedef transform<s1,s2,op,in>::type r;\nReturn type: A type.\nPostcondition: Equivalent to\ntypedef lambda<op2>::type f;\ntypedef lambda<in::operation>::type in_op;\ntypedef reverse_fold<\npair_view<s1,s2>\n, in::state\n, bind<\nin_op\n, _1\n, bind<f, bind<first<>,_2>, bind<second<>,_2> >\n>\n>::type r;\nRevision Date: 15th November 2004129 Algorithms 3.5 Transformation Algorithms\nComplexity\nLinear. 
Exactly size<s>::value /size<s1>::value applications of op/op2andin::operation .\nExample\ntypedef vector<char,short,int,long,float,double> types;\ntypedef vector<double*,float*,long*,int*,short*,char*> pointers;\ntypedef reverse_transform< types,boost::add_pointer<_1> >::type result;\nBOOST_MPL_ASSERT(( equal<result,pointers> ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,transform ,reverse_copy ,replace_if\n3.5.16 reverse_replace\nSynopsis\ntemplate<\ntypename Sequence\n, typename OldType\n, typename NewType\n, typename In = unspecified\n>\nstruct reverse_replace\n{\ntypedef unspecified type;\n};\nDescription\nReturnsareversedcopyoftheoriginalsequencewhereeverytypeidenticalto OldType hasbeenreplacedwith NewType.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/replace.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence Forward Sequence A original sequence.\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 130\nParameter Requirement Description\nOldType Any type A type to be replaced.\nNewType Any type A type to replace with.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, anInserter in, and arbitrary types xandy:\ntypedef reverse_replace<s,x,y,in>::type r;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef reverse_replace_if< s,y,is_same<_,x>,in >::type r;\nComplexity\nLinear. Performs exactly size<s>::value comparisons for identity / insertions.\nExample\ntypedef vector<int,float,char,float,float,double> types;\ntypedef vector<double,double,double,char,double,int> expected;\ntypedef reverse_replace< types,float,double >::type result;\nBOOST_MPL_ASSERT(( equal< result,expected > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,replace,reverse_replace_if ,remove,reverse_transform\n3.5.17 reverse_replace_if\nSynopsis\ntemplate<\ntypename Sequence\n, typename Pred\n, typename In = unspecified\n>\nstruct reverse_replace_if\n{\ntypedef unspecified type;\n};\nRevision Date: 15th November 2004131 Algorithms 3.5 Transformation Algorithms\nDescription\nReturns a reversed copy of the original sequence where every type that satisfies the predicate Predhas been replaced\nwith NewType.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/replace_if.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence Forward Sequence An original sequence.\nPred UnaryLambda Expression A replacement condition.\nNewType Any type A type to replace with.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, an unary Lambda Expression pred, anInserter in, and arbitrary type x:\ntypedef reverse_replace_if<s,pred,x,in>::type r;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef lambda<pred>::type p;\ntypedef reverse_transform< s, if_< apply_wrap1<p,_1>,x,_-\n1>, in >::type r;\nComplexity\nLinear. 
Performs exactly size<s>::value applications of pred, and at most size<s>::value insertions.\nExample\ntypedef vector_c<int,1,4,5,2,7,5,3,5> numbers;\ntypedef vector_c<int,1,4,0,2,0,0,3,0> expected;\ntypedef reverse_replace_if<\nnumbers\n, greater< _, int_<4> >\n, int_<0>\n, front_inserter< vector<> >\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 132\n>::type result;\nBOOST_MPL_ASSERT(( equal< result,expected, equal_to<_,_> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,replace_if ,reverse_replace ,remove_if ,transform\n3.5.18 reverse_remove\nSynopsis\ntemplate<\ntypename Sequence\n, typename T\n, typename In = unspecified\n>\nstruct reverse_remove\n{\ntypedef unspecified type;\n};\nDescription\nReturns a new sequence that contains all elements from [ begin<Sequence>::type ,end<Sequence>::type ) range\nin reverse order except those that are identical to T.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/remove.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence Forward Sequence An original sequence.\nT Any type A type to be removed.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, anInserter in, and arbitrary type x:\nRevision Date: 15th November 2004133 Algorithms 3.5 Transformation Algorithms\ntypedef reverse_remove<s,x,in>::type r;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef reverse_remove_if< s,is_same<_,x>,in >::type r;\nComplexity\nLinear. Performs exactly size<s>::value comparisons for equality, and at most size<s>::value insertions.\nExample\ntypedef vector<int,float,char,float,float,double>::type types;\ntypedef reverse_remove< types,float >::type result;\nBOOST_MPL_ASSERT(( equal< result, vector<double,char,int> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,remove,reverse_remove_if ,reverse_copy ,transform ,re-\nplace\n3.5.19 reverse_remove_if\nSynopsis\ntemplate<\ntypename Sequence\n, typename Pred\n, typename In = unspecified\n>\nstruct reverse_remove_if\n{\ntypedef unspecified type;\n};\nDescription\nReturns a new sequence that contains all the elements from [ begin<Sequence>::type ,end<Sequence>::type )\nrange in reverse order except those that satisfy the predicate Pred.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. 
See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/remove_if.hpp>\nModel of\nReversible Algorithm\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 134\nParameters\nParameter Requirement Description\nSequence Forward Sequence An original sequence.\nPred UnaryLambda Expression A removal condition.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, and anInserter in, and an unary Lambda Expression pred:\ntypedef reverse_remove_if<s,pred,in>::type r;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef lambda<pred>::type p;\ntypedef lambda<in::operation>::type op;\ntypedef reverse_fold<\ns\n, in::state\n, eval_if<\napply_wrap1<p,_2>\n, identity<_1>\n, apply_wrap2<op,_1,_2>\n>\n>::type r;\nComplexity\nLinear. Performs exactly size<s>::value applications of pred, and at most size<s>::value insertions.\nExample\ntypedef vector_c<int,1,4,5,2,7,5,3,5>::type numbers;\ntypedef reverse_remove_if< numbers, greater<_,int_<4> > >::type result;\nBOOST_MPL_ASSERT(( equal< result,vector_c<int,3,2,4,1>,equal_to<_,_> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,remove_if ,reverse_remove ,reverse_copy_if ,replace_if\n3.5.20 reverse_unique\nSynopsis\ntemplate<\ntypename Seq\nRevision Date: 15th November 2004135 Algorithms 3.5 Transformation Algorithms\n, typename Pred\n, typename In = unspecified\n>\nstruct reverse_unique\n{\ntypedef unspecified type;\n};\nDescription\nReturns a sequence of the initial elements of every subrange of the reversed original sequence Seqwhose elements are\nall the same.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/unique.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSequence Forward Sequence An original sequence.\nPred BinaryLambda Expression An equivalence relation.\nIn Inserter An inserter.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, a binary Lambda Expression pred, and anInserter in:\ntypedef reverse_unique<s,pred,in>::type r;\nReturn type: A type.\nSemantics: Ifsize<s>::value <= 1 , then equivalent to\ntypedef reverse_copy<s,in>::type r;\notherwise equivalent to\ntypedef lambda<pred>::type p;\ntypedef lambda<in::operation>::type in_op;\ntypedef apply_wrap2<\nin_op\n, in::state\n, front<types>::type\n>::type in_state;\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 136\ntypedef reverse_fold<\ns\n, pair< in_state, front<s>::type >\n, eval_if<\napply_wrap2<p, second<_1>, _2>\n, identity< first<_1> >\n, apply_wrap2<in_op, first<_1>, _2>\n>\n>::type::first r;\nComplexity\nLinear. 
Performs exactly size<s>::value - 1 applications of pred, and at most size<s>::value insertions.\nExample\ntypedef vector<int,float,float,char,int,int,int,double> types;\ntypedef vector<double,int,char,float,int> expected;\ntypedef reverse_unique< types, is_same<_1,_2> >::type result;\nBOOST_MPL_ASSERT(( equal< result,expected > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,unique,reverse_remove ,reverse_copy_if ,replace_if\n3.5.21 reverse_partition\nSynopsis\ntemplate<\ntypename Seq\n, typename Pred\n, typename In1 = unspecified\n, typename In2 = unspecified\n>\nstruct reverse_partition\n{\ntypedef unspecified type;\n};\nDescription\nReturnsapairofsequencestogethercontainingallelementsintherange[ begin<Seq>::type ,end<Seq>::type )split\ninto two groups based on the predicate Pred.reverse_partition is a synonym for reverse_stable_partition .\n[Note:This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nRevision Date: 15th November 2004137 Algorithms 3.5 Transformation Algorithms\nHeader\n#include <boost/mpl/partition.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSeq Forward Sequence An original sequence.\nPred UnaryLambda Expression A partitioning predicate.\nIn1,In2 Inserter Output inserters.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, an unary Lambda Expression pred, andInsertersin1andin2:\ntypedef reverse_partition<s,pred,in1,in2>::type r;\nReturn type: Apair.\nSemantics: Equivalent to\ntypedef reverse_stable_partition<s,pred,in1,in2>::type r;\nComplexity\nLinear. Exactly size<s>::value applications of pred, and size<s>::value of summarized in1::operation /\nin2::operation applications.\nExample\ntemplate< typename N > struct is_odd : bool_<(N::value % 2)> {};\ntypedef partition<\nrange_c<int,0,10>\n, is_odd<_1>\n, back_inserter< vector<> >\n, back_inserter< vector<> >\n>::type r;\nBOOST_MPL_ASSERT(( equal< r::first, vector_c<int,9,7,5,3,1> > ));\nBOOST_MPL_ASSERT(( equal< r::second, vector_c<int,8,6,4,2,0> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,partition ,reverse_stable_partition ,sort\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 138\n3.5.22 reverse_stable_partition\nSynopsis\ntemplate<\ntypename Seq\n, typename Pred\n, typename In1 = unspecified\n, typename In2 = unspecified\n>\nstruct reverse_stable_partition\n{\ntypedef unspecified type;\n};\nDescription\nReturnsapairofsequencestogethercontainingallelementsintherange[ begin<Seq>::type ,end<Seq>::type )split\ninto two groups based on the predicate Pred.reverse_stable_partition is guaranteed to preserve the reversed\nrelative order of the elements in the resulting sequences.\n[Note:This wording applies to a no-inserter version(s) of the algorithm. 
See the Expression semantics subsection for a\nprecise specification of thealgorithm’s details in all cases — end note]\nHeader\n#include <boost/mpl/stable_partition.hpp>\nModel of\nReversible Algorithm\nParameters\nParameter Requirement Description\nSeq Forward Sequence An original sequence.\nPred UnaryLambda Expression A partitioning predicate.\nIn1,In2 Inserter Output inserters.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Reversible Algorithm .\nFor anyForward Sequence s, an unary Lambda Expression pred, andInsertersin1andin2:\ntypedef reverse_stable_partition<s,pred,in1,in2>::type r;\nReturn type: Apair.\nSemantics: Equivalent to\ntypedef lambda<pred>::type p;\ntypedef lambda<in1::operation>::type in1_op;\ntypedef lambda<in2::operation>::type in2_op;\nRevision Date: 15th November 2004139 Algorithms 3.5 Transformation Algorithms\ntypedef reverse_fold<\ns\n, pair< in1::state, in2::state >\n, if_<\napply_wrap1<p,_2>\n, pair< apply_wrap2<in1_op,first<_1>,_2>, second<_1> >\n, pair< first<_1>, apply_wrap2<in2_op,second<_1>,_2> >\n>\n>::type r;\nComplexity\nLinear. Exactly size<s>::value applications of pred, and size<s>::value of summarized in1::operation /\nin2::operation applications.\nExample\ntemplate< typename N > struct is_odd : bool_<(N::value % 2)> {};\ntypedef reverse_stable_partition<\nrange_c<int,0,10>\n, is_odd<_1>\n, back_inserter< vector<> >\n, back_inserter< vector<> >\n>::type r;\nBOOST_MPL_ASSERT(( equal< r::first, vector_c<int,9,7,5,3,1> > ));\nBOOST_MPL_ASSERT(( equal< r::second, vector_c<int,8,6,4,2,0> > ));\nSee also\nTransformation Algorithms ,Reversible Algorithm ,stable_partition ,reverse_partition ,sort,transform\nRevision Date: 15th November 20043.5 Transformation Algorithms Algorithms 140\nRevision Date: 15th November 2004Chapter 4 Metafunctions\nThe MPL includes a number of predefined metafunctions that can be roughly classified in two categories: general pur-\npose metafunctions ,dealingwithconditional typeselection andhigher-ordermetafunction invocation ,composition ,and\nargument binding , and numeric metafunctions , incapsulating built-in and user-defined arithmetic ,comparison ,logical,\nandbitwiseoperations.\nGiven that it is possible to perform integer numeric computations at compile time using the conventional operators\nnotation,theneedforthesecondcategorymightbenotobvious,butitinfactplaysacentalroleinmakingprogramming\nwith MPL seemingly effortless. In particular, there are at least two contexts where built-in language facilities fall\nshort3):\n1)Passing a computation to an algorithm.\n2)Performing a computationon non-integer data.\nThe second use case deserves special attention. In contrast to the built-in, strictly integer compile-time arithmetics, the\nMPLnumericmetafunctionsare polymorphic ,withsupportfor mixed-typearithmetics . Thismeansthattheycanoperate\non a variety of numeric types — for instance, rational, fixed-point or complex numbers, — and that, in general, you are\nallowed to freely intermix these types within a single expression. 
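For the built-in integral wrappers, a minimal sketch of such mixing (assuming <boost/mpl/plus.hpp>,
<boost/mpl/int.hpp>, <boost/mpl/long.hpp> and <boost/mpl/assert.hpp> are included) looks like this:
typedef plus< int_<2>, long_<3> >::type r; // int_ and long_ freely mixed in one expression
BOOST_MPL_ASSERT_RELATION( r::value, ==, 5 );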
See Numeric Metafunction concept for more details\non the MPL numeric infrastructure.\nTo reduce a negative syntactical impact of the metafunctions notation over the infix operator notation, all numeric\nmetafunctions allow to pass up to N arguments, where N is defined by the value of BOOST_MPL_LIMIT_METAFUNC-\nTION_ARITY configuration macro.\n4.1 Concepts\n4.1.1 Metafunction\nDescription\nAmetafunction isaclassoraclasstemplatethatrepresentsafunctioninvocableatcompile-time. Annon-nullarymeta-\nfunctionisinvokedbyinstantiatingtheclasstemplatewithparticulartemplateparameters(metafunctionarguments);the\nresult of the metafunction application is accessible through the instantiation’s nested typetypedef. All metafunction’s\narguments must be types (i.e. only type template parameters are allowed). A metafunction can have a variable number\nof parameters. A nullary metafunction is represented as a (template) class with a nested typetypename member.\nExpression requirements\nIn the following table and subsequent specifications, fis aMetafunction .\n3)All other considerations aside, as of the time of this writing (early 2004), using built-in operators on integral constants still often present a\nportability problem — many compilers cannot handle particular forms of expressions, forcing us to use conditional compilation. Because MPL\nnumeric metafunctions work on types and encapsulate these kind of workarounds internally, they elude these problems, so if you aim for portability,\nit is generally adviced to use them in the place of the conventional operators, even at the price of slightly decreased readability.4.1 Concepts Metafunctions 142\nExpression Type Complexity\nExpression Type Complexity\nf::type Any type Unspecified.\nf<>::type Any type Unspecified.\nf<a1,..,an>::type Any type Unspecified.\nExpression semantics\ntypedef f::type x;\nPrecondition: fis a nullary Metafunction ;f::type is atype-name .\nSemantics: xis the result of the metafunction invocation.\ntypedef f<>::type x;\nPrecondition: fis a nullary Metafunction ;f<>::type is atype-name .\nSemantics: xis the result of the metafunction invocation.\ntypedef f<a1, ... an>::type x;\nPrecondition: fis ann-aryMetafunction ;a1,...anare types; f<a1,...an>::type is atype-name .\nSemantics: xis the result of the metafunction invocation with the actual arguments a1,...an.\nModels\n—identity\n—plus\n—begin\n—insert\n—fold\nSee also\nMetafunctions ,Metafunction Class ,Lambda Expression ,invocation ,apply,lambda,bind\n4.1.2 Metafunction Class\nSummary\nAmetafunctionclass isacertainformofmetafunctionrepresentationthatenableshigher-ordermetaprogramming. More\nprecisely, it’s a class with a publicly-accessible nested Metafunction called apply. 
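For illustration only (plus_f is a hypothetical name rather than a library component), such a class might look as
follows:
struct plus_f
{
    template< typename N1, typename N2 > struct apply
        : plus<N1,N2>
    {
    };
};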
Correspondingly, a metafunction\nclass invocation is defined asinvocation of its nested applymetafunction.\nExpression requirements\nIn the following table and subsequent specifications, fis aMetafunction Class .\nRevision Date: 15th November 2004143 Metafunctions 4.1 Concepts\nExpression Type Complexity\nf::apply::type Any type Unspecified.\nf::apply<>::type Any type Unspecified.\nf::apply<a1,...an>::type Any type Unspecified.\nExpression semantics\ntypedef f::apply::type x;\nPrecondition: fis a nullary Metafunction Class ;f::apply::type is atype-name .\nSemantics: xis the result of the metafunction class invocation.\ntypedef f::apply<>::type x;\nPrecondition: fis a nullary Metafunction Class ;f::apply<>::type is atype-name .\nSemantics: xis the result of the metafunction class invocation.\ntypedef f::apply<a1, ...an>::type x;\nPrecondition: fis ann-ary metafunction class; applyis aMetafunction .\nSemantics: xis the result of the metafunction class invocation with the actual arguments a1,...an.\nModels\n—always\n—arg\n—quote\n—numeric_cast\n—unpack_args\nSee also\nMetafunctions ,Metafunction ,Lambda Expression ,invocation ,apply_wrap ,bind,quote\n4.1.3 Lambda Expression\nDescription\nALambda Expression is a compile-time invocable entity in either of the following two forms:\n—Metafunction Class\n—Placeholder Expression\nMost of the MPL components accept either of those, and the concept gives us a consice way to describe these require-\nments.\nExpression requirements\nSee corresponding Metafunction Class andPlaceholder Expression specifications.\nRevision Date: 15th November 20044.1 Concepts Metafunctions 144\nModels\n—always\n—unpack_args\n—plus<_, int_<2> >\n—if_< less<_1, int_<7> >, plus<_1,_2>, _1 >\nSee also\nMetafunctions ,Placeholders ,apply,lambda\n4.1.4 Placeholder Expression\nDescription\nAPlaceholder Expression is a type that is either a placeholder or a class template specialization with at least one\nargument that itself is a Placeholder Expression .\nExpression requirements\nIfXis a class template, and a1,...anare arbitrary types, then X<a1,...,an> is aPlaceholder Expression if and only if\nall of the following conditions hold:\n—At least one of the template arguments a1,...anis aplaceholder or aPlaceholder Expression .\n—All of X’s template parameters, including the default ones, are types.\n—The number of X’s template parameters, including the default ones, is less or equal to the value of BOOST_MPL_-\nLIMIT_METAFUNCTION_ARITY configuration macro .\nModels\n—_1\n—plus<_, int_<2> >\n—if_< less<_1, int_<7> >, plus<_1,_2>, _1 >\nSee also\nLambda Expression ,Placeholders ,Metafunctions ,apply,lambda\n4.1.5 Tag Dispatched Metafunction\nSummary\nATag Dispatched Metafunction is aMetafunction that employs a tag dispatching technique in its implementation to\nbuild an infrastructure for easy overriding/extenstion of the metafunction’s behavior.\nNotation\nRevision Date: 15th November 2004145 Metafunctions 4.1 Concepts\nSymbol Legend\nname A placeholder token for the specific metafunction’s name.\ntag-metafunction A placeholder token for the tag metafunction’s name.\ntag A placeholder token for one of possible tag types returned by the tag meta-\nfunction.\nSynopsis\ntemplate< typename Tag > struct name _impl;\ntemplate<\ntypename X\n[, ...]\n>\nstruct name\n:name _impl< typename tag-metafunction <X>::type >\n::template apply<X [, ...] >\n{\n};\ntemplate< typename Tag > struct name _impl\n{\ntemplate< typename X [, ...] 
> struct apply\n{\n//default implementation\n};\n};\ntemplate<> struct name _impl< tag>\n{\ntemplate< typename X [, ...] > struct apply\n{\n//tag-specific implementation\n};\n};\nDescription\nTheusualmechanismforoverridingametafunction’sbehaviorisclasstemplatespecialization—givenalibrary-defined\nmetafunction f, it’s possible to write a specialization of ffor a specific type user_type that would have the required\nsemantics4).\nWhile this mechanism is always available, it’s not always the most convenient one, especially if it is desirable to spe-\ncialize a metafunction’s behavior for a familyof related types. A typical example of it is numbered forms of sequence\nclasses in MPL itself ( list0, ...,list50, et al.), and sequence classes in general.\nATag Dispatched Metafunction is a concept name for an instance of the metafunction implementation infrastructure\nbeing employed by the library to make it easier for users and implementors to override the behavior of library’s meta-\nfunctions operating on families of specific types.\nTheinfrastructureisbuiltonavariationofthetechniquecommonlyknownas tagdispatching (hencetheconceptname),\nand involves three entities: a metafunction itself, an associated tag-producing tag metafunction , and the metafunction’s\nimplementation,intheformofa MetafunctionClass templateparametrizedbya Tagtypeparameter. Themetafunction\nRevision Date: 15th November 20044.1 Concepts Metafunctions 146\nredirectstoitsimplementationclasstemplatebyinvokingitsspecializationonatagtypeproducedbythetagmetafunc-\ntion with the original metafunction’s parameters.\nExample\n#include <boost/mpl/size.hpp>\nnamespace user {\nstruct bitset_tag;\nstruct bitset0\n{\ntypedef bitset_tag tag;\n// ...\n};\ntemplate< typename B0 > struct bitset1\n{\ntypedef bitset_tag tag;\n// ...\n};\ntemplate< typename B0, ..., typename B n> struct bitset n\n{\ntypedef bitset_tag tag;\n// ...\n};\n} // namespace user\nnamespace boost { namespace mpl {\ntemplate<> struct size_impl<user::bitset_tag>\n{\ntemplate< typename Bitset > struct apply\n{\ntypedef typename Bitset::size type;\n};\n};\n}}\nModels\n—sequence_tag\nSee also\nMetafunction ,Metafunction Class ,Numeric Metafunction\n4)Usually such user-defined specialization is still required to preserve the f’s original invariants and complexity requirements.\nRevision Date: 15th November 2004147 Metafunctions 4.1 Concepts\n4.1.6 Numeric Metafunction\nDescription\nANumeric Metafunction is aTag Dispatched Metafunction that provides a built-in infrastructure for easy implementa-\ntion of mixed-type operations.\nExpression requirements\nIn the following table and subsequent specifications, opis a placeholder token for the actual Numeric Metafunction ’s\nname, and x,yandx1,x2,...xnare arbitrary numeric types.\nExpression Type Complexity\nop_tag<x>::type Integral Constant Amortized constant time.\nop_impl<\nop_tag<x>::type\n, op_tag<y>::type\n>::apply<x,y>::typeAny type Unspecified.\nop<x1,x2,...xn>::type Any type Unspecified.\nExpression semantics\ntypedef op_tag<x>::type tag;\nSemantics: tagis a tag type for xforop.tag::value isx’sconversion rank .\ntypedef op_impl<\nop_tag<x>::type\n, op_tag<y>::type\n>::apply<x,y>::type r;\nSemantics: ris the result of opapplication on arguments xandy.\ntypedef op< x1,x2,... 
xn>::type r;\nSemantics: ris the result of opapplication on arguments x1,x2,...xn.\nExample\nstruct complex_tag : int_<10> {};\ntemplate< typename Re, typename Im > struct complex\n{\ntypedef complex_tag tag;\ntypedef complex type;\ntypedef Re real;\ntypedef Im imag;\n};\ntemplate< typename C > struct real : C::real {};\ntemplate< typename C > struct imag : C::imag {};\nnamespace boost { namespace mpl {\nRevision Date: 15th November 20044.1 Concepts Metafunctions 148\ntemplate<>\nstruct plus_impl< complex_tag,complex_tag >\n{\ntemplate< typename N1, typename N2 > struct apply\n: complex<\nplus< typename N1::real, typename N2::real >\n, plus< typename N1::imag, typename N2::imag >\n>\n{\n};\n};\n}}\ntypedef complex< int_<5>, int_<-1> > c1;\ntypedef complex< int_<-5>, int_<1> > c2;\ntypedef plus<c1,c2> r1;\nBOOST_MPL_ASSERT_RELATION( real<r1>::value, ==, 0 );\nBOOST_MPL_ASSERT_RELATION( imag<r1>::value, ==, 0 );\ntypedef plus<c1,c1> r2;\nBOOST_MPL_ASSERT_RELATION( real<r2>::value, ==, 10 );\nBOOST_MPL_ASSERT_RELATION( imag<r2>::value, ==, -2 );\ntypedef plus<c2,c2> r3;\nBOOST_MPL_ASSERT_RELATION( real<r3>::value, ==, -10 );\nBOOST_MPL_ASSERT_RELATION( imag<r3>::value, ==, 2 );\nModels\n—plus\n—minus\n—times\n—divides\nSee also\nTag Dispatched Metafunction ,Metafunctions ,numeric_cast\n4.1.7 Trivial Metafunction\nDescription\nATrivial Metafunction accepts a single argument of a class type xand returns the x’s nested type member x::name,\nwhere nameis a placeholder token for the actual member’s name accessed by a specific metafunction’s instance. By\nconvention,all trivialmetafunctions inMPLarenamedafterthememberstheyprovideassessto. Forinstance,a Trivial\nMetafunction named firstreaches for the x’s nested member ::first.\nRevision Date: 15th November 2004149 Metafunctions 4.2 Type Selection\nExpression requirements\nInthefollowingtableandsubsequentspecifications, nameisplaceholdertokenforthenamesofthe TrivialMetafunction\nitself and the accessed member, and xis a class type such that x::name is a valid type-name .\nExpression Type Complexity\nname<x>::type Any type Constant time.\nExpression semantics\ntypedef name<x>::type r;\nPrecondition: x::name is a valid type-name .\nSemantics: is_same<r,x::name>::value == true .\nModels\n—first\n—second\n—base\nSee also\nMetafunctions ,Trivial Metafunctions ,identity\n4.2 Type Selection\n4.2.1 if_\nSynopsis\ntemplate<\ntypename C\n, typename T1\n, typename T2\n>\nstruct if_\n{\ntypedef unspecified type;\n};\nDescription\nReturns one of its two arguments, T1orT2, depending on the value C.\nHeader\n#include <boost/mpl/if.hpp>\nRevision Date: 15th November 20044.2 Type Selection Metafunctions 150\nParameters\nRevision Date: 15th November 2004151 Metafunctions 4.2 Type Selection\nParameter Requirement Description\nC Integral Constant A selection condition.\nT1,T2 Any type Types to select from.\nExpression semantics\nFor anyIntegral Constant cand arbitrary types t1,t2:\ntypedef if_<c,t1,t2>::type t;\nReturn type: Any type.\nSemantics: Ifc::value == true ,tis identical to t1; otherwise tis identical to t2.\nExample\ntypedef if_<true_,char,long>::type t1;\ntypedef if_<false_,char,long>::type t2;\nBOOST_MPL_ASSERT(( is_same<t1, char> ));\nBOOST_MPL_ASSERT(( is_same<t2, long> ));\nSee also\nMetafunctions ,Integral Constant ,if_c,eval_if\n4.2.2 if_c\nSynopsis\ntemplate<\nbool c\n, typename T1\n, typename T2\n>\nstruct if_c\n{\ntypedef unspecified type;\n};\nDescription\nReturnsoneofitstwoarguments, T1orT2,dependingonthevalueofintegralconstant 
c.if_c<c,t1,t2>::type isa\nshorcut notation for if_< bool_<c>,t1,t2 >::type .\nHeader\n#include <boost/mpl/if.hpp>\nParameters\nRevision Date: 15th November 20044.2 Type Selection Metafunctions 152\nParameter Requirement Description\nc An integral constant A selection condition.\nT1,T2 Any type Types to select from.\nExpression semantics\nFor any integral constant cand arbitrary types t1,t2:\ntypedef if_c<c,t1,t2>::type t;\nReturn type: Any type.\nSemantics: Equivalent to typedef if_< bool_<c>,t1,t2 >::type t;\nExample\ntypedef if_c<true,char,long>::type t1;\ntypedef if_c<false,char,long>::type t2;\nBOOST_MPL_ASSERT(( is_same<t1, char> ));\nBOOST_MPL_ASSERT(( is_same<t2, long> ));\nSee also\nMetafunctions ,Integral Constant ,if_,eval_if,bool_\n4.2.3 eval_if\nSynopsis\ntemplate<\ntypename C\n, typename F1\n, typename F2\n>\nstruct eval_if\n{\ntypedef unspecified type;\n};\nDescription\nEvaluates one of its two nullary-metafunction arguments, F1orF2, depending on the value C.\nHeader\n#include <boost/mpl/eval_if.hpp>\nParameters\nRevision Date: 15th November 2004153 Metafunctions 4.2 Type Selection\nParameter Requirement Description\nC Integral Constant An evaluation condition.\nF1,F2 NullaryMetafunction Metafunctions to select forevaluation from.\nExpression semantics\nFor anyIntegral Constant cand nullary Metafunction sf1,f2:\ntypedef eval_if<c,f1,f2>::type t;\nReturn type: Any type.\nSemantics: Ifc::value == true ,tis identical to f1::type ; otherwise tis identical to f2::type .\nExample\ntypedef eval_if< true_, identity<char>, identity<long> >::type t1;\ntypedef eval_if< false_, identity<char>, identity<long> >::type t2;\nBOOST_MPL_ASSERT(( is_same<t1,char> ));\nBOOST_MPL_ASSERT(( is_same<t2,long> ));\nSee also\nMetafunctions ,Integral Constant ,eval_if_c ,if_\n4.2.4 eval_if_c\nSynopsis\ntemplate<\nbool c\n, typename F1\n, typename F2\n>\nstruct eval_if_c\n{\ntypedef unspecified type;\n};\nDescription\nEvaluates one of its two nullary-metafunction arguments, F1orF2, depending on the value of integral constant c.\neval_if_c<c,f1,f2>::type is a shorcut notation for eval_if< bool_<c>,f1,f2 >::type .\nHeader\n#include <boost/mpl/eval_if.hpp>\nParameters\nRevision Date: 15th November 20044.3 Invocation Metafunctions 154\nParameter Requirement Description\nc An integral constant An evaluation condition.\nF1,F2 NullaryMetafunction Metafunctions to select forevaluation from.\nExpression semantics\nFor any integral constant cand nullary Metafunction sf1,f2:\ntypedef eval_if_c<c,f1,f2>::type t;\nReturn type: Any type.\nSemantics: Equivalent to typedef eval_if< bool_<c>,f1,f2 >::type t;\nExample\ntypedef eval_if_c< true, identity<char>, identity<long> >::type t1;\ntypedef eval_if_c< false, identity<char>, identity<long> >::type t2;\nBOOST_MPL_ASSERT(( is_same<t1,char> ));\nBOOST_MPL_ASSERT(( is_same<t2,long> ));\nSee also\nMetafunctions ,Integral Constant ,eval_if,if_,bool_\n4.3 Invocation\n4.3.1 apply\nSynopsis\ntemplate<\ntypename F\n>\nstruct apply0\n{\ntypedef unspecified type;\n};\ntemplate<\ntypename F, typename A1\n>\nstruct apply1\n{\ntypedef unspecified type;\n};\n...\ntemplate<\ntypename F, typename A1, ... 
typename An\n>\nRevision Date: 15th November 2004155 Metafunctions 4.3 Invocation\nstruct apply n\n{\ntypedef unspecified type;\n};\ntemplate<\ntypename F\n, typename A1 = unspecified\n...\n, typename An = unspecified\n>\nstruct apply\n{\ntypedef unspecified type;\n};\nDescription\nInvokes a Metafunction Class or aLambda Expression Fwith arguments A1,...An.\nHeader\n#include <boost/mpl/apply.hpp>\nParameters\nParameter Requirement Description\nF Lambda Expression An expression to invoke.\nA1,...An Any type Invocation arguments.\nExpression semantics\nFor anyLambda Expression fand arbitrary types a1,...an:\ntypedef apply n<f,a1, ...an>::type t;\ntypedef apply<f,a1, ...an>::type t;\nReturn type: Any type.\nSemantics: Equivalent to typedef apply_wrap n< lambda<f>::type,a1,... an>::type t; .\nExample\ntemplate< typename N1, typename N2 > struct int_plus\n: int_<( N1::value + N2::value )>\n{\n};\ntypedef apply< int_plus<_1,_2>, int_<2>, int_<3> >::type r1;\ntypedef apply< quote2<int_plus>, int_<2>, int_<3> >::type r2;\nBOOST_MPL_ASSERT_RELATION( r1::value, ==, 5 );\nRevision Date: 15th November 20044.3 Invocation Metafunctions 156\nBOOST_MPL_ASSERT_RELATION( r2::value, ==, 5 );\nSee also\nMetafunctions ,apply_wrap ,lambda,quote,bind\n4.3.2 apply_wrap\nSynopsis\ntemplate<\ntypename F\n>\nstruct apply_wrap0\n{\ntypedef unspecified type;\n};\ntemplate<\ntypename F, typename A1\n>\nstruct apply_wrap1\n{\ntypedef unspecified type;\n};\n...\ntemplate<\ntypename F, typename A1, ... typename An\n>\nstruct apply_wrap n\n{\ntypedef unspecified type;\n};\nDescription\nInvokes a Metafunction Class Fwith arguments A1,...An.\nIn essence, apply_wrap forms are nothing more than syntactic wrappers around F::apply<A1,... An>::type /\nF::apply::type expressions(hencethename). Theyprovideamoreconcisenotationandhigherportabilitythantheir\nunderlaying constructs at thecost of an extra template instantiation.\nHeader\n#include <boost/mpl/apply_wrap.hpp>\nParameters\nRevision Date: 15th November 2004157 Metafunctions 4.3 Invocation\nParameter Requirement Description\nF Metafunction Class A metafunction class to invoke.\nA1,...An Any type Invocation arguments.\nExpression semantics\nFor anyMetafunction Class fand arbitrary types a1,...an:\ntypedef apply_wrap n<f,a1, ...an>::type t;\nReturn type: Any type.\nSemantics: Ifn > 0, equivalent to typedef f::apply<a1,... 
an>::type t; , otherwise equiva-\nlent to either typedef f::apply::type t; ortypedef f::apply<>::type t; depending on\nwhether f::apply is a class or a class template.\nExample\nstruct f0\n{\ntemplate< typename T = int > struct apply\n{\ntypedef char type;\n};\n};\nstruct g0\n{\nstruct apply { typedef char type; };\n};\nstruct f2\n{\ntemplate< typename T1, typename T2 > struct apply\n{\ntypedef T2 type;\n};\n};\ntypedef apply_wrap0< f0 >::type r1;\ntypedef apply_wrap0< g0 >::type r2;\ntypedef apply_wrap2< f2,int,char >::type r3;\nBOOST_MPL_ASSERT(( is_same<r1,char> ));\nBOOST_MPL_ASSERT(( is_same<r2,char> ));\nBOOST_MPL_ASSERT(( is_same<r3,char> ));\nSee also\nMetafunctions ,invocation ,apply,lambda,quote,bind,protect\nRevision Date: 15th November 20044.3 Invocation Metafunctions 158\n4.3.3 unpack_args\nSynopsis\ntemplate<\ntypename F\n>\nstruct unpack_args\n{\n//unspecified\n//...\n};\nDescription\nA higher-order primitive transforming an n-aryLambda Expression Finto an unary Metafunction Class gaccepting a\nsingle sequence of narguments.\nHeader\n#include <boost/mpl/unpack_args.hpp>\nModel of\nMetafunction Class\nParameters\nParameter Requirement Description\nF Lambda Expression A lambda expression to adopt.\nExpression semantics\nFor an arbitrary Lambda Expression f, and arbitrary types a1,...an:\ntypedef unpack_args<f> g;\nReturn type: Metafunction Class .\nSemantics: gis a unary Metafunction Class such that\napply_wrap n< g, vector<a1, ...an> >::type\nis identical to\napply<F,a1, ...an>::type\nExample\nBOOST_MPL_ASSERT(( apply<\nunpack_args< is_same<_1,_2> >\n, vector<int,int>\n> ));\nRevision Date: 15th November 2004159 Metafunctions 4.4 Composition and Argument Binding\nSee also\nMetafunctions ,Lambda Expression ,Metafunction Class ,apply,apply_wrap ,bind\n4.4 Composition and Argument Binding\n4.4.1 Placeholders\nSynopsis\nnamespace placeholders {\ntypedef unspecified _;\ntypedef arg<1> _1;\ntypedef arg<2> _2;\n...\ntypedef arg< n> _ n;\n}\nusing placeholders::_;\nusing placeholders::_1;\nusing placeholders::_2;\n...\nusing placeholders::_ n;\nDescription\nAplaceholderinaform _nissimplyasynonymforthecorresponding arg<n>specialization. 
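The synonym relationship stated above can be checked directly. The following sketch is illustrative only and is not part of the original reference text; it assumes nothing beyond the Boost.MPL and Boost.TypeTraits headers listed in its includes.
#include <boost/mpl/placeholders.hpp>
#include <boost/mpl/arg.hpp>
#include <boost/mpl/apply_wrap.hpp>
#include <boost/mpl/assert.hpp>
#include <boost/type_traits/is_same.hpp>
using namespace boost::mpl;
using namespace boost::mpl::placeholders;
// _2 is declared as 'typedef arg<2> _2;', so the two names denote one type
BOOST_MPL_ASSERT(( boost::is_same< _2, arg<2> > ));
// invoked directly, _2 behaves as arg<2> and selects its second argument
BOOST_MPL_ASSERT(( boost::is_same< apply_wrap2< _2,char,int >::type, int > ));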
Theunnamedplaceholder\n_(underscore) carries special meaning in bind and lambda expressions, and does not have defined semantics outside of\nthese contexts.\nPlaceholder names can be made available in the user namespace through using namespace mpl::placeholders;\ndirective.\nHeader\n#include <boost/mpl/placeholders.hpp>\n[Note:Theincludemightbeomittedwhenusingplaceholderstoconstructa LambdaExpression forpassingittoMPL’s\nown algorithm or metafunction: any library component that is documented to accept a lambda expression makes the\nplaceholders implicitly available for the user code — end note]\nParameters\nNone.\nExpression semantics\nFor any integral constant nin the range [1, BOOST_MPL_LIMIT_METAFUNCTION_ARITY ] and arbitrary types a1,...an:\ntypedef apply_wrap n<_n,a1, ...an>::type x;\nReturn type: A type.\nSemantics: Equivalent to\nRevision Date: 15th November 20044.4 Composition and Argument Binding Metafunctions 160\ntypedef apply_wrap n< arg< n>,a1, ...an>::type x;\nExample\ntypedef apply_wrap5< _1,bool,char,short,int,long >::type t1;\ntypedef apply_wrap5< _3,bool,char,short,int,long >::type t3;\nBOOST_MPL_ASSERT(( is_same< t1, bool > ));\nBOOST_MPL_ASSERT(( is_same< t3, short > ));\nSee also\nComposition and Argument Binding ,arg,lambda,bind,apply,apply_wrap\n4.4.2 lambda\nSynopsis\ntemplate<\ntypename X\n, typename Tag = unspecified\n>\nstruct lambda\n{\ntypedef unspecified type;\n};\nDescription\nIfXis aPlaceholder Expression , transforms Xinto a corresponding Metafunction Class , otherwise Xis returned un-\nchanged.\nHeader\n#include <boost/mpl/lambda.hpp>\nParameters\nParameter Requirement Description\nX Any type An expression to transform.\nTag Any type A tag determining transformsemantics.\nExpression semantics\nFor arbitrary types xandtag:\ntypedef lambda<x>::type f;\nReturn type: Metafunction Class .\nSemantics: Ifxis aPlaceholder Expression in a general form X<a1,...an> , where Xis a class template\nRevision Date: 15th November 2004161 Metafunctions 4.4 Composition and Argument Binding\nanda1,...anare arbitrary types, equivalentto\ntypedef protect< bind<\nquote n<X>\n, lambda<a1>::type, ... lambda<a n>::type\n> > f;\notherwise, fis identical to x.\ntypedef lambda<x,tag>::type f;\nReturn type: Metafunction Class .\nSemantics: Ifxis aPlaceholder Expression in a general form X<a1,...an> , where Xis a class template\nanda1,...anare arbitrary types, equivalentto\ntypedef protect< bind<\nquote n<X,tag>\n, lambda<a1,tag>::type, ... lambda<a n,tag>::type\n> > f;\notherwise, fis identical to x.\nExample\ntemplate< typename N1, typename N2 > struct int_plus\n: int_<( N1::value + N2::value )>\n{\n};\ntypedef lambda< int_plus<_1, int_<42> > >::type f1;\ntypedef bind< quote2<int_plus>, _1, int_<42> > f2;\ntypedef f1::apply<42>::type r1;\ntypedef f2::apply<42>::type r2;\nBOOST_MPL_ASSERT_RELATION( r1::value, ==, 84 );\nBOOST_MPL_ASSERT_RELATION( r2::value, ==, 84 );\nSee also\nComposition and Argument Binding ,invocation ,Placeholders ,bind,quote,protect,apply\n4.4.3 bind\nSynopsis\ntemplate<\ntypename F\n>\nstruct bind0\n{\n//unspecified\n//...\n};\nRevision Date: 15th November 20044.4 Composition and Argument Binding Metafunctions 162\ntemplate<\ntypename F, typename A1\n>\nstruct bind1\n{\n//unspecified\n//...\n};\n...\ntemplate<\ntypename F, typename A1, ... 
typename An\n>\nstruct bind n\n{\n//unspecified\n//...\n};\ntemplate<\ntypename F\n, typename A1 = unspecified\n...\n, typename An = unspecified\n>\nstruct bind\n{\n//unspecified\n//...\n};\nDescription\nbindis a higher-order primitive for Metafunction Class composition and argument binding. In essence, it’s a compile-\ntime counterpart of the similar run-time functionality provided by Boost.Bind andBoost.Lambda libraries.\nHeader\n#include <boost/mpl/bind.hpp>\nModel of\nMetafunction Class\nParameters\nParameter Requirement Description\nF Metafunction Class An metafunction class to perform binding on.\nA1,...An Any type Arguments to bind.\nRevision Date: 15th November 2004163 Metafunctions 4.4 Composition and Argument Binding\nExpression semantics\nFor anyMetafunction Class fand arbitrary types a1,...an:\ntypedef bind<f,a1,...a n> g;\ntypedef bind n<f,a1,...a n> g;\nReturn type: Metafunction Class\nSemantics: Equivalent to\nstruct g\n{\ntemplate<\ntypename U1 = unspecified\n...\n, typename U n=unspecified\n>\nstruct apply\n: apply_wrap n<\ntypename h0<f,U1, ...Un>::type\n, typename h1<a1,U1, ...Un>::type\n...\n, typename h n<an,U1, ...Un>::type\n>\n{\n};\n};\nwhere hkis equivalent to\ntemplate< typename X, typename U1, ... typename U n> struct h k\n: apply_wrap n<X,U1, ...Un>\n{\n};\nifforakis abind expression or aplaceholder , and\ntemplate< typename X, typename U1, ... typename U n> struct h k\n{\ntypedef X type;\n};\notherwise. [ Note:Every nth appearance of the unnamed placeholder in the bind<f,a1,...an>\nspecialization is replaced with the corresponding numbered placeholder _n—end note]\nExample\nstruct f1\n{\ntemplate< typename T1 > struct apply\n{\ntypedef T1 type;\n};\n};\nstruct f5\nRevision Date: 15th November 20044.4 Composition and Argument Binding Metafunctions 164\n{\ntemplate< typename T1, typename T2, typename T3, typename T4, typename T5 >\nstruct apply\n{\ntypedef T5 type;\n};\n};\ntypedef apply_wrap1<\nbind1<f1,_1>\n, int\n>::type r11;\ntypedef apply_wrap5<\nbind1<f1,_5>\n, void,void,void,void,int\n>::type r12;\nBOOST_MPL_ASSERT(( is_same<r11,int> ));\nBOOST_MPL_ASSERT(( is_same<r12,int> ));\ntypedef apply_wrap5<\nbind5<f5,_1,_2,_3,_4,_5>\n, void,void,void,void,int\n>::type r51;\ntypedef apply_wrap5<\nbind5<f5,_5,_4,_3,_2,_1>\n, int,void,void,void,void\n>::type r52;\nBOOST_MPL_ASSERT(( is_same<r51,int> ));\nBOOST_MPL_ASSERT(( is_same<r52,int> ));\nSee also\nComposition and Argument Binding ,invocation ,Placeholders ,lambda,quote,protect,apply,apply_wrap\n4.4.4 quote\nSynopsis\ntemplate<\ntemplate< typename P1 > class F\n, typename Tag = unspecified\n>\nstruct quote1\n{\n//unspecified\n//...\n};\nRevision Date: 15th November 2004165 Metafunctions 4.4 Composition and Argument Binding\n...\ntemplate<\ntemplate< typename P1, ... typename P n> class F\n, typename Tag = unspecified\n>\nstruct quote n\n{\n//unspecified\n//...\n};\nDescription\nquotenis a higher-order primitive that wraps an n-aryMetafunction to create a corresponding Metafunction Class .\nHeader\n#include <boost/mpl/quote.hpp>\nModel of\nMetafunction Class\nParameters\nParameter Requirement Description\nF Metafunction A metafunction to wrap.\nTag Any type A tag determining wrap semantics.\nExpression semantics\nFor anyn-aryMetafunction fand arbitrary type tag:\ntypedef quote n<f> g;\ntypedef quote n<f,tag> g;\nReturn type: Metafunction Class\nSemantics: Equivalent to\nstruct g\n{\ntemplate< typename A1, ... 
typename A n> struct apply\n: f<A1, ...An>\n{\n};\n};\niff<A1,...An> has a nested type member ::type, and to\nstruct g\n{\nRevision Date: 15th November 20044.4 Composition and Argument Binding Metafunctions 166\ntemplate< typename A1, ... typename A n> struct apply\n{\ntypedef f<A1, ...An> type;\n};\n};\notherwise.\nExample\ntemplate< typename T > struct f1\n{\ntypedef T type;\n};\ntemplate<\ntypename T1, typename T2, typename T3, typename T4, typename T5\n>\nstruct f5\n{\n// no ’type’ member!\n};\ntypedef quote1<f1>::apply<int>::type t1;\ntypedef quote5<f5>::apply<char,short,int,long,float>::type t5;\nBOOST_MPL_ASSERT(( is_same< t1, int > ));\nBOOST_MPL_ASSERT(( is_same< t5, f5<char,short,int,long,float> > ));\nSee also\nComposition and Argument Binding ,invocation ,bind,lambda,protect,apply\n4.4.5 arg\nSynopsis\ntemplate< int n > struct arg;\ntemplate<> struct arg<1>\n{\ntemplate< typename A1, ... typename A n=unspecified >\nstruct apply\n{\ntypedef A1 type;\n};\n};\n...\ntemplate<> struct arg< n>\n{\nRevision Date: 15th November 2004167 Metafunctions 4.4 Composition and Argument Binding\ntemplate< typename A1, ... typename A n>\nstruct apply\n{\ntypedef A ntype;\n};\n};\nDescription\narg<n>specialization is a Metafunction Class that return the nth of its arguments.\nHeader\n#include <boost/mpl/arg.hpp>\nParameters\nParameter Requirement Description\nn An integral constant A number of argument to return.\nExpression semantics\nFor any integral constant nin the range [1, BOOST_MPL_LIMIT_METAFUNCTION_ARITY ] and arbitrary types a1,...an:\ntypedef apply_wrap n< arg< n>,a1, ...an>::type x;\nReturn type: A type.\nSemantics: xis identical to an.\nExample\ntypedef apply_wrap5< arg<1>,bool,char,short,int,long >::type t1;\ntypedef apply_wrap5< arg<3>,bool,char,short,int,long >::type t3;\nBOOST_MPL_ASSERT(( is_same< t1, bool > ));\nBOOST_MPL_ASSERT(( is_same< t3, short > ));\nSee also\nComposition and Argument Binding ,Placeholders ,lambda,bind,apply,apply_wrap\n4.4.6 protect\nSynopsis\ntemplate<\ntypename F\n>\nstruct protect\n{\nRevision Date: 15th November 20044.4 Composition and Argument Binding Metafunctions 168\n//unspecified\n//...\n};\nDescription\nprotect is an identity wrapper for a Metafunction Class that prevents its argument from being recognized as a bind\nexpression .\nHeader\n#include <boost/mpl/protect.hpp>\nParameters\nParameter Requirement Description\nF Metafunction Class A metafunction class to wrap.\nExpression semantics\nFor anyMetafunction Class f:\ntypedef protect<f> g;\nReturn type: Metafunction Class .\nSemantics: Iffis abind expression , equivalent to\nstruct g\n{\ntemplate<\ntypename U1 = unspecified ,... 
typename U n=unspecified\n>\nstruct apply\n: apply_wrap n<f,U1, ...Un>\n{\n};\n};\notherwise equivalent to typedef f g; .\nExample\nFIXME\nstruct f\n{\ntemplate< typename T1, typename T2 > struct apply\n{\n//...\n};\n};\nRevision Date: 15th November 2004169 Metafunctions 4.5 Arithmetic Operations\ntypedef bind<_1, protect< bind<f,_1,_2> > >\ntypedef apply_wrap0< f0 >::type r1;\ntypedef apply_wrap0< g0 >::type r2;\ntypedef apply_wrap2< f2,int,char >::type r3;\nBOOST_MPL_ASSERT(( is_same<r1,char> ));\nBOOST_MPL_ASSERT(( is_same<r2,char> ));\nBOOST_MPL_ASSERT(( is_same<r3,char> ));\nSee also\nComposition and Argument Binding ,invocation ,bind,quote,apply_wrap\n4.5 Arithmetic Operations\n4.5.1 plus\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n, typename T3 = unspecified\n...\n, typename T n=unspecified\n>\nstruct plus\n{\ntypedef unspecified type;\n};\nDescription\nReturns the sum of its arguments.\nHeader\n#include <boost/mpl/plus.hpp>\n#include <boost/mpl/arithmetic.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT1,T2,...Tn Integral Constant Operation’s arguments.\nRevision Date: 15th November 20044.5 Arithmetic Operations Metafunctions 170\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. — end note]\nExpression semantics\nFor anyIntegral Constant sc1,c2,...cn:\ntypedef plus<c1, ...cn>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef integral_c<\ntypeof(c1::value + c2::value)\n, ( c1::value + c2::value )\n> c;\ntypedef plus<c,c3, ...cn>::type r;\ntypedef plus<c1, ...cn> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : plus<c1, ...cn>::type {};\nComplexity\nAmortized constant time.\nExample\ntypedef plus< int_<-10>, int_<3>, long_<1> >::type r;\nBOOST_MPL_ASSERT_RELATION( r::value, ==, -6 );\nBOOST_MPL_ASSERT(( is_same< r::value_type, long > ));\nSee also\nArithmetic Operations ,Numeric Metafunction ,numeric_cast ,minus,negate,times\n4.5.2 minus\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n, typename T3 = unspecified\n...\n, typename T n=unspecified\n>\nstruct minus\nRevision Date: 15th November 2004171 Metafunctions 4.5 Arithmetic Operations\n{\ntypedef unspecified type;\n};\nDescription\nReturns the difference of itsarguments.\nHeader\n#include <boost/mpl/minus.hpp>\n#include <boost/mpl/arithmetic.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT1,T2,...Tn Integral Constant Operation’s arguments.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. 
— end note]\nExpression semantics\nFor anyIntegral Constant sc1,c2,...cn:\ntypedef minus<c1, ...cn>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef integral_c<\ntypeof(c1::value - c2::value)\n, ( c1::value - c2::value )\n> c;\ntypedef minus<c,c3, ...cn>::type r;\ntypedef minus<c1, ...cn> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : minus<c1, ...cn>::type {};\nComplexity\nAmortized constant time.\nRevision Date: 15th November 20044.5 Arithmetic Operations Metafunctions 172\nExample\ntypedef minus< int_<-10>, int_<3>, long_<1> >::type r;\nBOOST_MPL_ASSERT_RELATION( r::value, ==, -14 );\nBOOST_MPL_ASSERT(( is_same< r::value_type, long > ));\nSee also\nArithmetic Operations ,Numeric Metafunction ,numeric_cast ,plus,negate,times\n4.5.3 times\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n, typename T3 = unspecified\n...\n, typename T n=unspecified\n>\nstruct times\n{\ntypedef unspecified type;\n};\nDescription\nReturns the product of its arguments.\nHeader\n#include <boost/mpl/times.hpp>\n#include <boost/mpl/arithmetic.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT1,T2,...Tn Integral Constant Operation’s arguments.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. — end note]\nRevision Date: 15th November 2004173 Metafunctions 4.5 Arithmetic Operations\nExpression semantics\nFor anyIntegral Constant sc1,c2,...cn:\ntypedef times<c1, ...cn>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef integral_c<\ntypeof(c1::value * c2::value)\n, ( c1::value * c2::value )\n> c;\ntypedef times<c,c3, ...cn>::type r;\ntypedef times<c1, ...cn> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : times<c1, ...cn>::type {};\nComplexity\nAmortized constant time.\nExample\ntypedef times< int_<-10>, int_<3>, long_<1> >::type r;\nBOOST_MPL_ASSERT_RELATION( r::value, ==, -30 );\nBOOST_MPL_ASSERT(( is_same< r::value_type, long > ));\nSee also\nMetafunctions ,Numeric Metafunction ,numeric_cast ,divides,modulus,plus\n4.5.4 divides\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n, typename T3 = unspecified\n...\n, typename T n=unspecified\n>\nstruct divides\n{\ntypedef unspecified type;\n};\nRevision Date: 15th November 20044.5 Arithmetic Operations Metafunctions 174\nDescription\nReturns the quotient of its arguments.\nHeader\n#include <boost/mpl/divides.hpp>\n#include <boost/mpl/arithmetic.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT1,T2,...Tn Integral Constant Operation’s arguments.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. 
— end note]\nExpression semantics\nFor anyIntegral Constant sc1,c2,...cn:\ntypedef divides<c1, ...cn>::type r;\nReturn type: Integral Constant .\nPrecondition: c2::value != 0 ,...cn::value != 0 .\nSemantics: Equivalent to\ntypedef integral_c<\ntypeof(c1::value / c2::value)\n, ( c1::value / c2::value )\n> c;\ntypedef divides<c,c3, ...cn>::type r;\ntypedef divides<c1, ...cn> r;\nReturn type: Integral Constant .\nPrecondition: c2::value != 0 ,...cn::value != 0 .\nSemantics: Equivalent to\nstruct r : divides<c1, ...cn>::type {};\nComplexity\nAmortized constant time.\nRevision Date: 15th November 2004175 Metafunctions 4.5 Arithmetic Operations\nExample\ntypedef divides< int_<-10>, int_<3>, long_<1> >::type r;\nBOOST_MPL_ASSERT_RELATION( r::value, ==, -3 );\nBOOST_MPL_ASSERT(( is_same< r::value_type, long > ));\nSee also\nArithmetic Operations ,Numeric Metafunction ,numeric_cast ,times,modulus,plus\n4.5.5 modulus\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n>\nstruct modulus\n{\ntypedef unspecified type;\n};\nDescription\nReturns the modulus of its arguments.\nHeader\n#include <boost/mpl/modulus.hpp>\n#include <boost/mpl/arithmetic.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT1,T2 Integral Constant Operation’s arguments.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. — end note]\nExpression semantics\nFor anyIntegral Constant sc1andc2:\ntypedef modulus<c1,c2>::type r;\nRevision Date: 15th November 20044.5 Arithmetic Operations Metafunctions 176\nReturn type: Integral Constant .\nPrecondition: c2::value != 0\nSemantics: Equivalent to\ntypedef integral_c<\ntypeof(c1::value % c2::value)\n, ( c1::value % c2::value )\n> r;\ntypedef modulus<c1,c2> r;\nReturn type: Integral Constant .\nPrecondition: c2::value != 0\nSemantics: Equivalent to\nstruct r : modulus<c1,c2>::type {};\nComplexity\nAmortized constant time.\nExample\ntypedef modulus< int_<10>, long_<3> >::type r;\nBOOST_MPL_ASSERT_RELATION( r::value, ==, 1 );\nBOOST_MPL_ASSERT(( is_same< r::value_type, long > ));\nSee also\nMetafunctions ,Numeric Metafunction ,numeric_cast ,divides,times,plus\n4.5.6 negate\nSynopsis\ntemplate<\ntypename T\n>\nstruct negate\n{\ntypedef unspecified type;\n};\nDescription\nReturns the negative (additiveinverse) of its argument.\nHeader\n#include <boost/mpl/negate.hpp>\n#include <boost/mpl/arithmetic.hpp>\nRevision Date: 15th November 2004177 Metafunctions 4.6 Comparisons\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT Integral Constant Operation’s argument.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. 
— end note]\nExpression semantics\nFor anyIntegral Constant c:\ntypedef negate<c>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef integral_c< c::value_type, ( -c::value ) > r;\ntypedef negate<c> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : negate<c>::type {};\nComplexity\nAmortized constant time.\nExample\ntypedef negate< int_<-10> >::type r;\nBOOST_MPL_ASSERT_RELATION( r::value, ==, 10 );\nBOOST_MPL_ASSERT(( is_same< r::value_type, int > ));\nSee also\nArithmetic Operations ,Numeric Metafunction ,numeric_cast ,plus,minus,times\n4.6 Comparisons\n4.6.1 less\nSynopsis\ntemplate<\ntypename T1\nRevision Date: 15th November 20044.6 Comparisons Metafunctions 178\n, typename T2\n>\nstruct less\n{\ntypedef unspecified type;\n};\nDescription\nReturns a true-valued Integral Constant ifT1is less than T2.\nHeader\n#include <boost/mpl/less.hpp>\n#include <boost/mpl/comparison.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT1,T2 Integral Constant Operation’s arguments.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. — end note]\nExpression semantics\nFor anyIntegral Constant sc1andc2:\ntypedef less<c1,c2>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef bool_< (c1::value < c2::value) > r;\ntypedef less<c1,c2> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : less<c1,c2>::type {};\nComplexity\nAmortized constant time.\nRevision Date: 15th November 2004179 Metafunctions 4.6 Comparisons\nExample\nBOOST_MPL_ASSERT(( less< int_<0>, int_<10> > ));\nBOOST_MPL_ASSERT_NOT(( less< long_<10>, int_<0> > ));\nBOOST_MPL_ASSERT_NOT(( less< long_<10>, int_<10> > ));\nSee also\nComparisons ,Numeric Metafunction ,numeric_cast ,less_equal ,greater,equal\n4.6.2 less_equal\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n>\nstruct less_equal\n{\ntypedef unspecified type;\n};\nDescription\nReturns a true-valued Integral Constant ifT1is less than or equal to T2.\nHeader\n#include <boost/mpl/less_equal.hpp>\n#include <boost/mpl/comparison.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT1,T2 Integral Constant Operation’s arguments.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. 
— end note]\nExpression semantics\nFor anyIntegral Constant sc1andc2:\ntypedef less_equal<c1,c2>::type r;\nRevision Date: 15th November 20044.6 Comparisons Metafunctions 180\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef bool_< (c1::value <= c2::value) > r;\ntypedef less_equal<c1,c2> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : less_equal<c1,c2>::type {};\nComplexity\nAmortized constant time.\nExample\nBOOST_MPL_ASSERT(( less_equal< int_<0>, int_<10> > ));\nBOOST_MPL_ASSERT_NOT(( less_equal< long_<10>, int_<0> > ));\nBOOST_MPL_ASSERT(( less_equal< long_<10>, int_<10> > ));\nSee also\nComparisons ,Numeric Metafunction ,numeric_cast ,less,greater,equal\n4.6.3 greater\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n>\nstruct greater\n{\ntypedef unspecified type;\n};\nDescription\nReturns a true-valued Integral Constant ifT1is greater than T2.\nHeader\n#include <boost/mpl/greater.hpp>\n#include <boost/mpl/comparison.hpp>\nModel of\nNumeric Metafunction\nRevision Date: 15th November 2004181 Metafunctions 4.6 Comparisons\nParameters\nParameter Requirement Description\nT1,T2 Integral Constant Operation’s arguments.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. — end note]\nExpression semantics\nFor anyIntegral Constant sc1andc2:\ntypedef greater<c1,c2>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef bool_< (c1::value < c2::value) > r;\ntypedef greater<c1,c2> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : greater<c1,c2>::type {};\nComplexity\nAmortized constant time.\nExample\nBOOST_MPL_ASSERT(( greater< int_<10>, int_<0> > ));\nBOOST_MPL_ASSERT_NOT(( greater< long_<0>, int_<10> > ));\nBOOST_MPL_ASSERT_NOT(( greater< long_<10>, int_<10> > ));\nSee also\nComparisons ,Numeric Metafunction ,numeric_cast ,greater_equal ,less,equal_to\n4.6.4 greater_equal\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n>\nstruct greater_equal\n{\ntypedef unspecified type;\n};\nRevision Date: 15th November 20044.6 Comparisons Metafunctions 182\nDescription\nReturns a true-valued Integral Constant ifT1is greater than or equal to T2.\nHeader\n#include <boost/mpl/greater_equal.hpp>\n#include <boost/mpl/comparison.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT1,T2 Integral Constant Operation’s arguments.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. 
— end note]\nExpression semantics\nFor anyIntegral Constant sc1andc2:\ntypedef greater_equal<c1,c2>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef bool_< (c1::value < c2::value) > r;\ntypedef greater_equal<c1,c2> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : greater_equal<c1,c2>::type {};\nComplexity\nAmortized constant time.\nExample\nBOOST_MPL_ASSERT(( greater_equal< int_<10>, int_<0> > ));\nBOOST_MPL_ASSERT_NOT(( greater_equal< long_<0>, int_<10> > ));\nBOOST_MPL_ASSERT(( greater_equal< long_<10>, int_<10> > ));\nRevision Date: 15th November 2004183 Metafunctions 4.6 Comparisons\nSee also\nComparisons ,Numeric Metafunction ,numeric_cast ,greater,less,equal_to\n4.6.5 equal_to\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n>\nstruct equal_to\n{\ntypedef unspecified type;\n};\nDescription\nReturns a true-valued Integral Constant ifT1andT2are equal.\nHeader\n#include <boost/mpl/equal_to.hpp>\n#include <boost/mpl/comparison.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT1,T2 Integral Constant Operation’s arguments.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. — end note]\nExpression semantics\nFor anyIntegral Constant sc1andc2:\ntypedef equal_to<c1,c2>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef bool_< (c1::value == c2::value) > r;\ntypedef equal_to<c1,c2> r;\nRevision Date: 15th November 20044.6 Comparisons Metafunctions 184\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : equal_to<c1,c2>::type {};\nComplexity\nAmortized constant time.\nExample\nBOOST_MPL_ASSERT_NOT(( equal_to< int_<0>, int_<10> > ));\nBOOST_MPL_ASSERT_NOT(( equal_to< long_<10>, int_<0> > ));\nBOOST_MPL_ASSERT(( equal_to< long_<10>, int_<10> > ));\nSee also\nComparisons ,Numeric Metafunction ,numeric_cast ,not_equal_to ,less\n4.6.6 not_equal_to\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n>\nstruct not_equal_to\n{\ntypedef unspecified type;\n};\nDescription\nReturns a true-valued Integral Constant ifT1andT2are not equal.\nHeader\n#include <boost/mpl/not_equal_to.hpp>\n#include <boost/mpl/comparison.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT1,T2 Integral Constant Operation’s arguments.\nRevision Date: 15th November 2004185 Metafunctions 4.7 Logical Operations\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. 
— end note]\nExpression semantics\nFor anyIntegral Constant sc1andc2:\ntypedef not_equal_to<c1,c2>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef bool_< (c1::value != c2::value) > r;\ntypedef not_equal_to<c1,c2> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : not_equal_to<c1,c2>::type {};\nComplexity\nAmortized constant time.\nExample\nBOOST_MPL_ASSERT(( not_equal_to< int_<0>, int_<10> > ));\nBOOST_MPL_ASSERT(( not_equal_to< long_<10>, int_<0> > ));\nBOOST_MPL_ASSERT_NOT(( not_equal_to< long_<10>, int_<10> > ));\nSee also\nComparisons ,Numeric Metafunction ,numeric_cast ,equal_to ,less\n4.7 Logical Operations\n4.7.1 and_\nSynopsis\ntemplate<\ntypename F1\n, typename F2\n...\n, typename F n=unspecified\n>\nstruct and_\n{\ntypedef unspecified type;\n};\nRevision Date: 15th November 20044.7 Logical Operations Metafunctions 186\nDescription\nReturns the result of short-circuit logical and (&&) operation on its arguments.\nHeader\n#include <boost/mpl/and.hpp>\n#include <boost/mpl/logical.hpp>\nParameters\nParameter Requirement Description\nF1,F2,...Fn NullaryMetafunction Operation’s arguments.\nExpression semantics\nFor arbitrary nullary Metafunction sf1,f2,...fn:\ntypedef and_<f1,f2, ...,fn>::type r;\nReturn type: Integral Constant .\nSemantics: risfalse_if either of f1::type::value ,f2::type::value ,...fn::type::value ex-\npressions evaluates to false, and true_otherwise; guarantees left-to-right evaluation; the operands\nsubsequent to the first fimetafunction that evaluates to falseare not evaluated.\ntypedef and_<f1,f2, ...,fn> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : and_<f1,f2, ...,fn>::type {};\nExample\nstruct unknown;\nBOOST_MPL_ASSERT(( and_< true_,true_ > ));\nBOOST_MPL_ASSERT_NOT(( and_< false_,true_ > ));\nBOOST_MPL_ASSERT_NOT(( and_< true_,false_ > ));\nBOOST_MPL_ASSERT_NOT(( and_< false_,false_ > ));\nBOOST_MPL_ASSERT_NOT(( and_< false_,unknown > )); // OK\nBOOST_MPL_ASSERT_NOT(( and_< false_,unknown,unknown > )); // OK too\nSee also\nMetafunctions ,Logical Operations ,or_,not_\n4.7.2 or_\nSynopsis\ntemplate<\nRevision Date: 15th November 2004187 Metafunctions 4.7 Logical Operations\ntypename F1\n, typename F2\n...\n, typename F n=unspecified\n>\nstruct or_\n{\ntypedef unspecified type;\n};\nDescription\nReturns the result of short-circuit logical or (||) operation on its arguments.\nHeader\n#include <boost/mpl/or.hpp>\n#include <boost/mpl/logical.hpp>\nParameters\nParameter Requirement Description\nF1,F2,...Fn NullaryMetafunction Operation’s arguments.\nExpression semantics\nFor arbitrary nullary Metafunction sf1,f2,...fn:\ntypedef or_<f1,f2, ...,fn>::type r;\nReturn type: Integral Constant .\nSemantics: ristrue_if either of f1::type::value ,f2::type::value ,...fn::type::value ex-\npressions evaluates to true, and false_otherwise; guarantees left-to-right evaluation; the operands\nsubsequent to the first fimetafunction that evaluates to trueare not evaluated.\ntypedef or_<f1,f2, ...,fn> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : or_<f1,f2, ...,fn>::type {};\nExample\nstruct unknown;\nBOOST_MPL_ASSERT(( or_< true_,true_ > ));\nBOOST_MPL_ASSERT(( or_< false_,true_ > ));\nBOOST_MPL_ASSERT(( or_< true_,false_ > ));\nBOOST_MPL_ASSERT_NOT(( or_< false_,false_ > ));\nBOOST_MPL_ASSERT(( or_< true_,unknown > )); // OK\nBOOST_MPL_ASSERT(( or_< true_,unknown,unknown > )); // OK too\nRevision Date: 15th November 20044.7 Logical Operations Metafunctions 188\nSee 
also\nMetafunctions ,Logical Operations ,and_,not_\n4.7.3 not_\nSynopsis\ntemplate<\ntypename F\n>\nstruct not_\n{\ntypedef unspecified type;\n};\nDescription\nReturns the result of logical not (!) operation on its argument.\nHeader\n#include <boost/mpl/not.hpp>\n#include <boost/mpl/logical.hpp>\nParameters\nParameter Requirement Description\nF NullaryMetafunction Operation’s argument.\nExpression semantics\nFor arbitrary nullary Metafunction f:\ntypedef not_<f>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef bool_< (!f::type::value) > r;\ntypedef not_<f> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : not_<f>::type {};\nExample\nBOOST_MPL_ASSERT_NOT(( not_< true_ > ));\nBOOST_MPL_ASSERT(( not_< false_ > ));\nRevision Date: 15th November 2004189 Metafunctions 4.8 Bitwise Operations\nSee also\nMetafunctions ,Logical Operations ,and_,or_\n4.8 Bitwise Operations\n4.8.1 bitand_\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n, typename T3 = unspecified\n...\n, typename T n=unspecified\n>\nstruct bitand_\n{\ntypedef unspecified type;\n};\nDescription\nReturns the result of bitwise and (&) operation of its arguments.\nHeader\n#include <boost/mpl/bitand.hpp>\n#include <boost/mpl/bitwise.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT1,T2,...Tn Integral Constant Operation’s arguments.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. — end note]\nExpression semantics\nFor anyIntegral Constant sc1,c2,...cn:\ntypedef bitand_<c1, ...cn>::type r;\nReturn type: Integral Constant .\nRevision Date: 15th November 20044.8 Bitwise Operations Metafunctions 190\nSemantics: Equivalent to\ntypedef integral_c<\ntypeof(c1::value & c2::value)\n, ( c1::value & c2::value )\n> c;\ntypedef bitand_<c,c3, ...cn>::type r;\ntypedef bitand_<c1, ...cn> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : bitand_<c1, ...cn>::type {};\nComplexity\nAmortized constant time.\nExample\ntypedef integral_c<unsigned,0> u0;\ntypedef integral_c<unsigned,1> u1;\ntypedef integral_c<unsigned,2> u2;\ntypedef integral_c<unsigned,8> u8;\ntypedef integral_c<unsigned,0xffffffff> uffffffff;\nBOOST_MPL_ASSERT_RELATION( (bitand_<u0,u0>::value), ==, 0 );\nBOOST_MPL_ASSERT_RELATION( (bitand_<u1,u0>::value), ==, 0 );\nBOOST_MPL_ASSERT_RELATION( (bitand_<u0,u1>::value), ==, 0 );\nBOOST_MPL_ASSERT_RELATION( (bitand_<u0,uffffffff>::value), ==, 0 );\nBOOST_MPL_ASSERT_RELATION( (bitand_<u1,uffffffff>::value), ==, 1 );\nBOOST_MPL_ASSERT_RELATION( (bitand_<u8,uffffffff>::value), ==, 8 );\nSee also\nBitwise Operations ,Numeric Metafunction ,numeric_cast ,bitor_,bitxor_,shift_left\n4.8.2 bitor_\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n, typename T3 = unspecified\n...\n, typename T n=unspecified\n>\nstruct bitor_\n{\ntypedef unspecified type;\nRevision Date: 15th November 2004191 Metafunctions 4.8 Bitwise Operations\n};\nDescription\nReturns the result of bitwise or (|) operation of its arguments.\nHeader\n#include <boost/mpl/bitor.hpp>\n#include <boost/mpl/bitwise.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT1,T2,...Tn Integral Constant Operation’s arguments.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. 
See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. — end note]\nExpression semantics\nFor anyIntegral Constant sc1,c2,...cn:\ntypedef bitor_<c1, ...cn>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef integral_c<\ntypeof(c1::value | c2::value)\n, ( c1::value | c2::value )\n> c;\ntypedef bitor_<c,c3, ...cn>::type r;\ntypedef bitor_<c1, ...cn> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : bitor_<c1, ...cn>::type {};\nComplexity\nAmortized constant time.\nRevision Date: 15th November 20044.8 Bitwise Operations Metafunctions 192\nExample\ntypedef integral_c<unsigned,0> u0;\ntypedef integral_c<unsigned,1> u1;\ntypedef integral_c<unsigned,2> u2;\ntypedef integral_c<unsigned,8> u8;\ntypedef integral_c<unsigned,0xffffffff> uffffffff;\nBOOST_MPL_ASSERT_RELATION( (bitor_<u0,u0>::value), ==, 0 );\nBOOST_MPL_ASSERT_RELATION( (bitor_<u1,u0>::value), ==, 1 );\nBOOST_MPL_ASSERT_RELATION( (bitor_<u0,u1>::value), ==, 1 );\nBOOST_MPL_ASSERT_RELATION( (bitor_<u0,uffffffff>::value), ==, 0xffffffff );\nBOOST_MPL_ASSERT_RELATION( (bitor_<u1,uffffffff>::value), ==, 0xffffffff );\nBOOST_MPL_ASSERT_RELATION( (bitor_<u8,uffffffff>::value), ==, 0xffffffff );\nSee also\nBitwise Operations ,Numeric Metafunction ,numeric_cast ,bitand_,bitxor_,shift_left\n4.8.3 bitxor_\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n, typename T3 = unspecified\n...\n, typename T n=unspecified\n>\nstruct bitxor_\n{\ntypedef unspecified type;\n};\nDescription\nReturns the result of bitwise xor (^) operation of its arguments.\nHeader\n#include <boost/mpl/bitxor.hpp>\n#include <boost/mpl/bitwise.hpp>\nModel of\nNumeric Metafunction\nParameters\nRevision Date: 15th November 2004193 Metafunctions 4.8 Bitwise Operations\nParameter Requirement Description\nT1,T2,...Tn Integral Constant Operation’s arguments.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. 
— end note]\nExpression semantics\nFor anyIntegral Constant sc1,c2,...cn:\ntypedef bitxor_<c1, ...cn>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef integral_c<\ntypeof(c1::value ^ c2::value)\n, ( c1::value ^ c2::value )\n> c;\ntypedef bitxor_<c,c3, ...cn>::type r;\ntypedef bitxor_<c1, ...cn> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : bitxor_<c1, ...cn>::type {};\nComplexity\nAmortized constant time.\nExample\ntypedef integral_c<unsigned,0> u0;\ntypedef integral_c<unsigned,1> u1;\ntypedef integral_c<unsigned,2> u2;\ntypedef integral_c<unsigned,8> u8;\ntypedef integral_c<unsigned,0xffffffff> uffffffff;\nBOOST_MPL_ASSERT_RELATION( (bitxor_<u0,u0>::value), ==, 0 );\nBOOST_MPL_ASSERT_RELATION( (bitxor_<u1,u0>::value), ==, 1 );\nBOOST_MPL_ASSERT_RELATION( (bitxor_<u0,u1>::value), ==, 1 );\nBOOST_MPL_ASSERT_RELATION( (bitxor_<u0,uffffffff>::value), ==, 0xffffffff ^ 0 );\nBOOST_MPL_ASSERT_RELATION( (bitxor_<u1,uffffffff>::value), ==, 0xffffffff ^ 1 );\nBOOST_MPL_ASSERT_RELATION( (bitxor_<u8,uffffffff>::value), ==, 0xffffffff ^ 8 );\nSee also\nBitwise Operations ,Numeric Metafunction ,numeric_cast ,bitand_,bitor_,shift_left\nRevision Date: 15th November 20044.8 Bitwise Operations Metafunctions 194\n4.8.4 shift_left\nSynopsis\ntemplate<\ntypename T\n, typename Shift\n>\nstruct shift_left\n{\ntypedef unspecified type;\n};\nDescription\nReturns the result of bitwise shift left(<<) operation on T.\nHeader\n#include <boost/mpl/shift_left.hpp>\n#include <boost/mpl/bitwise.hpp>\nModel of\nNumeric Metafunction\nParameters\nParameter Requirement Description\nT Integral Constant A value to shift.\nShift Unsigned Integral Constant A shift distance.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. 
— end note]\nExpression semantics\nFor arbitrary Integral Constant cand unsigned Integral Constant shift:\ntypedef shift_left<c,shift>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef integral_c<\nc::value_type\n, ( c::value << shift::value )\n> r;\ntypedef shift_left<c,shift> r;\nRevision Date: 15th November 2004195 Metafunctions 4.8 Bitwise Operations\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : shift_left<c,shift>::type {};\nComplexity\nAmortized constant time.\nExample\ntypedef integral_c<unsigned,0> u0;\ntypedef integral_c<unsigned,1> u1;\ntypedef integral_c<unsigned,2> u2;\ntypedef integral_c<unsigned,8> u8;\nBOOST_MPL_ASSERT_RELATION( (shift_left<u0,u0>::value), ==, 0 );\nBOOST_MPL_ASSERT_RELATION( (shift_left<u1,u0>::value), ==, 1 );\nBOOST_MPL_ASSERT_RELATION( (shift_left<u1,u1>::value), ==, 2 );\nBOOST_MPL_ASSERT_RELATION( (shift_left<u2,u1>::value), ==, 4 );\nBOOST_MPL_ASSERT_RELATION( (shift_left<u8,u1>::value), ==, 16 );\nSee also\nBitwise Operations ,Numeric Metafunction ,numeric_cast ,shift_right ,bitand_\n4.8.5 shift_right\nSynopsis\ntemplate<\ntypename T\n, typename Shift\n>\nstruct shift_right\n{\ntypedef unspecified type;\n};\nDescription\nReturns the result of bitwise shift right (>>) operation on T.\nHeader\n#include <boost/mpl/shift_right.hpp>\n#include <boost/mpl/bitwise.hpp>\nModel of\nNumeric Metafunction\nRevision Date: 15th November 20044.8 Bitwise Operations Metafunctions 196\nParameters\nParameter Requirement Description\nT Integral Constant A value to shift.\nShift Unsigned Integral Constant A shift distance.\n[Note:The requirements listed in this specification are the ones imposed by the default implementation. See Numeric\nMetafunction conceptforthedetailsonhowtoprovideanimplementationforauser-definednumerictypethatdoesnot\nsatisfy the Integral Constant requirements. — end note]\nExpression semantics\nFor arbitrary Integral Constant cand unsigned Integral Constant shift:\ntypedef shift_right<c,shift>::type r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\ntypedef integral_c<\nc::value_type\n, ( c::value >> shift::value )\n> r;\ntypedef shift_right<c,shift> r;\nReturn type: Integral Constant .\nSemantics: Equivalent to\nstruct r : shift_right<c,shift>::type {};\nComplexity\nAmortized constant time.\nExample\ntypedef integral_c<unsigned,0> u0;\ntypedef integral_c<unsigned,1> u1;\ntypedef integral_c<unsigned,2> u2;\ntypedef integral_c<unsigned,8> u8;\nBOOST_MPL_ASSERT_RELATION( (shift_right<u0,u0>::value), ==, 0 );\nBOOST_MPL_ASSERT_RELATION( (shift_right<u1,u0>::value), ==, 1 );\nBOOST_MPL_ASSERT_RELATION( (shift_right<u1,u1>::value), ==, 0 );\nBOOST_MPL_ASSERT_RELATION( (shift_right<u2,u1>::value), ==, 1 );\nBOOST_MPL_ASSERT_RELATION( (shift_right<u8,u1>::value), ==, 4 );\nSee also\nBitwise Operations ,Numeric Metafunction ,numeric_cast ,shift_left ,bitand_\nRevision Date: 15th November 2004197 Metafunctions 4.10 Miscellaneous\n4.9 Trivial\nThe MPL provides a number of Trivial Metafunction s that a nothing more than thin wrappers for a differently-named\nclass nested type members. While important in the context of in-place metafunction composition , these metafunctions\nhavesolittletothemthatpresentingtheminthesameformatastherestofthecompomentsinthismanualwouldresult\nin more boilerplate syntactic baggage than the actual content. 
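As a purely illustrative sketch (not part of the original manual text), typical usage of two of these wrappers over mpl::pair looks as follows; nothing is assumed beyond the headers listed, and pair.hpp is the header that also declares first and second.
#include <boost/mpl/pair.hpp>
#include <boost/mpl/assert.hpp>
#include <boost/type_traits/is_same.hpp>
using namespace boost::mpl;
typedef pair<int,long> p;
// first<p> and second<p> do nothing more than reach for p::first and p::second
BOOST_MPL_ASSERT(( boost::is_same< first<p>::type, int > ));
BOOST_MPL_ASSERT(( boost::is_same< second<p>::type, long > ));
Documenting every such wrapper in the full per-component format would add little beyond one-liners of this kind.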
To avoid this problem, we instead factor out the common\nmetafunctions’ requirements into the corresponding concept and gather all of them in a single place — this subsection\n— in a compact table form that is presented below.\n4.9.1 Trivial Metafunctions Summary\nIn the following table, xis an arbitrary class type.\nMetafunction Header\nfirst<x>::type #include <boost/mpl/pair.hpp>\nsecond<x>::type #include <boost/mpl/pair.hpp>\nbase<x>::type #include <boost/mpl/base.hpp>\nSee Also\nMetafunctions ,Trivial Metafunction\n4.10 Miscellaneous\n4.10.1 identity\nSynopsis\ntemplate<\ntypename X\n>\nstruct identity\n{\ntypedef X type;\n};\nDescription\nTheidentitymetafunction. Returns Xunchanged.\nHeader\n#include <boost/mpl/identity.hpp>\nModel of\nMetafunction\nParameters\nRevision Date: 15th November 20044.10 Miscellaneous Metafunctions 198\nParameter Requirement Description\nX Any type An argument to be returned.\nExpression semantics\nFor an arbitrary type x:\ntypedef identity<x>::type r;\nReturn type: A type.\nSemantics: Equivalent to\ntypedef x r;\nPostcondition: is_same<r,x>::value == true .\nExample\ntypedef apply< identity<_1>, char >::type t1;\ntypedef apply< identity<_2>, char,int >::type t2;\nBOOST_MPL_ASSERT(( is_same< t1, char > ));\nBOOST_MPL_ASSERT(( is_same< t2, int > ));\nSee also\nMetafunctions ,Placeholders ,Trivial Metafunctions ,always,apply\n4.10.2 always\nSynopsis\ntemplate<\ntypename X\n>\nstruct always\n{\n//unspecified\n//...\n};\nDescription\nalways<X> specialization is a variadic Metafunction Class always returning the same type, X, regardless of the number\nand types of passed arguments.\nHeader\n#include <boost/mpl/always.hpp>\nRevision Date: 15th November 2004199 Metafunctions 4.10 Miscellaneous\nModel of\nMetafunction Class\nParameters\nParameter Requirement Description\nX Any type A type to be returned.\nExpression semantics\nFor an arbitrary type x:\ntypedef always<x> f;\nReturn type: Metafunction Class .\nSemantics: Equivalent to\nstruct f : bind< identity<_1>, x > {};\nExample\ntypedef always<true_> always_true;\nBOOST_MPL_ASSERT(( apply< always_true,false_> ));\nBOOST_MPL_ASSERT(( apply< always_true,false_,false_ > ));\nBOOST_MPL_ASSERT(( apply< always_true,false_,false_,false_ > ));\nSee also\nMetafunctions ,Metafunction Class ,identity ,bind,apply\n4.10.3 inherit\nSynopsis\ntemplate<\ntypename T1, typename T2\n>\nstruct inherit2\n{\ntypedef unspecified type;\n};\n...\ntemplate<\ntypename T1, typename T2, ... typename T n\n>\nstruct inherit n\n{\ntypedef unspecified type;\nRevision Date: 15th November 20044.10 Miscellaneous Metafunctions 200\n};\ntemplate<\ntypename T1\n, typename T2\n...\n, typename T n=unspecified\n>\nstruct inherit\n{\ntypedef unspecified type;\n};\nDescription\nReturnsanunspecifiedclasstypepublicallyderivedfrom T1,T2,...Tn. 
Guaranteesthatderivationfrom empty_base is\nalways a no-op, regardless of the position and number of empty_base classes in T1,T2,...Tn.\nHeader\n#include <boost/mpl/inherit.hpp>\nModel of\nMetafunction\nParameters\nParameter Requirement Description\nT1,T2,...Tn A class type Classes to derived from.\nExpression semantics\nFor artibrary class types t1,t2,...tn:\ntypedef inherit2<t1,t2>::type r;\nReturn type: A class type.\nPrecondition: t1andt2are complete types.\nSemantics: If both t1andt2are identical to empty_base , equivalent to\ntypedef empty_base r;\notherwise, if t1is identical to empty_base , equivalent to\ntypedef t2 r;\notherwise, if t2is identical to empty_base , equivalent to\ntypedef t1 r;\notherwise equivalent to\nstruct r : t1, t2 {};\nRevision Date: 15th November 2004201 Metafunctions 4.10 Miscellaneous\ntypedef inherit n<t1,t2, ...tn>::type r;\nReturn type: A class type.\nPrecondition: t1,t2,...tnare complete types.\nSemantics: Equivalent to\nstruct r\n: inherit2<\ninherit n-1<t1,t2, ...tn-1>::type\n, tn\n>\n{\n};\ntypedef inherit<t1,t2, ...tn>::type r;\nPrecondition: t1,t2,...tnare complete types.\nReturn type: A class type.\nSemantics: Equivalent to\ntypedef inherit n<t1,t2, ...tn>::type r;\nComplexity\nAmortized constant time.\nExample\nstruct udt1 { int n; };\nstruct udt2 {};\ntypedef inherit<udt1,udt2>::type r1;\ntypedef inherit<empty_base,udt1>::type r2;\ntypedef inherit<empty_base,udt1,empty_base,empty_base>::type r3;\ntypedef inherit<udt1,empty_base,udt2>::type r4;\ntypedef inherit<empty_base,empty_base>::type r5;\nBOOST_MPL_ASSERT(( is_base_and_derived< udt1, r1> ));\nBOOST_MPL_ASSERT(( is_base_and_derived< udt2, r1> ));\nBOOST_MPL_ASSERT(( is_same< r2, udt1> ));\nBOOST_MPL_ASSERT(( is_same< r3, udt1 > ));\nBOOST_MPL_ASSERT(( is_base_and_derived< udt1, r4 > ));\nBOOST_MPL_ASSERT(( is_base_and_derived< udt2, r4 > ));\nBOOST_MPL_ASSERT(( is_same< r5, empty_base > ));\nSee also\nMetafunctions ,empty_base ,inherit_linearly ,identity\nRevision Date: 15th November 20044.10 Miscellaneous Metafunctions 202\n4.10.4 inherit_linearly\nSynopsis\ntemplate<\ntypename Types\n, typename Node\n, typename Root = empty_base\n>\nstruct inherit_linearly\n: fold<Types,Root,Node>\n{\n};\nDescription\nA convenience wrapper for foldto use in the context of sequence-driven class composition. Returns the result the\nsuccessive application of binary Nodeto the result of the previous Nodeinvocation ( Rootif it’s the first call) and every\ntype in the Forward Sequence Typesin order.\nHeader\n#include <boost/mpl/inherit_linearly.hpp>\nModel of\nMetafunction\nParameters\nParameter Requirement Description\nTypes Forward Sequence Types to inherit from.\nNode BinaryLambda Expression A derivation metafunction.\nRoot A class type A type to be placed at the rootof the class hierarchy.\nExpression semantics\nFor anyForward Sequence types, binaryLambda Expression node, and arbitrary class type root:\ntypedef inherit_linearly<types,node,root>::type r;\nReturn type: A class type.\nSemantics: Equivalent to\ntypedef fold<types,root,node>::type r;\nComplexity\nLinear. 
Exactly size<types>::value applications of node.\nRevision Date: 15th November 2004203 Metafunctions 4.10 Miscellaneous\nExample\ntemplate< typename T > struct tuple_field\n{\nT field;\n};\ntemplate< typename T >\ninline\nT& field(tuple_field<T>& t)\n{\nreturn t.field;\n}\ntypedef inherit_linearly<\nvector<int,char const*,bool>\n, inherit< _1, tuple_field<_2> >\n>::type tuple;\nint main()\n{\ntuple t;\nfield<int>(t) = -1;\nfield<char const*>(t) = \"text\";\nfield<bool>(t) = false;\nstd::cout\n<< field<int>(t) << ’n’\n<< field<char const*>(t) << ’n’\n<< field<bool>(t) << ’n’\n;\n}\nSee also\nMetafunctions ,Algorithms ,inherit,empty_base ,fold,reverse_fold\n4.10.5 numeric_cast\nSynopsis\ntemplate<\ntypename SourceTag\n, typename TargetTag\n>\nstruct numeric_cast;\nDescription\nEach numeric_cast specializationisauser-specializedunary MetafunctionClass providingaconversionbetweentwo\nnumeric types.\nRevision Date: 15th November 20044.10 Miscellaneous Metafunctions 204\nHeader\n#include <boost/mpl/numeric_cast.hpp>\nParameters\nParameter Requirement Description\nSourceTag Integral Constant A tag for the conversion’s source type.\nTargetTag Integral Constant A tag for the conversion’s destination type.\nExpression semantics\nIfxandyare two numeric types, xis convertible to y, and x_tagandy_tagare the types’ corresponding Integral\nConstant tags:\ntypedef apply_wrap2< numeric_cast<x_tag,y_tag>,x >::type r;\nReturn type: A type.\nSemantics: ris a value of xconverted to the type of y.\nComplexity\nUnspecified.\nExample\nstruct complex_tag : int_<10> {};\ntemplate< typename Re, typename Im > struct complex\n{\ntypedef complex_tag tag;\ntypedef complex type;\ntypedef Re real;\ntypedef Im imag;\n};\ntemplate< typename C > struct real : C::real {};\ntemplate< typename C > struct imag : C::imag {};\nnamespace boost { namespace mpl {\ntemplate<> struct numeric_cast< integral_c_tag,complex_tag >\n{\ntemplate< typename N > struct apply\n: complex< N, integral_c< typename N::value_type, 0 > >\n{\n};\n};\ntemplate<>\nstruct plus_impl< complex_tag,complex_tag >\nRevision Date: 15th November 2004205 Metafunctions 4.10 Miscellaneous\n{\ntemplate< typename N1, typename N2 > struct apply\n: complex<\nplus< typename N1::real, typename N2::real >\n, plus< typename N1::imag, typename N2::imag >\n>\n{\n};\n};\n}}\ntypedef int_<2> i;\ntypedef complex< int_<5>, int_<-1> > c1;\ntypedef complex< int_<-5>, int_<1> > c2;\ntypedef plus<c1,i> r4;\nBOOST_MPL_ASSERT_RELATION( real<r4>::value, ==, 7 );\nBOOST_MPL_ASSERT_RELATION( imag<r4>::value, ==, -1 );\ntypedef plus<i,c2> r5;\nBOOST_MPL_ASSERT_RELATION( real<r5>::value, ==, -3 );\nBOOST_MPL_ASSERT_RELATION( imag<r5>::value, ==, 1 );\nSee also\nMetafunctions ,Numeric Metafunction ,plus,minus,times\n4.10.6 min\nSynopsis\ntemplate<\ntypename N1\n, typename N2\n>\nstruct min\n{\ntypedef unspecified type;\n};\nDescription\nReturns the smaller of its two arguments.\nHeader\n#include <boost/mpl/min_max.hpp>\nModel of\nMetafunction\nRevision Date: 15th November 20044.10 Miscellaneous Metafunctions 206\nParameters\nParameter Requirement Description\nN1,N2 Any type Types to compare.\nExpression semantics\nFor arbitrary types xandy:\ntypedef min<x,y>::type r;\nReturn type: A type.\nPrecondition: less<x,y>::value is a well-formed integral constant expression.\nSemantics: Equivalent to\ntypedef if_< less<x,y>,x,y >::type r;\nComplexity\nConstant time.\nExample\ntypedef fold<\nvector_c<int,1,7,0,-2,5,-1>\n, int_<-10>\n, min<_1,_2>\n>::type r;\nBOOST_MPL_ASSERT(( is_same< r, int_<-10> > 
));\nSee also\nMetafunctions ,comparison ,max,less,min_element\n4.10.7 max\nSynopsis\ntemplate<\ntypename N1\n, typename N2\n>\nstruct max\n{\ntypedef unspecified type;\n};\nDescription\nReturns the larger of its twoarguments.\nRevision Date: 15th November 2004207 Metafunctions 4.10 Miscellaneous\nHeader\n#include <boost/mpl/min_max.hpp>\nModel of\nMetafunction\nParameters\nParameter Requirement Description\nN1,N2 Any type Types to compare.\nExpression semantics\nFor arbitrary types xandy:\ntypedef max<x,y>::type r;\nReturn type: A type.\nPrecondition: less<x,y>::value is a well-formed integral constant expression.\nSemantics: Equivalent to\ntypedef if_< less<x,y>,y,x >::type r;\nComplexity\nConstant time.\nExample\ntypedef fold<\nvector_c<int,1,7,0,-2,5,-1>\n, int_<10>\n, max<_1,_2>\n>::type r;\nBOOST_MPL_ASSERT(( is_same< r, int_<10> > ));\nSee also\nMetafunctions ,comparison ,min,less,max_element\n4.10.8 sizeof_\nSynopsis\ntemplate<\ntypename X\n>\nstruct sizeof_\nRevision Date: 15th November 20044.10 Miscellaneous Metafunctions 208\n{\ntypedef unspecified type;\n};\nDescription\nReturns the result of a sizeof(X) expression wrapped into an Integral Constant of the corresponding type,\nstd::size_t .\nHeader\n#include <boost/mpl/sizeof.hpp>\nModel of\nMetafunction\nParameters\nParameter Requirement Description\nX Any type A type to compute the sizeoffor.\nExpression semantics\nFor an arbitrary type x:\ntypedef sizeof_<x>::type n;\nReturn type: Integral Constant .\nPrecondition: xis a complete type.\nSemantics: Equivalent to\ntypedef size_t< sizeof(x) > n;\nComplexity\nConstant time.\nExample\nstruct udt { char a[100]; };\nBOOST_MPL_ASSERT_RELATION( sizeof_<char>::value, ==, sizeof(char) );\nBOOST_MPL_ASSERT_RELATION( sizeof_<int>::value, ==, sizeof(int) );\nBOOST_MPL_ASSERT_RELATION( sizeof_<double>::value, ==, sizeof(double) );\nBOOST_MPL_ASSERT_RELATION( sizeof_<udt>::value, ==, sizeof(my) );\nSee also\nMetafunctions ,Integral Constant ,size_t\nRevision Date: 15th November 2004Chapter 5 Data Types\n5.1 Concepts\n5.1.1 Integral Constant\nDescription\nAnIntegral Constant is a holder class for a compile-time value of an integral type. Every Integral Constant is also a\nnullaryMetafunction ,returningitself. 
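The two properties stated so far can be illustrated with a minimal sketch; it is not part of the original reference text and assumes only the int_ wrapper documented later in this chapter together with the usual assertion headers.
#include <boost/mpl/int.hpp>
#include <boost/mpl/assert.hpp>
#include <boost/type_traits/is_same.hpp>
typedef boost::mpl::int_<3> three;
// the wrapped compile-time value and its integral type
BOOST_MPL_ASSERT_RELATION( three::value, ==, 3 );
BOOST_MPL_ASSERT(( boost::is_same< three::value_type, int > ));
// invoked as a nullary metafunction, an Integral Constant returns itself
BOOST_MPL_ASSERT(( boost::is_same< three::type, three > ));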
Anintegralconstant objectisimplicitlyconvertibletothecorrespondingrun-time\nvalue of the wrapped integraltype.\nExpression requirements\nIn the following table and subsequent specifications, nis a model of Integral Constant .\nExpression Type Complexity\nn::value_type An integral type Constant time.\nn::value An integral constant expression Constant time.\nn::type Integral Constant Constant time.\nnext<n>::type Integral Constant Constant time.\nprior<n>::type Integral Constant Constant time.\nn::value_type const c = n() Constant time.\nExpression semantics\nExpression Semantics\nn::value_type A cv-unqualified type of n::value .\nn::value The value of the wrapped integral constant.\nn::type is_same<n::type,n>::value == true .\nnext<n>::type AnIntegral Constant cof type n::value_type such that\nc::value == n::value + 1 .\nprior<n>::type AnIntegral Constant cof type n::value_type such that\nc::value == n::value - 1 .\nn::value_type const c = n() c == n::value .\nModels\n—bool_\n—int_5.2 Numeric Data Types 210\n—long_\n—integral_c\nSee also\nData Types ,Integral Sequence Wrapper ,integral_c\n5.2 Numeric\n5.2.1 bool_\nSynopsis\ntemplate<\nbool C\n>\nstruct bool_\n{\n//unspecified\n// ...\n};\ntypedef bool_<true> true_;\ntypedef bool_<false> false_;\nDescription\nA boolean Integral Constant wrapper.\nHeader\n#include <boost/mpl/bool.hpp>\nModel of\nIntegral Constant\nParameters\nParameter Requirement Description\nC A boolean integral constant A value to wrap.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Integral Constant .\nFor arbitrary integral constant c:\nExpression Semantics\nbool_<c> AnIntegralConstant xsuchthat x::value == c andx::value_type isidentical\ntobool.\nRevision Date: 15th November 2004211 Data Types 5.2 Numeric\nExample\nBOOST_MPL_ASSERT(( is_same< bool_<true>::value_type, bool > ));\nBOOST_MPL_ASSERT(( is_same< bool_<true>, true_ > )); }\nBOOST_MPL_ASSERT(( is_same< bool_<true>::type, bool_<true> > ));\nBOOST_MPL_ASSERT_RELATION( bool_<true>::value, ==, true );\nassert( bool_<true>() == true );\nSee also\nData Types ,Integral Constant ,int_,long_,integral_c\n5.2.2 int_\nSynopsis\ntemplate<\nint N\n>\nstruct int_\n{\n//unspecified\n// ...\n};\nDescription\nAnIntegral Constant wrapper for int.\nHeader\n#include <boost/mpl/int.hpp>\nModel of\nIntegral Constant\nParameters\nParameter Requirement Description\nN An integral constant A value to wrap.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Integral Constant .\nFor arbitrary integral constant n:\nExpression Semantics\nint_<c> AnIntegralConstant xsuchthat x::value == c andx::value_type isidentical\ntoint.\nRevision Date: 15th November 20045.2 Numeric Data Types 212\nExample\ntypedef int_<8> eight;\nBOOST_MPL_ASSERT(( is_same< eight::value_type, int > ));\nBOOST_MPL_ASSERT(( is_same< eight::type, eight > ));\nBOOST_MPL_ASSERT(( is_same< next< eight >::type, int_<9> > ));\nBOOST_MPL_ASSERT(( is_same< prior< eight >::type, int_<7> > ));\nBOOST_MPL_ASSERT_RELATION( (eight::value), ==, 8 );\nassert( eight() == 8 );\nSee also\nData Types ,Integral Constant ,long_,size_t,integral_c\n5.2.3 long_\nSynopsis\ntemplate<\nlong N\n>\nstruct long_\n{\n//unspecified\n// ...\n};\nDescription\nAnIntegral Constant wrapper for long.\nHeader\n#include <boost/mpl/long.hpp>\nModel of\nIntegral Constant\nParameters\nParameter Requirement Description\nN An integral constant A value to wrap.\nExpression 
semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Integral Constant .\nFor arbitrary integral constant n:\nRevision Date: 15th November 2004213 Data Types 5.2 Numeric\nExpression Semantics\nlong_<c> AnIntegralConstant xsuchthat x::value == c andx::value_type isidentical\ntolong.\nExample\ntypedef long_<8> eight;\nBOOST_MPL_ASSERT(( is_same< eight::value_type, long > ));\nBOOST_MPL_ASSERT(( is_same< eight::type, eight > ));\nBOOST_MPL_ASSERT(( is_same< next< eight >::type, long_<9> > ));\nBOOST_MPL_ASSERT(( is_same< prior< eight >::type, long_<7> > ));\nBOOST_MPL_ASSERT_RELATION( (eight::value), ==, 8 );\nassert( eight() == 8 );\nSee also\nData Types ,Integral Constant ,int_,size_t,integral_c\n5.2.4 size_t\nSynopsis\ntemplate<\nstd::size_t N\n>\nstruct size_t\n{\n//unspecified\n// ...\n};\nDescription\nAnIntegral Constant wrapper for std::size_t .\nHeader\n#include <boost/mpl/size_t.hpp>\nModel of\nIntegral Constant\nParameters\nParameter Requirement Description\nN An integral constant A value to wrap.\nRevision Date: 15th November 20045.2 Numeric Data Types 214\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Integral Constant .\nFor arbitrary integral constant n:\nExpression Semantics\nsize_t<c> AnIntegralConstant xsuchthat x::value == c andx::value_type isidentical\ntostd::size_t .\nExample\ntypedef size_t<8> eight;\nBOOST_MPL_ASSERT(( is_same< eight::value_type, std::size_t > ));\nBOOST_MPL_ASSERT(( is_same< eight::type, eight > ));\nBOOST_MPL_ASSERT(( is_same< next< eight >::type, size_t<9> > ));\nBOOST_MPL_ASSERT(( is_same< prior< eight >::type, size_t<7> > ));\nBOOST_MPL_ASSERT_RELATION( (eight::value), ==, 8 );\nassert( eight() == 8 );\nSee also\nData Types ,Integral Constant ,int_,long_,integral_c\n5.2.5 integral_c\nSynopsis\ntemplate<\ntypename T, T N\n>\nstruct integral_c\n{\n//unspecified\n// ...\n};\nDescription\nA generic Integral Constant wrapper.\nHeader\n#include <boost/mpl/integral_c.hpp>\nModel of\nIntegral Constant\nRevision Date: 15th November 2004215 Data Types 5.2 Numeric\nParameters\nRevision Date: 15th November 20045.3 Miscellaneous Data Types 216\nParameter Requirement Description\nT An integral type Wrapper’s value type.\nN An integral constant A value to wrap.\nExpression semantics\nThe semantics of an expression are defined only where they differ from, or are not definedin Integral Constant .\nFor arbitrary integral type tand integral constant n:\nExpression Semantics\nintegral_c<t,c> AnIntegralConstant xsuchthat x::value == c andx::value_type isiden-\ntical to t.\nExample\ntypedef integral_c<short,8> eight;\nBOOST_MPL_ASSERT(( is_same< eight::value_type, short > ));\nBOOST_MPL_ASSERT(( is_same< eight::type, eight > ));\nBOOST_MPL_ASSERT(( is_same< next< eight >::type, integral_c<short,9> > ));\nBOOST_MPL_ASSERT(( is_same< prior< eight >::type, integral_c<short,7> > ));\nBOOST_MPL_ASSERT_RELATION( (eight::value), ==, 8 );\nassert( eight() == 8 );\nSee also\nData Types ,Integral Constant ,bool_,int_,long_,size_t\n5.3 Miscellaneous\n5.3.1 pair\nSynopsis\ntemplate<\ntypename T1\n, typename T2\n>\nstruct pair\n{\ntypedef pair type;\ntypedef T1 first;\ntypedef T2 second;\n};\nDescription\nA transparent holder for two arbitrary types.\nRevision Date: 15th November 2004217 Data Types 5.3 Miscellaneous\nHeader\n#include <boost/mpl/pair.hpp>\nExample\nCount a number of elements in the sequence together with a number of negative elements among 
these.\ntypedef fold<\nvector_c<int,-1,0,5,-7,-2,4,5,7>\n, pair< int_<0>, int_<0> >\n, pair<\nnext< first<_1> >\n, if_<\nless< _2, int_<0> >\n, next< second<_1> >\n, second<_1>\n>\n>\n>::type p;\nBOOST_MPL_ASSERT_RELATION( p::first::value, ==, 8 );\nBOOST_MPL_ASSERT_RELATION( p::second::value, ==, 3 );\nSee also\nData Types ,Sequences ,first,second\n5.3.2 empty_base\nSynopsis\nstruct empty_base {};\nDescription\nAn empty base class. Inheritance from empty_base through the inherit metafunction is a no-op.\nHeader\n#include <boost/mpl/empty_base.hpp>\nSee also\nData Types ,inherit,inherit_linearly ,void_\nRevision Date: 15th November 20045.3 Miscellaneous Data Types 218\n5.3.3 void_\nSynopsis\nstruct void_\n{\ntypedef void_ type;\n};\ntemplate< typename T > struct is_void;\nDescription\nvoid_is a generic type placeholder representing “nothing”.\nHeader\n#include <boost/mpl/void.hpp>\nSee also\nData Types ,pair,empty_base ,bool_,int_,integral_c\nRevision Date: 15th November 2004Chapter 6 Macros\nBeing atemplatemetaprogramming framework, the MPL concentrates on getting one thing done well and leaves most\nof the clearly preprocessor-related tasks to the corresponding specialized libraries [ PRE], [Ve03]. But whether we\nlike it or not, macros play an important role on today’s C++ metaprogramming, and some of the useful MPL-level\nfunctionalitycannotbeimplementedwithoutleakingitspreprocessor-dependentimplementationnatureintothelibrary’s\npublic interface.\n6.1 Asserts\nThe MPL supplies a suite of static assertion macros that are specifically designed to generate maximally useful and\ninformative error messageswithin the diagnostic capabilities of each compiler.\nAll assert macros can be used at class, function, or namespace scope.\n6.1.1 BOOST_MPL_ASSERT\nSynopsis\n#define BOOST_MPL_ASSERT( pred ) \\\nunspecified token sequence \\\n/**/\nDescription\nGenerates a compilation error when the predicate predholds false.\nHeader\n#include <boost/mpl/assert.hpp>\nParameters\nParameter Requirement Description\npred Boolean nullary Metafunction A predicate to be asserted.\nExpression semantics\nFor any boolean nullary Metafunction pred:\nBOOST_MPL_ASSERT(( pred ));\nReturn type: None.6.1 Asserts Macros 220\nSemantics: Generates a compilation error if pred::type::value != true , otherwise has no effect.\nNote that double parentheses are required even if no commas appear in the condition.\nWhen possible within the compiler’s diagnostic capabilities, the error message will include the predi-\ncate’s full type name, and have ageneral form of:\n... 
************ pred::************ ...\nExample\ntemplate< typename T, typename U > struct my\n{\n// ...\nBOOST_MPL_ASSERT(( is_same< T,U > ));\n};\nmy<void*,char*> test;\n// In instantiation of ‘my<void, char*>’:\n// instantiated from here\n// conversion from ‘\n// mpl_::failed************boost::is_same<void, char*>::************’ to\n// non-scalar type ‘mpl_::assert<false>’ requested\nSee also\nAsserts,BOOST_MPL_ASSERT_NOT ,BOOST_MPL_ASSERT_MSG ,BOOST_MPL_ASSERT_RELATION\n6.1.2 BOOST_MPL_ASSERT_MSG\nSynopsis\n#define BOOST_MPL_ASSERT_MSG( condition, message, types ) \\\nunspecified token sequence \\\n/**/\nDescription\nGenerates a compilation error with an embedded custom message when the conditiondoesn’t hold.\nHeader\n#include <boost/mpl/assert.hpp>\nParameters\nParameter Requirement Description\ncondition An integral constant expression A condition to be asserted.\nmessage A legal identifier token A custom message in a form of a legal C++ identifier\ntoken.\nRevision Date: 15th November 2004221 Macros 6.1 Asserts\nParameter Requirement Description\ntypes A legal function parameter list Aparenthizedlistoftypestobedisplayedintheerror\nmessage.\nExpression semantics\nFor any integral constant expression expr, legal C++ identifier message, and arbitrary types t1,t2,...tn:\nBOOST_MPL_ASSERT_MSG( expr, message, (t1, t2,... tn) );\nReturn type: None.\nPrecondition: t1,t2,...tnare non- void.\nSemantics: Generates a compilation error if expr::value != true , otherwise has no effect.\nWhen possible within the compiler’s diagnostic capabilities, the error message will include the mes-\nsageidentifier and the parenthized list of t1,t2,...tntypes, and have a general formof:\n... ************( ...::message )************)(t1, t2,... tn) ...\nBOOST_MPL_ASSERT_MSG( expr, message, (types<t1, t2,... tn>) );\nReturn type: None.\nPrecondition: None.\nSemantics: Generates a compilation error if expr::value != true , otherwise has no effect.\nWhenpossiblewithinthecompiler’sdiagnosticscapabilities,theerrormessagewillincludethe mes-\nsageidentifier and the list of t1,t2,...tntypes, and have a general formof:\n... ************( ...::message )************)(types<t1, t2,... 
tn>) ...\nExample\ntemplate< typename T > struct my\n{\n// ...\nBOOST_MPL_ASSERT_MSG(\nis_integral<T>::value\n, NON_INTEGRAL_TYPES_ARE_NOT_ALLOWED\n, (T)\n);\n};\nmy<void*> test;\n// In instantiation of ‘my<void*>’:\n// instantiated from here\n// conversion from ‘\n// mpl_::failed************(my<void*>::\n// NON_INTEGRAL_TYPES_ARE_NOT_ALLOWED::************)(void*)\n// ’ to non-scalar type ‘mpl_::assert<false>’ requested\nSee also\nAsserts,BOOST_MPL_ASSERT ,BOOST_MPL_ASSERT_NOT ,BOOST_MPL_ASSERT_RELATION\nRevision Date: 15th November 20046.1 Asserts Macros 222\n6.1.3 BOOST_MPL_ASSERT_NOT\nSynopsis\n#define BOOST_MPL_ASSERT_NOT( pred ) \\\nunspecified token sequence \\\n/**/\nDescription\nGenerates a compilation error when predicate holds true.\nHeader\n#include <boost/mpl/assert.hpp>\nParameters\nParameter Requirement Description\npred Boolean nullary Metafunction A predicate to be asserted to be false.\nExpression semantics\nFor any boolean nullary Metafunction pred:\nBOOST_MPL_ASSERT_NOT(( pred ));\nReturn type: None.\nSemantics: Generates a compilation error if pred::type::value != false , otherwise has no effect.\nNote that double parentheses are required even if no commas appear in the condition.\nWhen possible within the compiler’s diagnostic capabilities, the error message will include the predi-\ncate’s full type name, and have ageneral form of:\n... ************boost::mpl::not_< pred >::************ ...\nExample\ntemplate< typename T, typename U > struct my\n{\n// ...\nBOOST_MPL_ASSERT_NOT(( is_same< T,U > ));\n};\nmy<void,void> test;\n// In instantiation of ‘my<void, void>’:\n// instantiated from here\n// conversion from ‘\n// mpl_::failed************boost::mpl::not_<boost::is_same<void, void>\n// >::************’ to non-scalar type ‘mpl_::assert<false>’ requested\nRevision Date: 15th November 2004223 Macros 6.1 Asserts\nSee also\nAsserts,BOOST_MPL_ASSERT ,BOOST_MPL_ASSERT_MSG ,BOOST_MPL_ASSERT_RELATION\n6.1.4 BOOST_MPL_ASSERT_RELATION\nSynopsis\n#define BOOST_MPL_ASSERT_RELATION( x, relation, y ) \\\nunspecified token sequence \\\n/**/\nDescription\nA specialized assertion macro for checking numerical conditions. Generates a compilation error when the condition (\nx relation y ) doesn’t hold.\nHeader\n#include <boost/mpl/assert.hpp>\nParameters\nParameter Requirement Description\nx An integral constant Left operand of the checked relation.\ny An integral constant Right operand of the checked relation.\nrelation A C++ operator token An operator token for the relation being checked.\nExpression semantics\nFor any integral constants x,yand a legal C++ operator token op:\nBOOST_MPL_ASSERT_RELATION( x, op, y );\nReturn type: None.\nSemantics: Generates a compilation error if ( x op y ) != true , otherwise has no effect.\nWhenpossiblewithinthecompiler’sdiagnosticcapabilities,theerrormessagewillincludeanameof\nthe relation being checked, the actual values of both operands, and have a general form of:\n... 
************ ...assert_relation<op, x, y>::************) ...\nExample\ntemplate< typename T, typename U > struct my\n{\n// ...\nBOOST_MPL_ASSERT_RELATION( sizeof(T), <, sizeof(U) );\n};\nmy<char[50],char[10]> test;\nRevision Date: 15th November 20046.2 Introspection Macros 224\n// In instantiation of ‘my<char[50], char[10]>’:\n// instantiated from here\n// conversion from ‘\n// mpl_::failed************mpl_::assert_relation<less, 50, 10>::************’\n// to non-scalar type ‘mpl_::assert<false>’ requested\nSee also\nAsserts,BOOST_MPL_ASSERT ,BOOST_MPL_ASSERT_NOT ,BOOST_MPL_ASSERT_MSG\n6.2 Introspection\n6.2.1 BOOST_MPL_HAS_XXX_TRAIT_DEF\nSynopsis\n#define BOOST_MPL_HAS_XXX_TRAIT_DEF(name) \\\nunspecified token sequence \\\n/**/\nDescription\nExpands into a definition of a boolean unary Metafunction has_name such that for any type x has_name<x>::value\n== true if and only if xis a class type and has a nested type memeber x::name.\nOn the deficient compilers not capabale of performing the detection, has_name<x>::value always returns false. A\nboolean configuraion macro, BOOST_MPL_CFG_NO_HAS_XXX , is provided to signal or override the “deficient” status of\na particular compiler.\n[Note: BOOST_MPL_HAS_XXX_TRAIT_DEF is a simplified front end to the BOOST_MPL_HAS_XXX_TRAIT_NAMED_DEF\nintrospection macro — end note]\nHeader\n#include <boost/mpl/has_xxx.hpp>\nParameters\nParameter Requirement Description\nname A legal identifier token A name of the member being detected.\nExpression semantics\nFor any legal C++ identifier name:\nBOOST_MPL_HAS_XXX_TRAIT_DEF(name)\nPrecondition: Appears at namespace scope.\nReturn type: None.\nSemantics: Equivalent to\nBOOST_MPL_HAS_XXX_TRAIT_NAMED_DEF(\nBOOST_PP_CAT(has_,name), name, false\nRevision Date: 15th November 2004225 Macros 6.2 Introspection\n)\nExample\nBOOST_MPL_HAS_XXX_TRAIT_DEF(has_xxx)\nstruct test1 {};\nstruct test2 { void xxx(); };\nstruct test3 { int xxx; };\nstruct test4 { static int xxx(); };\nstruct test5 { template< typename T > struct xxx {}; };\nstruct test6 { typedef int xxx; };\nstruct test7 { struct xxx; };\nstruct test8 { typedef void (*xxx)(); };\nstruct test9 { typedef void (xxx)(); };\nBOOST_MPL_ASSERT_NOT(( has_xxx<test1> ));\nBOOST_MPL_ASSERT_NOT(( has_xxx<test2> ));\nBOOST_MPL_ASSERT_NOT(( has_xxx<test3> ));\nBOOST_MPL_ASSERT_NOT(( has_xxx<test4> ));\nBOOST_MPL_ASSERT_NOT(( has_xxx<test5> ));\n#if !defined(BOOST_MPL_CFG_NO_HAS_XXX)\nBOOST_MPL_ASSERT(( has_xxx<test6> ));\nBOOST_MPL_ASSERT(( has_xxx<test7> ));\nBOOST_MPL_ASSERT(( has_xxx<test8> ));\nBOOST_MPL_ASSERT(( has_xxx<test9> ));\n#endif\nBOOST_MPL_ASSERT(( has_xxx<test6,true_> ));\nBOOST_MPL_ASSERT(( has_xxx<test7,true_> ));\nBOOST_MPL_ASSERT(( has_xxx<test8,true_> ));\nBOOST_MPL_ASSERT(( has_xxx<test9,true_> ));\nSee also\nMacros,BOOST_MPL_HAS_XXX_TRAIT_NAMED_DEF ,BOOST_MPL_CFG_NO_HAS_XXX\n6.2.2 BOOST_MPL_HAS_XXX_TRAIT_NAMED_DEF\nSynopsis\n#define BOOST_MPL_HAS_XXX_TRAIT_NAMED_DEF(trait, name, default_) \\\nunspecified token sequence \\\n/**/\nDescription\nExpandsintoadefinitionofabooleanunary Metafunction traitsuchthatforanytype x trait<x>::value == true\nif and only if xis a class type and has a nested type memeber x::name.\nRevision Date: 15th November 20046.2 Introspection Macros 226\nOn the deficient compilers not capabale of performing the detection, trait<x>::value always returns a fallback\nvalue default_ . A boolean configuraion macro, BOOST_MPL_CFG_NO_HAS_XXX , is provided to signal or override the\n“deficient”statusofaparticularcompiler. 
[ Note:Thefallbackvaluecallalsobeprovidedatthepointofthemetafunction\ninvocation; see the Expression semantics section for details — end note]\nHeader\n#include <boost/mpl/has_xxx.hpp>\nParameters\nParameter Requirement Description\ntrait A legal identifier token A name of the metafunction to be generated.\nname A legal identifier token A name of the member being detected.\ndefault_ An boolean constant A fallback value for the deficient compilers.\nExpression semantics\nFor any legal C++ identifiers traitandname, boolean constant expression c1, boolean Integral Constant c2, and\narbitrary type x:\nBOOST_MPL_HAS_XXX_TRAIT_NAMED_DEF(trait, name, c1)\nPrecondition: Appears at namespace scope.\nReturn type: None.\nSemantics: Expands into an equivalent ofthe following class template definition\ntemplate< typename X, typename fallback = boost::mpl::bool_<c1> >\nstruct trait\n{\n//unspecified\n// ...\n};\nwhere traitis a boolean Metafunction with the following semantics:\ntypedef trait<x>::type r;\nReturn type: Integral Constant .\nSemantics: IfBOOST_MPL_CFG_NO_HAS_XXX is defined, r::value == c1 ; otherwise,\nr::value == true if and only if xis a class type that has a nested type memeber\nx::name.\ntypedef trait< x,c2 >::type r;\nReturn type: Integral Constant .\nSemantics: IfBOOST_MPL_CFG_NO_HAS_XXX is defined, r::value == c2::value ; oth-\nerwise, equivalent to\ntypedef trait<x>::type r;\nRevision Date: 15th November 2004227 Macros 6.3 Configuration\nExample\nBOOST_MPL_HAS_XXX_TRAIT_NAMED_DEF(has_xxx, xxx, false)\nstruct test1 {};\nstruct test2 { void xxx(); };\nstruct test3 { int xxx; };\nstruct test4 { static int xxx(); };\nstruct test5 { template< typename T > struct xxx {}; };\nstruct test6 { typedef int xxx; };\nstruct test7 { struct xxx; };\nstruct test8 { typedef void (*xxx)(); };\nstruct test9 { typedef void (xxx)(); };\nBOOST_MPL_ASSERT_NOT(( has_xxx<test1> ));\nBOOST_MPL_ASSERT_NOT(( has_xxx<test2> ));\nBOOST_MPL_ASSERT_NOT(( has_xxx<test3> ));\nBOOST_MPL_ASSERT_NOT(( has_xxx<test4> ));\nBOOST_MPL_ASSERT_NOT(( has_xxx<test5> ));\n#if !defined(BOOST_MPL_CFG_NO_HAS_XXX)\nBOOST_MPL_ASSERT(( has_xxx<test6> ));\nBOOST_MPL_ASSERT(( has_xxx<test7> ));\nBOOST_MPL_ASSERT(( has_xxx<test8> ));\nBOOST_MPL_ASSERT(( has_xxx<test9> ));\n#endif\nBOOST_MPL_ASSERT(( has_xxx<test6,true_> ));\nBOOST_MPL_ASSERT(( has_xxx<test7,true_> ));\nBOOST_MPL_ASSERT(( has_xxx<test8,true_> ));\nBOOST_MPL_ASSERT(( has_xxx<test9,true_> ));\nSee also\nMacros,BOOST_MPL_HAS_XXX_TRAIT_DEF ,BOOST_MPL_CFG_NO_HAS_XXX\n6.3 Configuration\n6.3.1 BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS\nSynopsis\n// #define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS\nDescription\nBOOST_MPL_CFG_NO_PREPROCESSED_HEADERS is an boolean configuration macro regulating library’s internal use of\npreprocessed headers. When defined, it instructs the MPL to discard the pre-generated headers found in boost/m-\npl/aux_/preprocessed directory and use preprocessor metaprogramming techniques to generate the necessary ver-\nsions of the library components on the fly.\nIn this implementation of the library, the macro is not defined by default. 
To change the default configuration, define\nBOOST_MPL_CFG_NO_PREPROCESSED_HEADERS before including any library header.\nRevision Date: 15th November 20046.3 Configuration Macros 228\nSee also\nMacros,Configuration\n6.3.2 BOOST_MPL_CFG_NO_HAS_XXX\nSynopsis\n// #define BOOST_MPL_CFG_NO_HAS_XXX\nDescription\nBOOST_MPL_CFG_NO_HAS_XXX isanbooleanconfigurationmacrosignalingavailabilityofthe BOOST_MPL_HAS_XXX_-\nTRAIT_DEF /BOOST_MPL_HAS_XXX_TRAIT_NAMED_DEF introspection macros’ functionality on a particular compiler.\nSee also\nMacros,Configuration ,BOOST_MPL_HAS_XXX_TRAIT_DEF ,BOOST_MPL_HAS_XXX_TRAIT_NAMED_DEF\n6.3.3 BOOST_MPL_LIMIT_METAFUNCTION_ARITY\nSynopsis\n#if !defined(BOOST_MPL_LIMIT_METAFUNCTION_ARITY)\n# define BOOST_MPL_LIMIT_METAFUNCTION_ARITY \\\nimplementation-defined integral constant \\\n/**/\n#endif\nDescription\nBOOST_MPL_LIMIT_METAFUNCTION_ARITY is an overridable configuration macro regulating the maximum supported\narity ofmetafunctions andmetafunctionclasses . In this implementation ofthe library, BOOST_MPL_LIMIT_METAFUNC-\nTION_ARITY has a default value of 5. To override the default limit, define BOOST_MPL_LIMIT_METAFUNCTION_ARITY\ntothedesiredmaximumaritybeforeincludinganylibraryheader. [ Note:Overridingwilltakeeffect onlyifthelibraryis\nconfigurednottouse preprocessedheaders . See BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS formoreinformation.\n—end note]\nExample\n#define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS\n#define BOOST_MPL_LIMIT_METAFUNCTION_ARITY 2\n#include <boost/mpl/apply.hpp>\nusing namespace boost::mpl;\ntemplate< typename T1, typename T2 > struct second\n{\ntypedef T2 type;\n};\ntemplate< typename T1, typename T2, typename T3 > struct third\nRevision Date: 15th November 2004229 Macros 6.3 Configuration\n{\ntypedef T3 type;\n};\ntypedef apply< second<_1,_2_>,int,long >::type r1;\n// typedef apply< third<_1,_2_,_3>,int,long,float >::type r2; // error!\nSee also\nMacros,Configuration ,BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS\n6.3.4 BOOST_MPL_LIMIT_VECTOR_SIZE\nSynopsis\n#if !defined(BOOST_MPL_LIMIT_VECTOR_SIZE)\n# define BOOST_MPL_LIMIT_VECTOR_SIZE \\\nimplementation-defined integral constant \\\n/**/\n#endif\nDescription\nBOOST_MPL_LIMIT_VECTOR_SIZE isanoverridableconfigurationmacroregulatingthemaximumarityofthe vector’s\nandvector_c ’svariadic forms . In this implementation of the library, BOOST_MPL_LIMIT_VECTOR_SIZE has a default\nvalue of 20. To override the default limit, define BOOST_MPL_LIMIT_VECTOR_SIZE to the desired maximum arity\nrounded up to the nearest multiple of ten before including any library header. [ Note:Overriding will take effect onlyif\nthelibraryisconfigurednottouse preprocessedheaders . See BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS formore\ninformation. — end note]\nExample\n#define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS\n#define BOOST_MPL_LIMIT_VECTOR_SIZE 10\n#include <boost/mpl/vector.hpp>\nusing namespace boost::mpl;\ntypedef vector_c<int,1> v_1;\ntypedef vector_c<int,1,2,3,4,5,6,7,8,9,10> v_10;\n// typedef vector_c<int,1,2,3,4,5,6,7,8,9,10,11> v_11; // error!\nSee also\nConfiguration ,BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS ,BOOST_MPL_LIMIT_LIST_SIZE\n6.3.5 BOOST_MPL_LIMIT_LIST_SIZE\nSynopsis\n#if !defined(BOOST_MPL_LIMIT_LIST_SIZE)\nRevision Date: 15th November 20046.3 Configuration Macros 230\n# define BOOST_MPL_LIMIT_LIST_SIZE \\\nimplementation-defined integral constant \\\n/**/\n#endif\nDescription\nBOOST_MPL_LIMIT_LIST_SIZE isanoverridableconfigurationmacroregulatingthemaximumarityofthe list’sand\nlist_c’svariadic forms . 
In this implementation of the library, BOOST_MPL_LIMIT_LIST_SIZE has a default value of\n20. To override the default limit, define BOOST_MPL_LIMIT_LIST_SIZE to the desired maximum arity rounded up to\nthe nearest multiple of ten before including any library header. [ Note:Overriding will take effect onlyif the library is\nconfigurednottouse preprocessedheaders . See BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS formoreinformation.\n—end note]\nExample\n#define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS\n#define BOOST_MPL_LIMIT_LIST_SIZE 10\n#include <boost/mpl/list.hpp>\nusing namespace boost::mpl;\ntypedef list_c<int,1> l_1;\ntypedef list_c<int,1,2,3,4,5,6,7,8,9,10> l_10;\n// typedef list_c<int,1,2,3,4,5,6,7,8,9,10,11> l_11; // error!\nSee also\nConfiguration ,BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS ,BOOST_MPL_LIMIT_VECTOR_SIZE\n6.3.6 BOOST_MPL_LIMIT_SET_SIZE\nSynopsis\n#if !defined(BOOST_MPL_LIMIT_SET_SIZE)\n# define BOOST_MPL_LIMIT_SET_SIZE \\\nimplementation-defined integral constant \\\n/**/\n#endif\nDescription\nBOOST_MPL_LIMIT_SET_SIZE is an overridable configuration macro regulating the maximum arity of the set’s and\nset_c’svariadic forms . In this implementation of the library, BOOST_MPL_LIMIT_SET_SIZE has a default value of\n20. To override the default limit, define BOOST_MPL_LIMIT_SET_SIZE to the desired maximum arity rounded up to\nthe nearest multiple of ten before including any library header. [ Note:Overriding will take effect onlyif the library is\nconfigurednottouse preprocessedheaders . See BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS formoreinformation.\n—end note]\nRevision Date: 15th November 2004231 Macros 6.3 Configuration\nExample\n#define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS\n#define BOOST_MPL_LIMIT_SET_SIZE 10\n#include <boost/mpl/set.hpp>\nusing namespace boost::mpl;\ntypedef set_c<int,1> s_1;\ntypedef set_c<int,1,2,3,4,5,6,7,8,9,10> s_10;\n// typedef set_c<int,1,2,3,4,5,6,7,8,9,10,11> s_11; // error!\nSee also\nConfiguration ,BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS ,BOOST_MPL_LIMIT_MAP_SIZE\n6.3.7 BOOST_MPL_LIMIT_MAP_SIZE\nSynopsis\n#if !defined(BOOST_MPL_LIMIT_MAP_SIZE)\n# define BOOST_MPL_LIMIT_MAP_SIZE \\\nimplementation-defined integral constant \\\n/**/\n#endif\nDescription\nBOOST_MPL_LIMIT_MAP_SIZE isanoverridableconfigurationmacroregulatingthemaximumarityofthe map’svariadic\nform. In this implementation of the library, BOOST_MPL_LIMIT_MAP_SIZE has a default value of 20. To override the\ndefaultlimit,define BOOST_MPL_LIMIT_MAP_SIZE tothedesiredmaximumarityroundeduptothenearestmultipleof\nten before including any library header. [ Note:Overriding will take effect onlyif the library is configured not to use\npreprocessed headers . See BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS for more information. 
— end note]\nExample\n#define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS\n#define BOOST_MPL_LIMIT_MAP_SIZE 10\n#include <boost/mpl/map.hpp>\n#include <boost/mpl/pair.hpp>\n#include <boost/mpl/int.hpp>\nusing namespace boost::mpl;\ntemplate< int i > struct ints : pair< int_<i>,int_<i> > {};\ntypedef map< ints<1> > m_1;\ntypedef map< ints<1>, ints<2>, ints<3>, ints<4>, ints<5>\nints<6>, ints<7>, ints<8>, ints<9>, ints<10> > m_10;\n// typedef map< ints<1>, ints<2>, ints<3>, ints<4>, ints<5>\n// ints<6>, ints<7>, ints<8>, ints<9>, ints<10>, ints<11> > m_11; // error!\nRevision Date: 15th November 20046.4 Broken Compiler Workarounds Macros 232\nSee also\nConfiguration ,BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS ,BOOST_MPL_LIMIT_SET_SIZE\n6.3.8 BOOST_MPL_LIMIT_UNROLLING\nSynopsis\n#if !defined(BOOST_MPL_LIMIT_UNROLLING)\n# define BOOST_MPL_LIMIT_UNROLLING \\\nimplementation-defined integral constant \\\n/**/\n#endif\nDescription\nBOOST_MPL_LIMIT_UNROLLING is an overridable configuration macro regulating the unrolling depth of the library’s\niteration algorithms. In this implementation of the library, BOOST_MPL_LIMIT_UNROLLING has a default value of 4. To\noverride the default, define BOOST_MPL_LIMIT_UNROLLING to the desired value before including any library header.\n[Note:Overriding will take effect onlyif the library is configured not to use preprocessed headers . See BOOST_MPL_-\nCFG_NO_PREPROCESSED_HEADERS for more information. — end note]\nExample\nExcept for overall library performace, overriding the BOOST_MPL_LIMIT_UNROLLING ’s default value has no user-\nobservable effects.\nSee also\nConfiguration ,BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS\n6.4 Broken Compiler Workarounds\n6.4.1 BOOST_MPL_AUX_LAMBDA_SUPPORT\nSynopsis\n#define BOOST_MPL_AUX_LAMBDA_SUPPORT(arity, fun, params) \\\nunspecified token sequence \\\n/**/\nDescription\nEnables metafunction funfor the use in Lambda Expression s on compilers that don’t support partial template special-\nization or/and template template parameters. Expands to nothing on conforming compilers.\nHeader\n#include <boost/mpl/aux_/lambda_support.hpp>\nParameters\nRevision Date: 15th November 2004233 Macros 6.4 Broken Compiler Workarounds\nParameter Requirement Description\narity An integral constant The metafunction’s arity, i.e. the number of its template\nparameters, including the defaults.\nfun A legal identifier token The metafunction’s name.\nparams APP-tuple A tuple of the metafunction’s parameter names, in their\noriginal order, including the defaults.\nExpression semantics\nFor any integral constant n, aMetafunction fun, and arbitrary types A1,...An:\ntemplate< typename A1, ... typename A n> struct fun\n{\n//...\nBOOST_MPL_AUX_LAMBDA_SUPPORT(n, fun, (A1, ...An))\n};\nPrecondition: Appears in fun’s scope, immediately followed by the scope-closing bracket ( }).\nReturn type: None.\nSemantics: Expands to nothing and has no effect on conforming compilers. 
On compilers that don’t sup-\nport partial template specialization or/and template template parameters expands to an unspecified\ntoken sequence enabling funto participate in Lambda Expression s with the semantics described in\nthis manual.\nExample\ntemplate< typename T, typename U = int > struct f\n{\ntypedef T type[sizeof(U)];\nBOOST_MPL_AUX_LAMBDA_SUPPORT(2, f, (T,U))\n};\ntypedef apply1< f<char,_1>,long >::type r;\nBOOST_MPL_ASSERT(( is_same< r, char[sizeof(long)] > ));\nSee also\nMacros,Metafunctions ,Lambda Expression\nRevision Date: 15th November 20046.4 Broken Compiler Workarounds Macros 234\nRevision Date: 15th November 2004Chapter 7 Terminology\nOverloaded name Overloaded name is a term used in this reference documentation to designate a metafunction pro-\nviding more than one public interface. In reality, class template overloading is nonexistent and the referenced\nfunctionality is implemented by other, unspecified, means.\nConcept-identical A sequence s1is said to be concept-identical to a sequence s2ifs1ands2model the exact same\nset of concepts.\nBind expression A bind expression is simply that — an instantiation of one of the bindclass templates. For instance,\nthese are all bind expressions:\nbind< quote3<if_>, _1,int,long >\nbind< _1, bind< plus<>, int_<5>, _2> >\nbind< times<>, int_<2>, int_<2> >\nand these are not:\nif_< _1, bind< plus<>, int_<5>, _2>, _2 >\nprotect< bind< quote3<if_>, _1,int,long > >\n_2Terminology 236\nRevision Date: 15th November 2004Chapter 8 Categorized Index\n8.1 Concepts\n—Associative Sequence\n—Back Extensible Sequence\n—Bidirectional Iterator\n—Bidirectional Sequence\n—Extensible Associative Sequence\n—Extensible Sequence\n—Forward Iterator\n—Forward Sequence\n—Front Extensible Sequence\n—Inserter\n—Integral Constant\n—Integral Sequence Wrapper\n—Lambda Expression\n—Metafunction\n—Metafunction Class\n—Numeric Metafunction\n—Placeholder Expression\n—Random Access Iterator\n—Random Access Sequence\n—Reversible Algorithm\n—Tag Dispatched Metafunction\n—Trivial Metafunction\n—Variadic Sequence\n8.2 Components\n—BOOST_MPL_ASSERT\n—BOOST_MPL_ASSERT_MSG\n—BOOST_MPL_ASSERT_NOT8.2 Components Categorized Index 238\n—BOOST_MPL_ASSERT_RELATION\n—BOOST_MPL_AUX_LAMBDA_SUPPORT\n—BOOST_MPL_CFG_NO_HAS_XXX\n—BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS\n—BOOST_MPL_HAS_XXX_TRAIT_DEF\n—BOOST_MPL_HAS_XXX_TRAIT_NAMED_DEF\n—BOOST_MPL_LIMIT_LIST_SIZE\n—BOOST_MPL_LIMIT_MAP_SIZE\n—BOOST_MPL_LIMIT_METAFUNCTION_ARITY\n—BOOST_MPL_LIMIT_SET_SIZE\n—BOOST_MPL_LIMIT_UNROLLING\n—BOOST_MPL_LIMIT_VECTOR_SIZE\n—_1,_2,_3,...\n—accumulate\n—advance\n—always\n—and_\n—apply\n—apply_wrap\n—arg\n—at\n—at_c\n—back\n—back_inserter\n—begin\n—bind\n—bitand_\n—bitor_\n—bitxor_\n—bool_\n—clear\n—contains\n—copy\n—copy_if\n—count\nRevision Date: 15th November 2004239 Categorized Index 8.2 Components\n—count_if\n—deque\n—deref\n—distance\n—divides\n—empty\n—empty_base\n—empty_sequence\n—end\n—equal\n—equal_to\n—erase\n—erase_key\n—eval_if\n—eval_if_c\n—filter_view\n—find\n—find_if\n—fold\n—front\n—front_inserter\n—greater\n—greater_equal\n—has_key\n—identity\n—if_\n—if_c\n—inherit\n—inherit_linearly\n—insert\n—insert_range\n—inserter\n—int_\n—integral_c\n—is_sequence\nRevision Date: 15th November 20048.2 Components Categorized Index 
240\n—iter_fold\n—iterator_category\n—iterator_range\n—joint_view\n—key_type\n—lambda\n—less\n—less_equal\n—list\n—list_c\n—long_\n—lower_bound\n—map\n—max\n—max_element\n—min\n—min_element\n—minus\n—modulus\n—negate\n—next\n—not_\n—not_equal_to\n—numeric_cast\n—or_\n—order\n—pair\n—partition\n—plus\n—pop_back\n—pop_front\n—prior\n—protect\n—push_back\n—push_front\nRevision Date: 15th November 2004241 Categorized Index 8.2 Components\n—quote\n—range_c\n—remove\n—remove_if\n—replace\n—replace_if\n—reverse\n—reverse_copy\n—reverse_copy_if\n—reverse_fold\n—reverse_iter_fold\n—reverse_partition\n—reverse_remove\n—reverse_remove_if\n—reverse_replace\n—reverse_replace_if\n—reverse_stable_partition\n—reverse_transform\n—reverse_unique\n—sequence_tag\n—set\n—set_c\n—shift_left\n—shift_right\n—single_view\n—size\n—size_t\n—sizeof_\n—sort\n—stable_partition\n—times\n—transform\n—transform_view\n—unique\n—unpack_args\nRevision Date: 15th November 20048.2 Components Categorized Index 242\n—upper_bound\n—value_type\n—vector\n—vector_c\n—void_\n—zip_view\nRevision Date: 15th November 2004Chapter 9 Acknowledgements\nThe format and language of this reference documentation has been greatly influenced by the SGI’s Standard Template\nLibrary Programmer’s Guide .Acknowledgements 244\nRevision Date: 15th November 2004Bibliography\n[n1550]http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1550.htm\n[PRE]Vesa Karvonen, Paul Mensonides, The Boost Preprocessor Metaprogramming library\n[Ve03]Vesa Karvonen, The Order Programming Language , 2003." } ]
{ "category": "App Definition and Development", "file_name": "refmanual.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "13:00 14:00 15:00\n13:37\n- node starts\n- announce segment \nfor data 13:00-14:0013:47\npersist data for 13:00-14:00\n~14:00\n- announce segment \nfor data 14:00-15:0014:10\n- merge and handoff for data 13:00-14:00\n- persist data for 14:00-15:00~14:11\n- unannounce segment \nfor data 13:00-14:00\n13:57\npersist data for 13:00-14:0014:07\npersist data for 13:00-14:00" } ]
{ "category": "App Definition and Development", "file_name": "realtime_timeline.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "Vald\nBRAND\nGUIDELINES\nVer.1 February, 2021\n© Vald teamVald BRAND\nGUIDELINES\nContents 3......................\n4......................\n5......................\n6......................\n7......................\n8......................\n9......................Color logo mark RGB\nColor logo mark CMYK\nMonochrome logo mark\nUsing the full-color Vald Logo\nUsing the monochrome Vald Logo\nClear space & Logo sizing\nWhat not to do with the logo\n2Color logo mark\nRGB\nPRIMARY GRADIENT\nSECONDARY GREEN to\nMAIN GREEN\nMAIN GREEN\nR0 G96 B118\nHEX #006076\nSECONDARY GREEN\nR0 G186 B177\nHEX #00BAB1\nDark Gray\nR48 G48 B48\nHEX #303030\n34PRIMARY GRADIENT\nSECONDARY GREEN to\nMAIN GREEN\nMAIN GREEN\nC89 M54 Y52 K4\nSECONDARY GREEN\nC72 M0 Y41 K0\nDark Gray\nC80 M74 Y72 K48Color logo mark\nCMYKDark Gray\nC80 M74 Y72 K48\nR48 G48 B48\nHEX #303030White\nC0 M0 Y0 K0\nR255 G255 B255\nHEX #ffffffMonochrome \nlogo mark\n510% gray\n#1a1a1a60% gray\n#99999970% gray\n#1a1a1a90% gray\n#e5e5e5Using the full-color\nVald Logo\nThese examples show the correct application of the Vald \nLogo on different solid backgrounds. The almost color logo \nshould be used on a background that’ s lighter than 70% \ngray. \n6If a background color makes the full-color logo hard to see, \nyou should use a monochrome logo instead.Using the mono-\nchrome Vald Logo\n7Y\nY\nYClear space buffers the logo icon from images, text, or \nother graphics that compromise its impact and visibility. Clear space &\nLogo sizing\nClear space\nWe’ve optimized the Vald Logo for specific sizes. \nMinimum digital height: 35dp\nMinimum print height:10mmLogo sizing\n35dp 10mm\n8/gid00055/gid00064/gid00075/gid00067ɾChange orientations\nɾAlter the propotions\nɾAdd visual effects\nɾUse any colors other than designated color\nɾChange the type layout\nɾChoose a different typeface\nɾPlace the mark on image that are too complex\nɾPlace objects in clear spaceDon’ tWhat not to do with \nthe Logo\n9" } ]
{ "category": "App Definition and Development", "file_name": "Vald_brandguidelines.pdf", "project_name": "Vald", "subcategory": "Database" }
[ { "data": "An Implementation of Graph Isomorphism Testing\nJeremy G. Siek\nDecember 9, 20012\n0.1 Introduction\nThis paper documents the implementation of the isomorphism() function of the Boost\nGraph Library. The implementation was by Jeremy Siek with algorithmic improve-\nments and test code from Douglas Gregor and Brian Osman. The isomorphism()\nfunction answers the question, \\are these two graphs equal?\" By equal we mean the\ntwo graphs have the same structure|the vertices and edges are connected in the same\nway. The mathematical name for this kind of equality is isomorphism .\nMore precisely, an isomorphism is a one-to-one mapping of the vertices in one\ngraph to the vertices of another graph such that adjacency is preserved. Another\nwords, given graphs G1= (V1;E1) andG2= (V2;E2), an isomorphism is a function\nfsuch that for all pairs of vertices a;binV1, edge (a;b) is inE1if and only if edge\n(f(a);f(b)) is inE2.\nThe graph G1isisomorphic toG2if an isomorphism exists between the two\ngraphs, which we denote by G1\u0018=G2. Both graphs must be the same size, so let\nN=jV1j=jV2j.\nIn the following discussion we will need to use several more notions from graph\ntheory. The graph Gs= (Vs;Es) is a subgraph of graphG= (V;E) ifVs\u0012Vand\nEs\u0012E. An induced subgraph , denoted by G[Vs], of a graph G= (V;E) consists of\nthe vertices in Vs, which is a subset of V, and every edge ( u;v) inEsuch that both\nuandvare inVs. We use the notation E[Vs] to mean the edges in G[Vs].\n0.2 Backtracking Search\nThe algorithm used by the isomorphism() function is, at \frst approximation, an\nexhaustive search implemented via backtracking. The backtracking algorithm is a\nrecursive function. At each stage we will try to extend the match that we have found\nso far. So suppose that we have already determined that some subgraph of G1is\nisomorphic to a subgraph of G2. We then try to add a vertex to each subgraph such\nthat the new subgraphs are still isomorphic to one another. At some point we may hit a\ndead end|there are no vertices that can be added to extend the isomorphic subgraphs.\nWe then backtrack to previous smaller matching subgraphs, and try extending with\na di\u000berent vertex choice. The process ends by either \fnding a complete mapping\nbetweenG1andG2and returning true, or by exhausting all possibilities and returning\nfalse.\nThe problem with the exhaustive backtracking algorithm is that there are N!\npossible vertex mappings, and N! gets very large as Nincreases, so we need to prune\nthe search space. We use the pruning techniques described in [ 1,2,3], some of which\noriginated in [ 4,5]. Also, the speci\fc backtracking method we use is the one from [ 1].\nWe consider the vertices of G1for addition to the matched subgraph in a speci\fc\norder, so assume that the vertices of G1are labeled 1 ;:::;N according to that order.0.2. BACKTRACKING SEARCH 3\nAs we will see later, a good ordering of the vertices is by DFS discover time. Let G1[k]\ndenote the subgraph of G1induced by the \frst kvertices, with G1[0] being an empty\ngraph. We also consider the edges of G1in a speci\fc order. We always examine edges\nin the current subgraph G1[k] \frst, that is, edges ( u;v) where both u\u0014kandv\u0014k.\nThis ordering of edges can be acheived by sorting each edge ( u;v) by lexicographical\ncomparison on the tuple hmax(u;v);u;vi. Figure 1shows an example of a graph with\nthe vertices labelled by DFS discover time. 
The edge ordering for this graph is as\nfollows:\nsource: 0 1 0 1 3 0 5 6 6\ntarget: 1 2 3 3 2 4 6 4 7\nc (0)\na (1)\nd (3)e (4)\nb (2)f (5)g (6)\nh (7)\nFigure 1: Vertices numbered by DFS discover time. The DFS tree edges are the solid\nlines. Nodes 0 and 5 are DFS tree root nodes.\nEach step of the backtracking search moves from left to right though the ordered\nedges. At each step it examines an edge ( u;v) ofG1and decides whether to continue\nto the left or to go back. There are three cases to consider:\n1.i>k\n2.i\u0014kandj >k .\n3.i\u0014kandj\u0014k.\nCase 1:i>k.iis not in the matched subgraph G1[k]. This situation only happens\nat the very beginning of the search, or when iis not reachable from any of the vertices\ninG1[k]. This means that we are \fnished with G1[k]. We increment kand \fnd match\nfor it amongst any of the eligible vertices in V2\u0000S. We then proceed to Case 2. It is4\nusually the case that iis equal to the new k, but when there is another DFS root r\nwith no in-edges or out-edges and if r<i then it will be the new k.\nCase 2:i\u0014kandj >k.iis in the matched subgraph G1[k], butjis not. We are\nabout to increment kto try and grow the matched subgraph to include j. However,\n\frst we need to \fnish verifying that G1[k]\u0018=G2[S]. In previous steps we proved\nthatG1[k\u00001]\u0018=G2[S\u0000ff(k)g], so now we just need to verify the extension of the\nisomorphism to k. At this point we are guaranteed to have seen all the edges to and\nfrom vertex k(because the edges are sorted), and in previous steps we have checked\nthat for each edge incident on kinE1[k] there is a matching edge in E2[S]. However\nwe still need to check the \\only if\" part of the \\if and only if\". So we check that for\nevery edge ( u;v) incident on f(k) there is (f\u00001(u);f\u00001(v))2E1[k]. A quick way to\nverify this is to make sure that the number of edges incident on kinE1[k] is the same\nas the number of edges incident on f(k) inE2[S]. We create an edge counter that\nwe increment every time we see an edge incident on kand decrement for each edge\nincident on f(k). If the counter gets back to zero we know the edges match up.\nOnce we have veri\fed that G1[k]\u0018=G2[S] we addf(k) toS, increment k, and then\ntry assigning jto any of the eligible vertices in V2\u0000S. More about what \\eligible\"\nmeans below.\nCase 3:i\u0014kandj\u0014k.Bothiandjare inG1[k]. We check to make sure that\n(f(i);f(j))2E2[S] and then proceed to the next edge.\n0.2.1 Vertex Invariants\nOne way to reduce the search space is through the use of vertex invariants . The idea\nis to compute a number for each vertex i(v) such that i(v) =i(v0) if there exists some\nisomorphism fwheref(v) =v0. Then when we look for a match to some vertex v,\nonly those vertices that have the same vertex invariant number are \\eligible\". The\nnumber of vertices in a graph with the same vertex invariant number iis called the\ninvariant multiplicity fori. In this implementation, by default we use the function\ni(v) = (jVj+ 1)\u0002out-degree( v) + in-degree( v), though the user can also supply there\nown invariant function. The ability of the invariant function to prune the search space\nvaries widely with the type of graph.\nThe following is the de\fnition of the functor that implements the default vertex\ninvariant. The functor models the AdaptableUnaryFunction concept.\nhDegree vertex invariant functor 4i\u0011\ntemplate<typename InDegreeMap ,typename Graph >\nclass degree vertex invariant\nf0.2. 
BACKTRACKING SEARCH 5\ntypedef typename graph traits<Graph>::vertex descriptor vertex t;\ntypedef typename graph traits<Graph>::degree size type size type;\npublic:\ntypedef vertex t argument type;\ntypedef size type result type;\ndegree vertex invariant (const InDegreeMap &indegree map,const Graph &g)\n: m indegree map(indegree map),mg(g)fg\nsize type operator ()(vertex t v)constf\nreturn(num vertices(mg) +1) *outdegree(v,mg)\n+get(mindegree map,v);\ng\n// The largest possible vertex invariant number\nsize type max ()constf\nreturn num vertices(mg) *num vertices(mg) +num vertices(mg);\ng\nprivate:\nInDegreeMap m indegree map;\nconst Graph &mg;\ng;\n0.2.2 Vertex Order\nA good choice of the labeling for the vertices (which determines the order in which\nthe subgraph G1[k] is grown) can also reduce the search space. In the following we\ndiscuss two labeling heuristics.\nMost Constrained First\nConsider the most constrained vertices \frst. That is, examine lower-degree vertices\nbefore higher-degree vertices. This reduces the search space because it chops o\u000b\na trunk before the trunk has a chance to blossom out. We can generalize this to\nuse vertex invariants. We examine vertices with low invariant multiplicity before\nexamining vertices with high invariant multiplicity.\nAdjacent First\nIt only makes sense to examine an edge if one or more of its vertices has been assigned\na mapping. This means that we should visit vertices adjacent to those in the current\nmatched subgraph before proceeding.6\nDFS Order, Starting with Lowest Multiplicity\nFor this implementation, we combine the above two heuristics in the following way.\nTo implement the \\adjacent \frst\" heuristic we apply DFS to the graph, and use the\nDFS discovery order as our vertex order. To comply with the \\most constrained \frst\"\nheuristic we order the roots of our DFS trees by invariant multiplicity.\n0.2.3 Implementation of the match function\nThe match function implements the recursive backtracking, handling the four cases\ndescribed inx0.2.\nhMatch function 6ai\u0011\nbool match (edge iter iter ,int dfs num k)\nf\nif(iter!=ordered edges.end())f\nvertex1 t i=source(*iter,G1),j=target(*iter,G2);\nif(dfsnum[i]>dfsnum k)f\nhFind a match for the DFS tree root k+ 16bi\ng\nelse if(dfsnum[j]>dfsnum k)f\nhVerifyG1[k]\u0018=G2[S]and then \fnd match for j7ai\ng\nelsef\nhCheck to see if (f(i);f(j))2E2[S]and continue 8bi\ng\ngelse\nreturn true ;\nreturn false ;\ng\nNow to describe how each of the four cases is implemented.\nCase 1:i62G1[k].We increment kand try to map it to any of the eligible vertices\nofV2\u0000S. After matching the new kwe proceed by invoking match . We do not yet\nmove on to the next edge, since we have not yet found a match for edge, or for target\nj. We reset the edge counter to zero.\nhFind a match for the DFS tree root k+ 16bi\u0011\nvertex1 t kp1=dfsvertices[dfsnum k+1];\nBGL FORALL VERTICES T(u,G2,Graph2)f\nif(invariant1 (kp1) == invariant2 (u) && inS[u] == false)f\nf[kp1] =u;\ninS[u] =true;\nnum edges onk=0;0.2. BACKTRACKING SEARCH 7\nif(match(iter,dfsnum k+1));\nreturn true ;\ninS[u] =false;\ng\ng\nCase 2:i2G1[k]andj62G1[k].Before we extend the subgraph by incrementing\nk, we need to \fnish verifying that G1[k] andG2[S] are isomorphic. We decrement the\nedge counter for every edge incident to f(k) inG2[S], which should bring the counter\nback down to zero. 
If not we return false.\nhVerifyG1[k]\u0018=G2[S] and then \fnd match for j7ai\u0011\nvertex1 t k=dfsvertices[dfsnum k];\nhCount out-edges of f(k)inG2[S]7bi\nhCount in-edges of f(k)inG2[S]7ci\nif(num edges onk!=0)\nreturn false ;\nhFind a match for jand continue 8ai\nWe decrement the edge counter for every vertex in Adj[f(k)] that is also in S. We\ncallcount ifto do the counting, using boost::bind to create the predicate functor.\nhCount out-edges of f(k) inG2[S]7bi\u0011\nnum edges onk\u0000=\ncount if(adjacent vertices(f[k],G2),make indirect pmap(inS));\nNext we iterate through all the vertices in Sand for each we decrement the counter\nfor each edge whose target is k.\nhCount in-edges of f(k) inG2[S]7ci\u0011\nfor(int jj=0;jj<dfsnum k; ++jj)f\nvertex1 t j=dfsvertices[jj];\nnum edges onk\u0000=count(adjacent vertices(f[j],G2),f[k]);\ng\nNow that we have \fnished verifying that G1[k]\u0018=G2[S], we can now consider\nextending the isomorphism. We need to \fnd a match for jinV2\u0000S. Sincejis\nadjacent to i, we can further narrow down the search by only considering vertices\nadjacent to f(i). Also, the vertex must have the same vertex invariant. Once we have\na matching vertex vwe extend the matching subgraphs by incrementing kand adding\nvtoS, we setf(j) =v, and we set the edge counter to 1 (since ( i;j) is the \frst edge\nincident on our new k). We continue to the next edge by calling match . If that fails\nwe undo the assignment f(j) =v.8\nhFind a match for jand continue 8ai\u0011\nBGL FORALL ADJ T(f[i],v,G2,Graph2)\nif(invariant2 (v) == invariant1 (j) && inS[v] == false)f\nf[j] =v;\ninS[v] =true;\nnum edges onk=1;\nint next k=std::max (dfsnum k,std::max (dfsnum[i],dfsnum[j]));\nif(match(next(iter),next k))\nreturn true ;\ninS[v] =false;\ng\nCase 3: both iandjare inG1[k].Our goal is to check whether ( f(i);f(j))2\nE2[S]. We examine the vertices Adj[f(i)] to see if any of them is equal to f(j). If\nso, then we have a match for the edge ( i;j), and can increment the counter for the\nnumber of edges incident on kinE1[k]. We continue by calling match on the next\nedge.\nhCheck to see if ( f(i);f(j))2E2[S] and continue 8bi\u0011\nif(any equal(adjacent vertices(f[i],G2),f[j]))f\n++num edges onk;\nif(match(next(iter),dfsnum k))\nreturn true ;\ng\n0.3 Public Interface\nThe following is the public interface for the isomorphism function. The input to the\nfunction is the two graphs G1andG2, mappings from the vertices in the graphs to\nintegers (in the range [0 ;jVj)), and a vertex invariant function object. The output of\nthe function is an isomorphism fif there is one. The isomorphism function returns\ntrue if the graphs are isomorphic and false otherwise. The invariant parameters are\nfunction objects that compute the vertex invariants for vertices of the two graphs.\nThe max invariant parameter is to specify one past the largest integer that a vertex\ninvariant number could be (the invariants numbers are assumed to span from zero to\nmax invariant-1 ). The requirements on the template parameters are described below\nin the \\Concept checking\" code part.\nhIsomorphism function interface 8ci\u0011\ntemplate<typename Graph1 ,typename Graph2 ,typename IsoMapping ,\ntypename Invariant1 ,typename Invariant2 ,0.3. 
PUBLIC INTERFACE 9\ntypename IndexMap1 ,typename IndexMap2 >\nbool isomorphism (const Graph1 &G1,const Graph2 &G2,IsoMapping f ,\nInvariant1 invariant1 ,Invariant2 invariant2 ,\nstd::size t max invariant ,\nIndexMap1 index map1,IndexMap2 index map2)\nThe function body consists of the concept checks followed by a quick check for\nempty graphs or graphs of di\u000berent size and then constructs an algorithm object.\nWe then call the testisomorphism member function, which runs the algorithm. The\nreason that we implement the algorithm using a class is that there are a fair number\nof internal data structures required, and it is easier to make these data members of\na class and make each section of the algorithm a member function. This relieves us\nfrom the burden of passing lots of arguments to each function, while at the same time\navoiding the evils of global variables (non-reentrant, etc.).\nhIsomorphism function body 9ai\u0011\nf\nhConcept checking 10ai\nhQuick return based on size 9bi\ndetail::isomorphism algo<Graph1,Graph2,IsoMapping ,Invariant1 ,\nInvariant2 ,IndexMap1 ,IndexMap2 >\nalgo(G1,G2,f,invariant1 ,invariant2 ,max invariant ,\nindex map1,index map2);\nreturn algo .test isomorphism ();\ng\nIf there are no vertices in either graph, then they are trivially isomorphic. If the graphs\nhave di\u000berent numbers of vertices then they are not isomorphic. We could also check\nthe number of edges here, but that would introduce the EdgeListGraph requirement,\nwhich we otherwise do not need.\nhQuick return based on size 9bi\u0011\nif(num vertices(G1) != num vertices(G2))\nreturn false ;\nif(num vertices(G1) == 0&&num vertices(G2) == 0)\nreturn true ;\nWe use the Boost Concept Checking Library to make sure that the template argu-\nments ful\fll certain requirements. The graph types must model the VertexListGraph and\nAdjacencyGraph concepts. The vertex invariants must model the AdaptableUnaryFunction\nconcept, with a vertex as their argument and an integer return type. The IsoMapping\ntype representing the isomorphism fmust be a ReadWritePropertyMap that maps from\nvertices inG1to vertices in G2. 
The two other index maps are ReadablePropertyMap s\nfrom vertices in G1andG2to unsigned integers.10\nhConcept checking 10ai\u0011\n// Graph requirements\nfunction requires<VertexListGraphConcept <Graph1> >();\nfunction requires<EdgeListGraphConcept <Graph1> >();\nfunction requires<VertexListGraphConcept <Graph2> >();\nfunction requires<BidirectionalGraphConcept <Graph2> >();\ntypedef typename graph traits<Graph1>::vertex descriptor vertex1 t;\ntypedef typename graph traits<Graph2>::vertex descriptor vertex2 t;\ntypedef typename graph traits<Graph1>::vertices size type size type;\n// Vertex invariant requirement\nfunction requires<AdaptableUnaryFunctionConcept <Invariant1 ,\nsize type,vertex1 t> >();\nfunction requires<AdaptableUnaryFunctionConcept <Invariant2 ,\nsize type,vertex2 t> >();\n// Property map requirements\nfunction requires<ReadWritePropertyMapConcept <IsoMapping ,vertex1 t> >();\ntypedef typename property traits<IsoMapping >::value type IsoMappingValue ;\nBOOST STATIC ASSERT ((issame<IsoMappingValue ,vertex2 t>::value));\nfunction requires<ReadablePropertyMapConcept <IndexMap1 ,vertex1 t> >();\ntypedef typename property traits<IndexMap1 >::value type IndexMap1Value ;\nBOOST STATIC ASSERT ((isconvertible <IndexMap1Value ,size type>::value));\nfunction requires<ReadablePropertyMapConcept <IndexMap2 ,vertex2 t> >();\ntypedef typename property traits<IndexMap2 >::value type IndexMap2Value ;\nBOOST STATIC ASSERT ((isconvertible <IndexMap2Value ,size type>::value));\n0.4 Data Structure Setup\nThe following is the outline of the isomorphism algorithm class. The class is tem-\nplated on all of the same parameters as the isomorphism function, and all of the\nparameter values are stored in the class as data members, in addition to the internal\ndata structures.\nhIsomorphism algorithm class 10bi\u0011\ntemplate<typename Graph1 ,typename Graph2 ,typename IsoMapping ,\ntypename Invariant1 ,typename Invariant2 ,\ntypename IndexMap1 ,typename IndexMap2 >\nclass isomorphism algo\nf\nhTypedefs for commonly used types 14ci0.4. DATA STRUCTURE SETUP 11\nhData members for the parameters 14di\nhInternal data structures 15ai\nfriend struct compare multiplicity ;\nhInvariant multiplicity comparison functor 12bi\nhDFS visitor to record vertex and edge order 13bi\nhEdge comparison predicate 14bi\npublic:\nhIsomorphism algorithm constructor 15bi\nhTest isomorphism member function 11ai\nprivate:\nhMatch function 6ai\ng;\nThe interesting parts of this class are the testisomorphism function and the\nmatch function. We focus on those in in the following sections, and leave the other\nparts of the class to the Appendix.\nThe testisomorphism function does all of the setup required of the algorithm.\nThis consists of sorting the vertices according to invariant multiplicity, and then by\nDFS order. The edges are then sorted as previously described. The last step of this\nfunction is to begin the backtracking search.\nhTest isomorphism member function 11ai\u0011\nbool test isomorphism ()\nf\nhQuick return if the vertex invariants do not match up 11bi\nhSort vertices according to invariant multiplicity 12ai\nhOrder vertices and edges by DFS 13ai\nhSort edges according to vertex DFS order 14ai\nint dfs num k=\u00001;\nreturn this\u0000>match(ordered edges.begin(),dfsnum k);\ng\nAs a \frst check to rule out graphs that have no possibility of matching, one can\ncreate a list of computed vertex invariant numbers for the vertices in each graph,\nsort the two lists, and then compare them. 
If the two sorted lists of invariants are different then the two graphs are not isomorphic. If the two lists are the same then the two graphs may be isomorphic.
⟨Quick return if the vertex invariants do not match up 11b⟩ ≡
{
  std::vector<invar1_value> invar1_array;
  BGL_FORALL_VERTICES_T(v, G1, Graph1)
    invar1_array.push_back(invariant1(v));
  sort(invar1_array);
  std::vector<invar2_value> invar2_array;
  BGL_FORALL_VERTICES_T(v, G2, Graph2)
    invar2_array.push_back(invariant2(v));
  sort(invar2_array);
  if (!equal(invar1_array, invar2_array))
    return false;
}
Next we compute the invariant multiplicity, the number of vertices with the same invariant number. The multiplicity vector is indexed by invariant number. We loop through all the vertices in the graph to record the multiplicity. We then order the vertices by their invariant multiplicity. This will allow us to search the more constrained vertices first.
⟨Sort vertices according to invariant multiplicity 12a⟩ ≡
std::vector<vertex1_t> Vmult;
BGL_FORALL_VERTICES_T(v, G1, Graph1)
  Vmult.push_back(v);
{
  std::vector<size_type> multiplicity(max_invariant, 0);
  BGL_FORALL_VERTICES_T(v, G1, Graph1)
    ++multiplicity[invariant1(v)];
  sort(Vmult, compare_multiplicity(invariant1, &multiplicity[0]));
}
The definition of the compare_multiplicity predicate is shown below. This predicate provides the glue that binds std::sort to our current purpose.
⟨Invariant multiplicity comparison functor 12b⟩ ≡
struct compare_multiplicity
{
  compare_multiplicity(Invariant1 invariant1, size_type* multiplicity)
    : invariant1(invariant1), multiplicity(multiplicity) { }
  bool operator()(const vertex1_t& x, const vertex1_t& y) const {
    return multiplicity[invariant1(x)] < multiplicity[invariant1(y)];
  }
  Invariant1 invariant1;
  size_type* multiplicity;
};
0.4.1 Ordering by DFS Discover Time
Next we order the vertices and edges by DFS discover time. We would normally call the BGL depth_first_search function to do this, but we want the roots of the DFS trees to be ordered by invariant multiplicity. Therefore we implement the outer loop of the DFS here and then call depth_first_visit to handle the recursive portion of the DFS. The record_dfs_order visitor adapts the DFS to record the ordering, storing the results in the dfs_vertices and ordered_edges arrays.
We then create the dfsnum array\nwhich provides a mapping from vertex to DFS number.\nhOrder vertices and edges by DFS 13ai\u0011\nstd::vector <default color type>color vec(num vertices(G1));\nsafe iterator property map<std::vector <default color type>::iterator ,IndexMap1 >\ncolor map(color vec.begin(),color vec.size(),index map1);\nrecord dfsorder dfs visitor(dfsvertices,ordered edges);\ntypedef color traits<default color type>Color;\nfor(vertex iter u=Vmult.begin();u!=Vmult.end(); ++ u)f\nif(color map[*u] == Color::white ())f\ndfsvisitor.start vertex(*u,G1);\ndepth \frst visit(G1, *u,dfsvisitor,color map);\ng\ng\n// Create the dfs num array and dfs num map\ndfsnum vec.resize(num vertices(G1));\ndfsnum=make safe iterator property map(dfsnum vec.begin(),\ndfsnum vec.size(),index map1);\nsize type n=0;\nfor(vertex iter v=dfsvertices.begin();v!=dfsvertices.end(); ++ v)\ndfsnum[*v] =n++;\nThe de\fnition of the record dfsorder visitor class is as follows.\nhDFS visitor to record vertex and edge order 13bi\u0011\nstruct record dfsorder : default dfsvisitor\nf\nrecord dfsorder(std::vector <vertex1 t>&v,std::vector <edge1 t>&e)\n: vertices (v),edges(e)fg\nvoid discover vertex(vertex1 t v,const Graph1 &)constf\nvertices.push back(v);\ng\nvoid examine edge(edge1 t e,const Graph1 &G1)constf\nedges.push back(e);\ng\nstd::vector <vertex1 t>&vertices;\nstd::vector <edge1 t>&edges;\ng;14\nThe \fnal stage of the setup is to reorder the edges so that all edges belonging to\nG1[k] appear before any edges not in G1[k], fork= 1;:::;n .\nhSort edges according to vertex DFS order 14ai\u0011\nsort(ordered edges,edge cmp(G1,dfsnum));\nThe edge comparison function object is de\fned as follows.\nhEdge comparison predicate 14bi\u0011\nstruct edge cmpf\nedge cmp(const Graph1 &G1,DFSNumMap dfs num)\n: G1(G1),dfsnum(dfsnum)fg\nbool operator ()(const edge1 t&e1,const edge1 t&e2)constf\nusing namespace std ;\nvertex1 t u1=dfsnum[source(e1,G1)],v1=dfsnum[target(e1,G1)];\nvertex1 t u2=dfsnum[source(e2,G1)],v2=dfsnum[target(e2,G1)];\nint m1 =max(u1,v1);\nint m2 =max(u2,v2);\n// lexicographical comparison\nreturn make pair(m1,make pair(u1,v1))\n<make pair(m2,make pair(u2,v2));\ng\nconst Graph1 &G1;\nDFSNumMap dfs num;\ng;\n0.5 Appendix\nhTypedefs for commonly used types 14ci\u0011\ntypedef typename graph traits<Graph1>::vertex descriptor vertex1 t;\ntypedef typename graph traits<Graph2>::vertex descriptor vertex2 t;\ntypedef typename graph traits<Graph1>::edge descriptor edge1 t;\ntypedef typename graph traits<Graph1>::vertices size type size type;\ntypedef typename Invariant1::result type invar1 value;\ntypedef typename Invariant2::result type invar2 value;\nhData members for the parameters 14di\u0011\nconst Graph1 &G1;\nconst Graph2 &G2;\nIsoMapping f ;\nInvariant1 invariant1 ;\nInvariant2 invariant2 ;0.5. 
APPENDIX 15\nstd::size t max invariant ;\nIndexMap1 index map1;\nIndexMap2 index map2;\nhInternal data structures 15ai\u0011\nstd::vector <vertex1 t>dfsvertices;\ntypedef std::vector <vertex1 t>::iterator vertex iter;\nstd::vector <int>dfsnum vec;\ntypedef safe iterator property map<typename std::vector <int>::iterator ,IndexMap1 >DFSNumMap ;\nDFSNumMap dfs num;\nstd::vector <edge1 t>ordered edges;\ntypedef std::vector <edge1 t>::iterator edge iter;\nstd::vector <char>inSvec;\ntypedef safe iterator property map<typename std::vector <char>::iterator ,\nIndexMap2 >InSMap ;\nInSMap in S;\nint num edges onk;\nhIsomorphism algorithm constructor 15bi\u0011\nisomorphism algo(const Graph1 &G1,const Graph2 &G2,IsoMapping f ,\nInvariant1 invariant1 ,Invariant2 invariant2 ,std::size t max invariant ,\nIndexMap1 index map1,IndexMap2 index map2)\n: G1(G1),G2(G2),f(f),invariant1 (invariant1 ),invariant2 (invariant2 ),\nmax invariant (max invariant ),\nindex map1(index map1),index map2(index map2)\nf\ninSvec.resize(num vertices(G1));\ninS=make safe iterator property map\n(inSvec.begin(),inSvec.size(),index map2);\ng\nhisomorphism.hpp 15ci\u0011\n// Copyright (C) 2001 Jeremy Siek, Doug Gregor, Brian Osman\n//\n// Permission to copy, use, sell and distribute this software is granted\n// provided this copyright notice appears in all copies.\n// Permission to modify the code and to distribute modi\fed code is granted\n// provided this copyright notice appears in all copies, and a notice\n// that the code was modi\fed is included with the copyright notice.\n//\n// This software is provided \\as is\" without express or implied warranty,16\n// and with no claim as to its suitability for any purpose.\n#ifndef BOOST GRAPH ISOMORPHISM HPP\n#de\fne BOOST GRAPH ISOMORPHISM HPP\n#include <utility>\n#include <vector>\n#include <iterator>\n#include <algorithm >\n#include <boost/graph/iteration macros.hpp>\n#include <boost/graph/depth \frst search.hpp>\n#include <boost/utility.hpp>\n#include <boost/algorithm .hpp>\n#include <boost/pending /indirect cmp.hpp>// for make indirect pmap\nnamespace boost f\nnamespace detail f\nhIsomorphism algorithm class 10bi\ntemplate<typename Graph ,typename InDegreeMap >\nvoid compute indegree(const Graph &g,InDegreeMap in degree map)\nf\nBGL FORALL VERTICES T(v,g,Graph)\nput(indegree map,v,0);\nBGL FORALL VERTICES T(u,g,Graph)\nBGL FORALL ADJ T(u,v,g,Graph)\nput(indegree map,v,get(indegree map,v) +1);\ng\ng// namespace detail\nhDegree vertex invariant functor 4i\nhIsomorphism function interface 8ci\nhIsomorphism function body 9ai\nnamespace detail f\ntemplate<typename Graph1 ,typename Graph2 ,\ntypename IsoMapping ,\ntypename IndexMap1 ,typename IndexMap2 ,\ntypename P ,typename T ,typename R >\nbool isomorphism impl(const Graph1 &G1,const Graph2 &G2,0.5. 
APPENDIX 17\nIsoMapping f ,IndexMap1 index map1,IndexMap2 index map2,\nconst bgl named params<P,T,R>&params)\nf\nstd::vector <std::size t>indegree1 vec(num vertices(G1));\ntypedef safe iterator property map<std::vector <std::size t>::iterator ,IndexMap1 >InDeg1;\nInDeg1 in degree1(indegree1 vec.begin(),indegree1 vec.size(),index map1);\ncompute indegree(G1,indegree1);\nstd::vector <std::size t>indegree2 vec(num vertices(G2));\ntypedef safe iterator property map<std::vector <std::size t>::iterator ,IndexMap2 >InDeg2;\nInDeg2 in degree2(indegree2 vec.begin(),indegree2 vec.size(),index map2);\ncompute indegree(G2,indegree2);\ndegree vertex invariant<InDeg1,Graph1>invariant1 (indegree1,G1);\ndegree vertex invariant<InDeg2,Graph2>invariant2 (indegree2,G2);\nreturn isomorphism (G1,G2,f,\nchoose param(getparam(params,vertex invariant1 t()), invariant1 ),\nchoose param(getparam(params,vertex invariant2 t()), invariant2 ),\nchoose param(getparam(params,vertex max invariant t()), invariant2 .max()),\nindex map1,index map2\n);\ng\ng// namespace detail\n// Named parameter interface\ntemplate<typename Graph1 ,typename Graph2 ,class P,class T,class R>\nbool isomorphism (const Graph1 &g1,\nconst Graph2 &g2,\nconst bgl named params<P,T,R>&params)\nf\ntypedef typename graph traits<Graph2>::vertex descriptor vertex2 t;\ntypename std::vector <vertex2 t>::size type n=num vertices(g1);\nstd::vector <vertex2 t>f(n);\nreturn detail::isomorphism impl\n(g1,g2,\nchoose param(getparam(params,vertex isomorphism t()),\nmake safe iterator property map(f.begin(),f.size(),\nchoose const pmap(getparam(params,vertex index1),\ng1,vertex index),vertex2 t())),\nchoose const pmap(getparam(params,vertex index1),g1,vertex index),\nchoose const pmap(getparam(params,vertex index2),g2,vertex index),\nparams\n);18\ng\n// All defaults interface\ntemplate<typename Graph1 ,typename Graph2 >\nbool isomorphism (const Graph1 &g1,const Graph2 &g2)\nf\nreturn isomorphism (g1,g2,\nbglnamed params<int,bu\u000ber param t>(0));// bogus named param\ng\n// Verify that the given mapping iso map from the vertices of g1 to the\n// vertices of g2 describes an isomorphism.\n// Note: this could be made much faster by specializing based on the graph\n// concepts modeled, but since we're verifying an O(n ^(lg n)) algorithm,\n// O(n^4) won't hurt us.\ntemplate<typename Graph1 ,typename Graph2 ,typename IsoMap >\ninline bool verify isomorphism (const Graph1 &g1,const Graph2 &g2,IsoMap iso map)\nf\nif(num vertices(g1) != num vertices(g2)jjnum edges(g1) != num edges(g2))\nreturn false ;\nfor(typename graph traits<Graph1>::edge iterator e1 =edges(g1).\frst;\ne1!=edges(g1).second; ++e1)f\nbool found edge=false;\nfor(typename graph traits<Graph2>::edge iterator e2 =edges(g2).\frst;\ne2!=edges(g2).second&& !found edge; ++e2)f\nif(source(*e2,g2) == get(isomap,source(*e1,g1)) &&\ntarget(*e2,g2) == get(isomap,target(*e1,g1)))f\nfound edge=true;\ng\ng\nif(!found edge)\nreturn false ;\ng\nreturn true ;\ng\ng// namespace boost\n#include <boost/graph/iteration macros undef.hpp>\n#endif // BOOST GRAPH ISOMORPHISM HPPBibliography\n[1]N. Deo, J. M. Davis, and R. E. Lord. A new algorithm for digraph isomorphism.\nBIT, 17:16{30, 1977.\n[2]S. Fortin. Graph isomorphism problem. Technical Report 96-20, University of\nAlberta, Edomonton, Alberta, Canada, 1996.\n[3]E. M. Reingold, J. Nievergelt, and N. Deo. Combinatorial Algorithms: Theory\nand Practice . Prentice Hall, 1977.\n[4]E. Sussenguth. A graph theoretic algorithm for matching chemical structure. J.\nChem. Doc. 
, 5:36-43, 1965.
[5] S. H. Unger. GIT - a heuristic program for testing pairs of directed line graphs for isomorphism. Comm. ACM, 7:26-34, 1964.
19" } ]
{ "category": "App Definition and Development", "file_name": "isomorphism-impl.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "2009-05-08Lecture\nheld at the Boost Library Conference 2009Joachim Faulhaber\nSlide Design by Chih-Hao Tsaihttp://www.chtsai.orgCopyright © Joachim Faulhaber 2009Distributed under Boost Software Licence 1.0Updated version 3.1.0 2009-09-17An Introduction to the \nInterval Template Library2\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Lecture Outline\nBackground and Motivation\nDesign\nExamples\nSemantics\nImplementation\nFuture Works\nAvailability3\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Background and Motivation\nInterval containers simplified the implementation of \ndate and time related tasks\nDecomposing “histories” of attributed events into \nsegments with constant attributes.\nWorking with time grids, e.g. a grid of months.\nAggregations of values associated to date or time \nintervals.\n… that occurred frequently in programs like\nBilling modules\nTherapy scheduling programs\nHospital and controlling statistics4\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nBackground is the date time problem domain ...\n… but the scope of the Itl as a generic library is more \ngeneral: \nan interval_set is a set\n that is implemented as a set of intervals \nan interval_map is a map\n that is implemented as a map of interval value pairs5\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Aspects\nThere are two aspects in the design of interval \ncontainers\nConceptual aspect\ninterval_set <int> mySet;\nmySet.insert(42);\nbool has_answer = mySet.contains(42);interval_set <int> mySet;\nmySet.insert(42);\nbool has_answer = mySet.contains(42);\nOn the conceptual aspect an interval_set can be used \njust as a set of elements\nexcept for . . .\n. . . iteration over elements\nconsider interval_set<double> or interval_set<string>\nIterative Aspect\nIteration is always done over intervals6\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nAddability and Subtractability\nAll of itl's (interval) containers are Addable and \nSubtractable \nThey implement operators += , +, -= and -\n+= -=\n sets set union set difference\n maps ? ?\nA possible implementation for maps\nPropagate addition/subtraction to the associated values \n. . . or aggregate on overlap\n. . . 
or aggregate on collision7\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nAggregate on overlap\n→ a\n→ b\n+→ a\n→ (a + b)\n→ b\nDecompositional \neffect on Intervals\nAccumulative effect \non associated values\nI\nJJ-II-J\nI∩J\nI, J: intervals, a,b: associated values8\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nAggregate on overlap, a minimal example\ntypedef itl::set<string> guests;\ninterval_map <time, guests> party;\n \nparty += make_pair(\n interval< time>::rightopen (20:00, 22:00), guests( \"Mary\"));\nparty += make_pair(\n interval< time>::rightopen (21:00, 23:00), guests( \"Harry\")); \n// party now contains\n[20:00, 21:00)->{ \"Mary\"} \n[21:00, 22:00)->{ \"Harry\",\"Mary\"} //guest sets aggregated \n[22:00, 23:00)->{ \"Harry\"}9\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Granu-\nlarityStyle Sets Maps\ninterval interval\njoining interval_set interval_map\nseparating separate_interval_set\nsplitting split_interval_set split_interval_map\nelement set mapDesign\nThe Itl's class templates10\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nInterval Combining Styles: Joining\nIntervals are joined on overlap or on touch\n. . . for maps , if associated values are equal\nKeeps interval_maps and sets in a minimal form\n interval_set\n \n {[1 3) }\n + [2 4)\n + [4 5)\n = {[1 4) }\n \n = {[1 5)} interval_map\n \n {[1 3) ->1 } \n + [2 4) ->1\n + [4 5) ->1\n ={[1 2)[2 3)[3 4) }\n ->1 ->2 ->1 \n ={[1 2)[2 3)[3 5) }\n ->1 ->2 ->1 11\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nInterval Combining Styles: Splitting\nIntervals are split on overlap and kept separate on touch\nAll interval borders are preserved (insertion memory)\n split_interval_set\n \n {[1 3) }\n + [2 4)\n + [4 5)\n = {[1 2)[2 3)[3 4) }\n \n = {[1 2)[2 3)[3 4)[4 5)} split_interval_map\n \n {[1 3) ->1 } \n + [2 4) ->1\n + [4 5) ->1\n ={[1 2)[2 3)[3 4) }\n ->1 ->2 ->1 \n ={[1 2)[2 3)[3 4)[4 5) }\n ->1 ->2 ->1 ->1 12\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nInterval Combining Styles: Separating\nIntervals are joined on overlap but kept separate on \ntouch\nPreserves borders that are never crossed (preserves a \nhidden grid).\n separate_interval_set\n \n {[1 3) }\n + [2 4)\n + [4 5)\n = {[1 4) }\n \n = {[1 4)[4 5)} 13\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nA few instances of intervals (interval.cpp)\ninterval< int> int_interval = interval< int>::closed(3,7);\ninterval< double> sqrt_interval\n = interval< double>::rightopen (1/sqrt(2.0), sqrt(2.0));\ninterval< std::string > city_interval\n = interval<std::string>:: leftopen(\"Barcelona\" , \"Boston\");\ninterval< boost::ptime> time_interval\n = interval< boost::ptime>::open(\n time_from_string( \"2008-05-20 19:30\" ),\n time_from_string( \"2008-05-20 23:00\" )\n );14\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nA way to iterate over months and weeks \n(month_and_week_grid.cpp )\n#include <boost/itl/gregorian.hpp> //boost::gregorian plus adapter code \n#include <boost/itl/split_interval_set.hpp>\n// A split_interval_set of gregorian dates as date_grid.\ntypedef split_interval_set<boost::gregorian::date> date_grid;\n// Compute a date_grid of months using 
boost::gregorian.\ndate_grid month_grid( const interval<date>& scope)\n{\n date_grid month_grid;\n // Compute a date_grid of months using boost::gregorian.\n . . .\n return month_grid;\n}\n// Compute a date_grid of weeks using boost::gregorian.\ndate_grid week_grid( const interval<date>& scope)\n{\n date_grid week_grid;\n // Compute a date_grid of weeks using boost::gregorian.\n . . .\n return week_grid;\n}15\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nA way to iterate over months and weeks\nvoid month_and_time_grid()\n{\n date someday = day_clock::local_day();\n date thenday = someday + months(2);\n interval<date> scope = interval<date>::rightopen(someday, thenday);\n // An intersection of the month and week grids ...\n date_grid month_and_week_grid \n = month_grid(scope) & week_grid(scope);\n // ... allows to iterate months and weeks. Whenever a month\n // or a week changes there is a new interval.\n for(date_grid::iterator it = month_and_week_grid.begin(); \n it != month_and_week_grid.end(); it++)\n { . . . }\n // We can also intersect the grid into an interval_map to make\n // shure that all intervals are within months and week bounds.\n interval_map< boost::gregorian::date, some_type> accrual;\n compute_some_result(accrual, scope);\n accrual &= month_and_week_grid;\n}16\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nAggregating with interval_maps\nComputing averages via implementing operator +=\n(partys_guest_average.cpp)\nclass counted_sum\n{\npublic:\ncounted_sum() :_sum(0),_count(0){}\ncounted_sum( int sum):_sum(sum),_count(1){}\nint sum()const {return _sum;}\nint count()const{return _count;}\ndouble average() const\n { return _count==0 ? 0.0 : _sum/ static_cast <double>(_count); }\ncounted_sum& operator += (const counted_sum& right)\n{ _sum += right.sum(); _count += right.count(); return *this; }\nprivate:\nint _sum;\nint _count;\n};\nbool operator == (const counted_sum& left, const counted_sum& right)\n{ return left.sum()==right.sum() && left.count()==right.count(); } 17\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nAggregating with interval_maps\nComputing averages via implementing operator +=\nvoid partys_height_average()\n{\n interval_map<ptime, counted_sum > height_sums;\n height_sums += (\n make_pair(\n interval<ptime>::rightopen(\n time_from_string( \"2008-05-20 19:30\" ), \n time_from_string( \"2008-05-20 23:00\" )), \n counted_sum(165) ) // Mary is 1,65 m tall.\n );\n // Add height of more pary guests . . . \n interval_map<ptime, counted_sum>::iterator height_sum_ =\n height_sums.begin();\n while(height_sum_ != height_sums.end())\n {\n interval<ptime> when = height_sum_->first;\n double height_average = (*height_sum_++).second. 
average();\n cout << \"[\" << when.first() << \" - \" << when.upper() << \")\"\n << \": \" << height_average << \" cm\" << endl;\n }\n}18\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nInterval containers allow to express a variety of date \nand time operations in an easy way.\nExample man_power.cpp ...\nSubtract weekends and holidays from an interval_set\nworktime -= weekends(scope)\nworktime -= german_reunification_day\nIntersect an interval_map with an interval_set\nclaudias_working_hours &= worktime\nSubtract and interval_set from an interval map\nclaudias_working_hours -= claudias_absense_times\nAdding interval_maps\ninterval_map<date, int> manpower;\nmanpower += claudias_working_hours;\nmanpower += bodos_working_hours;19\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nInterval_maps can also be intersected\nExample user_groups.cpp\ntypedef boost::itl::set<string> MemberSetT;\ntypedef interval_map<date, MemberSetT> MembershipT;\nvoid user_groups()\n{\n . . .\n MembershipT med_users;\n // Compute membership of medical staff\n med_users += make_pair( member_interval_1, MemberSetT( \"Dr.Jekyll\" ));\n med_users += . . . \n MembershipT admin_users;\n // Compute membership of administation staff\n med_users += make_pair( member_interval_2, MemberSetT( \"Mr.Hyde\"));\n . . .\n MembershipT all_users = med_users + admin_users;\n MembershipT super_users = med_users & admin_users;\n . . .\n}20\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nThe semantics of itl sets is based on a concept itl::Set\nitl::set , interval_set , split_interval_set \nand separate_interval_set are models of concept \nitl::Set\n// Abstract part\nempty set: Set::Set()\nsubset relation: bool Set::contained_in (const Set& s2)const\nequality: bool is_element_equal (const Set& s1, const Set& s2)\nset union: Set& operator += (Set& s1, const Set& s2)\n Set operator + (const Set& s1, const Set& s2)\nset difference: Set& operator -= (Set& s1, const Set& s2)\n Set operator - (const Set& s1, const Set& s2)\nset intersection: Set& operator &= (Set& s1, const Set& s2)\n Set operator & (const Set& s1, const Set& s2) \n// Part related to sequential ordering\nsorting order: bool operator < (const Set& s1, const Set& s2)\nlexicographical equality:\n bool operator == (const Set& s1, const Set& s2)\n 21\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nThe semantics of itl maps is based on a concept itl::Map\nitl::map , interval_map and split_interval_map \nare models of concept \nitl::Map\n// Abstract part\nempty map: Map::Map()\nsubmap relation: bool Map::contained_in (const Map& m2)const\nequality: bool is_element_equal (const Map& m1, const Map& m2)\nmap union: Map& operator += (Map& m1, const Map& m2)\n Map operator + (const Map& m1, const Map& m2)\nmap difference: Map& operator -= (Map& m1, const Map& m2)\n Map operator - (const Map& m1, const Map& m2)\nmap intersection: Map& operator &= (Map& m1, const Map& m2)\n Map operator & (const Map& m1, const Map& m2) \n// Part related to sequential ordering\nsorting order: bool operator < (const Map& m1, const Map& m2)\nlexicographical equality:\n bool operator == (const Map& m1, const Map& m2)\n 22\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nDefining semantics of itl concepts via sets of laws\naka c++0x axioms\nChecking law sets via 
automatic testing:\nA Law Based Test Automaton LaBatea\nGenerate\nlaw instance\napply law to instance\ncollect violations\nCommutativity<T a, U b, +>:\n a + b = b + a;23\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nLexicographical Ordering and Equality\nFor all itl containers operator < implements a strict \nweak ordering . \nThe induced equivalence of this ordering is \nlexicographical equality which is implemented as \noperator ==\nThis is in line with the semantics of \nSortedAssociativeContainers24\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nSubset Ordering and Element Equality\nFor all itl containers function contained_in \nimplements a partial ordering .\nThe induced equivalence of this ordering is \nequality of elements which is implemented as \nfunction is_element_equal .25\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nitl::Sets\nAll itl sets implement a Set Algebra , which is to say \nsatisfy a “ classical” set of laws . . .\n. . . using is_element_equal as equality\nAssociativity, Neutrality, Commutativity (for + and &)\nDistributivity, DeMorgan, Symmetric Difference\nMost of the itl sets satisfy the classical set of laws \neven if . . .\n. . . lexicographical equality: operator == is used\nThe differences reflect proper inequalities in sequence \nthat occur for separate_interval_set and \nsplit_interval_set . 26\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nConcept Induction / Concept Transition\nThe semantics of itl::Maps appears to be determined by the \ncodomain type of the map\n is model of if example \n Map<D,Monoid> Monoid interval_map<int, string> \n Map<D,Set> Set C1 interval_map<int, set<int>>\n \n Map<D,CommutMonoid > CommutMonoid interval_map<int, unsigned>\n Map<D,AbelianGroup> AbelianGroup C2 interval_map<int, int,total>\nConditions C1 and C2 restrict the Concept Induction to specific \nmap traits\nC1: Value pairs that carry a neutral element as associated \nvalue are always deleted (Trait: absorbs_neutrons ).\nC2: The map is total: Non existing keys are implicitly mapped to \nneutral elements (Trait: is_total ). 
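A small, hedged sketch of what such a law check looks like in user code, using the boost::itl headers and the is_element_equal function named on these slides; the header paths follow the itl_3_1_0 layout mentioned later and may differ in other versions of the library.

#include <boost/itl/interval.hpp>
#include <boost/itl/interval_set.hpp>
#include <cassert>

int main()
{
  using namespace boost::itl;

  interval_set<int> a, b;
  a += interval<int>::rightopen(1, 3);
  a += interval<int>::rightopen(4, 5);
  b += interval<int>::rightopen(2, 4);

  // Commutativity<T a, U b, +>: a + b = b + a, stated against the element
  // equality induced by contained_in.
  assert(is_element_equal(a + b, b + a));

  // For the joining interval_set the lexicographical operator== agrees as
  // well; only the separating and splitting variants can keep different
  // interval sequences for the same set of elements.
  assert((a + b) == (b + a));
  return 0;
}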
27\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Implementation\nItl containers are implemented based on\nstd::set and std::map\nBasic operations like adding and subtracting intervals or \ninterval value pairs perform with a time complexity \nbetween * amortized O(log n) and O(n), where n is the \nnumber of intervals of a container.\nOperations like addition and subtraction of whole \ncontainers are having a worst case complexity of\nO(m log(n+m)) , where n and m are the numbers of \nintervals of the containers to combine.\n* : Consult the library documentation for more detailed \ninformation.28\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Future Works\nImplementing interval_maps of sets more efficiently\nRevision of features of the extended itl (itl_plus.zip)\nDecomposition of histories : k histories hk with attribute \ntypes A1, ..., Ak are “decomposed ” to a product history \nof tuples of attribute sets:\n(h1<T,A1>,..., h<T,Ak>) → h<T, (set<A1>,…, set<Ak>)>\nCubes (generalized crosstables): Applying aggregate \non collision to maps of tuple value pairs in order to \norganize hierachical data and their aggregates.29\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Availability\nItl project on sourceforge (version 2.0.1)\nhttp://sourceforge.net/projects/itl\nLatest version on boost vault/Containers (3.1.0)\nhttp://www.boostpro.com/vault/ → containers\nitl_3_1_0.zip : Core itl in preparation for boost\nitl_plus_3_1_0.zip : Extended itl including histories, cubes \nand automatic validation (LaBatea).\nOnline documentation at\nhttp://www.herold-faulhaber.de/\nDoxygen generated docs for (version 2.0.1)\nhttp://www.herold-faulhaber.de/itl/\nLatest boost style documentation (version 3.1.0)\nhttp://www.herold-faulhaber.de/boost_itl/doc/libs/itl/doc/html/30\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Availability\nBoost sandbox\nhttps://svn.boost.org/svn/boost/sandbox/itl/\nCore itl: Interval containers in prepartion for boost\nhttps://svn.boost.org/svn/boost/sandbox/itl/boost/itl/\nhttps://svn.boost.org/svn/boost/sandbox/itl/libs/itl/\nExtended itl_xt: “histories” and cubes\nhttps://svn.boost.org/svn/boost/sandbox/itl/boost/itl_xt/\nhttps://svn.boost.org/svn/boost/sandbox/itl/libs/itl_xt/\nValidater LaBatea: \nCompiles with msvc-8.0 or newer, gcc-4.3.2 or newer\nhttps://svn.boost.org/svn/boost/sandbox/itl/boost/validate/\nhttps://svn.boost.org/svn/boost/sandbox/itl/libs/validate/2009-05-08Lectureheld at the Boost Library Conference 2009Joachim Faulhaber\nSlide Design by Chih-Hao Tsaihttp://www.chtsai.orgCopyright © Joachim Faulhaber 2009Distributed under Boost Software Licence 1.0Updated version 3.1.0 2009-09-17An Introduction to the \nInterval Template Library2\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Lecture Outline\nBackground and Motivation\nDesign\nExamples\nSemantics\nImplementation\nFuture Works\nAvailability3\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Background and Motivation\nInterval containers simplified the implementation of \ndate and time related tasks\nDecomposing “histories” of attributed events into \nsegments with constant attributes.\nWorking with time grids, e.g. 
a grid of months.\nAggregations of values associated to date or time \nintervals.\n… that occurred frequently in programs like\nBilling modules\nTherapy scheduling programs\nHospital and controlling statistics4\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nBackground is the date time problem domain ...\n… but the scope of the Itl as a generic library is more \ngeneral: \nan interval_set is a set\n that is implemented as a set of intervals \nan interval_map is a map\n that is implemented as a map of interval value pairs5\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Aspects\nThere are two aspects in the design of interval \ncontainers\nConceptual aspect\ninterval_set<int> mySet;\nmySet.insert(42);\nbool has_answer = mySet.contains(42);interval_set<int> mySet;\nmySet.insert(42);\nbool has_answer = mySet.contains(42);\nOn the conceptual aspect an interval_set can be used \njust as a set of elements\nexcept for . . .\n. . . iteration over elements\nconsider interval_set<double> or interval_set<string>\nIterative Aspect\nIteration is always done over intervals6\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nAddability and Subtractability\nAll of itl's (interval) containers are Addable and \nSubtractable \nThey implement operators +=, +, -= and -\n+=-=\n sets set unionset difference\n maps ??\nA possible implementation for maps\nPropagate addition/subtraction to the associated values \n. . . or aggregate on overlap\n. . . or aggregate on collision7\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nAggregate on overlap\n→ a\n→ b\n+→ a\n→ (a + b)\n→ b\nDecompositional \neffect on Intervals\nAccumulative effect \non associated values\nI\nJJ-II-J\nI∩J\nI, J: intervals, a,b: associated values8\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nAggregate on overlap, a minimal example\ntypedef itl::set<string> guests;\ninterval_map<time, guests> party;\n \nparty += make_pair(\n interval<time>::rightopen(20:00, 22:00), guests( \"Mary\"));\nparty += make_pair(\n interval<time>::rightopen(21:00, 23:00), guests( \"Harry\")); \n// party now contains[20:00, 21:00)->{\"Mary\"} [21:00, 22:00)->{\"Harry\",\"Mary\"} //guest sets aggregated [22:00, 23:00)->{\"Harry\"}9\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Granu-\nlarityStyleSets Maps\nintervalinterval\njoininginterval_set interval_map\nseparatingseparate_interval_set\nsplittingsplit_interval_set split_interval_map\nelementset mapDesign\nThe Itl's class templates10\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nInterval Combining Styles: Joining\nIntervals are joined on overlap or on touch\n. . . 
for maps, if associated values are equal\nKeeps interval_maps and sets in a minimal form\n interval_set\n \n {[1 3) }\n + [2 4)\n + [4 5)\n = {[1 4) }\n \n = {[1 5)} interval_map\n \n {[1 3)->1 } \n + [2 4) ->1\n + [4 5) ->1\n ={[1 2)[2 3)[3 4) }\n ->1 ->2 ->1 \n ={[1 2)[2 3)[3 5) }\n ->1 ->2 ->1 11\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nInterval Combining Styles: Splitting\nIntervals are split on overlap and kept separate on touch\nAll interval borders are preserved (insertion memory)\n split_interval_set\n \n {[1 3) }\n + [2 4)\n + [4 5)\n = {[1 2)[2 3)[3 4) }\n \n = {[1 2)[2 3)[3 4)[4 5)} split_interval_map\n \n {[1 3)->1 } \n + [2 4) ->1\n + [4 5) ->1\n ={[1 2)[2 3)[3 4) }\n ->1 ->2 ->1 \n ={[1 2)[2 3)[3 4)[4 5) }\n ->1 ->2 ->1 ->1 12\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nInterval Combining Styles: Separating\nIntervals are joined on overlap but kept separate on \ntouch\nPreserves borders that are never crossed (preserves a \nhidden grid).\n separate_interval_set\n \n {[1 3) }\n + [2 4)\n + [4 5)\n = {[1 4) }\n \n = {[1 4)[4 5)} 13\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nA few instances of intervals (interval.cpp)\ninterval<int> int_interval = interval< int>::closed(3,7);\ninterval<double> sqrt_interval\n = interval<double>::rightopen(1/sqrt(2.0), sqrt(2.0));\ninterval<std::string> city_interval\n = interval<std::string>:: leftopen(\"Barcelona\", \"Boston\");\ninterval<boost::ptime> time_interval\n = interval<boost::ptime>::open(\n time_from_string(\"2008-05-20 19:30\" ), time_from_string(\"2008-05-20 23:00\" ) );14\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nA way to iterate over months and weeks \n(month_and_week_grid.cpp )\n#include <boost/itl/gregorian.hpp> //boost::gregorian plus adapter code #include <boost/itl/split_interval_set.hpp>\n// A split_interval_set of gregorian dates as date_grid.typedef split_interval_set<boost::gregorian::date> date_grid;\n// Compute a date_grid of months using boost::gregorian.date_grid month_grid( const interval<date>& scope){ date_grid month_grid; // Compute a date_grid of months using boost::gregorian. . . . return month_grid;}\n// Compute a date_grid of weeks using boost::gregorian.date_grid week_grid( const interval<date>& scope){ date_grid week_grid; // Compute a date_grid of weeks using boost::gregorian. . . . return week_grid;}15\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nA way to iterate over months and weeks\nvoid month_and_time_grid(){ date someday = day_clock::local_day(); date thenday = someday + months(2); interval<date> scope = interval<date>::rightopen(someday, thenday);\n // An intersection of the month and week grids ... date_grid month_and_week_grid = month_grid(scope) & week_grid(scope);\n // ... allows to iterate months and weeks. Whenever a month // or a week changes there is a new interval. for(date_grid::iterator it = month_and_week_grid.begin(); it != month_and_week_grid.end(); it++) { . . . }\n // We can also intersect the grid into an interval_map to make // shure that all intervals are within months and week bounds. 
interval_map<boost::gregorian::date, some_type> accrual; compute_some_result(accrual, scope); accrual &= month_and_week_grid;\n}16\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nAggregating with interval_maps\nComputing averages via implementing operator +=\n(partys_guest_average.cpp)\nclass counted_sum{public:counted_sum():_sum(0),_count(0){}counted_sum(int sum):_sum(sum),_count(1){}\nint sum()const {return _sum;}int count()const{return _count;}double average()const { return _count==0 ? 0.0 : _sum/ static_cast<double>(_count); }\ncounted_sum& operator += (const counted_sum& right){ _sum += right.sum(); _count += right.count(); return *this; }\nprivate:int _sum;int _count;};\nbool operator == (const counted_sum& left, const counted_sum& right){ return left.sum()==right.sum() && left.count()==right.count(); } 17\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nAggregating with interval_maps\nComputing averages via implementing operator +=\nvoid partys_height_average(){ interval_map<ptime, counted_sum> height_sums;\n height_sums += ( make_pair( interval<ptime>::rightopen( time_from_string( \"2008-05-20 19:30\"), time_from_string( \"2008-05-20 23:00\")), counted_sum(165)) // Mary is 1,65 m tall. );\n // Add height of more pary guests . . . \n interval_map<ptime, counted_sum>::iterator height_sum_ = height_sums.begin(); while(height_sum_ != height_sums.end()) { interval<ptime> when = height_sum_->first; double height_average = (*height_sum_++).second. average();\n cout << \"[\" << when.first() << \" - \" << when.upper() << \")\" << \": \" << height_average << \" cm\" << endl; }}18\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nInterval containers allow to express a variety of date \nand time operations in an easy way.\nExample man_power.cpp ...\nSubtract weekends and holidays from an interval_set\nworktime -= weekends(scope)\nworktime -= german_reunification_day\nIntersect an interval_map with an interval_set\nclaudias_working_hours &= worktime\nSubtract and interval_set from an interval map\nclaudias_working_hours -= claudias_absense_times\nAdding interval_maps\ninterval_map<date, int> manpower;\nmanpower += claudias_working_hours;\nmanpower += bodos_working_hours;19\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nInterval_maps can also be intersected\nExample user_groups.cpp\ntypedef boost::itl::set<string> MemberSetT;typedef interval_map<date, MemberSetT> MembershipT;\nvoid user_groups(){ . . .\n MembershipT med_users; // Compute membership of medical staff med_users += make_pair(member_interval_1, MemberSetT(\"Dr.Jekyll\")); med_users += . . . \n MembershipT admin_users; // Compute membership of administation staff med_users += make_pair(member_interval_2, MemberSetT(\"Mr.Hyde\")); . . .\n MembershipT all_users = med_users + admin_users;\n MembershipT super_users = med_users & admin_users; . . 
.\n}20\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nThe semantics of itl sets is based on a concept itl::Set\nitl::set, interval_set, split_interval_set \nand separate_interval_set are models of concept \nitl::Set\n// Abstract partempty set: Set::Set()subset relation: bool Set::contained_in(const Set& s2)constequality: bool is_element_equal(const Set& s1, const Set& s2)set union: Set& operator += (Set& s1, const Set& s2) Set operator + (const Set& s1, const Set& s2)set difference: Set& operator -= (Set& s1, const Set& s2) Set operator - (const Set& s1, const Set& s2)set intersection: Set& operator &= (Set& s1, const Set& s2) Set operator & (const Set& s1, const Set& s2) \n// Part related to sequential orderingsorting order: bool operator < (const Set& s1, const Set& s2)lexicographical equality: bool operator == (const Set& s1, const Set& s2) 21\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nThe semantics of itl maps is based on a concept itl::Map\nitl::map, interval_map and split_interval_map \nare models of concept \nitl::Map\n// Abstract partempty map: Map::Map()submap relation: bool Map::contained_in(const Map& m2)constequality: bool is_element_equal(const Map& m1, const Map& m2)map union: Map& operator += (Map& m1, const Map& m2) Map operator + (const Map& m1, const Map& m2)map difference: Map& operator -= (Map& m1, const Map& m2) Map operator - (const Map& m1, const Map& m2)map intersection: Map& operator &= (Map& m1, const Map& m2) Map operator & (const Map& m1, const Map& m2) \n// Part related to sequential orderingsorting order: bool operator < (const Map& m1, const Map& m2)lexicographical equality: bool operator == (const Map& m1, const Map& m2) 22\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nDefining semantics of itl concepts via sets of laws\naka c++0x axioms\nChecking law sets via automatic testing:\nA Law Based Test Automaton LaBatea\nGenerate\nlaw instance\napply law to instance\ncollect violations\nCommutativity<T a, U b, +>:\n a + b = b + a;23\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nLexicographical Ordering and Equality\nFor all itl containers operator < implements a strict \nweak ordering. \nThe induced equivalence of this ordering is \nlexicographical equality which is implemented as \noperator ==\nThis is in line with the semantics of \nSortedAssociativeContainers24\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nSubset Ordering and Element Equality\nFor all itl containers function contained_in \nimplements a partial ordering .\nThe induced equivalence of this ordering is \nequality of elements which is implemented as \nfunction is_element_equal .25\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nitl::Sets\nAll itl sets implement a Set Algebra, which is to say \nsatisfy a “classical” set of laws . . .\n. . . using is_element_equal as equality\nAssociativity, Neutrality, Commutativity (for + and &)\nDistributivity, DeMorgan, Symmetric Difference\nMost of the itl sets satisfy the classical set of laws \neven if . . .\n. . . lexicographical equality: operator == is used\nThe differences reflect proper inequalities in sequence \nthat occur for separate_interval_set and \nsplit_interval_set . 
26\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nConcept Induction / Concept Transition\nThe semantics of itl::Maps appears to be determined by the \ncodomain type of the map\n is model of if example \n Map<D,Monoid> Monoid interval_map<int, string> \n Map<D,Set> Set C1 interval_map<int, set<int>> Map<D,CommutMonoid> CommutMonoid interval_map<int, unsigned>\n Map<D,AbelianGroup> AbelianGroup C2 interval_map<int,int,total>\nConditions C1 and C2 restrict the Concept Induction to specific \nmap traits\nC1: Value pairs that carry a neutral element as associated \nvalue are always deleted (Trait: absorbs_neutrons ).\nC2: The map is total: Non existing keys are implicitly mapped to \nneutral elements (Trait: is_total). 27\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Implementation\nItl containers are implemented based on\nstd::set and std::map\nBasic operations like adding and subtracting intervals or \ninterval value pairs perform with a time complexity \nbetween* amortized O(log n) and O(n), where n is the \nnumber of intervals of a container.\nOperations like addition and subtraction of whole \ncontainers are having a worst case complexity of\nO(m log(n+m)), where n and m are the numbers of \nintervals of the containers to combine.\n* : Consult the library documentation for more detailed \ninformation.28\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Future Works\nImplementing interval_maps of sets more efficiently\nRevision of features of the extended itl (itl_plus.zip)\nDecomposition of histories : k histories hk with attribute \ntypes A1, ..., Ak are “decomposed” to a product history \nof tuples of attribute sets:\n(h1<T,A1>,..., h<T,Ak>) → h<T, (set<A1>,…, set<Ak>)>\nCubes (generalized crosstables): Applying aggregate \non collision to maps of tuple value pairs in order to \norganize hierachical data and their aggregates.29\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Availability\nItl project on sourceforge (version 2.0.1)\nhttp://sourceforge.net/projects/itl\nLatest version on boost vault/Containers (3.1.0)\nhttp://www.boostpro.com/vault/ → containers\nitl_3_1_0.zip : Core itl in preparation for boost\nitl_plus_3_1_0.zip : Extended itl including histories, cubes \nand automatic validation (LaBatea).\nOnline documentation at\nhttp://www.herold-faulhaber.de/\nDoxygen generated docs for (version 2.0.1)\nhttp://www.herold-faulhaber.de/itl/\nLatest boost style documentation (version 3.1.0)\nhttp://www.herold-faulhaber.de/boost_itl/doc/libs/itl/doc/html/30\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Availability\nBoost sandbox\nhttps://svn.boost.org/svn/boost/sandbox/itl/\nCore itl: Interval containers in prepartion for boost\nhttps://svn.boost.org/svn/boost/sandbox/itl/boost/itl/\nhttps://svn.boost.org/svn/boost/sandbox/itl/libs/itl/\nExtended itl_xt: “histories” and cubes\nhttps://svn.boost.org/svn/boost/sandbox/itl/boost/itl_xt/\nhttps://svn.boost.org/svn/boost/sandbox/itl/libs/itl_xt/\nValidater LaBatea: \nCompiles with msvc-8.0 or newer, gcc-4.3.2 or newer\nhttps://svn.boost.org/svn/boost/sandbox/itl/boost/validate/\nhttps://svn.boost.org/svn/boost/sandbox/itl/libs/validate/" } ]
{ "category": "App Definition and Development", "file_name": "intro_to_itl_3_1_0.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "Everything we know about CRC but afraid to\nforget\nAndrew Kadatch1and Bob Jenkins2\n1Google Inc.\n2Microsoft Corporation\nSeptember 3, 2010\nAbstract\nThis paper describes a novel interleaved, parallelizeable word-by-word\nCRC computation algorithm which computes N-bit CRC ( N\u001464) on\nmodern Intel and AMD processors in 1.2 CPU cycles per byte, improv-\ning state of the art over word-by-word 32-bit and 64-bit CRCs (2.1 CPU\ncycles/byte) and classic byte-by-byte CRC computation (6-7 CPU cy-\ncles/byte). It computes 128-bit CRC in 1.7 CPU cycles/byte.\nCRC implementations are heavily optimized and hard to understand.\nThis paper describes CRC algorithms as they evolved over time, splitting\ncomplex optimizations into a sequence of natural improvements.\nThis paper also presents a collection of CRC \\tricks\" that we found\nhandy on many occassions.\nContents\n1 De\fnition of CRC 2\n2 Related work 3\n3 CRC tricks and tips 4\n3.1 Incremental CRC computation . . . . . . . . . . . . . . . . . . . 4\n3.2 Changing initial CRC value . . . . . . . . . . . . . . . . . . . . . 4\n3.3 Concatenation of CRCs . . . . . . . . . . . . . . . . . . . . . . . 5\n3.4 In-place modi\fcation of CRC-ed message . . . . . . . . . . . . . 5\n3.5 Storing CRC value after the message . . . . . . . . . . . . . . . . 6\n4 E\u000ecient software implementation 7\n4.1 Mapping bitstreams to hardware registers . . . . . . . . . . . . . 7\n4.2 Multiplication of D-normalized polynomials . . . . . . . . . . . . 7\n4.3 Multiplication of unnormalized polynomial . . . . . . . . . . . . . 7\n14.4 Computing powers of x. . . . . . . . . . . . . . . . . . . . . . . 8\n4.5 Simpli\fed CRC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9\n4.6 Computing a CRC byte by byte . . . . . . . . . . . . . . . . . . . 9\n4.7 Rolling CRC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11\n4.8 Reading multiple bytes at a time . . . . . . . . . . . . . . . . . . 12\n4.9 Computing a CRC word by word . . . . . . . . . . . . . . . . . . 12\n4.10 Processing non-overlapping blocks in parallel . . . . . . . . . . . 13\n4.11 Interleaved word-by-word CRC . . . . . . . . . . . . . . . . . . . 16\n4.11.1 Parallelizing CRC computation . . . . . . . . . . . . . . . 16\n4.11.2 Combining individual CRCs . . . . . . . . . . . . . . . . . 18\n4.11.3 E\u000ecient computation of individual CRCs . . . . . . . . . 19\n5 Experimental results 22\n5.1 Testing methology . . . . . . . . . . . . . . . . . . . . . . . . . . 22\n5.2 Compiler comparison . . . . . . . . . . . . . . . . . . . . . . . . . 22\n5.3 Choice of interleave level . . . . . . . . . . . . . . . . . . . . . . . 23\n5.4 Performance of CRC algorithms . . . . . . . . . . . . . . . . . . . 23\n1 De\fnition of CRC\nCyclic Redundancy Check (CRC) is a well-known technique that allows the\nrecipient of a message transmitted over a noisy channel to detect whether the\nmessage has been corrupted.\nA message M=m0: : : m N\u00001comprised of N=jMjbits ( mk2f0;1g) may\nbe viewed either as a numeric value\nM=N\u00001X\nk=0mk2N\u00001\u0000k\nor as a polynomial of a single variable of degree ( N\u00001)\nM(x) =N\u00001X\nk=0mkxN\u00001\u0000k\nwhere mk2GF(2) =f0;1gand all arithmetic operations on coe\u000ecients are\nperformed modulo 2. 
For example,\nAddition: ( x3+x2+x+ 1) + ( x2+x+ 1) = x3+ 2x2+ 2x+ 2 = x3;\nSubtraction: ( x3+x+ 1)\u0000(x2+x) =x3\u0000x2+ 1 = x3+x2+ 1;\nMultiplication: ( x+ 1)( x+ 1) = x2+ 2x+ 1 = x2+ 1:\nFor a given polynomial P(x) of degree D= deg\u0000\nP(x)\u0001\n, CRC u\u0000\nM(x); v(x)\u0001\nis the reminder from division of\u0000\nM(x)\u0001xD\u0001\nbyP(x). In practice, a more\n2complex formula is used:\nCRC u\u0000\nM(x); v(x)\u0001\n=\u0010\u0000\nv(x)\u0000u(x)\u0001\n\u0001xjMj+M(x)\u0001xD+u(x)\u0011\nmodP(x);\n(1)\nwhere polynomial P(x) of degree Dand polynomial u(x) of degree less than D\nare \fxed.\nThe use of the non-zero value of u(x) guarantees that the CRC of a sequence\nof zeroes is di\u000berent from zero. That allows detection of insertion of zeroes in\nthe beginning of a message and replacement of both content of the message and\nits CRC value with zeroes. Typically,\nu(x) =D\u00001X\nk=0xk: (2)\nThe use of auxilary parameter v(x) allows incremental CRC computation as\nshown in section 3.1.\n2 Related work\nCyclic Redundancy Checks (CRCs) were proposed by Peterson and Brown\n[PB61] in 1961. An e\u000ecient table-driven software implementation which reads\nand processes data byte by byte was described by Hill [Hil79] in 1979, Perez\n[Per83] in 1983. The \\classic\" byte-by-byte CRC algorithm described in section\n4.6 was published by Sarwate [Sar88] in 1988.\nIn 1993, Black [Bla93] published a method that reads data by words (de-\nscribed in section 4.8); however, it still computes the CRC byte by byte in strong\nsequential order.\nIn 2001, Braun and Waldvogel [BW01] brie\ry outlined a specialized vari-\nant of a CRC that could read input data by words and process them byte by\nbyte { but, thanks to the use of multiple tables, di\u000berent bytes from the input\nword could be processed in parallel. In 2002, Ji and Killian [JK02] provided de-\ntailed description and analysis of a nearly identical scheme. Both solutions were\ntargeted for hardware implementation. In 2005, Kouvanis and Berry [KB05]\ndemonstrated clear performance bene\fts of this scheme even when it is im-\nplemented in software. A generalized version of this approach is described in\nsection 4.9.\nSurprisingly, until [GGO+10] we have not seen prior art describing or utiliz-\ning a method of computing a CRC by processing in parallel (in an interleaved\nmanner to utilize multiple ALUs) multiple input streams belonging to non-\noverlapping sections of input data, desribed in section 4.10.\nA novel method of CRC computation that processes in parallel multiple\nwords belonging to overlapping sections of input data is described in section\n4.11. A special case restricted to the use of 64-bit tables, 64-bit reads, and 32\n3or 64-bit generating polynomials was implemented by the authors in February-\nMarch 2007 and was used by a couple of Microsoft products. 
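Before the survey continues, a hedged reference sketch of definition (1) for the common 32-bit case may help: D = 32, u(x) = x^31 + ... + x + 1 (the all-ones value 0xFFFFFFFF), and the little-endian bit mapping described in section 4.1, under which the reflected constant below stands for P(x) without its x^32 term. This is only the textbook bit-at-a-time form, not one of the fast algorithms this paper is about, and the function name is illustrative.

#include <cstddef>
#include <cstdint>

const uint32_t kPolyReflected = 0xEDB88320u;  // P(x) - x^32, reflected
const uint32_t kU = 0xFFFFFFFFu;              // u(x) = sum of x^k, k = 0..31

// Intended to match CRC_u(M, v) of definition (1) under the section-4.1
// bit mapping, processing the message one byte at a time.
uint32_t crc32_bitwise(const uint8_t* msg, size_t len, uint32_t v)
{
  uint32_t crc = v ^ kU;                 // start from v(x) - u(x)
  for (size_t i = 0; i < len; ++i) {
    crc ^= msg[i];                       // append the next 8 message bits
    for (int k = 0; k < 8; ++k)          // multiply by x and reduce mod P(x)
      crc = (crc >> 1) ^ (kPolyReflected & (0u - (crc & 1u)));
  }
  return crc ^ kU;                       // add u(x)
}

With the definition pinned down, the survey of related work continues below.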
In 2009, the\nalgorithm was generalized and these limitations were removed.\nThe fact that the CRC of a message followed by its CRC is a constant value\nwhich does not depend on the message, described in section 3.5, is well known\nand has been widely used in the telecommunication industry for long time.\nA method of storing a carefully chosen sequence of bits after a message so\nthat the CRC of a message and the sequence of bits appended to the mes-\nsage produces prede\fned result, described in 3.5, was implemented in 1990 by\nZemtsov [Zem90].\nA method for recomputing a known CRC using a new initial CRC value,\ndescribed in section 3.2, and the method of computing a CRC of the concatena-\ntion of messages having known CRC values without touching the actual data,\ndescribed in section 3.3, were implemented by one of the authors in 2005 but\nwere not published.\n3 CRC tricks and tips\n3.1 Incremental CRC computation\nThe use of an arbitrary initial CRC value v(x) allows computation of a CRC\nincrementally. If a message M(x) =M1(x)\u0001xjM2j+M2(x) is a concatenation\nof messages M1andM2, its CRC may be computed piece by piece because\nCRC u\u0000\nM(x); v(x)\u0001\n= CRC u\u0010\nM2(x);CRC u\u0000\nM1(x); v(x)\u0001\u0011\n: (3)\nIndeed,\nCRC u(M; v) =\u0000\n(v\u0000u)xjMj+MxD+u\u0001\nmodP=\n=\u0000\n(v\u0000u)xjM1j+jM2j+ (M1xjM2j+M2)xD+u\u0001\nmodP=\n=\u0010\u0000\n(v\u0000u)xjM1j+M1xD\u0001\nxjM2j+M2xD+u\u0011\nmodP=\n=\u0000\nCRC u(M1; v)xjM2j+M2xD+u\u0001\nmodP=\n= CRC u\u0000\nM2;CRC u(M1; v)\u0001\n3.2 Changing initial CRC value\nIf CRC u\u0000\nM(x); v(x)\u0001\nfor some initial value v(x) is known, it is possible to\ncompute CRC u\u0000\nM(x); v0(x)\u0001\nfor di\u000berent initial value v0(x) without touching\nthe value of M(x):\nCRC u(M; v0) = CRC u(M; v) +\u0010\n(v0\u0000v)xjMj\u0011\nmodP: (4)\n4Proof:\nCRC u(M; v0) =\u0000\n(v0\u0000u)xjMj+MxD+u\u0001\nmodP=\n=\u0010\u0000\n(v0\u0000u) + (v\u0000v)\u0001\nxjMj+MxD+u\u0011\nmodP=\n=\u0010\u0000\n(v\u0000u) + (v0\u0000v)\u0001\nxjMj+MxD+u\u0011\nmodP=\n=\u0010\u0000\n(v\u0000u)xjMj+MxD+u\u0001\n+ (v0\u0000v)xjMj\u0011\nmodP=\n= CRC u(M; v) +\u0010\n(v0\u0000v)xjMjmodP\u0011\n:\n3.3 Concatenation of CRCs\nIf a message M(x) =M1(x)\u0001xjM2j+M2(x) is a concatenation of messages\nM1andM2, and CRCs of M1,M2(computed with some initial values v1(x),\nv2(x) respectively) are known, CRC u\u0000\nM(x); v(x)\u0001\nmay be computed without\ntouching contents of the message M:\n1. Using formula (4), the value of v0\n1= CRC u(M1; v) may be computed from\nthe known CRC u(M1; v1) without touching the contents of M1.\n2. Then, v0\n2= CRC u(M2; v0\n1) may be computed from known CRC u(M2; v2)\nwithout touching the contents of M2.\nAccording to (3), CRC u(M; v) =v0\n2.\n3.4 In-place modi\fcation of CRC-ed message\nSometimes it is necessary to replace a part of message M(x) in-place and re-\ncompute CRC of modi\fed message M(x) e\u000eciently.\nIf a message M=ABC is a concatenation of messages A,B, and C, and\nB0(x) is new message of the same length as B(x), CRC u(M0) of message M0=\nAB0Cmay be computed from known CRC u(M). 
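Before the derivation below, here is a hedged sketch of how the tricks of sections 3.1-3.3 turn into code for the simplified CRC_0 of section 4.5 (that is, with v(x) = u(x) = 0), D = 32, and the little-endian representation of section 4.1; combining two standard CRC-32 values with their usual 0xFFFFFFFF pre/post-conditioning additionally needs the initial-value adjustment of section 3.2. The polynomial constant and helper names are illustrative.

#include <cstdint>

const uint32_t kPolyReflected = 0xEDB88320u;  // P(x) - x^32, reflected

// (a * b) mod P(x) for 32-normalized polynomials.  Bit k holds the
// coefficient of x^(31-k), so multiplying by x is a logical right shift.
uint32_t gf2_mulmod(uint32_t a, uint32_t b)
{
  uint32_t product = 0;
  for (int k = 0; k < 32; ++k) {
    if (a & (1u << (31 - k)))                            // a has x^k
      product ^= b;                                      // add (b * x^k) mod P
    b = (b >> 1) ^ (kPolyReflected & (0u - (b & 1u)));   // b := (b * x) mod P
  }
  return product;
}

// x^n mod P(x) by repeated squaring, as in section 4.4.
uint32_t gf2_xpow(uint64_t n)
{
  uint32_t result = 0x80000000u;   // the polynomial 1 = x^0
  uint32_t square = 0x40000000u;   // x^1
  for (; n != 0; n >>= 1) {
    if (n & 1)
      result = gf2_mulmod(result, square);
    square = gf2_mulmod(square, square);
  }
  return result;
}

// CRC_0 of the concatenation AB from the two individual CRC_0 values and the
// length of B in bits: CRC_0(AB, 0) = ((crcA * x^|B|) mod P) xor crcB.
uint32_t crc0_concat(uint32_t crc_a, uint32_t crc_b, uint64_t len_b_bits)
{
  return gf2_mulmod(crc_a, gf2_xpow(len_b_bits)) ^ crc_b;
}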
Indeed,\nM(x) =A(x)\u0001xjBj+jCj+B(x)\u0001xjCj+C(x);\nM0(x) =A(x)\u0001xjBj+jCj+B0(x)\u0001xjCj+C(x) =\n=M(x) +\u0000\nB0(x)\u0000B(x)\u0001\n\u0001xjCj;\ntherefore\nCRC u\u0000\nM0(x); v(x)\u0001\n=\n= CRC u\u0010\nM(x) +\u0000\nB0(x)\u0000B(x)\u0001\n\u0001xjCj\u0011\n=\n=\u0010\u0000\nv(x)\u0000u(x)\u0001\nxjMj+M(x)xD+\u0000\nB0(x)\u0000B(x)\u0001\nxjCj+D+u(x)\u0011\nmodP(x)\n=\u0010\nCRC u\u0000\nM(x); v(x)\u0001\n+\u0000\nB0(x)\u0000B(x)\u0001\nxjCj+D\u0011\nmodP(x) =\n= CRC u\u0000\nM(x); v(x)\u0001\n+\u0010\u0000\nB0(x)\u0000B(x)\u0001\nxjCj+DmodP(x)\u0011\n:\n5It is easy to see that\nCRC u\u0000\nB0(x); v(x)\u0001\n\u0000CRC u\u0000\nB(x); v(x)\u0001\n=\n=\u0000\nB0(x)\u0000B(x)\u0001\nxDmodP(x);\nso\nCRC u\u0000\nM0(x); v(x)\u0001\n= CRC u\u0000\nM(x); v(x)\u0001\n+ \u0001\nwhere\n\u0001 =\u0010\nCRC u\u0000\nB0(x); v(x)\u0001\n\u0000CRC u\u0000\nB(x); v(x)\u0001\u0011\nxjCjmodP(x):\n3.5 Storing CRC value after the message\nOften Q(x) = CRC u\u0000\nM(x); v(x)\u0001\nis padded with zero bits until the nearest\nbyte or word boundary and is transmitted as a sequence of Wbits ( W\u0015D)\nright after the message M(x). This way, the transmitted message T(x) is the\nconcatenation of M(x) and Q(x) followed by ( W\u0000D) zeroes, and is equal to\nT(x) =M(x)\u0001xW+Q(x)\u0001xW\u0000D:\nAccording to (1), (3) and taking into account that Q(x) +Q(x) = 0 since\npolynomial coe\u000ecient are from GF(2), CRC u\u0000\nT(x); v(x)\u0001\nis a constant value\nwhich does not depend on the contents of the message and is equal to\nCRC u\u0000\nT(x); v(x)\u0001\n=\n= CRC u\u0010\nQ(x)\u0001xW\u0000D; CRC\u0000\nM(x); v(x)\u0001\u0011\n=\n= CRC u\u0000\nQ(x)\u0001xW\u0000D; Q(x)\u0001\n=\n=\u0010\u0000\nQ(x)\u0000u(x)\u0001\n\u0001xW+Q(x)\u0001xW\u0000D\u0001xD+u(x)\u0011\nmodP(x) =\n=\u0010\nu(x)\u0000\n1\u0000xW\u0001\u0011\nmodP(x):\nA more generic solution is to store a W-bit long value after the message\nsuch that the CRC of the transmitted message is equal to a prede\fned value\nR(x) (typically R(x) = 0). The D-bit value followed by ( W\u0000D) zero bits that\nshould be stored after M(x) is\n^q\u0000\nQ(x)\u0001\n=\u0010\u0000\nR(x)\u0000u(x)\u0001\nx\u0000W\u0000\u0000\nQ(x)\u0000u(x)\u0001\u0011\nmodP(x)\nwhere x\u0000Wis the multiplicative inverse of xWmodP(x) which exists if P(x)\nis not divisble by xand may be found by the extended Euclidean algorithm\n6[Has01]:\nCRC u\u0010\n^q\u0000\nQ(x)\u0001\nxW\u0000D; CRC\u0000\nM(x); v(x)\u0001\u0011\n=\n= CRC u\u0010\n^q\u0000\nQ(x)\u0001\nxW\u0000D; Q(x)\u0011\n=\n=\u0010\u0000\nQ(x)\u0000u(x)\u0001\n\u0001xW+ ^q\u0000\nQ(x)\u0001\n\u0001xW\u0000D\u0001xD+u(x)\u0011\nmodP(x) =\n=R(x):\n4 E\u000ecient software implementation\n4.1 Mapping bitstreams to hardware registers\nFor little-endian machines (assumed from now on), the result of loading of a\nD-bit word from memory into hardware register matches the expectations: the\n0-th bit of the 0-th byte becomes the 0-th (least signi\fcant) bit of the word\ncorresponding to x(D\u00001).\nFor example, the 32-bit sequence of 4 bytes 0x01, 0x02, 0x03, 0x04 (0x04030201\nwhen loaded into a 32-bit hardware register) corresponds to the polynomial\n\u0000\nx31+x22+x15+x14+x5\u0001\n:\nAddition and subtraction of polymonials with coe\u000ecients from GF(2) is the\nbitwise XOR of their coe\u000ecients. Multiplication of a polynomial by xis achieved\nby logical right shift of register contents by 1 bit. 
4 Efficient software implementation

4.1 Mapping bitstreams to hardware registers

For little-endian machines (assumed from now on), the result of loading a D-bit word from memory into a hardware register matches the expectations: the 0-th bit of the 0-th byte becomes the 0-th (least significant) bit of the word, corresponding to x^{D−1}.

For example, the 32-bit sequence of 4 bytes 0x01, 0x02, 0x03, 0x04 (0x04030201 when loaded into a 32-bit hardware register) corresponds to the polynomial

    x^31 + x^22 + x^15 + x^14 + x^5.

Addition and subtraction of polynomials with coefficients from GF(2) is the bitwise XOR of their coefficients. Multiplication of a polynomial by x is achieved by a logical right shift of the register contents by 1 bit. If a shift operation causes a carryover, the resulting polynomial has degree D.

Polynomials of degree less than D whose coefficients are recorded using exactly D bits irrespective of the actual degree of the polynomial will be called D-normalized. Whenever possible (and unless mentioned explicitly), all polynomials will be represented in D-normalized form.

Since the generating polynomial P(x) is of degree D and has (D + 1) coefficients, it does not fit into the D-bit register. However, its most significant coefficient is guaranteed to be 1 and may be left implicit.

4.2 Multiplication of D-normalized polynomials

Multiplication of two D-normalized polynomials may be accomplished by traditional bit-by-bit, shift-and-add multiplication. This is adequate if performance is not a concern. Sample code is given in listing 1.

// "a" and "b" occupy the D least significant bits.
Crc Multiply(Crc a, Crc b) {
  Crc product = 0;
  Crc bPowX[D + 1];  // bPowX[k] = (b * x**k) mod P
  bPowX[0] = b;
  for (int k = 0; k < D; ++k) {
    // If "a" has a non-zero coefficient at x**k,
    // add ((b * x**k) mod P) to the result.
    if ((a & ((Crc)1 << (D - 1 - k))) != 0) product ^= bPowX[k];

    // Compute bPowX[k+1] = (b * x**(k+1)) mod P.
    if (bPowX[k] & 1) {
      // If the degree of (bPowX[k] * x) is D, then
      // the degree of (bPowX[k] * x - P) is less than D.
      bPowX[k + 1] = (bPowX[k] >> 1) ^ P;
    } else {
      bPowX[k + 1] = bPowX[k] >> 1;
    }
  }
  return product;
}

Listing 1: Multiplication of normalized polynomials

4.3 Multiplication of unnormalized polynomial

During initialization of CRC tables it may be necessary to multiply a d-normalized polynomial v(x) of a degree d ≠ D by a D-normalized polynomial. It may be accomplished by representing the operand as a sum of weighted polynomials of degree no more than (D − 1), then calling the Multiply() function repeatedly as shown in listing 2.
// "v" occupies the "d" least significant bits.
// "m" occupies the D least significant bits.
Crc MultiplyUnnormalized(Crc v, int d, Crc m) {
  Crc result = 0;
  while (d > D) {
    Crc temp = v & ((1 << D) - 1);
    v >>= D;
    d -= D;
    // XpowN returns (x**N mod P(x)).
    result ^= Multiply(temp, Multiply(m, XpowN(d)));
  }
  result ^= Multiply(v << (D - d), m);
  return result;
}

Listing 2: Multiplication of unnormalized polynomial

4.4 Computing powers of x

Often (see sections 3.2, 3.3, 3.5) it is necessary to compute x^N mod P(x) for very large values of N. This may be accomplished in O(log(N)) time.

Consider the binary representation of N:

    N = Σ_{k=0}^{K} n_k·2^k,    n_k ∈ {0, 1}.

Then

    x^N = x^{Σ n_k·2^k} = Π_{k=0}^{K} x^{n_k·2^k} = Π_{n_k ≠ 0} x^{2^k}    (5)

and may be computed using no more than (⌊log2(N)⌋ + 1) multiplications of polynomials of degree less than D, provided known values of

    Pow2k(k) = x^{2^k} mod P(x).    (6)

Values of Pow2k(k) may be computed iteratively using one multiplication mod P(x) per iteration:

    Pow2k(0) = x,
    Pow2k(k + 1) = x^{2^{k+1}} mod P(x)
                 = x^{2·2^k} mod P(x)
                 = (x^{2^k})^2 mod P(x)
                 = (Pow2k(k))^2 mod P(x).
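The listings in this paper call an XpowN() helper but never spell it out. The following C sketch shows one way it could be written on top of the Pow2k table from (6) and Multiply() from listing 1; it is our illustration, not the original code, and it assumes the paper's constants D and P are defined and that 64 table entries suffice for the message lengths of interest.

#include <stdint.h>

typedef uint64_t Crc;

Crc Multiply(Crc a, Crc b);              /* listing 1 */

#define MAX_POW2K 64
static Crc Pow2k[MAX_POW2K];             /* Pow2k[k] = x^(2^k) mod P, per (6) */

/* Fills Pow2k[] using the recurrence Pow2k(k+1) = Pow2k(k)^2 mod P.
   In D-normalized form, "x" has only the x^1 coefficient set, i.e. bit (D-2). */
void InitPow2k(void) {
  Pow2k[0] = (Crc)1 << (D - 2);
  for (int k = 1; k < MAX_POW2K; ++k)
    Pow2k[k] = Multiply(Pow2k[k - 1], Pow2k[k - 1]);
}

/* Returns x^n mod P(x) via formula (5): multiply together x^(2^k) for every
   set bit of n. The accumulator starts at the constant polynomial "1", whose
   D-normalized form has only bit (D-1) set.                                   */
Crc XpowN(uint64_t n) {
  Crc result = (Crc)1 << (D - 1);
  for (int k = 0; n != 0; ++k, n >>= 1) {
    if (n & 1) result = Multiply(result, Pow2k[k]);
  }
  return result;
}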
4.5 Simplified CRC

It is sufficient to be able to compute

    CRC_0(M(x), v(x)) = (v(x)·x^{|M|} + M(x)·x^D) mod P(x),    (7)

since

    CRC_u(M(x), v(x)) = CRC_0(M(x), v(x) − u(x)) + u(x);

CRC_u(M(x), v(x)) of a message M = M1 ... MK may be computed incrementally using CRC_0 instead of CRC_u:

    v_0(x) = v(x) − u(x),
    v_k(x) = CRC_0(M_k(x), v_{k−1}(x)),
    CRC_u(M(x), v(x)) = v_K + u(x).

4.6 Computing a CRC byte by byte

If M(x) is a W-bit value (typically, W = 8) and deg(v(x)) < D, by definition (7)

    CRC_0(M(x), v(x)) = (v(x)·x^W + M(x)·x^D) mod P(x).

When D ≤ W,

    CRC_0(M(x), v(x)) = (v(x)·x^W + M(x)·x^D) mod P(x)
                      = ((v(x)·x^{W−D} + M(x))·x^D) mod P(x),    (8)

which may be obtained via a single lookup into a precomputed table T of size 2^W such that T[i] = (i(x)·x^D) mod P(x), since deg(v(x)·x^{W−D} + M(x)) < W. The D-normalized representation of v(x) occupies the D least significant bits and is equal to (v(x)·x^{W−D}) when viewed as a W-normalized representation, which is required to form a W-bit index into a table of 2^W entries. Therefore, explicit multiplication of v(x) by x^{W−D} in formula (8) is not required.

When D ≥ W, v(x) may be represented as

    v(x) = v_L(x) + v_H(x)·x^{D−W}

where

    v_H(x) = ⌊v(x) / x^{D−W}⌋,      deg(v_H(x)) < W,
    v_L(x) = v(x) mod x^{D−W},      deg(v_L(x)) < D − W.

Since deg(v_L(x)·x^W) < D, (v_L(x)·x^W) mod P(x) = v_L(x)·x^W. Therefore,

    CRC_0(M(x), v(x))
      = (v(x)·x^W + M(x)·x^D) mod P(x)
      = ((v_L(x) + v_H(x)·x^{D−W})·x^W + M(x)·x^D) mod P(x)
      = (v_L(x)·x^W + (v_H(x) + M(x))·x^D) mod P(x)
      = (v_L(x)·x^W) mod P(x) + ((v_H(x) + M(x))·x^D) mod P(x)
      = (v_L(x)·x^W) + MulByXpowD(v_H(x) + M(x)),    (9)

where

    MulByXpowD(a(x)) = (a(x)·x^D) mod P(x).    (10)

The value of (v_L(x)·x^W) may be computed by shifting v(x) by W bits and discarding W carry-over zero bits. Since deg(v_H(x) + M(x)) < W, the value of MulByXpowD(v_H(x) + M(x)) may be obtained using a precomputed table containing 2^W entries.

The classic table-driven, byte-by-byte CRC computation [Per83, Sar88] implementing formulas (1), (3), (8), (9), and (10) for W = 8 is given in listing 3.

Crc MulByXpowD[256];

Crc CrcByte(Byte value) {
  return MulByXpowD[value];
}

Crc CrcByteByByte(Byte *data, int n, Crc v, Crc u) {
  Crc crc = v ^ u;
  for (int i = 0; i < n; ++i) {
    Crc ByteCrc = CrcByte((Byte)(crc ^ data[i]));
    crc >>= 8;
    crc ^= ByteCrc;
  }
  return (crc ^ u);
}

void InitByteTable() {
  for (int i = 0; i < 256; ++i) {
    MulByXpowD[i] = MultiplyUnnormalized(i, 8, XpowN(D));
  }
}

Listing 3: Computing CRC byte by byte

Experience shows that computing a CRC byte by byte is rather slow and, depending on the compiler and input data size, takes 6-8 CPU cycles per byte on a modern 64-bit CPU for D ≤ 64. There are two reasons for it:

1. Reading data 8 bits at a time is not the most efficient data access method on a 64-bit CPU.

2. Modern CPUs have multiple ALUs and may execute 3-4 instructions per CPU cycle provided the instructions handle independent data flows. However, byte-by-byte CRC contains only one data flow. Furthermore, most instructions use the result from the previous instruction, leading to CPU stalls because of result propagation delays.
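Before moving on to faster variants, here is a quick check of the incremental property (3) expressed in terms of listing 3. The fragment is ours, not from the paper: the buffer contents and the initial value are arbitrary, and CrcByteByByte() and InitByteTable() are the functions of listing 3.

#include <assert.h>
#include <stdint.h>

typedef uint64_t Crc;
typedef unsigned char Byte;

Crc CrcByteByByte(Byte *data, int n, Crc v, Crc u);   /* listing 3 */
void InitByteTable(void);                             /* listing 3 */

void incremental_crc_demo(void) {
  Byte buf[8] = {1, 2, 3, 4, 5, 6, 7, 8};   /* arbitrary test data               */
  Crc v = 0, u = ~(Crc)0;                   /* arbitrary initial value and u(x)  */

  InitByteTable();

  /* One pass over the whole message. */
  Crc whole = CrcByteByByte(buf, 8, v, u);

  /* Two passes, feeding the first CRC back in as the initial value, per (3). */
  Crc first_half = CrcByteByByte(buf, 4, v, u);
  Crc both = CrcByteByByte(buf + 4, 4, first_half, u);

  assert(whole == both);
}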
4.7 Rolling CRC

Given a set of messages Mk = m_k ... m_{k+N−1} where the m_k are W-bit symbols and N is fixed (i.e. each next message is obtained by removing the first symbol and appending a new one), C_{k+1} = CRC_u(M_{k+1}, v) may be obtained from the known C_k = CRC_u(M_k, v) and the symbols m_k and m_{k+N} only, without the need to compute the CRC of the entire message M_{k+1}. This property may be utilized to efficiently compute a set of rolling Rabin fingerprints.

Since M_{k+1}(x) = M_k(x)·x^W − m_k(x)·x^{NW} + m_{k+N}(x),

    C_{k+1}(x) = CRC_u(M_{k+1}(x), v(x))
      = ((v(x) − u(x))·x^{NW} + u(x) + Σ_{n=0}^{N−1} m_{k+1+n}(x)·x^{D+W(N−1−n)}) mod P(x)
      = F(C_k(x), m_{k+N}(x)) + G(m_k(x)),

where

    F(C_k(x), m_{k+N}(x)) = (C_k(x)·x^W + m_{k+N}(x)·x^D) mod P,
    G(m_k(x)) = (((v(x) − u(x))·x^{NW} + u)·(1 − x^W) − m_k(x)·x^{D+NW}) mod P

are polynomials of degree less than D.

G(m_k(x)) may be computed easily via a single lookup in a table of 2^W entries indexed by m_k. Computation of F(C_k(x), m_{k+N}(x)) may be implemented as described in section 4.6 and requires one bitwise shift, one bitwise XOR, and one lookup into a precomputed table containing 2^W entries.
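A minimal C sketch of the rolling update, written by us for illustration and not taken from the paper's code: it assumes W = 8 and D ≥ 8, and two 256-entry tables whose names (F_table, G_table) are ours. F_table[b] = b(x)·x^D mod P, i.e. the same table as MulByXpowD in listing 3, and G_table[b] = G(b(x)) per the formula above; table initialization is omitted.

#include <stdint.h>

typedef uint64_t Crc;
typedef unsigned char Byte;

extern Crc F_table[256];   /* F_table[b] = (b(x) * x^D) mod P          */
extern Crc G_table[256];   /* G_table[b] = G(b(x)), per section 4.7    */

/* One step of the rolling CRC over a fixed-size window:
   crc      = C_k = CRC_u(m_k ... m_{k+N-1}, v)
   out_byte = m_k      (the byte leaving the window)
   in_byte  = m_{k+N}  (the byte entering the window)
   Returns C_{k+1} = CRC_u(m_{k+1} ... m_{k+N}, v).                    */
Crc crc_roll(Crc crc, Byte out_byte, Byte in_byte) {
  /* F(C_k, m_{k+N}): one shift, one XOR, one table lookup (section 4.6). */
  Crc f = (crc >> 8) ^ F_table[(Byte)(crc ^ in_byte)];
  /* Add G(m_k) to cancel the contribution of the departing byte. */
  return f ^ G_table[out_byte];
}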
4.8 Reading multiple bytes at a time

One straightforward way to speed up the byte-by-byte CRC computation is to read W > 8 bits at once. Unfortunately, this is a path of very rapidly diminishing returns, as the size of the MulByXpowD table grows exponentially with W. From a practical perspective, it is extremely desirable to ensure that the MulByXpowD table fits into the L1 cache (32-64KB), otherwise table entry access latency sharply increases from 3-4 CPU cycles (L1 cache) to 15-20 CPU cycles (L2 cache).

The value of MulByXpowD(v(x)) may be computed iteratively using a smaller table because

    MulByXpowD(v(x)) = v(x)·x^D mod P(x) = CRC_0(v(x), 0)    (11)

and therefore may be computed using formulas (3) and (9) for smaller values of W'.

[Bla93] provided an implementation for W = 32 and W' = 8. Our more general implementation was faster than the byte-by-byte CRC but not substantially: the improvement was in the 20-25% range. However, the result is still important: it demonstrates that reading the input data per se is not a bottleneck.

4.9 Computing a CRC word by word

The value of MulByXpowD(v(x)) may be computed using multiple smaller tables instead of one table. Given that deg(v(x)) < W, v(x) may be represented as a weighted sum of polynomials v_k(x) such that deg(v_k(x)) < B:

    v(x) = Σ_{k=0}^{K−1} v_k(x)·x^{(K−1−k)B},

where K = ⌈W/B⌉ and

    v_k(x) = ⌊v(x) / x^{(K−1−k)B}⌋ mod x^B.

Consequently,

    MulByXpowD(v(x)) = v(x)·x^D mod P(x)
      = (Σ_{k=0}^{K−1} v_k(x)·x^{(K−1−k)B})·x^D mod P(x)
      = Σ_{k=0}^{K−1} (v_k(x)·x^{(K−1−k)B+D} mod P(x))
      = Σ_{k=0}^{K−1} MulWordByXpowD(k, v_k(x)),    (12)

where the values of

    MulWordByXpowD(k, v_k(x)) = v_k(x)·x^{(K−1−k)B+D} mod P(x)    (13)

may be obtained using K precomputed tables. Given that deg(v_k(x)) < B, each table should contain 2^B entries.

A sample implementation of formulas (1), (3), (12), and (13) is given in listing 4 using B = 8 and assuming that W is a multiple of 8.

Crc MulWordByXpowD[sizeof(Word)][256];

Crc CrcWord(Word value) {
  Crc result = 0;
  // Unroll this loop or let the compiler do it.
  for (int byte = 0; byte < sizeof(Word); ++byte) {
    result ^= MulWordByXpowD[byte][(Byte)value];
    value >>= 8;
  }
  return result;
}

Crc CrcWordByWord(Word *data, int n, Crc v, Crc u) {
  Crc crc = v ^ u;
  for (int i = 0; i < n; ++i) {
    Crc WordCrc = CrcWord(crc ^ data[i]);
    if (sizeof(Crc) <= sizeof(Word)) {
      crc = WordCrc;
    } else {
      crc >>= sizeof(Word) * 8;
      crc ^= WordCrc;
    }
  }
  return (crc ^ u);
}

void InitWordTables() {
  for (int byte = 0; byte < sizeof(Word); ++byte) {
    // (K - 1 - k)*B + D = (W/8 - 1 - byte)*8 + D = D - 8 + W - 8*byte,
    // where W = sizeof(Word) * 8 is the word size in bits.
    Crc m = XpowN(D - 8 + sizeof(Word) * 8 - 8 * byte);
    for (int i = 0; i < 256; ++i) {
      MulWordByXpowD[byte][i] = MultiplyUnnormalized(i, 8, m);
    }
  }
}

Listing 4: Computing CRC word by word

CrcWordByWord with W = 64 uses only 2.1-2.2 CPU cycles/byte on modern 64-bit CPUs; our implementation is somewhat faster than the one described in [KB05]. (The variant presented in this paper is more general than the "slicing" described in [KB05]; the sample implementation given in listing 4 does not include one subtle optimization implemented in [KB05] as it was found to be counter-productive.) It solves the problem with data access and, to a lesser degree, allows instruction-level parallelism: in the middle of the unrolled main loop of the CrcWord function the CPU may process multiple bytes in parallel.

However, this solution is still imperfect: the beginning of the computation contends for a single source of data (variable value), and the end of the computation contends for a single destination (variable result). Further improvement requires processing of multiple independent data streams in an interleaved manner so that when computation of one data flow path is stalled the CPU may proceed with another one.
4.10 Processing non-overlapping blocks in parallel

Straightforward pipelining may be achieved by splitting the input message M(x) = M_0(x) ... M_{N−1}(x) into N blocks M_k(x) of approximately the same size and computing the CRC of each block in an interleaved manner, concatenating the CRCs of the individual blocks in the end. A sample implementation is given in listing 5; a sketch of the ChangeStartingValue() helper it uses is given at the end of this section.

// Processes N stripes of StripeWidth words each,
// word by word, in an interleaved manner.
Crc CrcWordByWordBlocks(Word *data, int n, Crc v, Crc u) {
  assert(n % (N * StripeWidth) == 0);
  // Use N local variables instead of the array.
  Crc crc[N];
  // Initialize the CRC value for each stripe.
  crc[0] = v ^ u;
  for (int stripe = 1; stripe < N; ++stripe)
    crc[stripe] = 0 ^ u;
  // Compute each stripe's CRC.
  for (int i = 0; i < StripeWidth; ++i) {
    // Compute multiple CRCs in an interleaved manner.
    Word buf[N];
    for (int stripe = 0; stripe < N; ++stripe) {
      buf[stripe] = crc[stripe] ^ data[i + stripe * StripeWidth];
      if (D > sizeof(Word) * 8) {
        crc[stripe] >>= D - sizeof(Word) * 8;
      } else {
        crc[stripe] = 0;
      }
    }
    for (int byte = 0; byte < sizeof(Word); ++byte) {
      for (int stripe = 0; stripe < N; ++stripe) {
        crc[stripe] ^= MulWordByXpowD[byte][(Byte)buf[stripe]];
        buf[stripe] >>= 8;
      }
    }
  }
  // Combine the stripe CRCs.
  for (int stripe = 1; stripe < N; ++stripe) {
    crc[0] = ChangeStartingValue(crc[stripe], StripeWidth, 0, crc[0]);
  }
  return (crc[0] ^ u);
}

Listing 5: Processing non-overlapping blocks in parallel

A tuned implementation of CrcWordByWordBlocks is capable of processing data at 1.3-1.4 CPU cycles/byte on sufficiently large (64KB and more) inputs, which is noticeably better than the 2.1-2.2 CPU cycles/byte delivered by word-by-word CRC computation. It is a good sign that this is a move in the right direction.

The drawbacks of this approach are obvious: it does not work well with small inputs (the cost of CRC concatenation becomes a bottleneck), and it may be susceptible to false cache collisions caused by cache line aliasing.

If the cost of CRC concatenation were not a problem, cache pressure could be mitigated with the use of very narrow stripes. The stripe-combining loop at the end of listing 5 iteratively computes

    crc_0(x) = crc_k(x) + (crc_0·x^{8S} mod P(x))

for k = 1, ..., N−1, where N and S are the number and the width of the stripes respectively. It may be rearranged as

    crc_0(x) = Σ_{k=0}^{N−1} (crc_{N−1−k}·x^{8kS} mod P(x)).

Explicit multiplication by x^{8kS} may be avoided by moving it into preset tables

    MulWordByXpowD_k(n) = MulWordByXpowD(n)·x^{8kS} mod P(x)

that are used to compute crc'_k(x) = crc_k(x)·x^{8kS}, so that

    crc_0(x) = Σ_{k=0}^{N−1} crc'_k.

Unfortunately, this approach alone does not help because

1. It increases the memory footprint of MulWordByXpowD by a factor of N. Once the cumulative size of the MulWordByXpowD_k tables exceeds the size of the L1 cache (32-64KB), the cost of memory access to multiplication table data increases from 3-4 CPU cycles to 15-20, eliminating all performance gains achieved by reducing the number of table operations.

2. It is still necessary to combine all N values of crc_k into crc_0 at the end of the CRC computation.
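Listing 5 relies on a ChangeStartingValue() helper that the paper does not list. A possible implementation, ours and given here only as a sketch of formula (4), is shown below; it assumes StripeWidth is measured in words of sizeof(Word) bytes and reuses Multiply() and XpowN() from listing 1 and section 4.4.

#include <stdint.h>

typedef uint64_t Crc;
typedef uint64_t Word;

Crc Multiply(Crc a, Crc b);   /* listing 1               */
Crc XpowN(uint64_t n);        /* x^n mod P, section 4.4  */

/* Formula (4): given crc computed over a message of `words` machine words with
   initial value old_v, return the CRC of the same message computed with new_v.
   Over GF(2) the difference (new_v - old_v) is an XOR.                          */
Crc ChangeStartingValue(Crc crc, uint64_t words, Crc old_v, Crc new_v) {
  uint64_t bits = words * sizeof(Word) * 8;   /* message length in bits */
  return crc ^ Multiply(old_v ^ new_v, XpowN(bits));
}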
4.11 Interleaved word-by-word CRC

4.11.1 Parallelizing CRC computation

Assume that the input message M is the concatenation of K groups g_k, and each group g_k is a concatenation of N W-bit words:

    M(x)   = Σ_{k=0}^{K−1} g_k(x)·x^{(K−1−k)NW},
    g_k(x) = Σ_{n=0}^{N−1} m_{k,n}·x^{(N−1−n)W}.

The input message M(x) may be represented as

    M(x) = Σ_{k=0}^{K−1} g_k(x)·x^{(K−1−k)NW}
         = Σ_{k=0}^{K−1} (Σ_{n=0}^{N−1} m_{k,n}·x^{(N−1−n)W})·x^{(K−1−k)NW}
         = Σ_{n=0}^{N−1} (Σ_{k=0}^{K−1} m_{k,n}·x^{(K−1−k)NW})·x^{(N−1−n)W}
         = Σ_{n=0}^{N−1} M_n(x)·x^{(N−1−n)W}    (14)

where

    M_n(x) = Σ_{k=0}^{K−1} m_{k,n}·x^{(K−1−k)NW}.

In other words, M_n is the concatenation of the n-th W-bit word from g_0 followed by (N−1)W zero bits, then the n-th word from g_1 followed by (N−1)W zero bits, etc., ending with the n-th word from the last group g_{K−1}.

Appending (N−1)W zero bits to M_n yields M'_n(x) = M_n(x)·x^{(N−1)W} which may be viewed as the concatenation of K NW-bit groups f_k:

    M'_n(x) = M_n(x)·x^{(N−1)W} = Σ_{k=0}^{K−1} f_{k,n}·x^{(K−1−k)NW},
    f_{k,n}(x) = m_{k,n}(x)·x^{(N−1)W},

so

    M(x) = Σ_{n=0}^{N−1} M_n(x)·x^{(N−1−n)W}
         = Σ_{n=0}^{N−1} M'_n(x)·x^{−(N−1)W}·x^{(N−1−n)W}
         = Σ_{n=0}^{N−1} M'_n(x)·x^{−nW}.    (15)

According to (3), v_{K,n}(x) = CRC_0(M'_n(x), v_{0,n}(x)) may be computed incrementally:

    v_{k+1,n}(x) = CRC_0(f_{k,n}(x), v_{k,n}(x))
      = CRC_0(m_{k,n}(x)·x^{(N−1)W}, v_{k,n}(x))
      = (v_{k,n}(x)·x^{NW} + m_{k,n}(x)·x^{(N−1)W}·x^D) mod P(x)
      = ((v_{k,n}(x)·x^W + m_{k,n}(x)·x^D)·x^{(N−1)W}) mod P(x)    (16)
      = CrcWordN(m_{k,n}(x), v_{k,n}(x)).    (17)

This approach:

1. Creates N independent data flows: computation of v_{k,0}, ..., v_{k,N−1} may be performed truly in parallel. There are no contentions on a single data source or destination like those the word-by-word CRC computation described in section 4.9 suffered from.

2. Accesses input data sequentially. Therefore, the load on the cache subsystem and false cache collisions are minimal. Thus, the performance bottlenecks of the approach described in 4.10 are eliminated.

4.11.2 Combining individual CRCs

Once the values v_{K,n}(x) = CRC_0(M'_n(x), v_{0,n}(x)) are computed, starting with

    v_{0,0} = v(x),
    v_{0,n} = 0,   n ≥ 1,

by definition (7) of CRC_0 and relationship (15),

    CRC_0(M(x), v(x)) = CRC_0(Σ_{n=0}^{N−1} M'_n(x)·x^{−nW}, v(x))
      = Σ_{n=0}^{N−1} CRC_0(M'_n(x)·x^{−nW}, v_{0,n}(x))
      = Σ_{n=0}^{N−1} CRC_0(M'_n(x), v_{0,n}(x))·x^{−nW}
      = Σ_{n=0}^{N−1} v_{K,n}(x)·x^{−nW}.    (18)

Even though this step is performed only once per input message, it still requires (N−1) non-trivial multiplications modulo P(x), negatively affecting the performance on small input messages. Also, (18) uses the multiplicative inverse of x^{nW} modulo P(x), which does not exist when P(x) mod x = 0.

There is a more efficient and elegant solution. Assume that M(x) is followed by one more group g_K(x).
Then

    CRC_0(M(x)·x^{NW} + g_K(x), v(x))
      = CRC_0(g_K(x), CRC_0(M(x), v(x)))
      = (CRC_0(M(x), v(x))·x^{NW} + g_K(x)·x^D) mod P(x)
      = (x^{NW}·Σ_{n=0}^{N−1} v_{K,n}(x)·x^{−nW} + x^D·Σ_{n=0}^{N−1} m_{K,n}(x)·x^{(N−1−n)W}) mod P(x)
      = (x^W·Σ_{n=0}^{N−1} v_{K,n}(x)·x^{(N−1−n)W} + x^D·Σ_{n=0}^{N−1} m_{K,n}(x)·x^{(N−1−n)W}) mod P(x)
      = Σ_{n=0}^{N−1} (v_{K,n}(x)·x^W + m_{K,n}(x)·x^D)·x^{(N−1−n)W} mod P(x)    (19)
      = Σ_{n=0}^{N−1} CRC_0(m_{K,n}(x), v_{K,n}(x))·x^{(N−1−n)W} mod P(x).    (20)

(20) may be implemented using formula (13) by setting v'_0 = 0, and then for n = 0, ..., N−1 computing

    v'_{n+1}(x) = ((v'_n(x) + v_{K,n})·x^W + m_{K,n}·x^D) mod P(x)
                = CRC_0(m_{K,n}, v'_n(x) + v_{K,n}).

Alternatively, this step may be performed using the less efficient technique described in section 4.8.

4.11.3 Efficient computation of individual CRCs

Given v(x) with deg(v(x)) < D and m(x) with deg(m(x)) < W,

    CrcWordN(m(x), v(x)) = ((v(x)·x^W + m(x)·x^D)·x^{(N−1)W}) mod P(x)

may be implemented efficiently utilizing the techniques described in sections 4.6, 4.8, and 4.9. When D ≤ W,

    CrcWordN(m(x), v(x)) = ((v(x)·x^W + m(x)·x^D)·x^{(N−1)W}) mod P(x)
                         = ((v(x)·x^{W−D} + m(x))·x^{(N−1)W+D}) mod P(x),

and may be implemented using the table-driven multiplication as described in (13), except that the operand is multiplied by x^{(N−1)W+D} instead of x^D. As in (8), explicit multiplication of v(x) by x^{W−D} is not required since the D-normalized representation of v(x), viewed as a W-normalized representation, is equal to (v(x)·x^{W−D}).

Using the same technique as in formula (9), for D ≥ W let

    v_H(x) = ⌊v(x) / x^{D−W}⌋,      deg(v_H(x)) < W,
    v_L(x) = v(x) mod x^{D−W},      deg(v_L(x)) < D − W,

so that v(x) = v_L(x) + v_H(x)·x^{D−W}. Then,

    CrcWordN(m(x), v(x))
      = ((v(x)·x^W + m(x)·x^D)·x^{(N−1)W}) mod P(x)
      = (((v_L(x) + v_H(x)·x^{D−W})·x^W + m(x)·x^D)·x^{(N−1)W}) mod P(x)
      = ((v_L(x)·x^W + (v_H(x) + m(x))·x^D)·x^{(N−1)W}) mod P(x)
      = ((v_H(x) + m(x))·x^{(N−1)W+D} mod P(x)) + ((v_L(x)·x^W)·x^{(N−1)W} mod P(x)).    (21)

Since deg(v_H(x) + m(x)) < W, the first summand of CrcWordN(m(x), v(x)),

    ((v_H(x) + m(x))·x^{(N−1)W+D} mod P(x)),

may be computed using the table-driven multiplication technique described in (13), except that the operand is multiplied by x^{D+(N−1)W} instead of x^D.

Computation of the second summand of CrcWordN(m(x), v(x)),

    ((v_L(x)·x^W)·x^{(N−1)W} mod P(x)),

is somewhat less intuitive. Since deg(v_L(x)) < D − W, (v_L(x)·x^W) mod P(x) = (v_L(x)·x^W), and it may be computed by shifting v_L(x) by W bits.
The additional multiplication by x^{(N−1)W} is accomplished by adding (v_L(x)·x^W), produced at step n < N−1 of the algorithm described by formula (17), to the value of v_{k,n+1}(x), which will be additionally multiplied by x^{(N−1)W} as shown in formula (16).

For n = N−1, the value of (v_L(x)·x^W) should be added to the value of v_{k+1,n'}(x) where n' = 0. For k < K, it will be multiplied by x^{(N−1)W} during the next round of parallel computation as shown in (16). For k = K, v_{k+1,n'}(x) will be multiplied by x^{(N−1)W} during CRC concatenation as shown in (19), since n' = 0.

Crc MulInterleavedWordByXpowD[sizeof(Word)][256];

Crc CrcInterleavedWordByWord(Word *data, int blocks, Crc v, Crc u) {
  Crc crc[N + 1] = {0};
  crc[0] = v ^ u;
  int i;
  for (i = 0; i < N * (blocks - 1); i += N) {
    Word buffer[N];
    // Load the next N words and move overflow bits into the "next" word.
    for (int n = 0; n < N; ++n) {
      buffer[n] = crc[n] ^ data[i + n];
      if (D > sizeof(Word) * 8)
        crc[n + 1] ^= crc[n] >> (sizeof(Word) * 8);
      crc[n] = 0;
    }
    // Compute the interleaved word-by-word CRC.
    for (int byte = 0; byte < sizeof(Word); ++byte) {
      for (int n = 0; n < N; ++n) {
        crc[n] ^= MulInterleavedWordByXpowD[byte][(Byte)buffer[n]];
        buffer[n] >>= 8;
      }
    }
    // Combine crc[0] with the delayed overflow bits.
    crc[0] ^= crc[N];
    crc[N] = 0;
  }
  // Process the last N words and combine the CRCs.
  for (int n = 0; n < N; ++n) {
    if (n != 0) crc[0] ^= crc[n];
    Crc WordCrc = CrcWord(crc[0] ^ data[i + n]);
    if (D > sizeof(Word) * 8) {
      crc[0] >>= D - sizeof(Word) * 8;
      crc[0] ^= WordCrc;
    } else {
      crc[0] = WordCrc;
    }
  }
  return (crc[0] ^ u);
}

void InitInterleavedWordTables(void) {
  for (int byte = 0; byte < sizeof(Word); ++byte) {
    Crc m = XpowN(D - 8 + N * sizeof(Word) * 8 - 8 * byte);
    for (int i = 0; i < 256; ++i) {
      MulInterleavedWordByXpowD[byte][i] = MultiplyUnnormalized(i, 8, m);
    }
  }
}

Listing 6: Interleaved, word by word CRC computation

5 Experimental results

The tests were performed using an Intel Q9650 3.0GHz CPU, DDR2-800 memory with 4-4-4-12 timings, and a motherboard with an Intel P45 chipset.

5.1 Testing methodology

All tests were performed using random input data over various block sizes. The code for all evaluated algorithms was heavily optimized. Tests were performed on both aligned and non-aligned input data to ensure that misaligned inputs do not carry a performance penalty. CRC tables were aligned on a 256-byte boundary.

Tests were performed with warm data and warm CRC tables: as shown in [KB05], the footprint of the CRC tables, as long as they fit into the L1 cache, is not a major contributor to the performance.

Performance was measured in the number of CPU cycles per byte of input data: apparently, the performance of CRC computation is bounded by the performance of the CPU and its L1 cache latency.
Spot testing of a few other Intel and AMD CPU models showed little variation in performance measured in CPU cycles per byte despite substantial differences in CPU clock frequencies.

To minimize performance variations caused by interference with the OS and other applications (context switches, CPU migrations, CPU cache flushes, memory bus interference from other processes, etc.), the test applications were run at high priority, each test was executed multiple times, and the minimum time was measured. That allowed the tests to achieve repeatability within ±1%.

5.2 Compiler comparison

Despite CRC code being rather straightforward, there were surprises (see tables 4 and 5).

On the 64-bit AMD64 platform, the Microsoft CL compiler (version 15.00.30729) consistently and noticeably generated the fastest code that used general-purpose integer arithmetic. For instance, CRC-64 and CRC-32 code generated by CL was 1.24 times faster than the code generated by Intel's ICL 11.10.051, and 1.74 times faster than the code generated by GCC 4.5.0. Tuned, hand-written inline assembler code for CRC-32 and CRC-64 for GCC was as fast as the code generated by CL.

When it comes to arithmetic with the use of SSE2 intrinsic functions on the 64-bit AMD64 platform, the code generated by GCC 4.5.0 consistently outperformed the code generated by the Microsoft and Intel compilers, by factors of 1.15 and 1.30 respectively. However, earlier versions of GCC did not produce efficient SSE2 code either. For that reason, pre-4.5.0 versions of GCC use hand-written inline assembler code which was as fast as the code generated by GCC 4.5.0.

On the 32-bit X86 platform, neither compiler was able to generate efficient code (most likely because the compilers could not overcome the scarcity of general-purpose registers). Performance of the code that used MMX intrinsic functions was better but still not as good as the hand-written assembler versions, which were provided for all compilers.

The fastest code for 128-bit CRC on the X86 platform was generated by GCC 4.5.0.

5.3 Choice of interleave level

The number of data streams processed by the interleaved, word-by-word CRC computation described in section 4.11 should matter. Too few means underutilization of the available ALUs. Too many will increase the length of the main loop and stress the instruction decoders, and may cause spilling of registers containing hot data (interleaved processing of N words of data uses at least (2N + 2) registers).

As table 3 shows, the optimal number of interleaved data streams on modern Intel and AMD CPUs for integer arithmetic is either 3 or 4 (likely because they all have exactly 3 ALUs). However, for SSE2 arithmetic on the AMD64 platform the optimal number of streams is 6 (3 on X86), which is a quite counter-intuitive result as it does not correlate with the number of available ALUs. The good old performance mantra "you need to measure" still applies.

5.4 Performance of CRC algorithms

Average performance of the best variants of the CRC algorithms for the 64-bit AMD64 and 32-bit X86 platforms processing 1KB, 2KB, ..., 1MB inputs is given in tables 1 and 2 respectively.
The proposed interleaved multiword CRC algorithm is 1.7-2.0 times faster than the current state of the art "slicing".

As demonstrated in tables 6 and 7, the interleaved word-by-word CRC described in section 4.11, running at 1.2 CPU cycles/byte, is 1.8 times faster than the 2.1 CPU cycles/byte achieved by the current state-of-the-art word-by-word CRC algorithm ("slicing") described in [KB05].

On the 64-bit AMD64 platform, the best performance was achieved using 64-bit reads and 64-bit tables for all variants of N-bit CRC for N ≤ 64. In particular, tables 6 and 7 clearly show that the performance of 32-bit and 64-bit CRCs is nearly identical. Consequently, there is no reason to favor CRC-32 over CRC-64 for performance reasons.

The use of MMX on the 32-bit X86 platform allowed the use of 64-bit tables and 64-bit reads, achieving 1.3 CPU cycles/byte. Neither compiler generated efficient code using MMX intrinsic functions, so inline assembler was used.

With the use of SSE2 intrinsics on the AMD64 architecture, a 128-bit CRC may be computed at 1.7 CPU cycles/byte using the new algorithm (see table 9), compared with 2.9 CPU cycles/byte achieved by word-by-word CRC computation (see table 8). On the 32-bit X86 architecture, the use of SSE2 intrinsics and GCC 4.5.0 allowed the computation of a 128-bit CRC at 2.1 CPU cycles/byte, compared with 4.2 CPU cycles/byte delivered by the word-by-word algorithm.

Given that MD5 computation takes 6.8-7.1 CPU cycles/byte and SHA-1 takes 7.6-7.9 CPU cycles per byte, CRCs are still the algorithm of choice for data corruption detection.

References

[Bla93] Richard Black. Fast CRC32 in software. http://www.cl.cam.ac.uk/research/srg/bluebook/21/crc/crc.html, 1993.

[BW01] Florian Braun and Marcel Waldvogel. Fast incremental CRC updates for IP over ATM networks. Technical Report WUCS-01-08, Washington University in St. Louis, April 2001. Available at http://marcel.wanda.ch/Publications/braun01fast-techreport.pdf.

[GGO+10] Vinodh Gopal, Jim Guilford, Erdinc Ozturk, Gil Wolrich, Wajdi Feghali, Martin Dixon, and Deniz Karakoyunlu. Fast CRC computation for iSCSI polynomial using CRC32 instruction. Intel White Paper 323405, February 2010. Available at http://download.intel.com/design/intarch/papers/323405.pdf.

[Has01] M. A. Hasan. Efficient computation of multiplicative inverses for cryptographic applications. In ARITH '01: Proceedings of the 15th IEEE Symposium on Computer Arithmetic, page 66, Washington, DC, USA, 2001. IEEE Computer Society.

[Hil79] John R. Hill. A table driven approach to cyclic redundancy check calculations. SIGCOMM Comput. Commun. Rev., 9(2):40-60, 1979.

[JK02] H. Michael Ji and Earl Killian. Fast parallel CRC algorithm and implementation on a configurable processor. In ICC, volume 3, pages 1813-1817, April 2002.

[KB05] Michael E. Kounavis and Frank L. Berry. A systematic approach to building high performance, software-based, CRC generators. http://www.intel.com/technology/comms/perfnet/download/CRC_generators.pdf, 2005.

[PB61] W. W. Peterson and D. T. Brown. Cyclic codes for error detection. In IRE(1), volume 49, pages 228-235, January 1961.

[Per83] Aram Perez. Byte-wise CRC calculations. IEEE Micro, 3(3):40-50, 1983.

[Sar88] Dilip V. Sarwate. Computation of cyclic redundancy checks via table look-up. Commun. ACM, 31(8):1008-1013, 1988.

[Zem90] Pavel Zemtsov. Proprietary copy protection system. Personal communication, August 1990.
Table 1: CRC performance, AMD64 platform

  Method    Slicing [1]   Multiword [2]   Improvement
  CRC-32    2.08 [3]      1.16 [4,5]      1.79
  CRC-64    2.09 [3]      1.16 [4,5]      1.79
  CRC-128   2.91 [4]      1.68 [4,6]      1.73

Table 2: CRC performance, X86 platform

  Method    Slicing [1]   Multiword [2]   Improvement
  CRC-32    2.52 [3]      1.29 [3,7]      1.96
  CRC-64    3.28 [3]      1.29 [3,7]      2.55
  CRC-128   4.17 [4]      2.10 [4,8]      1.98

Average number of CPU cycles per byte processing 1KB, 2KB, ..., 1MB inputs. Warm data, warm tables.
[1] "Slicing" implements the algorithm described in section 4.9.
[2] "Multiword/N" implements the algorithm described in section 4.11, processing N data streams in parallel in an interleaved manner.
[3] Microsoft CL 15.00.30729 compiler, "-O2" flag.
[4] GCC 4.5.0 compiler, "-O3" flag.
[5] Multiword/N = 4, hand-written inline assembler.
[6] Multiword/N = 6, C++.
[7] Multiword/N = 4, hand-written MMX inline assembler.
[8] Multiword/N = 3, C++.

Table 3: Interleaved multiword CRC: choosing the number of stripes N

  CRC           Platform   N=2    N=3    N=4    N=5    N=6    N=7    N=8
  CRC-64 [9]    AMD64      1.42   1.23   1.17   1.46   2.08   2.59   2.73
  CRC-128 [10]  AMD64      2.07   1.84   1.76   1.70   1.68   1.75   1.79
  CRC-128 [10]  X86        2.56   2.10   2.46   2.61   2.52   2.62   2.57

Average number of CPU cycles per byte processing 1KB, 2KB, ..., 1MB inputs. Interleaved word-by-word CRC computation as described in section 4.11. Warm data, warm tables.
[9] Microsoft CL 15.00.30729 compiler, AMD64 platform, C++ code.
[10] GCC 4.5.0 compiler, AMD64 platform, C++ code.

Table 4: Compiler comparison: Multiword/4 64-bit CRC

  Input size   64     256    1K     4K     16K    64K    256K   1M
  GCC/C++      2.30   2.10   2.04   2.03   2.03   2.05   2.05   2.07
  ICL          2.19   1.62   1.49   1.46   1.45   1.45   1.46   1.46
  CL           1.75   1.29   1.18   1.15   1.17   1.18   1.18   1.18
  GCC/ASM      1.65   1.26   1.17   1.15   1.16   1.17   1.17   1.17

Table 5: Compiler comparison: Multiword/6 128-bit CRC

  Input size   64     256    1K     4K     16K    64K    256K   1M
  CL           4.08   2.94   2.64   2.62   2.59   2.59   2.59   2.62
  ICL          3.48   2.50   2.25   2.08   2.02   2.00   2.00   2.02
  GCC          2.90   1.93   1.85   1.72   1.65   1.63   1.63   1.63

Number of CPU cycles per byte. 128-bit CRC (CRC-128/IEEE polynomial) and 64-bit CRC (CRC-64-ECMA-182 polynomial) respectively. 64-bit platform, 64-bit reads. Warm data, warm tables. Microsoft CL 15.00.30729 compiler was used with the "-O2" flag. Intel ICL 11.10.051 and GCC 4.5.0 were used with the "-O3" flag.

(Bar charts visualizing the compiler comparison are omitted; the data is given in tables 4 and 5.)

"Multiword/N" implements the algorithm described in section 4.11, processing N data streams in parallel in an interleaved manner.
Table 6: CRC-32 performance

  Input size    64     256    1K     4K     16K    64K    256K   1M
  Sarwate       6.61   6.62   6.70   6.68   6.67   6.66   6.67   6.75
  Black         5.44   5.46   5.47   5.48   5.47   5.46   5.47   5.53
  Slicing       2.15   2.10   2.09   2.09   2.08   2.08   2.08   2.10
  Blockword/3   2.27   2.14   2.15   2.13   2.13   1.55   1.39   1.31
  Multiword/4   1.75   1.29   1.18   1.16   1.17   1.18   1.18   1.18

Number of CPU cycles per byte. 32-bit CRC (CRC-32C polynomial), 64-bit platform, 64-bit tables, 64-bit reads (except Sarwate). Microsoft CL 15.00.30729 compiler. Warm data, warm tables.

"Sarwate" implements the algorithm described in section 4.6.
"Black" implements the algorithm described in section 4.8.
"Slicing" implements the algorithm described in section 4.9.
"Blockword/3" implements the algorithm described in section 4.10 with 3 stripes of 15,376 bytes each.
"Multiword/4" implements the algorithm described in section 4.11, processing 4 data streams in parallel in an interleaved manner.

Table 7: CRC-64 performance

  Input size    64     256    1K     4K     16K    64K    256K   1M
  Sarwate       6.61   6.62   6.70   6.68   6.67   6.65   6.66   6.75
  Black         5.44   5.46   5.47   5.47   5.47   5.47   5.47   5.53
  Slicing       2.16   2.08   2.09   2.10   2.08   2.08   2.08   2.09
  Blockword/3   2.27   2.14   2.15   2.13   2.13   1.59   1.41   1.33
  Multiword/4   1.75   1.29   1.18   1.15   1.17   1.18   1.18   1.18

Number of CPU cycles per byte. 64-bit CRC (CRC-64-ECMA-182 polynomial), 64-bit platform, 64-bit tables, 64-bit reads (except Sarwate). Microsoft CL 15.00.30729 compiler. Warm data, warm tables. The algorithm legend is the same as for table 6.

Table 8: CRC-128 performance: Slicing CRC

  Input size   64     256    1K     4K     16K    64K    256K   1M
  CL/SSE2      4.02   3.81   4.01   4.05   4.13   4.18   4.20   4.24
  ICL/SSE2     3.40   3.24   3.57   3.59   3.68   3.72   3.75   3.81
  GCC/UINT     3.45   3.24   3.36   3.48   3.61   3.64   3.67   3.72
  GCC/SSE2     2.67   2.48   2.63   2.79   2.97   2.99   2.99   3.03

Table 9: CRC-128 performance: Multiword CRC

  Input size     64     256    1K     4K     16K    64K    256K   1M
  GCC/UINT/3     3.83   3.02   3.04   3.01   3.00   2.98   2.98   3.00
  CL/SSE2/5      4.08   2.56   2.43   2.25   2.20   2.19   2.18   2.20
  ICL/SSE2/5     3.52   2.33   2.23   2.05   2.00   1.99   1.99   2.01
  GCC/SSE2/6     2.90   1.93   1.85   1.72   1.65   1.63   1.63   1.63

Number of CPU cycles per byte. 128-bit CRC (CRC-128/IEEE polynomial), 64-bit platform, 128-bit tables, 64-bit reads. Warm data, warm tables. All compilers were tested using SSE2 intrinsics (/SSE2 variants). GCC was also tested using 128-bit integers provided by the compiler (GCC/UINT).

"Slicing" implements the algorithm described in section 4.9. "Multiword/N" implements the algorithm described in section 4.11, processing N data streams in parallel in an interleaved manner. The optimal (for the given compiler) value of N was used.

(The bar charts accompanying tables 6, 7, and 9 are omitted; the underlying data is given in the tables.)
[ { "data": " Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nPentest-Report Vitess 02.2019\nCure53, Dr.-Ing. M. Heiderich, M. Wege, MSc. N. Krein, MSc. D. Weißer, J. Larsson\nIndex\nIntroduction\nScope\nTest Methodology\nPhase 1. Manual Code Auditing\nPhase 2. Code-Assisted Penetration Testing\nMiscellaneous Issues\nVIT-01-001 MySQL: Comparison of Auth Token allows timing Attacks (Info)\nVIT-01-002 MySQL: Timing attacks due to plain-text password auth (Low)\nVIT-01-003 PII: Not all SQL values covered by SQL redaction (Low)\nConclusions\nIntroduction\n“Vitess is a database clustering system for horizontal scaling of MySQL”\nFrom https://vitess.io/\nThis report documents the results of a security assessment targeting the Vitess software\ndatabase scaler. Funded by the CNCF / The Linux Foundation, this project was carried\nout by Cure53 in February 2019 and revealed only three miscellaneous findings.\nIn terms of resources, the test was completed by six members of the Cure53 team who\nworked within a time budget of eighteen days. The testers are considered very\nexperienced in their respective fields and have considerable expertise in regard to\nsystem complexity, cloud infrastructure, source code auditing, operating system\ninteraction, low-level protocol analysis and multi-angled penetration testing.\nPrior to the assessment, a CNCF-typical setup was requested by the testers and\nprovided by the development team. Besides furnishing Cure53 with a Kubernetes-based\ncluster, locally installed systems were also used for testing. Access to all relevant code\nand documentation was granted. While the first project meeting provided the basis for\nthe audit, a more ad-hoc kick-off meeting ensured that no major hurdles emerged. A\nCure53, Berlin · 03/08/19 1/9 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \ndedicated instant messaging channel was used for arising questions and further\ninspiration for the test.\nAn initial assessment of the interfaces and the system architecture, supported also by\nadditional exchange with the development team, revealed a rather limited attack surface.\nThis observation was later confirmed as the subsequent phases of the test ensued.\nWhile the results of this assessment are few and far between and may suggest some\nkind of test limitations, they in fact prove that the Vitess team delivers on the security\npromises they make. In Cure53’s view, there is a clear intention and follow-through on\nproviding a secure system for scaling MySQL databases. This was achieved by keeping\nthe attack surface minimal and selecting the language suited for this implementation.\nThe auditors managed to reach wide-spanning coverage of all aspects pertinent to the\nmain repository of the Vitess software system. The most likely avenues for exploitation\nwere chosen and verified for resilience.\nIn the following sections, the report first defines the scope of the test and then moves on\nto explaining the employed test methodology. Subsequent phases and details relevant\nfor the test are covered next and clarify which aspects were investigated during this\nFebruary 2019 assessment. Later in the document, each of the individual findings is\ndiscussed, with technical backdrop, illustrations of wider circumstances, and examples\nwith code snippets. 
Finally, this document ends with some broader conclusions and a\ngeneral impression that the Cure53 team gained about the Vitess scope system under\nscrutiny.\nScope\n•Vitess\n◦The publicly available main repository at https://github.com/vitessio/vitess was used \nas the codebase to be verified.\n▪branch master commit 092479406b27ae61a8fcd146a0e08af2d51a7245\n◦Furthermore, the minimal reference client https://github.com/vitessio/messages was \nused as additional illustration of use-cases.\n▪branch master commit 7d2ac2189573a7d26cf0f42e17df749673e3d16f\n◦The testers received unencumbered access to a Kubernetes test cluster hosted on \nAmazon Web Services, which was provided as a reference of an installation typical \nfor a general Vitess deployment.\nCure53, Berlin · 03/08/19 2/9 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nTest Methodology\nThis section describes the methodology that was used during this source code audit and\npenetration tests. The test was divided into two phases. Each phase had goals that were\nclosely linked to the areas in scope.\nThe initial phase (Phase 1) mostly comprised manual source code reviews, in particular\nin terms of the API endpoints, input handlers and parsers. The review carried out during\nPhase 1 aimed at spotting insecure code constructs. These were marked whenever a\npotential capacity for leading to buffer corruption, information leakages and other similar\nflaws has been identified.\nThe secondary phase (Phase 2) of the test was dedicated to classic penetration testing.\nAt this stage, it was verified whether the security promises made by Vitess in fact hold\nagainst real-life attack situations and malicious adversaries. This included watching out\nfor disclosure of personally identifiable information (PII), particularly in rarely\nencountered error cases. Additionally, the deployment infrastructure was further\ninvestigated for generalizable problems in their instrumentation of the Kubernetes\nenvironment.\nPhase 1. Manual Code Auditing\nThe following list of items presents the noteworthy steps undertaken during the first part\nof the test, which entailed the manual code audit of the sources of the Vitess software in\nscope. This is to underline that, in spite of the almost nonexistent findings, substantial\nthoroughness was achieved and considerable efforts have gone into this test. The\ncompleted tasks are listed next. Note that a given realm yielded no results unless\notherwise indicated with a specific link to a finding.\n•A comprehensive list of all accessible API endpoints was enumerated and\nchecked for visible defects. This entailed the functionality exposed by vtlctld and\nthe same functions that are also reachable via vtctlclient.\n•Despite this being only an administrative functionality, a typical example for such\nfunctions interacting with the file system would be ExecuteHook. This item was\nanalyzed in depth to see if it is by any means possible to inject API commands.\nThe overarching goal was clearly to achieve injection of the OS-level commands.\nThe filter implemented for this particular endpoint protects the function sufficiently\nand no path traversal instructions can be submitted via the hook’s name.\n•The monitor and debug web interfaces were analyzed for common vulnerabilities\nlike SQL injection or XSS. 
However, in all encountered cases the user-input was\nfound to be correctly sanitized, in particular due to the Angular framework’s\nproper handling of parameter-supplied values.\nCure53, Berlin · 03/08/19 3/9 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \n•The cryptographic and authentication-related aspects were analyzed for potential\ngeneral bypasses but no flaws allowing for such circumvention were found.\n•A potential timing issue pointed out by the development team was investigated\nin-depth but revealed no readily available exploitation paths. The reason behind\nthe secure premise is that the attacker would need to obtain the hashes\ncontained in the user-table prior to the attack. A minor issue was filed as VIT-01-\n001 to describe the exact circumstances.\n•As requested, plenty of additional effort was invested into discovering leaks of\npersonally-identifiable information, for example during the extensive logging of\nexecuted queries. The redactor was checked for flaws allowing for the exfiltration\nof unredacted or incompletely redacted values. The minor issue was filed (see\nVIT-01-003) but the real-world impact, as with most information leak issues in\ngeneral, would need to be considered as low.\n•The configuration of the Kubernetes cluster deployment was investigated for\ncommon problems like AllowPrivilegeEscalation , the application of name-space\nrules in the network policies, the running of pods in privileged mode, and the\ncharacteristics of the DefaultServiceAccounts, ContainerSecurityContext and\nRunAsNonRoot’s usage were confirmed as secure, either because of being\ncorrect or by virtue of inapplicability.\n•Furthermore, the used secret stores were analyzed for potentially being reused\nfrom across other contexts but no encryption prone to disclosure was found in\nany of the stores.\nPhase 2. Code-Assisted Penetration Testing\nA list of items below presents the noteworthy steps undertaken during the second phase\nof the test, which encompassed code-assisted penetration testing against the Vitess\nsystem in scope. Given that the manual source code audit did not yield an overly large\nnumber of findings, the second approach was added as means to maximize the test\ncoverage. As for specific tasks taken on to enrich this Phase, these can be found listed\nand discussed in the ensuing list.\n•Despite of the Kubernetes cluster provided by the development team, several\nother, local test installations were built and deployed; one type concerned simple\n3-tablet versions that were crafted via the provided docker images, another type\nwas built along the lines of the minikube-instructions. This was done to gain\nbetter understanding of the general deployment structure and the integration with\nthe core components.\n•The initially enumerated application endpoints were tested for potential input\nmanipulation, i.e. path traversal and OS-level command injection were attempted\nfor every function that interacted with the file system.\nCure53, Berlin · 03/08/19 4/9 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \n•Additional testing for filter circumvention did not uncover any methods to\nsuccessfully achieve Remote Code Execution. 
All enumerated endpoints were\ninvestigated for bypasses of the described protections without success in a form\nof a compromise.\n•Quite a few of the mentioned endpoints allow execution of the SQL statements\neither as the Database administrato r or the Application user . The testers sought\nto escalate the privilege level in a futile attempt to execute commands as root.\nThe unachieved goal was to have file system-level capabilities and turn them into\ndirect file manipulation.\n•After checking all user-exposed endpoints, the application-level SQL parser was\ninvestigated for robustness. The parser enables features that are not directly\npresent in the MySQL and therefore slightly extend the capacities of the\ndatabase. In particular, the possibility of breaking out of strings by providing\nlegitimately escaped data was attempted but no vulnerabilities could be spotted.\n•Interesting behaviors, such as the comment directives, were investigated. In this\nrealm, it is possible to supply additional Vitess runtime options during the\nexecution of the SQL statements via special ‘/*vt+’ directives; nothing particularly\nwrong with those extensions was uncovered, since it does not seem possible to\ninject such comment-style options in undesirable locations like strings and\nsimilar.\n•The web interface was probed for XSS and other general web application flaws\nwithout any weaknesses being discovered. Additional path traversal was\nattempted in the topology browser and, while the application of the ‘../’ path\nfragment does have an effect, it was found to be impossible to break out of the\nbase directory.\n•The network communication between the different Kubernetes application pods\nwas analyzed in order to find potential logical flaws. Nothing prone to being\nleveraged could be identified.\n•The runtime behavior of the different components was probed from a perspective\nof the services. In focus were Denial-of-Service and similar resource-depletion\nscenarios.\n•The deployed TLS configurations were analyzed for common misconfigurations,\nagain to no avail.\nCure53, Berlin · 03/08/19 5/9 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nMiscellaneous Issues\nThis section covers those noteworthy findings that did not lead to an exploit but might aid\nan attacker in achieving their malicious goals in the future. Most of these results are\nvulnerable code snippets that did not provide an easy way to be called. Conclusively,\nwhile a vulnerability is present, an exploit might not always be possible.\nVIT-01-001 MySQL: Comparison of Auth Token allows timing Attacks (Info)\nOne of the discovered issues allows an attacker to perform a timing attack against the\nauthentication of the Vitess server. This attack requires an adversary who is in\npossession of the hashed MySQL password, e.g. by obtaining it beforehand from the\nmysql.user table. Thus, this issue has a rather low impact.\nThe MySQL authentication uses hashing and a salt in order to prevent authenticating\nwith only a hash or replaying a previously recorded authentication request. The\nauthentication protocol can be consulted next. 
\nMySQL authentication protocol:\nServer stores: plain pw OR sha1(sha1(pw))\nServer -> Client: salt (randomly generated for each connection attempt)\nClient -> Server: sha1(pw) ^ sha1(salt + sha1(sha1(pw)))\nServer computes: sha1(client_response ^ sha1(salt + sha1(sha1(pw)))\nServer compares: generated_hash == stored_hash\nIn case the password is stored as plain-text, Vitess spares itself the final SHA1 operation\non the server-side and compares the client's authentication token directly with its own\nversion of the scrambled password, rendering the attack described below possible.\nIn this scenario, if the password hash is not known to an attacker, a timing attack is not\npossible. This is because the salt is never reused and causes unpredictable changes. If\nthe attacker has retrieved the stored password hash (e.g. via SQL injection), a timing\nattack can be performed by xoring the tested bit with sha1(salt + sha1(sha1(pw)). By\nexploiting the timing -unsafe comparison, an attacker would be able to retrieve sha1(pw),\nwhich is sufficient for authenticating to the server. The relevant code is displayed in the\nfollowing code snippet.\nAffected file:\nvitess/go/mysql/auth_server_static.go\nAffected code:\ncomputedAuthResponse := ScramblePassword(salt, []byte(entry.Password))\n// Validate the password.\nCure53, Berlin · 03/08/19 6/9 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nif matchSourceHost(remoteAddr, entry.SourceHost) && bytes.Compare(authResponse,\ncomputedAuthResponse) == 0 {\nreturn &StaticUserData{entry.UserData, entry.Groups}, nil\n}\nAs the attacker requires the double SHA1 hash of the password and the server has to\nstore the password as plain-text, attacks where this issue is of relevance are not likely.\nHowever, it is recommended to perform a timing -safe comparison instead. Go ’s\nConstantTimeCompare can be used in this realm.\nVIT-01-002 MySQL: Timing attacks due to plain-text password auth (Low)\nNext to the authentication schemes mentioned above, Vitess also implements\nMysqlDialog, which makes use of plain-text password comparison from both the server’s\nand the client’s perspectives. The problem is similar to the one mentioned in VIT-01-001\nbecause the method of comparing both passwords is incorrectly implemented. As such,\nthe method allows timing attacks due to the ‘==’ operator’s behavior of aborting early if a\nmatch between the characters is not found. The relevant code is displayed below.\nAffected File:\nvitess/go/mysql/auth_server_static.go\nAffected Code:\nfunc (a *AuthServerStatic) Negotiate(c *Conn, user string, remoteAddr net.Addr) \n(Getter, error) {\n[...]\nfor _, entry := range entries {\n// Validate the password.\nif matchSourceHost(remoteAddr, entry.SourceHost)\n&& entry.Password == password {\nreturn &StaticUserData{entry.UserData, entry.Groups}, nil\n}\nAs in the previously mentioned issue, it is recommended to switch to a timing-safe\nvariant of comparing strings. Using Go’s ConstantTimeCompare in the crypto/subtle’s\nmodule is advised.\nCure53, Berlin · 03/08/19 7/9 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nVIT-01-003 PII: Not all SQL values covered by SQL redaction (Low)\nDuring an investigation of the Vitess logging mechanism, it was discovered that\npersonally identifying information is not properly stripped from logged queries. 
VIT-01-003 PII: Not all SQL values covered by SQL redaction (Low)
During an investigation of the Vitess logging mechanism, it was discovered that personally identifying information is not properly stripped from logged queries. Note that this PII potentially resides in string values of SQL statements. This might lead to the leakage of sensitive user-data whenever the debug interfaces of Vitess are opened. In MySQL, strings are usually encapsulated by double or single quotes, but it is also possible to write them as hexadecimal strings prefixed by 0x. As the variable deduplication ignores strings with the 0x prefix, those values are not redacted. This behavior can be observed in the following example.
Code example:
sql := \"select a,b,c from t where x = 1234 and y = 1234 and z = 'apple'
and foo = 0x1337\"
redactedSQL, err := RedactSQLQuery(sql)
if err != nil {
 t.Fatalf(\"redacting sql failed: %v\", err)
}
fmt.Printf(\"redaction: %v\", redactedSQL)
Output:
select a, b, c from t where x = :redacted1 and y = :redacted1 and z = :redacted2 and foo = 0x1337
Expected output:
select a, b, c from t where x = :redacted1 and y = :redacted1 and z = :redacted2 and foo = :redacted3
Furthermore, it was discovered that large numbers and boolean values are also not redacted. While this is not as crucial as having unredacted strings, certain scenarios where logging such values might be undesirable can be envisioned.
Example queries:
select * from t where x = 11111111111111111112;
select * from t where x = true;
It is recommended to include the affected data types in the parameter deduplication. This will help prevent the leakage of data in the logged queries.
Conclusions
The results of this Cure53 assessment funded by CNCF / The Linux Foundation certify that the Vitess database scaler is secure and robust. This very good outcome is achieved by limiting the attack surface, taking appropriate care of user-supplied input with security-driven best practices, as well as - to a certain extent - the usage of the Go language ecosystem. A team of five Cure53 testers investigated the software system during a budgeted period of 18 days in February 2019. All tasks were completed in accordance with the specified testing methodology, namely pure code auditing in Phase 1 and source-code-assisted penetration testing in Phase 2. The main source code repository examined during the audit pertained to Vitess itself, while the rather minimal sample client called messages was employed as a use-case reference.
The scope of the test was well-defined but not particularly extensive. However, the actual threat model was left mostly undefined until the commencement of the assessment, becoming increasingly more precise as the test progressed. The communications between the auditors and the development team were fluent and incurred no delays. After introductory discussion via mail and video conferencing, the ensuing exchanges took place in a dedicated Slack channel. To give a more realistic assessment of the real-world deployment, the security of the provided Kubernetes cluster was scrutinized as well; no wider-ranging problems impacted this scope item. Further, Cure53 can attest that the Vitess code is cleanly written and mostly well-documented, making it particularly easy for the auditors to review the software’s structure. Except for the SQL parser, none of the components had overly complex logic or included typically vulnerable constructs. The above factors contributed to the impressive security posture found during this assessment.
The number of issues found during this test is particularly low despite the testers’ best efforts to locate additional problem areas. Only three minor issues were identified and their respective implications should be evaluated as insignificant in the broad picture of assessing Vitess. The intermediate results of the test were shared with the development team during the course of the test, while the details of the discovered issues were only included in this final test report. In light of this February 2019 project, Cure53 concludes that the Vitess database scaler is mature and secure. Therefore, it is deemed fit-for-purpose as far as deployment in modern scalable environments is concerned.
Cure53 would like to thank Sugu Sougoumarane, Gary Edgar, Lori Clerkin and Deepthi Sigireddi from the Vitess team, as well as Chris Aniszczyk of The Linux Foundation, for their excellent project coordination, support and assistance, both before and during this assignment. Special gratitude also needs to be extended to The Linux Foundation for sponsoring this project." } ]
{ "category": "App Definition and Development", "file_name": "VIT-01-report.pdf", "project_name": "Vitess", "subcategory": "Database" }
[ { "data": "Figure: Query latency percentiles (90%ile, 95%ile, 99%ile) per datasource (a to h) over time, Feb 03 to Feb 24; y-axis: query time (seconds)." } ]
{ "category": "App Definition and Development", "file_name": "query_percentiles.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": " \n \nLucanus Simonson, Gy uszi Suto\n\n   \n \n \n  \n   \n\n  \n  \n\n   \n   \n\n  \n    \n\n  \n  \n    \n  \n  \n     \n  \n   \n    \n    \n    \n  \n    \n  \n  \n  \n   \n  \n    \n  \n \n \n\n \n\n \n\n  \n\n \n\n  \n \n \n  \n\n\n\n\n  \n  a c\n b\n d\n  \n  void clip_and_sub tract(polygon_set & d,\npolygon a, polygo n b, rectangle c) {\nd = (a & c) - b;\n}\n  \n\n  \n\n  \n\n  \n\n  \n\n  \n\n  \n\n  \n\n  \n\n  \n \n  \n\n  \n\n  \n\n  \n \n \n  \n\n  \n\n  \n\n  \n \n \n 
\n  \n\n  \n\n  \n\n  \n \n \n \n  \n\n  \n\n  \n\n  \n\n  \n \n \n \n  \n\n  \n \n \n  \n\n  \n\n  \n\n  \n \n \n \n  \n\n  \n \n \n   \n \n\n\n\n \n\n\n\n \n \n  \n  \n  \n\n \n  \n  \n  \n  \n  \n  \n\n\n\n\n\n \n \n \n   \n\n  \n \n \n   \n \n   \n \n  \n\n  \n  \n  \n\n   \n\n   \n\n\n   \n\n\n   \n\n\n\n\n\n\n   \n \n  \n\n\n  \n\n\n  \n\n  \n \n \n \n  \n  \n\n\n   \n\n  \n \n  \n\n\n\n  
\n\n  \n \n \n \n \n   \n   \n   \n   \n  \n\n   \n  \n \n\n   \n  \n    \n\n    \n\n   \n  \n\n     \n  \n \n   \n   \n    \n  \n  \n     \n  \n \n    \n  \n   \n \n   \n \n  \n \n\n  \n \n  \n \n  \n \n \n   \n \n   \n  \n  \n\n  \n   \n\n  \n   \n  \n    \n \n    \n   \n    \n   \n    \n\n\n   \n  \n   
 \n  \n \n    \n   \n    \n   \n \n+1-1 +1\n-1-1 +1   \n\n \n  \n  \n     \n    \n   \n   \n    \n\n   \n     \n\n     \n \n      \n  \n  \n \n \n  \nDecompose \nDecompose Merge Input \nVertices-1 -1\n+1 +1\n+1 -10,-1\n0,+1 0,-10,+1-1,0 +1,0\n-1,0 +1,0\n+1,0 -1,0-1\n+1 -1+1\n  \n \n \n0,-1\n0,+1 0,-10,+1-1,0\n+1,0\n+1,0 -1,0+1,0\n-1,0\n   \n  \n\n   \n\n    \n \n0,-1\n0,+1 0,-10,+1-1,0\n+1,0\n+1,0 -1,0-1\n+1+1,0\n-1,0\n   \n\n   \n\n     \n  \n\n\n\n\n0,-1\n0,+1 0,-10,+1-1,0\n+1,0\n+1,0 -1,0+1\n-1-1\n+1+1,0\n-1,0\n   \n\n   \n\n    \n0,-1\n0,+1 0,-10,+1-1,0\n+1,0\n+1,0 -1,0-1\n+1-1\n+1\n+1,0\n-1,0\n\n   \n\n   \n\n     \n  \n \n0,-1\n0,+1 0,-10,+1-1,0\n+1,0\n+1,0 -1,0+1\n-1-1\n+1\n+1,0\n-1,0\n+1\n-1\n   \n\n  
 \n\n    \n\n    \n \n0,-1\n0,+1 0,-10,+1-1,0\n+1,0\n+1,0 -1,0-1\n+1\n+1,0\n-1,0\n+2\n-1-1 +1\n-1+1\n-1\n   \n\n   \n\n     \n \n0,-1\n0,+1 0,-10,+1-1,0\n+1,0\n+1,0 -1,0-1\n+1\n+1,0\n-1,0\n+1\n-1+1\n-1+1\n-1\n+2\n-1-1\n  \n\n  \n  \n \nSweep-line Polygon \nFormation-1\n+1+1\n-1+1\n-1+1\n-1\n+2\n-1-1 \n  \n   \n  \n   \n  \n  \n  \n \n\n\n\n\n\n\n \n\n  \n\n\n  \n\n\n \n\n\n \n  \n  \n \n  \n \n\n  \n\n   \n\n  \n \n   \n \n \n \n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n \n   \n   \n       \n  \n\n  \n\n  \n \n\n \n\n  \n\n \n\n\n \n\n  \n \n\n  \n \n \n  \n \n 
  \n\n  \n \n   \n\n  \n //Segment 1: (x11 ,y11) to (x12, y1 2)\n//Segment 2: (x21 ,y21) to (x22, y2 2)\ny1 < y2 iff\n(x22 - x21)((x - x11)(y12 - y11) + y11(x12 - x 11)) <\n(x12 - x11)((x - x21)(y22 - y21) + y21(x22 - x 21)) \n\n  \n\n//Segment 1: (x11 ,y11) to (x12, y1 2)\ndx1 = x12 - x11; dy1 = y12 - y11;\n//Segment 2: (x21 ,y21) to (x22, y2 2)\ndx2 = x22 - x21; dy2 = y22 - y21;\nx = (x11 * dy1 * dx2 – x21 * dy2 * dx1 + \ny21 * dx1 * dx2 - y11 * dx1 * dx2) / \n(dy1 * dx2 - dy2 * dx1); \ny = (y11 * dx1 * dy2 - y21 * dx2 * dy1 + \nx21 * dy1 * dy2 - x11 * dy1 * dy2) / \n(dx1 * dy2 - dx2 * dy1); \n \n \n \n\n \n\n \n\n \n\n  \n  \n \n\n\n\n  \n\n \n\n  \n\n \n\n  \n  \n \n  \n \n  \n \n  \n  \n   \n\n   \n   \n  b a a\n result\n   \n \n   void foo(list<CPo lygon>& result, \nconst list<CPolyg on>& a, \nconst list<CPolyg on>& b) {\nCBoundingBox doma inExtent;\ngtl::extents(doma inExtent, a);\nresult += (b & do mainExtent) ^ (a - 10);\n} \n  \n\n   \n \n \n \n   \n \n   \n \n   \n \n  \n   \n\ntemplate <typenam e T>\nstruct point_trai ts {\ntypedef T::coordi nate_type coordin ate_type;\ncoordinate_type g et(const T& p, or ientation_2d orie nt) { \nreturn p.get(orie nt);\n}\ntemplate 
<typenam e T>\nstruct point_muta ble_traits {\nvoid set(const T& p, orientation_2 d orient, \ncoordinate_type v alue) {\np.set(orient, val ue);\n}\nT construct(coord inate_type x, coo rdinate_type y) { \nreturn T(x, y); }\n}; \n   \n  \n \n    \n  \n\n   \n \ntemplate <typenam e T> struct is_in teger {};\ntemplate <> \nstruct is_integer <int> { typedef i nt type; };\ntemplate <typenam e T> struct is_fl oat {};\ntemplate <> \nstruct is_float<f loat> { typedef f loat type; };\ntemplate <typenam e T>\ntypename is_int<T >::type foo(T inp ut);\ntemplate <typenam e T>\ntypename is_float <T>::type foo(T i nput); \n \n \n \n \n\n   \n   \n \n   \n \n  \n   \n \n  \nstruct polygon_co ncept {};\nstruct rectangle_ concept {};\ntemplate <typenam e T>\nstruct is_a_polyg on_concept{};\ntemplate <> struc t is_a_polygon_co ncept<rectangle_c oncept> { \ntypedef gtl_yes t ype; }; \n \n  \n \n  \n \n\n  \n\n  \n \n  \nPS polygon_set_conceptPS45 polygon_45_set_conceptPS90 polygon_90_set_conceptPWH polygon_with_holes_conc eptP polygon_conceptPWH45 polygon_45_with_holes_ conceptP45 polygon_45_conceptPWH90 polygon_90_with_holes_ conceptP90 polygon_90_conceptR rectangle_conceptPT3D point_3d_conceptPT point_conceptI interval_conceptC coordinate_conceptAbbreviation Concept\n  \n\n  \n \n  \n \n \n  \n \n \n  \n \n  \n \n  \n \n  \n   \n\n   \n \n   \n 
\n\n   \n \n\n  \n\n\n\n\n  \n\n \n\n\n \n \n \n  \n \n operator=operator-operator&void clip_and_sub tract(polygon_set & d,\npolygon a, polygo n b, rectangle c) {\nd = (a & c) - b;\n} \n   \n  \n \n  \n   \n\n   \n  \n\n  \n  \n    \n  \n\n    \n\n    \n  \n   \n  \n \n   \n\n   \n  \n  \n \n  \n  \n  \n \n \ntemplate <typenam e T> struct gtl_i f{\n#ifdef WIN32\ntypedef gtl_no ty pe;\n#endif\n};\ntemplate <> struc t gtl_if<gtl_yes> { typedef gtl_ye s type; };" } ]
{ "category": "App Definition and Development", "file_name": "GTL_boostcon_draft03.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "The client reads directly \nfrom the storage nodes, \nwhich use MVCC to \nprovide a consistent \nsnapshot.4\nWrites are sent to \nthe transactional\nauthority at \ncommit time.3The client’s commit succeeds once the \ntransaction is logged \nto multiple disks in \nthe transactional \nauthority.5a\nA set of coordinators running PAXOS elects \nleaders and avoids “split \nbrain” problems. They are \nnot involved in individual \ntransactions.Successful trans -\nactions are sent \nto memory on \nthe appropriate \nstorage nodes.5b\ntransactional\nauthoritystorage nodes\ncoordinatorsThe transactional\nauthority enforces \nACID guarantees.4Storage nodes lazily \nupdate data on disk \nbased on transactional \nwrites.6\n5The transactionalauthority can \nremove transactions \nfrom disk once they \nare safely durable on \nthe storage nodes.7A=6\nA=6 A=6\nA=6A:6A:6\nA:6\nKey-Value StoreWhen starting a \ntransaction, the client \nrequests a consistent \nread version from the \ntransactional authority.21Applications can use \nmultiple data models \nby speaking to\nco-located layers.\nLayers store data \nside-by-side in the \ncluster, usually in \ntheir own keyspace.Application\nCode\nLayer Layer\nFDB\nClientFDB\nClient\nApp servers use \ntraditional load \nbalancing for client \nrequests.Key-Value Store Logical Archite cture\nDISCLAIMER: Please do not try to infer system propert ies from this diagram .\nFor that information, please see the Key-Value S tore Features and\nKnown Limitations, or ask us a que stion." } ]
{ "category": "App Definition and Development", "file_name": "Architecture.pdf", "project_name": "FoundationDB", "subcategory": "Database" }
[ { "data": "Introduction: \nCNCF Serverless WG & \nCloudEvents \nDoug Davis - dug@us.ibm.com \nCathy Zhang - Cathy.H.Zhang@huawei.com Agenda \n●Serverless WG Overview \n●CloudEvents Overview \n●SDKs \n●Status of CloudEvents \n●Serverless Workflow Introduction \n●Demo - time permitting \n●Q&A Serverless WG Overview \n●Technical Oversight Committee initiated \n○Whitepaper \n■Overview of technology \n■State of ecosystem \n■Recommendations for possible CNCF next steps \n○Landscape \n●CloudEvents - Minimal common attributes / shape of events \n○Sandbox project \n●Function workflow - orchestration of Functions CloudEvents Overview \n●Consistent metadata & format \n●Core specification - minimal properties \n●Transport bindings - how to serialize in JSON, HTTP, MQTT, ... \n{\n \"specversion\" : \"0.1\", \n \"type\" : \"myevent\", \n \"source\" : \"uri:example-com:mydevice\", \n \"id\" : \"A234-1234-1234\", \n \"time\" : \"2018-04-05T17:31:00Z\", \n \"contenttype\" : \"text/plain\", \n \"data\" : \"Hello\" \n}It’s not about data. \nIt’s about metadata! CloudEvents Use Cases \n●Normalize events, web-hooks, across environments - interop!! \n●Facilitate integrations across platforms \n●Leave the event business logic processing to the application \n●First step towards portability of functions CloudEvents Deliverables \n●CloudEvents Specification – define the metadata \n●Serialization Rules Specifications \n○JSON event format \n○AMQP event format \n●Transport Bindings Specifications \n○HTTP – binary and structured \n○MQTT \n○AMQP \n○NATS \n○Web-hooks \n●Primer Cloud Events SDK \n●SDK CloudEvent Sub-group \n○(De)Serializer for CloudEvents on various transports - at least http \n○Provide consistency across SDK / languages \n●Development underway (WIP) \n○Go\n○Java \n○Javascript \n○CSharp \n○Python Status of CloudEvents \n●Current version: v0.1 - April 2018 \n○v0.2 very soon! \n●What’s left for CloudEvents v1.0? \n○Finalize the core Event Attributes \n○Finalize the set of protocol and serialization mappings \n○Documentation, developer and/or user guide. \n○Interop demos & verification through implementations and testing \n●What will come after CloudEvents 1.0? \n○Develop SDK and supporting tools for CloudEvents \n○Stabilization and adoption (organize more CloudEvents Interop Demos) Workflow Introduction WorkFlow Introduction \nWorkFlow Introduction \nWorkFlow Introduction \nWorkFlow Introduction \nDemo - Time Permitting CloudEvents Demo \nhttps://youtu.be/TZPPjAv12k \nCloud Events Demo \nhttps://twitter.com/CloudEventsDemo/lists/demo Thank You! \n●Serverless WG : https://github.com/cncf/wg-serverless \n○Workflow: https://github.com/cncf/wg-serverless/tree/master/workflow/spec \n●CloudEvents : https://cloudevents.io/ \n○Org : https://github.com/cloudevents \n○Spec repo : https://github.com/cloudevents/spec \n○SDKs : https://github.com/cloudevents/sdk- ...\n●Deep-Dive Session: Thursday November 15 - 15:05-15:40 - 3M 3 \n○Cathy: Serverless Workflow: Key to Wide Serverless Adoption \n■Thursday November 15, 2018 12:15 - 12:50 \n●Questions? Deep Dive: \nCNCF Serverless WG & \nCloudEvents \nClemens Vasters - clemensv@microsoft.com \nCathy Zhang - Cathy.H.Zhang@huawei.com Agenda \n●CloudEvents Deep Dive \n●Workflow Overview \n●Q&A Eventing vs Messaging \n•Events and messages are both mailing envelopes for data, decorated \nby metadata – but they are different. \n•Events carry facts. They report things that have happened. 
\n•State transitions, observed conditions, objects having been created, … \n•Messages carry intents. The sender expects something to happen. \n•Command execution, job handling, workflow progress, … \n•Events are published as an information option for interested \nsubscribers. The audience size may be zero or many. \n•Messages are sent to handlers. There may be delivery and handling \nstatus feedback, replies, conversations, or complex control flows like \nWorkflows and Sagas. The audience size may be one or many. CloudEvents - Base Specification \n•CloudEvents is a lightweight common convention for events. \n•It’s intentionally not a messaging model to keep complexity low. \n•No reply-path indicators, no message-to-message correlation, no target \naddress indicators, no command verbs/methods. \n•Metadata for handling of events by generic middleware and/or \ndispatchers \n•What kind of event is it? eventtype \n•When was it sent? eventtime \n•What context was it sent out of? source \n•What is this event’s unique identifier? eventid \n•What’s the shape of the carried event data? contenttype , schemaurl \n•Event data may be text-based (esp. JSON) or binary CloudEvents - Event Formats \n•Event formats bind the abstract \nCloudEvents information model to \nspecific wire encodings. \n•All implementation must support JSON. \nJSON is the default encoding for where \nmetadata text must be rendered, e.g. \nHTTP header values \n•AMQP type system encoding defined for \nmetadata mapping to AMQP properties \nand annotations \n•Further compact binary event format \ncandidates might be CBOR, or Protobuf. JSON Representation {\n \"specversion\" : \"0.1\", \n \"type\" : \"myevent\", \n \"source\" : \"uri:example-com:mydevice\", \n \"id\" : \"A234-1234-1234\", \n \"time\" : \"2018-04-05T17:31:00Z\", \n \"type\" : \"text/plain\", \n \"data\" : \"Hello\" \n}HTTP Transport Binding \n•Transport bindings bind the CloudEvent event metadata and data to \nthe transport frame of an existing application or transfer protocol. \n•HTTP Transport Binding: \n•Binds a CloudEvent event to the HTTP message. Works for both requests \nand replies. Does not constrain usage of methods or status codes; can be \nused for all cases where HTTP carries entity bodies. \n•Structured mode: Complete event including metadata rendered carried in \nentity body. Upside: Easier to handle/forward \n•Binary mode: Only event data carried in entity body, metadata mapped to \nheaders. Upside: More compact HTTP Structured Binding Mode \nHTTP/1.1 POST /myresource \n…\ncontent-type: application/cloudevents+json \n{\n \"specversion\" : \"0.1\", \n \"type\" : \"myevent\", \n \"source\" : \"uri:example-com:mydevice\", \n \"id\" : \"A234-1234-1234\", \n \"time\" : \"2018-04-05T17:31:00Z\", \n \"contenttype\" : \"text/plain\", \n \"data\" : \"Hello\" \n}\nComplete event including metadata rendered carried in entity body. Upside: \nEasier to handle/forward HTTP Binary Binding Mode \nHTTP/1.1 POST /myresource \nce-specversion: 0.1 \nce-type: myevent \nce-source: uri:example-com:mydevice \nce-id: A234-1234-1234 \nce-time: 2018-04-05T17:31:00Z \ncontent-type: text/plain \nHello\nOnly event data carried in entity body, metadata mapped to headers. 
Upside: \nMore compact Other Transport Bindings \n•AMQP: ISO/IEC messaging protocol used for a variety of message \nbrokers and event buses; defined in OASIS \n•Binds event to the AMQP message \n•Binary and structured modes \n•MQTT: ISO/IEC lightweight pub/sub protocol for device telemetry \npropagation; defines in OASIS \n•Binds event to MQTT PUBLISH frame. \n•Binary and Structured for MQTT v5 \n•Structured mode only for MQTT v3.1.1 (lacks custom frame headers) \n•NATS: Text-based lightweight pub/sub protocol \n•Binds event to the NATS message. \n•Structured mode only (lacks custom frame headers) Workflow Deep Dive Workflow Overview (Use Case) \nWorkflow Overview (Key Primitives) \nWorkflow Overview (Use Case) \nWorkflow Overview \nThank You! \n●Serverless WG : https://github.com/cncf/wg-serverless \n○Workflow: https://github.com/cncf/wg-serverless/tree/master/workflow/spec \n●CloudEvents : https://cloudevents.io/ \n○Org : https://github.com/cloudevents \n○Spec repo : https://github.com/cloudevents/spec \n○SDKs : https://github.com/cloudevents/sdk- ...\n●Questions? " } ]
{ "category": "App Definition and Development", "file_name": "2018-11-14-KubeCon-Intro-DeepDive.pdf", "project_name": "CloudEvents", "subcategory": "Streaming & Messaging" }
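To make the binary content mode from the CloudEvents slides above concrete, the sketch below posts the same example event over HTTP with the ce-* headers shown on the slide, using only Go's standard library. The endpoint URL is a placeholder and the attribute values are copied from the slide for illustration.
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Binary mode: only the event data travels in the entity body,
	// while the CloudEvents attributes are mapped to ce-* headers.
	req, err := http.NewRequest(http.MethodPost,
		"http://localhost:8080/myresource", bytes.NewBufferString("Hello"))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("ce-specversion", "0.1")
	req.Header.Set("ce-type", "myevent")
	req.Header.Set("ce-source", "uri:example-com:mydevice")
	req.Header.Set("ce-id", "A234-1234-1234")
	req.Header.Set("ce-time", "2018-04-05T17:31:00Z")
	req.Header.Set("Content-Type", "text/plain")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("delivered, status:", resp.Status)
}
Structured mode would instead serialize the whole event as application/cloudevents+json into the request body, as shown on the preceding slide.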
[ { "data": "Figure: Core Scan Rate (million rows/sec.) versus Number of Nodes (30 to 70) for Query 1 through Query 6." } ]
{ "category": "App Definition and Development", "file_name": "core_scan_rate.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "NoSQL\n1\nNoSQL\nIn computing, NoSQL (sometimes expanded to \"not only SQL\") is a broad class of database management systems\nthat differ from classic relational database management systems (RDBMSes) in some significant ways. These data\nstores may not require fixed table schemas, usually avoid join operations, and typically scale horizontally. Academia\ntypically refers to these databases as structured storage,[1] [2] [3] [4] a term that would include classic relational\ndatabases as a subset.\nHistory\nCarlo Strozzi used the term NoSQL in 1998 to name his lightweight, open-source relational database that did not\nexpose an SQL interface.[5] (Strozzi suggests that, as the current NoSQL movement \"departs from the relational\nmodel altogether; it should therefore have been called more appropriately 'NoREL', or something to that effect.\"[6] )\nEric Evans, a Rackspace employee, reintroduced the term NoSQL in early 2009 when Johan Oskarsson of Last.fm\nwanted to organize an event to discuss open-source distributed databases.[7] The name attempted to label the\nemergence of a growing number of non-relational, distributed data stores that often did not attempt to provide ACID\n(atomicity, consistency, isolation, durability) guarantees, which are the key attributes of classic relational database\nsystems such as IBM DB2, MySQL, Microsoft SQL Server, PostgreSQL, Oracle RDBMS, Informix, Oracle Rdb,\netc.\nIn 2011, work began on UnQL (Unstructured Query Language), a specification for a query language for NoSQL\ndatabases.[8] It is built to query collections (versus tables) of documents (versus rows) with loosely defined fields\n(versus columns). So it is a superset of SQL where SQL is a very constrained type of UnQL where the queries will\nalways return the same fields (same number, names and types). However, UnQL does not cover the DDL SQL\nstatements like CREATE TABLE or CREATE INDEX[9] .\nArchitecture\nTypical modern relational databases have shown poor performance on certain data-intensive applications, including\nindexing a large number of documents, serving pages on high-traffic websites, and delivering streaming media.[10]\nTypical RDBMS implementations are tuned either for small but frequent read/write transactions or for large batch\ntransactions with rare write accesses. NoSQL, on the other hand, can service heavy read/write workloads.[10]\nReal-world NoSQL deployments include Digg's 3 TB for green badges (markers that indicate stories upvoted by\nothers in a social network)[11] and Facebook's 50 TB for inbox search.[12]\nNoSQL architectures often provide weak consistency guarantees, such as eventual consistency, or transactions\nrestricted to single data items. 
Some systems, however, provide full ACID guarantees in some instances by adding a\nsupplementary middleware layer (e.g., AppScale and CloudTPS).[13] [14] Two systems have been developed that\nprovide snapshot isolation for column stores: Google's Percolator system based on BigTable,[15] and a transactional\nsystem for HBase developed at the University of Waterloo.[16] These systems, developed independently, use similar\nconcepts to achieve multi-row distributed ACID transactions with snapshot isolation guarantee for the underlying\ncolumn store, without the extra overhead of data management, middleware system deployment, or maintenance\nintroduced by the middleware layer.\nSeveral NoSQL systems employ a distributed architecture, with the data held in a redundant manner on several\nservers, often using a distributed hash table. In this way, the system can readily scale out by adding more servers,\nand failure of a server can be tolerated.[17]\nSome NoSQL advocates promote very simple interfaces such as associative arrays or key-value pairs. Other systems,\nsuch as native XML databases, promote support of the XQuery standard. Newer systems such as CloudTPS also\nsupport join queries.[18]\nNoSQL\n2\nTaxonomy\nNoSQL implementations can be categorized by their manner of implementation:\nDocument store\n Name \n Language \n Notes \n BaseX\n Java, XQuery\n XML database\n Apache CouchDB\nErlang\n eXist\n XQuery\n XML database\n Jackrabbit\nJava\n Lotus Notes\nLotusScript, Java, others\nMultiValue\n MarkLogic Server\nXQuery\n XML database\n MongoDB\nC++\n BSON (Binary format JSON)\nOrientDB\nJava\n SimpleDB\nErlang\n Terrastore\nJava\n \nGraph\n Name \n Language \n Notes \nAllegroGraph\nSPARQL\nRDF GraphStore\nDEX\nJava\nHigh-performance Graph Database\nInfiniteGraph\nJava\nHigh-performance, scalable, distributed Graph Database\nNeo4j\nJava\nOrientDB\nJava\nFlockDB\nScala\nSones GraphDB\nC#\nGraph database with query language called GraphQL\nPregel\nKey-value store\nKey-value stores allow the application to store its data in a schema-less way. The data could be stored in a datatype\nof a programming language or an object. Because of this, there is no need for a fixed data model.[19] The following\ntypes exist:\nEventually‐consistent key‐value store\n•Apache Cassandra\n•Dynamo\n•Hibari\n•Project Voldemort\n•Riak [20]\nNoSQL\n3\nHierarchical key-value store\n•GT.M\nHosted services\n•Freebase\nKey-value cache in RAM\n•Citrusleaf database\n•memcached\n•Oracle Coherence\n•Redis\n•Tuple space\n•Velocity\nKey-value stores on disk\n•BigTable\n•CDB\n•Citrusleaf database\n•Keyspace\n•LevelDB\n•membase\n•Memcachedb\n•Redis\n•Tokyo Cabinet\n•TreapDB\n•Tuple space\n•MongoDB\nOrdered key-value stores\n•Berkeley DB\n•IBM Informix C-ISAM\n•Memcachedb\n•NDBM\nMultivalue databases\n•Extensible Storage Engine (ESE/NT)\n•OpenQM\n•Revelation Software's OpenInsight\n•Rocket U2\n•D3 Pick database\n•InterSystems Caché\nNoSQL\n4\nObject database\n•db4o\n•GemStone/S\n•InterSystems Caché\n•JADE\n•ObjectDB\n•Objectivity/DB\n•ObjectStore\n•Versant Object Database\n•ZODB\nTabular\n•BigTable\n•Apache Hadoop\n•Apache Hbase\n•Hypertable\n•Mnesia\nTuple store\n•Apache River\nReferences\n[1]Hamilton, James (3 November 2009). \"Perspectives: One Size Does Not Fit All\" (http:/ / perspectives. mvdirona. com/\nCommentView,guid,afe46691-a293-4f9a-8900-5688a597726a. aspx). . Retrieved 13 November 2009.\n[2]Lakshman, Avinash; Malik, Prashant. Cassandra — A Decentralized Structured Storage System (http:/ / www. cs. cornell. 
edu/ projects/\nladis2009/ papers/ lakshman-ladis2009. pdf). Cornell University. . Retrieved 13 November 2009.\n[3]Chang, Fay; Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and\nRobert E. Gruber. Bigtable: A Distributed Storage System for Structured Data (http:/ / labs. google. com/ papers/ bigtable-osdi06. pdf).\nGoogle. . Retrieved 13 November 2009.\n[4]Kellerman, Jim. \"HBase: structured storage of sparse data for Hadoop\" (http:/ / www. rapleaf. com/ pdfs/ hbase_part_2. pdf). . Retrieved 13\nNovember 2009.\n[5]Lith, Adam; Jakob Mattson (2010). \"Investigating storage solutions for large data: A comparison of well performing and scalable data storage\nsolutions for real time extraction and batch insertion of data\" (http:/ / publications. lib. chalmers. se/ records/ fulltext/ 123839. pdf) (PDF).\nGöteborg: Department of Computer Science and Engineering, Chalmers University of Technology. p. 15. . Retrieved 2011-05-12. \"Carlo\nStrozzi first used the term NoSQL in 1998 as a name for his open source relational database that did not offer a SQL interface[...]\"\n[6]\"NoSQL Relational Database Management System: Home Page\" (http:/ / www. strozzi. it/ cgi-bin/ CSA/ tw7/ I/ en_US/ nosql/ Home Page).\nStrozzi.it. 2007-10-02. . Retrieved 2010-03-29.\n[7]\"NOSQL 2009\" (http:/ / blog. sym-link. com/ 2009/ 05/ 12/ nosql_2009. html). Blog.sym-link.com. 2009-05-12. . Retrieved 2010-03-29.\n[8]http:/ / unqlspec. org/ display/ UnQL/ Home\n[9]Avram, Abel (04). \"Interview: Richard Hipp on UnQL, a New Query Language for Document Databases\" (http:/ / www. infoq. com/ news/\n2011/ 08/ UnQL). http:/ / www. infoq. com. . Retrieved 7 September 2011.\n[10]Agrawal, Rakesh et al. (2008). \"The Claremont report on database research\" (http:/ / db. cs. berkeley. edu/ claremont/ claremontreport08.\npdf). SIGMOD Record (ACM) 37 (3): 9–19. doi:http:/ / doi. acm. org/ 10. 1145/ 1462571. 1462573. & #32;ISSN& nbsp;0163-5808. .\n[11]\"Looking to the future with Cassandra | Digg About\" (http:/ / about. digg. com/ blog/ looking-future-cassandra). About.digg.com.\n2009-09-09. . Retrieved 2010-03-29.\n[12]\"Cassandra\" (http:/ / www. facebook. com/ note. php?note_id=24413138919& id=9445547199& index=9). facebook.com. 2008-08-25. .\nRetrieved 2011-08-19.\n[13]\"Datastore Agnostic Transaction Support for Cloud Infrastructures\" (http:/ / cs. ucsb. edu/ ~ckrintz/ papers/ ieeecloud11. pdf). IEEE.\n2011-07-04. .\n[14]\"CloudTPS: Scalable Transactions for Web Applications in the Cloud\" (http:/ / www. globule. org/ publi/ CSTWAC_ircs53. html).\nGlobule.org. . Retrieved 2010-03-29.\nNoSQL\n5\n[15]\"Large-scale Incremental Processing Using Distributed Transactions and Notifications\" (http:/ / www. google. ca/ url?sa=t& source=web&\ncd=3& ved=0CCQQFjAC& url=http:/ / www. usenix. org/ events/ osdi10/ tech/ full_papers/ Peng. pdf& rct=j& q=Large-scale Incremental\nProcessing Using Distributed Transactions and Notifications& ei=eM24TOYnjqedB_mHmLUN&\nusg=AFQjCNGGm1Xfaml5lq6Aj1R2BlX7WilIuQ& sig2=ZZcPWxhiMVSnY-DmewIFIg& cad=rja). The 9th USENIX Symposium on\nOperating Systems Design and Implementation (OSDI 2010), Oct 4–6, 2010, Vancouver, BC, Canada. . Retrieved 2010-10-15.\n[16]\"Supporting Multi-row Distributed Transactions with Global Snapshot Isolation Using Bare-bones [[HBase (http:/ / www. cs. uwaterloo. ca/\n~c15zhang/ ZhangDeSterckGrid2010. pdf)]\"]. 
The 11th ACM/IEEE International Conference on Grid Computing (Grid 2010), Oct 25-29,\n2010, Brussels, Belgium. . Retrieved 2010-10-15.\n[17]\"Cassandra: Structured Storage System over a P2P Network\" (http:/ / static. last. fm/ johan/ nosql-20090611/ cassandra_nosql. pdf) (PDF). .\nRetrieved 2010-03-29.\n[18]\"Consistent Join Queries in Cloud Data Stores\" (http:/ / www. globule. org/ publi/ CJQCDS_ircs68. html). Globule.org. . Retrieved\n2011-01-31.\n[19]Marc Seeger (2009-09-21). \"Key-Value Stores: a practical overview\" (http:/ / dba. stackexchange. com/ questions/ 607/\nwhat-is-a-key-value-store-database). http:/ / www. slideshare. net/ marc. seeger/ keyvalue-stores-a-practical-overview: slideshare. . Retrieved\n2010-03-09. \"Key value stores allow the application developer to store schema-less data. This data is usually consisting of a string that\nrepresents the key, and the actual data that is considered to be the value in the \"key - value\" relationship. The data itself is usually some kind\nof primitive of the programming language (a string, an integer, an array) or an object that is being marshalled by the programming languages\nbindings to the key value store. This replaces the need for fixed data model and makes the requirement for properly formatted.\"\n[20]\"Riak: An Open Source Scalable Data Store\" (https:/ / wiki. basho. com). 28 November 2010. . Retrieved 28 November 2010.\nExternal links\n•(http:/ / www. odbms. org/ downloads. aspx#nosql) on [ODBMS.ORG: NoSQL Data Stores Section]\n•NoSQLforums.ORG: NoSQL Knowledgebase - Live Message Board (http:/ / www. nosqlforums. org/ )\n•NoSQL User Group (http:/ / www. linkedin. com/ groups?gid=2085042) on LinkedIn\n•nosql-discussion (http:/ / groups. google. com/ group/ nosql-discussion) on Google Groups\n•nosqldatabases.com (http:/ / nosqldatabases. com/ )\n•myNoSQL: news, articles and links about NoSQL (http:/ / nosql. mypopescu. com/ )\n•nosql-databases.org (http:/ / nosql-databases. org/ )\n•computerworld.com : No to SQL? Anti-database movement gains steam (http:/ / www. computerworld. com/ s/\narticle/ 9135086/ No_to_SQL_Anti_database_movement_gains_steam_)\n•Is Microsoft Feeling the \"NoSQL\" Heat? (http:/ / reddevnews. com/ blogs/ data-driver/ 2009/ 12/ nosql-heat_0.\naspx)\n•Information Week \"The NoSQL Alternative\" (http:/ / www. informationweek. com/ news/ development/\narchitecture-design/ showArticle. jhtml?articleID=224900559)\n•How RDF Databases Differ from Other NoSQL Solutions (http:/ / blog. datagraph. org/ 2010/ 04/ rdf-nosql-diff)\n•CouchOne (http:/ / www. couchone. com)\n•NoSql Tapes (http:/ / nosqltapes. com)\n•NoSQL Databases (Introduction and Overview) (http:/ / www. christof-strauch. de/ nosqldbs. 
pdf)\nArticle Sources and Contributors\n6\nArticle Sources and Contributors\nNoSQL  Source: http://en.wikipedia.org/w/index.php?oldid=454406739  Contributors: Al3xpopescu, Alexandre.Morgaut, AlisonW, Amire80, Argv0, Arto B, Asafdapper, AxelBoldt, Bbulkow,\nBdijkstra, Bearcat, Beland, Benatkin, Benhoyt, Bhaskar, Biofinderplus, Bovineone, CaptTofu, Ceefour, Cekli829, Charbelgereige, ChristianGruen, Clemwang, Cnorvell, ColdShine, Coldacid,\nCraigbeveridge, Crosbiesmith, Cybercobra, Cyril.wack, DamarisC, Dancrumb, DavidBourguignon, DavidSol, Davidhorman, Dericofilho, Dm, Dmccreary, Dmitri.grigoriev, Dredwolff, Drttm,\nDshelby, Dstainer, Duncan, Ebalter, Eco schranzer, Edlich, Ehn, Eno, EricBloch, ErikHaugen, Ertugka, Euphoria, Excirial, Fiskbil, Fraktalek, Frap, Furrykef, Fxsjy, Gaborcselle, Germanviscuso,\nGetmoreatp, Gkorland, GlobalsDB, GoingBatty, Gpierre, Gstein, Heelmijnlevenlang, Hloeung, Hoelzro, Inmortalnet, Irmatov, JLaTondre, Jabawack81, Jandalhandler, Javalangstring,\nJeffdexter77, JnRouvignac, Jonasfagundes, Joolean, Jottinger, Jrudisin, Jstplace, Justinsheehy, Kgfleischmann, Ki2010, KiloByte, Kkbhumana, Koavf, Komap, Korrawit, Leotohill, Lfstevens,\nLguzenda, Linas, Looris, Luisramos22, MMSequeira, Mabdul, Magnuschr, Marasmusine, Mbonaci, Mhegi, Miami33139, Mitpradeep, Mjresin, Morphh, Mortense, MrWerewolf, Mshefer,\nMtrencseni, Mydoghasworms, Natishalom, Nawroth, Netmesh, Nileshbansal, Ntoll, Omidnoorani, PatrickFisher, Pcap, Peak, Phillips-Martin, Phunehehe, Plustgarten, Pnm, Poohneat, R39132,\nRabihnassar, Really Enthusiastic, Rfl, RobertG, Robhughadams, Ronz, Rtweed1955, Russss, Sae1962, SamJohnston, ScottConroy, Sduplooy, Seancribbs, Seraphimblade, Shadowjams, Shepard,\nShijucv, Smyth, Sorenriise, Sstrader, Stephen Bain, Stephen E Browne, Stevedekorte, Stimpy77, Syaskin, TJRC, Tagishsimon, Tedder, Theandrewdavis, Thomas.uhl, ThomasMueller,\nThumperward, Tobiasivarsson, Tomdo08, Trbdavies, Tshanky, Tuvrotya, Uhbif19, Violaaa, Viper007Bond, Volt42, Voodootikigod, Vychtrle, Weimanm, Whooym, William greenly, Winterst,\nWoohookitty, Wyverald, YPavan, Zapher67, Zond, 419 anonymous edits\nLicense\nCreative Commons Attribution-Share Alike 3.0 Unported\n//creativecommons.org/licenses/by-sa/3.0/\n" } ]
{ "category": "App Definition and Development", "file_name": "file.pdf", "project_name": "OrientDB", "subcategory": "Database" }
[ { "data": "Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)Integer array size (bytes)\n1e+041e+06\n1e+02 1e+05 1e+08\nCardinalityConcise compressed size (bytes)sorted\nsorted\nunsorted" } ]
{ "category": "App Definition and Development", "file_name": "concise_plot.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "Segment_v4 \nSegment_v3 \nSegment_v2 \nSegment_v1 \nSegment_v4 Segment_v3 Segment_v1 Day 1 Day 2 Day 3 \nResults " } ]
{ "category": "App Definition and Development", "file_name": "timeline.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "ZLIB(3) ZLIB(3)\nNAME\nzlib − compression/decompression libr ar y\nSYNOPSIS\n[see zlib.h forfull description]\nDESCRIPTION\nThezliblibrar yis a general purpose data compression libr ar y.The code is thread saf e, assuming\nthat the standard libr ar yfunctions used are thread saf e, such as memor yallocation routines .It\nprovides in-memor ycompression and decompression functions ,including integ rity checks of the\nuncompressed data. This version of the libr ar ysuppor ts only one compression method (defla-\ntion) but other algorithms ma yb ea dded later with the same stream interface.\nCompression can be done in a single step if the b uffers are large enough or can be done b y\nrepeated calls of the compression function. In the latter case ,the application must provide more\ninput and/or consume the output (providing more output space) before each call.\nThe libr ar yalso supports reading and writing files in gzip(1) (.gz) f or mat with an interface similar\nto that of stdio.\nThe libr ar ydoes not install an ysignal handler .The decoder checks the consistency of the com-\npressed data, so the libr ar yshould ne vercrash eveni nt he case of corrupted input.\nAll functions of the compression libr ar yare documented in the file zlib.h .The distr ibution source\nincludes examples of use of the libr ar yin the files test/example.c andtest/minigzip.c, as well as\nother examples in the examples/ director y .\nChanges to this version are documented in the file ChangeLog that accompanies the source.\nzlibis built in to man ylanguages and operating systems ,including but not limited to J ava, Python,\n.NET ,PHP,Per l,Ruby, Swift, and Go.\nAn exper imental package to read and write files in the .zip f or mat, wr itten on top of zlibby G illes\nVollant (info@winimage.com), is a vailable at:\nhttp://www.winimage.com/zLibDll/minizip .html and also in the contr ib/minizip director y of\nthe main zlibsource distribution.\nSEE ALSO\nThezlibwebsite can be found at:\nhttp://zlib.net/\nThe data f or mat used b ythezliblibrar yis described b yRFC (Request for Comments) 1950 to\n1952 in the files:\nhttp://tools.ietf.org/html/rfc1950 (for the zlib header and trailer f or mat)\nhttp://tools.ietf.org/html/rfc1951 (for the deflate compressed data f or mat)\nhttp://tools.ietf.org/html/rfc1952 (for the gzip header and trailer f or mat)\nMar k Nelson wrote an article about zlibforthe Jan. 1997 issue of Dr.Dobb’sJour nal; acopyof\nthe article is a vailable at:\nhttp://mar knelson.us/1997/01/01/zlib-engine/\nREPORTING PROBLEMS\nBefore reporting a problem, please chec kthezlibwebsite to v er ify that you ha ve the latest v er-\nsion of zlib;otherwise ,obtain the latest version and see if the problem still e xists.Please read\nthezlibFA Q at:\nhttp://zlib.net/zlib_faq.html\nbefore asking for help .Send questions and/or comments to zlib@gzip .org, or (for the Windo ws\nDLL version) to Gilles Vollant (info@winimage.com).\n13 Oct 2022 1ZLIB(3) ZLIB(3)\nAUTHORS AND LICENSE\nVersion 1.2.13\nCopyr ight (C) 1995-2022 Jean-loup Gailly and Mar kAdler\nThis software is provided ’as-is’, without an yexpress or implied w arranty .Inn oe vent will the\nauthors be held liable for an ydamages arising from the use of this software.\nPermission is g ranted to an yone to use this software for an ypur pose ,including commercial appli-\ncations ,and to alter it and redistribute it freely ,subject to the following restrictions:\n1. 
The or igin of this software must not be misrepresented; you must not claim that you wrote the\nor iginal software .I fyou use this software in a product, an ac knowledgment in the product doc-\numentation would be appreciated but is not required.\n2. Altered source versions must be plainly mar keda ss uch, and must not be misrepresented as\nbeing the original software.\n3. This notice ma ynot be remo vedo ra ltered from an ysource distribution.\nJean-loup Gailly Mar k Adler\njloup@gzip .org madler@alumni.caltech.edu\nThe deflate f or mat used b yzlibwasdefined b yPhil Katz. The deflate and zlibspecifications\nwere written b yL .P eter Deutsch. Thanks to all the people who reported problems and suggested\nvarious impro vements in zlib;who are too numerous to cite here.\nUNIX manual page b yR .P .C .R odgers ,U .S.N ational Libr ar y of Medicine\n(rodgers@nlm.nih.gov).\n13 Oct 2022 2" } ]
{ "category": "App Definition and Development", "file_name": "zlib.3.pdf", "project_name": "Percona Server for MySQL", "subcategory": "Database" }
[ { "data": "050010001500\nFeb 03 Feb 10 Feb 17 Feb 24\ntimequeries / minutedatasource\na\nb\nc\nd\ne\nf\ng\nhQueries per minute" } ]
{ "category": "App Definition and Development", "file_name": "queries_per_min.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "01234\ncount_star_interval\nsum_all\nsum_all_filter\nsum_all_year\nsum_price\ntop_100_commitdate\ntop_100_parts\ntop_100_parts_details\ntop_100_parts_filter\nQueryTime (seconds)engine\nDruid\nMySQLMedian query time (100 runs) − 1GB data − single node" } ]
{ "category": "App Definition and Development", "file_name": "tpch_1gb.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "●●●\n0102030\n30 40 50 60 70\nNumber of NodesCluster Scan Rate (billion rows/sec.)query\n●Query 1\nQuery 2\nQuery 3\nQuery 4\nQuery 5\nQuery 6" } ]
{ "category": "App Definition and Development", "file_name": "cluster_scan_rate.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "P R E S E N T S \nV i t e s s\ns e c u r i t y\na u d i t\nIn\ncollaboration\nwith\nthe\nVitess\nmaintainers,\nOpen\nSource\nTechnology\nImprovement\nFund\nand\nThe\nLinux\nFoundation\nA u t h o r s\nAdam\nKorczynski\n<\nadam@adalogics.com\n>\nDavid\nKorczynski\n<\ndavid@adalogics.com\n>\nDate:\nJune\n5,\n2023\nThis\nreport\nis\nlicensed\nunder\nCreative\nCommons\n4.0\n(CC\nBY\n4.0)\nV i t e s s\nS e c u r i t y\nA u d i t ,\n2 0 2 3\nT able\nof\ncontents\nTable\nof\ncontents\n1\nExecutive\nsummary\n2\nN o t a b l e\nf i n d i n g s\n3\nProject\nSummary\n4\nAudit\nScope\n4\nThreat\nmodel\nformalisation\n5\nFuzzing\n14\nIssues\nfound\n16\nSLSA\nreview\n38\nC o n c l u s i o n s\n4 0\n1V i t e s s\nS e c u r i t y\nA u d i t ,\n2 0 2 3\nEx ecutive\nsummary\nIn\nMarch\nand\nApril\n2023,\nAda\nLogics\ncarried\nout\na\nsecurity\naudit\nof\nVitess.\nThe\nprimary\nfocus\nof\nthe\naudit\nwas\na\nnew\ncomponent\nof\nVitess,\nVTAdmin.\nThe\ngoal\nwas\nto\nconduct\na\nholistic\nsecurity\naudit\nwhich\nincludes\nmultiple\ndisciplines\nto\nconsider\nthe\nsecurity\nposture\nfrom\ndifferent\nperspectives.\nTo\nthat\nend,\nthe\naudit\nhad\nthe\nfollowing\nhigh-level\ngoals:\n1.\nFormalise\na\nthreat\nmodel\nof\nVTAdmin.\n2.\nManually\naudit\nthe\nVTAdmin\ncode.\n3.\nManually\naudit\nthe\nremaining\nVitess\ncode\nbase.\n4.\nAssess\nand\nimprove\nVitessʼs\nfuzzing\nsuite.\n5.\nCarry\nout\na\nSLSA\ncompliance\nreview.\nThese\nfive\ngoals\nare\nfairly\ndifferent.\nWhile\nthey\nallowed\nthe\nauditors\nto\nevaluate\nthe\nsecurity\nposture\nof\nVitess\nfrom\ndifferent\nperspectives,\nthey\nalso\noffered\na\nlevel\nof\nsynergy;\nAda\nLogics\nfound\ntwo\nCVEʼs\nduring\nthe\naudit\nwhich\nthe\nthreat\nmodel\ngoal\nhelped\nto\nassess.\nThe\nthreat\nmodel\nwas\nalso\na\nforce-multiplier\nfor\nthe\nfuzzing\nwork\nthat\nled\nto\nthe\ndiscovery\nof\na\nfew\nmissed\nedge\ncases\nwhen\nfixing\nthe\ntwo\nCVEʼs.\nThe\naudit\nstarted\nwith\na\nmeeting\nbetween\nAda\nLogics,\nthe\nVitess\nmaintainers\nand\nOSTIF.\nA\u0000er\nthat,\nall\nthree\nparties\nmet\nregularly\nto\ndiscuss\nissues\nand\nquestions\nas\nthey\narose\nduring\nthe\naudit.\nAda\nLogics\nshared\nissues\nof\nhigher\nseverity\nduring\nthe\naudit.\nIn\nthis\nreport,\nwe\npresent\nthe\nwork\nand\nresults\nfrom\nthe\naudit.\nThe\naudit\nwas\nfunded\nby\nthe\nCNCF\nwho\nhosts\nVitess\nas\na\ngraduated\nproject.\nResults\nsummarised\n12\nsecurity\nissues\nfound\n2\nCVEs\nassigned\nFormalisation\nof\nVTAdmins\nthreat\nmodel\n3\nfuzzers\nadded\nto\nVitessʼs\nOSS-Fuzz\nintegration\n2V i t e s s\nS e c u r i t y\nA u d i t ,\n2 0 2 
3\nNotable\nfindings\nThe\nmost\nnotable\nfindings\nfrom\nthe\naudit\nare\n“ADA-VIT-SA23-5,\nUsers\nthat\ncan\ncreate\nkeyspaces\ncan\ndeny\naccess\nto\nalready\nexisting\nkeyspaces”\nand\n“ADA-VIT-SA23-12,\nVTAdmin\nusers\nthat\ncan\ncreate\nshards\ncan\ndeny\naccess\nto\nother\nfunctions”.\nThese\ntwo\nissues\nallowed\na\nmalicious\nuser\nto\ncreate\na\nresource\nthat\nwould\nthen\nsubsequently\ndisallow\nother\noperations\nfor\nother\nusers.\nFor\nexample,\na\nuser\ncould\ncreate\na\nmalicious\nshard\nthat\nwould\nprevent\nother\nusers\nfrom\nfetching\nor\ncreating\nshards.\nThe\nissues\nwould\ndisallow\nactions\nagainst\nother\nresource\ntypes\nas\nwell,\nthus\nresulting\nin\na\ndenial\nof\nservice\nattack\nvector.\nThe\nissues\nwere\nmore\nsignificant\nfor\nVitess\ndeployments\nthat\ninclude\nthe\nVTAdmin\ncomponent,\nsince\na\nuser\nwith\nthe\nlowest\nlevel\nof\nprivileges\nin\nVTAdmin\ncould\ncause\ndenial\nof\nservice\nfor\nall\nother\nusers\nin\nthe\ndeployment.\nThe\nroot\ncause\nof\nthe\ntwo\nissues\nwere\nat\nthe\nTopology\nlevel\nin\nVitess.\nVitess\ncreated\nan\nadvisory\nfor\neach\nissue\nand\nassigned\nCVEʼs\nfor\nboth\nadvisories:\nID\nCVE\nSeverity\nADA-VIT-SA23-5\nCVE-2023-29194\nModerate\nADA-VIT-SA23-12\nCVE-2023-29195\nModerate\n3V i t e s s\nS e c u r i t y\nA u d i t ,\n2 0 2 3\nProject\nSummary\nThe\nauditors\nof\nAda\nLogics\nwere:\nName\nTitle\nEmail\nAdam\nKorczynski\nSecurity\nEngineer,\nAda\nLogics\nAdam@adalogics.com\nDavid\nKorczynski\nSecurity\nResearcher,\nAda\nLogics\nDavid@adalogics.com\nThe\nVitess\ncommunity\nmembers\ninvolved\nin\naudit\nwere:\nName\nTitle\nEmail\nDeepthi\nSigireddi\nProject\nLead\n&\nMaintainer\nDeepthi@planetscale.com\nAndrew\nMason\nMaintainer\nAndrew@planetscale.com\nFlorent\nPoinsard\nMaintainer\nFlorent@planetscale.com\nVeronica\nLopez\nContributor\nVeronica@planetscale.com\nDirkjan\nBussink\nMaintainer\nDbussink@planetscale.com\nThe\nfollowing\nfacilitators\nof\nOSTIF\nwere\nengaged\nin\nthe\naudit:\nName\nTitle\nEmail\nDerek\nZimmer\nExecutive\nDirector,\nOSTIF\nDerek@ostif.org\nAmir\nMontazery\nManaging\nDirector,\nOSTIF\nAmir@ostif.org\nAudit\nScope\nThe\nfollowing\nassets\nwere\nin\nscope\nof\nthe\naudit.\nRepository\nhttps://github.com/vitessio/vitess\nLanguage\nGo,\nTypescript\nThe\nfull\nVitess\nrepository\nwas\nconsidered\nin\nscope,\nhowever\nthe\nmain\nfocus\nof\nthe\naudit\nwas\nVTAdmin\nwhich\nis\nlocated\nat\nhttps://github.com/vitessio/vitess/tree/main/go/vt/vtadmin\n.\n4V i t e s s\nS e c u r i t y\nA u d i t ,\n2 0 2 
3\nThreat\nmodel\nformalisation\nIn\nthis\nsection\nwe\noutline\nthe\nthreat\nmodel\nof\nVitessʼs\nVTAdmin\ncomponent.\nWe\nfirst\noutline\nthe\ncore\ncomponents\nof\nVTAdmin.\nWe\nthen\ncover\nhow\nit\ninteracts\nwith\nthe\ninternal\ncomponents\nof\nVitess.\nNext,\nwe\nspecify\nthe\nthreat\nactors\nthat\ncould\nhave\na\nharmful\nimpact\non\na\nVTAdmin\ndeployment.\nFinally\nwe\nexemplify\nseveral\nthreat\nscenarios\nbased\non\nthe\nobservations\nwe\nmade\nwhen\noutlining\nthe\ncore\ncomponents\nand\nthe\nspecified\nthreat\nactors.\nWe\nused\nthe\nfollowing\nsources\nfor\nthe\nthreat\nmodelling:\n●\nVitessʼs\ndocumentation\nincluding\nREADME\nfiles\nfrom\nthe\nVitess\nrepository\n●\nVitessʼs\nsource\ncode\nat\nhttps://github.com/vitessio/vitess\n●\nFeedback\nfrom\nVitess\nmaintainers\nThe\nthreat\nmodel\nis\naimed\nat\nthree\ntypes\nof\nreaders:\n1.\nSecurity\nresearchers\nwho\nwish\nto\ncontribute\nto\nthe\nsecurity\nposture\nof\nVitess.\n2.\nMaintainers\nof\nVitess.\n3.\nUsers\nof\nVitess.\nWe\nexpect\nthat\nthe\nthreat\nmodel\nevolves\nover\ntime\nbased\non\nboth\nhow\nVitess\nand\nadoption\nevolve.\nAs\nsuch,\nthreat\nmodelling\nshould\nbe\nseen\nas\nan\nongoing\neffort.\nFuture\nsecurity\ndisclosures\nto\nthe\nVitess\nsecurity\nteam\nare\nopportunities\nto\nevaluate\nthe\nthreat\nmodel\nof\nthe\naffected\ncomponents.\nMost\ncompromises\nof\nVTAdmin\nhave\nthe\ngoal\nof\ncompromising\nthe\nfull\nVitess\ndeployment.\nAs\nsuch,\nthe\nthreat\nmodel\nof\na\nVitess\ndeployment\nand\nVTAdmin\nare\nclosely\naligned,\nbut\nthey\nare\nalso\ndifferent.\nOther\ncomponents\nof\nVitess\nhave\ndifferent\nattack\nvectors,\nthreat\nactors\nand\nsecurity\ndesigns.\nThe\nthreat\nmodel\nin\nthis\nreport\nis\nsolely\nfor\nVitessʼs\nVTAdmin\ncomponent.\nV T A d m i n\na r c h i t e c t u r e\nVTAdmin\nis\na\ncomponent\nfor\nmanaging\nVitess\nclusters.\nIt\nis\nintended\nto\nbe\nused\nby\nadministrators,\nas\nthe\nname\nsuggests.\nAs\nsuch,\nnon-admin\nusers\nshould\nnot\nbe\nable\nto\nperform\nthe\nactions\nthat\nthe\nadmin\nusers\ncan.\nVTAdmin\nconsists\nof\ntwo\ncomponents:\n1.\nA\nweb\ninterface\n-\nVTAdmin-web\n2.\nA\nserver\n-\nVTAdmin-api\n5V i t e s s\nS e c u r i t y\nA u d i t ,\n2 0 2 3\nThe\nweb\ninterface\nconnects\nto\nthe\nserver\nwhich\nin\nturn\nforwards\nthe\nrequests\nto\nthe\nVitess\ninternals:\nF r o m\nh t t p s : / / v i t e s s . i o / d o c s / 1 7 . 
Authentication and authorization
VTAdmin does two things when receiving incoming requests: 1) it first authenticates the request, and 2) it then checks the authorization level for the user sending the request. In VTAdmin, authentication is the task of obtaining the actor that is sending the request, and authorization evaluates whether the actor has permission to make the request. Vitess calls authenticated users "actors". Once VTAdmin has obtained an actor from the incoming request, VTAdmin validates the actor against the RBAC rules. As such, the flow of handling the permissions of incoming requests is: authenticate the request first, then authorize it against RBAC.

Authentication
Authentication in VTAdmin has the purpose of answering the question of who is sending a request. VTAdmin does not have a default authenticator, so users are required to implement their own via the Authenticator interface:
https://github.com/vitessio/vitess/blob/da1906d54eaca4447e039d90b96fb07251ae852c/go/vt/vtadmin/rbac/authentication.go#L37

Vitess links to an example authentication plugin which is available here: https://gist.github.com/ajm188/5b2c7d3ca76004a297e6e279a54c2299. This example plugin extracts the actor from either the context of a request or from a cookie.

When a Vitess administrator adds an authentication plugin, VTAdmin-api adds it as a middleware at the http mux layer. VTAdmin-api does this in vitess/go/vt/vtadmin/api.go when the routes are initialized: it first checks whether the user has registered an authentication plugin, and if so the plugin is added to the http mux layer.
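To make the plugin flow concrete, below is a minimal, schematic sketch of what such an authenticator could look like. It is illustrative only: the method names, signatures and the Actor type used here are assumptions loosely modelled on the description above; the authoritative interface is the one defined in go/vt/vtadmin/rbac/authentication.go and demonstrated in the linked example gist.

// Illustrative sketch only - the real Authenticator interface lives in
// go/vt/vtadmin/rbac/authentication.go and may differ from this shape.
package exampleauth

import (
    "context"
    "net/http"
)

// Actor mirrors the idea of an authenticated user ("actor") described above.
// The field names here are assumptions made for the example.
type Actor struct {
    Name  string
    Roles []string
}

// CookieAuthenticator resolves the actor from a request cookie, similar in
// spirit to the example plugin referenced in the report.
type CookieAuthenticator struct{}

// AuthenticateHTTP extracts the actor from an "actor" cookie. A production
// plugin would verify a signed token instead of trusting a plain value.
func (a *CookieAuthenticator) AuthenticateHTTP(r *http.Request) (*Actor, error) {
    c, err := r.Cookie("actor")
    if err != nil {
        return nil, err // no cookie: the request stays unauthenticated
    }
    return &Actor{Name: c.Value}, nil
}

// Authenticate resolves the actor for non-HTTP (gRPC) requests from the
// context; this stub simply reports "no actor".
func (a *CookieAuthenticator) Authenticate(ctx context.Context) (*Actor, error) {
    return nil, nil
}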
Authorization
Once a request has been authenticated, it can be authorized. In VTAdmin, authorization checks whether an actor can perform an action against a given resource. The logic is implemented here: https://github.com/vitessio/vitess/tree/main/go/vt/vtadmin/rbac. VTAdmin checks RBAC rules in the route handlers with a call to IsAuthorized, for example:
https://github.com/vitessio/vitess/blob/da1906d54eaca4447e039d90b96fb07251ae852c/go/vt/vtadmin/api.go#L755

func (api *API) GetClusters(ctx context.Context, req *vtadminpb.GetClustersRequest) (*vtadminpb.GetClustersResponse, error) {
    span, _ := trace.NewSpan(ctx, "API.GetClusters")
    defer span.Finish()

    clusters, _ := api.getClustersForRequest(nil)

    vcs := make([]*vtadminpb.Cluster, 0, len(clusters))

    for _, c := range clusters {
        if !api.authz.IsAuthorized(ctx, c.ID, rbac.ClusterResource, rbac.GetAction) {
            continue
        }

        vcs = append(vcs, &vtadminpb.Cluster{
            Id:   c.ID,
            Name: c.Name,
        })
    }

    return &vtadminpb.GetClustersResponse{
        Clusters: vcs,
    }, nil
}

Authentication and authorization are done at the VTAdmin-api level, not VTAdmin-web; VTAdmin-web is merely a client. In other words, authentication and authorization are not enforced when using the web UI - VTAdmin-web - but when the web UI communicates with the server.

If a threat actor is able to perform an action that they have not been granted access to via RBAC rules, that is a breach of security. An RBAC permission should only allow a user to carry out the actions against the resources that match the RBAC rules specified by the cluster admin.

Authentication and Authorization threat scenarios
Having defined how authentication and authorization work in VTAdmin, we now enumerate a list of threat scenarios and risks concerning VTAdmin.

Users can claim to be a user that they are not
If users are able to claim to be someone they are not, they can launch a number of different attacks against the cluster. For example, by claiming to be a user with higher privileges, they are potentially able to elevate their RBAC permissions. Or the user could disguise themselves under the pretence of another user when performing reconnaissance against the cluster or exploiting a vulnerability.

Users can perform actions that they do not have permission to perform
VTAdmin's RBAC has two main goals:
1. Users should be able to perform the actions that they have been permitted to.
2. Users should not be able to perform actions that they have not been permitted to.

The first goal is related to both the reliability of VTAdmin as well as its security posture; if a cluster admin has granted permissions to a user to perform an action against a resource, the user should not be prevented from doing said action. Issues with this goal are related to the reliability and not the security of VTAdmin, with one exception:
● If User A has permissions to perform an action but cannot perform it because User B has disabled this functionality for User A, and User B should not be able to disable this.

Goal 2 is fully related to the security posture of VTAdmin: if any user can carry out an action that they have not been granted permission to, then it is a breach of VTAdmin-api's RBAC. An attacker could do this by utilizing existing RBAC privileges for a given action and resource to obtain permissions to perform actions against resources that the attacker does not have permission to. For example, if a user is able to utilize create privileges to cause Vitess to delete a resource, the user has elevated their privileges. The root cause of such an attack scenario is likely to be an implementation error.

The role of VTAdmin and Vitess's attack surface
VTAdmin adds a new, more granular user access control than Vitess has previously had. In a deployment without VTAdmin, a user with permission to perform one action against one resource can perform all actions against all resources. VTAdmin introduces granular permission controls. This may cause users to over-permit access to keep permissions at the same level of simplicity - i.e. allow users either full access or none. To this end, users should be well advised in maintaining a well-configured RBAC policy.

Threat actors
A threat actor is an individual or group that intentionally attempts to exploit vulnerabilities, deploy malicious code, or compromise or disrupt a VTAdmin deployment, often for personal gain, espionage, or sabotage. We identify the following threat actors for VTAdmin. A threat actor can assume multiple profiles from the table below; for example, a fully untrusted user can also be a contributor to a 3rd-party library used by VTAdmin.

Actor: Fully untrusted users
Description: Users that have not been granted any permissions and that the Vitess cluster admin does not know the identity of.
Have already escalated privileges: No

Actor: Limited access users
Description: Users that have been granted some RBAC permissions but not others. Note that this actor is always awarded and never obtained. For example, a fully untrusted user can seek to become a limited access user, but this would be a security breach performed by the fully untrusted user actor.
Have already escalated privileges: No

Actor: Contributors to 3rd-party dependencies
Description: Contributors to dependencies used by Vitess.
Have already escalated privileges: No

Actor: Actor with local network or local file access
Description: An actor that has breached some security boundaries of the environment to get to the position of having access to the local network or file system.
Have already escalated privileges: Yes

Actor: Well-funded criminal groups
Description: Organized criminal groups that often have either political or economic goals.
Have already escalated privileges: No

Trust boundaries
A software trust boundary is a boundary within a software system that separates trusted components and actors from untrusted ones. In this section we enumerate the trust boundaries for VTAdmin. We first consider the trust boundaries of VTAdmin-web and then of VTAdmin-api.

VTAdmin-web
VTAdmin-web is meant to be deployed in a trusted environment, meaning that an attacker needs to compromise the security measures of the environment to gain access to VTAdmin-web. There are two security measures that an attacker can compromise: 1) the file system of a running VTAdmin deployment, and 2) an app that is responsible for authenticating the VTAdmin-web client.

File system
Trust increases when an actor obtains access to the local file system. An attacker with local access may be able to access VTAdmin-web. In this case, the trust boundary is the local file system.

Compromising existing app
Another use case of VTAdmin-web is to integrate it into an existing web app, where the existing web app already contains its own authentication mechanism. As such, users first have to authenticate by way of the existing app to access VTAdmin-web. In this case, a trust boundary exists between the internet and the existing web app.

VTAdmin-api
The threat model of the VTAdmin-api has one trust boundary between the web ui and the VTAdmin-api. Once a request has been authenticated and authorized, it will not cross any further trust boundaries. The requests made by VTAdmin-web are unauthenticated and unauthorized until VTAdmin-api authenticates and authorizes them. In other words, the request becomes trusted after it passes VTAdmin-api.

Attack surface
A software attack surface refers to all possible entry points, vulnerabilities, and weak points within a software system that can be targeted or exploited by attackers to compromise its security. In this section we detail the attack surface of VTAdmin.

API endpoints
VTAdmin exposes a series of HTTP endpoints that handle a wide range of different operations, and they are susceptible to a wide range of attacks. An attacker will need to be able to send requests to the VTAdmin-api server or have access to an authenticated VTAdmin-web client, but once they have obtained that, the attack complexity is simple; an attacker will launch an attack through requests to the server.

3rd-party dependencies
Security issues in VTAdmin's 3rd-party dependencies can have a negative impact on VTAdmin. This can be achieved in several ways; for example, a threat actor could deliberately contribute vulnerable code that has a negative impact on VTAdmin's users. VTAdmin's dependencies are open source libraries, most of which accept community contributions, and carefully placed vulnerabilities in some dependencies would make exploitation of VTAdmin users possible. Alternatively, VTAdmin's dependencies could have vulnerabilities that a threat actor knows exist but does not place in the code. Threat actors can obtain information on vulnerabilities in public registries and assess whether projects use the vulnerable version. In either case, a threat actor can use a vulnerability in a 3rd-party dependency to escalate privileges and cause harm to VTAdmin users.

Local attacker
An attacker who has compromised the machine running VTAdmin may escalate privileges by listening on the network. For example, VTAdmin-api connects to Vtctld over GRPC. At this stage the request is already authenticated, and if an attacker can find a way to read traffic, they are potentially able to bypass authentication and assume the highest level of permissions that the RBAC can grant. A local attacker with limited control over the file system can have a high impact, but the attack surface is small; VTAdmin does not rely heavily on the file system, and an attacker's options are therefore limited. However, an impactful vector could be controlling the rbac.yaml, which could allow an attacker to assign permissions to themselves, thus controlling the authentication at the highest possible level.

Fuzzing
As part of the audit, Ada Logics assessed Vitess's fuzz test suite with the purpose of improving it to cover critical parts of VTAdmin. Vitess has done extensive fuzzing work; it carried out a fuzzing audit in 2020 which added coverage to complex text processing routines. Vitess is integrated into OSS-Fuzz, which allows the fuzzers to run continuously and notify maintainers in case the fuzzers find bugs.

The Vitess source code and the source code for the Vitess fuzzers are the two key software packages that OSS-Fuzz uses to fuzz Vitess. The current OSS-Fuzz set up builds the fuzzers by cloning the upstream Vitess Github repository to get the latest Vitess source code and the CNCF-Fuzzing Github repository to get the latest set of fuzzers, and then builds the fuzzers against the cloned Vitess code. As such, the fuzzers are always run against the latest Vitess commit. This build cycle happens daily and OSS-Fuzz will verify if any existing bugs have been fixed. If OSS-Fuzz finds that any bugs have been fixed, OSS-Fuzz marks the crashes as fixed in the Monorail bug tracker and notifies maintainers. In each fuzzing iteration, OSS-Fuzz uses its corpus accumulated from previous fuzz runs.

If OSS-Fuzz detects any crashes when running the fuzzers, OSS-Fuzz performs the following actions:
1. A detailed crash report is created.
2. An issue in the Monorail bug tracker is created.
3. An email is sent to maintainers with links to the report and the relevant entry in the bug tracker.

OSS-Fuzz has a 90 day disclosure policy, meaning that a bug becomes public in the bug tracker if it has not been fixed. The detailed report is never made public. The Vitess maintainers will fix issues upstream, and OSS-Fuzz will pull the latest Vitess master branch the next time it performs a fuzz run and verify that a given issue has been fixed.

Vitess's fuzzers reside in CNCF's dedicated fuzzing repository, https://github.com/cncf/cncf-fuzzing, in which the community maintains them. In addition, community members also maintain the build, so that the fuzzers keep running, in case upstream code changes break the build.

During the audit, Ada Logics wrote 3 new fuzzers:
#  Name                  URL                                                                                                                              Running on OSS-Fuzz
1  FuzzKeyspaceCreation  https://github.com/cncf/cncf-fuzzing/blob/83bad32323d4a3515717c5f144faf38b2c7d20cb/projects/vitess/fuzz_keyspace_creation.go     Yes
2  FuzzShardCreation     https://github.com/cncf/cncf-fuzzing/blob/83bad32323d4a3515717c5f144faf38b2c7d20cb/projects/vitess/fuzz_shard_creation.go        Yes
3  FuzzTabletCreation    https://github.com/cncf/cncf-fuzzing/blob/bfec152c497f6d8e0786d2f89d99788b890e847f/projects/vitess/fuzz_tablet_test.go           Yes

The fuzzers target APIs at the topology server level responsible for creating keyspaces, shards and tablets and follow a similar pattern. Each fuzzer tests whether it can create a resource that will block subsequent operations against the given type. For example, the fuzzer for the shards will attempt to create a shard and afterwards test if operations - such as get-operations - against shards are rejected or fail.

The fuzzers do not target the newly written VTAdmin code base, but they are still relevant for VTAdmin. In fact, during the auditing of the VTAdmin web interface, Ada Logics found two vulnerabilities with root cause at the topology level that were triggerable from VTAdmin. The two vulnerabilities allow users to create invalid keyspaces and shards that will block future operations against keyspaces and shards. An attacker could trigger these by creating the type with a well-crafted name. To test exhaustively for malicious names, Ada Logics wrote the three fuzzers. This proved fruitful instantly, as the shard fuzzer found more special cases in the shard name than were found during the manual auditing. Ada Logics added the three fuzzers to Vitess's OSS-Fuzz integration, allowing them to run continuously and test for more special cases as well as code changes.
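As a rough illustration of the pattern these fuzzers follow, the sketch below uses Go's native fuzzing support to throw arbitrary names at a create-then-read cycle. It is not the actual CNCF-Fuzzing harness: createShard and getShards are hypothetical stand-ins for the real topology-server calls, and only the property being checked - a name that can be created must not break subsequent reads - mirrors the audit's fuzzers.

// Schematic only: createShard and getShards stand in for the real topology calls.
package topofuzz

import (
    "errors"
    "strings"
    "testing"
)

var store = map[string]struct{}{}

func createShard(name string) error {
    // A real implementation would validate the name; the audit's point is
    // that names such as "a/a" slipped through validation.
    if name == "" {
        return errors.New("empty shard name")
    }
    store[name] = struct{}{}
    return nil
}

func getShards() ([]string, error) {
    out := make([]string, 0, len(store))
    for name := range store {
        // Mimic the failure mode found in the audit: a "/" in a stored
        // name poisons listing for everyone.
        if strings.Contains(name, "/") {
            return nil, errors.New("node doesn't exist")
        }
        out = append(out, name)
    }
    return out, nil
}

func FuzzShardCreation(f *testing.F) {
    f.Add("commerce-0")
    f.Add("a/a")
    f.Fuzz(func(t *testing.T, name string) {
        store = map[string]struct{}{} // fresh store per input
        if err := createShard(name); err != nil {
            return // rejected names are fine
        }
        if _, err := getShards(); err != nil {
            t.Fatalf("accepted shard name %q but listing now fails: %v", name, err)
        }
    })
}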
Issues found
Here we present the issues that we identified during the audit.

#   ID               Title                                                                            Severity       Fixed
1   ADA-VIT-SA23-1   Missing documentation on deploying VTAdmin-web securely                          Moderate       Yes
2   ADA-VIT-SA23-2   Insecure cryptographic primitives                                                Informational  Yes
3   ADA-VIT-SA23-3   SQL injection in sqlutils                                                        Informational  Yes
4   ADA-VIT-SA23-4   Path traversal in VtctldServers GetBackups method                                Moderate       Yes
5   ADA-VIT-SA23-5   Users that can create keyspaces can deny access to already existing keyspaces    Moderate       Yes
6   ADA-VIT-SA23-6   VTAdmin-web ui is not authenticated by default                                   Moderate       No
7   ADA-VIT-SA23-7   Critical 3rd-party dependency is archived                                        Low            No
8   ADA-VIT-SA23-8   VTAdmin not protected by a rate limiter                                          Moderate       No
9   ADA-VIT-SA23-9   Profiling endpoints exposed by default                                           Moderate       Partially
10  ADA-VIT-SA23-10  Unsanitized parameters in html could lead to XSS                                 Low            Yes
11  ADA-VIT-SA23-11  Zip bomb in k8stopo                                                              Low            No
12  ADA-VIT-SA23-12  VTAdmin users that can create shards can deny access to other functions          Moderate       Yes

ADA-VIT-SA23-1: Missing documentation on deploying VTAdmin-web securely
ID: ADA-VIT-SA23-1
Component: VTAdmin
Severity: Moderate
Fixed in: https://vitess.io/docs/17.0/reference/vtadmin/operators_guide/#best-practices

We recommend adding a document on how to securely deploy and use VTAdmin. The purpose of this document is to provide a single source of actionable steps to use VTAdmin securely. The Vitess documentation currently contains limited information about the RBAC of Vitess, which is positive, however we consider the documentation incomplete. Lack of complete documentation on VTAdmin's security could result in users unknowingly using VTAdmin in a way that is known to be insecure and is either not documented, or in users having to read the docs in full to find out that their deployment is insecure. A security best practices document outlines the properties that Vitess considers insecure for users. For example, users wishing to write an authentication plugin would benefit from a general security best practices checklist. At the moment, Vitess does not offer guidelines on writing a secure plugin. Vitess provides an example - which is positive - however, the example demonstrates a minimum viable authenticator that has not been hardened for security; for example, the actor name is sent in plain text, and there is no minimum length required for the actor name.

ADA-VIT-SA23-2: Insecure cryptographic primitives
ID: ADA-VIT-SA23-2
Component: Multiple
Severity: Informational
Fixed: Yes

Vitess uses insecure hashing functions in a number of places across different packages. Usage of insecure hashing functions should be justified, and preferably in the code where they are used. Vitess worked on clarifying all usages and found that all uses of insecure hashing functions fall into one of two categories: they are either not cryptographic primitives or Vitess is bound to use a specific hashing algorithm to comply with MySQL's interface. This table illustrates how each case is categorized:

#  Component          Usage
1  MySQL Protocol     To implement MySQL handshake
2  Vindexes           Non-cryptographic hash
3  Evalengine         To support MySQL built-in functions
4  Tmutils            Non-cryptographic hash
5  S3 Backup Storage  Non-cryptographic hash part of the S3 API

As such, this issue did not require any code changes, and it has been kept here in the report as a reference for users that have internal policies that are sensitive to insecure hash functions. Vitess did remove the use of MD5 in Tmutils (https://github.com/vitessio/vitess/pull/12999), but this was due to the code not being used rather than a security fix.

1: MySQL handshake
Vitess's mysql package implements a function that computes the hash of a mysql password using SHA1. SHA1 has been broken since 2004, deprecated by NIST since 2011, and security researchers have proven collisions in practice (https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html).
s\nID\nADA-VIT-SA23-2\nComponent\nMultiple\nSeverity\nInformational\nFixed\nYes\nVitess\nuses\ninsecure\nhashing\nfunctions\nin\na\nnumber\nof\nplaces\nacross\ndifferent\npackages.\nUsage\nof\ninsecure\nhashing\nfunctions\nshould\nbe\njustified,\nand\npreferably\nin\nthe\ncode\nwhere\nthey\nare\nused.\nVitess\nworked\non\nclarifying\nall\nusages\nand\nfound\nthat\nall\nuses\nof\ninsecure\nhashing\nfunctions\nfall\nin\none\nof\ntwo\ncategories:\nthey\nare\neither\nnot\ncryptographic\nprimitives\nor\nVitess\nare\nbound\nto\nuse\na\nspecific\nhashing\nalgorithm\nto\ncomply\nwith\nMySQLʼs\ninterface.\nThis\ntable\nillustrate\nhow\neach\ncase\nis\ncategorized:\n#\nComponent\nUsage\n1\nMySQL\nProtocol\nTo\nimplement\nMySQL\nhandshake\n2\nVindexes\nNon-cryptographic\nhash\n3\nEvalengine\nTo\nsupport\nMySQL\nbuilt-in\nfunctions\n4\nTmutils\nNon-cryptographic\nhash\n5\nS3\nBackup\nStorage\nNon-cryptographic\nhash\npart\nof\nthe\nS3\nAPI\nAs\nsuch,\nthis\nissue\ndid\nnot\nrequire\nany\ncode\nchanges,\nand\nit\nhas\nbeen\nkept\nhere\nin\nthe\nreport\nas\na\nreference\nfor\nusers\nthat\nhave\ninternal\npolicies\nthat\nare\nsensitive\nto\ninsecure\nhash\nfunctions.\nVitess\ndid\nremove\nthe\nuse\nof\nMD5\nin\nTmutils\n1\n,\nbut\nthis\nwas\ndue\nto\nthe\ncode\nnot\nbeing\nused\nrather\nthan\na\nsecurity\nfix.\n1:\nMySQL\nhandshake\nVitessʼs\nmysql\npackage\nimplements\na\nfunction\nthat\ncomputes\nthe\nhash\nof\na\nmysql\npassword\nusing\nSHA1.\nSHA1\nhas\nbeen\nbroken\nsince\n2004,\ndeprecated\nby\nNIST\nsince\n2011,\nand\nsecurity\nresearchers\nhave\nproven\ncollisions\nin\npractice\n2\n.\n2\nh t t p s : / / s e c u r i t y . g o o g l e b l o g . c o m / 2 0 1 7 / 0 2 / a n n o u n c i n g - f i r s t - s h a 1 - c o l l i s i o n . h t m l\n1\nh t t p s : / / g i t h u b . 
c o m / v i t e s s i o / v i t e s s / p u l l / 1 2 9 9 9\n1 8V i t e s s\nS e c u r i t y\nA u d i t ,\n2 0 2 3\nIn\nthe\ncase\nof\nVitessʼs\nmysql\npackage,\nSHA1\nis\nused\nto\nhash\na\npassword\nwhich\nwe\nconsider\nsecurity\nsensitive\ndata.\nWe\nconsider\nthis\na\nsecurity\nissue.\nWe\nrecommend\nusing\na\nsecure\nhashing\nalgorithm.\nThe\nissue\nexists\nin\nvitess/go/mysql/auth_server.go\ninScrambleMysqlNativePassword\n:\nhttps://github.com/vitessio/vitess/blob/58e2719069c35c2820e1bf33324f27c3fb5852f1/go/mysql/a\nuth_server.go#L251\nfuncScrambleMysqlNativePassword(salt,password[]byte)[]byte{iflen(password)==0{returnnil}\n//stage1Hash=SHA1(password)crypt:=sha1.New()crypt.Write(password)stage1:=crypt.Sum(nil)\n//scrambleHash=SHA1(salt+SHA1(stage1Hash))//innerHashcrypt.Reset()crypt.Write(stage1)hash:=crypt.Sum(nil)//outerHashcrypt.Reset()crypt.Write(salt)crypt.Write(hash)scramble:=crypt.Sum(nil)\n//token=scrambleHashXORstage1Hashfori:=rangescramble{scramble[i]^=stage1[i]}returnscramble}\n2 :\nV i n d e x e s\nhttps://github.com/vitessio/vitess/blob/c43a162ea567f47a89b8d4a506d2995740737b79/g\no/vt/vtgate/vindexes/hash.go#L139\nvarblockDEScipher.Block\nfuncinit(){varerrerrorblockDES,err=des.NewCipher(make([]byte,8))iferr!=nil{panic(err)}Register(\"hash\",NewHash)}\n1 9V i t e s s\nS e c u r i t y\nA u d i t ,\n2 0 2 3\nfuncvhash(shardKeyuint64)[]byte{varkeybytes,hashed[8]bytebinary.BigEndian.PutUint64(keybytes[:],shardKey)blockDES.Encrypt(hashed[:],keybytes[:])returnhashed[:]}\nfuncvunhash(k[]byte)(uint64,error){iflen(k)!=8{return0,fmt.Errorf(\"invalidkeyspaceid:%v\",hex.EncodeToString(k))}varunhashed[8]byteblockDES.Decrypt(unhashed[:],k)returnbinary.BigEndian.Uint64(unhashed[:]),nil}\n3 :\nE v a l e n g i n e\nhttps://github.com/vitessio/vitess/blob/a502fceda310886223342020136db5718ace34a5/g\no/vt/vtgate/evalengine/fn_crypto.go#L67\nfunc(call*builtinSHA1)eval(env*ExpressionEnv)(eval,error){arg,err:=call.arg1(env)iferr!=nil{returnnil,err}ifarg==nil{returnnil,nil}\nb:=evalToBinary(arg)sum:=sha1.Sum(b.bytes)buf:=make([]byte,hex.EncodedLen(len(sum)))hex.Encode(buf,sum[:])returnnewEvalText(buf,defaultCoercionCollation(call.collate)),nil}\nhttps://github.com/vitessio/vitess/blob/a502fceda310886223342020136db5718ace34a5/g\no/vt/vtgate/evalengine/fn_crypto.go#L39\nfunc(call*builtinMD5)eval(env*ExpressionEnv)(eval,error){arg,err:=call.arg1(env)iferr!=nil{returnnil,err}ifarg==nil{returnnil,nil}\nb:=evalToBinary(arg)sum:=md5.Sum(b.bytes)buf:=make([]byte,hex.EncodedLen(len(sum)))hex.Encode(buf,sum[:])returnnewEvalText(buf,defaultCoercionCollation(call.collate)),nil\n2 0V i t e s s\nS e c u r i t y\nA u d i t ,\n2 0 2 
3\n}\nhttps://github.com/vitessio/vitess/blob/641e5c6acc2345a4920d22745a7f9dbeb19e39c5/g\no/vt/vtgate/evalengine/compiler_asm.go#L3007\nfunc(asm*assembler)Fn_SHA1(colcollations.TypedCollation){asm.emit(func(env*ExpressionEnv)int{arg:=env.vm.stack[env.vm.sp-1].(*evalBytes)\nsum:=sha1.Sum(arg.bytes)buf:=make([]byte,hex.EncodedLen(len(sum)))hex.Encode(buf,sum[:])\narg.tt=int16(sqltypes.VarChar)arg.bytes=bufarg.col=colreturn1},\"FNSHA1VARBINARY(SP-1)\")}\nhttps://github.com/vitessio/vitess/blob/641e5c6acc2345a4920d22745a7f9dbeb19e39c5/g\no/vt/vtgate/evalengine/compiler_asm.go#L2992\nfunc(asm*assembler)Fn_MD5(colcollations.TypedCollation){asm.emit(func(env*ExpressionEnv)int{arg:=env.vm.stack[env.vm.sp-1].(*evalBytes)\nsum:=md5.Sum(arg.bytes)buf:=make([]byte,hex.EncodedLen(len(sum)))hex.Encode(buf,sum[:])\narg.tt=int16(sqltypes.VarChar)arg.bytes=bufarg.col=colreturn1},\"FNMD5VARBINARY(SP-1)\")}\n4 :\nT m u t i l s\nhttps://github.com/vitessio/vitess/blob/d1685d96bd7c2a57fc48a7e42ac38e4897741824/g\no/vt/mysqlctl/tmutils/schema.go#L206\nfuncGenerateSchemaVersion(sd*tabletmanagerdatapb.SchemaDefinition){hasher:=md5.New()for_,td:=rangesd.TableDefinitions{if_,err:=hasher.Write([]byte(td.Schema));err!=nil{panic(err)//extremelyunlikely}}sd.Version=hex.EncodeToString(hasher.Sum(nil))}\n2 1V i t e s s\nS e c u r i t y\nA u d i t ,\n2 0 2 3\n5 :\nS 3\nB a c k u p\nS t o r a g e\nhttps://github.com/vitessio/vitess/blob/adb2535cf79926d2d9ecf9710a280d657103f74a/go\n/vt/mysqlctl/s3backupstorage/s3.go#L253\nfunc(s3ServerSideEncryption*S3ServerSideEncryption)init()error{s3ServerSideEncryption.reset()\nifstrings.HasPrefix(sse,sseCustomerPrefix){sseCustomerKeyFile:=strings.TrimPrefix(sse,sseCustomerPrefix)base64CodedKey,err:=os.ReadFile(sseCustomerKeyFile)iferr!=nil{log.Errorf(err.Error())returnerr}\ndecodedKey,err:=base64.StdEncoding.DecodeString(string(base64CodedKey))iferr!=nil{decodedKey=base64CodedKey}\nmd5Hash:=md5.Sum(decodedKey)s3ServerSideEncryption.customerAlg=aws.String(\"AES256\")s3ServerSideEncryption.customerKey=aws.String(string(decodedKey))s3ServerSideEncryption.customerMd5=aws.String(base64.StdEncoding.EncodeToString(md5Hash[:]))}elseifsse!=\"\"{s3ServerSideEncryption.awsAlg=&sse}returnnil}\n2 2V i t e s s\nS e c u r i t y\nA u d i t ,\n2 0 2 3\nA D A - V I T - S A 2 3 - 3 :\nS Q L\ni n j e c t i o n\ni n\ns q l u t i l s\nID\nADA-VIT-SA23-3\nComponent\nsqlutils\nSeverity\nInformational\nFixed\nin:\nhttps://github.com/vitessio/vitess/pull/12929\nThe\nsqlutils\npackage\ncontains\nan\nSQL\nInjection\nvulnerability.\nThe\nroot\ncause\nof\nthe\nvulnerability\nis\nthat\nsqlutils\nwill\ngenerate\nan\nsql\nquery\nwithout\nsanitising\nthe\ninput\nthus\nessentially\nallowing\nthe\nuser\nto\ncontrol\nthe\nfull\nquery.\nhttps://github.com/vitessio/vitess/blob/8263d6301ce1809891afb27c85294fcb3572395e/go\n/vt/external/golib/sqlutils/sqlutils.go#L423\nfuncWriteTable(db*sql.DB,tableNamestring,dataNamedResultData)(errerror){iflen(data.Data)==0{returnnil}iflen(data.Columns)==0{returnnil}placeholders:=make([]string,len(data.Columns))fori:=rangeplaceholders{placeholders[i]=\"?\"}query:=fmt.Sprintf(`replaceinto%s(%s)values(%s)`,tableName,strings.Join(data.Columns,\",\"),strings.Join(placeholders,\",\"),)for_,rowData:=rangedata.Data{if_,execErr:=db.Exec(query,rowData.Args()...);execErr!=nil{err=execErr}}returnerr}\nThe\nvulnerable\ncode\nwas\nnot\nused\nin\nany\nVitess\nrelease\nand\nwas\nremoved\nfrom\nthe\nproject.\n2 3V i t e s s\nS e c u r i t y\nA u d i t ,\n2 0 2 3\nA D A - V I T - S A 2 3 - 4 :\nP a t h\nt r a 
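Since placeholders cannot be used for identifiers such as table and column names, a common mitigation - shown here only as an illustrative sketch, not as the fix adopted in the linked PR (the project removed the unused code) - is to validate identifiers against a strict pattern before interpolating them into the statement:

// Illustrative mitigation sketch for identifier handling.
package sqlsafe

import (
    "fmt"
    "regexp"
    "strings"
)

// identRe accepts plain identifiers only: letters, digits and underscores.
var identRe = regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_]*$`)

func safeIdent(name string) (string, error) {
    if !identRe.MatchString(name) {
        return "", fmt.Errorf("refusing unsafe SQL identifier %q", name)
    }
    return name, nil
}

// buildReplaceQuery builds the same kind of statement as WriteTable above,
// but only from validated identifiers; values still go through placeholders.
func buildReplaceQuery(table string, columns []string) (string, error) {
    t, err := safeIdent(table)
    if err != nil {
        return "", err
    }
    cols := make([]string, len(columns))
    placeholders := make([]string, len(columns))
    for i, c := range columns {
        col, err := safeIdent(c)
        if err != nil {
            return "", err
        }
        cols[i] = col
        placeholders[i] = "?"
    }
    return fmt.Sprintf("replace into %s (%s) values (%s)",
        t, strings.Join(cols, ","), strings.Join(placeholders, ",")), nil
}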
ADA-VIT-SA23-4: Path traversal in VtctldServers GetBackups method
ID: ADA-VIT-SA23-4
Component: vtctld server
Severity: Moderate
Fixed in: https://github.com/vitessio/website/pull/1471

A path traversal vulnerability exists in the VtctldServers GetBackups method from a path being created from parameters of the incoming requests. This allows a request to pass a value that could traverse the storage.
https://github.com/vitessio/vitess/blob/aa87bc4e9f05e3de3955e9641799eca9114d83bb/go/vt/vtctl/grpcvtctldserver/server.go#L1161

func (s *VtctldServer) GetBackups(ctx context.Context, req *vtctldatapb.GetBackupsRequest) (resp *vtctldatapb.GetBackupsResponse, err error) {
    span, ctx := trace.NewSpan(ctx, "VtctldServer.GetBackups")
    defer span.Finish()

    defer panicHandler(&err)

    span.Annotate("keyspace", req.Keyspace)
    span.Annotate("shard", req.Shard)
    span.Annotate("limit", req.Limit)
    span.Annotate("detailed", req.Detailed)
    span.Annotate("detailed_limit", req.DetailedLimit)

    bs, err := backupstorage.GetBackupStorage()
    if err != nil {
        return nil, err
    }
    defer bs.Close()

    bucket := filepath.Join(req.Keyspace, req.Shard)
    span.Annotate("backup_path", bucket)

    bhs, err := bs.ListBackups(ctx, bucket)
    if err != nil {
        return nil, err
    }

    totalBackups := len(bhs)
    if req.Limit > 0 {
        totalBackups = int(req.Limit)
    }
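A typical guard against this class of bug - given purely as an illustrative sketch under an assumed backup root, not as the change referenced in the fix link above - is to reject keyspace/shard components that contain separators or traversal sequences and to verify that the joined path stays inside the backup root:

// Illustrative sketch; the "root" argument is an assumption for the example.
package backupguard

import (
    "fmt"
    "path/filepath"
    "strings"
)

// safeJoin joins untrusted components under root and refuses results that
// could escape it via "..", absolute paths or embedded separators.
func safeJoin(root string, parts ...string) (string, error) {
    for _, p := range parts {
        if p == "" || p == ".." || strings.ContainsAny(p, `/\`) {
            return "", fmt.Errorf("invalid path component %q", p)
        }
    }
    joined := filepath.Join(append([]string{root}, parts...)...)
    cleanRoot := filepath.Clean(root) + string(filepath.Separator)
    if !strings.HasPrefix(filepath.Clean(joined)+string(filepath.Separator), cleanRoot) {
        return "", fmt.Errorf("path %q escapes backup root", joined)
    }
    return joined, nil
}

With this helper, safeJoin("/backups", "commerce", "../../etc") is rejected, while a well-formed keyspace/shard pair such as safeJoin("/backups", "commerce", "-80") is allowed.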
ADA-VIT-SA23-5: Users that can create keyspaces can deny access to already existing keyspaces
ID: ADA-VIT-SA23-5
Component: VTAdmin
Severity: Moderate
Fixed in: https://github.com/vitessio/vitess/pull/12843

Users that can create keyspaces via the VTAdmin-web UI can specify a name that prevents the endpoint at /keyspaces from displaying any keyspaces. If a user creates a keyspace from /keyspaces/create in VTAdmin-web and the name contains the character "/", then no keyspaces will be displayed at /keyspaces. In this screenshot, Ada Logics have created a keyspace named a/a, and 6 keyspaces exist:

[Screenshot not reproduced in this text version.]

If we check the developer tools, we see we get an error:

HttpResponseNotOkError: [status 500] /api/keyspaces: unknown rpc error: code = Unknown desc = node doesn't exist: /vitess/global/keyspaces/KEYSPACE_NAME/Keyspace

If a single keyspace returns an error, VTAdmin-api does not return any keyspaces. The below code is responsible for fetching the existing keyspaces:
https://github.com/vitessio/vitess/blob/da1906d54eaca4447e039d90b96fb07251ae852c/go/vt/vtadmin/cluster/cluster.go#L1141

func (c *Cluster) GetKeyspaces(ctx context.Context) ([]*vtadminpb.Keyspace, error) {
    span, ctx := trace.NewSpan(ctx, "Cluster.GetKeyspaces")
    defer span.Finish()

    AnnotateSpan(c, span)

    if err := c.topoReadPool.Acquire(ctx); err != nil {
        return nil, fmt.Errorf("GetKeyspaces() failed to acquire topoReadPool: %w", err)
    }

    resp, err := c.Vtctld.GetKeyspaces(ctx, &vtctldatapb.GetKeyspacesRequest{})
    c.topoReadPool.Release()

    if err != nil {
        return nil, err
    }

    var (
        m         sync.Mutex
        wg        sync.WaitGroup
        rec       concurrency.AllErrorRecorder
        keyspaces = make([]*vtadminpb.Keyspace, len(resp.Keyspaces))
    )

    for i, ks := range resp.Keyspaces {
        wg.Add(1)
        go func(i int, ks *vtctldatapb.Keyspace) {
            defer wg.Done()

            shards, err := c.FindAllShardsInKeyspace(ctx, ks.Name, FindAllShardsInKeyspaceOptions{})
            if err != nil {
                rec.RecordError(err)
                return
            }

            keyspace := &vtadminpb.Keyspace{
                Cluster:  c.ToProto(),
                Keyspace: ks,
                Shards:   shards,
            }

            m.Lock()
            defer m.Unlock()
            keyspaces[i] = keyspace
        }(i, ks)
    }

    wg.Wait()
    if rec.HasErrors() {
        return nil, rec.Error()
    }

    return keyspaces, nil
}

In the first chunk of highlighted code, rec records an error of a single keyspace. In the second chunk of highlighted code, GetKeyspaces returns nil and the error of the keyspace that had the error. This is a security issue because a user can control whether VTAdmin-api returns the existing keyspaces, thus enabling a denial-of-service attack vector. From the point of view of the threat model, this is a breach of security, because the user that creates the faulty keyspace has been granted permission to create keyspaces - not to prevent other users from viewing the existing keyspaces. The /topology route shows the existing keyspaces correctly. The /workflows and /schemas apis get denied too, when a user creates a keyspace containing the "/" char.

This issue was assigned CVE-2023-29194.

ADA-VIT-SA23-6: VTAdmin-web ui is not authenticated by default
ID: ADA-VIT-SA23-6
Component: VTAdmin
Severity: Moderate
Fixed: No

VTAdmin's web ui is not authenticated by default. As such, any default installation is insecure by default. The immediate impact is that the web ui is fully exposed to any user that has access to the domain and port hosting the web ui. This can allow a threat actor to achieve elevated privileges by getting access to a system running VTAdmin-web. For example, if the deployment is exposed to the internet, any untrusted user could achieve the level of privileges of the web ui that they can locate. It is likely that the VTAdmin-web ui is exposed on the internet, given that the web ui and VTAdmin-api are designed to be deployed on different domains. This is not a code vulnerability with high severity but a security issue related to VTAdmin's default settings. Missing authentication is a security issue related to VTAdmin's design. We understand from internal discussions during the audit that Vitess wishes to allow as flexible a usage of the web ui as possible, which authentication-by-default is counter-productive to. However, from the perspective of security, we recommend an authentication-by-default design that can be removed from deployments.

ADA-VIT-SA23-7: Critical 3rd-party dependency is archived
ID: ADA-VIT-SA23-7
Component: VTAdmin
Severity: Low
Fixed: No

VTAdmin uses the github.com/gorilla/mux library for routing incoming requests to VTAdmin-api. As of 9th December 2022, the gorilla/mux library has been archived and is now unmaintained. This does not mean that the library is insecure to use, but it does have implications for its security. One implication is that gorilla/mux is unlikely to fix issues - both reliability issues and security vulnerabilities. Furthermore, the project is unlikely to even accept and triage security disclosures. Another implication is that the project is unlikely to do its own ongoing security work. For example, Ada Logics attempted to involve the project in integrating continuous fuzzing by way of OSS-Fuzz in 2020 (https://github.com/gorilla/mux/pull/575) via a pull request that has still not been merged. As such, gorilla/mux has a low security posture that can affect VTAdmin. Since the library is designed to be exposed to untrusted input, security vulnerabilities could have a critical impact on VTAdmin.

ADA-VIT-SA23-8: VTAdmin not protected by a rate limiter
ID: ADA-VIT-SA23-8
Component: VTAdmin-api
Severity: Moderate
Fixed: No

Description
VTAdmin is not protected by a rate limiter, which makes it susceptible to multiple attack vectors. The underlying Vitess backend is guarded by a rate limiter, and the impact of this attack would be limited to stealing RBAC credentials or launching a DDoS attack.

PoC
We demonstrate the issue with the following PoC. The idea is that we should be able to execute all 100,000 requests without being blocked - which demonstrates lack of a rate limiter. The PoC checks the return value of the http response. The assumption is that if a rate limiter would prevent an attacker from sending 100,000 requests, VTAdmin would return an error or an empty response. The PoC therefore checks whether VTAdmin returns a valid hostname, and if not, then it breaks the loop and checks if it sent 100,000 requests.

import requests
import json

url = 'http://localhost:14200/api/vtctlds'

j = 0
for i in range(100000):
    x = requests.get(url)
    resp = json.loads(x.text)
    if resp["result"]["vtctlds"][0]["hostname"] != "localhost:15999":
        break
    j += 1

if j != 100000:
    print("We hit a limit")
else:
    print("We sent all 100000 requests")

This script sends all 100,000 requests to the server, successfully demonstrating how easy it is to exploit the lack of rate limiting.
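One way to address this class of issue - sketched here purely as an illustration using golang.org/x/time/rate, not as a mitigation the Vitess maintainers have committed to - is to wrap the API's HTTP handler in a rate-limiting middleware:

// Illustrative middleware sketch; the limits and wiring are assumptions.
package ratelimitmw

import (
    "net/http"

    "golang.org/x/time/rate"
)

// NewLimitedHandler allows roughly rps requests per second with a small
// burst, returning 429 once the budget is exhausted.
func NewLimitedHandler(next http.Handler, rps float64, burst int) http.Handler {
    limiter := rate.NewLimiter(rate.Limit(rps), burst)
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if !limiter.Allow() {
            http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
            return
        }
        next.ServeHTTP(w, r)
    })
}

A production deployment would normally key the limiter per client or per actor rather than globally, so that one noisy user cannot starve everyone else.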
ADA-VIT-SA23-9: Profiling endpoints exposed by default
ID: ADA-VIT-SA23-9
Component: Servenv
Severity: Low
Fixed: Partially

Vitess's servenv package exposes the HTTP handlers for profiling by default. We recommend exposing these handlers only if users choose to expose them, to prevent accidentally revealing sensitive information in a production deployment.
https://github.com/vitessio/vitess/blob/137cf9daf41112a553f617c66a56fd8b06fad20b/go/vt/servenv/servenv.go#L33

package servenv

import (
    // register the HTTP handlers for profiling
    _ "net/http/pprof"
    "net/url"
    "os"
    "os/signal"
    "runtime/debug"
    "strings"
    "sync"
    "syscall"
    "time"

Vitess is working on fixing this. It has been partially fixed in https://github.com/vitessio/vitess/pull/12987.

ADA-VIT-SA23-10: Unsanitized parameters in html could lead to XSS
ID: ADA-VIT-SA23-10
Component: Multiple
Severity: Low
Fixed in:
● https://github.com/vitessio/vitess/pull/12939
● https://github.com/vitessio/vitess/pull/12940

Vitess uses Go templating in a number of places to generate HTML but does not escape the parameters to the template. Vitess could be exposed to front-end attacks such as cross-site scripting, if an attacker manages to pass valid javascript into the templates. Ada Logics found the following parts of Vitess to be impacted:

https://github.com/vitessio/vitess/blob/867043971bd0aa969fe7e34ae8564330972e4d89/go/vt/topo/topoproto/shard.go#L54

func SourceShardAsHTML(source *topodatapb.Shard_SourceShard) template.HTML {
    result := fmt.Sprintf("<b>Uid</b>: %v</br>\n<b>Source</b>: %v/%v</br>\n", source.Uid, source.Keyspace, source.Shard)
    if key.KeyRangeIsPartial(source.KeyRange) {
        result += fmt.Sprintf("<b>KeyRange</b>: %v-%v</br>\n",
            hex.EncodeToString(source.KeyRange.Start), hex.EncodeToString(source.KeyRange.End))
    }
    if len(source.Tables) > 0 {
        result += fmt.Sprintf("<b>Tables</b>: %v</br>\n",
            strings.Join(source.Tables, " "))
    }
    return template.HTML(result)
}

https://github.com/vitessio/vitess/blob/bd78c08ced8f6a3e55279d308a5a8402fd6780bc/go/vt/srvtopo/status.go#L126

func (st *SrvKeyspaceCacheStatus) StatusAsHTML() template.HTML {
    if st.Value == nil {
        return template.HTML("No Data")
    }

    result := "<b>Partitions:</b><br>"
    for _, keyspacePartition := range st.Value.Partitions {
        result += "&nbsp;<b>" + keyspacePartition.ServedType.String() + ":</b>"
        for _, shard := range keyspacePartition.ShardReferences {
            result += "&nbsp;" + shard.Name
        }
        result += "<br>"
    }

    if len(st.Value.ServedFrom) > 0 {
        result += "<b>ServedFrom:</b><br>"
        for _, sf := range st.Value.ServedFrom {
            result += "&nbsp;<b>" + sf.TabletType.String() + ":</b>&nbsp;" + sf.Keyspace + "<br>"
        }
    }

    return template.HTML(result)
}

https://github.com/vitessio/vitess/blob/a49702d9f9782c14d96030c8d2771c8decb39948/go/vt/discovery/tablets_cache_status.go#L57

func (tcs *TabletsCacheStatus) StatusAsHTML() template.HTML {
    tLinks := make([]string, 0, 1)
    if tcs.TabletsStats != nil {
        sort.Sort(tcs.TabletsStats)
    }
    for _, ts := range tcs.TabletsStats {
        color := "green"
        extra := ""
        if ts.LastError != nil {
            color = "red"
            extra = fmt.Sprintf(" (%v)", ts.LastError)
        } else if !ts.Serving {
            color = "red"
            extra = " (Not Serving)"
        } else if ts.Target.TabletType == topodatapb.TabletType_PRIMARY {
            extra = fmt.Sprintf(" (PrimaryTermStartTime: %v)", ts.PrimaryTermStartTime)
        } else {
            extra = fmt.Sprintf(" (RepLag: %v)", ts.Stats.ReplicationLagSeconds)
        }
        name := topoproto.TabletAliasString(ts.Tablet.Alias)
        tLinks = append(tLinks, fmt.Sprintf(`<a href="%s" style="color:%v">%v</a>%v`, ts.getTabletDebugURL(), color, name, extra))
    }
    return template.HTML(strings.Join(tLinks, "<br>"))
}

https://github.com/vitessio/vitess/blob/47611bca3951ecdf442dda5c8fc12f4eb9cff29c/go/vt/callinfo/plugin_mysql.go#L56

func (mci *mysqlCallInfoImpl) HTML() template.HTML {
    return template.HTML("<b>MySQL User:</b> " + mci.user + " <b>Remote Addr:<b> " + mci.remoteAddr)
}

https://github.com/vitessio/vitess/blob/47611bca3951ecdf442dda5c8fc12f4eb9cff29c/go/vt/callinfo/plugin_grpc.go#L67

func (gci *gRPCCallInfoImpl) HTML() template.HTML {
    return template.HTML("<b>Method:</b> " + gci.method + " <b>Remote Addr:</b> " + gci.remoteAddr)
}

The Vitess maintainers triaged these cases extensively to assess whether user-controlled data could be passed to any of the templates to launch an XSS attack. Such an attack can be highly critical, since some of the templates are meant to be viewed by a Vitess admin. At the time of the audit, the Vitess maintainers found that the parameters passed to the templates were only user-controlled in one of the cases. This case was triaged heavily and the Vitess team found that an attack vector was not possible. To guard against future issues with templating, Vitess now uses the https://github.com/google/safehtml library for html templating. In addition, Vitess now uses templates instead of raw string concatenation for non-user controlled input.
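To illustrate the difference, the sketch below contrasts raw string concatenation with rendering the same fragment through a parsed html/template, which escapes values automatically. It is a generic Go illustration of the remediation direction described above, not the actual safehtml-based change the maintainers made.

// Generic auto-escaping illustration; not the safehtml-based code used by Vitess.
package main

import (
    "html/template"
    "os"
)

var userTmpl = template.Must(template.New("user").Parse(
    "<b>MySQL User:</b> {{.User}} <b>Remote Addr:</b> {{.Addr}}\n"))

func main() {
    data := struct {
        User string
        Addr string
    }{
        User: `<script>alert("xss")</script>`, // hostile input
        Addr: "10.0.0.1:3306",
    }

    // The template engine HTML-escapes .User, so the script tag is rendered
    // inert instead of being concatenated into the page verbatim.
    if err := userTmpl.Execute(os.Stdout, data); err != nil {
        panic(err)
    }
}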
com/google/safehtml library for HTML templating. In addition, Vitess now uses templates instead of raw string concatenation for non-user-controlled input.

ADA-VIT-SA23-11: Zip bomb in k8stopo

ID: ADA-VIT-SA23-11
Component: k8stopo
Severity: Low
Fixed: No

Description
K8stopo may be susceptible to a zip bomb attack from lack of size checking when extracting a zip file. k8stopo reads the contents of an extracted zip archive entirely into memory on the highlighted line below. If the archive is accidentally or intentionally crafted in such a way that it is larger than the available memory, the zip archive could cause k8stopo to exhaust memory, thereby resulting in denial of service.

https://github.com/vitessio/vitess/blob/395840969d183dbdb080eabf95b0bcd2ddefb885/go/vt/topo/k8stopo/file.go#L70

func unpackValue(value []byte) ([]byte, error) {
	decoder := base64.NewDecoder(base64.StdEncoding, bytes.NewBuffer(value))

	zr, err := gzip.NewReader(decoder)
	if err != nil {
		return []byte{}, fmt.Errorf("unable to create new gzip reader: %s", err)
	}

	decoded := &bytes.Buffer{}
	if _, err := io.Copy(decoded, zr); err != nil {
		return []byte{}, fmt.Errorf("error coppying uncompressed data: %s", err)
	}

	if err := zr.Close(); err != nil {
		return []byte{}, fmt.Errorf("unable to close gzip reader: %s", err)
	}

	return decoded.Bytes(), nil
}

ADA-VIT-SA23-12: VTAdmin users that can create shards can deny access to other functions

ID: ADA-VIT-SA23-12
Component: VTAdmin
Severity: Moderate
Fixed in: https://github.com/vitessio/vitess/pull/12917

Description
A user that can create shards in Vitess can also deny access to shards by creating a shard with a well-crafted name. This will cause Vitess to create the shard, and any access to shards will subsequently be denied. This is especially impactful for VTAdmin, which has granular permission control. As such, a user with low privileges - for example, only create privileges against shards - can deny all other users from fetching created shards. The issue has been assigned CVE-2023-29195.
SLSA review

In this section we present our findings from our SLSA compliance review of Vitess. SLSA is a framework for assessing artifact integrity and ensuring a secure supply chain for downstream users. In this part of the audit, we assessed Vitessʼs SLSA compliance by following SLSAʼs v0.1 requirements (https://slsa.dev/spec/v0.1/requirements). This version of the SLSA standard is currently in alpha and is likely to change. Our assessment shows Vitessʼs current level of compliance.

Vitess manages its source code on GitHub, which makes it version controlled and makes it possible to verify the commit history. The source code is retained indefinitely and all commits are verified by two different maintainers. The build is fully scripted and is invoked via Vitessʼs Makefile. The build runs in GitHub Actions, which provisions the build environment for building Vitess and does not reuse it for other purposes. GitHub Actions is not fully isolated, in that the build can access secrets mounted as environment variables. The build is also not fully hermetic, since it runs with network access, which Vitess needs to pull in dependencies at build time. Vitess lacks the provenance statement, and this is the area where Vitess can improve the most. Vitess can achieve level 1 SLSA compliance by:
● Making the provenance statement available with releases.
● Including the builder, artifacts and build instructions in the provenance. The build instructions are the highest level of entry, which in Vitessʼ case is the command that invokes the Makefile.

Overview
(✓ marks a requirement that is satisfied at the SLSA levels where it applies; ⛔ marks a requirement that is not satisfied.)

Requirement | SLSA 1 | SLSA 2 | SLSA 3 | SLSA 4
Source - Version controlled | | ✓ | ✓ | ✓
Source - Verified history | | | ✓ | ✓
Source - Retained indefinitely | | | | ✓
Source - Two-person reviewed | | | | ✓
Build - Scripted build | ✓ | ✓ | ✓ | ✓
Build - Build service | | ✓ | ✓ | ✓
Build - Build as code | | | ✓ | ✓
Build - Ephemeral environment | | | ✓ | ✓
Build - Isolated | | | ⛔ | ⛔
Build - Parameterless | | | | ✓
Build - Hermetic | | | | ⛔
Build - Reproducible | | | | ✓
Provenance - Available | ⛔ | ⛔ | ⛔ | ⛔
Provenance - Authenticated | | ⛔ | ⛔ | ⛔
Provenance - Service generated | | ⛔ | ⛔ | ⛔
Provenance - Non-falsifiable | | | ⛔ | ⛔
Provenance - Dependencies complete | | | | ⛔
Provenance - Identifies artifact | ⛔ | ⛔ | ⛔ | ⛔
Provenance - Identifies builder | ⛔ | ⛔ | ⛔ | ⛔
Provenance - Identifies build instructions | ⛔ | ⛔ | ⛔ | ⛔
Provenance - Identifies source code | | ⛔ | ⛔ | ⛔
Provenance - Identifies entry point | | | ⛔ | ⛔
Provenance - Includes all build parameters | | | ⛔ | ⛔
Provenance - Includes all transitive dependencies | | | | ⛔
Provenance - Includes reproducible info | | | | ⛔
Provenance - Includes metadata | ⛔ | ⛔ | ⛔ | ⛔
Common - Security | Not defined by SLSA requirements
Common - Access | | | | ✓
Common - Superusers | | | | ✓

Conclusions

In this engagement, Ada Logics completed a security audit of Vitessʼs VTAdmin component. The scope was well defined and set to be a 5-week engagement. The goals were to formalize a threat model of VTAdmin, conduct a manual code review of VTAdmin and the remaining Vitess codebase, assess and improve Vitessʼs fuzzing suite, and finally carry out a SLSA compliance review.

Our overall assessment of VTAdmin is highly positive. VTAdmin follows secure design and code practices, and VTAdmin-web is written with React, which is hardened to defend against many cases of cross-site scripting. The backend, VTAdmin-api, is written in Go, which is a memory-safe language. The VTAdmin code is clean and well-structured, making it easy to understand and audit. This is important both for external auditors such as Ada Logics and for the Vitess team when triaging bug reports.

The auditing team found two vulnerabilities during the audit, and the Vitess team were fast to respond to these. The Vitess team also extensively triaged another issue reported by Ada Logics to determine its severity. This professional response to security disclosures is an important element of a well-maintained security policy.

The highest severity of any issue found was Moderate, which is a testament to the security practices that Vitess follows with VTAdmin as well as the remaining code base.

Vitessʼs fuzzing suite is extensive, targets complex parts of the code base and runs continuously on OSS-Fuzz, which are important elements of a solid fuzzing suite. Ada Logics added two fuzzers that test the root cause for the two CVEs. The fuzzers found edge cases that could trigger both vulnerabilities but had not been found and fixed initially, and the Vitess team subsequently fixed these.

Vitess showed great initiative with their SLSA compliance, having started work on generating the provenance attestation before the audit commenced.

Ada Logics would like to thank the Vitess team for a productive security audit with fruitful collaboration on the found issues. We would also like to thank OSTIF for facilitating the audit and the CNCF for funding the audit." } ]
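A note on the zip-bomb finding (ADA-VIT-SA23-11) above: the root cause is that io.Copy decompresses attacker-influenced data with no upper bound. The sketch below is a generic illustration of the usual mitigation, bounding the copy with io.LimitReader. It is not the change Vitess itself made, and the function name, the 1 MiB cap and the sample payload are invented for the example.

package main

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"fmt"
	"io"
)

// maxDecodedSize is an illustrative, arbitrarily chosen cap on how much
// decompressed data we are willing to buffer in memory.
const maxDecodedSize = 1 << 20 // 1 MiB

// unpackValueBounded mirrors the shape of the unpackValue helper quoted in the
// finding, but refuses to inflate more than maxDecodedSize bytes, so a
// maliciously compressed value cannot exhaust memory.
func unpackValueBounded(value []byte) ([]byte, error) {
	decoder := base64.NewDecoder(base64.StdEncoding, bytes.NewReader(value))

	zr, err := gzip.NewReader(decoder)
	if err != nil {
		return nil, fmt.Errorf("unable to create gzip reader: %w", err)
	}
	defer zr.Close()

	var decoded bytes.Buffer
	// Copy at most maxDecodedSize+1 bytes; seeing the extra byte means the
	// payload exceeded the limit and is rejected instead of being buffered.
	n, err := io.Copy(&decoded, io.LimitReader(zr, maxDecodedSize+1))
	if err != nil {
		return nil, fmt.Errorf("error copying uncompressed data: %w", err)
	}
	if n > maxDecodedSize {
		return nil, fmt.Errorf("decompressed payload exceeds %d bytes", maxDecodedSize)
	}
	return decoded.Bytes(), nil
}

func main() {
	// Round-trip a small value to show the happy path.
	var buf bytes.Buffer
	b64 := base64.NewEncoder(base64.StdEncoding, &buf)
	gz := gzip.NewWriter(b64)
	gz.Write([]byte("topology data"))
	gz.Close()
	b64.Close()

	out, err := unpackValueBounded(buf.Bytes())
	fmt.Println(string(out), err)
}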
{ "category": "App Definition and Development", "file_name": "VIT-03-report-security-audit.pdf", "project_name": "Vitess", "subcategory": "Database" }
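For the templating findings (ADA-VIT-SA23-10) in the audit above, the risky pattern is concatenating values into a string and casting the result to template.HTML, which Go's html/template package then emits verbatim. The sketch below is a generic before/after illustration that uses only the standard library; it is not the Vitess fix (which adopted github.com/google/safehtml), and the tablet name is an invented stand-in for a value that could carry markup.

package main

import (
	"fmt"
	"html/template"
	"os"
)

// unsafeStatus mimics the pattern flagged in the audit: concatenating an
// untrusted value into a string and asserting it is safe HTML.
func unsafeStatus(name string) template.HTML {
	return template.HTML("<b>Tablet:</b> " + name) // name is emitted verbatim
}

// statusTmpl lets html/template do contextual escaping instead. The {{.Name}}
// placeholder is escaped as HTML text, so injected tags are neutralised.
var statusTmpl = template.Must(template.New("status").Parse(
	"<b>Tablet:</b> {{.Name}}\n"))

func main() {
	hostile := `zone1-100<script>alert(1)</script>`

	// Unsafe path: the <script> tag survives untouched.
	fmt.Println(unsafeStatus(hostile))

	// Escaped path: the same input is rendered as inert text.
	statusTmpl.Execute(os.Stdout, struct{ Name string }{Name: hostile})
}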
[ { "data": "1\nPapersubmitted totheIFACConference onSystem Structure andControl (Nantes, France, July8-10 1998).\nNov14,1997. Revised Feb.10,1998.\nNumericalcomputationofspectralelements\ninmax-plusalgebra\nJEANCOCHET-TERRASSON\n\u0000GUYCOHEN\n\u0001\nSTÂEPHANEGAUBERT\n\u0002\u0004\u0003\nMICHAELMCGETTRICK\n\u0005\nJEAN-PIERREQUADRAT\n\u0006\nAbstract\nWedescribethespecialization tomax-plusalgebraof\nHoward'spolicyimprovement scheme,whichyieldsanal-\ngorithmtocomputethesolutionsofspectralproblemsinthe\nmax-plussemiring.Experimentally ,thealgorithmshowsa\nremarkable(almostlinear)averageexecutiontime.\nI.Introduction\nThemax-plussemiring\u0007\t\b\u000b\n\r\fistheset\u0007\u000f\u000e\u0011\u0010\u0013\u0012\u0015\u0014\u0017\u0016,equipped\nwithmax,writtenadditively( \u0018\u001a\u0019\u001c\u001b\u001e\u001d \u001f\"!$#&%'\u0018)(*\u001b,+ ),and -,writ-\ntenmultiplicatively ( \u0018\u001e./\u001b\u001e\u001d0\u0018\u001e-/\u001b ).Thezeroelementwillbe\ndenotedby 1( 12\u001d3\u0012\u0015\u0014),theunitelementwillbedenotedby 4\n( 45\u001d/6).Wewilladopttheusualalgebraicconventions,writing\nforinstance \u00187\u001bfor \u00188.\u0011\u001b, 1forthezerovectororzeromatrix(the\ndimensionbeingclearfromthecontext),etc.\nThespectralproblemforamatrix9;:<%=\u0007\n\b>\n?\f+A@CB7@canbe\nwritenas9\u001eDE\u001dGF)D3( (1)\nwhereD\u0017:H%I\u0007\n\b>\n\r\f+J@LKM\u0010N1O\u0016andF\u0017:\u0017\u0007\n\b>\n\r\f,i.e.withtheusual\nnotationPOQ:R\u0010TSU(WVXVWV\r(ZY[\u00167( \u001f\u0011!\u0004#\\Z]7^\u0004]@\n%_9\u0015`\n^-\u000fD\n^+>\u001d Fa-\u001cD&`?( (2)\nwhere Db:c%=\u0007d\u000ee\u0010\u0013\u0012\u0015\u0014\u0017\u0016N+J@ hasatleastone®niteentry,andF\u0011:2\u0007f\u000e\u001e\u0010U\u0012g\u0014\u0017\u0016 .Asusual,wewillcall Faneigenvalue,and Dan\nassociatedeigenvector.Whereasthemax-plusspectraltheorem,\nwhichcharacterizes thesolutionsof(1),isoneofthemoststud-\niedmax-plusresults1,comparatively littlecanbefoundabout\nthenumericalsolvingof(1).Unlikeinusualalgebra,themax-\nplusspectralproblemcanbesolvedexactlyina®nitenumber\nThisworkwaspartiallysupportedbytheEuropeanCommunityFrameworkIV\nprogramthroughtheresearchnetworkALAPEDES(ªTheAlgebraicApproach\ntoPerformanceEvaluationofDiscreteEventSystemsº).hBPRE,ÂEtatMajordel'ArmÂeedel'Air,24Bd.Victor,75015Paris,Francei\nCentreAutomatique etSystÁemes,ÂEcoledesMinesdeParis,35rueSaint-\nHonorÂe,77305Fontainebleau Cedex,France.email:cohen@cas.ensmp.frj\nCorresponding author.k\nINRIA,DomainedeVoluceau,B.P.105,78153LeChesnayCedex,France.\nemail:Stephane.Gaubert@inria.frl\nCentreAutomatique etSystÁemes.email:gettrick@cas.ensmp.frm\nINRIA.email:Jean-Pierre.Quadrat@inria.frn\nSee[24,26,6,15,16]forhistoricalreferences.Recentpresentations canbe\nfoundin[1,\n\u00033.2.4,\n\u00033.7],[7],[14,\n\u00033.7].See[22,21]forgeneralizations tothe\nin®nitedimensioncase.ofsteps.Thecommonlyreceivedmethodtosolve(1)relieson\nKarp'salgorithm[20],whichcomputesthe(unique)eigenvalue\nofanirreducible2matrix 9in oE%pYOqW+time3(infact, oE%pYsrfta+\ntime,where tisthenumberofnon- 1entriesof 9),and4ou%'YC+\nspace q.Then,someadditionalmanipulations allowonetoobtain\nageneratingfamilyoftheeigenspace,tocomputeotherinter-\nestingspectralcharacteristics suchasthespectralprojector,the\ncyclicity,etc.(see[1,v3.7]).Agoodbibliographyonthemaxi-\nmalcyclemeanproblem,andacomparisonofKarp'salgorithm\nwithotherclassicalalgorithms,canbefoundin[9].\nThepurposeofthispaperistodescribeaverydifferentalgo-\nrithm,whichseemsmoreef®cient,inpractice.\nWewillshowhowthespecialization 
tothemax-pluscase\nofHoward'smultichainpolicyimprovement algorithm(see\ne.g.[10],or[23]forasurvey),whichiswellknowninstochas-\nticcontrol,runsintime5 w\u001exou%'ta+andspace oE%pYC+,where\nwyxis\nthenumberofiterationsofthealgorithm.Although\nw5x,which\ndependsonbothYandthenumericalvaluesoftheentriesof9,\nseemsdif®culttoevaluate,itsaveragevalueissmall(experimen-\ntaltestsonfullmatricessuggest\nwzx\u001d oE%_{=|U}~YC+?+.\nInotherwords,itseemsexperimentally possibletosolveinan\nalmostlinear(i.e.almostoE%'ta+)averagetimeafamilyofcom-\nbinatorialproblemsforwhichthebeststandardalgorithmsrun\nin oE%pYr\u0011ta+time.\nWeconjecturethattheworstcasevalueofthenumberofiter-\nations\nwxispolynomialin t.Examplesshowthatitisatleast\noforder Y.\nThemax-plusversionofHoward'salgorithmoutperforms\notherknownmethodswithgoodaverageexecutiontime,such\naslinearprogramming. Theonlyotherfastmethodknowntous\nisCuninghame-Green andYixun'salgorithm[8],whichrunsin\ntime\nwz€CoE%'tz+,wheretheaveragevalueofthenumberofiter-\nations\nwz€Cisexperimentally oE%'Yƒ‚…„ †W+forfullmatrices,accord-\ningto[8].‡\nIrreducibility isde®nedin\n\u0003IIIbelow.ˆ\nThroughoutthepaper,ªtimeºandªspaceºrefertotheexecutiontime(ona\nsequentialmachine)andtothememoryspacerequiredbythealgorithm,respec-\ntively.‰\nThenaturalimplementation ofKarp'salgorithm,describedin[20],needsŠŒ‹=\n‡,Ž\nspace.However,itiseasytodesignatwopassesvariant,whichneeds\nadoubletime,andrunsinonly\nŠŒ‹=\nŽ\nspace.Asdetailedin[9],itisalsopossible\ntooptimizeKarp'salgorithmusingthesometimessparsecharacterofthematrix\nthatitbuilds.\nThefamilyofHoward'salgorithmsworksonlyforªnon-degenerateº matri-\nceswithatleastonenon- entryperrow.Forsuchmatrices,\n’‘d“,andŠŒ‹=“\nŽ&”ŠŒ‹=–•M“\nŽ\n.2\nSomepartsofthepresentworkwereinitiatedin[3],andde-\nvelopedinadifferentdirectionin[4,12].Itisremarkablethat\nHoward'spolicyimprovement schemenotonlyprovidesef®-\ncientalgorithms,butalsosimpleexistenceproofs.Inparticular,\ntheexistenceofgeneralizedeigenmodesformax-pluslineardy-\nnamicalsystemswithseveralincommensurable delays,whichis\nstatedinvIIIbelow,seemsnew.Asimilarprooftechniquewas\nappliedtomin-maxfunctionsin[12].\nThepaperisorganizedasfollows.\nInsectionII,wemotivatethemax-plusspectralproblems,by\nshowinghowfamiliarproblemsinDiscreteEventSystemsthe-\noryandOperationsResearchreducetothespectralproblem(1),\nandtosomeofitsextensions.\nInsectionIII,webrie¯yrecalltheveryclassicalcharacter-\nizationofeigenvaluesofmax-plusmatrices.Wediscusstheir\nrelationwithcycletimevectors,whichgoverntheasymptotic\nbehaviorofmax-pluslineardynamicalsystems.Weshowhow\nthesecycletimescanbycomputedfromgeneralized eigen-\nmodes,whichareanonclassicalusefulextensionofthenotion\nofeigenvector,alreadyusedin[12].\nInsectionIV,wedescribethemax-plusversionofHoward's\npolicyiterationalgorithm,whichcomputesgeneralizedeigen-\nmodes,andwhichinfactshowsthatsucheigenmodesexist.\nTheonlynoticeableoriginality,bycomparisonwiththeclassi-\ncalstochasticcontrolcase,isthatavaluedetermination stepcan\nbeperformedintime oE%'YC+,usingaspecialgraphexplorational-\ngorithmthatwepresentindetail.\nInsectionV,illustrativeexamplesandsystematicalnumerical\ntestsarepresented.\nAsmallprototype,writteninC,whichimplementsthemax-\npluspolicyiterationalgorithmdescribedherecanbefoundcur-\nrentlyonthewebpagehttp://amadeus.inria.fr/gaubert. 
Thispro-\ntotypewillbeintegratedinthemax-plustoolboxofSCILAB6\nwhichisunderdevelopment.\nII.Whatthemax-plusspectraltheorycandoforyou\nInthissection,welistseveralbasicproblemsthatreduceto\nthespectralproblem(1)andtosomeofitsextensions.Other\napplicationsofthemax-plusspectralproblemcanbefounde.g.\nin[22,14],andinthereferencestherein.\nPROBLEM1(MAXIMALCIRCUITMEAN).Givenadirected\ngraph7 \u0000\u001d’%\u0002\u0001G(\u0004\u0003 +,equippedwithavaluationmap \u0005\u0007\u0006\b\u0003\n\t \u0007,\ncomputethemaximalcircuitmean\u000b\u001d \u001f\"!\u0004#\f\n\r\u000f\u000e\u0011\u0010\f\u0005a%\u0013\u0012X+\r\n\u000e\u0014\u0010\fS\n( (3)\nwherethe \u001f\"!$#istakenoverallthecircuits \u0015of\n\u0000,andthesums\naretakenoveralltheedges\u0012of\u0015.\nThedenominatorof(3)isthelengthofcircuit\u0015.Thenumer-\natoristhevaluationorweightofcircuit\u0015.\u0016\nAfreeopenMATLAB-analogue software,developedatINRIA.Thecurrent\nversionofSCILAB(withoutthemax-plustoolbox)canbefoundonhttp://www-\nrocq.inria.fr/scilab. \u0017\nA(®nite,directed)graphcanbedescribedbya®nitesetofnodes \u0018andaset\nof(oriented)edges \u0019\u001b\u001a\u001c\u0018\u001e\u001d\u001f\u0018.Inthesequel,wewillusethefamiliarnotions\nof(directed)path,(directed)circuit,etc.,withoutfurthercomments.ByTheoremIII.1below,when\n\u0000isstronglyconnected,\u000bcoincideswiththe(unique)eigenvalueofmatrix 9 :%=\u0007M\b\u000b\n\r\f*+! uB\" ,de®nedasfollows:9 `\n^\u001d\n#\u0005a%\nQ(%$\u0013+if %\nQ(%$\u0013+ :\n\u0003,1 %'\u001de\u0012\u0015\u0014 +otherwise.(4)\nConversely8,withanymatrix 9 : %=\u0007\n\b>\n?\f+A@CB7@,wewillasso-\nciatethegraph\n\u0000'&withsetofnodes \u0001 \u001d \u00107S\u0013(XVWVXV?(*Y[\u0016andset\nofedges \u00030\u001db\u00107%\nQ(($\u0013+*) 9 `\n^,+\u001db1)\u0016,equippedwiththevalua-\ntion \u0005a%\nQ(%$$+ \u001d 9 `\n^.Thisbijectivecorrespondence betweenval-\nuedgraphs,ontheonehand,andmax-plusmatrices,ontheother\nhand,willbeusedsystematically inthesequel.\nPROBLEM2(CYCLETIME).Givenamatrix93:/%=\u0007\n\b>\n\r\f+J@OB @\nwithatleastone®niteentryperrow,computethecycletimevec-\ntor-%_9\u001a+ \u001d {\u0002.=\u001f/1032\nS4\u000frD[%\n4+ ( (5)\nwhereD`\n%\n4+8\u001d \u001f\"!$#\\*]T^$]@\n%_9`\n^-MD\n^%\n4\u0012\tSX+?+…(\nPS65\nQ5\u001cY8(\nP74:98LK)\u0010N6U\u0016s(\n(6)\nandtheinitialcondition D[%'6U+ :L\u0007\t@isarbitrary.\nOfcourse,(6)isnothingbutalinearsysteminthemax-plus\nsemiring:D %\n4+>\u001d\u00179\u001aD[%\n4\u0012 SX+,(\nP:4:;8 K \u0010\u000467\u0016 V(7)\nInotherwords,thecycletimevector\n-% 9\u001a+determinesthelin-\neargrowthrateofthetrajectoriesofthemax-pluslineardynam-\nicalsystem(7).Thefactthat\n-% 9\u001a+exists,thatitisindepen-\ndent9oftheinitialconditionD[%'6U+ :L\u0007\t@,andthatitcanbecom-\nputedfromtheeigenvaluesofthesubmatricesassociatedwith\nthestronglyconnectedcomponentsofthegraphof9,willbede-\ntailedinProp.III.2below.\nWenextdescribeausefulgeneralization ofthemax-plusspec-\ntralproblem,whichrequiresthede®nitionofmax-polynomials.\nA(formal,generalized)max-polynomial intheindeterminate <\nissimplyaformalsum =\u001e>\n\u0010@?BADC><\n>,where\nCisamap \u0007FE\u0007\t\u0007\n\b>\n\r\f(%G\u001fH \t\nC>,suchthat\nC>\u001d 1forallbut®nitelymanyvalues\nof GŒ:2\u0007IE.Wedenoteby \u0007\n\b>\nA\f\u0010J<ƒ\u0016thesetofsuchpolynomials.\nThegeneralized spectralproblemforapolynomialmatrixK:s%=\u0007\n\b>\n\r\f\u0010@<C\u0016N+J@OB7@canbewrittenas:K%'FBL\n\\+?DE\u001d D ( (8)M\nNotethataccordingto(4)andthroughoutthepaper,thereisanarcfromNtoO\nif PRQ 
SUT\n”.Thisªdirectºconvention,whichisstandardincombinatorial matrix\ntheoryandautomatatheory,wasalreadyusedin[14].Theªinverseºconvention\n(with PBS%QDT\n”insteadof PRQ SVT\n”)wasusedin[1].Thisªinverseºconventionis\nstandardandpreferablefordiscreteeventsystemapplications,unlessoneaccepts\ntodealwithlinearsystemsoftheformW\n‹YX\nŽ ”W\n‹YX\u001fZ;[\nŽP,W\n‹\\X\nŽ\nbeingarow\nvector,and Pasquarematrix,insteadofthemorefamiliar W\n‹YX\nŽ ”PDW\n‹YX]Z^[\nŽ\n,W\n‹\\X\nŽ\nbeingacolumnvector.Aconsequenceofthecompromisemadeinthis\npaper(choosingtheªdirectºconvention,whileconsideringdynamicalsystems\nofthesecondform)isthattheaccessibilityrelation,inProp.III.2andIII.4below,\nistheinverseoftheoneusede.g.in[1,18]._\nIfsomeentriesofW\n‹a`\nŽ\narein®nite,thelimitin(5)neednotexist,seee.g.[11,\nRemark1.1.10,Chap.VI]and[14,Th.17].TheconditionthatalltheentriesofW\n‹\u0002`\nŽ\nare®nite,andthat Phasatleastone®niteentryperrow(whichguarantees\nthat Psends bdcto bdc,i.e.thattheimageby Pofacolumnvectorwith®nite\nentrieshas®niteentries)isfrequentlyusedsinceitseemspracticallyrelevantfor\ndiscreteeventsystemsandmakeslifesimpler.3\nwhere DR:/%=\u0007\n\b>\nA\f+A@5Ky\u0010N1)\u0016, F :R\u0007,and\nK%'FL\n\\+ :/%=\u0007\n\b>\n\r\f+J@OB @\ndenotesthematrixobtainedbyreplacingeachoccurrenceofthe\nindeterminate<byFL\n\\\n(\u001d \u0012\u0015F,withtheusualnotation)inthe\nformalexpressionof\nK.If\nK\u001d =>\n\u0010@?\nA9\n><\n>,with9\n>:%=\u0007M\b\u000b\n\r\f,+A@CB7@,thespectralproblem(8)canberewrittenmoreex-\nplicitlyas\u0000>\n\u0010@?\nA\n9\n>F\nL\n>DE\u001dGD3( (9)\nwherethesumisindeeda®niteone,since 9\n>is 1forallbut\n®nitelymanyvaluesof G.When\nK\u001d/9U<,(9)specializesto(1).\nForthisreason,wewillcall Dageneralizedeigenvectorof\nKandFageneralizedeigenvalue.\nTheappropriategraphicalobjecttobeassociatedwithapoly-\nnomialmatrix\nK: %=\u0007\t\b\u000b\n\r\f…\u0010J<C\u0016\u0004+\n@OB @isnotavalueddirected\ngraph,butthebi-valueddirectedmultigraph10 \u0000\u0002\u0001,withsetof\nnodes \u0001 \u001d \u00107S\u0013(XVWVXV?(*Y[\u0016,setofedges \u0003R\u001d \u0010T%\nQ(%GW(($\u0013+ : \u0001 r\u001e\u0007FEur\u0001 )a% 9\n>+J`\n^ +\u001d 1)\u0016,initialnodemapIn %\nQ(%GW(%$$+8\u001d\nQ,terminalnode\nmapOut %\nQ(%GW(($\u0013+\u001d $,®rstvaluation \u0005 \u00066\u0005z%\nQ((GW(%$\u0013+\"\u001d;%_9\n>+J`\n^,\nandsecondvaluation \u0003 \u0006\u0004\u0003C%\nQ(%GW(($\u0013+5\u001d G.Then,thegeneralized\nspectralproblem(8)becomesPOQ: \u0001G( D&` \u001d \u001f\"!$#\u0005`\u0007\u0006\n>\u0006\n^\t\b\n\u0010\u000b\n%!\u0005a%\nQ(%GW(($\u0013+[\u0012 F r\f\u0003C%\nQ(%GW(%$$+C-\u001cD\n^+RV\n(10)\nWewillseeinTheoremIII.3thatthesolution Fof(10)(which\nisuniqueundernaturalconditions)yieldsthesolution\n\u000b\u000e\rofthe\nfollowingproblem.\nPROBLEM3(MAXIMALCIRCUITMEAN\n\r).Givenamulti-\ngraph\n\u0000\u001d %Y\u0001G( \u0003 (In (Out +,equippedwithtwovaluations\u0005 \u0006 \u0003 \t \u0007,\u0003 \u0006 \u0003 \t \u0007IE,suchthat\n\r\u000e\u0011\u0010\f\u0003C%\u0013\u0012X+\u0010\u000f 6,for\nallcircuits\u0015of\n\u0000,computethe(generalized) maximalcircuit\nmean:\u000b\n\r\u001d\u0017\u001f\"!$#\f\n\r\n\u000e\u0014\u0010\f\u0005a%!\u0012W+\r\u000e\u0011\u0010\f\u0003C%\u0013\u0012X+\n( (11)\nwherethe\u001f\"!$#istakenoverallthecircuits\u0015of\n\u0000.\nAsshowninProp.III.4below,thegeneralizedspectralprob-\nlem(8)isalsousefulintheeffectivecomputationofcycletimes\nofsomemax-pluslineardynamicalsystems,thatarein®nitedi-\nmensional(multi-delay) versionsof(7).\nWewillsaythat\nK\u001d\n=,>\n\u0010@? 
A9\n><\n>: %=\u0007\n\b>\n\r\f\u0010J<C\u0016\u0004+A@OB @isa\ngoodpolynomialmatrixifithasatleastonenon- 1entryperrow,\nandiftherearenocircuitsinthegraphof 9‚.\nPROBLEM4(CYCLETIME\n\r).GivenagoodpolynomialmatrixK:s%=\u00075\b>\n\r\f,\u0010@<C\u0016N+J@OB7@ ,computethecycletimevector-%\nK+>\u001d;{\u0002. \u001f/10\u001b2\nS4rD %\n4+( (12)\nwherethetrajectoryDisnowgivenbythedynamicsD `Z%\n4+>\u001d;\u001f\"!$#\\*]T^$]@\n\u001f\u0011!\u0004#>\n\u0010@?\nA%?%_9\n>+A`\n^-\u000fD\n^%\n4\u0012 GZ+?+,(\nP:4\u0012\u00116G((13)\nand %'D %\n4+?+‚\u0014\u0013\n/\u0016\u0015L\u0018\u0017\u001a\u0019isagiven(bounded)initialcondition,with\u001b‚\n\u001d\u0017\u001f\"!\u0004# \u0010 G :L\u0007IE ) 9\n>\n+\u001dG1)\u0016.n\u001d\u001c\nLooselyspeaking,amultigraphisagraphinwhichseveraledgescanlinkthe\nsamepairofnodes.Formally,a(®nite)multigraphcanbede®nedbya(®nite)\nsetofnodes \u0018,a(®nite)setofedges \u0019,andtwomapsIn \u001e\b\u0019 \u001f\u001e\u0018andOut \u001e\u0019!\u001f \u0018,whichgivetheinitialnodeandterminalnodeofanedge,respectively.Morealgebraically,(13)canberewrittenasfollows:D[%\n4+8\u001d\n\u0000>\n\u0010@?BA\n9\n>D[%\n4\u0012 GZ+,(\nP:4\"\u00116/V(14)\nRemarkII.1.Problems4and2areinfacttwospecialversionsofa\nmoregeneralproblem(seee.g.[19]).If #isanormedvectorspaceand$\u0012%#'&(#isanon-expansive map(i.e.)\n$\u0018*\u001d+-,\u0018.\f$\u0018*0/\u000b,)213)\n+4.\f/)),\nthelimit5\n*0$-,\u0004687:9<;>=@?BABC\"DE$\n=*\u001d+-,,ifitexists,isindependentofthe\ninitialpoint\n+.Problem2dealswiththecasewhen #isequalto F\nc,\nequippedwiththesupnorm,and\n$\u0018*\u001d+-,26HG2+.InProblem4, #isthe\nsetofboundedfunctionsfrom I\n.\u001aJ\u001cBKML\n,to F\nc,equippedwiththesup\nnorm,and\n$istheevolutionoperatorwhichwiththepieceoftrajectoryN+O*\u001dCP,MQ\u000bR-S\u0019UT\n=\u0014V\u001c(initialcondition),associatesthetrajectoryobtained\nafteroneunitoftime:\nN+\u0018*\u001dCXWY?B,MQZR-S\u0019UT\n=\u0014V\u001c.Theevolutionoperatoris\nobviouslywellde®nedsincetherearenocircuitsinthegraphof\nG\u001c.Itis\nclearlymonotoneandhomogeneous, hence,byasimpleresult[5],itis\nnon-expansive forthesup-norm.Thus,theexistenceofthelimit(12)\nforaparticularboundedfunction\nN+\u0018*\u001dC[,MQ\\R-S\u0019UT\n=\u0014V\u001c,impliestheexis-\ntenceof5\n*0$-,,whichisequalto5\n*^]_,.Conversely,theexistenceof5\n*0$-,clearlyimpliesthatthelimit(12)exists,with5\n*^]_,`65\n*0$a,.\nIII.Someclassicalandlessclassicalelementsofmax-plus\nspectraltheory\nInallthissection,withamatrix9 :s%=\u0007\n\b>\n\r\f+J@OB @weassociate\nthegraph\n\u0000 &\u001d %\u0002\u0001G(\u0004\u0003 +,equippedwiththevaluation\u0005,asde-\n®nedinthediscussionfollowingEqn4.Thestronglyconnected\ncomponentsofthegraphof9arecalledclasses.Amatrixisir-\nreducibleifitsgraphisstronglyconnected,i.e.ifithasasingle\nclass.Thefollowingresultisclassical[24,26,6,15,16].See\ne.g.[1,7]forrecentpresentations andproofs.\nTHEOREMIII.1(MAX-PLUSSPECTRALTHEOREM).Anirre-\nduciblematrix9 :\u000f%I\u0007E\b>\nJ\f +J@OB @hasauniqueeigenvalue,given\nby(3).\nIngeneral,thereareseveralnon-proportion aleigenvectors\n(seee.g.[1]or[14]).Areduciblematrix9hasingeneralsev-\neraldistincteigenvalues,andthemaximalcircuitmean(3)yields\npreciselythemaximaleigenvalue(seee.g.[11,Ch.IV],[14],[2]\nforcharacterizations ofthespectrumofreduciblematrices).\nWesaythat\nQhasaccessto $ifthereisapathfrom\nQto $in\nthegraphof 9.Wesaythat\nQhasaccesstoaclass bifithas\naccesstoany $:cb(thispropertyisobviouslyindependentof\nthechoiceof $5:db,byde®nitionofaclass).Byªeigenvalueof\naclass bº,wemeantheeigenvalueofthe 
b3rebsubmatrixof9,whichisuniquebyTheoremIII.1.\nThefollowingresultappearedin[18,Prop.7],and,ina\nstochasticcontext,in[1,Th.7.36].\nPROPOSITIONIII.2(CYCLETIMEFORMULA).Let 9 :%=\u0007\n\b\u000b\n\r\f+A@OB @,withatleastone®niteentryperrow.The\nQ-thentry\n-`*%_9\u001e+ofthecycletimevectorisequaltothemaximumofthe\neigenvaluesoftheclassestowhich\nQhasaccess†.\nThenextstatementusesthecorrespondence betweenpolyno-\nmialmatricesandmultigraphs,describedin vIIabove.Wewill\nsaythatapolynomialmatrix\nKisirreducibleifitsmultigraph\nisstronglyconnected.Moregenerally,wewillnaturallyextend\nthenotionsofaccessibility,classes,etc.topolynomialmatrices\n(thesenotionsarede®nedasinthecaseofordinarymatrices,but4\nreplacingthegraph\n\u0000'&bythemultigraph\n\u0000_\u0001).Thefollowing\nresultistakenfrom[1,Th.3.28]\nTHEOREMIII.3(SPECTRALTHEOREM\n\r).Anirreducible\npolynomialmatrix\nK\u001d\n=>\n\u0010@?BA9\n><\n>: %I\u0007\n\b>\n\r\f\u0010@<C\u0016N+J@OB @,\nsuchthatthegraphof9‚hasnocircuits11,admitsaunique\ngeneralizedeigenvalueF,givenby(11).\nThefollowingextensionofProp.III.2isimmediate.\nPROPOSITIONIII.4(CYCLETIMEFORMULA\n\r).Let\nKdenote\nagoodpolynomialmatrix.The\nQ-thentry\n-`*%\nK+ofthecycletime\nvectorisequaltothemaximumofthegeneralizedeigenvaluesof\ntheclassestowhich\nQhasaccess †.\nSincethedecomposition ofadirectedgraphormultigraphin\nstronglyconnectedcomponentscanbedoneinlineartimeus-\ningTarjan'salgorithm[25],Prop.III.2andProp.III.4reducein\nlineartimethecomputationofthecycletimevectortothecom-\nputationofthe(possiblygeneralized)eigenvaluesofirreducible\n(possiblypolynomial)matrices.Inparticular,thetraditionalway\ntocomputethecycletimevector\n-%_9\u001e+istocomputetheeigen-\nvaluesoftheclassesof9viaKarp'salgorithm[20],andthen\ntoapplyProp.III.2.Thismethoddoesnotworkforthegen-\neralizeddynamics(13),sinceKarp'salgorithmcannotcompute\ngeneralizedeigenvalues.Therearetwotraditionalwaystoover-\ncomethisdif®culty.ÐWhen9\n>iszeroexceptforintegerval-\nuesof G,aneliminationoftheimplicitpartandafamiliaraug-\nmentationofstatereducesthegeneralizedspectralproblemforKtoanordinaryspectralproblemforalargermatrix 9\n\r.This\nmethod,whichispresentedin[1, v2.5.3, v2.5.4],isnotsoex-\npensivewhenthenumberofvaluesof Gforwhich 9\n>\n+\u001db1is\nsmall,particularlyifitisimplementedwithsomere®nements,as\nin[13],for\nK\u001d/9‚\n\u0019R9\n\\<.ÐThesecondmethodreliesonthe\ngeneraltechniquespresentedin[17,AppendixV],whichallow\nonetomaximizeinpseudo-polynomial timearatiooftheform\u0005a%!\u0015,+\u0001\u0000\u0016\u0003C%\u0013\u0015…+for\u0015ina®niteset\u0002,providedthatforanyvalueofF :\u0007,weknowhowtomaximizeinpolynomialtimetheratio\u0005a%!\u0015,+[\u0012\u000fFO\u0003ƒ%\u0013\u0015,+ for \u0015inthesameset \u0002.\nWewillnotdiscussindetailthesetwomoreorlessclassical\napproaches,butrathershowhowadifferentgeneralization ofthe\nspectralproblemallowsustodeterminedirectlyandinfullgen-\neralitycycletimevectors.Alltheremainingpartofthispaper,\nandinparticular,themax-plusversionofHoward'spolicyim-\nprovementalgorithm,willbebasedonthisnewspectralprob-\nlem.\nWeconsideragoodpolynomialmatrix\nK.Wesaythat%\u0004\u0003)(*D + : %I\u00075@\u0013+\u0001\u0005isageneralized eigenmode12ifthereexistsnAn\nIn[1,Th.3.28],itisonlyrequiredthatthecircuitsofthegraphof P\n\u001chave\nnegativeweights.Wewillnotneedthisdegreeofgeneralityhere.Intermsofthe\nassociateddynamicalsystems(13),theconditionofthetheoremsimplymeans\nthattherearenocircuitsinvolvingzero-delaycausalityrelations.n_‡\nThisspectralnotionisobtainedbytwosuccessivegeneralizations 
ofor-\ndinaryspectralproblems.The®rstgeneralization consistsinreplacingordi-\nnarydynamicalsystemsoftheform(7)(withunitarydelays)bysystemsofthe\nform(14)(withmultipledelays).Theordinaryspectralproblem(1)anditsgen-\neralization(9)areobtainedbylookingforsolutionsoftheform W\n‹YX\nŽ>”\u0007\u0006\n=W,\nwhere\n\u0006\nisascalarandW\t\b\n‹b\u000b\n\r\f\u000f\u000e\nŽc\u0011\u0010\u0013\u0012\n`\u0015\u0014.Butthede®nitionofcycle-time\nvectorsrequires Wtohave®nitecoordinates.Then,inthegeneralcase,asim-\npleaf®neregime W\n‹\\X\nŽŒ”\u0016\u0006\n=W\n”X\u001d\n‹\n\u0006\u0018\u0017\u000f\u0019\u001a\u0019\u000f\u0019\u0001\u0017\u000f\u0006\u0013Ž\u001c\u001b•Wneednotexist,but\namoregeneralaf®neregime W\n‹\\X\nŽ ”\u001e\u001d\n=W,where\n\u001d\nisadiagonalmatrix,is\nexpected.Inotherwords,weexpectthedifferententriesof W\n‹\\X\nŽ\ntohavedif-\n\u001b:2\u0007suchthat4:L\u0007\u0011(\n4\u0012\u0011\u001b \u001f !\n/DE\u001d\nK%\n!L\n\\+\n!\n/D3((15)\nwhere\n!def\u001ddiag%\u001c\u0003\n\\(WVXVXVZ(\"\u0003@\n+and\n!\n/\u001ddiag%\n4r#\u0003\n\\(XVWVXV*(\n4r\u0003@\n+.\nWhen\nK\u001d\u001796<,(15)becomes4:2\u0007\"(\n4\"\u0011\u001b$\u001f !\n/DE\u001d/9\n!\n/L\n\\D3V(16)\nThatis,theactionof 9coincideswiththeactionof\n!onthe\norbit \u0010\n!\n/Dƒ\u0016\n/\u0016\u0015\u0017 L\n\\.Asdetailedinfootnote12,theeigenmode\nequation(15)isobtainedbylookingforanultimatelyaf®neso-\nlutionof(13), D %\n4+E\u001d\n!\n/D/\u001d\n4r%\u00035- D.Ifsuchasolution\nexists,\n-%\nK+5\u001d {\u0002.=\u001f\n/\n\\/rsD %\n4+M\u001d&\u0003.Thenextlemmafollows\nreadilyfromthisobservation,andfromthefact,mentionedin\nRemarkII.1above,thatthelimit{a.=\u001f\n/\n\\/r\tD[%\n4+ \u001d\u001e\u0003isindepen-\ndentoftheparticularboundedinitialcondition.\nLEMMAIII.5.Ifagoodpolynomialmatrix\nKhasageneralized\neigenmode%\u001c\u0003)(ZD)+,then\n-%\nK+>\u001d'\u0003.\nInparticular,if\nKisirreducible,Prop.III.4impliesthat \u0003\u001d\n-%\nK+\u001d %pF (XVWVXV*(ZF)+,whereFisthegeneralizedeigenvalueofK.Therefore,(15)reducestothe(generalized) spectralprob-\nlem(8),andDisa(generalized) eigenvectorof\nK.I.e.,forirre-\nduciblematrices,®ndinggeneralizedeigenmodesisequivalent\nto®ndinggeneralizedeigenvectors.\nTheexistenceofgeneralizedeigenmodeswasprovedin[12]\nwhen\nK\u001d\u001796<,asaspecialcaseofamoregeneralresultformin-\nmaxfunctions.Inthenextsection,wewillshowhowthemax-\nplusversionofHoward'spolicyimprovement algorithmallows\nustocomputegeneralizedeigenmodes.Inparticular,thetermi-\nnationofthealgorithmwillprovetheexistenceofsucheigen-\nmodes,forgoodpolynomialmatrices.\nIV.Themax-pluspolicyimprovementalgorithm\nInthissection,\nKwillbeagoodpolynomialmatrix.We\nwillusesystematically themultigraph\n\u00004\u0001\u001d %Y\u0001G( \u0003 (In(Out+\nequippedwiththevaluations\u0005a( \u0003,canonicallyassociatedwithKin vII.\nItcanbecheckedthattheeigenmodeequation(15)which\nseemsdeceivinglytoinvolveanin®nitenumberofconditions,\nisequivalenttothefollowing®nitesystem:\u0003\u0004` \u001d \u001f\"!$#\u0005`\u0007\u0006\n>\u0006\n^ \b\n\u0010\u000b\n\u0003\n^(17)D `c\u001d \u001f\"!$#\u0005`\u0007\u0006\n>\u0006\n^ \b\n\u0010 \n%\u0013\u0005z%\nQ((GW(%$$+\u000b\u0012 \u0003C%\nQ(%GW(%$$+ r(\u0003\n^-\u000fD\n^+R((18)\nwhere\u0003\u001d \u00107%\nQ((GW(%$$+ :9\u0003 ))\u0003`\n\u001d*\u0003\n^\u0016\u001cV\nInlooseterms,themultichainpolicyiterationalgorithmwill\nsolvethissystembytryingtoguessthearcsthatattainthemax-\nimum.Aprecisestatementofthisideaneedsthede®nitionof\nferent(linear)growthrates,givenbythediagonalentriesof\n\u001d\n.Hence,thesec-\nondgeneralization 
consistsinsubstitutingW\n‹YX\nŽ ”+\u001d\n=Wfor\nXlargeenough(i.e.X-,(./•+.\u001c)in(14):then,oneobtainspreciselythegeneralizedeigenmode\nequation(15).Contrarytothecasewhen\n\u001d\u000f”/\u0006\n,\n\u001d\nRn\nneednotcommutewith\nthematrices P10,andthus,therelation W\n”32‹\n\u001d\nRnJŽW\n”=05476\nAP\u00130\n\u001d\nR0W\nneednotimplythat\n\u001d\n=W\n”82‹\n\u001d\nRn?Ž9\u001d\n=W,for\nX:, `.Thisiswhy(15)hasto\nbestatedforalllarge\nX.5\npolicy,whichisamap\u0000\u0006@\u0001 \t \u0003~(suchthatIn%\n\u0000%\nQ+?+ \u001d\nQ(\nPOQ: \u0001 V\nThatis,apolicyisjustamapwhichwithanodeassociatesan\nedgestartingfromthisnode.\nWithapolicy\n\u0000,weassociatethespecialpolynomialmatrixK\u0002\u0001\u001d =>\n\u0010@?\nA9\n\u0001><\n>:% 9\n\u0001>+J`\n^\u001d\n\u0003\u0005z%\n\u0000%\nQ+?+if $y\u001dOut %\n\u0000%\nQ+?+and G\u000b\u001d'\u0003C%\n\u0000%\nQ+?+1otherwise.\nHence,thematrix\nK\u0004\u0001hasexactlyonenon-zeroentryperrow,\nwhichcorrespondstotheedgeselectedby\n\u0000,i.e.inthemulti-\ngraphof\nK\u0005\u0001,\n\u0000%\nQ+istheuniqueedgestartingfrom\nQ.Ithas\nthesamevaluations \u0005and \u0003asintheoriginalmultigraphof\nK.\nE.g.,wehavedepictedinFig.2belowthemultigraph(infact,\nthegraph)of\nK\n\u0001\u0007\u0006,where\n\u0000\\isthepolicy S*\t SU(\nQ\t \b,forQ\u001d\t\b&(\u000b\n&(\r\fand\nK\u001d\u001796<,where9isdisplayedinFig.1.\nWe®rstshowhowageneralizedeigenmode%\u001c\u0003)(ZD)+ofamatrix\noftheform\nK\u0005\u0001canbecomputedintimeou%'YC+.\nALGORITHMIV.1(VALUEDETERMINATION).Input:agood\npolynomialmatrix\nKandapolicy\n\u0000.Output:ageneralized\neigenmodeof\nK\u0005\u0001.\n1.Findacircuit \u0015inthemultigraphof\nK\u000e\u0001\n2.Set\u0003M\u001d\n\r\u000e\u0011\u0010\f\u0005a%\u0013\u0012X+\r\u000e\u0014\u0010\f\u0003C%!\u0012W+\nV (19)\n3.Selectanarbitrarynode\nQin \u0015,set \u0003`\n\u001d \u0003andset D`to\nanarbitraryvalue,say D`\n\u001d 6.\n4.Visitingallthenodes $thathaveaccessto\nQinbackward\ntopologicalorder,set\u0003\n^\u001d \u0003 (20)D\n^\u001d \u0005z%\n\u0000%Y$$+\r+>\u0012 \u0003Er \u0003C%\n\u0000%\u0002$\u0013+?+ -\u001cDOut\n\u0005\u0001\n\u0005^\t\b\u001d\b(21)\n5.Ifthereisanonemptysetbofnodes$thatdonothave\naccessto\nQ,repeatthealgorithmusingthe b\u001cr bsubmatrix\nof\nKandtherestrictionof\n\u0000to b.\nThealgorithmshouldbespeci®edasfollows.\nStep1isveryeasytoimplement:wecanstartfromanar-\nbitrarynode\nQ,movetonode$ \u001dOut%\n\u0000%\nQ+?+,thenpossiblyto\nnodeOut%\n\u0000%Y$$+\r+,etc.,untilanodethatisalreadyvisitedisfound.\nThen,acircuithasbeenfound.Thisrequiresalineartime.\nEqn(21)requiresvisitingthenodesinbackwardtopological\norder,startingfrom\nQ,sincethevalueof DOut\n\u0005\u0001\n\u0005^\t\b\u001d\bmustbeal-\nready®xedwhenwevisitnode $andset D\n^.Thiscompletevisit\ncanbedoneinlineartime,atthepriceofanaprioritabulationof\nthe(multi-valued)-inv erseofthemap \u0001 \t \u0001G(D$'H \tOut %\n\u0000%Y$$+\r+.\nComputingtheinverseofthismapalsorequiresalineartime.\nThehandlingofthisinverseisinfacttheonlypartofthealgo-\nrithmwhichrequiresmorere®neddatatypesthansimplearrays\n(inourimplementation, weusedlinearlychainedlists).\nStep5isformulatedinarecursivewayonlytosimplifythe\nstatementofthealgorithm,whichisessentiallynonrecursive.\nTheaboveconsiderations justifythefollowingtheorem.\nTHEOREMIV.2.AlgorithmIV.1computesageneralizedeigen-\nmodeof\nK\u0002\u0001intimeandspaceoE%pYC+.ThesecondingredientofHoward'salgorithmisapolicyim-\nprovementroutine,whichgivenapolicy\n\u0000andageneralized\neigenmode%\u0004\u0003)(*D 
+of\nK\u0005\u0001,either®ndsanewpolicy\n\u0000`\rsuchthat\n-%\nK\u0002\u0001\u0007\u000f+\n\u0011\n-%\nK\u0002\u0001+,orprovesthat%\u001c\u0003)(ZD)+isageneralizedeigen-\nmodeof\nK.\nALGORITHMIV.3(POLICYIMPROVEMENT).Input:agood\npolynomialmatrix\nK,apolicy\n\u0000,togetherwithageneral-\nizedeigenmode%\u0004\u0003)(*D +of\nK\u0005\u0001.Output:apolicy\n\u0000 \r,suchthat\n-%\nK\u0002\u0001\n\u000f+\n\u0011\n-%\nK\u0002\u0001+.\n1.Let13\u0010\u001d \u0010\nQ);\u001f\"!$#\u0005`\u0007\u0006\n>\u0006\n^ \b\n\u0010\u000b\n\u0003\n^\u000f \u0003$`A\u0016\u001b%\nQ+ \u001d !\u0012\u0011?}>\u001f\u0011!\u0004#\u0005`\u0007\u0006\n>\u0006\n^\t\b\n\u0010\u000b\n\u0003\n^(for\nQ\u001d S>VXVXV Y,\u0013\u001d \u0010\nQ) \u001f\"!\u0004#\n\u000e\u0015\u0014\u0005`\u0007\u0006\n>\u0006\n^\t\b\n\u0010\u0017\n\u0005`\n\b\n%!\u0005a%!\u0012W+\u000b\u0012 \u0003C%!\u0012W+\u001a\u0003\n^-\u001cD\n^+ \u000f D`\n\u0016\u0016%\nQ+ \u001d !\u0007\u0011\r}>\u001f\"!$#\n\u000e\r\u0014\u0005`\u0007\u0006\n>\u0006\n^ \b\n\u0010\u0017\n\u0005`\n\b\n%\u0013\u0005z%\u0013\u0012X+8\u0012 \u0003C%\u0013\u0012X+\u0001\u0003\n^- D\n^+(\nfor\nQ\u001d S VWVXVAY.\n2.If\n\u0013\u001d\n\u0010\u001d\u0018\u0017, %\u0004\u0003)(*D +isageneralizedeigenmodeof\nK.\nStop.\n3.(a)If\n\u0010+\u001d\t\u0017,weset:\u0000\n\r%\nQ+ \u001d\n#\nany\u0012in\n\u001b%\nQ+if\nQ:\n\u0010,\u0000%\nQ+if\nQ+:\n\u0010.\n(b)If\n\u0010\u001d\t\u0017but\n\u0013+\u001d\u0019\u0017,weset\u0000\n\r%\nQ+ \u001d\n#\nany \u0012\u001e:\n\u0016%\nQ+if\nQ:\n\u0013,\u0000%\nQ+if\nQ+:\n\u0013.\nThepolicyimprovement rules3aand3bsimplymeanthat\noneselectsforthenewpolicytheedgeswhichrealizethemaxi-\nmuminEqns(17),(18).Thismaximumistakenhierarchically:\nEqn(17)haspriorityonEqn(18)inapolicyimprovement step.\nOnlywhenEqn(17)issatis®edEqn(18)isusedtodeterminethe\nnewpolicy.Theotherconditionsinsteps3aand3bsimplymean\nthattheprecedingvaluesof\n\u0000shouldbekeptin\n\u0000 \r,whenever\npossible.Thesetechnicaltrickswillguaranteethetermination\nofthepolicyiterationalgorithmbelow,evenwhenªdegenerateº\npolicyimprovements inwhich\n-%\nK\u000e\u0001\n\u000f+8\u001d\n-%\nK\u0002\u0001+occur.\nThesets\n\u001b%\nQ+and\n\u0016%\nQ+,whichareintroducedtosimplify\nthestatementofthealgorithm,neednotbeexplicitlytabulated.\nClearly,AlgorithmIV.3runsin oE% ) \u0003\u001f) +time14and oE%pYC+space15\nWenextstatethemax-plusversionofHoward'spolicyitera-\ntionalgorithm.\nALGORITHMIV.4(MAX-PLUSPOLICYITERATION).Input:\nagoodpolynomialmatrix\nK.Output:ageneralizedeigenmode\nof\nK.\n1.Initialization.Selectanarbitrarypolicy\n\u0000\\.Com-\nputeageneralizedeigenmode %\u0004\u0003\n\\(*D\n\\+of\nK\u0002\u0001\n\u0006,usingAlgo-\nrithmIV.1.Set\n4\u001d’S.n_ˆ\nRecallthatby\u001a\u001c\u001b\u001e\u001d \u001f!\u001a#\"#$4&%('\n‹*)\nŽ\n,wemeanasusualthesetofelements+\u001e\b\u0019suchthat'\n‹+\nŽ&”\u001f!\u001a#\"\n$4&%'\n‹,)\nŽ\n.n_‰.-\u0019\n-\nsimplydenotesthenumberofedgesofthemultigraph.n_\nThealgorithmneedslessinternalmemory(\nŠŒ‹=\nŽ\nspace)thanthecodingof\ntheinputitself,whichrequires\nŠŒ‹\n-\u0019\n-\nŽ\nspace.6\n2.Policyimprovement.Improvethepolicy\n\u0000/,usingAl-\ngorithmIV.3withinput\n\u0000\u001d\n\u0000/(\"\u0003s\u001d&\u0003\n/(*D\u000f\u001d D\n/\n.Ifthe\nstoppingconditionofAlgorithmIV.3issatis®ed,%\u0004\u0003\n/(*D\n/+\nisageneralizedeigenmodeof\nK.Stop.Otherwise,set\u0000/E\n\\\u001d\n\u0000\u0018\r(theoutputofAlgorithmIV.3).\n3.Valuedetermination 
.Findageneralizedeigenmode%\u001c\u0003\n/E\n\\(ZD\n/E\n\\+of\nK\u0005\u0001\u0001\u0000\nA\u0006usingAlgorithmIV.1,takingthe\nspecialvalue D\n/E\n\\`\n\u001dGD\n/`instep3,IV.1.\n4.Increment\n4byoneandgotostep2.\nThealgorithmbuildsasequenceofgeneralizedeigenmodes%\u0004\u0003\n/(ZD\n/+thatisstrictlyincreasingforthelexicographic orderon%=\u0007M@\u0013+\u0001\u0005,de®nedby %'D[(\n\u0002+\u0004\u0003lex\n%pD\n\r(\n\u0002\u000b\r+if D\u0005\u0003 D\n\ror D \u001dD\n\rand\n\u0002\u0003\n\u0002Z\r.Thefactthat D\n/E\n\\`mustbesetto D\n/`instep3\nofAlgorithmIV.1isaconservativetrickanalogoustothefact\nthatthevaluesof\n\u0000arekeptin\n\u0000`\rwheneverpossible,inAlgo-\nrithmIV.3.Thistechnicalconditionisessentialtoguaranteethe\nstrictmonotonicity ofthesequence %\u0004\u0003\n/(ZD\n/+,whichisneededin\ntheproofthatthealgorithmterminates.\nTheproofofthefollowingresultissimilartotheproofofthe\nmaintheoremof[10].Itreliesessentiallyonaversionofthe\nmaximumprinciplefortransientMarkovchains.Amorealge-\nbraicversionofthisproof,usinggermsofaf®nefunctions,ap-\npearsin[4],andintheproofoftheresultsannouncedtherein.\nTHEOREMIV.5.Themax-pluspolicyiterationalgorithmter-\nminatesinanumberofiterations\nwzxwhichislessthanthenum-\nberofpolicies.Oneiterationrequires\n\\\u0007\u0006ou% ) \u0003\u001f) +time.Thealgo-\nrithmrequires oE%pYC+space\n\\\t\b\n.\nIndeed,thesamepolicyisneverselectedtwice.Bounding\nwx\nbythenumberofpolicieswhichis®nitebutexponentialisvery\ncoarse.Onexperimentalrandomexamples,\nwxisverysmall,as\ndetailedinsectionVbelow.Thefollowingresultisanimmediate\nconsequenceoftheterminationofthepolicyiterationalgorithm\nandofLemmaIII.5.\nCOROLLARYIV.6.Agoodpolynomialmatrix\nKhasagener-\nalizedeigenmode %\u0004\u0003)(*D +.Inparticular,thecycletime\n-%\nK+ \u001d\u001e\u0003\nexists.\nRemarkIV.7.Howard'salgorithmisnotlimitedtospectralproblems.\nItispossibletodesignpolicyiterationalgorithmsfor®xedpointsequa-\ntionsoftheform\n+ 6eG2+\u000b\n\r\f,where\nGisasquarematrixwithmaximal\neigenvaluestrictlylessthan\u000e,and\n\facolumnvector.Thiswillbede-\ntailedelsewhere.\nV.Examplesandnumericaltests\nA.Illustrativeexample\nWeapplythemax-pluspolicyiterationalgorithmtodetermine\ntheeigenvalueofthematrixdisplayedinFig.1.Thiscorre-\nspondstothecasewhere\nK\u001d 9U<,and \u0003\u0010\u000f S.Inparticular,\nthemultigraphof\nKwillbeidenti®edwiththegraphof 9.\n1\n34\n8\n2127\n23\n5\n43\nP\n”\n\u0011\u0012\n[\u0014\u0013 \u0015\u0017\u0016\u0019\u0018 \u0017\u001a \u0019\u0016\n\u0013\u001c\u001b\n\u001d\u001e\nFig.1.AmatrixanditsgraphThefollowingrunofthealgorithmisvisualizedinFig.2.We\nchoosetheinitialpolicy\n\u0000\\: S \t S,\nQ\t \b,for\nQ\u001d \b ( \n (\u0015\f.\nApplyingAlgorithmIV.1,we®nda®rstcircuit\u0015\n\\\u0006&S \t S,with\u0003\u0011\u001d \u0005a%!\u0015\n\\+\u001a\u0000\\\u0003C%!\u0015\n\\+ \u001d S.Weset\u0003\n\\\\\u001d S,D\n\\\\\u001d36.SinceSisthe\nonlynodewhichhasaccesstoS,weapplyAlgorithmIV.1tothe\nsubgraphof\n\u0000\u0001withnodes\b&(\u000b\n&(\r\f.We®ndthecircuit\u0015\u0005\n\u0006 \b\u001b\t \b\nandset\u0003G\u001d \u0005z%\u0013\u0015\u0005\n+\u0001\u0000\u0016\u0003C%\u0013\u0015\u0005\n+\u001d \n,\u0003\n\\\u0005\n\u001d \nandD\n\\\u0005\n\u001d 6.Since\n&(\r\fhaveaccessto \b,weset \u0003\n\\`\n\u001d \nfor\nQ\u001d \n (\u0015\f.Moreover,an\napplicationof(21)yields D\n\\q\n\u001d \fŒ\u0012 \n>-RD\n\\\u0005, D\n\\\u0006\u001d\t\b \u0012 \n>-RD\n\\\u0005.To\nsummarize:\u0003\n\\\u001d \u001fES \n \n \n\"!\t#\u0011( D\n\\\u001d$\u001fa6 6 S 
\u0012aS%!&#cV\nWeimprovethepolicyusingAlgorithmIV.3.Since\n\u0010\u001d \u0010TS\u0004\u0016\n+\u001d\u0017,wehaveatype3aimprovement. Thisyields\n\u0000\u0005\n\u0006\nQ\t \b,forQ\u001d3S\u0013(\u000b\b&(\u000b\n&(\r\f.Onlytheentry Sof D\n\\\nand \u0003\n\\\nhastobemodi®ed,\nwhichyields\u0003\n\u0005\u001d\n\u001f\n \n \n \n!#\"(sD\n\u0005\u001d\n\u001f\u0012aS 6 S \u0012aS\n!# V\nWenexttabulatewithlessdetailstheendoftherunofthealgo-\nrithm.AlgorithmIV.3,type3bpolicyimprovement.\n\u0000q\n\u0006ŒS9\t\f (\u000b\b\u001c\t \n&(\u000b\nF\t \b (\u0015\fF\t \n.AlgorithmIV.1.Valuedetermination.\nCircuitfound, \u00159\u0006 \n9\t \b9\t \n, \u0003f\u001d %\u0013\u0005z%\u001e\b ( \nU+ -\u0007\u0005a%\u001e\n ( \bU+?+\u001a\u0000 \bL\u001d'\u0000 \b.\u0003\nq\u001d$\u001f)(\u0005\n(\u0005\n(\u0005\n(\u0005\n!*#\"(fD\nq\u001d \u001f\n\\Z\\\u0005\n6 \u0012\n\\\u0005\n\n+!*# V\nAlgorithmIV.3,type3bpolicyimprovement.Theonlychangeis\u0000\u0006%\u001e\nU+>\u001d \f.AlgorithmIV.1.Valuedetermination. Circuitfound,\u0015U\u0006 \n \t \f\u001b\t \n,\u0003a\u001d’%\u0013\u0005z%\u001e\n&(\r\fT+C- \u0005a% \f (\u000b\n\u0013+?+\u001a\u0000 \b \u001d SUS\u0015\u0000 \b.\u0003\n\u0006\u001d$\u001f\n\\*\\\u0005\n\\*\\\u0005\n\\Z\\\u0005\n\\Z\\\u0005\n!*#\"(sD\n\u0006\u001d,\u001f \f \u0012\n\\\u0005\n6\n\b\u0005\n!*# V\nAlgorithmIV.3.Stop. SUS\u0015\u0000 \bisaneigenvalueof 9,and D\n\u0006\nisan\neigenvector.\n1\n12 34\n32 1\n2 34\n2\n3442\n1\n2 34\n8\n457\n1\n2 34\n8\n537\n-n\nZ [`\n-‡Z [\nZ [[\n[`[ [/.0\u0013\n\u0016Z [/.1\u00132\nˆQ\n”43.0\u0013\n\u0017N\n”[\n\u0017\u000f\u0019\u0001\u0019\u001a\u0019\u0001\u0017\u001a\n-ˆ\n-‰`\n\u001a2\n‰Q\n”[ [/.0\u0013\n\u0017N\n”[\n\u0017\u000f\u0019\u0001\u0019\u001a\u0019\u000f\u0017\u001a\n2\n‡Q\n”\u0016\n\u0017N\n”[\n\u0017\u001a\u0019\u000f\u0019\u0001\u0019\u0001\u0017\u001a\n2\nnn\n”[,\n2\nnQ\n”\u0016\n\u0017N\n”\u0013\n\u0017\u0001\u0019\u000f\u0019\u0001\u0019\u001a\u0018\n.0\u0013Z [/.1\u0013 `\n`\n”W\nnn\nFig.2.Thesequenceofpoliciesbuiltbythemax-pluspolicyiteration\nalgorithm,forthematrix\nGdisplayedinFig.1.Thevaluationsof\nthenodesindicatethevectors\n+\n=\n,\nC_6\u0010?65&5&5K\u00077.\nB.NumericalTests\nTheresultsofthenumericalexperiments displayedin\nFig.3,4,5shouldbeself-explanatory .7\n0204060\n0200 400 600 8001000 1200 1400\nFig.3.Numberofiterations \u0000\u0002\u0001ofHoward'salgorithmasafunction\nofthedimension,forfullrandommatrices,withi.i.dentriesdis-\ntributeduniformlyonaninterval.\n050100150200\n0 500000 1e+06 1.5e+06 2e+06\nFig.4.Cputime(insec.)ofHoward'salgorithm(inred)vsKarp's\nalgorithm(inblack)onapentium200Mhzwith500MbofRAM,\nasafunctionofthenumberofarcsforfullrandommatrices(same\nprobabilisticcharacteristics asinFig.3).\nReferences\n[1]F.Baccelli,G.Cohen,G.J.Olsder,andJ.P.Quadrat.Synchronizationand\nLinearity.Wiley,1992.\n[2]R.B.Bapat,D.Stanford,andP.vandenDriessche.Patternpropertiesand\nspectralinequalitiesinmaxalgebra.SIAMJournalofMatrixAnalysisand\nApplications,16(3):964±97 6,1995.\n[3]J.Cochet-Terrasson.ÂEtudeetmiseenúuvredelalgorithmedeHoward\nsousdeshypothÁesesfaiblesd'accessibilitÂe.Rapportdestage,ENSTAet\nDEAªModÂelisationetMÂethodesMathÂematiquesenÂEconomieº,Univer-\nsitÂedeParisI,July1996.\n[4]J.Cochet-Terrasson,S.Gaubert,andJ.Gunawardena. 
Dynamicsofmin-\nmaxfunctions.1997.Submittedforpublication.AlsoTechnicalReport\nHPL-BRIMS-97-13.\n050100150200\n020000 40000 60000 80000 100000 120000 140000\nFig.5.Cputime(insec.)ofHoward'salgorithm(inred)vsKarp's\nalgorithm(inblack),onapentium200Mhzwith500MbofRAM,\nasafunctionofthenumberofnodes\u0003,forsparserandommatrices,\nwithexactly5arcsstartingfromeachnode.The5successorsofa\ngivennodearedrawnrandomlyfrom\nN?K\n5&5&5K\u0003\nQwiththeuniform\ndistribution.Thecorresponding valuationsofthearcsarei.i.d.,with\nanuniformdistributiononaninterval.[5]M.G.CrandallandL.Tartar.Somerelationsbetweennonexpansiveand\norderpreservingmaps.ProceedingsoftheAMS,78(3):385±39 0,1980.\n[6]R.A.Cuninghame-Green .MinimaxAlgebra.Number166inLecturenotes\ninEconomicsandMathematical Systems.Springer,1979.\n[7]R.ACuninghame-Green. Minimaxalgebraandapplications. Advancesin\nImagingandElectronPhysics,90,1995.\n[8]R.A.Cuninghame-Green andLinYixun.Maximumcycle-means of\nweighteddigraphs.Appl.Math.JCU,11B:225±234, 1996.\n[9]A.DasdanandR.Gupta.Fastermaximumandminimummeancycleal-\ngorithmsforsystemperformanceanalysis.Technicalreport97-07,Univ.\nofCalifornia,Irvine,1997.SubmittedtoIEEE-TransactionsonComputer-\nAidedDesign.\n[10]E.V.DenardoandB.L.Fox.MultichainMarkovrenewalprograms.SIAM\nJ.Appl.Math,16:468±487,1968.\n[11]S.Gaubert.ThÂeoriedessystÁemeslinÂeairesdanslesdioÈõdes.ThÁese,ÂEcole\ndesMinesdeParis,July1992.\n[12]S.GaubertandJ.Gunawardena. Thedualitytheoremformin-maxfunc-\ntions.C.R.A.S,1997.Accepted.\n[13]S.GaubertandJ.Mairesse.Modellingandanalysisoftimedpetrinetsus-\ningheapsofpieces.1997.Submittedforpublication.Alsotechnicalreport\nLITP/97-14.AnabridgedversioncanbefoundintheProceedingsofthe\nECC'97,Bruxells,1997.\n[14]S.GaubertandM.Plus.Methodsandapplicationsof(max,+)linearalge-\nbra.InR.ReischukandM.Morvan,editors,STACS'97,number1200in\nLNCS,LÈubeck,March1997.Springer.\n[15]M.GondranandM.Minoux.ValeurspropresetvecteurspropresenthÂeorie\ndesgraphes.InProblÁemescombinatoiresetthÂeoriedesgraphes,number\n260inColloquesinternationaux CNRS,Orsay,1976.\n[16]M.GondranandM.Minoux.Valeurspropresetvecteurspropresdansles\ndioÈõdesetleurinterprÂetationenthÂeoriedesgraphes.EDF,Bulletindela\nDirectiondesEtudesetRecherches,SerieC,MathÂematiquesInformatique,\n2:25±41,1977.\n[17]M.GondranandM.Minoux.Graphesetalgorithmes.Eyrolles,Paris,\n1979.Engl.transl.GraphsandAlgorithms,Wiley,1984.\n[18]J.Gunawardena. Cycletimeand®xedpointsofmin-maxfunctions.In\nG.CohenandJ.P.Quadrat,editors,11thInternational ConferenceonAnal-\nysisandOptimization ofSystems,number199inLNCIS,pages266±272.\nSpringer,1994.\n[19]J.Gunawardena andM.Keane.Ontheexistenceofcycletimesforsome\nnonexpansive maps.TechnicalReportHPL-BRIMS-95-003, Hewlett-\nPackardLabs,1995.\n[20]R.M.Karp.Acharacterization oftheminimummean-cycleinadigraph.\nDiscreteMaths.,23:309±311,1978.\n[21]V.KolokoltsovandVMaslov.Idempotentanalysisandapplications.\nKluwerAcad.Publisher,1997.\n[22]V.MaslovandS.SamborskiÆõ,editors.Idempotentanalysis,volume13of\nAdv.inSov.Math.AMS,RI,1992.\n[23]M.L.Puterman.Markovdecisionprocesses.Handbboksinoperationsre-\nsearchandmanagement science,2:331±434,1990.\n[24]I.V.RomanovskiÆõ.Optimization andstationarycontrolofdiscretedeter-\nministicprocessindynamicprogramming. Kibernetika,2:66±78,1967.\nEngl.transl.inCybernetics3(1967).\n[25]R.E.Tarjan.Depth®rstsearchandlineargraphalgorithms.SIAMJ.Com-\nput,1:146±160,1972.\n[26]N.N.Vorobyev.Extremalalgebraofpositivematrices.Elektron.Informa-\ntionsverarbeitung undKybernetik,3,1967.inRussian." } ]
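To make the cycle-time definition in the paper above concrete (Problem 2, equations (5)-(7)): iterating the max-plus recurrence x(k) = A ⊗ x(k-1) and dividing by k approaches the cycle time vector, and for an irreducible matrix every entry tends to the unique eigenvalue (Theorem III.1, Proposition III.2). The Go sketch below is a brute-force reading of that definition on a small made-up matrix; it is not the Howard policy-iteration prototype the authors describe, and it converges far more slowly.

package main

import "fmt"

const negInf = -1e18 // stand-in for the max-plus zero element (minus infinity)

// maxPlusApply computes y = A ⊗ x, i.e. y[i] = max_j (A[i][j] + x[j]).
func maxPlusApply(a [][]float64, x []float64) []float64 {
	y := make([]float64, len(x))
	for i := range a {
		best := negInf
		for j, v := range a[i] {
			if v != negInf && v+x[j] > best {
				best = v + x[j]
			}
		}
		y[i] = best
	}
	return y
}

func main() {
	// A small irreducible example matrix, made up for this sketch, with
	// a[i][j] the weight of the arc i -> j. Its circuits are the loop at
	// node 1 (mean 1), the circuit 2 -> 3 -> 2 (mean (5+3)/2 = 4) and
	// 1 -> 2 -> 3 -> 1 (mean 5/3), so the maximal circuit mean is 4.
	a := [][]float64{
		{1, 0, negInf},
		{negInf, negInf, 5},
		{0, 3, negInf},
	}

	x := []float64{0, 0, 0}
	const k = 10000
	for step := 0; step < k; step++ {
		x = maxPlusApply(a, x)
	}
	// Entry i of the cycle time vector is approximated by x_i(k)/k; for an
	// irreducible matrix every entry should approach the eigenvalue 4.
	for i, v := range x {
		fmt.Printf("cycle time estimate for node %d: %.4f\n", i+1, v/k)
	}
}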
{ "category": "App Definition and Development", "file_name": "cochet-terrasson98numerical.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "[Figure: Druid Scaling − 100GB. Query time in seconds (axis 0 to 150) for the queries count_star_interval, sum_all, sum_all_filter, sum_all_year, sum_price, top_100_commitdate, top_100_parts, top_100_parts_details and top_100_parts_filter.]" } ]
{ "category": "App Definition and Development", "file_name": "tpch_scaling.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "Druid: Open Source Real-time Analytics at Scale\nFangjin Y ang\nMetamarkets Group, Inc.\nfangjin@metamar-\nkets.comEric Tschetter\necheddar@gmail.comXavier Léauté\nMetamarkets Group, Inc.\nxavier@metamar-\nkets.com\nNishant Bangarwa\nMetamarkets Group, Inc.\nnishant@metamar-\nkets.comNelson Ray\nncray86@gmail.comGian Merlino\nMetamarkets Group, Inc.\ngian@metamarkets.com\nABSTRACT\nDruid is an open source1data store built for exploratory\nanalytics on large data sets. Druid supports fast data ag-\ngregation, low latency data ingestion, and arbitrary data\nexploration. The system combines a column-oriented stor-\nage layout, a distributed, shared-nothing architecture, and\nan advanced indexing structure to return queries on billions\nof rows in milliseconds. Druid is petabyte scale and is de-\nployed in production at several technology companies.\n1. INTRODUCTION\nTherecentproliferation ofinternet technologyhas created\na surge in machine-generated events. Individually, these\nevents contain minimal useful information and are of low\nvalue. Given the time and resources required to extract\nmeaning from large collections of events, many companies\nwere willing to discard this data instead.\nA few years ago, Google introduced MapReduce as their\nmechanism of leveraging commodity hardware to index the\ninternetandanalyzelogs. TheHadoopprojectsoonfollowed\nand was largely patterned after the insights that came out\nof the original MapReduce paper. Hadoop has contributed\nmuch to helping companies convert their low-value event\nstreams into high-value aggregates for a variety of applica-\ntions such as business intelligence and A-B testing.\nAs with a lot of great systems, Hadoop has opened our\neyes to a new space of problems. Specifically, Hadoop ex-\ncels at storing and providing access to large amounts of\ndata, however, it does notmakeanyperformanceguarantees\naroundhowquicklythatdatacanbeaccessed. Furthermore,\nalthough Hadoop is a highly available system, performance\ndegradesunderheavyconcurrentload. Lastly, whileHadoop\nworks well for storing data, it is not optimized for ingesting\ndata and making that data immediately readable.\n1https://github.com/metamx/druid1.1 The Need for Druid\nDruidwasoriginallydesignedtosolveproblemsaroundin-\ngestingandexploringlargequantitiesoftransactionalevents\n(log data). This form of timeseries data (OLAP data) is\ncommonly found in the business intelligence space and the\nnature of the data tends to be very append heavy. Events\ntypically have three distinct components: a timestamp col-\numn indicating when the event occurred, a set of dimension\ncolumns indicating various attributes about the event, and\na set of metric columns containing values (usually numeric)\nthat can be aggregated. Queries are typically issued for the\nsum of some set of metrics, filtered by some set of dimen-\nsions, over some span of time.\nTheDruidprojectfirstbeganoutofnecessityatMetamar-\nkets to power a business intelligence dashboard that allowed\nusers to arbitrarily explore and visualize event streams. Ex-\nisting open source Relational Database Management Sys-\ntems, cluster computing frameworks, and NoSQL key/value\nstores were unable to provide a low latency data ingestion\nand query platform for an interactive dashboard. 
Queries\nneeded to return fast enough to allow the data visualizations\nin the dashboard to update interactively.\nIn addition to the query latency needs, the system had\nto be multi-tenant and highly available, as the dashboard\nis used in a highly concurrent environment. Downtime is\ncostly and many businesses cannot afford to wait if a system\nis unavailable in the face of software upgrades or network\nfailure. Finally, Metamarkets also wanted to allow users\nand alerting systems to be able to make business decisions\nin “real-time”. The time from when an event is created to\nwhen that event is queryable determines how fast users and\nsystems are able to react to potentially catastrophic occur-\nrences in their systems.\nTheproblemsofdataexploration, ingestion, andavailabil-\nity span multiple industries. Since Druid was open sourced\nin October 2012, it has been deployed as a video, network\nmonitoring, operations monitoring, and online advertising\nanalytics platform at multiple companies2.\n2. ARCHITECTURE\nA Druid cluster consists of different types of nodes and\neachnodetypeisdesignedtoperformaspecificsetofthings.\nWe believe this design separates concerns and simplifies the\n2http://druid.io/druid.html\n1Figure 1: An overview of a Druid cluster and the flow of data through the cluster.\ncomplexity of the system. The different node types operate\nfairly independently of each other and there is minimal in-\nteraction among them. Hence, intra-cluster communication\nfailures have minimal impact on data availability. To solve\ncomplex data analysis problems, the different node types\ncome together to form a fully working system. The compo-\nsition of and flow of data in a Druid cluster are shown in\nFigure 1. All Druid nodes announce their availability and\nthe data they are serving over Zookeeper[3].\n2.1 Real-time Nodes\nReal-time nodes encapsulate the functionality to ingest\nand query event streams. Events indexed via these nodes\nare immediately available for querying. These nodes are\nonly concerned with events for some small time range. They\nperiodically hand off batches of immutable events to other\nnodes in the Druid cluster that are specialized in dealing\nwith batches of immutable events.\nReal-time nodes maintain an in-memory index buffer for\nall incoming events. These indexes are incrementally pop-\nulated as new events are ingested and the indexes are also\ndirectly queryable. To avoid heap overflow problems, real-\ntimenodespersisttheirin-memoryindexes todiskeitherpe-\nriodically or after some maximum row limit is reached. This\npersist process converts data stored in the in-memory buffer\nto a column oriented storage format. Each persisted index is\nimmutable and real-time nodes load persisted indexes into\noff-heap memory such that they can still be queried. On\na periodic basis, each real-time node will schedule a back-\nground task that searches for all locally persisted indexes.\nThe task merges these indexes together and builds an im-\nmutable block of data that contains all the events that have\ningested by a real-time node for some span of time. We\nrefer to this block of data as a “segment”. During the hand-\noff stage, a real-time node uploads this segment to perma-\nnent backup storage, typically a distributed file system that\nDruid calls “deep storage”.\n2.2 Historical Nodes\nHistorical nodes encapsulate the functionality to load and\nserve the immutable blocks of data (segments) created by\nreal-time nodes. 
In many real-world workflows, most of the data loaded in a Druid cluster is immutable and hence historical nodes are typically the main workers of a Druid cluster. Historical nodes follow a shared-nothing architecture and there is no single point of contention among the nodes. The nodes have no knowledge of one another and are operationally simple; they only know how to load, drop, and serve immutable segments.\n2.3 Broker Nodes\nBroker nodes act as query routers to historical and real-time nodes. Broker nodes understand what segments are queryable and where those segments are located. Broker nodes route incoming queries such that the queries hit the right historical or real-time nodes. Broker nodes also merge partial results from historical and real-time nodes before returning a final consolidated result to the caller.\n2.4 Coordinator Nodes\nDruid coordinator nodes are primarily in charge of data management and distribution on historical nodes. The coordinator nodes tell historical nodes to load new data, drop outdated data, replicate data, and move data to load balance. Coordinator nodes undergo a leader-election process that determines a single node that runs the coordinator functionality. The remaining coordinator nodes act as redundant backups.\nA coordinator node runs periodically to determine the current state of the cluster. It makes decisions by comparing the expected state of the cluster with the actual state of the cluster at the time of the run. Coordinator nodes also maintain a connection to a MySQL database that contains additional operational parameters and configurations. One of the key pieces of information located in the MySQL database is a table that contains a list of all segments that should be served by historical nodes. This table can be updated by any service that creates segments, such as real-time nodes.\n2.5 Query Processing\nData tables in Druid (called data sources) are collections of timestamped events partitioned into a set of segments, where each segment is typically 5–10 million rows. Formally, we define a segment as a collection of rows of data that span some period in time. Segments represent the fundamental storage unit in Druid and replication and distribution are done at a segment level.\nTimestamp City Revenue\n2014-01-01T01:00:00Z San Francisco 25\n2014-01-01T01:00:00Z San Francisco 42\n2014-01-01T02:00:00Z New York 17\n2014-01-01T02:00:00Z New York 170\nTable 1: Sample sales data set.\nDruid segments are stored in a column orientation. Given that Druid is best used for aggregating event streams (all data going into Druid must have a timestamp), the advantages of storing aggregate information as columns rather than rows are well documented [1]. Column storage allows for more efficient CPU usage as only what is needed is actually loaded and scanned.\nDruid has multiple column types to represent various data formats. Depending on the column type, different compression methods are used to reduce the cost of storing a column in memory and on disk. For example, if an entire column only contains string values, storing the raw strings is unnecessarily costly. String columns can be dictionary encoded instead. Dictionary encoding is a common method to compress data in column stores.\nIn many real-world OLAP workflows, queries are issued for the aggregated results of some set of metrics where some set of dimension specifications are met. Consider Table 1.
An example query for this table may ask: “How much revenue was generated in the first hour of 2014-01-01 in the city of San Francisco?”. This query is filtering a sales data set based on a Boolean expression of dimension values. In many real-world data sets, dimension columns contain strings and metric columns contain numbers. Druid creates additional lookup indices for string columns such that only those rows that pertain to a particular query filter are ever scanned.\nFor each unique city in Table 1, we can form some representation indicating in which table rows a particular city is seen. We can store this information in a binary array where the array indices represent our rows. If a particular city is seen in a certain row, that array index is marked as 1. For example:\nSan Francisco -> rows [0, 1] -> [1][1][0][0]\nNew York -> rows [2, 3] -> [0][0][1][1]\nSan Francisco is seen in rows 0 and 1. This mapping of column values to row indices forms an inverted index [4]. To know which rows contain San Francisco or New York, we can OR together the two arrays.\n[1][1][0][0] OR [0][0][1][1] = [1][1][1][1]\nThis approach of performing Boolean operations on large bitmap sets is commonly used in search engines. Druid compresses each bitmap index using the Concise algorithm [2]. All Boolean operations on top of these Concise sets are done without decompressing the set.\n2.6 Query Capabilities\nDruid supports many types of aggregations including double sums, long sums, minimums, maximums, and complex aggregations such as cardinality estimation and approximate quantile estimation. The results of aggregations can be combined in mathematical expressions to form other aggregations. Druid supports different query types ranging from simple aggregates over an interval of time to groupBys and approximate top-K queries.\nFigure 2: Query latencies of production data sources (mean query latency per data source).\nFigure 3: Druid & MySQL benchmarks – 100GB TPC-H data (median query time over 5 runs, single node, for a set of aggregation and top-n queries).\n3. PERFORMANCE\nDruid runs in production at several organizations, and to briefly demonstrate its performance, we have chosen to share some real-world numbers for the main production cluster running at Metamarkets in early 2014. For comparison with other databases we also include results from synthetic workloads on TPC-H data.\n3.1 Query Performance\nQuery latencies are shown in Figure 2 for a cluster hosting approximately 10.5TB of data using 1302 processing threads and 672 total cores (hyperthreaded). There are approximately 50 billion rows of data in this cluster.\nThe average queries per minute during this time was approximately 1000. The number of dimensions in the various data sources varies from 25 to 78, and the number of metrics varies from 8 to 35.
Across all the various data sources, average query latency is approximately 550 milliseconds, with 90% of queries returning in less than 1 second, 95% in under 2 seconds, and 99% of queries returning in less than 10 seconds.\nApproximately 30% of the queries are standard aggregates involving different types of metrics and filters, 60% of queries are ordered group bys over one or more dimensions with aggregates, and 10% of queries are search queries and metadata retrieval queries. The number of columns scanned in aggregate queries roughly follows an exponential distribution. Queries involving a single column are very frequent, and queries involving all columns are very rare.\nFigure 4: Combined cluster ingestion rates (events per second, 24-hour moving average).\nWe also present Druid benchmarks on TPC-H data in Figure 3. Most TPC-H queries do not directly apply to Druid, so we selected queries more typical of Druid’s workload to demonstrate query performance. As a comparison, we also provide the results of the same queries using MySQL with the MyISAM engine (InnoDB was slower in our experiments).\nWe benchmarked Druid’s scan rate at 53,539,211 rows/second/core for a select count(*) equivalent query over a given time interval and 36,246,530 rows/second/core for a select sum(float) type query.\n3.2 Data Ingestion Performance\nTo showcase Druid’s data ingestion latency, we selected several production datasources of varying dimensions, metrics, and event volumes. Druid’s data ingestion latency is heavily dependent on the complexity of the data set being ingested. The data complexity is determined by the number of dimensions in each event, the number of metrics in each event, and the types of aggregations we want to perform on those metrics.\nFor the given datasources, the number of dimensions varies from 5 to 35, and the number of metrics varies from 2 to 24. The peak ingestion rate we measured in production was 22914.43 events/second/core on a datasource with 30 dimensions and 19 metrics.\nThe latency measurements we presented are sufficient to address our stated problems of interactivity. We would prefer the variability in the latencies to be less, which can be achieved by adding additional hardware, but we have not chosen to do so because of cost concerns.\n4. DEMONSTRATION DETAILS\nWe would like to do an end-to-end demonstration of Druid, from setting up a cluster and ingesting data to structuring a query and obtaining results. We would also like to showcase how to solve real-world data analysis problems with Druid and demonstrate tools that can be built on top of it, including interactive data visualizations, approximate algorithms, and machine-learning components. We already use similar tools in production.\n4.1 Setup\nUsers will be able to set up a local Druid cluster to better understand the components and architecture of the system. Druid is designed to run on commodity hardware and Druid nodes are simply Java processes that need to be started up. The local setup will allow users to ingest data from Twitter’s public API and query it. We will also provide users access to an AWS-hosted Druid cluster that contains several terabytes of Twitter data that we have been collecting for over 2 years. There are over 3 billion tweets in this data set, and new events are constantly being ingested.
We will walk through a variety of different queries to demonstrate Druid’s arbitrary data-exploration capabilities.\nFinally, we will teach users how to build a simple interactive dashboard on top of Druid. The dashboard will use some of Druid’s more powerful features such as approximate algorithms for quickly determining the cardinality of sets, and machine learning algorithms for scientific computing problems such as anomaly detection. These use cases represent some of the more interesting problems we use Druid for in production.\n4.2 Goals\nWe will not only walk users through solving real-world problems with Druid and different tools that have been built on top of Druid, but also answer conference-specific questions such as what are the trending tweets and topics at VLDB, what netizens are conversing about in the general area, and even perform a sentiment analysis of VLDB. Our goal is to clearly explain why the architecture of Druid makes it well suited for certain types of queries, and the potential of the system as a real-time analytics platform.\n5. ACKNOWLEDGMENTS\nDruid could not have been built without the help of many great people in the community. We want to thank everyone who has contributed to the Druid codebase for their invaluable support.\n6. ADDITIONAL AUTHORS\nAdditional authors: Deep Ganguli (Metamarkets Group, Inc., deep@metamarkets.com), Himadri Singh (Metamarkets Group, Inc., himadri@metamarkets.com), Igal Levy (Metamarkets Group, Inc., igal@metamarkets.com)\n7. REFERENCES\n[1] D. J. Abadi, S. R. Madden, and N. Hachem. Column-stores vs. row-stores: How different are they really? In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 967–980. ACM, 2008.\n[2] A. Colantonio and R. Di Pietro. Concise: Compressed ’n’ composable integer set. Information Processing Letters, 110(16):644–650, 2010.\n[3] P. Hunt, M. Konar, F. P. Junqueira, and B. Reed. Zookeeper: Wait-free coordination for internet-scale systems. In USENIX ATC, volume 10, 2010.\n[4] A. Tomasic and H. Garcia-Molina. Performance of inverted indices in shared-nothing distributed text document information retrieval systems. In Proceedings of the Second International Conference on Parallel and Distributed Information Systems, pages 8–17. IEEE, 1993." } ]
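As a concrete companion to the inverted-index walkthrough in Section 2.5 of the paper above, the following minimal, self-contained Java sketch (not taken from the Druid codebase; the class and variable names are illustrative) dictionary-encodes the city column of Table 1, builds one bitmap per city value, ORs the bitmaps to evaluate the filter city = San Francisco OR city = New York, and sums the revenue metric over only the matching rows:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.BitSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BitmapIndexSketch {
    public static void main(String[] args) {
        // Table 1: four rows with a city dimension and a revenue metric.
        String[] city    = {"San Francisco", "San Francisco", "New York", "New York"};
        long[]   revenue = {25, 42, 17, 170};

        // Dictionary-encode the string column: each distinct value maps to a small integer id,
        // and each id gets an inverted-index bitmap marking the rows in which the value appears.
        Map<String, Integer> dictionary = new LinkedHashMap<>();
        int[] encodedCity = new int[city.length];
        List<BitSet> bitmaps = new ArrayList<>();
        for (int row = 0; row < city.length; row++) {
            Integer id = dictionary.get(city[row]);
            if (id == null) {
                id = dictionary.size();
                dictionary.put(city[row], id);
                bitmaps.add(new BitSet(city.length));
            }
            encodedCity[row] = id;
            bitmaps.get(id).set(row);
        }
        // dictionary = {San Francisco=0, New York=1}
        // bitmaps    = San Francisco -> [1][1][0][0], New York -> [0][0][1][1]

        // The filter "city = San Francisco OR city = New York" is a bitwise OR of the two bitmaps.
        BitSet matching = (BitSet) bitmaps.get(dictionary.get("San Francisco")).clone();
        matching.or(bitmaps.get(dictionary.get("New York")));

        // Aggregate the revenue metric over only the matching rows.
        long total = matching.stream().mapToLong(row -> revenue[row]).sum();
        System.out.println("encoded city column: " + Arrays.toString(encodedCity));
        System.out.println("matching rows:       " + matching);
        System.out.println("sum(revenue):        " + total);
    }
}

Run as a standalone program, this prints the encoded column [0, 0, 1, 1], the matching row set {0, 1, 2, 3}, and a revenue sum of 254. The paper's production setting applies the same idea with Concise-compressed bitmaps, so the OR can be evaluated without decompressing the sets.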
{ "category": "App Definition and Development", "file_name": "druid_demo.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "Using VoltDB\nAbstract\nThis book explains how to use VoltDB to design, build, and run high performance applica-\ntions.\nV11.3Using VoltDB\nV11.3\nCopyright © 2008-2022 Volt Active Data, Inc.\nThe text and illustrations in this document are licensed under the terms of the GNU Affero General Public License Version 3 as published by the\nFree Software Foundation. See the GNU Affero General Public License ( http://www.gnu.org/licenses/ ) for more details.\nMany of the core VoltDB database features described herein are part of the VoltDB Community Edition, which is licensed under the GNU Affero\nPublic License 3 as published by the Free Software Foundation. Other features are specific to the VoltDB Enterprise Edition and VoltDB Pro, which\nare distributed by Volt Active Data, Inc. under a commercial license.\nThe VoltDB client libraries, for accessing VoltDB databases programmatically, are licensed separately under the MIT license.\nYour rights to access and use VoltDB features described herein are defined by the license you received when you acquired the software.\nVoltDB is a trademark of Volt Active Data, Inc.\nVoltDB software is protected by U.S. Patent Nos. 9,600,514, 9,639,571, 10,067,999, 10,176,240, and 10,268,707. Other patents pending.\nThis document was generated on March 07, 2022.Table of Contents\nAbout This Book .............................................................................................................. xiv\n1. Overview ....................................................................................................................... 1\n1.1. What is VoltDB? .................................................................................................. 1\n1.2. Who Should Use VoltDB ....................................................................................... 1\n1.3. How VoltDB Works .............................................................................................. 2\n1.3.1. Partitioning ................................................................................................ 2\n1.3.2. Serialized (Single-Threaded) Processing ......................................................... 2\n1.3.3. Partitioned vs. Replicated Tables ................................................................... 3\n1.3.4. Ease of Scaling to Meet Application Needs ..................................................... 4\n1.4. Working with VoltDB Effectively ............................................................................ 4\n2. Installing VoltDB ............................................................................................................ 5\n2.1. Operating System and Software Requirements ............................................................ 5\n2.2. Installing VoltDB .................................................................................................. 6\n2.2.1. Upgrading From Older Versions ................................................................... 6\n2.2.2. Building a New VoltDB Distribution Kit ........................................................ 7\n2.3. Setting Up Your Environment ................................................................................. 7\n2.4. What is Included in the VoltDB Distribution ............................................................. 7\n2.5. VoltDB in Action: Running the Sample Applications .................................................. 8\n3. 
Starting the Database ....................................................................................................... 9\n3.1. Initializing and Starting a VoltDB Database ............................................................... 9\n3.2. Initializing and Starting a VoltDB Database on a Cluster ............................................ 10\n3.3. Stopping a VoltDB Database ................................................................................. 11\n3.4. Saving the Data .................................................................................................. 12\n3.5. Restarting a VoltDB Database ............................................................................... 12\n3.6. Updating Nodes on the Cluster .............................................................................. 12\n3.7. Defining the Cluster Configuration ......................................................................... 13\n3.7.1. Determining How Many Sites per Host ......................................................... 13\n3.7.2. Configuring Paths for Runtime Features ........................................................ 14\n3.7.3. Verifying your Hardware Configuration ........................................................ 15\n4. Designing the Database Schema ....................................................................................... 16\n4.1. How to Enter DDL Statements .............................................................................. 17\n4.2. Creating Tables and Primary Keys ......................................................................... 18\n4.3. Analyzing Data Volume and Workload ................................................................... 19\n4.4. Partitioning Database Tables ................................................................................. 20\n4.4.1. Choosing a Column on which to Partition Table Rows ..................................... 20\n4.4.2. Specifying Partitioned Tables ...................................................................... 21\n4.4.3. Design Rules for Partitioning Tables ............................................................ 21\n4.5. Replicating Database Tables .................................................................................. 21\n4.5.1. Choosing Replicated Tables ........................................................................ 22\n4.5.2. Specifying Replicated Tables ...................................................................... 22\n4.6. Modifying the Schema ......................................................................................... 22\n4.6.1. Effects of Schema Changes on Data and Clients ............................................. 23\n4.6.2. Viewing the Schema .................................................................................. 24\n4.6.3. Modifying Tables ...................................................................................... 24\n4.6.4. Adding and Dropping Indexes ..................................................................... 26\n4.6.5. Modifying Partitioning for Tables and Stored Procedures ................................. 27\n5. Designing Stored Procedures to Access the Database ........................................................... 31\n5.1. How Stored Procedures Work ................................................................................ 31\n5.1.1. VoltDB Stored Procedures are Transactional .................................................. 31\n5.1.2. 
VoltDB Stored Procedures are Deterministic .................................................. 31\n5.2. The Anatomy of a VoltDB Stored Procedure ............................................................ 33\niiiUsing VoltDB\n5.2.1. The Structure of the Stored Procedure .......................................................... 34\n5.2.2. Passing Arguments to a Stored Procedure ...................................................... 35\n5.2.3. Creating and Executing SQL Queries in Stored Procedures ............................... 36\n5.2.4. Interpreting the Results of SQL Queries ........................................................ 37\n5.2.5. Returning Results from a Stored Procedure .................................................... 40\n5.2.6. Rolling Back a Transaction ......................................................................... 41\n5.3. Installing Stored Procedures into the Database .......................................................... 41\n5.3.1. Compiling, Packaging, and Loading Stored Procedures .................................... 42\n5.3.2. Declaring Stored Procedures in the Schema ................................................... 42\n5.3.3. Partitioning Stored Procedures in the Schema ................................................. 43\n6. Designing VoltDB Client Applications .............................................................................. 47\n6.1. Connecting to the VoltDB Database ....................................................................... 47\n6.1.1. Connecting to Multiple Servers ................................................................... 48\n6.1.2. Using the Auto-Connecting Client ............................................................... 48\n6.2. Invoking Stored Procedures ................................................................................... 49\n6.3. Invoking Stored Procedures Asynchronously ............................................................ 49\n6.4. Closing the Connection ........................................................................................ 50\n6.5. Handling Errors .................................................................................................. 51\n6.5.1. Interpreting Execution Errors ...................................................................... 51\n6.5.2. Handling Timeouts .................................................................................... 52\n6.5.3. Writing a Status Listener to Interpret Other Errors .......................................... 54\n6.6. Compiling and Running Client Applications ............................................................. 56\n6.6.1. Starting the Client Application .................................................................... 56\n6.6.2. Running Clients from Outside the Cluster ..................................................... 56\n7. Simplifying Application Development ............................................................................... 58\n7.1. Using Default Procedures ..................................................................................... 58\n7.2. Shortcut for Defining Simple Stored Procedures ....................................................... 59\n7.3. Verifying Expected Query Results .......................................................................... 60\n7.4. Scheduling Stored Procedures as Tasks ................................................................... 61\n7.5. 
Directed Procedures: Distributing Transactions to Every Partition ................................. 62\n8. Using VoltDB with Other Programming Languages ............................................................. 64\n8.1. C++ Client Interface ............................................................................................ 64\n8.1.1. Writing VoltDB Client Applications in C++ .................................................. 64\n8.1.2. Creating a Connection to the Database Cluster ............................................... 65\n8.1.3. Invoking Stored Procedures ........................................................................ 65\n8.1.4. Invoking Stored Procedures Asynchronously .................................................. 66\n8.1.5. Interpreting the Results .............................................................................. 67\n8.2. JSON HTTP Interface .......................................................................................... 67\n8.2.1. How the JSON Interface Works .................................................................. 67\n8.2.2. Using the JSON Interface from Client Applications ......................................... 69\n8.2.3. How Parameters Are Interpreted .................................................................. 71\n8.2.4. Interpreting the JSON Results ..................................................................... 72\n8.2.5. Error Handling using the JSON Interface ...................................................... 73\n8.3. JDBC Interface ................................................................................................... 74\n8.3.1. Using JDBC to Connect to a VoltDB Database .............................................. 74\n8.3.2. Using JDBC to Query a VoltDB Database ..................................................... 75\n9. Using VoltDB in a Cluster .............................................................................................. 77\n9.1. Starting a Database Cluster ................................................................................... 77\n9.2. Updating the Cluster Configuration ........................................................................ 77\n9.3. Elastic Scaling to Resize the Cluster ....................................................................... 78\n9.3.1. Adding Nodes with Elastic Scaling .............................................................. 79\n9.3.2. Removing Nodes with Elastic Scaling .......................................................... 80\n9.3.3. Configuring How VoltDB Rebalances Nodes During Elastic Scaling .................. 80\n10. Availability ................................................................................................................. 82\nivUsing VoltDB\n10.1. How K-Safety Works ......................................................................................... 82\n10.2. Enabling K-Safety ............................................................................................. 83\n10.2.1. What Happens When You Enable K-Safety ................................................. 84\n10.2.2. Calculating the Appropriate Number of Nodes for K-Safety ............................ 84\n10.3. Recovering from System Failures ......................................................................... 85\n10.3.1. What Happens When a Node Rejoins the Cluster .......................................... 85\n10.3.2. 
Where and When Recovery May Fail ......................................................... 86\n10.4. Avoiding Network Partitions ................................................................................ 86\n10.4.1. K-Safety and Network Partitions ................................................................ 86\n10.4.2. Using Network Fault Protection ................................................................. 87\n11. Database Replication .................................................................................................... 90\n11.1. How Database Replication Works ......................................................................... 91\n11.1.1. Starting Database Replication .................................................................... 92\n11.1.2. Database Replication, Availability, and Disaster Recovery .............................. 93\n11.1.3. Database Replication and Completeness ...................................................... 94\n11.2. Using Passive Database Replication ...................................................................... 95\n11.2.1. Specifying the DR Tables in the Schema ..................................................... 95\n11.2.2. Configuring the Clusters ........................................................................... 96\n11.2.3. Starting the Clusters ................................................................................ 96\n11.2.4. Loading the Schema and Starting Replication ............................................... 96\n11.2.5. Updating the Schema During Replication .................................................... 97\n11.2.6. Stopping Replication ................................................................................ 98\n11.2.7. Database Replication and Read-only Clients ............................................... 100\n11.3. Using Cross Datacenter Replication ..................................................................... 100\n11.3.1. Designing Your Schema for Active Replication ........................................... 101\n11.3.2. Configuring the Database Clusters ............................................................ 101\n11.3.3. Starting the Database Clusters .................................................................. 103\n11.3.4. Loading a Matching Schema and Starting Replication .................................. 104\n11.3.5. Updating the Schema During Active Replication ......................................... 104\n11.3.6. Stopping Replication .............................................................................. 105\n11.3.7. Example XDCR Configurations ............................................................... 106\n11.3.8. Understanding Conflict Resolution ............................................................ 106\n11.4. Monitoring Database Replication ........................................................................ 114\n12. Security .................................................................................................................... 115\n12.1. How Security Works in VoltDB ......................................................................... 115\n12.2. Enabling Authentication and Authorization ........................................................... 115\n12.3. Defining Users and Roles .................................................................................. 116\n12.4. Assigning Access to Stored Procedures ................................................................ 117\n12.5. 
Assigning Access by Function (System Procedures, SQL Queries, and Default Proce-\ndures) ..................................................................................................................... 117\n12.6. Using Built-in Roles ......................................................................................... 118\n12.7. Encrypting VoltDB Communication Using TLS/SSL .............................................. 118\n12.7.1. Configuring TLS/SSL on the VoltDB Server .............................................. 119\n12.7.2. Choosing What Ports to Encrypt with TLS/SSL .......................................... 120\n12.7.3. Using the VoltDB Command Line Utilities with TLS/SSL ............................ 120\n12.7.4. Implementing TLS/SSL in the Java Client Applications ................................ 121\n12.7.5. Configuring Database Replication (DR) With TLS/SSL ................................ 121\n12.8. Integrating Kerberos Security with VoltDB ........................................................... 122\n12.8.1. Installing and Configuring Kerberos .......................................................... 122\n12.8.2. Installing and Configuring the Java Security Extensions ................................ 123\n12.8.3. Configuring the VoltDB Servers and Clients ............................................... 124\n12.8.4. Accessing the Database from the Command Line and the Web ....................... 126\n13. Saving & Restoring a VoltDB Database ......................................................................... 127\n13.1. Performing a Manual Save and Restore of a VoltDB Cluster .................................... 127\nvUsing VoltDB\n13.1.1. How to Save the Contents of a VoltDB Database ........................................ 128\n13.1.2. How to Restore the Contents of a VoltDB Database Manually ........................ 128\n13.1.3. Changing the Cluster Configuration Using Save and Restore .......................... 129\n13.2. Scheduling Automated Snapshots ........................................................................ 131\n13.3. Managing Snapshots ......................................................................................... 131\n13.4. Special Notes Concerning Save and Restore ......................................................... 132\n14. Command Logging and Recovery .................................................................................. 133\n14.1. How Command Logging Works ......................................................................... 133\n14.2. Controlling Command Logging .......................................................................... 134\n14.3. Configuring Command Logging for Optimal Performance ....................................... 134\n14.3.1. Log Size .............................................................................................. 135\n14.3.2. Log Frequency ...................................................................................... 135\n14.3.3. Synchronous vs. Asynchronous Logging .................................................... 135\n14.3.4. Hardware Considerations ........................................................................ 136\n15. Streaming Data: Import, Export, and Migration ................................................................ 138\n15.1. How Data Streaming Works in VoltDB ............................................................... 139\n15.1.1. 
Understanding Import ............................................................................. 141\n15.1.2. Understanding Export ............................................................................. 141\n15.1.3. Understanding Migration ......................................................................... 142\n15.1.4. Understanding Topics ............................................................................. 143\n15.2. The Business Case for Streaming Data ................................................................ 144\n15.2.1. Extract, Transform, Load (ETL) ............................................................... 145\n15.2.2. Change Data Capture ............................................................................. 145\n15.2.3. Streaming Data Validation ...................................................................... 146\n15.2.4. Caching ............................................................................................... 147\n15.2.5. Archiving ............................................................................................. 148\n15.3. VoltDB Export Connectors ................................................................................ 148\n15.3.1. How Export Works ................................................................................ 149\n15.3.2. The File Export Connector ...................................................................... 150\n15.3.3. The HTTP Export Connector ................................................................... 153\n15.3.4. The JDBC Export Connector ................................................................... 157\n15.3.5. The Kafka Export Connector ................................................................... 158\n15.3.6. The Elasticsearch Export Connector .......................................................... 161\n15.4. VoltDB Import Connectors ................................................................................ 162\n15.4.1. Bulk Loading Data Using VoltDB Standalone Utilities ................................. 162\n15.4.2. Streaming Import Using Built-in Import Features ........................................ 163\n15.4.3. The Kafka Importer ............................................................................... 164\n15.4.4. The Kinesis Importer ............................................................................. 166\n15.5. VoltDB Import Formatters ................................................................................. 167\n15.6. VoltDB Topics ................................................................................................ 168\n15.6.1. Types of VoltDB Topics ......................................................................... 169\n15.6.2. Declaring VoltDB Topics ........................................................................ 170\n15.6.3. Configuring and Managing Topics ............................................................ 171\n15.6.4. Configuring the Topic Server ................................................................... 174\n15.6.5. Calling Topics from Consumers and Producers ........................................... 175\n15.6.6. Using Opaque Topics ............................................................................. 176\nA. Supported SQL DDL Statements .................................................................................... 177\nALTER STREAM ................................................................................................... 
178\nALTER TABLE ...................................................................................................... 179\nALTER TASK ........................................................................................................ 182\nCREATE AGGREGATE FUNCTION ......................................................................... 183\nCREATE FUNCTION .............................................................................................. 185\nCREATE INDEX .................................................................................................... 187\nCREATE PROCEDURE AS ...................................................................................... 189\nviUsing VoltDB\nCREATE PROCEDURE FROM CLASS ..................................................................... 191\nCREATE ROLE ...................................................................................................... 193\nCREATE STREAM ................................................................................................. 195\nCREATE TABLE .................................................................................................... 199\nCREATE TASK ...................................................................................................... 206\nCREATE VIEW ...................................................................................................... 209\nDR TABLE ............................................................................................................ 211\nDROP FUNCTION .................................................................................................. 212\nDROP INDEX ........................................................................................................ 213\nDROP PROCEDURE ............................................................................................... 214\nDROP ROLE .......................................................................................................... 215\nDROP STREAM ..................................................................................................... 216\nDROP TABLE ........................................................................................................ 217\nDROP TASK .......................................................................................................... 218\nDROP VIEW .......................................................................................................... 219\nPARTITION PROCEDURE ...................................................................................... 220\nPARTITION TABLE ............................................................................................... 222\nB. Supported SQL Statements ............................................................................................ 223\nDELETE ................................................................................................................ 224\nINSERT ................................................................................................................. 226\nMIGRATE .............................................................................................................. 228\nSELECT ................................................................................................................. 229\nTRUNCATE TABLE ............................................................................................... 
238\nUPDATE ................................................................................................................ 239\nUPSERT ................................................................................................................ 240\nC. SQL Functions ............................................................................................................ 242\nABS() .................................................................................................................... 245\nAPPROX_COUNT_DISTINCT() ............................................................................... 246\nAREA() .................................................................................................................. 247\nARRAY_ELEMENT() .............................................................................................. 248\nARRAY_LENGTH() ................................................................................................ 249\nASTEXT() .............................................................................................................. 250\nAVG() ................................................................................................................... 251\nBIN() ..................................................................................................................... 252\nBIT_SHIFT_LEFT() ................................................................................................. 253\nBIT_SHIFT_RIGHT() .............................................................................................. 254\nBITAND() .............................................................................................................. 255\nBITNOT() .............................................................................................................. 256\nBITOR() ................................................................................................................. 257\nBITXOR() .............................................................................................................. 258\nCAST() .................................................................................................................. 259\nCEILING() ............................................................................................................. 260\nCENTROID() .......................................................................................................... 261\nCHAR() ................................................................................................................. 262\nCHAR_LENGTH() .................................................................................................. 263\nCOALESCE() ......................................................................................................... 264\nCONCAT() ............................................................................................................. 265\nCONTAINS() .......................................................................................................... 266\nCOS() .................................................................................................................... 267\nCOT() .................................................................................................................... 268\nCOUNT() ............................................................................................................... 
269\nCSC() .................................................................................................................... 270\nCURRENT_TIMESTAMP() ...................................................................................... 271\nDATEADD() ........................................................................................................... 272\nviiUsing VoltDB\nDAY(), DAYOFMONTH() ........................................................................................ 273\nDAYOFWEEK() ...................................................................................................... 274\nDAYOFYEAR() ...................................................................................................... 275\nDECODE() ............................................................................................................. 276\nDEGREES() ............................................................................................................ 277\nDISTANCE() .......................................................................................................... 278\nDWITHIN() ............................................................................................................ 279\nEXP() .................................................................................................................... 280\nEXTRACT() ........................................................................................................... 281\nFIELD() ................................................................................................................. 283\nFLOOR() ................................................................................................................ 285\nFORMAT_CURRENCY() ......................................................................................... 286\nFORMAT_TIMESTAMP() ....................................................................................... 287\nFROM_UNIXTIME() ............................................................................................... 288\nHEX() .................................................................................................................... 289\nHOUR() ................................................................................................................. 290\nINET6_ATON() ...................................................................................................... 291\nINET6_NTOA() ...................................................................................................... 292\nINET_ATON() ........................................................................................................ 293\nINET_NTOA() ........................................................................................................ 294\nISINVALIDREASON() ............................................................................................ 295\nISVALID() ............................................................................................................. 296\nIS_VALID_TIMESTAMP() ....................................................................................... 298\nLATITUDE() .......................................................................................................... 299\nLEFT() ................................................................................................................... 300\nLN(), LOG() ........................................................................................................... 
301\nLOG10() ................................................................................................................ 302\nLONGITUDE() ....................................................................................................... 303\nLOWER() ............................................................................................................... 304\nMAKEVALIDPOLYGON() ....................................................................................... 305\nMAX() ................................................................................................................... 306\nMAX_VALID_TIMESTAMP() .................................................................................. 307\nMIGRATING() ........................................................................................................ 308\nMIN() .................................................................................................................... 309\nMIN_VALID_TIMESTAMP() ................................................................................... 310\nMINUTE() .............................................................................................................. 311\nMOD() ................................................................................................................... 312\nMONTH() .............................................................................................................. 313\nNOW() ................................................................................................................... 314\nNUMINTERIORRINGS() ......................................................................................... 315\nNUMPOINTS() ....................................................................................................... 316\nOCTET_LENGTH() ................................................................................................. 317\nOVERLAY() ........................................................................................................... 318\nPI() ........................................................................................................................ 319\nPOINTFROMTEXT() ............................................................................................... 320\nPOLYGONFROMTEXT() ......................................................................................... 321\nPOSITION() ........................................................................................................... 322\nPOWER() ............................................................................................................... 323\nQUARTER() ........................................................................................................... 324\nRADIANS() ............................................................................................................ 325\nREGEXP_POSITION() ............................................................................................. 326\nREPEAT() .............................................................................................................. 327\nREPLACE() ............................................................................................................ 328\nRIGHT() ................................................................................................................ 329\nviiiUsing VoltDB\nROUND() ............................................................................................................... 
330\nSEC() .................................................................................................................... 331\nSECOND() ............................................................................................................. 332\nSET_FIELD() .......................................................................................................... 333\nSIN() ..................................................................................................................... 335\nSINCE_EPOCH() .................................................................................................... 336\nSPACE() ................................................................................................................ 337\nSQRT() .................................................................................................................. 338\nSTR() .................................................................................................................... 339\nSUBSTRING() ........................................................................................................ 340\nSUM() ................................................................................................................... 341\nTAN() .................................................................................................................... 342\nTO_TIMESTAMP() ................................................................................................. 343\nTRIM() .................................................................................................................. 344\nTRUNCATE() ......................................................................................................... 345\nUPPER() ................................................................................................................ 346\nVALIDPOLYGONFROMTEXT() .............................................................................. 347\nWEEK(), WEEKOFYEAR() ...................................................................................... 348\nWEEKDAY() .......................................................................................................... 349\nYEAR() .................................................................................................................. 350\nD. VoltDB CLI Commands ............................................................................................... 351\ncsvloader ................................................................................................................ 352\njdbcloader ............................................................................................................... 357\nkafkaloader ............................................................................................................. 361\nsqlcmd ................................................................................................................... 365\nvoltadmin ............................................................................................................... 370\nvoltdb .................................................................................................................... 378\nE. Configuration File (deployment.xml) ............................................................................... 385\nE.1. Understanding XML Syntax ................................................................................ 385\nE.2. 
The Structure of the Configuration File ................................................................. 385\nF. VoltDB Datatype Compatibility ...................................................................................... 391\nF.1. Java and VoltDB Datatype Compatibility ............................................................... 391\nG. System Procedures ....................................................................................................... 394\n@AdHoc ................................................................................................................ 395\n@Explain ............................................................................................................... 397\n@ExplainProc ......................................................................................................... 398\n@ExplainView ........................................................................................................ 399\n@GetPartitionKeys ................................................................................................... 401\n@Note ................................................................................................................... 403\n@Pause .................................................................................................................. 404\n@Ping .................................................................................................................... 405\n@Promote .............................................................................................................. 406\n@QueryStats ........................................................................................................... 407\n@Quiesce ............................................................................................................... 409\n@Resume ............................................................................................................... 411\n@Shutdown ............................................................................................................ 412\n@SnapshotDelete ..................................................................................................... 413\n@SnapshotRestore ................................................................................................... 415\n@SnapshotSave ....................................................................................................... 418\n@SnapshotScan ....................................................................................................... 422\n@Statistics .............................................................................................................. 425\n@StopNode ............................................................................................................ 450\n@SwapTables ......................................................................................................... 452\n@SystemCatalog ...................................................................................................... 454\nixUsing VoltDB\n@SystemInformation ................................................................................................ 459\n@UpdateApplicationCatalog ...................................................................................... 464\n@UpdateClasses ...................................................................................................... 
466\n@UpdateLogging ..................................................................................................... 468\nxList of Figures\n1.1. Partitioning Tables ........................................................................................................ 2\n1.2. Serialized Processing ..................................................................................................... 3\n1.3. Replicating Tables ......................................................................................................... 4\n4.1. Components of a Database Schema ................................................................................ 16\n4.2. Partitions Distribute Table Data and Stored Procedure Processing ........................................ 17\n4.3. Diagram Representing the Flight Reservation System ........................................................ 19\n5.1. Array of VoltTable Structures ....................................................................................... 38\n5.2. One VoltTable Structure is returned for each Queued SQL Statement .................................... 38\n5.3. Stored Procedures Execute in the Appropriate Partition Based on the Partitioned Parameter\nValue ............................................................................................................................... 44\n8.1. The Structure of the VoltDB JSON Response ................................................................... 72\n10.1. K-Safety in Action ..................................................................................................... 83\n10.2. Network Partition ...................................................................................................... 87\n10.3. Network Fault Protection in Action ............................................................................... 88\n11.1. Passive Database Replication ....................................................................................... 90\n11.2. Cross Datacenter Replication ....................................................................................... 91\n11.3. Replicating an Existing Database .................................................................................. 93\n11.4. Promoting the Replica ................................................................................................ 94\n11.5. Read-Only Access to the Replica ................................................................................ 100\n11.6. Standard XDCR Configuration ................................................................................... 106\n11.7. XDCR Configuration with Read-Only Replicas ............................................................. 106\n11.8. Transaction Order and Conflict Resolution ................................................................... 107\n14.1. Command Logging in Action ..................................................................................... 133\n14.2. Recovery in Action .................................................................................................. 134\n15.1. Overview of Data Streaming ...................................................................................... 140\n15.2. Overview of Topics .................................................................................................. 140\nE.1. Configuration XML Structure ...................................................................................... 387\nxiList of Tables\n2.1. 
Operating System and Software Requirements ................................................................... 5\n2.2. Components Installed by VoltDB ..................................................................................... 7\n4.1. Example Application Workload ..................................................................................... 19\n5.1. Methods of the VoltTable Classes .................................................................................. 39\n8.1. Datatypes in the JSON Interface .................................................................................... 71\n11.1. Structure of the XDCR Conflict Logs .......................................................................... 112\n12.1. Named Security Permissions ...................................................................................... 117\n15.1. File Export Properties ............................................................................................... 151\n15.2. Export Metadata ...................................................................................................... 152\n15.3. HTTP Export Properties ............................................................................................ 154\n15.4. JDBC Export Properties ............................................................................................ 157\n15.5. Kafka Export Properties ............................................................................................ 160\n15.6. Elasticsearch Export Properties ................................................................................... 162\n15.7. Kafka Import Properties ............................................................................................ 165\n15.8. Kinesis Import Properties .......................................................................................... 166\n15.9. CSV and TSV Formatter Properties ............................................................................. 167\n15.10. Topic Formatting Properties ..................................................................................... 173\nA.1. Supported SQL Datatypes .......................................................................................... 199\nC.1. Selectable Values for the EXTRACT Function ............................................................... 281\nE.1. Configuration File Elements and Attributes .................................................................... 388\nF.1. Java and VoltDB Datatype Compatibility ....................................................................... 391\nG.1. @SnapshotRestoreOptions .......................................................................................... 415\nG.2. @SnapshotSave Options ............................................................................................. 419\nxiiList of Examples\n4.1. DDL Example of a Reservation Schema .......................................................................... 18\n5.1. Components of a VoltDB Java Stored Procedure ............................................................... 34\n5.2. Cycles of Queue and Execute in a Stored Procedure .......................................................... 37\n5.3. Displaying the Contents of VoltTable Arrays ................................................................... 40\nxiiiAbout This Book\nThis book is a complete guide to VoltDB. 
It describes what VoltDB is, how it works, and — more impor-\ntantly — how to use it to build high performance, data intensive applications. The book is divided into\nfive parts:\nPart 1: Getting Started Explains what VoltDB is, how it works, how to install it, and how to\nstart using VoltDB. The chapters in this section are:\n•Chapter 1, Overview\n•Chapter 2, Installing VoltDB\n•Chapter 3, Starting the Database\nPart 2: Developing VoltDB Data-\nbase ApplicationsDescribes how to design and develop applications using VoltDB. The\nchapters in this section are:\n•Chapter 4, Designing the Database Schema\n•Chapter 5, Designing Stored Procedures to Access the Database\n•Chapter 6, Designing VoltDB Client Applications\n•Chapter 7, Simplifying Application Development\n•Chapter 8, Using VoltDB with Other Programming Languages\nPart 3: Running VoltDB in a Clus-\nterDescribes additional features useful for running a database in a cluster.\nThe chapters in this section are:\n•Chapter 9, Using VoltDB in a Cluster\n•Chapter 10, Availability\n•Chapter 11, Database Replication\n•Chapter 12, Security\nPart 4: Managing the Data Provides techniques for ensuring data durability and integrity. The\nchapters in this section are:\n•Chapter 13, Saving & Restoring a VoltDB Database\n•Chapter 14, Command Logging and Recovery\n•Chapter 15, Streaming Data: Import, Export, and Migration\nPart 5: Reference Material Provides reference information about the languages and interfaces\nused by VoltDB, including:\n•Appendix A, Supported SQL DDL Statements\n•Appendix B, Supported SQL Statements\n•Appendix C, SQL Functions\n•Appendix D, VoltDB CLI Commands\nxivAbout This Book\n•Appendix E, Configuration File (deployment.xml)\n•Appendix F, VoltDB Datatype Compatibility\n•Appendix G, System Procedures\nThis book provides the most complete description of the VoltDB product. It includes features from both\nthe open source Community Edition and the commercial products VoltDB Enterprise Edition and VoltDB\nPro. In general, the features described in Parts 1 and 2 are available in all versions of the product. Several\nfeatures in Parts 3 and 4 are unique to the commercial products.\nIf you are new to VoltDB, the VoltDB Tutorial provides an introduction to the product and its features.\nThe tutorial, and other books, are available on the web from http://docs.voltdb.com/ .\nxvChapter 1. Overview\n1.1. What is VoltDB?\nVoltDB is a revolutionary new database product. Designed from the ground up to be the best solution for\nhigh performance business-critical applications, the VoltDB architecture is able to achieve 45 times higher\nthroughput than current database products. The architecture also allows VoltDB databases to scale easily\nby adding processors to the cluster as the data volume and transaction requirements grow.\nCurrent commercial database products are designed as general-purpose data management solutions. They\ncan be tweaked for specific application requirements. However, the one-size-fits-all architecture of tradi-\ntional databases limits the extent to which they can be optimized.\nAlthough the basic architecture of databases has not changed significantly in 30 years, computing has. 
As\nhave the demands and expectations of business applications and the corporations that depend on them.\nVoltDB is designed to take full advantage of the modern computing environment:\n•VoltDB uses in-memory storage to maximize throughput, avoiding costly disk access.\n•Further performance gains are achieved by serializing all data access, avoiding many of the time-con-\nsuming functions of traditional databases such as locking, latching, and maintaining transaction logs.\n•Scalability, reliability, and high availability are achieved through clustering and replication across mul-\ntiple servers and server farms.\nVoltDB is a fully ACID-compliant transactional database, relieving the application developer from having\nto develop code to perform transactions and manage rollbacks within their own application. By using\nANSI standard SQL for the schema definition and data access, VoltDB also reduces the learning curve\nfor experienced database designers.\n1.2. Who Should Use VoltDB\nVoltDB is not intended to solve all database problems. It is targeted at a specific segment of business\ncomputing.\nVoltDB focuses specifically on fast data. That is, applications that must process large streams of data\nquickly. This includes financial applications, social media applications, and the burgeoning field of the\nInternet of Things. The key requirements for these applications are scalability, reliability, high availability,\nand outstanding throughput.\nVoltDB is used today for traditional high performance applications such as capital markets data feeds, fi-\nnancial trade, telco record streams and sensor-based distribution systems. It's also used in emerging appli-\ncations like wireless, online gaming, fraud detection, digital ad exchanges and micro transaction systems.\nAny application requiring high database throughput, linear scaling and uncompromising data accuracy\nwill benefit immediately from VoltDB.\nHowever, VoltDB is not optimized for all types of queries. For example, VoltDB is not the optimal choice\nfor collecting and collating extremely large historical data sets which must be queried across multiple\ntables. This sort of activity is commonly found in business intelligence and data warehousing solutions,\nfor which other database products are better suited.\n1Overview\nTo aid businesses that require both exceptional transaction performance and ad hoc reporting, VoltDB\nincludes integration functions so that historical data can be exported to an analytic database for larger\nscale data mining.\n1.3. How VoltDB Works\nVoltDB is not like traditional database products. Each VoltDB database is optimized for a specific appli-\ncation by partitioning the database tables and the stored procedures that access those tables across multiple\n\"sites\" or partitions on one or more host machines to create the distributed database. Because both the data\nand the work is partitioned, multiple queries can be run in parallel. At the same time, because each site\noperates independently, each transaction can run to completion without the overhead of locking individ-\nual records that consumes much of the processing time of traditional databases. Finally, VoltDB balances\nthe requirements of maximum performance with the flexibility to accommodate less intense but equally\nimportant queries that cross partitions. The following sections describe these concepts in more detail.\n1.3.1. Partitioning\nIn VoltDB, each stored procedure is defined as a transaction. The stored procedure (i.e. 
transaction) suc-\nceeds or rolls back as a whole, ensuring database consistency.\nBy analyzing and precompiling the data access logic in the stored procedures, VoltDB can distribute both\nthe data and the processing associated with it to the individual partitions on the cluster. In this way, each\npartition contains a unique \"slice\" of the data and the data processing. Each node in the cluster can support\nmultiple partitions.\nFigure 1.1. Partitioning Tables\n1.3.2. Serialized (Single-Threaded) Processing\nAt run-time, calls to the stored procedures are passed to the appropriate partition. When procedures are\n\"single-partitioned\" (meaning they operate on data within a single partition) the server process executes\nthe procedure by itself, freeing the rest of the cluster to handle other requests in parallel.\nBy using serialized processing, VoltDB ensures transactional consistency without the overhead of locking,\nlatching, and transaction logs, while partitioning lets the database handle multiple requests at a time. As\n2Overview\na general rule of thumb, the more processors (and therefore the more partitions) in the cluster, the more\ntransactions VoltDB completes per second, providing an easy, almost linear path for scaling an applica-\ntion's capacity and performance.\nWhen a procedure does require data from multiple partitions, one node acts as a coordinator and hands out\nthe necessary work to the other nodes, collects the results and completes the task. This coordination makes\nmulti-partitioned transactions slightly slower than single-partitioned transactions. However, transactional\nintegrity is maintained and the architecture of multiple parallel partitions ensures throughput is kept at a\nmaximum.\nFigure 1.2. Serialized Processing\nIt is important to note that the VoltDB architecture is optimized for total throughput. Each transaction runs\nuninterrupted in its own thread, minimizing the individual latency per transaction (the time from when the\ntransaction begins until processing ends). This also eliminates the overhead needed for locking, latching,\nand other administrative tasks, reducing the amount of time requests sit in the queue waiting to be executed.\nThe result is that for a suitably partitioned schema, the number of transactions that can be completed in a\nsecond (i.e. throughput) is orders of magnitude higher than traditional databases.\n1.3.3. Partitioned vs. Replicated Tables\nTables are partitioned in VoltDB based on a column that you, the developer or designer, specify. When you\nchoose partitioning columns that match the way the data is accessed by the stored procedures, it optimizes\nexecution at runtime.\nTo further optimize performance, VoltDB allows certain database tables to be replicated to all nodes of the\ncluster. For small tables that are largely read-only, this allows stored procedures to create joins between\nthis table and another larger table while remaining a single-partitioned transaction. 
For example, a retail\nmerchandising database that uses product codes as the primary key may have one table that simply corre-\nlates the product code with the product's category and full name, Since this table is relatively small and\ndoes not change frequently (unlike inventory and orders) it can be replicated for access by all partitions.\nThis way stored procedures can retrieve and return user-friendly product information when searching by\nproduct code without impacting the performance of order and inventory updates and searches.\n3Overview\nFigure 1.3. Replicating Tables\n1.3.4. Ease of Scaling to Meet Application Needs\nThe VoltDB architecture is designed to simplify the process of scaling the database to meet the changing\nneeds of your application. Increasing the number of nodes in a VoltDB cluster both increases throughput\n(by increasing the number of simultaneous queues in operation) and increases the data capacity (by in-\ncreasing the number of partitions used for each table).\nScaling up a VoltDB database is a simple process that doesn't require any changes to the database schema\nor application code. You can either:\n•Save the database (using a snapshot), then restart the database specifying the new number of nodes for\nthe resized cluster and using restore to reload the schema and data.\n•Add nodes \"on the fly\" while the database is running.\n1.4. Working with VoltDB Effectively\nIt is possible to use VoltDB like any other SQL database, creating tables and performing ad hoc SQL\nqueries using standard SQL statements. However, to take full advantage of VoltDB's capabilities, it is best\nto design your schema and your stored procedures to maximize the use of partitioned tables and procedures.\nThere are also additional features of VoltDB to increase the availability and durability of your data. The\nfollowing sections explain how to work effectively with VoltDB, including:\n•Chapters 2 and 3 explain how to install VoltDB and create a new database.\n•Chapters 4 through 8 explain how to design your database, stored procedures, and client applications\nto maximize performance.\n•Chapters 9 through 12 explain how to create and use VoltDB clusters to increase scalability and avail-\nability.\n•Chapters 13 through 15 explain how VoltDB ensures the durability of your data and how you can inte-\ngrate VoltDB with other data sources using export for complete business solutions\n4Chapter 2. Installing VoltDB\nVoltDB is available in both open source and commercial editions. The open source, or community, edi-\ntion provides all the transactional performance benefits of VoltDB, plus basic durability and availability.\nThe commercial editions provide additional features needed to support production environments, such as\ncomplete durability, dynamic scaling, and WAN replication.\nDepending on which version you choose, the VoltDB software comes as either pre-built distributions or\nas source code. This chapter explains the system requirements for running VoltDB, how to install and\nupgrade the software, and what resources are provided in the kit.\n2.1. Operating System and Software Requirements\nThe following are the requirements for developing and running VoltDB applications.\nTable 2.1. Operating System and Software Requirements\nOperating System VoltDB requires a 64-bit Linux-based operating system. 
Kits are built and\nqualified on the following platforms:\n•CentOS version 7.0 and later, or version 8.0 and later\n•Red Hat (RHEL) version 7.0 and later, or version 8.0 and later\n•Ubuntu versions 18.04 and 20.04\n•Macintosh OS X 10.9 and later (for development only)\nCPU •Dual core1 x86_64 processor\n•64 bit\n•1.6 GHz\nMemory 4 Gbytes2\nJava3VoltDB Server: Java 8, 11 or 17\nJava and JDBC Client: Java 8, 11, or 17\nRequired Software Time synchronization service, such as NTP or chrony4\nPython 3.6 or later\nRecommended Software Eclipse 3.x (or other Java IDE)\nFootnotes:\n1.Dual core processors are a minimum requirement. Four or eight physical cores are recommended for\noptimal performance.\n2.Memory requirements are very specific to the storage needs of the application and the number of nodes\nin the cluster. However, 4 Gigabytes should be considered a minimum configuration.\n3.VoltDB supports JDKs from OpenJDK or Oracle.\n4.Time synchronization services minimize the time difference between nodes in a database cluster,\nwhich is critical for VoltDB. All nodes of the cluster should be configured to synchronize against the\nsame time server. Using a single local server is recommended, but not required.\n5Installing VoltDB\n2.2. Installing VoltDB\nVoltDB is distributed as a compressed tar archive. The file name identifies the edition (community or\nenterprise) and the version number. The best way to install VoltDB is to unpack the distribution kit as a\nfolder in the home directory of your personal account, like so:\n$ tar -zxvf voltdb-ent-10.0.tar.gz -C $HOME/\nInstalling into your personal directory gives you full access to the software and is most useful for devel-\nopment.\nIf you are installing VoltDB on a production server where the database will be run, you may want to\ninstall the software into a standard system location so that the database cluster can be started with the\nsame commands on all nodes. The following shell commands install the VoltDB software in the folder\n/opt/voltdb :\n$ sudo tar -zxvf voltdb-ent-10.0.tar.gz -C /opt\n$ cd /opt\n$ sudo mv voltdb-ent-10.0 voltdb\nNote that installing as root using the sudo command makes the installation folders read-only for non-priv-\nileged accounts. Which is why installing in $HOME is recommended for running the sample applications\nand other development activities.\n2.2.1. Upgrading From Older Versions\nWhen upgrading an existing database from a recent version of VoltDB, the easiest way to upgrade is as\nfollows:\n1.Perform an orderly shutdown of the database, saving a final snapshot ( voltadmin shutdown --save )\n2.Upgrade the VoltDB software\n3.Restart the database ( voltdb start )\nUsing this process VoltDB automatically restores the final snapshot taken before the upgrade. To upgrade\nVoltDB on clusters running database replication (DR), see the instructions specific to DR in the VoltDB\nAdministrator's Guide .\nIf you are upgrading from a version before V6.8, you need to save and restore the snapshot manually. 
In\nwhich case, the recommended steps for upgrading an existing database are:\n1.Place the database in admin mode ( voltadmin pause --wait ).\n2.Perform a manual snapshot of the database ( voltadmin save --blocking ).\n3.Shutdown the database ( voltadmin shutdown ).\n4.Upgrade VoltDB.\n5.Initialize a new database root directory ( voltdb init )\n6.Start the new database in admin mode ( voltdb start --pause ).\n7.Restore the snapshot created in Step #2 ( voltadmin restore ).\n6Installing VoltDB\n8.Return the database to normal operations ( voltadmin resume ).\n2.2.2. Building a New VoltDB Distribution Kit\nIf you want to build the open source VoltDB software from source (for example, if you want to test recent\ndevelopment changes), you must first fetch the VoltDB source files. The VoltDB sources are stored in a\nGitHub repository .\nThe VoltDB sources are designed to build and run on 64-bit Linux-based or 64-bit Macintosh platforms.\nHowever, the build process has not been tested on all possible configurations. Attempts to build the sources\non other operating systems may require changes to the build files and possibly to the sources as well.\nOnce you obtain the sources, use Ant 1.7 or later to build a new distribution kit for the current platform:\n$ ant dist\nThe resulting distribution kit is created as obj/release/volt-n.n.nn.tar.gz where n.n.nn iden-\ntifies the current version and build numbers. Use this file to install VoltDB according to the instructions\nin Section 2.2, “Installing VoltDB” .\n2.3. Setting Up Your Environment\nVoltDB comes with shell command scripts that simplify the process of developing and deploying VoltDB\napplications. These scripts are in the /bin folder under the installation root and define short-cut commands\nfor executing many VoltDB actions. To make the commands available to your session, you must include\nthe /bin directory as part your PATH environment variable.\nYou can add the /bin directory to your PATH variable by redefining PATH. For example, the following\nshell command adds /bin to the end of the environment PATH, assuming you installed the VoltDB\nEnterprise Edition as /voltdb-ent-n.n in your $HOME directory:\n$ export PATH=\"$PATH:$HOME/voltdb-ent-n.n/bin\"\nTo avoid having to redefine PATH every time you create a new session, you can add the preceding com-\nmand to your shell login script. For example, if you are using the bash shell, you would add the preceding\ncommand to the $HOME/.bashrc file.\n2.4. What is Included in the VoltDB Distribution\nTable 2.2 lists the components that are provided as part of the VoltDB distribution.\nTable 2.2. Components Installed by VoltDB\nComponent Description\nVoltDB Software & Runtime The VoltDB software comes as Java archives (.JAR\nfiles) and a callable library that can be found in the\n/voltdb subfolder. Other software libraries that\nVoltDB depends on are included in a separate /lib\nsubfolder.\nExample Applications VoltDB comes with several example applications\nthat demonstrate VoltDB capabilities and perfor-\nmance. They can be found in the /examples sub-\nfolder.\n7Installing VoltDB\nComponent Description\nVoltDB Management Center VoltDB Management Center is a browser-based\nmanagement tool for monitoring, examining, and\nquerying a running VoltDB database. The Man-\nagement Center is bundled with the VoltDB serv-\ner software. You can start the Management Cen-\nter by connecting to the HTTP port of a running\nVoltDB database server. For example, http://\nvoltsvr:8080/ . 
Note that the httpd server and JSON interface must be enabled on the server to be able to access the Management Center.
Shell Commands The /bin subfolder contains executable scripts to perform common VoltDB tasks, such as starting the VoltDB server process and issuing database queries from the command line using sqlcmd. Add the /bin subfolder to your PATH environment variable to use the following shell commands:
csvloader
jdbcloader
kafkaloader
sqlcmd
voltadmin
voltdb
Documentation Online documentation, including the full manuals and javadoc describing the Java programming interface, is available in the /doc subfolder.
2.5. VoltDB in Action: Running the Sample Applications
Once you install VoltDB, you can use the sample applications to see VoltDB in action and get a better understanding of how it works. The easiest way to do this is to change to the /examples folder where VoltDB is installed. Each sample application has its own subdirectory and a run.sh script to simplify building and running the application. See the README file in the /examples subfolder for a complete list of the applications and further instructions.
Once you get a taste for what VoltDB can do, we recommend following the VoltDB tutorial to understand how to create your own applications using VoltDB.
Chapter 3. Starting the Database
This chapter describes the procedures for starting and stopping a VoltDB database and includes details about configuring the database. The chapter contains the following sections:
•Section 3.1, “Initializing and Starting a VoltDB Database”
•Section 3.2, “Initializing and Starting a VoltDB Database on a Cluster”
•Section 3.3, “Stopping a VoltDB Database”
•Section 3.4, “Saving the Data”
•Section 3.5, “Restarting a VoltDB Database”
•Section 3.6, “Updating Nodes on the Cluster”
•Section 3.7, “Defining the Cluster Configuration”
3.1. Initializing and Starting a VoltDB Database
Before you start a VoltDB database, you must initialize the root directory where VoltDB stores its configuration data, logs, and other disk-based information. Once you initialize the root directory, you can start the database. For example, you can accept the defaults for the voltdb init and start commands to initialize and start a new, single-node database suitable for developing and testing a database and application.
$ voltdb init
$ voltdb start
This creates a VoltDB root directory as a subfolder of your current working directory and starts a database with all default options. You only need to initialize the root directory once and can then start and stop the database as often as you like.
$ voltadmin shutdown
$ voltdb start
If you are using command logging, which is enabled by default in the VoltDB Enterprise Edition, VoltDB automatically saves and recovers your database between any stoppage and a restart. If you are not using command logging, you will want to save a snapshot before shutting down. The easiest way to do this is by adding the --save argument to the shutdown command.
The snapshot is automatically restored when the database restarts:
$ voltadmin shutdown --save
$ voltdb start
If you want to create a new database, you can reinitialize the root directory. However, you must use the --force flag if the database has already been used; VoltDB will not clear the root directory of existing data unless you explicitly "force" it to.
$ voltdb init --force
$ voltdb start
Also, you can specify an alternate location for the root directory using the --dir or -D flag.
Of course, you must specify the same location for the root directory when both initializing and starting the database. You cannot start a database in a directory that has not been initialized.
$ voltdb init --dir=~/mydb
$ voltdb start --dir=~/mydb
In most cases, you will want to use additional arguments to configure the server and database options. But the preceding commands are sufficient to get you started in a test environment. The rest of this chapter explains how to use other arguments and how to start, stop, and recover a database when using a cluster.
Finally, when using the VoltDB Enterprise Edition, you must provide a license file when initializing the database. VoltDB looks for the license as a file named license.xml in three possible locations, in the following order:
1. The current working directory
2. The directory where the VoltDB image files are installed (usually in the /voltdb subfolder of the installation directory)
3. The current user's home directory
If the license file is not in any of these locations, you must explicitly identify it when you issue the voltdb init command using the --license or -l flag. For example, the command might be:
$ voltdb init -l /usr/share/voltdb-license.xml
The examples in this manual assume that the license file is in one of the default locations and therefore do not show the --license flag for simplicity's sake.
3.2. Initializing and Starting a VoltDB Database on a Cluster
You initialize and start a cluster the same way you start a single node: with the voltdb init and start commands. The only difference is that when starting the cluster, you must tell the cluster nodes how big the cluster is and which nodes to use as potential hosts for the startup.
You initialize a root directory on each server using the voltdb init command. You can accept the default configuration as shown in the previous section. However, when setting up a cluster you often want to make some configuration adjustments (for example, enabling K-safety). So it is a good idea to get into the habit of specifying a configuration file.
You specify the configuration file with the --config or -C flag when you initialize the root directory. All nodes must use the same configuration file. For example:
$ voltdb init -D ~/mydb --config=myconfig.xml
Once the nodes are initialized, you start the cluster by issuing the voltdb start command on all nodes, specifying the following information:
•Number of nodes in the cluster: When you start the cluster, you specify how many servers will make up the cluster using the --count flag.
•Host names: You specify the hostnames or IP addresses of one or more servers from the cluster that are potential "hosts" for coordinating the formation of the cluster. You specify the list of hosts with the --host or -H flag. You must specify at least one node as a host.
For each node of the cluster, log in and start the server process using the same voltdb start command. For example, the following command starts a five-node database cluster specifying voltsvr1 as the host node. Be sure the number of nodes on which you run the command matches the number of nodes specified in the --count argument.
$ voltdb start --count=5 --host=voltsvr1
Or you can use shortened forms for the argument flags:
$ voltdb start -c 5 -H voltsvr1
Although you only need to specify one potential host, it is a good idea to specify multiple hosts.
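For example, the following command names three of the five servers as potential hosts (the second and third server names are hypothetical additions to the earlier example):
$ voltdb start --count=5 --host=voltsvr1,voltsvr2,voltsvr3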
This way, you can use the exact same command for both starting and rejoining nodes in a highly-available cluster. Even if the rejoining node is in the host list, another running node can be chosen to facilitate the rejoin.
To simplify even further, you can specify all of the servers in the --host argument. If you do this, you can skip the --count argument. If --count is missing, VoltDB assumes the number of servers in the --host list is complete and sets the server count to match. For example, the following command, issued on all three servers, starts a three-node cluster:
$ voltdb start --host=svrA,svrB,svrC
When starting a VoltDB database on a cluster, the VoltDB server process performs the following actions:
1. If you are starting the database process on the node selected as the host node, it waits for initialization messages from the remaining nodes. The host is selected from the list of hosts on the command line and plays a special role during startup by managing the cluster initiation process. It is important that all nodes in the cluster can resolve the hostnames or IP addresses of the host nodes you specify.
2. If you are starting the database on a non-host node, it sends an initialization message to the host indicating that it is ready. The database is not operational until the correct number of nodes (as specified on the command line) have connected.
3. Once all the nodes have sent initialization messages, the host sends out a message to the other nodes that the cluster is complete. Once the startup procedure is complete, the host's role is over and it becomes a peer like every other node in the cluster. It performs no further special functions.
Manually logging on to each node of the cluster every time you want to start the database can be tedious. Instead, you can use secure shell (ssh) to execute shell commands remotely. By creating an ssh script (with the appropriate permissions) you can copy files and/or start the database on each node in the cluster from a single script. Or you can use distributed system management tools such as Chef and Puppet to automate the startup procedures.
3.3. Stopping a VoltDB Database
Once the VoltDB database is up and running, you can shut it down by stopping the VoltDB server processes on each cluster node. However, it is easier to stop the database as a whole with a single command. You do this with the voltadmin shutdown command, which pauses database activity, completes all current transactions, and empties any queued data (such as export or database replication) before shutting down. For example, entering the following command without specifying a host server will perform an orderly shutdown of the database cluster the current system is part of.
$ voltadmin shutdown
If you are not using command logging, which automatically saves all progress, be sure to add the --save argument to save a final snapshot before shutting down:
$ voltadmin shutdown --save
To shut down a database running on another system, use the --host argument to access the remote database. For example, the following command shuts down the VoltDB database that includes the server zeus:
$ voltadmin shutdown --host=zeus
You can pause the database using the voltadmin pause command to restrict clients from accessing it while you perform changes in administration mode. You resume the database using the voltadmin resume command. See the VoltDB Administrator's Guide for more about modes of operation.
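As an illustration, a typical administration-mode maintenance window might be sketched as the following sequence, with your administrative changes made between the snapshot and the resume (the sequence is only a sketch; the specific steps depend on your application):
$ voltadmin pause --wait
$ voltadmin save --blocking
$ voltadmin resume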
3.4. Saving the Data
Because VoltDB is an in-memory database, once the database server process stops, the database schema and the data itself are removed from memory. However, VoltDB can save this information to disk through the use of command logs and snapshots, so use of these features is strongly encouraged.
•Command logging provides the most complete data durability for VoltDB and is enabled by default in the VoltDB Enterprise Edition. Command logging works automatically by saving a record of every transaction. These logs can then be replayed if the database stops for any reason.
•Snapshots, on the other hand, provide a point-in-time copy of the database contents written to disk. You can create snapshots manually with the voltadmin save command, you can enable periodic (also known as automatic) snapshots, or you can save a final snapshot when you shut down the database using the voltadmin shutdown --save command. Snapshots are restored when the database restarts, but only take you back to the state of the database at the time the last snapshot was saved.
To learn more about using command logging, see Chapter 14, Command Logging and Recovery. To learn more about how to save and restore snapshots of the database, see Chapter 13, Saving & Restoring a VoltDB Database.
3.5. Restarting a VoltDB Database
Once a database stops, you can restart it using the same voltdb start command used to start the database the first time. Once the database starts, any command logs or snapshots are restored. In the VoltDB Enterprise Edition, command logs automatically restore the last state of the database. If no command log exists but a snapshot does, the database is restored to its state when that snapshot was taken. For example, the following command restarts a single-node database:
$ voltdb start
To restart a database on a cluster, issue the same voltdb start command used to start that cluster, including the server count and list of host nodes. For example:
$ voltdb start --count=5 --host=voltsvr1
3.6. Updating Nodes on the Cluster
A cluster is a dynamic system in which nodes might be stopped either deliberately or by unforeseen circumstances, or nodes might be added to the cluster on-the-fly to scale the database for improved performance. The voltdb start command provides the following additional functions, described later in this book, for rejoining and adding nodes to a running VoltDB database:
•Section 10.3, “Recovering from System Failures” — Use the same voltdb start command to start the cluster or rejoin a failed node.
•Section 9.3.1, “Adding Nodes with Elastic Scaling” — Use voltdb start with the --add flag to add a new node to the running database cluster.
3.7. Defining the Cluster Configuration
Two important aspects of a VoltDB database are the physical layout of the cluster that runs the database and the database features you choose to use. You define the physical cluster layout on the voltdb start command using the --count and --host arguments. You enable and disable specific database features in the configuration file when you initialize the database root directory with the voltdb init command.
The configuration file is an XML file, which you specify when you initialize the root directory. The basic syntax of the configuration file is as follows:
<?xml version="1.0"?>
<deployment>
 <cluster kfactor="n" />
 <feature option ... 
>\n </feature >\n ...\n</deployment>\nThe attributes of the <cluster> tag define the layout of the database partitions. The attributes of the\n<cluster> tag are:\n•sitesperhost — specifies the number of partitions created on each server in the cluster. The sites-\nperhost value times the number of servers gives you the total number of partitions in the cluster. See\nSection 3.7.1, “Determining How Many Sites per Host” for more information about partition count.\n•kfactor — specifies the K-safety value to use for durability when creating the database. The K-safety\nvalue controls the duplication of database partitions. See Chapter 10, Availability for more information\nabout K-safety.\nIn the simplest case — when running on a single node with no special options enabled — you can skip\nthe configuration file on the voltdb init command and the server count and host list on the voltdb start\ncommand. If you do not specify a configuration file, VoltDB defaults to eight execution sites per host,\nand a K-safety value of zero.\nThe configuration file is also used to enable and configure many other runtime options related to the\ndatabase, which are described later in this book. For example, the configuration file can specify:\n•Whether security is enabled and what users and passwords are needed to authenticate clients at runtime.\nSee Chapter 12, Security for more information.\n•A schedule for saving automatic snapshots of the database. See Section 13.2, “Scheduling Automated\nSnapshots” .\n•Properties for exporting and importing data to other data sources. See Chapter 15, Streaming Data:\nImport, Export, and Migration .\nFor the complete configuration file syntax, see Appendix E, Configuration File (deployment.xml) .\n3.7.1. Determining How Many Sites per Host\nThere is very little penalty for allocating more sites than needed for the partitions the database will use\n(except for incremental memory usage). Consequently, VoltDB defaults to eight sites per node to provide\n13Starting the Database\nreasonable performance on most modern system configurations. This default does not normally need to be\nchanged. However, for systems with a large number of available processors (16 or more) or older machines\nwith fewer than 8 processors and limited memory, you may wish to tune the sitesperhost attribute.\nThe number of sites needed per node is related to the number of processor cores each system has, the\noptimal number being approximately 3/4 of the number of CPUs reported by the operating system. For\nexample, if you are using a cluster of dual quad-core processors (in other words, 8 cores per node), the\noptimal number of partitions is likely to be 6 or 7 sites per node.\n<?xml version=\"1.0\"?>\n<deployment>\n <cluster . . .\n sitesperhost=\"6\"\n />\n</deployment>\nFor systems that support hyperthreading (where the number of physical cores support twice as many\nthreads), the operating system reports twice the number of physical cores. In other words, a dual quad-\ncore system would report 16 virtual CPUs. However, each partition is not quite as efficient as on non-\nhyperthreading systems. So the optimal number of sites is more likely to be between 10 and 12 per node\nin this situation.\nBecause there are no hard and set rules, the optimal number of sites per node is best calculated by actually\nbenchmarking the application to see what combination of cores and sites produces the best results. How-\never, it is important to remember that all nodes in the cluster will use the same number of sites. 
So the best\nperformance is achieved by using a cluster with all nodes having the same physical architecture (i.e. cores).\n3.7.2. Configuring Paths for Runtime Features\nAn important aspect of some runtime features is that they make use of disk resources for persistent storage\nacross sessions. For example, automatic snapshots need a directory for storing snapshots of the database\ncontents. Similarly, export uses disk storage for writing overflow data if the export connector cannot keep\nup with the export queue.\nYou can specify individual paths for each feature in the configuration file. If not, VoltDB creates subfolders\nfor each feature in the database root directory as needed, which can be useful for testing. However, in\nproduction, it is useful to direct certain high volume features, such as command logging, to separate devices\nto avoid disk I/O affecting database performance.\nYou can identify specific path locations, within the <paths> element, for the following features:\n•<commandlog>\n•<commandlogsnapshot>\n•<exportoverflow>\n•<snapshots>\nIf you name a specific feature path and it does not exist, VoltDB attempts to create it for you. For example,\nthe <exportoverflow> path contains temporary data which can be deleted periodically. The following\nexcerpt from a configuration file specifies /opt/overflow as the directory for export overflow.\n<paths>\n <exportoverflow path=\"/opt/overflow\" />\n</paths>\n14Starting the Database\n3.7.3. Verifying your Hardware Configuration\nThe configuration file and start command options define the desired configuration of your database cluster.\nHowever, there are several important aspects of the physical hardware and operating system configuration\nthat you should be aware of before running VoltDB:\n•VoltDB can operate on heterogeneous clusters. However, best performance is achieved by running the\ncluster on similar hardware with the same type of processors, number of processors, and amount of\nmemory on each node.\n•All nodes must be able to resolve the IP addresses and host names of the other nodes in the cluster. That\nmeans they must all have valid DNS entries or have the appropriate entries in their local hosts file.\n•You must run a time synchronization service such as Network Time Protocol (NTP) or chrony on all of\nthe cluster nodes, preferably synchronizing against the same local time server. If the time skew between\nnodes in the cluster is greater than 200 milliseconds, VoltDB cannot start the database.\n•It is strongly recommended that you configure your time service to avoid adjusting time backwards for\nall but very large increments. For example, in NTP this is done using the -x argument. If the server\ntime moves backward, VoltDB must pause and wait for time to catch up.\n15Chapter 4. Designing the Database\nSchema\nVoltDB is a relational database product. Relational databases consist of tables and columns, with con-\nstraints, indexes, and views. VoltDB uses standard SQL database definition language (DDL) statements\nto specify the database schema. So designing the schema for a VoltDB database uses the same skills and\nknowledge as designing a database for Oracle, MySQL, or any other relational database product.\nThis guide describes the stages of application design by dividing the work into three chapters:\n•Design the schema in DDL to define the database structure. Schema design is covered in this chapter.\n•Design stored procedures to access data in the database. 
Stored procedures provide client applications\nan application programming interface (API) to the database. Stored procedures are covered in Chapter 5,\nDesigning Stored Procedures to Access the Database .\n•Design clients to provide business logic and also connect to the database to access data. Client appli-\ncation design is covered in Chapter 6, Designing VoltDB Client Applications .\nThe database schema is a specification that describes the structure of the VoltDB database such as tables\nand indexes, identifies the stored procedures that access data in the database, and defines the way tables\nand stored procedures are partitioned for fast data access. When designing client applications to use the\ndatabase, the schema specifies the details needed about data types, tables, columns, and so on.\nFigure 4.1. Components of a Database Schema\nAlong with designing your database tables, an important aspect of VoltDB database design is partitioning,\nwhich provides much more efficient access to data and processing. Partitioning distributes the rows of a\ntable and the processing to access the table across several, independent partitions instead of one. Your\ndesign requires coordinating the partitioning of both database tables and the stored procedures that access\nthe tables. At design time you choose a column on which to partition a table's rows. You also partition\nstored procedures on the same column if they use the column to identify which rows to operate on in the\ntable.\nAt runtime, VoltDB decides which cluster nodes and partitions to use for the table partitions and consis-\ntently allocates rows to the appropriate partition. Figure 4.2, “Partitions Distribute Table Data and Stored\nProcedure Processing” shows how when data is inserted into a partitioned table, VoltDB automatically\nallocates the data to the correct partition. Also, when a partitioned stored procedure is invoked, VoltDB\nautomatically executes the stored procedure in the single partition that has the data requested.\n16Designing the Database Schema\nFigure 4.2. Partitions Distribute Table Data and Stored Procedure Processing\nThe following sections of this chapter provide guidelines for designing VoltDB database schemas. Al-\nthough gathering business requirements is a typical first step in database application design, it is outside\nthe scope of this guide.\n4.1. How to Enter DDL Statements\nYou use standard SQL DDL statements to design your schema. For a full list of valid VoltDB DDL, see\nAppendix A, Supported SQL DDL Statements . The easiest way to enter your DDL statements is using\nVoltDB's command line utility, sqlcmd. Using sqlcmd you can input DDL statements in several ways:\n•Redirect standard input from a file when you start sqlcmd:\n$ sqlcmd < myschema.sql\n•Import from a file using the sqlcmd file directive:\n$ sqlcmd\n1> file myschema.sql;\n•Enter DDL directly at the sqlcmd prompt:\n$ sqlcmd\n1> \n2> CREATE TABLE Customer (\n3> CustomerID INTEGER UNIQUE NOT NULL,\n4> FirstName VARCHAR(15),\n5> LastName VARCHAR (15),\n6> PRIMARY KEY(CustomerID)\n7> );\n•Copy DDL from another application and paste it into the sqlcmd prompt:\n$ sqlcmd\n1> CREATE TABLE Flight (\n2> FlightID INTEGER UNIQUE NOT NULL,\n3> DepartTime TIMESTAMP NOT NULL,\n4> Origin VARCHAR(3) NOT NULL,\n5> Destination VARCHAR(3) NOT NULL,\n6> NumberOfSeats INTEGER NOT NULL,\n17Designing the Database Schema\n7> PRIMARY KEY(FlightID)\n8> );\nThe following sections show how to design and create schema objects. 
DDL statements and techniques\nfor changing a schema are described later in Section 4.6, “Modifying the Schema” .\n4.2. Creating Tables and Primary Keys\nThe schema in this section is referred to throughout the design chapters of this guide. Let's assume you\nare designing a flight reservation system. At its simplest, the application requires database tables for the\nflights, the customers, and the reservations. Example 4.1, “DDL Example of a Reservation Schema” shows\nhow the schema looks as defined in standard SQL DDL. For the VoltDB-specific details for creating tables,\nsee CREATE TABLE . When defining the data types for table columns, refer to Table A.1, “Supported\nSQL Datatypes” .\nExample 4.1. DDL Example of a Reservation Schema\nCREATE TABLE Flight (\n FlightID INTEGER UNIQUE NOT NULL,\n DepartTime TIMESTAMP NOT NULL,\n Origin VARCHAR(3) NOT NULL,\n Destination VARCHAR(3) NOT NULL,\n NumberOfSeats INTEGER NOT NULL,\n PRIMARY KEY(FlightID)\n);\n \nCREATE TABLE Reservation (\n ReserveID INTEGER NOT NULL,\n FlightID INTEGER NOT NULL,\n CustomerID INTEGER NOT NULL,\n Seat VARCHAR(5) DEFAULT NULL,\n Confirmed TINYINT DEFAULT '0'\n);\n \nCREATE TABLE Customer (\n CustomerID INTEGER UNIQUE NOT NULL,\n FirstName VARCHAR(15),\n LastName VARCHAR (15),\n PRIMARY KEY(CustomerID)\n);\nTo satisfy entity integrity you can specify a table's primary key by providing the usual PRIMARY KEY\nconstraint on one or more of the table’s columns. To create a simple key, apply the PRIMARY KEY\nconstraint to one of the table's existing columns whose values are unique and not null, as shown in Exam-\nple 4.1, “DDL Example of a Reservation Schema” .\nTo create a composite primary key from a combination of columns in a table, apply the PRIMARY KEY\nconstraint to multiple columns with typical DDL such as the following:\n$ sqlcmd\n1> CREATE TABLE Customer (\n2> FirstName VARCHAR(15),\n3> LastName VARCHAR (15),\n4> CONSTRAINT pkey PRIMARY KEY (FirstName, LastName)\n5> );\n18Designing the Database Schema\n4.3. Analyzing Data Volume and Workload\nA schema is not all you need to define the database effectively. You also need to know the expected volume\nand workload on the database. For our example, let's assume that we expect the following volume of data\nat any given time:\n•Flights: 2,000\n•Reservations: 200,000\n•Customers: 1,000,000\nThis additional information about the volume and workload affects the design of both the database and\nthe client application, because it impacts what SQL queries need to be written for accessing the data and\nwhat attributes (columns) to share between tables. Table 4.1, “Example Application Workload” defines a\nset of procedures the application must perform. The table also shows the estimated workload as expected\nfrequency of each procedure. Procedures in bold modify the database.\nTable 4.1. Example Application Workload\nUse Case Frequency\nLook up a flight (by origin and destination) 10,000/sec\nSee if a flight is available 5,000/sec\nMake a reservation 1,000/sec\nCancel a reservation 200/sec\nLook up a reservation (by reservation ID) 200/sec\nLook up a reservation (by customer ID) 100/sec\nUpdate flight info 1/sec\nTake off (close reservations and archive associated records) 1/sec\nYou can make your procedures that access the database transactional by defining them as VoltDB stored\nprocedures. This means each stored procedure call completes or rolls back if necessary, thus maintaining\ndata integrity. 
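To make this concrete, the following is a minimal sketch of what one such procedure might look like in Java; the class name, SQL statement, and parameters are illustrative placeholders for the reservation example, and Chapter 5 describes the actual structure and requirements in detail:
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

public class MakeReservation extends VoltProcedure {
    // SQL statements are declared as final fields so VoltDB can precompile them.
    public final SQLStmt insertReservation = new SQLStmt(
        "INSERT INTO Reservation (ReserveID, FlightID, CustomerID) VALUES (?, ?, ?);");

    // The run() method is the transaction: it completes as a whole or rolls back.
    public VoltTable[] run(int reserveId, int flightId, int customerId)
            throws VoltAbortException {
        voltQueueSQL(insertReservation, reserveId, flightId, customerId);
        return voltExecuteSQL(true);
    }
}
The procedure would also be declared in the database schema (with CREATE PROCEDURE) and partitioned appropriately, as discussed in the sections that follow and in Chapter 5.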
Stored procedures are described in detail in Chapter 5, Designing Stored Procedures to\nAccess the Database .\nIn our analysis we also need to consider referential integrity, where relationships are maintained between\ntables with shared columns that link tables together. For example, Figure 4.3, “Diagram Representing the\nFlight Reservation System” shows that the Flight table links to the Reservation table where FlightID is\nthe shared column. Similarly, the Customer table links to the Reservation table where CustomerID is the\ncommon column.\nFigure 4.3. Diagram Representing the Flight Reservation System\n19Designing the Database Schema\nSince VoltDB stored procedures are transactional, you can use stored procedures to maintain referential\nintegrity between tables as data is added or removed. For example, if a customer record is removed from the\nCustomer table, all reservations for that customer need to be removed from the Reservations table as well.\nWith VoltDB, you use all this additional information about volume and workload to configure the database\nand optimize performance. Specifically, you want to partition the individual tables to ensure efficiency.\nPartitioning is described next.\n4.4. Partitioning Database Tables\nThis section discusses how to partition a database to maximize throughput, using the flight reservation case\nstudy as an example. To partition a table, you choose a column of the table that VoltDB can use to uniquely\nidentify and distribute the rows into partitions. The goal of partitioning a database table is to ensure that\nthe most frequent transactions on the table execute in the same partition as the data accessed. We call this a\nsingle-partitioned transaction . Thus the stored procedure must uniquely identify a row by the partitioning\ncolumn value. This is particularly important for queries that modify the data, such as INSERT, UPDATE,\nand DELETE statements.\nLooking at the workload for the reservation system in the previous section, the important transactions to\nfocus on are:\n•Look up a flight\n•See if a flight is available\n•Look up a reservation\n•Make a reservation\nOf these transactions, only the last modifies the database.\n4.4.1. Choosing a Column on which to Partition Table Rows\nWe will discuss the Flight table later, but first let's look at the Reservation table. Looking at the schema\nalone (Example 4.1 ), ReserveID might look like a good attribute to use to partition the table rows. How-\never, looking at the workload, there are only two transactions that are keyed to the ReserveID (“Cancel\na reservation” and “Look up a reservation (by reservation ID)”), each of which occur only 200 times a\nsecond. Whereas, “See if a flight is available” , which requires looking up reservations by the FlightID,\noccurs 5,000 times a second, or 25 times as frequently. Therefore, the Reservation table is best partitioned\non the FlightID column.\nMoving to the Customer table, CustomerID is used for most data access. Although customers might need\nto look up their record by name, the first and last names are not guaranteed to be unique. Therefore,\nCustomerID is the best column to use for partitioning the Customer table.\nCREATE TABLE Customer (\n CustomerID INTEGER UNIQUE NOT NULL,\n20Designing the Database Schema\n FirstName VARCHAR(15),\n LastName VARCHAR (15),\n PRIMARY KEY(CustomerID)\n);\n4.4.2. 
Specifying Partitioned Tables\nOnce you choose the column to use for partitioning a database table, you define your partitioning choices\nin the database schema. Specifying the partitioning along with the schema DDL helps keep all of the\ndatabase structural information in one place.\nYou define the partitioning scheme using VoltDB's PARTITION TABLE statement, specifying the par-\ntitioning column for each table. For example, to specify FlightID and CustomerID as the partitioning\ncolumns for the Reservation and Customer tables, respectively, your database schema must include the\nfollowing statements:\n$ sqlcmd\n1> PARTITION TABLE Reservation ON COLUMN FlightID;\n2> PARTITION TABLE Customer ON COLUMN CustomerID;\n4.4.3. Design Rules for Partitioning Tables\nThe following are the rules to keep in mind when choosing a column by which to partition table rows:\n•There can be only one partition column per table. If you need to partition a table on two columns\n(for example first and last name), add an additional column (fullname) that combines the values of the\ntwo columns and use this new column to partition the table.\n•If the table has a primary key, the partitioning column must be included in the primary key.\n•Any integer, string, or byte array column can identify the partition. VoltDB can partition rows on\nany column that is an integer (TINYINT, SMALLINT, INTEGER, or BIGINT), string (VARCHAR),\nor byte array (VARBINARY) datatype. (See also Table A.1, “Supported SQL Datatypes” .)\n•Partition column values cannot be null. The partition columns do not need to have unique values, but\nyou must specify NOT NULL in the schema for the partition column. Numeric fields can be zero and\nstring or character fields can be empty, but the column cannot contain a null value.\nThe following are some additional recommendations:\n•Choose a column with a reasonable distribution of values so that rows of data will be evenly partitioned.\n•Choose a column that maximizes use of single-partitioned stored procedures. If one procedure uses\ncolumn A to lookup data and two procedures use column B to lookup data, partition on column B. The\ngoal of partitioning is to make the most frequent transactions single-partitioned.\n•If you partition more than one table on the same column attribute, VoltDB will partition them together.\n4.5. Replicating Database Tables\nWith VoltDB, tables are either partitioned or replicated across all nodes and sites of a VoltDB database.\nSmaller, mostly read-only tables are good candidates for replication. Note also that if a table needs to be\naccessed frequently by columns other than the partitioning column, the table should be replicated instead\nbecause there is no guarantee that a particular partition includes the data that the query seeks.\n21Designing the Database Schema\nThe previous section describes how to partition the Reservation and Customer tables as examples, but what\nabout the Flight table? It is possible to partition the Flight table (for example, on the FlightID column).\nHowever, not all tables benefit from partitioning.\n4.5.1. Choosing Replicated Tables\nLooking at the workload of the flight reservation example, the Flight table has the most frequent accesses\n(at 10,000 a second). However, these transactions are read-only and may involve any combination of three\ncolumns: the departure time, the point of origin, and the destination. 
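For example, the lookup queries might take any of the following forms (these are illustrative only and are not part of the sample application):

-- By route
SELECT FlightID, DepartTime FROM Flight WHERE Origin=? AND Destination=?;
-- By origin and departure window
SELECT FlightID, Destination FROM Flight WHERE Origin=? AND DepartTime >= ?;
-- By destination and departure window
SELECT FlightID, Origin FROM Flight WHERE Destination=? AND DepartTime >= ?;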
This makes it hard to partition the\ntable in a way that would make the transaction single-partitioned because the lookup is not restricted to\none table column.\nFortunately, the number of flights available for booking at any given time is limited (estimated at 2,000)\nand so the size of the table is relatively small (approximately 36 megabytes). In addition, the vast majority\nof the transactions involving the Flight table are read-only except when new flights are added and at take-\noff (when the records are deleted). Therefore, Flight is a good candidate for replication.\nNote that the Customer table is also largely read-only. However, because of the volume of data in the\nCustomer table (a million records), it is not a good candidate for replication, which is why it is partitioned.\n4.5.2. Specifying Replicated Tables\nIn VoltDB, you do not explicitly state that a table is replicated. If you do not specify a partitioning column\nin the database schema, the table will by default be replicated.\nSo, in our flight reservation example, there is no explicit action required to replicate the Flight table.\nHowever, it is very important to specify partitioning information for tables that you want to partition.\nIf not, they will be replicated by default, significantly changing the performance characteristics of your\napplication.\n4.6. Modifying the Schema\nYou can use DDL to add, modify, or remove schema objects as the database is running. For a list of all\nvalid DDL you can use, see Appendix A, Supported SQL DDL Statements . You can do the following types\nof schema changes:\n•Modifying Tables — You can add, modify (alter), and remove (drop) table columns. You can also add\nand drop table constraints. Finally, you can drop entire tables.\n•Adding and Dropping Indexes — You can add and remove (drop) named indexes.\n•Modifying Partitioning for Tables and Stored Procedures — You can un-partition stored procedures\nand re-partition stored procedures on a different column, For tables you can change a table between\npartitioned and replicated, and repartition a table on a different column,\n•Modify roles and users — To learn about modifying roles and users, see Chapter 12, Security.\n22Designing the Database Schema\nVoltDB safely handles sqlcmd DDL entered by different users on different nodes of the cluster because\nit manages sqlcmd commands as transactions, just like stored procedures. To demonstrate the DDL state-\nments to modify the schema, the following sections use a new table, Airport, added to the fight reservation\nas shown below:\nCREATE TABLE Airport (\n AirportID integer NOT NULL,\n Name varchar(15) NOT NULL,\n City varchar(25),\n Country varchar(15),\n PRIMARY KEY (AirportID)\n);\n4.6.1. Effects of Schema Changes on Data and Clients\nYou can make many schema changes on empty tables with few restrictions. However, be aware that if\na table has data, some schema changes are not allowed and other schema changes may modify or even\nremove data. When working with test data in your database, you can use TRUNCATE TABLE to empty\nthe data from a table you are working on. Note that all DDL examples in this chapter assume the tables\nare empty.\nWe can think of the effects of schema changes on data in three severity levels:\n•Schema change completes without damage to data\n•Schema change fails to complete to avoid damage to data\n•Schema change destroys data\nVoltDB error messages and the documentation can help you avoid schema change attempts that fail to\ncomplete. 
For example, you cannot drop a table that has referencing procedures or views.\nObviously you need to be most aware of which schema changes cause data to be destroyed. In particular,\nremoving objects from the schema will also remove the data they contain. Note that schema objects cannot\nbe renamed with DDL, but objects can be replaced by performing a DROP and then ADD. However, it is\nimportant to realize that as a result of a DROP operation, such as DROP TABLE, the data associated with\nthat table will be deleted before the new definition is added.\nPlan and coordinate changes with client development. Stored procedures and ad hoc queries provide an\nAPI that clients use to access the database correctly. Changes to the schema can break the stored procedure\ncalls client applications have developed, so use well-planned schedules to communicate database schema\nchanges to others. Client applications depend on many schema definition features including (but not limited\nto):\n•Table names\n•Column names\n•Column data types\n•Primary key definitions\n•Table partitions\n•Stored procedure names\n•Stored procedure partitioning\n23Designing the Database Schema\nPlan and test carefully before making schema changes to a production database. Be aware that clients may\nexperience connection issues during schema changes, especially for changes that take longer to complete,\nsuch as view or index changes.\nSchema changes not only affect data, but the existence of data in the database affects the time it takes to\nprocess schema changes. For example, when there are large amounts of data, some DDL statements can\nblock processing, resulting in a noticeable delay for other pending transactions. Examples include adding\nindexes, creating new table columns, and modifying views.\n4.6.2. Viewing the Schema\nThe VoltDB Management Center provides a web browser view of database information, including the\nDDL schema source. Use a web browser to view the VoltDB Management Center on port 8080 of one of\nthe cluster hosts (http://host-name:8080).\nYou can also use the sqlcmd show directive to see a list of the current database tables and all procedures.\nFor additional details about the schema, execute the @SystemCatalog system procedure. Use any of the\nfollowing arguments to @SystemCatalog to obtain details about a component of the database schema:\n•TABLES\n•COLUMNS\n•INDEXINFO\n•PRIMARYKEYS\n•PROCEDURES\n•PROCEDURECOLUMNS\nFor example:\n$ sqlcmd\n1> SHOW TABLES;\n2> SHOW PROCEDURES;\n3> EXEC @SystemCatalog COLUMNS;\n4.6.3. Modifying Tables\nAfter creating a table in a database with CREATE TABLE, you can use ALTER TABLE to make the\nfollowing types of table changes:\n•Altering a Table Column's Data Definition\n•Adding and Dropping Table Columns\n•Adding and Dropping Table Constraints\nTo drop an entire table, use the DROP TABLE DDL statement.\n4.6.3.1. Altering a Table Column's Data Definition\nYou can make the following types of alterations to a table column's data definition:\n$ sqlcmd\n24Designing the Database Schema\n1> ALTER TABLE Airport ALTER COLUMN Name VARCHAR(25); \n2> ALTER TABLE Airport ALTER COLUMN Country SET DEFAULT 'USA'; \n3> ALTER TABLE Airport ALTER COLUMN Name SET NOT NULL; \nThe examples are described as follows:\nChange a column's data type. In our example we decided we needed more than 15 characters for the\nAirport Name so we changed it to 25 characters.\nIf the table has no existing data, you can make any data type changes. 
However, if the table already\ncontains data, the new type must be larger than the old one. This restriction prevents corrupting\nexisting data values that might be larger than the size of the new data type (See also Table A.1,\n“Supported SQL Datatypes” .)\nSet or drop the column's DEFAULT value. In our example we assume the application is to be used\nmostly for US domestic travel so we can set a default value for the Airport Country of 'USA'.\nTo remove a default, redefine the column data definition, for example:\nALTER TABLE Airport ALTER COLUMN Country VARCHAR(15);\nChange whether the column is NULL or NOT NULL. In our example we set the AirportID to be not\nnull because this is a required field.\nIf the table has existing data, you cannot change a column to not null.\n4.6.3.2. Adding and Dropping Table Columns\n$ sqlcmd\n1> ALTER TABLE Airport ADD COLUMN AirportCode VARCHAR(3) \n2> BEFORE AirportID; \n3> ALTER TABLE Airport DROP COLUMN AirportID; \nThe examples are described as follows:\nAdd table columns. In our example, we have decided not to use the integer AirportID for airport\nidentification but to instead add an AirportCode, which uses a unique three-letter code for any airport\nas defined by the International Air Transport Association's airport codes.\nYou cannot rename or overwrite a column but you can drop and add columns. When adding a column,\nyou must include the new column name and the data type. Options you may include are:\n•DEFAULT value — If a table contains data, the values for the new column will be automatically\nfilled in with the default value.\n•NOT NULL — If the table contains data, you must include a default value if you specify a NOT\nNULL column.\n•One of the following index type constraints including PRIMARY KEY, UNIQUE, or ASSUME-\nUNIQUE.\nNote, we recommend that you not define the UNIQUE or ASSUMEUNIQUE constraint directly on\na column definition when adding a column or creating a table. If you do, the constraint has no name\nso you cannot drop the constraint without dropping the entire column. Instead, we recommend\nyou apply UNIQUE or ASSUMEUNIQUE by adding the constraint (see Section 4.6.3.3, “Adding\nand Dropping Table Constraints” ) or by adding an index with the constraint (see Section 4.6.4,\n“Adding and Dropping Indexes” ). Defining these constraints this way names the constraint, which\nmakes it easier to drop later if necessary.\n25Designing the Database Schema\n•BEFORE column-name — Table columns cannot be reordered but the BEFORE clause allows\nyou to place a new column in a specific position with respect to the existing columns of the table.\nDrop table columns. In our example we drop the AirportID column because we are replacing it with\nthe AirportCode column.\nYou cannot remove a column that has a reference to it. You have to remove all references to the\ncolumn first. References to a column may include:\n•A stored procedure\n•An index\n•A view\n4.6.3.3. Adding and Dropping Table Constraints\nYou cannot alter a table constraint but you can add and drop table constraints. If the table contains existing\ndata, you cannot add UNIQUE, ASSUMEUNIQUE, or PRIMARY KEY constraints.\n$ sqlcmd \n1> ALTER TABLE Airport ADD CONSTRAINT \n2> uniquecode UNIQUE (Airportcode);\n3> ALTER TABLE Airport ADD PRIMARY KEY (AirportCode); \nThe examples are described as follows:\nAdd named constraints UNIQUE or ASSUMEUNIQUE. In our example, we add the UNIQUE con-\nstraint to the AirportCode column. 
To drop a named constraint, include the name using the format\nin the following example:\nALTER TABLE Airport DROP CONSTRAINT uniquecode ;\nAdd unnamed constraint PRIMARY KEY. In our example, we add the PRIMARY KEY constraint\nto the new AirportCode column.\nWhen adding a table constraint, it must not conflict with the other columns of the table. For example,\nonly one primary key is allowed for a table so you cannot add the PRIMARY KEY constraint to\nan additional column.\nTo drop the PRIMARY KEY, include the type of constraint using the following format:\nALTER TABLE Airport DROP PRIMARY KEY ;\n4.6.4. Adding and Dropping Indexes\nUse CREATE INDEX to create an index on one or more columns of a table. Use DROP INDEX to remove\nan index from the schema. The following example modifies the flight reservation schema by adding an\nindex to the Flight table to improve performance when looking up flights.\n$ sqlcmd\n1> CREATE INDEX flightTimeIdx ON Flight (departtime);\nThe CREATE INDEX statement explicitly creates an index. VoltDB creates an index implicitly when\nyou specify the table constraints UNIQUE, PRIMARY KEY, or ASSUMEUNIQUE. Use the ALTER\nTABLE statement to add or drop these table constraints along with their associated indexes, as shown in\nSection 4.6.3, “Modifying Tables” .\n26Designing the Database Schema\n4.6.5. Modifying Partitioning for Tables and Stored Proce-\ndures\nAny changes to the schema must be carefully coordinated with the design and development of stored\nprocedures. This not only applies to column names, data types, and so on, but also to the partition plan.\nHow to partition tables and stored procedures using the PARTITION TABLE and CREATE PROCE-\nDURE PARTITION ON statements is explained in Section 4.4, “Partitioning Database Tables” and Sec-\ntion 5.3.3, “Partitioning Stored Procedures in the Schema” .\nYou can change the partitioning of stored procedures, and you can change a table to a replicated table or\nrepartition it on a different column. However, because of the intricate dependencies of partitioned tables\nand stored procedures, this can only be done by dropping and re-adding the tables and procedures. Also,\nyou must pay close attention to the order in which objects are dropped and added.\nThe following DDL examples demonstrate some partitioning modifications to a table and stored proce-\ndures.\n•Un-partitioning a Stored Procedure\n•Changing a Partitioned Table to a Replicated Table\n•Re-partitioning a Table to a Different Column\n•Updating a Stored Procedure\n•Removing a Stored Procedure from the Database\nThe following DDL is added to the Flight reservation schema to help demonstrate the DDL partition\nchanges described in this section.\n$ sqlcmd\n1> PARTITION TABLE Airport ON COLUMN Name;\n2> CREATE PROCEDURE FindAirportCodeByName \n3> PARTITION ON TABLE Airport COLUMN Name\n4> AS SELECT TOP 1 AirportCode FROM Airport WHERE Name=?;\n5> \n6> CREATE PROCEDURE FindAirportCodeByCity AS\n7> SELECT TOP 1 AirportCode FROM Airport WHERE City=?;\nThe stored procedures are tested with the following sqlcmd directives:\n$ sqlcmd\n1> exec FindAirportCodeByName 'Logan Airport';\n2> exec FindAirportCodeByCity 'Boston';\n4.6.5.1. Un-partitioning a Stored Procedure\nIn the simplest case, you can un-partition a single-partitioned stored procedure by dropping and re-creating\nthat procedure without including the PARTITION ON clause. 
In this example we drop the single-parti-\ntioned FindAirportCodeByName procedure and re-create it as multi-partitioned because it needs to search\nall partitions to find an airport code by name.\n$ sqlcmd\n1> DROP PROCEDURE FindAirportCodeByName;\n2> CREATE PROCEDURE FindAirportCodeByName AS\n27Designing the Database Schema\n3> SELECT TOP 1 AirportCode FROM Airport WHERE Name=?;\n4.6.5.2. Changing a Partitioned Table to a Replicated Table\nImportant\nYou cannot change the partitioning of a table that has data in it. To change a partitioned table to a\nreplicated one, you drop and re-create the table, which deletes any data that might be in the table.\nBefore executing the following steps, save the existing schema so you can easily re-create the table. The\nVoltDB Management Center provides a view of the existing database schema DDL source, which you\ncan download and save.\n$ sqlcmd\n1> DROP PROCEDURE FindAirportCodeByName; \n2> DROP PROCEDURE FindAirportCodeByCity;\n3> DROP TABLE Airport IF EXISTS CASCADE; \n4> CREATE TABLE AIRPORT ( \n5> AIRPORTCODE varchar(3) NOT NULL,\n6> NAME varchar(25),\n7> CITY varchar(25),\n8> COUNTRY varchar(15) DEFAULT 'USA',\n9> CONSTRAINT UNIQUECODE UNIQUE (AIRPORTCODE),\n10> PRIMARY KEY (AIRPORTCODE)\n11> );\n12> CREATE PROCEDURE FindAirportCodeByName AS \n13> SELECT TOP 1 AirportCode FROM Airport WHERE Name=?;\n14> CREATE PROCEDURE FindAirportCodeByCity AS\n15> SELECT TOP 1 AirportCode FROM Airport WHERE City=?;\nThe example is described as follows:\nDrop all stored procedures that reference the table. You cannot drop a table if stored procedures\nreference it.\nDrop the table. Options you may include are:\n•IF EXISTS — Use the IF EXISTS option to avoid command errors if the named table is already\nremoved.\n•CASCADE — A table cannot be removed if it has index or view references. You can remove\nthe references explicitly first or use the CASCADE option to have VoltDB remove the references\nalong with the table.\nRe-create the table. By default, a newly created table is a replicated table.\nRe-create the stored procedures that access the table. If the stored procedure is implemented with\nJava and changes are required, modify and reload the code before re-creating the stored procedures.\nFor more, see Section 5.3, “Installing Stored Procedures into the Database” .\n4.6.5.3. Re-partitioning a Table to a Different Column\nImportant\nYou cannot change the partitioning of a table that has data in it. In order to re-partition a table\nyou have to drop and re-create the table, which deletes any data that might be in the table.\nFollow these steps to re-partition a table:\n28Designing the Database Schema\n1.Un-partition the table by following the instructions in Section 4.6.5.2, “Changing a Partitioned Table\nto a Replicated Table” . The sub-steps are summarized as follows:\na.Drop all stored procedures that reference the table.\nb.Drop the table.\nc.Re-create the table.\nd.Re-create the stored procedures that access the table.\n2.Partition the table on the new column. In our example, it makes sense to partition the Airport table on\nthe AirportCode column, where each row must be unique and non null.\n$ sqlcmd\n1> PARTITION TABLE Airport ON COLUMN AirportCode;\n3.Re-partition stored procedures that should be single-partitioned. See Section 4.6.5.4, “Updating a Stored\nProcedure” .\n4.6.5.4. Updating a Stored Procedure\nThis section describes how to update a stored procedure that has already been declared in the database with\nthe CREATE PROCEDURE statement. 
The steps to update a stored procedure are summarized as follows:\n1.If the procedure is implemented in Java, update the procedure's code, recompile, and repackage the jar\nfile. For details, see Section 5.3, “Installing Stored Procedures into the Database” .\n2.Ensure all tables and columns the procedure accesses are in the database schema.\n3.Update the procedure in the database.\n•If the procedure is implemented in Java, use the sqlcmd load classes directive to update the class\nin the database. For example:\n$ sqlcmd\n1> load classes GetAirport.jar;\n•If the procedure is implemented with SQL, use the CREATE PROCEDURE AS command to update\nthe SQL.\n4.If required, re-partition the stored procedure. You partition procedures using the PARTITION ON\nclause in the CREATE PROCEDURE statement. If you need to re-partition the procedure, either chang-\ning the partitioning column or switching from replicated to partitioned or vice versa, perform the fol-\nlowing steps:\na.Use DROP PROCEDURE to remove the stored procedure.\nb.Use CREATE PROCEDURE to re-declare the stored procedure, including the new partitioning\nscheme.\nIn our example so far, we have three stored procedures that are adequate to access the Airport table, so\nno additional procedures need to be partitioned:\n•VoltDB automatically defined a default select stored procedure, which is partitioned on the Airport-\nCode column. It takes an AirportCode as input and returns a table structure containing the Airport-\nCode, Name, City, and Country.\n29Designing the Database Schema\n•The FindAirportCodeByName stored procedure should remain multi-partitioned because it needs to\nsearch in all partitions.\n•The FindAirportCodeByCity stored procedure should also remain multi-partitioned because it needs\nto search in all partitions.\n4.6.5.5. Removing a Stored Procedure from the Database\nIf you've decided a stored procedure is no longer needed, use the following steps to remove it from the\ndatabase:\n1.Drop the stored procedure from the database.\n$ sqlcmd\n1> DROP PROCEDURE GetAirport;\n2.Remove the code from the database. If the procedure is implemented with Java, use the sqlcmd remove\nclasses directive to remove the procedure's class from the database.\n2> remove classes myapp.procedures.GetAirport;\n30Chapter 5. Designing Stored Procedures\nto Access the Database\nAs you can see from Chapter 4, Designing the Database Schema , defining the database schema and the\npartitioning plan go hand in hand with understanding how the data is accessed. The two must be coordi-\nnated to ensure optimum performance. Your stored procedures must use the same attribute for partitioning\nas the table being accessed. Proper partitioning ensures that the table rows the stored procedure requests\nare in the same partition in which the procedure executes, thereby ensuring maximum efficiency.\nIt doesn't matter whether you design the partitioning first or the data access first, as long as in the end\nthey work together. However, for the sake of example, we will use the schema and partitioning outlined\nin Chapter 4, Designing the Database Schema when discussing how to design the data access.\n5.1. How Stored Procedures Work\nThe key to designing the data access for VoltDB applications is that complex or performance sensitive\naccess to the database should be done through stored procedures. It is possible to perform ad hoc queries\non a VoltDB database. 
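For example, you can type arbitrary SQL at the sqlcmd prompt; the following sketch runs an ad hoc query against the flight reservation schema (the flight ID value is made up for illustration):

$ sqlcmd
1> SELECT COUNT(*) FROM Reservation WHERE FlightID = 134;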
However, ad hoc queries do not benefit as fully from the performance optimizations\nVoltDB specializes in and therefore should not be used for frequent, repetitive, or complex transactions.\nWithin the stored procedure, you access the database using standard SQL syntax, with statements such\nas SELECT, UPDATE, INSERT, and DELETE. You can also include your own code within the stored\nprocedure to perform calculations on the returned values, to evaluate and execute conditional statements,\nor to perform many other functions your applications may need.\n5.1.1. VoltDB Stored Procedures are Transactional\nIn VoltDB, a stored procedure and a transaction are one and the same. Thus when you define a stored\nprocedure, VoltDB automatically provides ACID transaction guarantees for the stored procedure. This\nmeans that stored procedures fully succeed or automatically roll back as a whole if an error occurs (atom-\nic). When stored procedures change the data, the database is guaranteed to remain consistent. Stored pro-\ncedures execute and access the database completely isolated from each other, including when they execute\nconcurrently. Finally, stored procedure changes to the database are guaranteed to be saved and available\nfor subsequent database access (durable).\nBecause the transaction is defined in advance as a stored procedure, there is no need for your application\nto manage transactions using specific transaction commands such as BEGIN, ROLLBACK, COMMIT\nor END.1\n5.1.2. VoltDB Stored Procedures are Deterministic\nTo ensure data consistency and durability, VoltDB procedures must be deterministic. That is, given specific\ninput values, the outcome of the procedure is consistent and predictable. Determinism is critical because it\nallows the same stored procedure to run in multiple locations and give the same results. It is determinism\nthat makes it possible to run redundant copies of the database partitions without impacting performance.\n(See Chapter 10, Availability for more information on redundancy and availability.)\n1One side effect of transactions being precompiled as stored procedures is that external transaction management frameworks, such as Spring or\nJEE, are not supported by VoltDB.\n31Designing Stored Proce-\ndures to Access the Database\n5.1.2.1. Use Sorted SQL Queries\nOne key to deterministic behavior is avoiding ambiguous SQL queries in stored procedures. Specifically,\nperforming unsorted queries can result in a non-deterministic outcome. VoltDB does not guarantee a con-\nsistent order of results unless you use a tree index to scan the records in a specific order or you specify\nan ORDER BY clause in the query itself. In the worst case, a limiting query, such as SELECT TOP 10\nEmp_ID FROM Employees without an index or ORDER BY clause, can result in a different set of\nrows being returned. However, even a simple query such as SELECT * from Employees can return\nthe same rows in a different order.\nThe problem is that even if a non-deterministic query is read-only, its results might be used as input to an\nINSERT, UPDATE, or DELETE statement elsewhere in the stored procedure. For clusters with a K-safety\nvalue greater than zero, this means unsorted query results returned by two copies of the same partition,\nwhich may not match, could be used for separate update queries. 
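To make the distinction concrete, compare the two forms of the limiting query mentioned above (the Employees table is the same hypothetical table used in that example):

-- Non-deterministic: no ordering, so each copy of a partition may select a different ten rows
SELECT TOP 10 Emp_ID FROM Employees;

-- Deterministic: the explicit sort defines exactly which rows qualify and in what order
SELECT TOP 10 Emp_ID FROM Employees ORDER BY Emp_ID;

Feeding the unsorted results into a subsequent INSERT, UPDATE, or DELETE is exactly the mismatch scenario described above.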
If this happens, VoltDB detects the\nmismatch, reports it as a potential source of data corruption, and shuts down all but one copy of each\npartition.\nBy switching to this reduced K-safety mode, VoltDB avoids the threat of data corruption due to non-\ndeterminism. However, it also means that the cluster is no longer K-safe; there is only one copy of each\npartition and any node failure will crash the database. So, although the database continues to operate after\na mismatch, it is critically important you determine the cause of the non-deterministic behavior, correct\nthe affected procedures, take a final snapshot, and restart the database to restore full K-safety.\nThe risk of mismatched results at run time is why VoltDB issues a warning for any non-deterministic\nqueries in read-write stored procedures when you load the schema or classes. This is also why use of an\nORDER BY clause or a tree index in the WHERE constraint is strongly recommended for all SELECT\nstatements.\n5.1.2.2. Avoid Introducing Non-deterministic Values from External Func-\ntions\nAnother key to deterministic behavior is avoiding calls within your stored procedures to external functions\nor procedures that can introduce arbitrary data. External functions include file and network I/O (which\nshould be avoided any way because they can impact latency), as well as many common system-specific\nprocedures such as Date and Time.\nHowever, this limitation does not mean you cannot use arbitrary data in VoltDB stored procedures. It just\nmeans you must either generate the arbitrary data before the stored procedure call and pass it in as input\nparameters or generate it in a deterministic way. For example, if you need to load a set of records from a\nfile, you can open the file in your application and pass each row of data to a stored procedure that loads the\ndata into the VoltDB database. This is the best method when retrieving arbitrary data from sources (such\nas files or network resources) that would impact latency.\nThe other alternative is to use data that can be generated deterministically. For two of the most common\ncases, timestamps and random values, VoltDB provides methods for this:\n•VoltProcedure.getTransactionTime() returns a timestamp that can be used in place of the\nJava Date or Time classes.\n•VoltProcedure.getSeededRandomNumberGenerator() returns a pseudo random number\nthat can be used in place of the Java Util.Random class.\nThese procedures use the current transaction ID to generate a deterministic value for the timestamp and\nthe random number. See the VoltDB Java Stored Procedure API for more.\n32Designing Stored Proce-\ndures to Access the Database\n5.1.2.3. Stored Procedures have no Persistence\nEven seemingly harmless programming techniques, such as static variables can introduce nondeterminis-\ntic behavior. VoltDB provides no guarantees concerning the state of the stored procedure class instance\nacross invocations. Any information that you want to persist across invocations must either be stored in\nthe database itself or passed into the stored procedure as a parameter.\n5.1.2.4. Be Careful with Mutable Parameters\nYou can pass mutable parameters — most notably arrays — to stored procedures and those arrays can\nbe used as parameters to SQL statements. To protect you against non-deterministic behavior from the\ncontents of the mutable parameter being changed, VoltDB makes a copy of the array before passing it\nto any SQL statements. 
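For example, a procedure might accept an array of reservation IDs and use it in an IN clause. The following sketch is hypothetical (the procedure name, the SQL statement, and the use of an IN ? placeholder are for illustration only and are not part of the sample application):

import org.voltdb.*;

public class CancelReservations extends VoltProcedure {

    // Deletes every reservation whose ID appears in the supplied array.
    public final SQLStmt cancel = new SQLStmt(
        "DELETE FROM Reservation WHERE ReserveID IN ?;");

    public long run(long[] reserveIds) throws VoltAbortException {
        // reserveIds is a mutable parameter; by default VoltDB copies the
        // array before binding it to the queued SQL statement.
        voltQueueSQL(cancel, reserveIds);
        // Return the number of reservations deleted.
        return voltExecuteSQL()[0].asScalarLong();
    }
}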
If you call such procedures frequently with large arrays, the copy operation can\nconsume significant amounts of memory, impacting your application.\nThe alternative, if you are sure that your procedures do not modify the mutable parameters, is to configure\nthe database not to copy such parameters. You do this in the configuration file by setting the copypara-\nmeters attribute of the <procedure> element to \"false\". However, there is a significant risk associated\nwith this setting. If you disable copying and a stored procedure does modify an array parameter, it can\nresult in unpredictable behavior including run-time errors, database crashes, or even data corruption. So\nthis feature should be used with extreme caution.\n5.2. The Anatomy of a VoltDB Stored Procedure\nYou can write VoltDB stored procedures as Java classes. The following code sample illustrates the basic\nstructure of a VoltDB java stored procedure.\nimport org.voltdb.*;\n \npublic class Procedure-name extends VoltProcedure {\n \n // Declare SQL statements ...\n \n public datatype run ( arguments ) throws VoltAbortException {\n \n // Body of the Stored Procedure ...\n \n }\n}\nThe key points to remember are to:\n1.Import the VoltDB classes from org.voltdb.*\n2.Include the class definition, which extends the abstract class VoltProcedure\n3.Define the method run(), which performs the SQL queries and processing that make up the transaction\nIt is important to understand the details of how to design and develop stored procedures for your application\nas described in the following sections. However, for simple data access, the following techniques may\nsuffice for some of your stored procedures:\n•VoltDB defines default stored procedures to perform the most common table access such as inserting,\nselecting, updating, and deleting records based on a specific key value. See Section 7.1, “Using Default\nProcedures” for more.\n33Designing Stored Proce-\ndures to Access the Database\n•You can create stored procedures without writing any Java code by using the DDL statement CREATE\nPROCEDURE AS, where you define a single SQL query as a stored procedure. See Section 7.2, “Short-\ncut for Defining Simple Stored Procedures” .\nThe following sections describe the components of a stored procedure in more detail.\n5.2.1. The Structure of the Stored Procedure\nThe stored procedures themselves are written as Java classes, each procedure being a separate class. Ex-\nample 5.1, “Components of a VoltDB Java Stored Procedure” shows the stored procedure that looks up a\nflight to see if there are any available seats. The callouts identify the key components of a VoltDB stored\nprocedure.\nExample 5.1. Components of a VoltDB Java Stored Procedure\npackage fadvisor.procedures; \n \nimport org.voltdb.*; \n \npublic class HowManySeats extends VoltProcedure { \n \n public final SQLStmt GetSeatCount = new SQLStmt( \n \"SELECT NumberOfSeats, COUNT(ReserveID) \" +\n \"FROM Flight AS F, Reservation AS R \" +\n \"WHERE F.FlightID=R.FlightID AND R.FlightID=? 
\" +\n \"GROUP BY NumberOfSeats;\");\n \n public long run( int flightid) \n throws VoltAbortException { \n \n long numofseats;\n long seatsinuse;\n VoltTable[] queryresults;\n \n voltQueueSQL( GetSeatCount, flightid); \n \n queryresults = voltExecuteSQL(); \n \n VoltTable result = queryresults[0]; \n if (result.getRowCount() < 1) { return -1; }\n numofseats = result.fetchRow(0).getLong(0); \n seatsinuse = result.fetchRow(0).getLong(1);\n numofseats = numofseats - seatsinuse;\n return numofseats; // Return available seats \n }\n}\nStored procedures are written as Java classes. To access the VoltDB classes and methods, be sure\nto import org.voltdb.* .\nAlthough VoltDB stored procedures must be written in Java and the primary client interface is Java\n(as described in Chapter 6, Designing VoltDB Client Applications ), it is possible to write client appli-\ncations using other programming languages. See Chapter 8, Using VoltDB with Other Programming\nLanguages for more information on alternate client interfaces.\n34Designing Stored Proce-\ndures to Access the Database\nEach stored procedure extends the generic class VoltProcedure .\nWithin the stored procedure you access the database using ANSI-standard SQL statements. To do\nthis, you declare the statement as a special Java type called SQLStmt , which must be declared as\nfinal.\nIn the SQL statement, you insert a question mark (?) everywhere you want to replace a value by a\nvariable at runtime. In this example, the query GetSeatCount has one input variable, FlightID. (See\nAppendix B, Supported SQL Statements for details on the supported SQL statements.)\nTo ensure the stored procedure code is single partitioned, queries must filter on the partitioning\ncolumn for a single value (using equal, =). Filtering for a range of values will not be single-partitioned\nbecause the code will have to look up in all the partitions to ensure the entire range is found.\nThe bulk of the stored procedure is the run() method, whose input specifies the input arguments for\nthe stored procedure. See Section 5.2.2, “Passing Arguments to a Stored Procedure” next for details.\nNote that the run() method throws the exception VoltAbortException if any exceptions are\nnot caught. VoltAbortException causes the stored procedure transaction to rollback. (See Sec-\ntion 5.2.6, “Rolling Back a Transaction” for more information about rollback.)\nTo perform database queries, you queue SQL statements, specifying both the SQL statement and\nthe variables it requires, using the voltQueueSQL() method. More details are described in Sec-\ntion 5.2.3, “Creating and Executing SQL Queries in Stored Procedures” .\nAfter you queue all of the SQL statements you want to perform, use voltExecuteSQL() to\nexecute the statements in the queue.\nEach statement returns its results in a VoltTable structure. Because the queue can contain multiple\nqueries, voltExecuteSQL() returns an array of VoltTable structures, one array element for\neach query. More details are described in Section 5.2.4, “Interpreting the Results of SQL Queries” .\nIn addition to queueing and executing queries, stored procedures can contain custom code. However,\nyou should limit the amount of custom code in stored procedures to only that processing that is\nnecessary to complete the transaction, so as not to delay subsequent transactions.\nStored procedures can return a long integer, a VoltTable structure, or an array of VoltTable\nstructures. 
For more details, see Section 5.2.5, “Returning Results from a Stored Procedure” .\n5.2.2. Passing Arguments to a Stored Procedure\nYou specify the number and type of the arguments that the stored procedure accepts in the run() method.\nFor example, the following is the declaration of the run() method for an Initialize() stored pro-\ncedure from the voter sample application. This procedure accepts two arguments: an integer and a string.\npublic long run(int maxContestants, String contestants) { . . .\nVoltDB stored procedures can accept parameters of any of the following Java and VoltDB datatypes:\nInteger types byte, short, int, long, Byte, Short, Integer, and Long\nFloating point types float, double, Float, Double\nFixed decimal types BigDecimal\nString and binary types String and byte[]\nTimestamp types org.voltdb.types.TimestampType \njava.util.Date, java.sql.Date, java.sql.Timestamp\nVoltDB type VoltTable\nThe arguments can be scalar objects or arrays of any of the preceding types. For example, the following\nrun() method defines three arguments: a scalar long and two arrays, one array of timestamps and one\narray of Strings:\n35Designing Stored Proce-\ndures to Access the Database\nimport org.voltdb.*;\npublic class LogMessagesByEvent extends VoltProcedure {\n \n public long run (\n long eventType, \n org.voltdb.types.TimestampType[] eventTimeStamps,\n String[] eventMessages\n ) throws VoltAbortException {\nThe calling client application can use any of the preceding datatypes when invoking the callProce-\ndure() method and, where necessary, VoltDB makes the appropriate type conversions (for example,\nfrom int to String or from String to Double). See Section 6.2, “Invoking Stored Procedures” for more on\nusing the callProcedure() method.\n5.2.3. Creating and Executing SQL Queries in Stored Proce-\ndures\nThe main function of the stored procedure is to perform database queries. In VoltDB this is done in two\nsteps:\n1.Queue the queries using the voltQueueSQL() function\n2.Execute the queue and return the results using the voltExecuteSQL() function\nQueuing SQL Statements The first argument to voltQueueSQL() is the SQL statement to be executed.\nThe SQL statement is declared using a special class, SQLStmt , with question marks as placeholders for\nvalues that will be inserted at runtime.\nThe SQL statements must be declared as final and initialized at compile time, either when declared or\nwithin a constructor or static initializer. This allows the VoltDB planner to determine the optimal execution\nplan for each statement when the procedure is loaded and declared in the schema. To allow for code reuse,\nSQLStmt objects can be inherited from parent classes or constructed from other compile-time constants.\nThe remaining arguments to v oltQueueSQL() are the actual values that VoltDB inserts into the place-\nholders. For example, if you want to perform a SELECT of a table using two columns in the WHERE\nclause, your SQL statement might look something like this:\nSELECT CustomerID FROM Customer WHERE FirstName=? AND LastName=?;\nAt runtime, you want the questions marks replaced by values passed in as arguments from the calling\napplication. So the actual voltQueueSQL() invocation might look like this:\npublic final SQLStmt getcustid = new SQLStmt(\n \"SELECT CustomerID FROM Customer \" +\n \"WHERE FirstName=? AND LastName=? 
;\");\n \n ...\n \nvoltQueueSQL(getcustid , firstnm , lastnm );\nYour stored procedure can call voltQueueSQL() more than once to queue up multiple SQL statements\nbefore they are executed. Queuing multiple SQL statements improves performance when the SQL queries\nexecute because it minimizes the amount of network traffic within the cluster. Once you have queued all\nof the SQL statements you want to execute together, you then process the queue using the voltExe-\ncuteSQL() function.\n36Designing Stored Proce-\ndures to Access the Database\nVoltTable[] queryresults = voltExecuteSQL();\nCycles of Queue and Execute\nYour procedure can queue and execute SQL statements in as many cycles as necessary to complete the\ntransaction. For example, if you want to make a flight reservation, you may need to access the database\nand verify that the flight exists before creating the reservation in the database. One way to do this is to\nlook up the flight, verify that a valid row was returned, then insert the reservation, like so:\nExample 5.2. Cycles of Queue and Execute in a Stored Procedure\nfinal String getflight = \"SELECT FlightID FROM Flight WHERE FlightID=?;\"; \nfinal String makeres = \"INSERT INTO Reservation (?,?,?,?,?);\";\n \npublic final SQLStmt getflightsql = new SQLStmt(getflight);\npublic final SQLStmt makeressql = new SQLStmt(makeres);\n \npublic VoltTable[] run( int reservenum, int flightnum, int customernum ) \n throws VoltAbortException {\n \n // Verify flight ID\n voltQueueSQL(getflightsql, flightnum); \n VoltTable[] queryresults = voltExecuteSQL();\n \n // If there is no matching record, rollback \n if (queryresults[0].getRowCount() == 0 ) throw new VoltAbortException(); \n \n // Make reservation\n voltQueueSQL(makeressql, reservenum, flightnum, customernum,0,0); \n return voltExecuteSQL();\n}\nThis stored procedure code to make a reservation is described as follows:\nDefine the SQL statements to use. The getflight string contains an SQL statement that verifies the\nflight ID, and the makeres string contains the SQL statement that makes the reservation.\nDefine the run() method for the stored procedure. This stored procedure takes as input arguments\nthe reservation number, the flight number, and the customer number.\nQueue and execute an SQL statement. In this example the voltExecuteSQL() method processes\nthe single getflightsql() function, which executes the SQL statement specified in the getflight\nstring.\nProcess results. If the flight is not available, the exception VoltAbortException aborts the\nstored procedure and rolls back the transaction.\nThe second SQL statement to make the reservation is then queued and executed. The voltEx-\necuteSQL() method processes the single makeressql() function, which executes the SQL\nstatement specified in the makeres string.\n5.2.4. Interpreting the Results of SQL Queries\nWith the voltExecuteSQL() call, the results of all the queued SQL statements are returned in an array\nof VoltTable structures. The array contains one VoltTable for each SQL statement in the queue.\nThe VoltTable structures are returned in the same order as the respective SQL statements in the queue.\n37Designing Stored Proce-\ndures to Access the Database\nThe VoltTable itself consists of rows, where each row contains columns, and each column has the\ncolumn name and a value of a fixed datatype. The number of rows and columns per row depends on the\nspecific query.\nFigure 5.1. 
Array of VoltTable Structures\nFor example, if you queue two SQL SELECT statements, one looking for the destination of a specific\nflight and the second looking up the ReserveID and Customer name (first and last) of reservations for that\nflight, the code for the stored procedure might look like the following:\npublic final SQLStmt getdestsql = new SQLStmt(\n \"SELECT Destination FROM Flight WHERE FlightID=?;\");\npublic final SQLStmt getressql = new SQLStmt(\n \"SELECT r.ReserveID, c.FirstName, c.LastName \" +\n \"FROM Reservation AS r, Customer AS c \" +\n \"WHERE r.FlightID=? AND r.CustomerID=c.CustomerID;\");\n \n ...\n \n voltQueueSQL(getdestsql,flightnum);\n voltQueueSQL(getressql,flightnum);\n VoltTable[] results = voltExecuteSQL();\nThe array returned by voltExecuteSQL() will have two elements:\n•The first array element is a VoltTable with one row (FlightID is defined as unique) containing one\ncolumn, because the SELECT statement returns only one value.\n•The second array element is a VoltTable with as many rows as there are reservations for the specific\nflight, each row containing three columns: ReserveID, FirstName, and LastName.\nAssuming the stored procedure call input was a FlightID value of 134, the data returned for the second\narray element might be represented as follows:\nFigure 5.2. One VoltTable Structure is returned for each Queued SQL Statement\nVoltDB provides a set of convenience methods for accessing the contents of the VoltTable array. Ta-\nble 5.1, “Methods of the VoltTable Classes” lists some of the most common methods. (See also Java Stored\nProcedure API .)\n38Designing Stored Proce-\ndures to Access the Database\nTable 5.1. Methods of the VoltTable Classes\nMethod Description\nint fetchRow(int index) Returns an instance of the VoltTableRow class for\nthe row specified by index.\nint getRowCount() Returns the number of rows in the table.\nint getColumnCount() Returns the number of columns for each row in the\ntable.\nType getColumnType(int index) Returns the datatype of the column at the specified\nindex. Type is an enumerated type with the follow-\ning possible values:\nBIGINT\nDECIMAL\nFLOAT\nGEOGRAPHY\nGEOGRAPHY_POINT\nINTEGER\nINVALID\nNULL\nNUMERIC\nSMALLINT\nSTRING\nTIMESTAMP\nTINYINT\nVARBINARY\nVOLTTABLE\nString getColumnName(int index) Returns the name of the column at the specified in-\ndex.\ndouble getDouble(int index)\nlong getLong(int index)\nString getString(int index)\nBigDecimal getDecimalAsBigDecimal(int index)\ndouble getDecimalAsDouble(int index)\nDate getTimestampAsTimestamp(int index)\nlong getTimestampAsLong(int index)\nbyte[] getVarbinary(int index)Methods of VoltTable.Row\nReturn the value of the column at the specified index\nin the appropriate datatype. Because the datatype of\nthe columns vary depending on the SQL query, there\nis no generic method for returning the value. You\nmust specify what datatype to use when fetching the\nvalue.\nIt is also possible to retrieve the column values by name. You can invoke any of the getDatatype() methods\nand pass a string argument specifying the name of the column, rather than the numeric index. Accessing\nthe columns by name can make code easier to read and less susceptible to errors due to changes in the\nSQL schema (such as changing the order of the columns). 
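For example, the following sketch (the method name is illustrative) walks the second VoltTable returned by the reservation query above and fetches the same column value both ways:

public void printLastNames(VoltTable reservations) {
    reservations.resetRowPosition();
    while (reservations.advanceRow()) {
        String colName = reservations.getColumnName(2);    // e.g. the LastName column
        String byName  = reservations.getString(colName);  // access by column name
        String byIndex = reservations.getString(2);         // access by zero-based index
        System.out.printf("%s / %s%n", byName, byIndex);
    }
}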
On the other hand, accessing column values by\nnumeric index is potentially more efficient under heavy load conditions.\nExample 5.3, “Displaying the Contents of VoltTable Arrays” shows a generic routine for “walking”\nthrough the return results of a stored procedure. In this example, the contents of the VoltTable array\nare written to standard output.\n39Designing Stored Proce-\ndures to Access the Database\nExample 5.3. Displaying the Contents of VoltTable Arrays\npublic void displayResults(VoltTable[] results) {\n int table = 1;\n for (VoltTable result : results) {\n System.out.printf(\"*** Table %d ***\\n\",table++);\n displayTable(result);\n }\n}\n \npublic void displayTable(VoltTable t) {\n \n final int colCount = t.getColumnCount();\n int rowCount = 1;\n t.resetRowPosition();\n while (t.advanceRow()) {\n System.out.printf(\"--- Row %d ---\\n\",rowCount++);\n for (int col=0; col<colCount; col++) {\n System.out.printf(\"%s: \",t.getColumnName(col));\n switch(t.getColumnType(col)) {\n case TINYINT: case SMALLINT: case BIGINT: case INTEGER:\n System.out.printf(\"%d\\n\", t.getLong(col));\n break;\n case STRING:\n System.out.printf(\"%s\\n\", t.getString(col));\n break;\n case DECIMAL:\n System.out.printf(\"%f\\n\", t.getDecimalAsBigDecimal(col));\n break;\n case FLOAT:\n System.out.printf(\"%f\\n\", t.getDouble(col));\n break;\n }\n }\n }\n}\nFor further details on interpreting the VoltTable structure, see the Java documentation that is provided\nonline in the doc/ subfolder for your VoltDB installation.\n5.2.5. Returning Results from a Stored Procedure\nStored procedures can return the following types:\n•Long integer\n•Single VoltTable\n•Array of VoltTable structures\nYou can return all of the query results by returning the VoltTable array, or you can return a scalar value\nthat is the logical result of the transaction. (For example, the stored procedure in Example 5.1, “Compo-\nnents of a VoltDB Java Stored Procedure” returns a long integer representing the number of remaining\nseats available in the flight.)\n40Designing Stored Proce-\ndures to Access the Database\nWhatever value the stored procedure returns, make sure the run() method includes the appropriate\ndatatype in its definition. For example, the following two definitions specify different return datatypes;\nthe first returns a long integer and the second returns the results of a SQL query as a VoltTable array.\npublic long run( int flightid)\n \npublic VoltTable[] run ( String lastname, String firstname)\nNote that you can interpret the results of SQL queries either in the stored procedure or in the client appli-\ncation. However, for performance reasons, it is best to limit the amount of additional processing done by\nthe stored procedure to ensure it executes quickly and frees the queue for the next stored procedure. So\nunless the processing is necessary for subsequent SQL queries, it is usually best to return the query results\n(in other words, the VoltTable array) directly to the calling application and interpret them there.\n5.2.6. Rolling Back a Transaction\nFinally, if a problem arises while a stored procedure is executing, whether the problem is anticipated or\nunexpected, it is important that the transaction rolls back. 
Rollback means that any changes made during\nthe transaction are undone and the database is left in the same state it was in before the transaction started.\nVoltDB is a fully transactional database, which means that if a transaction (stored procedure) fails, the\ntransaction is automatically rolled back and the appropriate exception is returned to the calling application.\nExceptions that can cause a rollback include the following:\n•Runtime errors in the stored procedure code, such as division by zero or datatype overflow.\n•Violating database constraints in SQL queries, such as inserting a duplicate value into a column defined\nas unique.\nThe atomicity of the stored procedure depends on VoltDB being able to roll back incomplete database\nchanges. VoltDB relies on Java exception handling outside the stored procedure to perform the roll back.\nTherefore, you should not attempt to alter any exceptions thrown by the voltExecuteSql method. If your\nprocedure code does catch exceptions thrown as a result of executing SQL statements, make sure that the\nexception handler re-throws the exception to allow VoltDB to perform the necessary roll back activities\nbefore the stored procedure returns to the calling program.\nOn the other hand, there may be situations where an exception occurs in the program logic. The issue might\nnot be one that is caught by Java or VoltDB, but still there is no practical way for the transaction logic to\ncomplete. In these situations, you can force a rollback by explicitly throwing the VoltAbortExcep-\ntion exception. For example, if a flight ID does not exist, you do not want to create a reservation so the\nstored procedure can force a rollback like so:\nif (!flightid) { throw new VoltAbortException(); }\nSee Section 7.3, “Verifying Expected Query Results” for another way to roll back procedures when queries\ndo not meet necessary conditions.\n5.3. Installing Stored Procedures into the Database\nWhen your stored procedure code is ready, you need to get the procedures into the database and ready to\nuse. You first compile the procedure code, create a jar file, and load the resulting jar file into the database.\nThen you need to declare in the schema which procedures are stored procedures. Finally, depending on\nwhich table each stored procedure accesses, you need to partition each procedure to match the table par-\ntitioning. These processes are covered in the following sections:\n•Compiling, Packaging, and Loading Stored Procedures\n41Designing Stored Proce-\ndures to Access the Database\n•Declaring Stored Procedures in the Schema\n•Partitioning Stored Procedures in the Schema\nThese sections show how to use DDL to declare and partition stored procedures in the database schema.\nIf you find you need to modify the schema, see Section 4.6, “Modifying the Schema” .\n5.3.1. Compiling, Packaging, and Loading Stored Procedures\nThe VoltDB stored procedures are written as Java classes, so you compile them using the Java compiler.\nAnytime you update your stored procedure code, remember to recompile, package, and reload it into the\ndatabase using the following steps:\n$ javac -classpath \"./:/opt/voltdb/voltdb/*\" \\ \n -d ./obj \\\n *.java\n$ jar cvf myproc.jar -C obj . \n$ sqlcmd \n1> load classes myproc.jar;\n2> show classes;\nThe steps are described as follows:\nUse the javac command to compile the procedure Java code.\nYou include libraries by using the -classpath argument on the command line or by defining the\nenvironment variable CLASSPATH. 
You must include the VoltDB libraries in the classpath so Java\ncan resolve references to the VoltDB classes and methods. This example assumes that the VoltDB\nsoftware has been installed in the folder /opt/voltdb . If you installed VoltDB in a different\ndirectory, you need to include your installation path. Also, if your client application depends on other\nlibraries, they need to be included in the classpath as well.\nUse the -d flag to specify an output directory in which to create the resulting class files.\nUse the jar command to package your Java classes into a Java archive, or JAR file.\nThe JAR file must have the same Java package structure as the classes in the JAR file. For example,\nif a class has a structure such as myapp.procedures.ProcedureFoo , then the JAR file has\nto have myapp/procedures/ProcedureFoo.class as the class structure for this file.\nThe JAR file must include any inner classes or other dependent classes used by the stored procedures.\nIt can also be used to load any resource files, such as XML or other data files, that the procedures\nneed. Any additional resources in the JAR file are loaded into the server as long as they are in a\nsubfolder. (Resources in the root directory of the JAR file are ignored.)\nUse the sqlcmd load classes directive to load the stored procedure classes into the database.\nYou can use the show classes command to display information about the classes installed in the\ncluster.\nBefore a stored procedure can be called by a client application, you need to declare it in the schema, which\nis described next.\n5.3.2. Declaring Stored Procedures in the Schema\nTo make your stored procedures accessible in the database, you must declare them in the schema using\nthe CREATE PROCEDURE statement. Be sure to identify all of your stored procedures or they will not\n42Designing Stored Proce-\ndures to Access the Database\nbe available to the client applications at runtime. Also, before you declare a procedure, ensure the tables\nand columns the procedure accesses are in the schema.\nThe following DDL statements declare five stored procedures, identifying them by their class name:\n$ sqlcmd\n1> CREATE PROCEDURE FROM CLASS fadvisor.procedures.LookupFlight;\n2> CREATE PROCEDURE FROM CLASS fadvisor.procedures.HowManySeats;\n3> CREATE PROCEDURE FROM CLASS fadvisor.procedures.MakeReservation;\n4> CREATE PROCEDURE FROM CLASS fadvisor.procedures.CancelReservation;\n5> CREATE PROCEDURE FROM CLASS fadvisor.procedures.RemoveFlight;\nFor some situations, you can create stored procedures directly in the schema using SQL instead of loading\nJava code. See how to use the CREATE PROCEDURE AS statement in Section 7.2, “Shortcut for Defining\nSimple Stored Procedures” .\nFor more about modifying a schema with DDL, see Section 4.6, “Modifying the Schema” .\n5.3.3. Partitioning Stored Procedures in the Schema\nWe want the most frequently used stored procedures to be single-partitioned. This means that the procedure\nexecutes in the one partition that also has the data it needs. Single-partitioned stored procedures do not\nhave the overhead of processing across multiple partitions and servers, wasting time searching through the\ndata of the entire table. To ensure single-partitioned efficiency, the parameter the stored procedure uses to\nidentify its required data must be the same as the column on which the table rows are partitioned.\nRemember that in our sample application the RESERVATION table is partitioned on FLIGHTID. 
Let's\nsay you create a stored procedure, MakeReservation() , with two arguments, flight_id and customer_id .\nThe following figure shows how the stored procedure will automatically execute in the partition that has\nthe requested row.\n43Designing Stored Proce-\ndures to Access the Database\nFigure 5.3. Stored Procedures Execute in the Appropriate Partition Based on the\nPartitioned Parameter Value\nIf you do not declare a procedure as single-partitioned, it is assumed to be multi-partitioned by default.\nThe advantage of multi-partitioned stored procedures is that they have full access to all of the data in\nthe database, across all partitions. However, the real focus of VoltDB, and the way to achieve maximum\nthroughput for your application, is through the use of single-partitioned stored procedures.\n5.3.3.1. How to Declare Single-Partition Procedures\nBefore declaring a single-partitioned procedure, ensure the following prerequisites:\n1.The table that the stored procedure accesses has been partitioned in the schema. See Section 4.4, “Par-\ntitioning Database Tables” .\n2.If the procedure is implemented with Java code, it is loaded into the database. See Section 5.3.1, “Com-\npiling, Packaging, and Loading Stored Procedures” .\nWhen you declare a stored procedure as single-partitioned, you must specify both the associated table and\nthe column on which it is partitioned using the PARTITION ON clause in the CREATE PROCEDURE\nstatement. The following example uses the RESERVATION table and the FLIGHTID column as the par-\ntitioning column. For example:\nCREATE PROCEDURE \n PARTITION ON \n TABLE Reservation COLUMN FlightID\n FROM CLASS fadvisor.procedures.MakeReservation;\nThe PARTITION ON clause assumes that the partitioning column value is also the first parameter to the\nstored procedure. Suppose you wish to partition a stored procedure on the third parameter such as the\n44Designing Stored Proce-\ndures to Access the Database\nprocedure GetCustomerDetails() , where the third parameter is a customer_id. You must specify\nthe partitioning parameter using the PARAMETER clause and an index for the parameter position. The\nindex is zero-based so the third parameter would be \"2\" and the CREATE PROCEDURE statement would\nbe as follows:\nCREATE PROCEDURE\n PARTITION ON \n TABLE Customer COLUMN CustomerID PARAMETER 2\n FROM CLASS fadvisor.procedures. GetCustomerDetails;\n5.3.3.2. Queries in Single-Partitioned Stored Procedures\nSingle-partitioned stored procedures are special because they operate independently of other partitions,\nwhich is why they are so fast. At the same time, single-partitioned stored procedures operate on only a\nsubset of the entire data, that is, only the data within the specified partition.\nCaution\nIt is the application developer's responsibility to ensure that the queries in a single-partitioned\nstored procedure are truly single-partitioned. VoltDB does not warn you about SELECT or\nDELETE statements that might return incomplete results. 
For example, if your single-partitioned\nprocedure attempts to operate on a range of values for the partitioning column, the range is in-\ncomplete and includes only a subset of the table data that is in the current partition.\nVoltDB does generate a runtime error if you attempt to INSERT a row that does not belong in\nthe current partition.\nAfter you partition a procedure, your stored procedure can operate on only those records in the partitioned\ntable that are identified by the partitioning column, in this example the RESERVATION table identified\nby a FLIGHTID. Your stored procedure can access records in replicated tables because the entire table is\navailable to every partition. However, for other partitioned tables, the stored procedure can only operate on\nthose records if both tables are partitioned on the same attribute . In this example that would be FLIGHTID.\nIn other words, the following rules apply:\n•Any SELECT, UPDATE, or DELETE queries must use the constraint, WHERE identifier =?\nThe question mark is replaced at runtime by the input value that identifies the row of data in the table.\nIn our example, queries on the RESERVATION table must use the constraint, WHERE FLIGHTID=?\n•SELECT statements can join the partitioned table to replicated tables, as long as the preceding WHERE\nconstraint is also applied.\n•SELECT statements can join the partitioned table to other partitioned tables as long as the following\nare true:\n•The two tables are partitioned on the same attribute or column (in our example, FLIGHTID).\n•The tables are joined on the shared partitioning column.\n•The following WHERE constraint is also used: WHERE partitioned-table . identifi-\ner=? In this example, WHERE RESERVATION.FLIGHTID=?\nFor example, the RESERVATION table can be joined with the FLIGHT table (which is replicated). How-\never, the RESERVATION table cannot be joined with the CUSTOMER table in a single-partitioned stored\n45Designing Stored Proce-\ndures to Access the Database\nprocedure because the two tables use different partitioning columns. (CUSTOMER is partitioned on the\nCUSTOMERID column.)\nThe following are examples of invalid SQL queries for a single-partitioned stored procedure partitioned\non FLIGHTID:\n•INVALID: SELECT * FROM reservation WHERE reservationid=?\nThe RESERVATION table is being constrained by a column (RESERVATIONID) which is not the\npartitioning column.\n•INVALID: SELECT c.lastname FROM reservation AS r, customer AS c WHERE\nr.flightid=? AND c.customerid = r.customerid\nThe correct partitioning column is being used in the WHERE clause, but the tables are being joined on\na different column. As a result, not all CUSTOMER rows are available to the stored procedure since\nthe CUSTOMER table is partitioned on a different column than RESERVATION.\n46Chapter 6. Designing VoltDB Client\nApplicaons\nAfter you design and partition your database schema ( Chapter 4, Designing the Database Schema ), and\nafter you design the necessary stored procedures ( Chapter 5, Designing Stored Procedures to Access the\nDatabase ), you are ready to write the client application logic. 
The client code contains all the business-spe-\ncific logic required for the application, including business rule validation and keeping track of constraints\nsuch as proper data ranges for arguments entered in stored procedure calls.\nThe three steps to using VoltDB from a client application are:\n1.Creating a connection to the database\n2.Calling stored procedures\n3.Closing the client connection\nThe following sections explain how to perform these functions using the standard VoltDB Java client\ninterface. (See VoltDB Java Client API .) The VoltDB Java Client is a thread-safe class library that provides\nruntime access to VoltDB databases and functions.\nIt is possible to call VoltDB stored procedures from programming languages other than Java. However,\nreading this chapter is still recommended to understand the process for invoking and interpreting the results\nof a VoltDB stored procedure. See Chapter 8, Using VoltDB with Other Programming Languages for more\ninformation about using VoltDB from client applications written in other languages.\n6.1. Connecting to the VoltDB Database\nThe first task for the calling program is to create a connection to the VoltDB database. You do this with\nthe following steps:\norg.voltdb.client.Client client = null;\nClientConfig config = null;\ntry {\n config = new ClientConfig(\"advent\",\"xyzzy\") ; \n client = ClientFactory. createClient (config); \n \n client.createConnection (\"myserver.xyz.net\"); \n} catch (java.io.IOException e) {\n e.printStackTrace();\n System.exit(-1);\n}\nDefine the configuration for your connections. In its simplest form, the ClientConfig class spec-\nifies the username and password to use. It is not absolutely necessary to create a client configuration\nobject. For example, if security is not enabled (and therefore a username and password are not need-\ned) a configuration object is not required. But it is a good practice to define the client configuration\nto ensure the same credentials are used for all connections against a single client. It is also possible\nto define additional characteristics of the client connections as part of the configuration, such as the\ntimeout period for procedure invocations or a status listener. (See Section 6.5, “Handling Errors” .)\nCreate an instance of the VoltDB Client class.\n47Designing VoltDB Client Applications\nCall the createConnection() method. After you instantiate your client object, the argument\nto createConnection() specifies the database node to connect to. You can specify the server\nnode as a hostname (as in the preceding example) or as an IP address. You can also add a second\nargument if you want to connect to a port other than the default. For example, the following cre-\nateConnection() call attempts to connect to the admin port, 21211:\nclient.createConnection(\"myserver.xyz.net\",21211);\nIf security is enabled and the username and password in the ClientConfig() call do not match a\nuser defined in the configuration file, the call to createConnection() will throw an exception.\nSee Chapter 12, Security for more information about the use of security with VoltDB databases.\nWhen you are done with the connection, you should make sure your application calls the close() method\nto clean up any memory allocated for the connection. See Section 6.4, “Closing the Connection” .\n6.1.1. Connecting to Multiple Servers\nYou can create the connection to any of the nodes in the database cluster and your stored procedure will\nbe routed appropriately. 
In fact, you can create connections to multiple nodes on the server and your\nsubsequent requests will be distributed to the various connections. For example, the following Java code\ncreates the client object and then connects to all three nodes of the cluster. In this case, security is not\nenabled so no client configuration is needed:\ntry {\n client = ClientFactory.createClient();\n client.createConnection(\"server1.xyz.net\");\n client.createConnection(\"server2.xyz.net\");\n client.createConnection(\"server3.xyz.net\");\n} catch (java.io.IOException e) {\n e.printStackTrace();\n System.exit(-1);\n}\nCreating multiple connections has three major benefits:\n•Multiple connections distribute the stored procedure requests around the cluster, avoiding a bottleneck\nwhere all requests are queued through a single host. This is particularly important when using asynchro-\nnous procedure calls or multiple clients.\n•For Java applications, the VoltDB Java client library uses client affinity. That is, the client knows which\nserver to send each request to based on the partitioning, thereby eliminating unnecessary network hops.\n•Finally, if a server fails for any reason, when using K-safety the client can continue to submit requests\nthrough connections to the remaining nodes. This avoids a single point of failure between client and\ndatabase cluster. See Chapter 10, Availability for more.\n6.1.2. Using the Auto-Connecting Client\nAn easier way to create connections to all of the database servers is to use the \"smart\" or topology-aware\nclient. By setting the Java client to be aware of the cluster topology, you only need to connect to one server\nand the client automatically connects to all of the servers in the cluster.\nAn additional advantage of the smart client is that it will automatically reconnect whenever the topology\nchanges. That is, if a server fails and then rejoins the cluster, or new nodes are added to the cluster, the\nclient will automatically create connections to the newly available servers.\n48Designing VoltDB Client Applications\nYou enable auto-connecting when you initialize the client object by setting the configuration option before\ncreating the client object. For example:\norg.voltdb.client.Client client = null;\nClientConfig config = new ClientConfig(\"\",\"\");\nconfig.setTopologyChangeAware (true);\n try {\n client = ClientFactory.createClient(config);\n client.createConnection(\"server1.xyz.net\");\n . . .\nWhen setTopologyChangeAware() is set to true, the client library will automatically connect to all\nservers in the cluster and adjust its connections any time the cluster topology changes.\n6.2. Invoking Stored Procedures\nAfter your client creates the connection to the database, it is ready to call the stored procedures. You invoke\na stored procedure using the callProcedure() method, passing the procedure name and variables as\narguments. For example:\nVoltTable[] results;\n \ntry { results = client. callProcedure (\"LookupFlight\", \n origin,\n dest,\n departtime). getResults (); \n} catch (Exception e) { \n e.printStackTrace();\n System.exit(-1);\n}\nThe callProcedure() method takes the procedure name and the procedure's variables as argu-\nments. The LookupFlight() stored procedure requires three variables: the originating airport,\nthe destination, and the departure time.\nOnce a synchronous call completes, you can evaluate the results of the stored procedure. 
The call-\nProcedure() method returns a ClientResponse object, which includes information about the\nsuccess or failure of the stored procedure. To retrieve the actual return values you use the getRe-\nsults() method. See Section 5.2.4, “Interpreting the Results of SQL Queries” for more informa-\ntion about interpreting the results of VoltDB stored procedures.\nNote that since callProcedure() can throw an exception (such as VoltAbortException )\nit is a good practice to perform error handling and catch known exceptions.\n6.3. Invoking Stored Procedures Asynchronously\nCalling stored procedures synchronously simplifies the program logic because your client application waits\nfor the procedure to complete before continuing. However, for high performance applications looking to\nmaximize throughput, it is better to queue stored procedure invocations asynchronously.\nAsynchronous Invocation\nTo invoke stored procedures asynchronously, use the callProcedure() method with an additional\nfirst argument, a callback that will be notified when the procedure completes (or an error occurs). For ex-\nample, to invoke a NewCustomer() stored procedure asynchronously, the call to callProcedure()\nmight look like the following:\n49Designing VoltDB Client Applications\nclient.callProcedure(new MyCallback(),\n \"NewCustomer\",\n firstname,\n lastname,\n custID};\nThe following are other important points to note when making asynchronous invocations of stored pro-\ncedures:\n•Asynchronous calls to callProcedure() return control to the calling application as soon as the\nprocedure call is queued.\n•If the database server queue is full, callProcedure() will block until it is able to queue the proce-\ndure call. This is a condition known as backpressure. This situation does not normally happen unless the\ndatabase cluster is not scaled sufficiently for the workload or there are abnormal spikes in the workload.\nSee Section 6.5.3, “Writing a Status Listener to Interpret Other Errors” for more information.\n•Once the procedure is queued, any subsequent errors (such as an exception in the stored procedure itself\nor loss of connection to the database) are returned as error conditions to the callback procedure.\nCallback Implementation\nThe callback procedure ( MyCallback() in this example) is invoked after the stored procedure completes\non the server. The following is an example of a callback procedure implementation:\nstatic class MyCallback implements ProcedureCallback {\n @Override\n public void clientCallback(ClientResponse clientResponse) {\n if (clientResponse. getStatus() != ClientResponse.SUCCESS) {\n System.err.println(clientResponse.getStatusString());\n } else {\n myEvaluateResultsProc(clientResponse. getResults() );\n }\n }\n}\nThe callback procedure is passed the same ClientResponse structure that is returned in a synchronous\ninvocation. ClientResponse contains information about the results of execution. In particular, the\nmethods getStatus() and getResults() let your callback procedure determine whether the stored\nprocedure was successful and evaluate the results of the procedure.\nThe VoltDB Java client is single threaded, so callback procedures are processed one at a time. Conse-\nquently, it is a good practice to keep processing in the callback to a minimum, returning control to the main\nthread as soon as possible. 
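One way to keep the callback itself lightweight is to hand the response off to an application-managed worker pool. The following is a minimal sketch; the fixed-size pool and the myEvaluateResultsProc helper are illustrative and not part of the VoltDB client API:

static class MyCallback implements ProcedureCallback {
    // A small, application-managed worker pool; sizing is workload-specific.
    private static final java.util.concurrent.ExecutorService workers =
        java.util.concurrent.Executors.newFixedThreadPool(4);

    @Override
    public void clientCallback(ClientResponse clientResponse) {
        if (clientResponse.getStatus() != ClientResponse.SUCCESS) {
            System.err.println(clientResponse.getStatusString());
            return;
        }
        // Return control to the client's network thread right away and
        // evaluate the results on a worker thread instead.
        workers.submit(() -> myEvaluateResultsProc(clientResponse.getResults()));
    }
}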
If more complex processing is required by the callback, creating a separate\nthread pool and spawning worker methods on a separate thread from within the asynchronous callback\nis recommended.\n6.4. Closing the Connection\nWhen the client application is done interacting with the VoltDB database, it is a good practice to close the\nconnection. This ensures that any pending transactions are completed in an orderly way. The following\nexample demonstrates how to close the client connection:\ntry {\n client.drain();\n client.close();\n50Designing VoltDB Client Applications\n} catch (InterruptedException e) {\n e.printStackTrace();\n}\nThere are two steps to closing the connection:\n1.Call drain() to make sure all asynchronous calls have completed. The drain() method pauses the\ncurrent thread until all outstanding asynchronous calls (and their callback procedures) complete. This\ncall is not necessary if the application only makes synchronous procedure calls. However, there is no\npenalty for calling drain() and so it can be included for completeness in all applications.\n2.Call close() to close all of the connections and release any resources associated with the client.\n6.5. Handling Errors\nA special situation to consider when calling VoltDB stored procedures is error handling. The VoltDB client\ninterface catches most exceptions, including connection errors, errors thrown by the stored procedures\nthemselves, and even exceptions that occur in asynchronous callbacks. These error conditions are not\nreturned to the client application as exceptions. However, the application can still receive notification and\ninterpret these conditions using the client interface.\nThe following sections explain how to identify and interpret errors that occur when executing stored pro-\ncedures and in asynchronous callbacks. These include:\n•Interpreting Execution Errors\n•Handling Timeouts\n•Writing a Status Listener to Interpret Other Errors\n6.5.1. Interpreting Execution Errors\nIf an error occurs in a stored procedure (such as an SQL constraint violation), VoltDB catches the error\nand returns information about it to the calling application as part of the ClientResponse class. The\nClientResponse class provides several methods to help the calling application determine whether\nthe stored procedure completed successfully and, if not, what caused the failure. The two most important\nmethods are getStatus() and getStatusString() .\nstatic class MyCallback implements ProcedureCallback {\n @Override\n public void clientCallback(ClientResponse clientResponse) {\n final byte AppCodeWarm = 1;\n final byte AppCodeFuzzy = 2;\n if (clientResponse. getStatus() != ClientResponse.SUCCESS) { \n System.err.println(clientResponse. getStatusString() ); \n } else {\n \n if (clientResponse. getAppStatus() == AppCodeFuzzy) { \n System.err.println(clientResponse. getAppStatusString() );\n };\n myEvaluateResultsProc(clientResponse.getResults());\n }\n }\n}\n51Designing VoltDB Client Applications\nThe getStatus() method tells you whether the stored procedure completed successfully and, if\nnot, what type of error occurred. It is good practice to always check the status of the ClientRe-\nsponse before evaluating the results of a procedure call, because if the status is anything but SUC-\nCESS, there will not be any results returned. The possible values of getStatus() are:\n•CONNECTION_LOST — The network connection was lost before the stored procedure returned\nstatus information to the calling application. 
The stored procedure may or may not have completed\nsuccessfully.\n•CONNECTION_TIMEOUT — The stored procedure took too long to return to the calling ap-\nplication. The stored procedure may or may not have completed successfully. See Section 6.5.2,\n“Handling Timeouts” for more information about handling this condition.\n•GRACEFUL_FAILURE — An error occurred and the stored procedure was gracefully rolled\nback.\n•RESPONSE_UNKNOWN — This is a rare error that occurs if the coordinating node for the\ntransaction fails before returning a response. The node to which your application is connected\ncannot determine if the transaction failed or succeeded before the coordinator was lost. The best\ncourse of action, if you receive this error, is to use a new query to determine if the transaction\nfailed or succeeded and then take action based on that knowledge.\n•SUCCESS — The stored procedure completed successfully.\n•UNEXPECTED_FAILURE — An unexpected error occurred on the server and the procedure\nfailed.\n•USER_ABORT — The code of the stored procedure intentionally threw a UserAbort exception\nand the stored procedure was rolled back.\nIf a getStatus() call identifies an error status other than SUCCESS, you can use the getSta-\ntusString() method to return a text message providing more information about the specific error\nthat occurred.\nIf you want the stored procedure to provide additional information to the calling application, there are\ntwo more methods to the ClientResponse that you can use. The methods getAppStatus()\nand getAppStatusString() act like getStatus() and getStatusString() , but rather\nthan returning information set by VoltDB, getAppStatus() and getAppStatusString()\nreturn information set in the stored procedure code itself.\nIn the stored procedure, you can use the methods setAppStatusCode() and setAppSta-\ntusString() to set the values returned to the calling application by the stored procedure. For\nexample:\n/* stored procedure code */\nfinal byte AppCodeWarm = 1;\nfinal byte AppCodeFuzzy = 2;\n . . .\nsetAppStatusCode (AppCodeFuzzy);\nsetAppStatusString (\"I'm not sure about that...\");\n . . .\n6.5.2. Handling Timeouts\nOne particular error that needs special handling is if a connection or a stored procedure call times out. By\ndefault, the client interface only waits a specified amount of time (two minutes) for a stored procedure to\ncomplete. If no response is received from the server before the timeout period expires, the client interface\n52Designing VoltDB Client Applications\nreturns control to your application, notifying it of the error. For synchronous procedure calls, the client\ninterface returns the error CONNECTION_TIMEOUT to the procedure call. For asynchronous calls, the\nclient interface invokes the callback including the error information in the clientResponse object.\nIt is important to note that CONNECTION_TIMEOUT does not necessarily mean the synchronous pro-\ncedure failed. In fact, it is very possible that the procedure may complete and return information after the\ntimeout error is reported. The timeout is provided to avoid locking up the client application when proce-\ndures are delayed or the connection to the cluster hangs for any reason.\nSimilarly, if no response of any kind is returned on a connection (even if no transactions are pending)\nwithin the specified timeout period, the client connection will timeout. 
When this happens, the connec-\ntion is closed, any open stored procedures on that connection are closed with a return status of CONNEC-\nTION_LOST, and then the client status listener callback method connectionLost() is invoked. Un-\nlike a procedure timeout, when the connection times out, the connection no longer exists, so your client ap-\nplication will receive no further notifications concerning pending procedures, whether they succeed or fail.\nCONNECTION_LOST does not necessarily mean a pending asynchronous procedure failed. It is possible\nthat the procedure completed but was unable to return its status due to a connection failure. The goal of\nthe connection timeout is to notify the client application of a lost connection in a timely manner, even if\nthere are no outstanding procedures using the connection.\nThere are several things you can do to address potential timeouts in your application:\n•Change the timeout period by calling either or both the methods setProcedureCallTimeout()\nand setConnectionResponseTimeout() on the ClientConfig object. The default timeout\nperiod is 2 minutes for both procedures and connections. You specify the timeout period in milliseconds,\nwhere a value of zero disables the timeout altogether. For example, the following client code resets the\nprocedure timeout to 90 seconds and the connection timeout period to 3 minutes, or 180 seconds:\nconfig = new ClientConfig(\"advent\",\"xyzzy\");\nconfig.setProcedureCallTimeout(90 * 1000);\nconfig.setConnectionResponseTimeout(180 * 1000);\nclient = ClientFactory.createClient(config);\n•Catch and respond to the timeout error as part of the response to a procedure call. For example, the\nfollowing code excerpt from a client callback procedure reports the error to the console and ends the\ncallback:\nstatic class MyCallback implements ProcedureCallback {\n @Override\n public void clientCallback(ClientResponse response) {\n \n if (response.getStatus() == ClientResponse.CONNECTION_TIMEOUT) {\n System.out.println(\"A procedure invocation has timed out.\");\n return;\n };\n if (response.getStatus() == ClientResponse.CONNECTION_LOST) {\n System.out.println(\"Connection lost before procedure response.\");\n return;\n };\n•Set a status listener to receive the results of any procedure invocations that complete after the client\ninterface times out. See the following Section 6.5.3, “Writing a Status Listener to Interpret Other Errors”\nfor an example of creating a status listener for delayed procedure responses.\n53Designing VoltDB Client Applications\n6.5.3. Writing a Status Listener to Interpret Other Errors\nCertain types of errors can occur that the ClientResponse class cannot notify you about immediately.\nIn these cases, an error happens and is caught by the client interface outside of the normal stored procedure\nexecution cycle. If you want your application to address these situations, you need to create a listener,\nwhich is a special type of asynchronous callback that the client interface will notify whenever such errors\noccur. The types of errors that a listener addresses include:\nLost Connection\nIf a connection to the database cluster is lost or times out and there are outstanding asynchronous\nrequests on that connection, the ClientResponse for those procedure calls will indicate that the\nconnection failed before a return status was received. This means that the procedures may or may not\nhave completed successfully. 
If no requests were outstanding, your application might not be notified\nof the failure under normal conditions, since there are no callbacks to identify the failure. Since the\nloss of a connection can impact the throughput or durability of your application, it is important to have\na mechanism for general notification of lost connections outside of the procedure callbacks.\nBackpressure\nIf backpressure causes the client interface to wait, the stored procedure is never queued and so your\napplication does not receive control until after the backpressure is removed. This can happen if the\nclient applications are queuing stored procedures faster than the database cluster can process them.\nThe result is that the execution queue on the server gets filled up and the client interface will not let\nyour application queue any more procedure calls. Two ways to handle this situation programmatically\nare to:\n•Let the client pause momentarily to let the queue subside. The asynchronous client interface does\nthis automatically for you.\n•Create multiple connections to the cluster to better distribute asynchronous calls across the database\nnodes.\nExceptions in a Procedure Callback\nAn error can occur in an asynchronous callback after the stored procedure completes. These exceptions\nare also trapped by the VoltDB client, but occur after the ClientResponse is returned to the\napplication.\nLate Procedure Responses\nProcedure invocations that time out in the client may later complete on the server and return results.\nSince the client application can no longer react to this response inline (for example, with asynchronous\nprocedure calls, the associated callback has already received a connection timeout error) the client\nmay want a way to process the returned results.\nFor the sake of example, the following status listener does little more than display a message on standard\noutput. However, in real world applications the listener would take appropriate actions based on the cir-\ncumstances.\n/*\n* Declare the status listener\n*/\nClientStatusListenerExt mylistener = new ClientStatusListenerExt () \n {\n @Override\n public void connectionLost (String hostname, int port, \n int connectionsLeft,\n54Designing VoltDB Client Applications\n DisconnectCause cause)\n {\n System.out.printf(\"A connection to the database has been lost.\"\n + \"There are %d connections remaining.\\n\", connectionsLeft);\n }\n @Override\n public void backpressure (boolean status)\n {\n System.out.println(\"Backpressure from the database \"\n + \"is causing a delay in processing requests.\");\n }\n @Override\n public void uncaughtException (ProcedureCallback callback,\n ClientResponse r, Throwable e)\n {\n System.out.println(\"An error has occurred in a callback \"\n + \"procedure. Check the following stack trace for details.\");\n e.printStackTrace();\n }\n @Override\n public void lateProcedureResponse (ClientResponse response,\n String hostname, int port)\n {\n System.out.printf(\"A procedure that timed out on host %s:%d\"\n + \" has now responded.\\n\", hostname, port);\n }\n };\n/*\n* Declare the client configuration, specifying\n* a username, a password, and the status listener\n*/\nClientConfig myconfig = new ClientConfig (\"username\", \n \"password\",\n mylistener);\n/*\n* Create the client using the specified configuration.\n*/\nClient myclient = ClientFactory. 
createClient (myconfig); \nBy performing the operations in the order as described here, you ensure that all connections to the VoltDB\ndatabase cluster use the same credentials for authentication and will notify the status listener of any error\nconditions outside of normal procedure execution.\nDeclare a ClientStatusListenerExt listener callback. Define the listener before you define\nthe VoltDB client or open a connection.\nThe ClientStatusListenerExt interface has four methods that you can implement, one for\neach type of error situation:\n•connectionLost()\n•backpressure()\n•uncaughtException()\n•lateProcedureResponse()\n55Designing VoltDB Client Applications\nDefine the client configuration ClientConfig object. After you declare your ClientStatus-\nListenerExt , you define a ClientConfig object to use for all connections, which includes\nthe username, password, and status listener. This configuration is then used to define the client next.\nCreate a client with the specified configuration.\n6.6. Compiling and Running Client Applications\nVoltDB client applications written in Java compile and run like other Java applications. (See Chapter 8,\nUsing VoltDB with Other Programming Languages for more on writing client applications using other lan-\nguages.) To compile, you must include the VoltDB libraries in the classpath so Java can resolve references\nto the VoltDB classes and methods. It is possible to do this manually by defining the environment variable\nCLASSPATH or by using the -classpath argument on the command line. If your client application\ndepends on other libraries, they need to be included in the classpath as well. You can also specify where to\ncreate the resulting class files using the -d flag to specify an output directory, as in the following example:\n$ javac -classpath \"./:/opt/voltdb/voltdb/*\" \\\n -d ./obj \\\n *.java\nThe preceding example assumes that the VoltDB software has been installed in the folder /opt/volt-\ndb. If you installed VoltDB in a different directory, you need to include your installation path in the -\nclasspath argument.\nIf you are using Apache Maven to manage your application development, the VoltDB Java client library\nis available from the central Maven repository. So rather than installing VoltDB locally, you can simply\ninclude it as a dependency in your Maven project object model, or pom.xml, like so:\n<dependency>\n <groupId>org.voltdb</groupId>\n <artifactId>voltdbclient</artifactId>\n <version>5.1</version>\n</dependency>\n6.6.1. Starting the Client Application\nBefore you start your client application, the VoltDB database must be running. When you start your client\napplication, you must ensure that the VoltDB library JAR file is in the classpath. For example:\n$ java -classpath \"./:/opt/voltdb/voltdb/*\" MyClientApp\nIf you develop your application using one of the sample applications as a template, the run.sh file\nmanages this dependency for you.\n6.6.2. Running Clients from Outside the Cluster\nIf you are running the database on a cluster and the client applications on separate machines, you\ndo not need to include all of the VoltDB software with your client application. The VoltDB distribu-\ntion comes with two separate libraries: voltdb-n.n.nn .jar and voltdbclient-n.n.nn .jar\n(where n.n.nn is the VoltDB version number). 
The first file is a complete library that is required for build-\ning and running a VoltDB database server.\nThe second file, voltdbclient-n.n.nn .jar, is a smaller library containing only those components\nneeded to run a client application. If you are distributing your client applications, you only need to distribute\n56Designing VoltDB Client Applications\nthe client classes and the VoltDB client library. You do not need to install all of the VoltDB software\ndistribution on the client nodes.\n57Chapter 7. Simplifying Applicaon\nDevelopment\nThe previous chapter ( Chapter 6, Designing VoltDB Client Applications ) explains how to develop your\nVoltDB database application using the full power and flexibility of the Java client interface. However,\nsome database tasks — such as inserting records into a table or retrieving a specific column value — do\nnot need all of the capabilities that the Java API provides. In other cases, there are automation techniques\nthat can reduce the amount of application code you need to write and maintain.\nNow that you know how the VoltDB programming interface works, VoltDB has features to simplify com-\nmon tasks and make your application development easier. Those features include:\n•Using Default Procedures\n•Shortcut for Defining Simple Stored Procedures\n•Verifying Expected Query Results\n•Scheduling Stored Procedures as Tasks\n•Directed Procedures: Distributing Transactions to Every Partition\nThe following sections describe each of these features separately.\n7.1. Using Default Procedures\nAlthough it is possible to define quite complex SQL queries, often the simplest are also the most common.\nInserting, selecting, updating, and deleting records based on a specific key value are the most basic opera-\ntions for a database. Another common practice is upsert, where if a row matching the primary key already\nexists, the record is updated — if not, a new record is inserted. To simplify these operations, VoltDB\ndefines these default stored procedures for tables.\nThe default stored procedures use a standard naming scheme, where the name of the procedure is composed\nof the name of the table (in all uppercase), a period, and the name of the query in lowercase. For example,\nthe Hello World tutorial ( doc/tutorials/helloworld ) contains a single table, HELLOWORLD,\nwith three columns and the partitioning column, DIALECT, as the primary key. As a result, five default\nstored procedures are included in addition to any user-defined procedures declared in the schema. The\nparameters to the procedures differ based on the procedure.\nVoltDB defines a default insert stored procedure when any table is defined:\nHELLOWORLD.insert The parameters are the table columns, in the same order as defined in the\nschema.\nVoltDB defines default update, upsert, and delete stored procedures if the table has a primary key:\nHELLOWORLD.update The parameters are the new column values, in the order defined by the schema,\nfollowed by the primary key column values. 
This means the primary key col-\numn values are specified twice: once as their corresponding new column val-\nues and once as the primary key value.\nHELLOWORLD.upsert The parameters are the table columns, in the same order as defined in the\nschema.\n58Simplifying Application Development\nHELLOWORLD.delete The parameters are the primary key column values, listed in the order they\nappear in the primary key definition.\nVoltDB defines a default select stored procedure if the table has a primary key and the table is partitioned:\nHELLOWORLD.select The parameters are the primary key column values, listed in the order they\nappear in the primary key definition.\nUse the sqlcmd command show procedures to list all the stored procedures available including the number\nand type of parameters required. Use @SystemCatalog with the PROCEDURECOLUMNS selector\nto show more details about the order and meaning of each procedure's parameters.\nThe following code example uses the default procedures for the HELLOWORLD table to insert, retrieve\n(select), update, and delete a new record with the key value \"American\":\nVoltTable[] results;\nclient.callProcedure(\"HELLOWORLD.insert\",\n \"American\",\"Howdy\",\"Earth\");\nresults = client.callProcedure(\"HELLOWORLD.select\",\n \"American\").getResults();\nclient.callProcedure(\"HELLOWORLD.update\",\n \"American\",\"Yo\",\"Biosphere\",\n \"American\");\nclient.callProcedure(\"HELLOWORLD.delete\",\n \"American\");\n7.2. Shortcut for Defining Simple Stored Proce-\ndures\nSometimes all you want is to execute a single SQL query and return the results to the calling application. In\nthese simple cases, writing the necessary Java code to create a stored procedure can be tedious, so VoltDB\nprovides a shortcut. For very simple stored procedures that execute a single SQL query and return the\nresults, you can define the entire stored procedure as part of the database schema.\nRecall from Section 5.3.2, “Declaring Stored Procedures in the Schema” , that normally you use the CRE-\nATE PROCEDURE statement to specify the class name of the Java procedure you coded, for example:\nCREATE PROCEDURE FROM CLASS MakeReservation;\nCREATE PROCEDURE FROM CLASS CancelReservation;\nHowever, to create procedures without writing any Java, you can simply insert a SQL query in the AS\nclause:\nCREATE PROCEDURE CountReservations AS\n SELECT COUNT(*) FROM RESERVATION ;\nVoltDB creates the procedure when you include the SQL query in the CREATE PROCEDURE AS state-\nment. Note that you must specify a unique class name for the procedure, which is unique among all stored\nprocedures, including both those declared in the schema and those created as Java classes. (You can use\nthe sqlcmd command show procedures to display a list of all stored procedures.)\nIt is also possible to pass arguments to the SQL query in simple stored procedures. If you use the ques-\ntion mark placeholder in the SQL, any additional arguments you pass in client applications through the\n59Simplifying Application Development\ncallProcedure() method are used to replace the placeholders, in their respective order. For example,\nthe following simple stored procedure expects to receive three additional parameters:\nCREATE PROCEDURE MyReservationsByTrip AS\n SELECT R.RESERVEID, F.FLIGHTID, F.DEPARTTIME\n FROM RESERVATION AS R, FLIGHT AS F\n WHERE R.CUSTOMERID = ? \n AND R.FLIGHTID = F.FLIGHTID\n AND F.ORIGIN=? AND F.DESTINATION=? ;\nYou can also specify whether the simple procedure is single-partitioned or not. 
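On the client side, the values for the placeholders in a simple procedure such as MyReservationsByTrip are passed to callProcedure() as ordinary arguments, in the same order the question marks appear in the query. A minimal sketch, with illustrative argument values:

long custid = 12345;
VoltTable[] results = client.callProcedure("MyReservationsByTrip",
        custid,   // replaces R.CUSTOMERID = ?
        "BOS",    // replaces F.ORIGIN = ?
        "LAX"     // replaces F.DESTINATION = ?
    ).getResults();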
By default, stored proce-\ndures are assumed to be multi-partitioned. But if your procedure should be single-partitioned, specify its\npartitioning in the PARTITION ON clause. In the following example, the stored procedure is partitioned\non the FLIGHTID column of the RESERVATION table using the first parameter as the partitioning key.\nCREATE PROCEDURE FetchReservations \n PARTITION ON \n TABLE Reservation COLUMN flightid\n AS\n SELECT * FROM RESERVATION WHERE FLIGHTID=?;\nFinally, if you want to execute multiple SQL statements within a simple procedure, you must enclose the\nSQL in a BEGIN-END clause. For example, the following CREATE PROCEDURE AS statement fetches\nseparate records from the CUSTOMER and ORDER tables:\nCREATE PROCEDURE OpenOrders\n AS BEGIN\n SELECT fullname FROM CUSTOMER WHERE CUSTOMERID=?;\n SELECT * FROM ORDER WHERE CUSTOMERID=?;\n END;\nSome important points to note concerning multi-statement simple procedures:\n•The END statement and all of the enclosed SQL statements, must be terminated with a semi-colon.\n•The procedure returns an array of VoltTables, one for each statement in the procedure.\n•Each placeholder represents one parameter to the stored procedure. Parameters cannot be reused. So in\nthe previous example, the customer ID would need to be entered twice as separate parameters to the\nstored procedure, one parameter for the first statement and one parameter for the second statement.\n7.3. Verifying Expected Query Results\nThe automated default and simple stored procedures reduce the coding needed to perform simple queries.\nHowever, another substantial chunk of stored procedure and client application code is often required to\nverify the correctness of the results returned by the queries. Did you get the right number of records? Does\nthe query return the correct value?\nRather than you having to write the code to validate the query results manually, VoltDB provides a way\nto perform several common validations as part of the query itself. The Java client interface includes an\nExpectation object that you can use to define the expected results of a query. Then, if the query does not\nmeet those expectations, the executing stored procedure automatically throws a VoltAbortException\nand rolls back.\nYou specify the expectation as the second parameter (after the SQL statement but before any arguments)\nwhen queuing the query. For example, when making a reservation in the Flight application, the procedure\n60Simplifying Application Development\nmust make sure there are seats available. To do this, the procedure must determine how many seats the\nflight has. This query can also be used to verify that the flight itself exists, because there should be one\nand only one record for every flight ID.\nThe following code fragment uses the EXPECT_ONE_ROW expectation to both fetch the number of seats\nand verify that the flight itself exists and is unique.\nimport org.voltdb.Expectation;\n .\n .\n .\npublic final SQLStmt GetSeats = new SQLStmt(\n \"SELECT numberofseats FROM Flight WHERE flightid=?;\");\n \nvoltQueueSQL(GetSeats, EXPECT_ONE_ROW , flightid);\nVoltTable[] recordset = voltExecuteSQL();\nLong numofseats = recordset[0].asScalarLong();\nBy using the expectation, the stored procedure code does not need to do additional error checking to verify\nthat there is one and only one row in the result set. 
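For comparison, without the expectation the procedure would need code along these lines (a sketch only) to detect a missing or duplicate flight and force the rollback itself:

voltQueueSQL(GetSeats, flightid);
VoltTable[] recordset = voltExecuteSQL();
if (recordset[0].getRowCount() != 1) {
    // No matching flight, or more than one: abort and roll back the transaction
    throw new VoltAbortException("Expected exactly one flight record");
}
long numofseats = recordset[0].asScalarLong();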
The following table describes all of the expectations\nthat are available to use in stored procedures.\nExpectation Description\nEXPECT_EMPTY The query must return no rows.\nEXPECT_ONE_ROW The query must return one and only one row.\nEXPECT_ZERO_OR_ONE_ROW The query must return no more than one row.\nEXPECT_NON_EMPTY The query must return at least one row.\nEXPECT_SCALAR The query must return a single value (that is, one row with one\ncolumn).\nEXPECT_SCALAR_LONG The query must return a single value with a datatype of Long.\nEXPECT_SCALAR_MATCH( long ) The query must return a single value equal to the specified\nLong value.\n7.4. Scheduling Stored Procedures as Tasks\nThere are often repetitive tasks you want to perform on the database that can be scheduled at regular\nintervals. These tasks may include general cleanup, pruning, or periodic data validation. Rather than write\na separate application and scheduler to do this, VoltDB lets you automate tasks at intervals ranging from\nmilliseconds to days.\nA task is a stored procedure that you schedule using the CREATE TASK statement. The statement spec-\nifies what procedure to run and when to run it and what arguments to use. In the simplest case, you can\nschedule a multi-partition procedure at specific times of day (using cron notation), at a regular interval\n(using EVERY), or with a regular pause between iterations (using DELAY). For example, The following\nstatements define a procedure called OrphanedRecords that deletes reservations from a specific airline\nwith no associated flight number and a task called RemoveOrphans that uses that procedure to delete or-\nphaned records for FlyByNight airlines every two hours.\nCREATE PROCEDURE OrphanedRecords\n AS DELETE FROM reservations \n WHERE aireline=? AND flight_id IS NULL;\n61Simplifying Application Development\nCREATE TASK RemoveOrphans\n ON SCHEDULE EVERY 2 HOURS\n PROCEDURE OrphanedRecords WITH ('FlyByNight'); \nSince the task definition is part of the schema, VoltDB automates starting and stopping the tasks with the\ndatabase. Other clauses to the CREATE TASK statement let you further refine how the task is run including\nwhat user account runs it and what to do in case of errors. There are also corresponding ALTER TASK and\nDROP TASK statements for managing your task definitions. See the description of the CREATE TASK\nstatement for details.\n7.5. Directed Procedures: Distributing Transac-\ntions to Every Partition\nAs useful as scheduling regular stored procedures is in simplifying application development, it can be\ndisruptive to ongoing workflow if multi-partition procedures take too long or run too frequently. It would\nbe nice to be able to schedule some partitioned activities as well to do piecemeal work on each partition\nwithout tying up all of the partitions at once. This is exactly what directed procedures are designed to do.\nA directed procedure is a special type of stored procedure, declared using the DIRECTED clause instead\nof PARTITION ON. You write a directed procedure the same way you write a regular stored procedure:\neither as a simple procedure of one or more SQL statements or as a Java class extending voltProcedure,\nusing the voltQueueSQL method to queue SQL statements. Since it is transactional, the procedure must\nalso be deterministic.\nHowever, if you declare the procedure as DIRECTED, when you invoke it a separate instance of the\nprocedure is queued on every partition in the database. 
Each instance is its own transaction and acts like\na partitioned procedure. So the separate transactions do not block the other partitions. However, because\nthey are separate, there is no coordination between the transactions and no guarantee that they are executed\nat the same time.\nThis makes directed procedures particularly useful for non-critical procedures that need to access data\nacross the database but do not need to be coordinated as a single, atomic transaction. Because of the\nspecial nature of directed procedures, you cannot invoke them the way you would normal partitioned or\nmulti-partitioned procedures. Instead, the primary way to invoke them is as a scheduled task.\nTo schedule a directed procedure as a task, you use the same syntax for the CREATE TASK statement\nas for a multi-partitioned procedure, except you add the RUN ON PARTITIONS clause. The RUN ON\nPARTITIONS clause specifies that the task is scheduled separately for each and every partition. For ex-\nample, if you want to run the RemoveOrphans task defined in the previous section as a directed procedure\nso it will not block the ongoing database workload, you would add the DIRECTED clause to the CREATE\nPROCEDURE statement and the RUN ON PARTITIONS clause to the CREATE TASK statement, like so:\nCREATE PROCEDURE OrphanedRecords DIRECTED\n AS DELETE FROM reservations \n WHERE airline=? AND flight_id IS NULL;\nCREATE TASK RemoveOrphans\n ON SCHEDULE EVERY 2 HOURS\n PROCEDURE OrphanedRecords WITH ('FlyByNight')\n RUN ON PARTITIONS ; \nAlthough scheduled tasks are the easiest way to invoke directed procedures, you can also invoke them\ndirectly from your Java applications. You cannot call them with the callProcedure method, but you\ncan using the callAllPartitionProcedure method where the results from all of the partitions are\n62Simplifying Application Development\nreturned as an array of VoltTables, one per partition. See the descriptions of the CREATE PROCEDURE\nAS, CREATE PROCEDURE FROM CLASS , and CREATE TASK statements for more information about\nusing directed procedures.\n63Chapter 8. Using VoltDB with Other\nProgramming Languages\nVoltDB stored procedures are written in Java and the primary client interface also uses Java. However,\nthat is not the only programming language you can use with VoltDB.\nIt is possible to have client interfaces written in almost any language. These client interfaces allow pro-\ngrams written in different programming languages to interact with a VoltDB database using native func-\ntions of the language. The client interface then takes responsibility for translating those requests into a\nstandard communication protocol with the database server as described in the VoltDB wire protocol.\nSome client interfaces are developed and packaged as part of the standard VoltDB distribution kit while\nothers are compiled and distributed as separate client kits. As of this writing, the following client interfaces\nare available for VoltDB:\n•C#\n•C++\n•Erlang\n•Go\n•Java (packaged with VoltDB)\n•JDBC (packaged with VoltDB)\n•JSON (packaged with VoltDB)\n•Node.js\n•PHP\n•Python (packaged with VoltDB)\nThe JSON client interface may be of particular interest if your favorite programming language is not listed\nabove. 
JSON is a data format, rather than a programming interface, and the JSON interface provides a\nway for applications written in any programming language to interact with VoltDB via JSON messages\nsent across a standard HTTP protocol.\nThe following sections explain how to use the C++, JSON, and JDBC client interfaces.\n8.1. C++ Client Interface\nVoltDB provides a client interface for programs written in C++. The C++ client interface is available pre-\ncompiled as a separate kit from the VoltDB web site, or in source format from the VoltDB github repository\n(http://github.com/VoltDB/voltdb-client-cpp ). The following sections describe how to write VoltDB client\napplications in C++.\n8.1.1. Writing VoltDB Client Applications in C++\nWhen using the VoltDB client library, as with any C++ library, it is important to include all of the neces-\nsary definitions at the beginning of your source code. For VoltDB client applications, this includes defin-\n64Using VoltDB with Oth-\ner Programming Languages\nitions for the VoltDB methods, structures, and datatypes as well as the libraries that VoltDB depends on\n(specifically, boost shared pointers). For example:\n#define __STDC_CONSTANT_MACROS\n#define __STDC_LIMIT_MACROS\n#include <vector>\n#include <boost/shared_ptr.hpp>\n#include \"Client.h\"\n#include \"Table.h\"\n#include \"TableIterator.h\"\n#include \"Row.hpp\"\n#include \"WireType.h\"\n#include \"Parameter.hpp\"\n#include \"ParameterSet.hpp\"\n#include \"ProcedureCallback.hpp\"\nOnce you have included all of the necessary declarations, there are three steps to using the interface to\ninteract with VoltDB:\n1.Create and open a client connection\n2.Invoke stored procedures\n3.Interpret the results\nThe following sections explain how to perform each of these functions.\n8.1.2. Creating a Connection to the Database Cluster\nBefore you can call VoltDB stored procedures, you must create a client instance and connect to the database\ncluster. For example:\nvoltdb::ClientConfig config(\"myusername\", \"mypassword\");\nvoltdb::Client client = voltdb::Client::create(config);\nclient.createConnection(\"myserver\");\nAs with the Java client interface, you can create connections to multiple nodes in the cluster by making\nmultiple calls to the createConnection method specifying a different IP address for each connection.\n8.1.3. Invoking Stored Procedures\nThe C++ client library provides both a synchronous and asynchronous interface. To make a synchronous\nstored procedure call, you must declare objects for the parameter types, the procedure call itself, the para-\nmeters, and the response. Note that the datatypes, the procedure, and the parameters need to be declared\nin a specific order. For example:\n/* Declare the number and type of parameters */\nstd::vector<voltdb::Parameter> parameterTypes(3);\nparameterTypes[0] = voltdb::Parameter(voltdb::WIRE_TYPE_BIGINT);\nparameterTypes[1] = voltdb::Parameter(voltdb::WIRE_TYPE_STRING);\nparameterTypes[2] = voltdb::Parameter(voltdb::WIRE_TYPE_STRING);\n/* Declare the procedure and parameter structures */\nvoltdb::Procedure procedure(\"AddCustomer\", parameterTypes);\nvoltdb::ParameterSet* params = procedure.params();\n65Using VoltDB with Oth-\ner Programming Languages\n/* Declare a client response to receive the status and return values */\nvoltdb::InvocationResponse response;\nOnce you instantiate these objects, you can reuse them for multiple calls to the stored procedure, inserting\ndifferent values into params each time. 
For example:\nparams->addInt64(13505).addString(\"William\").addString(\"Smith\");\nresponse = client.invoke(procedure);\nparams->addInt64(13506).addString(\"Mary\").addString(\"Williams\");\nresponse = client.invoke(procedure);\nparams->addInt64(13507).addString(\"Bill\").addString(\"Smythe\");\nresponse = client.invoke(procedure);\n8.1.4. Invoking Stored Procedures Asynchronously\nTo make asynchronous procedure calls, you must also declare a callback structure and method that will\nbe used when the procedure call completes.\nclass AsyncCallback : public voltdb::ProcedureCallback\n{\npublic:\n bool callback\n (voltdb::InvocationResponse response)\n throw (voltdb::Exception)\n {\n /*\n * The work of your callback goes here...\n */\n }\n};\nThen, when you go to make the actual stored procedure invocation, you declare an callback instance and\ninvoke the procedure, using both the procedure structure and the callback instance:\nboost::shared_ptr<AsyncCallback> callback(new AsyncCallback());\nclient.invoke(procedure, callback);\nNote that the C++ interface is single-threaded. The interface is not thread-safe and you should not use\ninstances of the client, client response, or other client interface structures from within multiple concurrent\nthreads. Also, the application must release control occasionally to give the client interface an opportunity\nto issue network requests and retrieve responses. You can do this by calling either the run() or runOnce()\nmethods.\nThe run() method waits for and processes network requests, responses, and callbacks until told not to.\n(That is, until a callback returns a value of false.)\nThe runOnce() method processes any outstanding work and then returns control to the client application.\nIn most applications, you will want to create a loop that makes asynchronous requests and then calls\nrunOnce(). This allows the application to queue stored procedure requests as quickly as possible while\nalso processing any incoming responses in a timely manner.\nAnother important difference when making stored procedure calls asynchronously is that you must make\nsure all of the procedure calls complete before the client connection is closed. The client objects destructor\nautomatically closes the connection when your application leaves the context or scope within which the\n66Using VoltDB with Oth-\ner Programming Languages\nclient is defined. Therefore, to make sure all asynchronous calls have completed, be sure to call the drain\nmethod until it returns true before leaving your client context:\nwhile (!client.drain()) {}\n8.1.5. Interpreting the Results\nBoth the synchronous and asynchronous invocations return a client response object that contains both the\nstatus of the call and the return values. You can use the status information to report problems encountered\nwhile running the stored procedure. For example:\nif (response.failure())\n{\n std::cout << \"Stored procedure failed. \" << response.toString();\n exit(-1);\n}\nIf the stored procedure is successful, you can use the client response to retrieve the results. The results\nare returned as an array of VoltTable structures. Within each VoltTable object you can use an iterator to\nwalk through the rows. There are also methods for retrieving each datatype from the row. 
For example,\nthe following example displays the results of a single VoltTable containing two strings in each row:\n/* Retrieve the results and an iterator for the first volttable */\nstd::vector<voltdb::Table> results = response.results();\nvoltdb::TableIterator iterator = results[0].iterator();\n/* Iterate through the rows */\nwhile (iterator.hasNext())\n{\n voltdb::Row row = iterator.next();\n std::cout << row.getString(0) << \", \" << row.getString(1) << std::endl;\n}\n8.2. JSON HTTP Interface\nJSON (JavaScript Object Notation) is not a programming language; it is a data format. The JSON \"inter-\nface\" to VoltDB is actually a web interface that the VoltDB database server makes available for processing\nrequests and returning data in JSON format.\nThe JSON interface lets you invoke VoltDB stored procedures and receive their results through HTTP\nrequests. To invoke a stored procedure, you pass VoltDB the procedure name and parameters as a querys-\ntring to the HTTP request, using either the GET or POST method.\nAlthough many programming languages provide methods to simplify the encoding and decoding of JSON\nstrings, you still need to understand the data structures that are created. So if you are not familiar with\nJSON encoding, you may want to read more about it at http://www.json.org .\n8.2.1. How the JSON Interface Works\nWhen a VoltDB database starts, it opens port 8080 on each server as a simple web server. You have\ncomplete control over this feature through the configuration file and the voltdb start command, including:\n•Disabling just the JSON interface, or the HTTP port entirely using the <httpd> element in the con-\nfiguration file.\n67Using VoltDB with Oth-\ner Programming Languages\n•Enabling TLS encryption on the port using the <ssl> element.\n•Changing the port number using the --http flag on the voltdb start command.\nSee the section on the \" Web Interface Port \" in the VoltDB Administrator's Guide for more information\non configuring the HTTP port.\nThis section assumes the database is using the default httpd configuration. In which case, any HTTP re-\nquests sent to the location /api/2.0/ on that port are interpreted as JSON requests to run a stored procedure.\nThe structure of the request is:\nURL http://<server>:8080/api/2.0/\nArguments Procedure=<procedure-name>\nParameters=<procedure-parameters>\nUser=<username for authentication>\nPassword=<password for authentication>\nHashedpassword=<Hashed password for authentication>\nadmin=<true|false>\njsonp=<function-name>\nThe arguments can be passed either using the GET or the POST method. For example, the following URL\nuses the GET method (where the arguments are appended to the URL) to execute the system procedure\n@SystemInformation on the VoltDB database running on node voltsvr.mycompany.com:\nhttp://voltsvr.mycompany.com:8080/api/2.0/?Procedure=@SystemInformation\nNote that only the Procedure argument is required. You can authenticate using the User and Pass-\nword (or Hashedpassword ) arguments if security is enabled for the database. Use Password to send\nthe password as plain text or Hashedpassword to send the password as an encoded string. (The hashed\npassword must be either a 40-byte hex-encoding of the 20-byte SHA-1 hash or a 64-byte hex-encoding\nof the 32-byte SHA-256 hash.)1\nYou can also include the parameters on the request. However, it is important to note that the parameters —\nand the response returned by the stored procedure — are JSON encoded. 
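For instance, a minimal sketch of the encoding and request steps in Python 3 (using only the standard library; the Select procedure and its "French" parameter come from the Hello World example discussed later in this section, and the URL follows the request structure shown above) might look like this:

import json
import urllib.parse
import urllib.request

# JSON-encode the parameter list; even a single parameter is passed as an array.
params = json.dumps(["French"])

# Build the querystring and submit the request using the GET method.
query = urllib.parse.urlencode({"Procedure": "Select", "Parameters": params})
url = "http://localhost:8080/api/2.0/?" + query

with urllib.request.urlopen(url) as response:
    result = json.loads(response.read().decode("utf-8"))

print(result["status"], result["results"])

Any language with an HTTP client and a JSON library can follow the same pattern.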
The parameters are an array (even\nif there is only one element to that array) and therefore must be enclosed in square brackets. Also, although\nthere is an upper limit of 2 megabytes for the entire length of the parameter string, large parameter sets\nmust be sent using POST to avoid stricter limitations on allowable URL lengths.\nThe admin argument specifies whether the request is submitted on the standard client port (the default)\nor the admin port (when you specify admin=true ). When the database is in admin mode, the client\nport is read-only; so you must submit write requests with admin=true or else the request is rejected\nby the server.\nThe jsonp argument is provided as a convenience for browser-based applications (such as Javascript)\nwhere cross-domain browsing is disabled. When you include the jsonp argument, the entire response is\nwrapped as a function call using the function name you specify. Using this technique, the response is a\ncomplete and valid Javascript statement and can be executed to create the appropriate language-specific\nobject. For example, calling the @Statistics system procedure in Javascript using the jQuery library looks\nlike this:\n$.getJSON('http://myserver:8080/api/1.0/?Procedure=@Statistics' +\n '&Parameters=[\"MANAGEMENT\",0]&jsonp=?',\n {},MyCallBack);\n1Hashing the password stops the text of your password from being detectable from network traffic. However, it does not make the database access\nany more secure. To secure the transmission of credentials and data between client applications and VoltDB, enable TLS encryption for the HTTP\nport using the configuration file.\n68Using VoltDB with Oth-\ner Programming Languages\nPerhaps the best way to understand the JSON interface is to see it in action. If you build and start the Hello\nWorld example application that is provided in the VoltDB distribution kit (including the client that loads\ndata into the database), you can then open a web browser and connect to the local system through port\n8080, to retrieve the French translation of \"Hello World\". For example:\nhttp://localhost:8080/api/1.0/?Procedure=Select&Parameters=[\"French\"]\nThe query returns the following results:\n{\"status\":1,\"appstatus\":-128,\"statusstring\":null,\"appstatusstring\":null,\n\"results\":{\"0\":[{ \"HELLO\":\"Bonjour\",\"WORLD\":\"Monde\"}]}}\nAs you can see, the JSON-encoded results are not particularly easy to read. But they do provide a simple,\ngeneric interface accessible from almost any programming language, most of which provide methods for\nencoding and decoding JSON strings and interpreting their results.\n8.2.2. Using the JSON Interface from Client Applications\nThe general process for using the JSON interface from within a program is:\n1.Encode the parameters for the stored procedure as a JSON-encoded string\n2.Instantiate and execute an HTTP request, passing the name of the procedure and the parameters as\narguments using either GET or POST.\n3.Decode the resulting JSON string into a language-specific data structure and interpret the results.\nThe following are examples of invoking the Hello World Insert stored procedure from several different\nlanguages. 
In each case, the three arguments (the name of the language and the words for \"Hello\" and\n\"World\") are encoded as a JSON string.\nPHP\n<?php\n// Construct the procedure name, parameter list, and URL.\n \n $voltdbserver = \"http://myserver:8080/api/2.0/\";\n $proc = \"Insert\";\n $a = array(\"Croatian\",\"Pozdrav\",\"Svijet\");\n $params = json_encode($a);\n $params = urlencode($params);\n $querystring = \"Procedure=$proc&Parameters=$params\";\n// create a new cURL resource and set options\n $ch = curl_init();\n curl_setopt($ch, CURLOPT_URL, $voltdbserver);\n curl_setopt($ch, CURLOPT_HEADER, 0);\n curl_setopt($ch, CURLOPT_FAILONERROR, 1);\n curl_setopt($ch, CURLOPT_POST, 1);\n curl_setopt($ch, CURLOPT_POSTFIELDS, $querystring);\n curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);\n// Execute the request\n $resultstring = curl_exec($ch);\n?>\n69Using VoltDB with Oth-\ner Programming Languages\nPython\nimport urllib\nimport urllib2\nimport json\n# Construct the procedure name, parameter list, and URL.\nurl = 'http://myserver:8080/api/2.0/'\nvoltparams = json.dumps([\"Croatian\",\"Pozdrav\",\"Svijet\"])\nhttpparams = urllib.urlencode({\n 'Procedure': 'Insert',\n 'Parameters' : voltparams\n})\nprint httpparams\n# Execute the request\ndata = urllib2.urlopen(url, httpparams).read()\n# Decode the results\nresult = json.loads(data)\nPerl\nuse LWP::Simple;\nmy $server = 'http://myserver:8080/api/2.0/';\n# Insert \"Hello World\" in Croatian\nmy $proc = 'Insert';\nmy $params = '[\"Croatian\",\"Pozdrav\",\"Svijet\"]';\nmy $url = $server . \"?Procedure=$proc&Parameters=$params\";\nmy $content = get $url;\ndie \"Couldn't get $url\" unless defined $content;\nC#\nusing System;\nusing System.Text;\nusing System.Net;\nusing System.IO;\nnamespace hellovolt\n{\n class Program\n {\n static void Main(string[] args)\n {\n string VoltDBServer = \"http://myserver:8080/api/2.0/\";\n string VoltDBProc = \"Insert\";\n string VoltDBParams = \"[\\\"Croatian\\\",\\\"Pozdrav\\\",\\\"Svijet\\\"]\";\n string Url = VoltDBServer + \"?Procedure=\" + VoltDBProc \n + \"&Parameters=\" + VoltDBParams;\n \n string result = null;\n70Using VoltDB with Oth-\ner Programming Languages\n WebResponse response = null;\n StreamReader reader = null;\n try\n {\n HttpWebRequest request = (HttpWebRequest)WebRequest.Create(Url);\n request.Method = \"GET\";\n response = request.GetResponse();\n reader = new StreamReader(response.GetResponseStream(),Encoding.UTF8 );\n result = reader.ReadToEnd();\n }\n catch (Exception ex)\n { // handle error\n Console.WriteLine( ex.Message );\n }\n finally\n { \n if (reader != null)reader.Close();\n if (response != null) response.Close();\n }\n }\n }\n}\n8.2.3. How Parameters Are Interpreted\nWhen you pass arguments to the stored procedure through the JSON interface, VoltDB does its best to\nmap the data to the datatype required by the stored procedure. This is important to make sure partitioning\nvalues are interpreted correctly.\nFor integer values, the JSON interface maps the parameter to the smallest possible integer type capable of\nholding the value. (For example, BYTE for values less than 128). Any values containing a decimal point\nare interpreted as DOUBLE.\nString values (those that are quoted) are handled in several different ways. 
If the stored procedure is ex-\npecting a BIGDECIMAL, the JSON interface will try to interpret the quoted string as a decimal value.\nIf the stored procedure is expecting a TIMESTAMP, the JSON interface will try to interpret the quoted\nstring as a JDBC-encoded timestamp value. (You can alternately pass the argument as an integer value\nrepresenting the number of microseconds from the epoch.) Otherwise, quoted strings are interpreted as\na string datatype.\nTable 8.1, “Datatypes in the JSON Interface” summarizes how to pass different datatypes in the JSON\ninterface.\nTable 8.1. Datatypes in the JSON Interface\nDatatype How to Pass Example\nIntegers (Byte, Short, Integer,\nLong)An integer value 12345\nDOUBLE A value with a decimal point 123.45\n71Using VoltDB with Oth-\ner Programming Languages\nDatatype How to Pass Example\nBIGDECIMAL A quoted string containing a value\nwith a decimal point\"123.45\"\nTIMESTAMP Either an integer value or a quoted\nstring containing a JDBC-encod-\ned date and time12345\n\"2010-07-01 12:30:21\"\nString A quoted string \"I am a string\"\n8.2.4. Interpreting the JSON Results\nMaking the request and decoding the result string are only the first steps. Once the request is completed,\nyour application needs to interpret the results.\nWhen you decode a JSON string, it is converted into a language-specific structure within your application,\ncomposed of objects and arrays. If your request is successful, VoltDB returns a JSON-encoded string that\nrepresents the same ClientResponse object returned by calls to the callProcedure method in the Java client\ninterface. Figure 8.1, “The Structure of the VoltDB JSON Response” shows the structure of the object\nreturned by the JSON interface.\nFigure 8.1. The Structure of the VoltDB JSON Response\n{ status (integer)\n appstatus (integer)\n statusstring (string)\n appstatusstring (string)\n results (list)\n { result-index (array)\n [ \n { column-name (any type) ,...\n }\n ]\n }\n}\nThe key components of the JSON response are the following:\nstatus Indicates the success or failure of the stored procedure. If status is false, statusstring con-\ntains the text of the status message..\nappstatus Returns additional information, provided by the application developer, about the success\nor failure of the stored procedure. The values of appstatus and appstatusstring can be\nset programmatically in the stored procedure. (See Section 6.5.1, “Interpreting Execution\nErrors” for details.)\nresults A list of objects representing the VoltTables returned by the stored procedure. Each ele-\nment of the list is one set of results, identified by an index value (\"0\", \"1\", \"2\" and so on).\nWithin each set is an array of rows. And within each row is a list of columns represented\nby the column name and value. If the stored procedure does not return any results (i.e. is\nvoid or null), then the results object will be null.\nIt is possible to create a generic procedure for testing and evaluating the result values from any VoltDB\nstored procedure. However, in most cases it is far more expedient to evaluate the values that you know\nthe individual procedures return.\n72Using VoltDB with Oth-\ner Programming Languages\nFor example, again using the Hello World example that is provided with the VoltDB software, it is possible\nto use the JSON interface to call the Select stored procedure and return the values for \"Hello\" and \"World\"\nin a specific language. 
Rather than evaluate the entire results array (including the name and type fields),\nwe know we are only receiving one result object with two column values. So we can simplify the code,\nas in the following python example:\nimport urllib\nimport urllib2\nimport json\nimport pprint\n# Construct the procedure name, parameter list, and URL.\nurl = 'http://localhost:8080/api/2.0/'\nvoltparams = json.dumps([\"French\"])\nhttpparams = urllib.urlencode({\n 'Procedure': 'Select',\n 'Parameters' : voltparams\n})\n# Execute the request\ndata = urllib2.urlopen(url, httpparams).read()\n# Decode the results\nresults = json.loads(data)[u'results']\nvolttable = results[u'0']\nrow = volttable[0]\n# Get the data by column name and display them\nhello = row[u'HELLO']\nworld = row[u'WORLD']\nprint hello, world\n8.2.5. Error Handling using the JSON Interface\nThere are a number of different reasons why a stored procedure request using the JSON interface may fail:\nthe VoltDB server may be unreachable, the database may not be started yet, the stored procedure name\nmay be misspelled, the stored procedure itself may fail... When using the standard Java client interface,\nthese different situations are handled at different times. (For example, server and database access issues\nare addressed when instantiating the client, whereas stored procedure errors can be handled when the\nprocedures themselves are called.) The JSON interface simplifies the programming by rolling all of these\nactivities into a single call. But you must be more organized in how you handle errors as a consequence.\nWhen using the JSON interface, you should check for errors in the following order:\n1.First check to see that the HTTP request was submitted without errors. How this is done depends on what\nlanguage-specific methods you use for submitting the request. In most cases, you can use the appropriate\nprogramming language error handlers (such as try-catch) to catch and interpret HTTP request errors.\n2.Next check to see if VoltDB successfully invoked the stored procedure. You can do this by verifying\nthat the HTTP request returned a valid JSON-encoded string and that its status is set to true.\n3.If the VoltDB server successfully invoked the stored procedure, then check to see if the stored procedure\nitself succeeded, by checking to see if appstatus is true.\n73Using VoltDB with Oth-\ner Programming Languages\n4.Finally, check to see that the results are what you expect. (For example, that the data array is non-empty\nand contains the values you need.)\n8.3. JDBC Interface\nJDBC (Java Database Connectivity) is a programming interface for Java programmers that abstracts data-\nbase specifics from the methods used to access the data. JDBC provides standard methods and classes\nfor accessing a relational database and vendors then provide JDBC drivers to implement the abstracted\nmethods on their specific software.\nVoltDB provides a JDBC driver for those who would prefer to use JDBC as the data access interface. The\nVoltDB JDBC driver supports ad hoc queries, prepared statements, calling stored procedures, and methods\nfor examining the metadata that describes the database schema.\n8.3.1. Using JDBC to Connect to a VoltDB Database\nThe VoltDB driver is a standard class within the VoltDB software jar. 
To load the driver you use the Class.forName method to load the class org.voltdb.jdbc.Driver.
Once the driver is loaded, you create a connection to a running VoltDB database server by constructing a JDBC url using the "jdbc:" protocol, followed by "voltdb://", the server name, a colon, and the port number. In other words, the complete JDBC connection url is "jdbc:voltdb://{server}:{port}". To connect to multiple nodes in the cluster, use a comma separated list of server names and port numbers after the "jdbc:voltdb://" prefix.
For example, the following code loads the VoltDB JDBC driver and connects to the servers svr1 and svr2 using the default client port:
Class.forName("org.voltdb.jdbc.Driver");
Connection c = DriverManager.getConnection(
 "jdbc:voltdb://svr1:21212,svr2:21212");
If, after the connection is made, the connection to one or more of the servers is lost due to a network issue or server failure, the VoltDB JDBC client does not automatically reconnect the broken connection by default. However, you can have the JDBC driver reconnect lost connections by adding the autoreconnect argument to the connection string. For example:
Class.forName("org.voltdb.jdbc.Driver");
Connection c = DriverManager.getConnection(
 "jdbc:voltdb://svr1:21212,svr2:21212?autoreconnect=true");
When autoreconnect is enabled and a server goes offline, the JDBC driver periodically attempts to reconnect to the missing server until it comes back online and the connection is reestablished.
If security is enabled for the database, you must also provide a username and password. Set these as properties using the setProperty method before creating the connection and then pass the properties as a second argument to getConnection. For example, the following code uses the username/password pair of "Hemingway" and "KeyWest" to authenticate to the VoltDB database:
Class.forName("org.voltdb.jdbc.Driver");
Properties props = new Properties();
props.setProperty("user", "Hemingway");
props.setProperty("password", "KeyWest");
Connection c = DriverManager.getConnection(
 "jdbc:voltdb://svr1:21212,svr2:21212", props);
8.3.2. Using JDBC to Query a VoltDB Database
Once the connection is made, you use the standard JDBC classes and methods to access the database. (See the JDBC documentation at http://download.oracle.com/javase/8/docs/technotes/guides/jdbc for details.) Note, however, when running the JDBC application, you must make sure both the VoltDB software jar and the Guava library are in the Java classpath. Guava is a third party library that is shipped as part of the VoltDB kit in the /lib directory. Unless you include both components in the classpath, your application will not be able to find and load the necessary driver class.
The following is a complete example that uses JDBC to access the Hello World tutorial that comes with the VoltDB software in the subdirectory /doc/tutorials/helloworld. The JDBC demo program executes both an ad hoc query and a call to the VoltDB stored procedure, Select.
import java.sql.*;
import java.io.*;
public class JdbcDemo {
 public static void main(String[] args) {
 
 String driver = "org.voltdb.jdbc.Driver";
 String url = "jdbc:voltdb://localhost:21212";
 String sql = "SELECT dialect FROM helloworld";
 
 try {
 // Load driver.
Create connection.\n Class.forName(driver);\n Connection conn = DriverManager.getConnection(url);\n \n // create a statement\n Statement query = conn.createStatement();\n ResultSet results = query.executeQuery(sql);\n while (results.next()) {\n System.out.println(\"Language is \" + results.getString(1));\n }\n \n // call a stored procedure\n CallableStatement proc = conn.prepareCall(\"{call Select(?)}\");\n proc.setString(1, \"French\");\n results = proc.executeQuery();\n while (results.next()) {\n System.out.printf(\"%s, %s!\\n\", results.getString(1), \n results.getString(2));\n }\n \n //Close statements, connections, etc.\n query.close(); \n proc.close();\n results.close();\n conn.close();\n } catch (Exception e) {\n75Using VoltDB with Oth-\ner Programming Languages\n e.printStackTrace();\n }\n }\n}\n76Chapter 9. Using VoltDB in a Cluster\nIt is possible to run VoltDB on a single server and still get all the advantages of parallelism because VoltDB\ncreates multiple partitions on each server. However, there are practical limits to how much memory or\nprocessing power any one server can sustain.\nOne of the key advantages of VoltDB is its ease of expansion. You can increase both capacity and pro-\ncessing (i.e. the total number of partitions) simply by adding servers to the cluster to achieve almost linear\nscalability. Using VoltDB in a cluster also gives you the ability to increase the availability of the database\n— protecting it against possible server failures or network glitches.\nThis chapter explains how to create a cluster of VoltDB servers running a single database. It also explains\nhow to expand the cluster when additional capacity or processing power is needed. The following chapters\nexplain how to increase the availability of your database through the use of K-safety and database repli-\ncation, as well as how to enable security to limit access to the data.\n9.1. Starting a Database Cluster\nAs described in Chapter 3, Starting the Database , starting a VoltDB cluster is similar to starting VoltDB on\na single server — you use the same commands. To start a single server database, you use the voltdb start\ncommand by itself. To customize database features, you specify a configuration file when you initialize\nthe root directory with voltdb init .\nTo start a cluster, you also use the voltdb start command. In addition, you must:\n•Specify the number of nodes in the cluster using the --count argument.\n•Choose one or more nodes as the potential lead or \"host\" node and specify those nodes using the --host\nargument on the start command\n•Issue the same voltdb start command on all nodes of the cluster\nFor example, if you are creating a new five node cluster and choose nodes server2 and server3 as the hosts,\nyou would issue a command like the following on all five nodes:\n$ voltdb start --host=server2,server3 --count=5\nTo restart a cluster using command logs or automatic snapshots, you repeat the same command. Alternate-\nly, you can specify all nodes in the cluster in the --host argument and skip the server count:\n$ voltdb start --host=server1,server2,server3,server4,server5\nNo matter which approach you choose, you must specify the same list of potential hosts on all nodes of\nthe cluster. Once the database cluster is running the leader's special role is complete and all nodes become\npeers.\n9.2. 
Updating the Cluster Configuration
Before you start the cluster, you choose what database features to use by specifying a configuration file when you initialize the database root directory on each node using the voltdb init command. You must specify the same configuration file on every node of the cluster. For example:
$ voltdb init --config=deployment.xml
If you choose to change database options, many of the features can be adjusted while the database is running by either:
•Using the web-based VoltDB Management Center to make changes interactively in the Admin tab
•Editing the original configuration file and applying the modifications with the voltadmin update command
For example, you can change security settings, import and export configurations, and resource limits dynamically. With either approach, the changes you make are saved by VoltDB in the database root directory.
However, there are some changes that cannot be made while the database is running. For example, changing the K-safety value or the number of partitions per server requires shutting down, re-initializing, and restarting the database. To change these static aspects of your cluster, you must save the database contents, reconfigure the root directory, then restart and restore the database. The steps for changing static configuration options are:
1.Pause the database ( voltadmin pause )
2.Save a snapshot of the contents ( voltadmin save {path} {file-prefix} )
3.Shutdown the database ( voltadmin shutdown )
4.Re-initialize the root directory with the new configuration file and the --force argument ( voltdb init --force --config=file )
5.Restart the database in admin mode ( voltdb start --pause )
6.Restore the snapshot ( voltadmin restore {path} {file-prefix} )
7.Resume normal operations ( voltadmin resume )
See Chapter 13, Saving & Restoring a VoltDB Database for information on using save and restore. When doing benchmarking, where you need to change the number of partitions or adjust other static configuration options, this is the recommended approach. However, if you are adjusting the size of the cluster to increase or decrease capacity or performance, you can perform these operations while the database is running. Adding and removing nodes "on the fly" is known as elastic scaling and is described in the next section.
9.3. Elastic Scaling to Resize the Cluster
Elastic scaling is the ability to resize the cluster as needed, without having to shut down the database. Elastic scaling supports both increasing and decreasing the size of the cluster. For example, you might want to increase the size of the cluster ahead of an important announcement that will drive additional traffic — and subsequently require additional capacity. Similarly, you may want to reduce the size of the cluster during slow periods to limit the number of resources that would be underutilized.
Adding and removing nodes using elastic scaling are handled separately: increasing the size of the cluster requires adding new nodes to the cluster first, while when decreasing the size of the cluster the nodes are already part of the cluster and VoltDB decides which nodes are most advantageous to remove based on the distribution of partitions within the cluster.
To add nodes to the cluster you start the additional nodes using the voltdb start --add command.
To remove nodes from the cluster, you use the voltadmin resize command and the cluster decides which nodes to remove.
But in both cases, the correct number of nodes must be added or removed at the same time. The number of nodes added or removed must result in the resized cluster meeting the requirements for a K-safe cluster based on the K-safety value and number of sites per host (as described in Section 10.2.2, “Calculating the Appropriate Number of Nodes for K-Safety”). So for a cluster with no K-safety (K=0), nodes can be added and removed individually. For K-safe clusters, K+1 nodes must be added or removed at a time. For example, with K=1 two nodes must be added at a time. When reducing the size of the cluster, two nodes must likewise be removed at a time, and the resulting cluster must also meet the requirement that the total number of partitions (sites per host X number of nodes) is divisible by K+1.
Finally, resizing the cluster "on the fly" does require both time and some amount of resources while the data and partitions are rebalanced. The length of time required to complete the rebalancing depends on the amount of data present and the current workload. Similarly, the performance impact of resizing on the ongoing operation of the cluster depends on how much additional capacity the cluster has to assign to rebalance tasks.
The following sections describe how to:
•Add nodes using elastic scaling
•Remove nodes using elastic scaling
•Control the time and performance impact of elastic scaling by configuring the rebalance workload
9.3.1. Adding Nodes with Elastic Scaling
When you are ready to extend the cluster by adding one or more nodes, you simply initialize and start the VoltDB database process on the new nodes, using the voltdb init command to initialize the root directory and the voltdb start command with the --add argument to start the server, specifying the name of one or more of the existing cluster nodes as the hosts. For example, if you are adding node ServerX to a cluster where ServerA is already a member, you can execute the following commands on ServerX:
$ voltdb init --config=deployment.xml
$ voltdb start --add --host=ServerA
Once the elastic add action is initiated, the cluster performs the following tasks:
1.The cluster acknowledges the presence of a new server.
2.Copies of the current schema and configuration settings are sent to the new node.
3.Once sufficient nodes are added, copies of all replicated tables and their share of the partitioned tables are sent to the new nodes.
4.As the data is redistributed (or rebalanced), the added nodes begin participating as full members of the cluster.
There are some important notes to consider when expanding the cluster using elastic scaling:
•You must add a sufficient number of nodes to create an integral K-safe unit. That is, K+1 nodes. For example, if the K-safety value for the cluster is two, you must add three nodes at a time to expand the cluster. If the cluster is not K-safe (in other words it has a K-safety value of zero), you can add one node at a time.
•When you add nodes to a K-safe cluster, the nodes added first will complete steps #1 and #2 above, but will not complete steps #3 and #4 until the correct number of nodes are added, at which point all nodes rebalance together.
•While the cluster is rebalancing (Step #3), the database continues to handle incoming requests.
However,\ndepending on the workload and amount of data in the database, rebalancing may take a significant\namount of time.\n•Once elastic scaling is complete, your database configuration has changed. If you shutdown the database\nand then restart, you must specify the new server count in the --count argument to the voltdb start\ncommand.\n9.3.2. Removing Nodes with Elastic Scaling\nWhen you want to reduce the size of your cluster, you use the voltadmin resize command to start the\nresizing process. First, as with any significant maintenance activity, it is a good idea to take a snapshot of\nthe database contents before you begin, just in case you need to restore it later. The next step is to test to\nmake sure the cluster can be reduced. You do this using the voltadmin resize --test command:\n$ voltadmin resize --test\nThe --test qualifier verifies that there are sufficient nodes and partitions to reduce the cluster while main-\ntaining the K-safety and sitesperhost settings. If not, the command will report that the cluster cannot be\nreduced in size. If resizing is possible, the command reports which nodes will be removed when resizing\nbegins.\nOnce you are ready to begin the resizing process, you use the voltadmin resize command:\n$ voltadmin resize\nThe command repeats the test phase, reports which nodes will be removed and prompts you to confirm\nthat you are ready to start. When you respond with \"y\" or \"yes\", the resizing process begins.\nOnce resizing begins, the process cannot be canceled. Even if the cluster stops, resizing will continue once\nthe cluster restarts (and you must restart all of the original nodes so the resize operation can complete). So\nbe sure you want to reduce the cluster size before you respond positively to the prompt.\nThe length of time it takes for resizing to complete depends on the amount of data in the database and\nthe current workload. You can adjust parameters that affect resizing (as described in Section 9.3.3, “Con-\nfiguring How VoltDB Rebalances Nodes During Elastic Scaling” ). However, increasing the duration or\nthroughput for resizing will likely have a corresponding inverse impact on the performance of ongoing\ndatabase activities. Use the voltadmin status to check on the current status of the resizing operation, or\nuse the @Statistics system procedure with the REBALANCE selector for details.\nFinally, if an unexpected event causes the resize process to fail — which will be reported in the server logs\n— you can restart the resize operation using the voltadmin resize --retry command.\n9.3.3. Configuring How VoltDB Rebalances Nodes During\nElastic Scaling\nAs you add or remove nodes using elastic scaling, VoltDB rebalances the cluster by rearranging data within\nthe partitions. During elastic expansion, as soon as you add the necessary number of nodes (based on the\nK-safety value), VoltDB rebalances the cluster, moving data from existing partitions to partitions on the\nnew nodes. During elastic contraction, before the nodes are removed, VoltDB rebalances the cluster by\nmoving data from partitions that are being removed to partitions that will remain.\nDuring the rebalance phase, the database remains available and actively processing client requests. 
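While a rebalance is in progress, you can watch it using the @Statistics system procedure with the REBALANCE selector (mentioned in the previous section) or the voltadmin status command. As a rough monitoring sketch (assuming Python 3, that the JSON HTTP interface described in Section 8.2 is enabled on its default port, and the two-argument @Statistics calling convention shown there), you might poll the statistics like this:

import json
import time
import urllib.parse
import urllib.request

def rebalance_stats(server="localhost"):
    # Call the @Statistics system procedure with the REBALANCE selector
    # through the JSON HTTP interface (see Section 8.2).
    query = urllib.parse.urlencode({
        "Procedure": "@Statistics",
        "Parameters": json.dumps(["REBALANCE", 0]),
    })
    url = "http://%s:8080/api/2.0/?%s" % (server, query)
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

# Poll a few times while the rebalance runs and print the raw result rows.
for _ in range(10):
    print(rebalance_stats()["results"])
    time.sleep(5)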
How long the rebalance operation takes is dependent on two factors: how often rebalance tasks are processed and how much data each transaction moves.
Rebalance tasks are fully transactional, meaning they operate within the database's ACID-compliant transactional model. Because they involve moving data between two or more partitions, they are also multi-partition transactions. This means that each rebalance work unit can incrementally add to the latency of pending client transactions.
You can control how quickly the rebalance operation completes versus how much rebalance work impacts ongoing client transactions using two attributes of the <elastic> element in the configuration file:
•The duration attribute sets a target value for the length of time each rebalance transaction will take, specified in milliseconds. The default is 50 milliseconds.
•The throughput attribute sets a target value for the number of megabytes per second that will be processed by the rebalance transactions. The default is 2 megabytes per second.
When you change the target duration, VoltDB adjusts the amount of data that is moved in each transaction to reach the target execution time. If you increase the duration, the volume of data moved per transaction increases. Similarly, if you reduce the duration, the volume per transaction decreases.
When you change the target throughput, VoltDB adjusts the frequency of rebalance transactions to achieve the desired volume of data moved per second. If you increase the target throughput, the number of rebalance transactions per second increases. Similarly, if you decrease the target throughput, the number of transactions decreases.
The <elastic> element is a child of the <systemsettings> element. For example, the following configuration file sets the target duration to 15 milliseconds and the target throughput to 1 megabyte per second before starting the database:
<deployment>
 . . .
 <systemsettings>
 <elastic duration="15" throughput="1"/>
 </systemsettings>
</deployment>
Chapter 10. Availability
Durability is one of the four key ACID attributes required to ensure the accurate and reliable operation of a transactional database. Durability refers to the ability to maintain database consistency and availability in the face of external problems, such as hardware or operating system failure. Durability is provided by four features of VoltDB: snapshots, command logging, K-safety, and disaster recovery through database replication.
•Snapshots are a "snapshot" of the data within the database at a given point in time written to disk. You can use these snapshot files to restore the database to a previous, known state after a failure which brings down the database. The snapshots are guaranteed to be transactionally consistent at the point at which the snapshot was taken. Chapter 13, Saving & Restoring a VoltDB Database describes how to create and restore database snapshots.
•Command Logging is a feature where, in addition to periodic snapshots, the system keeps a log of every stored procedure (or "command") as it is invoked. If, for any reason, the servers fail, they can "replay" the log on startup to reinstate the database contents completely rather than just to an arbitrary point-in-time.
Chapter 14, Command Logging and Recovery describes how to enable, configure, and replay command logs.
•K-safety refers to the practice of duplicating database partitions so that the database can withstand the loss of cluster nodes without interrupting the service. For example, a K value of zero means that there is no duplication and losing any servers will result in a loss of data and database operations. If there are two copies of every partition (a K value of one), then the cluster can withstand the loss of at least one node (and possibly more) without any interruption in service.
•Database Replication is similar to K-safety, since it involves replicating data. However, rather than creating redundant partitions within a single database, database replication involves creating and maintaining a complete copy of the entire database. Database replication has a number of uses, but specifically in terms of durability, replication lets you maintain two copies of the database in separate geographic locations. In case of catastrophic events, such as fires, earthquakes, or large scale power outages, the replica can be used as a replacement for a disabled cluster.
Subsequent chapters describe snapshots and command logging. The next chapter describes how you can use database replication for disaster recovery. This chapter explains how K-safety works, how to configure your VoltDB database for different values of K, and how to recover in the case of a system failure.
10.1. How K-Safety Works
K-safety involves duplicating database partitions so that if a partition is lost (either due to hardware or software problems) the database can continue to function with the remaining duplicates. In the case of VoltDB, the duplicate partitions are fully functioning members of the cluster, including all read and write operations that apply to those partitions. (In other words, the duplicates function as peers rather than in a master-slave relationship.)
It is also important to note that K-safety is different from WAN replication. In replication the entire database cluster is replicated (usually at a remote location to provide for disaster recovery in case the entire cluster or site goes down due to catastrophic failure of some type).
In replication, the replicated cluster operates independently and cannot assist when only part of the active cluster fails. The replica is intended to take over only when the primary database cluster fails entirely. So, in cases where the database is mission critical, it is not uncommon to use both K-safety and replication to achieve the highest levels of service.
To achieve K=1, it is necessary to duplicate all partitions. (If you don't, failure of a node that contains a non-duplicated partition would cause the database to fail.) Similarly, K=2 requires two duplicates of every partition, and so on.
What happens during normal operations is that any work assigned to a duplicated partition is sent to all copies (as shown in Figure 10.1, “K-Safety in Action”). If a node fails, the database continues to function, sending the work to the unaffected copies of the partition.
Figure 10.1. K-Safety in Action
10.2. Enabling K-Safety
You specify the desired K-safety value as part of the cluster configuration when you initialize the database root directory. By default, VoltDB uses a K-safety value of zero (no duplicate partitions). You can specify a larger K-safety value using the kfactor attribute of the <cluster> tag.
For example, in the following\nconfiguration file, the K-safety value is set to 2:\n<?xml version=\"1.0\"?>\n<deployment>\n <cluster kfactor=\"2\" />\n</deployment>\nWhen you start the database specifying a K-safety value greater than zero, the appropriate number of\npartitions out of the cluster will be assigned as duplicates. For example, if you start a cluster with 3 nodes\nand the default partitions per node of 8, there are a total of 24 partitions. With K=1, half of those partitions\n(12) will be assigned as duplicates of the other half. If K is increased to 2, the cluster would be divided\ninto 3 copies consisting of 8 partitions each.\nThe important point to note when setting the K value is that, if you do not change the hardware configu-\nration, you are dividing the available partitions among the duplicate copies. Therefore performance (and\n83Availability\ncapacity) will be proportionally decreased as K-safety is increased. So running K=1 on a 6-node cluster\nwill be approximately equivalent to running a 3-node cluster with K=0.\nIf you wish to increase reliability without impacting performance, you must increase the cluster size to\nprovide the appropriate capacity to accommodate for K-safety.\n10.2.1. What Happens When You Enable K-Safety\nOf course, to ensure a system failure does not impact the database, not only do the partitions need to be\nduplicated, but VoltDB must ensure that the duplicates are kept on separate nodes of the cluster. To achieve\nthis, VoltDB calculates the maximum number of unique partitions that can be created, given the number\nof nodes, partitions per node, and the desired K-safety value.\nWhen the number of nodes is an integral multiple of the duplicates needed, this is easy to calculate. For\nexample, if you have a six node cluster and choose K=1, VoltDB will create two instances of three nodes\neach. If you choose K=2, VoltDB will create three instances of two nodes each. And so on.\nIf the number of nodes is not a multiple of the number of duplicates, VoltDB does its best to distribute the\npartitions evenly. For example, if you have a three node cluster with two partitions per node, when you\nask for K=1 (in other words, two of every partition), VoltDB will duplicate three partitions, distributing\nthe six total partitions across the three nodes.\n10.2.2. Calculating the Appropriate Number of Nodes for K-\nSafety\nBy now it should be clear that there is a correlation between the K value and the number of nodes and\npartitions in the cluster. Ideally, the number of nodes is a multiple of the number of copies needed (in other\nwords, the K value plus one). This is both the easiest configuration to understand and manage.\nHowever, if the number of nodes is not an exact multiple, VoltDB distributes the duplicated partitions\nacross the cluster using the largest number of unique partitions possible. 
This is the highest whole integer\nwhere the number of unique partitions is equal to the total number of partitions divided by the needed\nnumber of copies:\nUnique partitions = (nodes * partitions/node) / (K + 1)\nTherefore, when you specify a cluster size that is not a multiple of K+1, but where the total number of\npartitions is, VoltDB will use all of the partitions to achieve the required K-safety value.\nNote that the total number of partitions must be a whole multiple of the number of copies (that is, K+1).\nIf neither the number of nodes nor the total number of partitions is divisible by K+1, then VoltDB will\nnot let the cluster start and will display an appropriate error message. For example, if the configuration\nspecifies 3 sites per host and a K-safety value of 1 but the voltdb start command specifies a server count\nof 3, the cluster cannot start because the total number of partitions (3X3=9) is not a multiple of the number\nof copies (K+1=2). To start the cluster, you must either change the configuration to increase the K-safety\nvalue to 2 (so the number of copies is 3) or change the sites per host to 2 or 4 so the total number of\npartitions is divisible by 2.\nFinally, if the configuration specifies a K value higher than the available number of nodes, it is not possible\nto achieve the requested K-safety. Even if there are enough partitions to create the requested duplicates,\nVoltDB cannot distribute the duplicates to distinct nodes. For example, if you start a 3 node cluster when\nthe configuration specifies 4 partitions per node (12 total partitions) and a K-safety value of 3, the number\nof total partitions (12) is divisible by K+1 (4) but not without some duplicates residing on the same node.\n84Availability\nIn this situation, VoltDB issues an error message. You must either reduce the K-safety or increase the\nnumber of nodes.\n10.3. Recovering from System Failures\nWhen running without K-safety (in other words, a K-safety value of zero) any node failure is fatal and\nwill bring down the database (since there are no longer enough partitions to maintain operation). When\nrunning with K-safety on, if a node goes down, the remaining nodes of the database cluster log an error\nindicating that a node has failed.\nBy default, these error messages are logged to the console terminal. Since the loss of one or more nodes\nreduces the reliability of the cluster, you may want to increase the urgency of these messages. For example,\nyou can configure a separate Log4J appender (such as the SMTP appender) to report node failure mes-\nsages. To do this, you should configure the appender to handle messages of class HOST and severity level\nERROR or greater. See the chapter on Logging in the VoltDB Administrator's Guide for more information\nabout configuring logging.\nWhen a node fails with K-safety enabled, the database continues to operate. But at the earliest possible\nconvenience, you should repair (or replace) the failed node.\nTo replace a failed node to a running VoltDB cluster, you restart the VoltDB server process specifying\nthe address of at least one of the remaining nodes of the cluster as the host. 
For example, to rejoin a node\nto the VoltDB cluster where server5 is one of the current member nodes, you use the following voltdb\nstart command:\n$ voltdb start --host=server5 \nIf you started the servers specifying multiple hosts, you can use the same voltdb start command used to\nstart the cluster as a whole since, even if the failed node is in the host list, one of the other nodes in the\nlist can service its rejoin request.\nIf the failed server cannot be restarted (for example, if hardware problems caused the failure) you can start\na replacement server in its place. Note you will need to initialize a root directory on the replacement server\nbefore you can start the database process. You can either initialize the root with the original configuration\nfile. Or, if you have changed the configuration, you can download a copy of the current configuration from\nthe VoltDB Management Center and use that file to initialize the root directory before starting:\n$ voltdb init --config=latest-config.xml\n$ voltdb start --host=server5 \nNote that at least one node you specify in the --host argument must be an active member of the cluster. It\ndoes not have to be one of the nodes identified as the host when the cluster was originally started.\n10.3.1. What Happens When a Node Rejoins the Cluster\nWhen you use voltdb start to bring back a server to a running cluster, the node first rejoins the cluster,\nthen retrieves a copy of the database schema and the appropriate data for its partitions from other nodes\nin the cluster. Rejoining the cluster only takes seconds and once this is done and the schema is received,\nthe node can accept and distribute stored procedure requests like any other member.\nHowever, the new node will not actively participate in the work until a full working copy of its partition\ndata is received. While the data is being copied, the cluster separates the rejoin process from the standard\ntransactional workflow, allowing the database to continue operating with a minimal impact to throughput\nor latency. So the database remains available and responsive to client applications throughout the rejoin\nprocedure.\n85Availability\nIt is important to remember that the cluster is not fully K-safe until the restoration is complete. For example,\nif the cluster was established with a K-safety value of two and one node failed, until that node rejoins and\nis updated, the cluster is operating with a K-safety value of one. Once the node is up to date, the cluster\nbecomes fully operational and the original K-safety is restored.\n10.3.2. Where and When Recovery May Fail\nIt is possible to rejoin any appropriately configured node to the cluster. It does not have to be the same\nphysical machine that failed. This way, if a node fails for hardware reasons, it is possible to replace it\nin the cluster immediately with a new node, giving you time to diagnose and repair the faulty hardware\nwithout endangering the database itself.\nThere are a few conditions in which the rejoin operation may fail. Those situations include the following:\n•Insufficient K-safety\nIf the database is running without K-safety, or more nodes fail simultaneously than the cluster is capable\nof sustaining, the entire cluster will fail and must be restarted from scratch. (At a minimum, a VoltDB\ndatabase running with K-safety can withstand at least as many simultaneous failures as the K-safety\nvalue. It may be able to withstand more node failures, depending upon the specific situation. 
But the K-safety value tells you the minimum number of node failures that the cluster can withstand.)
•Mismatched configuration in the root directory
If the configuration file that you specify when initializing the root directory does not match the current configuration of the database, the cluster will refuse to let the node rejoin.
•More nodes attempt to rejoin than have failed
If one or more nodes fail, the cluster will accept rejoin requests from as many nodes as failed. For example, if one node fails, the first node requesting to rejoin will be accepted. Once the cluster is back to the correct number of nodes, any further requests to rejoin will be rejected. (This is the same behavior as if you try to start more nodes than specified in the --count argument to the voltdb start command when starting the database.)
10.4. Avoiding Network Partitions
VoltDB achieves scalability by creating a tightly bound network of servers that distribute both data and processing. When you configure and manage your own server hardware, you can ensure that the cluster resides on a single network switch, guaranteeing the best network connection between nodes and reducing the possibility of network faults interfering with communication.
However, there are situations where this is not the case. For example, if you run VoltDB "in the cloud", you may not control or even know the physical configuration of your cluster.
The danger is that a network fault — between switches, for example — can interrupt communication between nodes in the cluster. The server nodes continue to run, and may even be able to communicate with other nodes on their side of the fault, but cannot "see" the rest of the cluster. In fact, both halves of the cluster think that the other half has failed. This condition is known as a network partition.
10.4.1. K-Safety and Network Partitions
When you run a VoltDB cluster without availability (in other words, no K-safety), the danger of a network partition is simple: loss of the database. Any node failure makes the cluster incomplete and the database will stop. You will need to reestablish network communications, restart VoltDB, and restore the database from the last snapshot.
However, if you are running a cluster with K-safety, it is possible that when a network partition occurs, the two separate segments of the cluster might have enough partitions each to continue running, each thinking the other group of nodes has failed.
For example, if you have a 3 node cluster with 2 sites per node, and a K-safety value of 2, each node is a separate, self-sustaining copy of the database, as shown in Figure 10.2, “Network Partition”. If a network partition separates nodes A and B from node C, each segment has sufficient partitions remaining to sustain the database. Nodes A and B think node C has failed; node C thinks that nodes A and B have failed.
Figure 10.2. Network Partition
The problem is that you never want two separate copies of the database continuing to operate and accepting requests thinking they are the only viable copy. If the cluster is physically on a single network switch, the threat of a network partition is reduced. But if the cluster is on multiple switches, the risk increases significantly and must be accounted for.
10.4.2. Using Network Fault Protection
VoltDB provides a mechanism for guaranteeing that a network partition does not accidentally create two separate copies of the database.
The feature is called network fault protection.\nBecause the consequences of a partition are so severe, use of network partition detection is strongly rec-\nommended and VoltDB enables partition detection by default. In addition it is recommended that, wher-\never possible, K-safe clusters be configured with an odd number of nodes.\nHowever, it is possible to disable network fault protection in the configuration file when you initialize the\ndatabase, if you choose. You enable and disable partition detection using the <partition-detection> tag.\nThe <partition-detection> tag is a child of <deployment> and peer of <cluster>. For example:\n<deployment>\n <cluster hostcount=\"4\" \n sitesperhost=\"2\"\n kfactor=\"1\" />\n <partition-detection enabled=\"true\"/>\n</deployment>\n87Availability\nWhen network fault protection is enabled, and a fault is detected (either due to a network fault or one or\nmore servers failing), any viable segment of the cluster will perform the following steps:\n1.Determine what nodes are missing\n2.Determine if the missing nodes are also a viable self-sustained cluster. If so...\n3.Determine which segment is the larger segment (that is, contains more nodes).\n•If the current segment is larger, continue to operate assuming the nodes in the smaller segment have\nfailed.\n•If the other segment is larger, shutdown to avoid creating two separate copies of the database.\nFor example, in the case shown in Figure 10.2, “Network Partition” , if a network partition separates nodes\nA and B from C, the larger segment (nodes A and B) will continue to run and node C will shutdown (as\nshown in Figure 10.3, “Network Fault Protection in Action” ).\nFigure 10.3. Network Fault Protection in Action\nIf a network partition creates two viable segments of the same size (for example, if a four node cluster\nis split into two two-node segments), a special case is invoked where one segment is uniquely chosen\nto continue, based on the internal numbering of the host nodes. Thereby ensuring that only one viable\nsegment of the partitioned database continues.\nNetwork fault protection is a very valuable tool when running VoltDB clusters in a distributed or uncon-\ntrolled environment where network partitions may occur. The one downside is that there is no way to dif-\nferentiate between network partitions and actual node failures. In the case where network fault protection\nis turned on and no network partition occurs but a large number of nodes actually fail, the remaining nodes\nmay believe they are the smaller segment. In this case, the remaining nodes will shut themselves down\nto avoid partitioning.\nFor example, in the previous case shown in Figure 10.3, “Network Fault Protection in Action” , if rather\nthan a network partition, nodes A and B fail, node C is the only node still running. Although node C is\nviable and could continue because the database was configured with K-safety set to 2, if fault protection\nis enabled node C will shut itself down to avoid a partition.\nIn the worst case, if half the nodes of a cluster fail, the remaining nodes may actually shut themselves down\nunder the special provisions for a network partition that splits a cluster into two equal parts. For example,\nconsider the situation where a two node cluster with a k-safety value of one has network partition detection\n88Availability\nenabled. If one of the nodes fails (half the cluster), there is only a 50/50 chance the remaining node is the\n\"blessed\" node chosen to continue under these conditions. 
If the remaining node is not the chosen node, it will shut itself down to avoid a conflict, taking the database out of service in the process.
Because this situation — a 50/50 split — could result in either a network partition or a viable cluster shutting down, VoltDB recommends always using network partition detection and using clusters with an odd number of nodes. By using network partition detection, you avoid the dangers of a partition. By using an odd number of servers, you avoid even the possibility of a 50/50 split, whether caused by partitioning or node failures.
Chapter 11. Database Replication
There are times when it is useful to create multiple copies of a database. Not just a snapshot of a moment in time, but live, constantly updated copies.
K-safety maintains redundant copies of partitions within a single VoltDB database, which helps protect the database cluster against individual node failure. Database replication also creates a copy. However, database replication creates and maintains copies in separate, often remote, databases.
VoltDB supports two forms of database replication:
•One-way (Passive)
•Two-way (Cross Datacenter)
Passive replication copies the contents from one database, known as the master database, to the other, known as the replica. In passive replication, replication occurs in one direction: from the master to the replica. Clients can connect to the master database and perform all normal database operations, including INSERT, UPDATE, and DELETE statements. As shown in Figure 11.1, "Passive Database Replication", changes are copied from the master to the replica. To ensure consistency between the two databases, the replica is started as a read-only database, where only transactions replicated from the master can modify the database contents.
Figure 11.1. Passive Database Replication
Cross Datacenter Replication (XDCR), or active replication, copies changes in both directions. XDCR can be set up on multiple clusters (not just two). Client applications can then perform read/write operations on any of the participating clusters and changes in one database are then copied and applied to all the other databases. Figure 11.2, "Cross Datacenter Replication" shows how XDCR can support client applications attached to each database instance.
Figure 11.2. Cross Datacenter Replication
Database replication (DR) provides two key business advantages. The first is protecting your business data against catastrophic events, such as power outages or natural disasters, which could take down an entire cluster. This is often referred to as disaster recovery. Because the clusters can be in different geographic locations, both passive DR and XDCR allow other clusters to continue unaffected when one becomes inoperable. Because the replica is available for read-only transactions, passive DR also allows you to offload read-only workloads, such as reporting, from the main database instance.
The second business issue that DR addresses is the need to maintain separate, active copies of the database in separate locations. For example, XDCR allows you to maintain copies of a product inventory database at two or more separate warehouses, close to the applications that need the data. This feature makes it possible to support massive numbers of clients that could not be supported by a single database instance or might result in unacceptable latency when the database and the users are geographically separated.
The databases can even reside on separate continents.
It is important to note, however, that database replication is not instantaneous. The transactions are committed locally, then copied to the other database or databases. So when using XDCR to maintain multiple active clusters you must be careful to design your applications to avoid possible conflicts when transactions change the same record in two databases at approximately the same time. See Section 11.3.8, "Understanding Conflict Resolution" for more information about conflict resolution.
The remainder of this chapter discusses the following topics:
•Section 11.1, "How Database Replication Works"
•Section 11.2, "Using Passive Database Replication"
•Section 11.3, "Using Cross Datacenter Replication"
•Section 11.4, "Monitoring Database Replication"
11.1. How Database Replication Works
Database replication (DR) involves duplicating the contents of selected tables between two database clusters. In passive DR, the contents are copied in one direction: from master to replica. In active or cross datacenter DR, changes are copied in both directions.
You identify which tables to replicate in the schema by specifying the table name in a DR TABLE statement. For example, to replicate all tables in the voter sample application, you would execute three DR TABLE statements when defining the database schema:
DR TABLE contestants;
DR TABLE votes;
DR TABLE area_code_state;
11.1.1. Starting Database Replication
You enable DR by including the <dr> tag in the configuration files when initializing the database. The <dr> element identifies three pieces of information:
•A unique cluster ID for each database. The ID is required and can be any number between 0 and 127, as long as each cluster has a different ID.
•The role the cluster plays, whether master, replica, or xdcr. The default is master.
•For the replica and xdcr roles, a connection source listing the host name or IP address of one or more nodes from the other databases.
For example:
<dr id="2" role="replica">
 <connection source="serverA1,serverA2" />
</dr>
Each cluster must have a unique ID. For passive DR, only the replica needs a <connection> element, since replication occurs in only one direction.
For cross datacenter replication (XDCR), all clusters must include the <connection> element pointing to at least one other cluster. If you are establishing an XDCR network with multiple clusters, the <connection> tag can specify hosts from one or more of the other clusters. The participating clusters will coordinate establishing the correct connections, even if the <connection> element does not list them all.
Note that for XDCR, you must specify the attribute role="xdcr" before starting each cluster. You cannot mix active and passive DR in the same database group.
For passive DR, you must start the replica database with the role="replica" attribute to ensure the replica is in read-only mode. Once the clusters are configured properly and the schema of the DR tables match in the databases, replication starts.
The actual replication process is performed in multiple parallel streams; each unique partition on one cluster sends a binary log of completed transactions to the other clusters.
Replicating by partition has two key advantages:
•The process is faster — Because the replication process uses a binary log of the results of the transaction (rather than the transaction itself), the receiving cluster (or consumer) does not need to reprocess the transaction; it simply applies the results. Also, since each partition replicates autonomously, multiple streams of data are processed in parallel, significantly increasing throughput.
•The process is more durable — In a K-safe environment, if a server fails on a DR cluster, individual partition streams can be redirected to other nodes or a stream can wait for the server to rejoin — without interfering with the replication of the other partitions.
If data already exists in one of the clusters before database replication starts for the first time, that database sends a snapshot of the existing data to the other, as shown in Figure 11.3, "Replicating an Existing Database". Once the snapshot is received and applied (and the two clusters are in sync), the partitions start sending binary logs of transaction results to keep the clusters synchronized.
Figure 11.3. Replicating an Existing Database
For passive DR, only the master database can have existing data before starting replication for the first time. The replica's DR tables must be empty. For XDCR, the first database that is started can have data in the DR tables. If other clusters contain data, replication cannot start. Once DR has started, the databases can stop and recover using command logging without having to restart DR from the beginning.
11.1.2. Database Replication, Availability, and Disaster Recovery
Once replication begins, the DR process is designed to withstand normal failures and operational downtime. When using K-safety, if a node fails on any cluster, you can rejoin the node (or a replacement) using the voltdb start command without breaking replication. Similarly, if a cluster shuts down, you can use voltdb start to restart the database and restart replication where it left off. The ability to restart DR assumes you are using command logging. Specifically, synchronous command logging is recommended to ensure complete durability.
If unforeseen events occur that make a database unreachable, database replication lets you replace the missing database with its copy. This process is known as disaster recovery. For cross datacenter replication (XDCR), you simply need to redirect your client applications to the remaining cluster(s). For passive DR, there is an extra step. To replace the master database with the replica, you must issue the voltadmin promote command on the replica to switch it from read-only mode to a fully operational database.
Figure 11.4. Promoting the Replica
See Section 11.2.6.3, "Promoting the Replica When the Master Becomes Unavailable" for more information on promoting the replica database.
11.1.3. Database Replication and Completeness
It is important to note that, unlike K-safety where multiple copies of each partition are updated simultaneously, database replication involves shipping the results of completed transactions from one database to another. Because replication happens after the fact, there is no guarantee that the contents of the clusters are identical at any given point in time.
Instead, the receiving database (or consumer) "catches up" with the sending database (or producer) after the binary logs are received and applied by each partition.
Also, because DR occurs on a per partition basis, changes to partitions may not occur in the same order on the consumer, since one partition may replicate faster than another. Normally this is not a problem because the results of all transactions are atomic in the binary log. However, if the producer cluster crashes, there is no guarantee that the consumer has managed to retrieve all the logs that were queued. Therefore, it is possible that some transactions that completed on the producer are not reflected on the consumer.
Fortunately, using command logging, when you restart the failed cluster, any unacknowledged transactions will be replayed from the failed cluster's disk-based DR cache, allowing the clusters to recover and resume DR where they left off. However, if the failed cluster does not recover, you will need to decide how to proceed. You can choose to restart DR from scratch or, if you are using passive DR, you can promote the replica to replace the master.
To ensure effective recovery, the use of synchronous command logging is recommended for DR. Synchronous command logging guarantees that all transactions are recorded in the command log and no transactions are lost. If you use asynchronous command logging, there is a possibility that a binary log is applied but not captured by the command log before the cluster crashes. Then when the database recovers, the clusters will not agree on the last acknowledged DR transaction, and DR will not be able to resume.
The decision whether to promote the replica or wait for the master to return (and hopefully recover all transactions from the command log) is not an easy one. Promoting the replica and using it to replace the original master may involve losing one or more transactions per partition. However, if the master cannot be recovered, or cannot be recovered quickly, waiting for the master to return can result in significant business loss or interruption.
Your own business requirements and the specific situation that caused the outage will determine which choice to make — whether to wait for the failed cluster to recover or to continue operations on the remaining cluster only. The important point is that database replication makes the choice possible and significantly eases the dangers of unforeseen events.
11.2. Using Passive Database Replication
The following sections provide step-by-step instructions for setting up and running passive replication between two VoltDB clusters. The steps include:
1.Specifying what tables to replicate in the schema
2.Configuring the master and replica root directories for DR
3.Starting the databases
4.Loading the schema
The remaining sections discuss other aspects of managing passive DR, including:
•Updating the schema
•Stopping database replication
•Promoting the replica database
•Using the replica for read-only transactions
11.2.1. Specifying the DR Tables in the Schema
First, you must identify which tables you wish to copy from the master to the replica. Only the selected tables are copied.
You identify the tables in both the master and the replica database schema with the DR TABLE statement. For example, the following statements identify two tables to be replicated, the Customers and Orders tables:
CREATE TABLE customers (
 customerID INTEGER NOT NULL,
 firstname VARCHAR(128),
 lastname VARCHAR(128)
);
CREATE TABLE orders (
 orderID INTEGER NOT NULL,
 customerID INTEGER NOT NULL,
 placed TIMESTAMP
);
DR TABLE customers;
DR TABLE orders;
You can identify any regular table, whether partitioned or not, as a DR table, as long as the table is empty. That is, the table must have no data in it when you issue the DR TABLE statement.
The important point to remember is that the schema for both databases must contain matching table definitions for any tables identified as DR tables, including the associated DR TABLE declarations. Although it is easiest to have the master and replica databases use the exact same schema, that is not necessary. The replica can have a subset or superset of the tables in the master, as long as it contains matching definitions for all of the DR tables. The replica schema can even contain additional objects not in the master schema, such as additional views, which can be useful when using the replica for read-only or reporting workloads, just as long as the DR tables match.
11.2.2. Configuring the Clusters
The next step is to properly configure the master and replica clusters. The two database clusters can have different physical configurations (that is, different numbers of nodes, different sites per host, or a different K factor). Identical cluster configurations guarantee the most efficient replication, because the replica does not need to repartition the incoming binary logs. Differing configurations, on the other hand, may incrementally increase the time needed to apply the binary logs.
Before you start the databases, you must initialize the root directories for both clusters with the appropriate DR attributes. You enable DR in the configuration file using the <dr> element, including a unique cluster ID for each database cluster and that cluster's role. The ID is a number between 0 and 127 which VoltDB uses to uniquely identify each cluster as part of the DR process. The role is either master or replica.
For example, you could assign ID=1 for the master cluster and ID=2 for the replica. On the replica, you must also include a <connection> sub-element that points to the master database. For example:
Master Cluster <dr id="1" role="master"/>
Replica Cluster <dr id="2" role="replica">
 <connection source="MasterSvrA,MasterSvrB" />
</dr>
11.2.3. Starting the Clusters
The next step is to start the databases. You start the master database as normal with the voltdb start command. If you are creating a new database, you can then load the schema, including the necessary DR TABLE statements. Or you can restore a previous database instance if desired. Once the master database starts, it is ready and can interact with client applications.
For the replica database, you use the voltdb start command to start a new, empty database.
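For example, here is a minimal sketch of the initialization and start-up commands for the two clusters. The directory paths, configuration file names, host names, and node counts are hypothetical placeholders; the configuration files are assumed to contain the <dr> elements shown above.
Master Cluster
# Placeholder paths, file names, hosts, and counts
$ voltdb init -D ~/masterdb --config=master.xml
$ voltdb start -D ~/masterdb --host=MasterSvrA --count=3
Replica Cluster
$ voltdb init -D ~/replicadb --config=replica.xml
$ voltdb start -D ~/replicadb --host=ReplicaSvr1 --count=3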
Once the\ndatabase is running, you can execute DDL statements to load the database schema, but you cannot perform\nany data manipulation queries such as INSERT, UPDATE, or DELETE because the replica is in read-\nonly mode.\nThe source attribute of the <connection> tag in the replica configuration file identifies the hostname\nor IP address (and optionally port number) of one or more servers in the master cluster. You can specify\nmultiple servers so that DR can start even if one of the listed servers on the master cluster is currently down.\nIt is usually convenient to specify the connection information when initializing the database root directory.\nBut this property can be changed after the database is running, in case you do not know the address of\nthe master cluster nodes before starting. (Note, however, that the cluster ID cannot be changed once the\ndatabase starts.)\n11.2.4. Loading the Schema and Starting Replication\nAs soon as the replica database starts with DR enabled, it will attempt to contact the master database to\nstart replication. The replica will issue warnings that the schema does not match, since the replica does\nnot have any schema defined yet. This is normal. The replica will periodically contact the master until the\nschema for DR objects on the two databases match. This gives you time to load a matching schema.\nAs soon as the replica database has started, you can load the appropriate schema. Loading the same schema\nas the master database is the easiest and recommended approach. The key point is that once a matching\nschema is loaded, replication will begin automatically.\nWhen replication starts, the following actions occur:\n96Database Replication\n1.The replica and master databases verify that the DR tables match on the two clusters.\n2.If data already exists in the DR tables on the master, the master sends a snapshot of the current contents\nto the replica where it is restored into the appropriate tables.\n3.Once the snapshot, if any, is restored, the master starts sending binary logs of changes to the DR tables\nto the replica.\nIf any errors occur during the snapshot transmission, replication stops and must be restarted from the\nbeginning. However, once the third step is reached, replication proceeds independently for each unique\npartition and, in a K safe environment, the DR process becomes durable across node failures and rejoins\nand other non-fatal events.\nIf either the master or the replica database crashes and needs to restart, it is possible to restart DR where it\nleft off, assuming the databases are using command logging for recovery. If the master fails, you simply\nuse the voltdb start command to restart the master database. The replica will wait for the master to recover.\nThe master will then replay any DR logs on disk and resume DR where it left off.\nIf the replica fails, the master will queue the DR logs to disk waiting for the replica to return. 
If you use\nthe voltdb start command on the replica cluster, the replica will perform the following actions:\n1.Restart the replica database, restoring both the schema and the data, and placing the database in read-\nonly mode.\n2.Contact the master cluster and attempt to re-establish DR.\n3.If both clusters agree on where (that is, what transaction), DR was interrupted, DR will resume from\nthat point, starting with the DR logs that the master database has queued in the interim.\nIf the clusters do not agree on where DR stopped during step #3, the replica database will generate an error\nand stop replication. For example, if you recover from an asynchronous command log where the last few\nDR logs were ACKed to the master but not written to the command log, the master and the replica will\nbe in different states when the replica recovers.\nIf this occurs, you must restart DR from the beginning, by re-initializing the replica root directory (with\nthe --force flag), restarting the database, and then reloading a compatible schema. Similarly, if you are not\nusing command logging, you cannot recover the replica database and must start DR from scratch.\n11.2.5. Updating the Schema During Replication\nBecause database replication is asynchronous, updating the schema requires a deliberate, planned process.\nYou need to ensure that no transactions that write to the affected tables are executed while the schema is\nbeing updated. If the DR consumer (that is, the replica) detects a transaction to a table where the schema\ndoes not match, the replica stops requesting and processing binary logs from the master cluster. The master\ncluster then queues all changes until the schema is updated on the replica. Once the schema on the replica\nis updated to match the incoming transaction, replication resumes.\nThe safest way to update the schema is the following:\n1.Pause the master cluster with the voltadmin pause --wait command\n2.Update the schema on the master and replica.\n3.Resume operation on the master with the voltadmin resume command\nThese steps ensure that no transactions are processed until the schema for both clusters are updated. How-\never, this process also means the master database does not accept any client transactions during the update\nprocess.\n97Database Replication\nBecause schema validation occurs on a per table, per transaction basis, it is possible to update the schema\nwithout pausing the database. However, this only works if you ensure that no client transactions attempt\nto modify affected tables while the schema differ.\nFor example, it is possible to add tables to the database schema without pausing the database by adding the\ntables to the master database and replica in one step, then updating the stored procedures to access the new\ntables in a second step. This way no client applications access the new tables until they exist and match\non both databases, and ongoing transactions are not impacted.\nYou can even modify existing DR tables without pausing the database. But in this case you must be\nmuch more careful about avoiding operations that access the affected tables during the transition. If any\ntransactions attempt to write to an affected table while the schema differ, the replica will stall until the\nschema match. One way to do this is to create a new table, matching the existing table but with the desired\nchanges. Update the schema on both clusters, then update the client applications and stored procedures to\nuse the new table. 
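For illustration, a hedged sketch of that approach using the customers table from Section 11.2.1; the replacement table name and the added column are hypothetical and not part of the original example:
-- Run the same DDL on both the master and the replica.
-- customers_v2 and loyaltylevel are illustrative names only.
CREATE TABLE customers_v2 (
 customerID INTEGER NOT NULL,
 firstname VARCHAR(128),
 lastname VARCHAR(128),
 loyaltylevel INTEGER
);
DR TABLE customers_v2;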
Finally, once all client applications are updated, the original table can be deleted.\n11.2.6. Stopping Replication\nIf, for any reason, you wish to stop replication of a database, there are two ways to do this: you can stop\nsending data from the master or you can \"promote\" the replica to stop it from receiving data. Since the\nindividual partitions are replicating data independently, if possible you want to make sure all pending\ntransfers are completed before turning off replication.\nSo, under the best circumstances, you should perform the following steps to stop replication:\n1.Stop write transactions on the master database by putting it in admin mode using the voltadmin pause\ncommand.\n2.Wait for all pending DR log transfers to be completed.\n3.Reset DR on the master cluster using the voltadmin dr reset command.\n4.Depending on your goals, either shut down the replica or promote it to a fully-functional database as\ndescribed in Section 11.2.6.3, “Promoting the Replica When the Master Becomes Unavailable” .\n11.2.6.1. Stopping Replication on the Master if the Replica Becomes Un-\navailable\nIf the replica becomes unavailable and is not going to be recovered or restarted , you should consider\nstopping DR on the master database, to avoid consuming unnecessary disk space.\nThe DR process is resilient against network glitches and node or cluster failures. This durability is achieved\nby the master database continually queueing DR logs in memory and — if too much memory is required\n— to disk while it waits for the replica to ACK the last message. This way, when the network interruption\nor other delay is cleared, the DR process can pick up where it left off. However, the master database has\nno way to distinguish a temporary network failure from an actual stoppage of DR on the replica.\nTherefore, if the replica stops unexpectedly, it is a good idea to restart the replica and re-initiate DR as\nsoon as convenient. Or, if you are not going to restart DR, you should reset DR on the master to cancel\nthe queuing of DR logs and to delete any pending logs. To reset the DR process on the master database,\nuse the voltadmin dr reset command. For example:\n$ voltadmin dr reset --host=serverA\n98Database Replication\nOf course, if you do intend to recover and restart DR on the replica, you do not want to reset DR on the\nmaster. Resetting DR on the master will delete any queued DR logs and make restarting replication where\nit left off impossible and force you to start DR over from the beginning.\n11.2.6.2. Database Replication and Disaster Recovery\nIf unforeseen events occur that make the master database unreachable, database replication lets you replace\nthe master with the replica and restore normal business operations with as little downtime as possible. You\nswitch the replica from read-only to a fully functional database by promoting it. To do this, perform the\nfollowing steps:\n1.Make sure the master is actually unreachable, because you do not want two live copies of the same\ndatabase. 
If it is reachable but not functioning properly, be sure to pause or shut down the master\ndatabase.\n2.Promote the replica to a read/write mode using the voltadmin promote command.\n3.Redirect the client applications to the newly promoted database.\nFigure 11.4, “Promoting the Replica” illustrates how database replication reduces the risk of major disas-\nters by allowing the replica to replace the master if the master becomes unavailable.\nOnce the master is offline and the replica is promoted, the data is no longer being replicated. As soon as\nnormal business operations have been re-established, it is a good idea to also re-establish replication. This\ncan be done using any of the following options:\n•If the original master database hardware can be restarted, take a snapshot of the current database (that\nis, the original replica), restore the snapshot on the original master and redirect client traffic back to the\noriginal. Replication can then be restarted using the original configuration.\n•An alternative, if the original database hardware can be restarted but you do not want to (or need to)\nredirect the clients away from the current database, is to use the original master hardware to create\na replica of the newly promoted cluster — essentially switching the roles of the master and replica\ndatabases — as described in Section 11.2.6.4, “Reversing the Master/Replica Roles” .\n•If the original master hardware cannot be recovered effectively, create a new database cluster in a third\nlocation to use as a replica of the current database.\n11.2.6.3. Promoting the Replica When the Master Becomes Unavailable\nIf the master database becomes unreachable for whatever reason (such as catastrophic system or network\nfailure) it may not be possible to turn off DR in an orderly fashion. In this case, you may choose to “turn\non” the replica as a fully active (writable) database to replace the master. To do this, you use the voltad-\nmin promote command. When you promote the replica database, it exits read-only mode and becomes\na fully operational VoltDB database. For example, the following Linux shell command uses voltadmin\nto promote the replica node serverB:\n$ voltadmin promote --host=serverB\n11.2.6.4. Reversing the Master/Replica Roles\nIf you do promote the replica and start using it as the primary database, you will likely want to establish a\nnew replica as soon as possible to return to the original production configuration and level of durability.\nYou can do this by creating a new replica cluster and connecting to the promoted database as described in\nSection 11.2.3, “Starting the Clusters” . Or, if the master database can be restarted, you can reuse that cluster\nas the new replica, by modifying the configuration file to change the DR role from master to replica, and\n99Database Replication\nadd the necessary <connection> element, re-initializing the database root directory, and then starting\nthe new database cluster with the voltdb start command.\n11.2.7. Database Replication and Read-only Clients\nWhile database replication is occurring, the only changes to the replica database come from the binary\nlogs. Client applications can connect to the replica and use it for read-only transactions, including read-\nonly ad hoc queries and system procedures. However, any attempt to perform a write transaction from a\nclient application returns an error.\nThere will always be some delay between a transaction completing on the master and its results being\napplied on the replica. 
However, for read operations that do not require real-time accuracy (such as report-\ning), the replica can provide a useful source for offloading certain less-frequent, read-only transactions\nfrom the master.\nFigure 11.5. Read-Only Access to the Replica\n11.3. Using Cross Datacenter Replication\nThe following sections provide step-by-step instructions for setting up and running cross datacenter repli-\ncation (XDCR) between two or more VoltDB clusters. The sections describe how to:\n1.Design your schema and identify the DR tables\n2.Configure the database clusters, including:\n•Choosing unique cluster IDs\n•Identifying the DR connections\n3.Start the databases\n4.Load the schema and start replication\nLater sections discuss other aspects of managing XDCR, including:\n•Updating the schema during replication\n•Stopping database replication\n•Resolving conflicts\n100Database Replication\nImportant\nXDCR is a separately licensed feature. If your current VoltDB license does not include a key for\nXDCR you will not be able to complete the tasks described in this section. See your VoltDB sales\nrepresentative for more information on licensing XDCR.\n11.3.1. Designing Your Schema for Active Replication\nTo manage XDCR, VoltDB stores a small amount (8 bytes) of extra metadata with every row of data that is\nshared. This additional space is allocated automatically for any table declared as a DR TABLE on a cluster\nconfigured with the <dr> role attribute set to xdcr. Be sure to take this additional space requirement into\nconsideration when planning the memory usage of servers participating in an XDCR network.\nNext, you must identify which tables you wish to share between the databases. Only the selected tables are\ncopied. You identify the tables in the schema with the DR TABLE statement. For example, the following\nstatements identify two tables to be replicated, the Customers and Orders tables:\nCREATE TABLE customers (\n customerID INTEGER NOT NULL,\n firstname VARCHAR(128),\n LASTNAME varchar(128)\n);\nCREATE TABLE orders (\n orderID INTEGER NOT NULL,\n customerID INTEGER NOT NULL,\n placed TIMESTAMP\n);\nDR TABLE customers;\nDR TABLE orders;\nYou can identify any regular table, whether partitioned or not, as a DR table, as long as the table is empty.\nThat is, the table must have no data in it when you issue the DR TABLE statement. The important point\nto remember is that the schema definitions for all DR tables, including the DR TABLE statements, must\nbe identical on all the participating clusters.\n11.3.2. Configuring the Database Clusters\nThe next step is to configure and initialize the database root directories. The database clusters can have\ndifferent physical configurations (that is, different numbers of nodes, different sites per host, or a different\nK factor). Identical cluster configurations guarantee the most efficient replication, because the databases\ndo not need to repartition the incoming binary logs. Differing configurations, on the other hand, may\nincrementally increase the time needed to apply the binary logs.\nWhen initializing the database root directories, you must also enable and configure DR in the configuration\nfile, including:\n•Choosing a unique ID for each cluster\n•Specifying the DR connections\n11.3.2.1. Choosing Unique IDs\nYou enable DR in the configuration file using the <dr> element and including a unique cluster ID for\neach database cluster.\n101Database Replication\nTo manage the DR process VoltDB needs to uniquely identify the clusters. 
You provide this unique identifier as a number between 0 and 127 when you configure the clusters. For example, if we assign ID=1 to a cluster in New York and ID=2 to another in Chicago, their respective configuration files must contain the following <dr> elements. You must also specify that the cluster is participating in XDCR by specifying the role. For example:
New York Cluster
<dr id="1" role="xdcr" />
Chicago Cluster
<dr id="2" role="xdcr" />
11.3.2.2. Identifying the DR Connections
For each database cluster, you must also specify the source of replication in the <connection> sub-element. You do this by pointing each cluster to at least one of the other clusters, specifying one or more servers on the remote cluster(s) in the source attribute.
You only need to point each connection source at servers from one of the other clusters, even if more clusters are participating in the XDCR relationship. However, it is a good idea to include them all in the source string so the current cluster is not dependent on the order in which the clusters start.
For example, say there are two clusters. The New York cluster has nodes NYserverA, NYserverB, and NYserverC, while the Chicago cluster has CHIserverX, CHIserverY, and CHIserverZ. The configuration files for the two clusters might look like this:
New York Cluster
<dr id="1" role="xdcr">
 <connection source="CHIserverX,CHIserverY" />
</dr>
Chicago Cluster
<dr id="2" role="xdcr">
 <connection source="NYserverA,NYserverB,NYserverC" />
</dr>
Note that both clusters must have a connection defined for active replication to start. An alternative approach is to initialize the databases leaving the source attribute of the <connection> element empty. You can then update the configuration to add source servers once the database is up and running and the appropriate schema has been applied. For example:
<dr id="1" role="xdcr">
 <connection source="" />
</dr>
Once the configuration files have the necessary declarations, you can initialize the root directories on all cluster nodes using the appropriate configuration files:
New York Cluster
$ voltdb init -D ~/nydb --config=nyconfig.xml
Chicago Cluster
$ voltdb init -D ~/chidb --config=chiconfig.xml
If you then want to add a third cluster to the XDCR relationship (say San Francisco), you can define a configuration file that points at either or both of the other clusters:
San Francisco Cluster
<dr id="3" role="xdcr">
 <connection source="CHIserverX,CHIserverY,NYserverA,NYserverB" />
</dr>
When configuring three or more XDCR clusters, you also have the option of specifying which cluster a new instance uses as the source for downloading the initial snapshot. For example, if two of the clusters are located in the same physical location, you can specify the cluster ID of a preferred source to reduce the time needed to synchronize the clusters. Note that the preferred source attribute only applies when the database first joins the XDCR environment or if DR is restarted from scratch. When the cluster recovers existing data under normal operation the preferred source is ignored. For example, a second Chicago cluster could specify the cluster ID of the original Chicago database as the preferred source, like so:
2nd Chicago Cluster
<dr id="4" role="xdcr">
 <connection source="CHIserverX,CHIserverY,NYserverA,NYserverB" 
 preferred-source="2" />
</dr>
11.3.3.
Starting the Database Clusters\nOnce the servers are initialized with the necessary configuration, you can start the database clusters. How-\never, it is important to note three important points:\n•Only one of the clusters can have data in the DR tables when setting up XDCR and that database must\nbe the first in the XDCR network. In other words, start the database containing the data first. Then start\nand connect a second, empty database to it.\n•As soon as the databases start, they automatically attempt to contact each other, verify that the DR table\nschema match, and start the DR process\n•Only one database can join the XDCR network at a time. You must wait for each joining cluster to\ncomplete the initial synchronization before starting the next.\nOften the easiest method for starting the databases is to:\n1.Start one cluster\n2.Load the schema (including the DR table declarations) and any pre-existing data on that cluster\n3.Once the first cluster is fully configured, start the second cluster and load the schema\n4.Once the second cluster finishes synchronizing with the first, start each additional cluster, one at a time.\nUsing this approach, DR does not start until step #3 is complete and the first two clusters are fully config-\nured. Then any additional clusters are added separately.\nYou can then start and load the schema on the databases and perform any other preparatory work you\nrequire. Then edit the configuration files — one at a time using the voltadmin update command — filling\n103Database Replication\nin the source attribute for each cluster to point at another. As soon as the source attribute is defined and\nthe schema match, the DR process will begin for the first pair of clusters. Once the first two clusters\nsynchronize, you can repeat this process, one at a time, with any other participating clusters.\nNote\nAlthough the source attribute can be modified on a running database, the unique cluster ID cannot\nbe changed after the database starts. So it is important to include the <dr> element with the unique\nID and xdcr role when initializing the database root directories.\n11.3.4. Loading a Matching Schema and Starting Replication\nAs soon as the databases start with DR enabled, they attempt to contact a cooperating database to start\nreplication. Each cluster will issue warnings until the schema for the databases match. This is normal and\ngives you time to load a matching schema. The key point is that once matching schema are loaded on the\ndatabases, replication will begin automatically.\nWhen replication starts, the following actions occur:\n1.The clusters verify that the DR tables match on both clusters.\n2.If data already exists in the DR tables of the first database, that cluster sends a snapshot of the current\ncontents to the other cluster where it is restored into the appropriate tables.\n3.Once the snapshot, if any, is restored, both databases (and any other participating clusters) start sending\nbinary logs of changes from DR tables to the other cluster.\nIf any errors occur during the snapshot transmission, replication stops and must be restarted from the\nbeginning. However, once the third step is reached, replication proceeds independently for each unique\npartition and, in a K safe environment, the DR process becomes durable across node failures and rejoins\nas well as cluster shutdowns and restarts.\n11.3.5. 
Updating the Schema During Active Replication\nSQL statements such as DELETE, INSERT, and UPDATE are transmitted through the DR binary logs,\nbut schema changes are not. Therefore, you must make schema changes to each database separately. More\nimportantly, while doing this you must be careful to ensure that no transactions attempt to modify data in\ntables where the schema does not match on the cooperating clusters.\nIf a consumer cluster (cluster A) receives a replication record in the binary log that does not match the\nschema for that table in the database, the consumer will stop processing binary logs from the producer\n(cluster B). Replication will remain stalled until the schema is updated to match what was received from the\nproducer. At the same time, the producer will buffer any subsequent transactions waiting for the consumer\nto resume replication.\nIn the best case, there are mismatched transactions in only one direction (that is, from cluster B to cluster\nA). If so, once you update the schema on the stalled consumer cluster A, replication resumes and cluster\nB can send the subsequent transactions it had buffered.\nHowever, while binary logs from the producer are stalled, the consumer continues to process client trans-\nactions itself and will send those transactions as binary logs to the other cluster. That is, cluster A also acts\nas a producer sending binary logs to cluster B as a consumer. If there are simultaneous write transactions to\nthe same table on the two clusters while the schema do not match, a deadlock can result. Both clusters will\nstall due to mismatched schema and their content will have diverged. In this situation, your only option is to\nchoose one of the clusters as the \"winner\" and reinitialize the other cluster and restart XDCR from scratch.\n104Database Replication\nTo avoid conflicts, the safest process for changing the schema for DR tables in XDCR is the following:\n1.Pause and drain the outstanding DR binary logs on all clusters using the voltadmin pause --wait com-\nmand\n2.Update the schema for the DR tables on all clusters\n3.Resume all clusters using the voltadmin resume command\nThis process ensures that no transactions are processed until the schema on all clusters in the XDCR\nrelationship are updated and in sync. However, this process also means that there are no client transactions\nprocessed during the update.\nIt is possible to update the schema without pausing the database. However, to do this, you must be ex-\ntremely careful to ensure that no transactions attempt to modify tables while the schema differ between\nthe clusters. For example, it is possible to add tables to the database schema without pausing the database.\nYou can add the new tables to the databases in one step, then update the stored procedures and client\napplications in a second step. This way no client applications access the new tables until their schema exist\nand match on all of the XDCR databases. At the same time, ongoing transactions associated with older\ntables are not impacted.\n11.3.6. Stopping Replication\nIf, for any reason, you need to break replication between the XDCR databases, you can issue the voltadmin\ndr reset command to any cluster. For example, if one of two clusters goes down and will not be brought\nback online for an extended period, you can issue a voltadmin dr reset command on the remaining cluster\nto tell it to stop queuing binary logs. 
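For example, a sketch of the command as it might be issued on the remaining cluster (the host name is a placeholder):
$ voltadmin dr reset --host=NYserverA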
If not, the logs will be saved on disk, waiting for the other cluster\nto return, until you run out of disk space.\nWhen using multiple clusters in an XDCR environment, you must choose whether to break replication\nwith all other clusters ( voltadmin dr reset --all ) or with one specific cluster. Breaking replication with\nall clusters means that all of the other clusters will need to restart DR from scratch to rejoin the XDCR\nenvironment. Breaking replication with a single cluster means the remaining clusters retain their XDCR\nrelationship.\nIf you wish to remove just one, active cluster from the XDCR relationship, you can issue the voltadmin\ndr drop command to the cluster you wish to remove. This command finalizes any remaining DR logs on\nthe cluster and tells all other clusters to break their DR connection with that cluster. If the cluster you want\nto remove is not currently running, you can issue the voltadmin dr reset --cluster= n to all the remaining\nclusters where n is the cluster ID of the cluster being removed.\nHowever, there is a danger that if you remove a failed cluster from a multi-cluster XDCR environment,\nthe failed cluster may not have sent the same binary logs to all of the other clusters. In which case, when\nyou drop that cluster from the environment, the data on the remaining clusters will diverge. So, using dr\nreset --cluster is recommended only if you are sure that there were no outstanding logs to be sent from the\nfailed cluster. For example, stopping an XDCR cluster with an orderly shutdown ( voltadmin shutdown )\nensures that all its binary logs are transmitted and therefore the other clusters are in sync.\nWhen using the dr reset --cluster command, you must also include the --force option to verify that you\nunderstand the risks associated with this action. So, the process for removing a single, failed cluster from\na multi-cluster XDCR environment is:\n1.Identify the cluster ID of the cluster that has failed.\n2.Issue the voltadmin dr reset --cluster= {failed-cluster-ID} --force command on all the remaining clus-\nters to clear the binary log queues.\n105Database Replication\nThis way, the remaining clusters can maintain their XDCR relationship but not retain queued data for the\nfailed cluster. If, later, you want to rejoin the failed cluster to the XDCR environment, you will need to\nreinitialize the failed cluster's root directories and restart its XDCR relationship from scratch.\n11.3.7. Example XDCR Configurations\nIt is not possible to mix XDCR clusters and passive DR in the same database relationship. However, it is\npossible to create \"virtual\" replicas in a XDCR environment, if your business requires it.\nNormally, in an XDCR environment, all cluster participate equally. They can all initiate transactions and\nreplicate those transactions among themselves, as shown in Figure 11.6, “Standard XDCR Configuration” .\nFigure 11.6. Standard XDCR Configuration\nIf you also want to have one (or more) clusters on \"standby\", for example, purely for disaster recovery\nor to off-load read-only workloads, you can dedicate clusters from within your XDCR environment for\nthat purpose. The easiest way to do that is to configure the extra clusters as normal XDCR clusters. That\nis setting their role as \"XDCR\" and assigning them a unique DR ID. However, rather than starting the\nclusters in normal operational mode, you can use the --pause flag on the voltdb start command to start\nthem in admin mode. 
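For example, a minimal sketch of starting such a standby cluster in admin mode (the directory path, host name, and node count are placeholders):
# Placeholder directory, host, and count; --pause keeps the cluster in admin mode
$ voltdb start -D ~/standbydb --pause --host=StandbySvr1 --count=3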
This way no transactions can be initiated on the cluster's client ports. However, the\ncluster will receive and process DR binary logs from the other clusters in the DR relationship. Figure 11.7,\n“XDCR Configuration with Read-Only Replicas” demonstrates one such configuration.\nFigure 11.7. XDCR Configuration with Read-Only Replicas\n11.3.8. Understanding Conflict Resolution\nOne aspect of database replication that is unique to cross datacenter replication (XDCR) is the need to\nprepare for and manage conflicts between the databases. Conflict resolution is not an issue for passive\n106Database Replication\nreplication since changes travel in only one direction. However, with XDCR it is possible for changes to\nbe made to the same data at approximately the same time on two databases. Those changes are then sent\nto the other database, resulting in possible inconsistencies or invalid transactions.\nFor example, say clusters A and B are processing transactions as shown in Figure 11.8, “Transaction\nOrder and Conflict Resolution” . Cluster A executes a transaction that modifies a specific record and this\ntransaction is included in the binary log A 1. By the time cluster B receives the binary log and processes\nA1, cluster B has already processed its own transactions B 1 and B2. Those transactions may have modified\nthe same record as the transaction in A 1, or another record that would conflict with the change in A 1, such\nas a matching unique index entry.\nFigure 11.8. Transaction Order and Conflict Resolution\nUnder these conditions, cluster B cannot simply apply the changes in A 1 because doing so could violate\nthe uniqueness constraints of the schema and, more importantly, is likely to result in the content of the\ntwo database clusters diverging. Instead, cluster B must decide which change takes priority. That is, what\nresolution to the conflict is most likely to produce meaningful results or match the intent of the business\napplication. This decision making process is called conflict resolution .\nNo matter what the resolution, it is important that the database administrators are notified of the conflict,\nwhy it occurred, and what action was taken. The following sections explain:\n•How to avoid conflicts\n•How VoltDB resolves conflicts when they do occur\n•What types of conflicts can occur\n•How those conflicts are reported\n11.3.8.1. Designing Your Application to Avoid Conflicts\nVoltDB uses well-defined rules for resolving conflicts. However, the best protection against conflicts and\nthe problems they can cause is to design your application to avoid conflicts in the first place. There are at\nleast two things you can do in your client applications to avoid conflicts:\n•Use Primary Keys\nIt is best, wherever possible, to define a primary key for all DR tables. The primary key index greatly\nimproves performance for finding the matching row to apply the change on a consumer cluster. It is also\nrequired if you want conflicts to be resolved using the standard rules described in the following section .\nAny conflicting action without a primary key is rejected.\n107Database Replication\n•Apply related transactions to the same cluster\nAnother tactic for avoiding conflicts is to make sure any autonomous set of transactions affecting a set\nof rows are all applied on the same cluster. 
For example, ensuring that all transactions for a single user session, or associated with a particular purchase order, are directed to the same cluster.
•Do not use TRUNCATE TABLE
TRUNCATE TABLE is a convenient statement for deleting all records in a table. The statement is optimized to avoid deleting row by row. However, this optimization means that the binary log does not report which rows were deleted. As a consequence, a TRUNCATE TABLE statement can easily produce a conflict between two XDCR clusters that is not detected or reported in the conflict log.
Therefore, do not use TRUNCATE TABLE with XDCR. Instead, explicitly delete all rows with a DELETE statement and a filter. For example, DELETE FROM table WHERE column=column ensures all deleted rows are identified in the binary log and any conflicts are accurately reported. Note that DELETE FROM table is not sufficient, since its execution plan is optimized to equate to TRUNCATE TABLE. Also, when deleting all rows in a table, it is best to perform the delete in smaller batches to avoid overflowing the maximum size allowed for the binary log packets.
11.3.8.2. How Conflicts are Resolved
Even with the best application design possible, errors in program logic or operation may occur that result in conflicting records being written to two or more databases. When a conflict does occur, VoltDB follows specific rules for resolving the issue. The conflict resolution rules are:
•Conflicts are resolved on a per action basis. That is, resolution rules apply to the individual INSERT, UPDATE, or DELETE operation on a specific tuple. Resolutions are not applied to the transaction as a whole.
•The resolution is that the incoming action is accepted (that is, applied to the receiving database) or rejected.
•Only actions involving a table with a primary key can be accepted; all other conflicting actions are rejected.
•Accepted actions are applied as a whole — the entire record is changed to match the result on the producer cluster. That means for UPDATE actions, all columns are written, not just the columns specified in the SQL statement.
•For tables with primary keys, the rules for which transaction wins are, in order:
1.DELETE transactions always win
2.If neither action is a DELETE, the last transaction (based on the timestamp) wins
Let's look at a simple example to see how these rules work. Assume that the database stores user records, using a numeric user ID as the primary key and containing columns for the user's name and password. A user logs on simultaneously in two locations and performs two separate updates: one on cluster A changing their name and one on cluster B changing the password. These updates are almost simultaneous. However, cluster A timestamps its transaction as occurring at 10:15.00.003 and cluster B timestamps its transaction at 10:15.00.001.
The binary logs from the two transactions include the type of action, the contents of the record before and after the change, and the timestamps — both of the last previous transaction and the timestamp of the new transaction. (Note that the timestamp includes both the time and the cluster ID where the transaction occurred.)
So the two binary logs might look like the following.
Binary Log A1:
Action: UPDATE
Current Timestamp: 1, 10:15.00.003
Previous Timestamp: 1, 06:30.00.000
Before
 UserID: 12345
 Name: Joe Smith
 Password: abalone
After
 UserID: 12345
 Name: Joseph Smith
 Password: abalone
Binary Log B1:
Action: UPDATE
Current Timestamp: 2, 10:15.00.001
Previous Timestamp: 1, 06:30.00.000
Before
 UserID: 12345
 Name: Joe Smith
 Password: abalone
After
 UserID: 12345
 Name: Joe Smith
 Password: flounder
When the binary log A1 arrives at cluster B, the DR process performs the following steps:
1.Uses the primary key (12345) to look up the current record in the database.
2.Compares the current timestamp in the database with the previous timestamp in the binary log.
3.Because the transaction in B1 has already been applied on cluster B, the timestamps do not match. A conflict is recognized.
4.A primary key exists, so cluster B attempts to resolve the conflict by comparing the new timestamp, 10:15.00.003, to the current timestamp, 10:15.00.001.
5.Because the new timestamp is the later of the two, the new transaction "wins" and the change is applied to the database.
6.Finally, the conflict and its resolution are logged. (See Section 11.3.8.4, "Reporting Conflicts" for more information about how conflicts are reported.)
Note that when the UPDATE from A1 is applied, the change to the password in B1 is overwritten and the password is reset to "abalone", which at first looks like a problem. However, when the binary log B1 arrives at cluster A, the same steps are followed. But when cluster A reaches steps #4 and 5, it finds that the new timestamp from B1 is older than the current timestamp, and so the action is rejected and the record is left unchanged. As a result both databases end up with the same value for the record. Essentially, the password change is dropped.
If the transaction on cluster B had been to delete the user record rather than change the password, then the outcome would be different, but still consistent. In that case, when binary log A1 reaches cluster B, it would not be able to find the matching record in step #1. This is recognized as a DELETE action having occurred. Since DELETE always wins, the incoming UPDATE is rejected. Similarly, when binary log B1 reaches cluster A, the previous timestamps do not match but, even though the incoming action in B1 has an older timestamp than the UPDATE action in A1, B1 "wins" because it is a delete action and the record is deleted from cluster A. Again, the result is consistent across the two databases.
The real problem with conflicts is when there is no primary key on the database table. Primary keys uniquely identify a record. Without a primary key, there is no way for VoltDB to tell, even if there are one or more unique indexes on the table, whether two records are the same record modified or two different records with the same unique key values.
As a result, if there is a conflict between two transactions without a primary key, VoltDB has no way to resolve the conflict and simply rejects the incoming action.
Going back to our example, if the user table\nhad a unique index on the user ID rather than a primary key, and both cluster A and cluster B update the\nuser record at approximately the same time, when binary log A 1 arrives at cluster B, it would look for the\nrecord based on all columns in the record and fail to find a match.\nHowever, when it attempts to insert the record, it will encounter a constraint violation on the unique index.\nAgain, since there is no primary key, VoltDB cannot resolve the conflict and rejects the incoming action,\nleaving the record with the changed password. On cluster A, the same process occurs and the password\nchange in B 1 gets rejected, leaving cluster A with a changed name column and database B with a changed\npassword column — the databases diverge.\n11.3.8.3. What Types of Conflict Can Occur\nThe preceding section uses a simple case of conflicting UPDATE transactions to illustrate the steps in-\nvolved in conflict resolution. However, there are several different types of conflict that can occur. First,\nthere are three possible actions that the binary log can contain: INSERT, UPDATE, or DELETE. There\nare also three types of conflicts that can be generated:\n•Missing row — The affected row is missing from the consumer database.\n•Timestamp mismatch — The affected row exists in the consumer database, but has a different time-\nstamp than expected (in other words, it has been modified).\n•Constraint violation — Applying the incoming action would result in one or more constraint violations\non unique indexes.\nA missing row means that the binary log contains an UPDATE or DELETE action, but the affected row\ncannot be found in the consumer database. (A missing row conflict cannot occur for INSERT actions, since\nINSERT assumes no such row exists.) In the case of a missing row conflict, VoltDB assumes a DELETE\naction has removed the affected row. Since the rule is that DELETE wins, this means the incoming action\nis rejected.\nNote that if the table does not have a primary key, the assumption that a DELETE action removed the row\nis not guaranteed to be true, since it is possible an UPDATE changed the row. Without a primary key,\nthere is no way for the DR process to find the matching row when some columns may have changed, so it\nassumes it was deleted. As a result, an UPDATE could occur on one cluster and a DELETE on the other.\nThis is why assigning primary keys is recommended for DR tables when using XDCR.\nIf the matching primary key is found, it is still possible that the contents of the row have been changed.\nIn which case, the timestamps will not match and a timestamp mismatch conflict occurs. Again, this can\nhappen for UPDATE and DELETE actions where an existing row is being modified. If the incoming action\nis a DELETE, it takes precedence and the row is deleted. If not, if the incoming action has the later of\nthe two timestamps, it is accepted. If the existing record has the later timestamp, the incoming action is\nrejected.\nFinally, whether the timestamps match or not, with an INSERT or UPDATE action, it is possible that\napplying the action would violate one of more unique index constraints. This can happen because another\n110Database Replication\nrow has been updated with matching values for the unique index or another record has been inserted\nwith similar values. Whatever the cause, VoltDB cannot apply the incoming action so it is rejected. 
Note\nthat for a single action there can be more than one unique index that applies to the table, so there can\nbe multiple constraint violations as well as a possible incorrect timestamp. When a conflict occurs, all\nconflicts associated with the action are included in the conflict log.\nTo summarize, the following chart shows the conflicts that can occur with each type of action and the\nresult for tables with a primary key.\nAction Possible Conflict Result for Tables with Primary Key\nINSERT Constraint violation Rejected\nUPDATE Missing row\nTimestamp mismatch\nConstraint violationRejected\nLast transaction wins\nRejected\nDELETE Missing row\nTimestamp mismatchAccepted (no op)\nAccepted\n11.3.8.4. Reporting Conflicts\nVoltDB makes a record of every conflict that occurs when processing the DR binary logs. These conflict\nlogs include:\n•The intended action\n•The type of conflict\n•The timestamp and contents of the row before and after the action from the binary log\n•The timestamp and contents of the row(s) in the consumer database that caused the conflict\n•The timestamp and cluster ID of the conflict itself\nBy default, these logs are written as comma-separated value (CSV) files on the cluster where the con-\nflicts occur. These files are usually written to a subfolder of the voltdbroot directory ( voltdbroot/xd-\ncr_conflicts ) using the file prefix LOG. However, you can configure the logs to be written to different\ndestinations or locations using the VoltDB export configuration settings.\nThe DR process writes the conflicts as export data to the export stream VOLTDB_XDCR_CONFLICTS.\nYou do not need to explicitly configure export — the DR process automatically declares the necessary\nexport streams, establishes a default export configuration for the file connector, and enables the export\nstream. However, if you want the data to be sent to a different location or using a different export connector,\nyou can do this by configuring the export stream yourself.\nFor example, if you want to export the XDCR conflicts to a Kafka stream where they can be used for\nautomatic notifications, you can change the export properties in the configuration file. The following con-\nfiguration file code writes the conflict logs to the Kafka topic sysops on the broker kafkabroker.mycompa-\nny.com:\n<export>\n <configuration enabled=\"true\" type=\"kafka\" \n stream=\" VOLTDB_XDCR_CONFLICTS \">\n <property name=\"broker\"> kafkabroker.mycompany.com </property>\n <property name=\"topic\"> sysops </property>\n </configuration>\n111Database Replication\n</export>\nEach action in the binary log can generate one or more conflicts. When this occurs, VoltDB logs the\nconflict(s) as multiple rows in the conflict report. Each row is identified by the type of action (INSERT,\nUPDATE, DELETE) as well as the type of information the row contains:\n•EXISTING (EXT) — The timestamp and contents of an existing row in the consumer database that\ncaused a conflict. There can be multiple existing row logs, if there are multiple conflicts.\n•EXPECTED (EXP) — The timestamp and contents of the row that is expected before the action is\napplied (from the binary log).\n•NEW (NEW) — The new timestamp and contents for the row once the action is applied (from the\nbinary log).\n•DELETE (DEL) — For a DELETE conflict, the timestamp and cluster ID indicating when and where\nthe conflict occurred.\nFor an INSERT action, there is no EXPECTED row. For either an INSERT or an UPDATE action there\nis no DELETE row. 
And for a DELETE action there is no NEW row. The order of the rows in the report\nis as follows:\n1.The EXISTING row, if there is a timestamp mismatch\n2.The EXPECTED row, if there is a timestamp mismatch\n3.One or more EXISTING rows, if there are any constraint violations\n4.The NEW row, for all actions but DELETE\n5.The DELETE row, for the DELETE action only\nTable 11.1, “Structure of the XDCR Conflict Logs” describes the structure and content of the conflict log\nrecords in the export stream.\nTable 11.1. Structure of the XDCR Conflict Logs\nColumn Name Datatype Description\nROW_TYPE 3 Byte string The type of row, specified as:\nEXT — existing\nEXP — expected\nNEW — new\nDEL — delete\nACTION_TYPE 1 Byte string The type of action, specified as:\nI — insert\nU — update\nD — delete\nCONFLICT_TYPE 4 Byte string The type of conflict, specified as:\nMISS — missing row\nMSMT — timestamp mismatch\nCNST — constraint violation\nNONE — no violationa\n112Database Replication\nColumn Name Datatype Description\nCONFLICTS_ON\n_PRIMARY_KEYTINYINT Whether a constraint violation is associated with the\nprimary key. 1 for true and 0 for false.\nDECISION 1 Byte string How the conflict was resolved, specified as:\nA — the incoming action is accepted\nR — the incoming action is rejected\nCLUSTER_ID TINYINT The DR cluster ID of the cluster that last modified\nthe row\nTIMESTAMP BIGINT The timestamp of the row.\nDIVERGENCE 1 Byte string Whether the resulting action could cause the two\ncluster to diverge, specified as:\nC — the clusters are consistent\nD — the cluster may have diverged\nTABLE_NAME String The name of the table.\nCURRENT\n_CLUSTER_IDTINYINT The DR cluster ID of the cluster reporting the con-\nflict.\nCURRENT\n_TIMESTAMPBIGINT The timestamp of the conflict.\nTUPLE JSON-encoded string The schema and contents of the row, as a JSON-en-\ncoded string. The column is limited to 1MB in size.\nIf the schema and contents exceeds the 1MB limit,\nit is truncated.\naUpdate operations are executed as two separate statements: a delete and an insert, where only one of the two statements might result\nin a violation. For example, the delete may trigger a missing row violation but the insert not generate a violation. In which case the\nEXT row of the conflict log reports the MISS conflict and the NEW row reports NONE.\n11.3.8.5. Managing XDCR Conflict Logs\nThe XDCR conflict logs provide the information necessary to recover from unexpected conflicts in your\napplication workflow. Of course, not all conflicts that are logged are critical. For example, if a row is\ndeleted simultaneously by two XDCR clusters, one or both of the clusters will log a \"missing row\" conflict\nwhen it receives the matching delete transaction from the other cluster. It is a business decision which\nconflicts are acceptable and which require intervention.\nIt is also a business decision how long the conflict logs need to be retained, either for corrective action or\nas historical records. By default, VoltDB saves all conflict logs. However, over time these add up and put\na strain on system resources. So it is a good idea to establish a retention policy for managing the log files.\nVoltDB lets you specify a retention period for conflict logs as part of the configuration file, when initializ-\ning the database. The conflictretention attribute of the <dr> element specifies a time limit, after\nwhich old confict logs are deleted from the system. 
For example, the following configuration file specifies\nthat the conflict logs are kept for 14 days:\n<dr id=\"1\" role=\"xdcr\" conflictretention=\"14d\" >\n . . .\nThe argument to conflictretention is an integer followed by a single character specifying the time\nunit, where the time unit is s, m, h, or d representing seconds, minutes, hours, or days respectively. By\ndefault, there is no retention limit and all conflict log files are kept (except on Kubernetes, where a default\nretention limit of 30 days is applied). Also, conflictretention only applies if you do not customize\n113Database Replication\nthe export connector for XDCR conflict logs, as described in Section 11.3.8.4, “Reporting Conflicts” . If\nyou do customize the export connector and are using a file exporter, you can use the retain attribute on\nthe <configuration> element to specify a retention limit.\n11.4. Monitoring Database Replication\nDatabase replication runs silently in the background. To ensure replication is proceeding effectively, Volt-\nDB provides statistics on the producer and consumer clusters that help you understand the current state of\nthe DR process. Specifically, the statistics can tell you:\n•The amount of DR data waiting to be sent from the producer\n•The timestamp and unique ID of the last transaction received by the consumer\n•Whether any partitions are \"falling behind\" in processing DR data\nThis information is available from the @Statistics system procedure using the DRROLE, DR-\nCONSUMER, and DRPRODUCER selectors. All clusters provide summary information in response to\nthe DRROLE selector. For one-way (passive) DR, the master database is a \"producer\" and provides addi-\ntional information through the DRPRODUCER selector and the replica is the \"consumer\" and provides\nadditional information through the DRCONSUMER selector. For two-way (cross datacenter) replication,\nall clusters act as both producer and consumer and can provide statistics on both roles:\n•On all databases, the @Statistics DRROLE procedure provides summary information about the data-\nbase's DR role (master, replica, xdcr, or none), the cluster ID, and the current state of the DR process.\n•On the producer database, the @Statistics DRPRODUCER procedure includes columns for the cluster\nIDs of the current cluster and the consumer, as well as the transaction ID and timestamp of the last\nqueued transaction and for the last transaction ACKed by the consumer. The difference between these\ntwo events can tell you the approximate latency between the two databases.\n•On the consumer database, the @Statistics DRCONSUMER procedure includes statistics, on a per par-\ntition basis, showing whether it has an identified \"host\" server from each producer cluster \"covering\" it,\nor in other words, providing it DR logs. The system procedure results also include columns listing the\nID and timestamp of the last received transaction for each producer cluster. If a consumer partition is\nnot covered, it means it has lost contact with the server on the producer database that was providing it\nlogs (possibly due to a node failure). It is possible for the partition to recover, once the covering serv-\ner rejoins. However, the difference between the last received timestamp of that partition and the other\npartitions may give you an indication of how long the interruption has persisted and how far behind\nthat partition may be.\n114Chapter 12. Security\nSecurity is an important feature of any application. 
By default, VoltDB does not perform any security\nchecks when a client application opens a connection to the database or invokes a stored procedure. This\nis convenient when developing and distributing an application on a private network.\nHowever, on public or semi-private networks, it is important to make sure only known client applications\nare interacting with the database. VoltDB lets you control access to the database through settings in the\nschema and configuration files. The following sections explain how to enable and configure security for\nyour VoltDB application.\n12.1. How Security Works in VoltDB\nWhen an application creates a connection to a VoltDB database (using ClientFactory.clientCreate), it pass-\nes a username and password as part of the client configuration. These parameters identify the client to the\ndatabase and are used for authenticating access.\nAt runtime, if security is enabled, the username and password passed in by the client application are val-\nidated by the server against the users defined in the configuration file. If the client application passes in\na valid username and password pair, the connection is established. When the application calls a stored\nprocedure, permissions are checked again. If the schema identifies the user as being assigned a role hav-\ning access to that stored procedure, the procedure is executed. If not, an error is returned to the calling\napplication.\nNote\nVoltDB uses hashing rather than encryption when passing the username and password between\nthe client and the server. The Java and C++ clients use SHA-2 hashing while the older clients\ncurrently use SHA-1. The passwords are also hashed within the database. To secure the actual\ncommunication between the server and client, you can implement either Transport Layer Security\n(TLS) or Kerberos security. Use of TLS is described in Section 12.7, “Encrypting VoltDB Com-\nmunication Using TLS/SSL” while the use of Kerberos with VoltDB is described in Section 12.8,\n“Integrating Kerberos Security with VoltDB” .\nThere are three steps to enabling security for a VoltDB application:\n1.Add the <security enabled=\"true\"/> tag to the configuration file to turn on authentication\nand authorization.\n2.Define the users and roles you need to authenticate.\n3.Define which roles have access to each stored procedure.\nThe following sections describe each step of this process, plus how to enable access to system procedures\nand ad hoc queries.\n12.2. Enabling Authentication and Authorization\nBy default VoltDB does not perform authentication and client applications have full access to the database.\nTo enable authentication, add the <security> tag to the configuration file. You can enable security when\nyou initialize the database root directory, or you can use voltadmin update to change the security setting\non the running database. (Or you can change the setting interactively through the VoltDB Management\nCenter.)\n115Security\n<deployment>\n <security enabled=\"true\"/>\n . . .\n</deployment>\n12.3. Defining Users and Roles\nThe key to security for VoltDB applications is the users and roles defined in the schema and configuration.\nYou define users in the configuration file and roles in the schema.\nThis split is deliberate because it allows you to define the overall security structure globally in the schema,\nassigning permissions to generic roles (such as operator, dbuser, apps, and so on). 
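From the application's side, none of this structure is visible: the client simply supplies its credentials when configuring the connection and checks the outcome of each procedure call. The following is a minimal sketch using the VoltDB Java client API; the username, password, and procedure name match examples used later in this chapter, the procedure argument is an arbitrary placeholder, and error handling is reduced to the bare minimum.

import org.voltdb.client.Client;
import org.voltdb.client.ClientConfig;
import org.voltdb.client.ClientFactory;
import org.voltdb.client.ClientResponse;
import org.voltdb.client.ProcCallException;

public class AuthExample {
    public static void main(String[] args) throws Exception {
        // Credentials are validated against the users defined in the configuration file.
        ClientConfig config = new ClientConfig("clientapp", "xyzzy");
        Client client = ClientFactory.createClient(config);
        client.createConnection("voltsvr");
        try {
            // Succeeds only if one of this user's roles appears in the
            // procedure's ALLOW clause. (Procedure name and argument are placeholders.)
            ClientResponse response = client.callProcedure("MyProc1", 12345);
            if (response.getStatus() == ClientResponse.SUCCESS) {
                System.out.println("Procedure executed");
            }
        } catch (ProcCallException e) {
            // Thrown, among other reasons, when the user lacks permission.
            System.err.println("Call rejected: " + e.getMessage());
        } finally {
            client.close();
        }
    }
}

The roles named in a procedure's ALLOW clause are declared in the schema.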
You then define specific\nusers and assign them to the generic roles as part of the database configuration. This way you can create\none configuration (including cluster information and users) for development and testing, then move the\ndatabase to a different configuration and a different set of users for production by changing only one file:\nthe configuration file.\nYou define users within the <users> ... </users> tag set in the configuration file. The syntax for defining\nusers is as follows.\n<deployment>\n <users>\n <user name=\" user-name \" \n password=\" password-string \" \n roles=\" role-name [,...]\" />\n [ ... ]\n </users>\n ...\n</deployment>\nNote\nIf you do not want to distribute the account passwords in plain text, you can use the voltdb mask\ncommand to hash the passwords in the configuration file.\nInclude a <user> tag for every username/password pair you want to define. You specify which roles a user\nbelongs to as part of the user definition in the configuration file using the roles attribute to the <user> tag.\nYou can assign users built-in roles, user-defined roles, or both. For user-defined roles, you define the roles\nin the database schema using the CREATE ROLE statement.\nCREATE ROLE role-name ;\nNote that at least one user must be assigned the built-in ADMINISTRATOR role. For example, the fol-\nlowing code defines three users, assigning operator the built-in ADMINISTRATOR role and the user-\ndefined OPS role, assigning developer the user-defined roles OPS and DBUSER, and assigning the user\nclientapp DBUSER. When a user is assigned more than one role, you specify the role names as a com-\nma-delimited list.\n<deployment>\n <users>\n <user name=\"operator\" password=\"mech\" roles=\"administrator, ops\" />\n <user name=\"developer\" password=\"tech\" roles=\"ops,dbuser\" />\n <user name=\"clientapp\" password=\"xyzzy\" roles=\"dbuser\" />\n </users>\n</deployment>\n116Security\nThree important notes concerning the assignment of users and roles:\n•Users must be assigned at least one role, or else they have no permissions. (Permissions are assigned\nby role.)\n•At least one user must be assigned the built-in ADMINISTRATOR role.\n•There must be a corresponding role defined in the schema for any user-defined roles listed in the con-\nfiguration file.\n12.4. Assigning Access to Stored Procedures\nOnce you define the users and roles you need, you assign them access to individual stored procedures using\nthe ALLOW clause of the CREATE PROCEDURE statement in the schema. In the following example,\nusers assigned the roles dbuser and ops are permitted access to both the MyProc1 and MyProc2 procedures.\nOnly users assigned the ops role have access to the MyProc3 procedure.\nCREATE PROCEDURE ALLOW dbuser,ops FROM CLASS MyProc1;\nCREATE PROCEDURE ALLOW dbuser,ops FROM CLASS MyProc2;\nCREATE PROCEDURE ALLOW ops FROM CLASS MyProc3;\nUsually, when security is enabled, you must specify access rights for each stored procedure. If a procedure\ndeclaration does not include an ALLOW clause, no access is allowed. In other words, calling applications\nwill not be able to invoke that procedure.\n12.5. Assigning Access by Function (System Proce-\ndures, SQL Queries, and Default Procedures)\nIt is not always convenient to assign permissions one at a time. You might want a special role for access to\nall user-defined stored procedures. 
Also, there are special capabilities available within VoltDB that are not\ncalled out individually in the schema so cannot be assigned using the CREATE PROCEDURE statement.\nFor these special cases VoltDB provides named permissions that you can use to assign functions as a\ngroup. For example, the ALLPROC permission grants a role access to all user-defined stored procedures\nso the role does not need to be granted access to each procedure individually.\nSeveral of the special function permissions have two versions: a full access permission and a read-only\npermission. So, for example, DEFAULTPROC assigns access to all default procedures while DEFAULT-\nPROCREAD allows access to only the read-only default procedures; that is, the TABLE.select procedures.\nSimilarly, the SQL permission allows the user to execute both read and write SQL queries interactively\nwhile SQLREAD only allows read-only (SELECT) queries to be executed.\nOne additional functional permission is access to the read-only system procedures, such as @Statistics and\n@SystemInformation. This permission is special in that it does not have a name and does not need to be\nassigned; all authenticated users are automatically assigned read-only access to these system procedures.\nTable 12.1, “Named Security Permissions” describes the named functional permissions.\nTable 12.1. Named Security Permissions\nPermission Description Inherits\nDEFAULTPROCREAD Access to read-only default procedures ( TABLE.se-\nlect)\n117Security\nPermission Description Inherits\nDEFAULTPROC Access to all default procedures ( TABLE.select, TA-\nBLE.insert, TABLE.delete, TABLE.update, and TA-\nBLE.upsert)DEFAULTPROCREAD\nSQLREAD Access to read-only ad hoc SQL queries (SELECT) DEFAULTPROCREAD\nSQL Access to all ad hoc SQL queries and default proce-\nduresSQLREAD, DEFAULT-\nPROC\nALLPROC Access to all user-defined stored procedures\nADMIN Full access to all system procedures, all user-defined\nprocedures, as well as default procedures, ad hoc\nSQL, and DDL statements.ALLPROC, DEFAULT-\nPROC, SQL\nNote: For backwards compatibility, the special permissions ADHOC and SYSPROC are still recognized.\nThey are interpreted as synonyms for SQL and ADMIN, respectively.\nIn the CREATE ROLE statement you enable access to these functions by including the permission name\nin the WITH clause. (The default, if security is enabled and the keyword is not specified, is that the role\nis not allowed access to the corresponding function.)\nNote that the permissions are additive. So if a user is assigned one role that allows access to SQLREAD\nbut not DEFAULTPROC, but that user is also assigned another role that allows DEFAULTPROC, the\nuser has both permissions.\nThe following example assigns full access to members of the ops role, access to interactive SQL queries\n(and default procedures by inheritance) and all user-defined procedures to members of the developer role,\nand no special access beyond read-only system procedures to members of the apps role.\nCREATE ROLE ops WITH admin;\nCREATE ROLE developer WITH sql, allproc;\nCREATE ROLE apps;\n12.6. Using Built-in Roles\nTo simplify the development process, VoltDB predefines two roles for you when you enable security:\nadministrator and user. Administrator has ADMIN permissions: access to all functions including interac-\ntive SQL queries, DDL, system procedures, and user-defined procedures. 
User has SQL and ALLPROC permissions: access to ad hoc SQL and all default and user-defined stored procedures.

These predefined roles are important, because when you start the database there is no schema and therefore no user-defined roles available to assign to users. So you must always include at least one user who is assigned the Administrator role when starting a database with security enabled. You can use this account to then load the schema — including additional security roles and permissions — and then update the configuration to add more users as necessary.

12.7. Encrypting VoltDB Communication Using TLS/SSL

VoltDB hashes usernames and passwords both within the database server and while passing them across the network. However, the network communication itself is not encrypted by default. You can enable Transport Layer Security (TLS) — the recommended upgrade from Secure Socket Layer (SSL) communication — for the HTTP port, which affects the VoltDB Management Center and the JSON interface. You can also extend TLS encryption to all external interfaces (HTTP, client, and admin), the internal interface, and the port used for database replication (DR) for more thorough security. The following sections summarize how to enable TLS for the servers in a cluster, including:

•Configuring TLS encryption on the server
•Choosing which ports to encrypt
•Using the VoltDB command line utilities with TLS
•Implementing TLS communication in Java client applications
•Configuring Database Replication (DR) using TLS

12.7.1. Configuring TLS/SSL on the VoltDB Server

TLS, like its predecessor SSL, uses certificates to validate the authenticity of the communication. You can either use a certificate created by a commercial certificate provider (such as DigiCert, GeoTrust, or Symantec) or you can create your own certificate. If you use a commercial provider, that provider also handles the authentication of the certificate. If you create a local or self-signed certificate, you need to provide the certificate and authentication to the server and clients yourself.

If you purchase a commercial certificate, the server configuration must include a pointer to the certificate in the <keystore> element. So, for example, if the path to the certificate is /etc/ssl/certificate, you can enable TLS for all external interfaces by including the following XML in the database configuration file:

<ssl enabled="true" external="true">
 <keystore path="/etc/ssl/certificate" password="mysslpassword"/>
</ssl>

If you choose to use a locally created certificate, you must first generate the certificate key store and trust store. You can create a local certificate using the Java keytool utility. Creating the key store and trust store requires several steps, including:

1.Creating a key store and password
2.Creating a certificate signing request
3.Creating and signing the certificate
4.Importing the certificate into the key store
5.Creating the associated trust store

There are a number of different options when performing this task. It is important to understand how these options affect the resulting certificate. Be sure to familiarize yourself with the documentation of the keytool utility before creating your own certificate.
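A common source of trouble with locally created certificates is a key store the server cannot actually open — a wrong path, a wrong password, or an unexpected store type. One way to rule this out, after running the keytool commands, is to load the store from a short standalone program using the standard JDK KeyStore API. This is not a VoltDB API; the file name and password below are the ones used in the keytool example that follows, and the store type produced by keytool may be JKS or PKCS12 depending on your Java version.

import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class CheckKeystore {
    public static void main(String[] args) throws Exception {
        // Use KeyStore.getInstance("JKS") or ("PKCS12") explicitly if the
        // default type does not match what keytool produced.
        KeyStore store = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream("mydb.keystore")) {
            store.load(in, "mypasswd".toCharArray());
        }
        // List the aliases so you can confirm the expected entry is present.
        for (String alias : Collections.list(store.aliases())) {
            System.out.println("alias: " + alias);
        }
    }
}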
The following example uses some common options to generate a self-signed certificate key store and trust store.

$ keytool -genkey -keystore mydb.keystore \
 -storepass mypasswd -alias mydb \
 -keyalg rsa -validity 365 -keysize 2048
$ keytool -certreq -keystore mydb.keystore \
 -storepass mypasswd -alias mydb \
 -keyalg rsa -file mydb.csr
$ keytool -gencert -keystore mydb.keystore \
 -storepass mypasswd -alias mydb \
 -infile mydb.csr -outfile mydb.cert -validity 365
$ keytool -import -keystore mydb.keystore \
 -storepass mypasswd -alias mydb \
 -file mydb.cert
$ keytool -import -keystore mydb.truststore \
 -storepass mypasswd -alias mydb \
 -file mydb.cert

Once you create the key store and the trust store, you can reference them in the database configuration file to enable TLS when initializing the database root directory. For example:

<ssl enabled="true" external="true">
 <keystore path="/etc/ssl/local/mydb.keystore" password="mypasswd"/>
 <truststore path="/etc/ssl/local/mydb.truststore" password="mypasswd"/>
</ssl>

12.7.2. Choosing What Ports to Encrypt with TLS/SSL

If TLS encryption is enabled, the HTTP port is always encrypted. You can selectively choose to encrypt other ports as well. You specify which ports to encrypt using attributes of the <ssl> element:

•External ports (external), including the client and admin ports
•Internal ports (internal), used for intra-cluster communication between the nodes of the cluster
•Extranet ports (dr), including the replication port used for DR

For each type of port, you specify that the ports are either enabled ("true") or disabled ("false"). The default is false. For example, the following configuration enables TLS encryption on the external, internal, and DR ports:

<ssl enabled="true" external="true" internal="true" dr="true">
 <keystore path="/etc/ssl/local/mydb.keystore" password="mypasswd"/>
 <truststore path="/etc/ssl/local/mydb.truststore" password="mypasswd"/>
</ssl>

Note that if you enable TLS encryption for the DR port, other clusters replicating from this cluster must include the appropriate client configuration when they enable DR. See Section 12.7.5, "Configuring Database Replication (DR) With TLS/SSL" for information on setting up TLS when configuring DR.

Also, enabling TLS encryption on the internal port means that all intra-cluster communication must be encrypted and decrypted as it passes between nodes. Consequently, any operations that require interactions between cluster nodes (such as K-safety or multi-partition transactions) may take longer and therefore impact overall latency. Be sure to benchmark your application with and without TLS encryption before enabling internal port encryption on production systems.

Finally, it is important to note that all TLS-enabled ports, on all servers within a single cluster, use the same certificate.

12.7.3. Using the VoltDB Command Line Utilities with TLS/SSL

Once you enable TLS for the external interfaces on your database servers, you must also enable TLS on the command line utilities so they use the appropriate protocols to connect to the servers. (The voltdb utility is the one exception. Since it only operates on the local server, it does not require a network connection.)

When invoking the command line utilities, such as voltadmin and sqlcmd, you use the --ssl option to activate encryption with TLS-enabled VoltDB servers.
If the servers are using a commercially-provided\ncertificate, you can specify the --ssl option without an argument. For example:\n$ sqlcmd --ssl\nIf the servers are using a local or self-signed certificate you must also specify a Java properties file as an\nargument to the --ssl option. For example:\n$ sqlcmd --ssl=localcert.txt\nThe properties file must declare two properties that specify the path to the trust store and the trust store\npassword, respectively. So, using the trust store generated by the example in Section 12.7.1, “Configuring\nTLS/SSL on the VoltDB Server” , the localcert.txt file could be:\ntrustStore=/etc/ssl/local/mydb.truststore\ntrustStorePassword=mypasswd\n12.7.4. Implementing TLS/SSL in the Java Client Applications\nJust as the command line tools must specify how to connect to an TLS-enabled server, client applications\nmust also establish an appropriate connection. Using the VoltDB Java API, you can enable TLS by setting\nthe appropriate attributes of the client configuration. Specifically, if you are using a self-signed certificate,\nyou must provide the path to the trust store and its password. You can do this using either the .setTrustS-\ntore() or .setTrustStoreConfigFromPropertyFile(). For example, the following two commands are equiva-\nlent, assuming the localcert.txt file matches the properties file described in Section 12.7.3, “Using\nthe VoltDB Command Line Utilities with TLS/SSL” :\nclientConfig.setTrustStore(\"/etc/ssl/local/mydb.truststore\", \"mypasswd\");\nclientConfig.setTrustStoreConfigFromPropertyFile(\"localcert.txt\");\nAfter setting the trust store properties you can enable TLS communication using the .enableSSL() method\nand create the client connection. For example:\nClientConfig clientConfig = new ClientConfig(\"JDoe\", \"JDsPasswd\");\nclientConfig.setTrustStoreConfigFromPropertyFile(\"localcert.txt\");\nclientConfig.enableSSL();\nclient = ClientFactory.createClient(clientConfig);\nWhen using a commercially generated certificate, you do not need to specify the trust store and can use\njust the .enableSSL() method.\n12.7.5. Configuring Database Replication (DR) With TLS/SSL\nWhen using TLS encryption on the DR port, the DR snapshots and binary logs are encrypted as they\npass from the producer cluster to the consumer cluster. This means that the producer must not only have\nTLS enabled for the DR port, but the consumer cluster must use the appropriate TLS credentials when\nit contacts the producer.\nSo, for example, in passive DR, the master cluster must have TLS enabled for the DR port and the replica\nmust be configured to use TLS when connecting to the master. In XDCR, you enable TLS for all clusters\nin the XDCR relationship. So each cluster must both enable TLS for its DR port as well as configure TLS\nfor its connections to the other clusters.\nSection 12.7.1, “Configuring TLS/SSL on the VoltDB Server” describes how to enable TLS encryption\nfor the DR port, which must be done before the cluster starts. To configure TLS connectivity at the other\nend, you add the ssl attribute to the <connection> element within the DR configuration. 
The value\n121Security\nof the ssl attribute is either blank — for commercial certificates — or the path to a Java properties file\nspecifying the trust store and password for the remote cluster(s) when using a locally-generated certificate.\nThese attribute values are the same as the --ssl argument you use when running the command line\nutilities described in Section 12.7.3, “Using the VoltDB Command Line Utilities with TLS/SSL” .\nFor example, when configuring TLS encryption for passive DR, the master cluster must enable TLS on\nthe DR port and the replica must specify use of TLS in the <connection> element. The respective\nconfiguration files might look like this:\nMaster Cluster <ssl enabled=\"true\" dr=\"true\" >\n <keystore path=\"/etc/ssl/local/mydb.keystore\" password=\"mypasswd\"/>\n <truststore path=\"/etc/ssl/local/mydb.truststore\" password=\"mypasswd\"/>\n</ssl>\nReplica Cluster <dr id=\"2\" role=\"replica\">\n <connection source=\"MasterSvrA,MasterSvrB\" ssl=\"/usr/local/mastercert.txt\" >\n</dr>\nNote that the replica does not need to enable TLS for its DR port, since it is a consumer and its own port\nis not used.\nFor XDCR, each cluster must both enable DR for its own port and specify the TLS credentials for the\nremote clusters. The configuration file might look like this:\nXDCR Cluster <ssl enabled=\"true\" dr=\"true\" >\n <keystore path=\"/etc/ssl/local/mydb.keystore\" password=\"mypasswd\"/>\n <truststore path=\"/etc/ssl/local/mydb.truststore\" password=\"mypasswd\"/>\n</ssl>\n<dr id=\"1\" role=\"xdcr\">\n <connection source=\"NYCSvrA,NYCSvrB\" ssl=\"/usr/local/nyccert.txt\" >\n</dr>\nNote that when using locally-generated certificates, there is only one properties file specified in the ssl\nattribute. So all of the clusters in the XDCR relationship must use the same certificate. When using com-\nmercially purchased certificates, the ssl attributes is left blank; so each cluster can, if you choose, use\na separate certificate.\n12.8. Integrating Kerberos Security with VoltDB\nFor environments where more secure communication is required than hashed usernames and passwords, it\nis possible for a VoltDB database to use Kerberos to authenticate clients and servers. Kerberos is a popular\nnetwork security protocol that you can use to authenticate the Java client processes when they connect to\nVoltDB database servers. Use of Kerberos is supported for the Java client library and JSON interface only.\nTo use Kerberos authentication for VoltDB security, you must perform the following steps:\n1.Set up and configure Kerberos on your network, servers, and clients.\n2.Install and configure the Java security extensions on your VoltDB servers and clients.\n3.Configure the VoltDB cluster and client applications to use Kerberos.\nThe following sections describe these steps in detail.\n12.8.1. Installing and Configuring Kerberos\nKerberos is a complete software solution for establishing a secure network environment. It includes net-\nwork protocols and software for handling authentication and authorization in a secure, encrypted fashion.\n122Security\nKerberos requires one or more servers known as key distribution centers (KDC) to authenticate and au-\nthorize services and the users who access them.\nTo use Kerberos for VoltDB authentication you must first set up Kerberos within your network environ-\nment. If you do not already have a Kerberos KDC, you will need to create one. 
You will also need to install\nthe Kerberos client libraries on all of the VoltDB servers and clients and set up the appropriate principals\nand services. Because Kerberos is a complete network environment rather than a single platform applica-\ntion, it is beyond the scope of this document to explain how to install and configure Kerberos itself. This\nsection only provides notes specific to configuring Kerberos for use by VoltDB. For complete information\nabout setting up and using Kerberos, please see the Kerberos documentation .\nPart of the Kerberos setup is the creation of a configuration file on both the VoltDB server and client ma-\nchines. By default, the configuration file is located in /etc/krb5.conf on Linux systems. (On Mac-\nintosh systems, the configuration file is edu.mit.Kerberos located either in ~/Library/Pref-\nerences/ or /Library/Preferences/ .) Be sure this file exists and points to the correct realm\nand KDC.\nOnce a KDC exists and the nodes are configured correctly, you must create the necessary Kerberos ac-\ncounts — known as \"user principals\" for the accounts that run the VoltDB client applications and a \"ser-\nvice principal\" for the VoltDB cluster. If you intend to use the web-based VoltDB Management Center\nor the JSON interface, you will also want to create a host and HTTP service principle for each server as\nwell. For example, to create the service keytab file for the VoltDB database, you can issue the following\ncommands on the Kerberos KDC:\n$ sudo kadmin.local\nkadmin.local: addprinc -randkey service/voltdb\nkadmin.local: ktadd -k voltdb.keytab service/voltdb\nThen copy the keytab file to the database servers, making sure it is only accessible by the user account\nthat starts the database process:\n$ scp voltdb.keytab voltadmin@voltsvr:voltdb.keytab\n$ ssh voltadmin@voltsvr chmod 0600 voltdb.keytab\nYou can then create host and HTTP service principles for each server in the cluster and write them to a\nserver-specific keytab. For example, to create a keytab file for the database node server1, the command\nwould be the following:\n$ sudo kadmin.local\nkadmin.local: addprinc -randkey host/server1.mycompany.lan\nkadmin.local: addprinc -randkey HTTP/server1.mycompany.lan\n \nkadmin.local: ktadd -k server1.mycompany.lan.keytab HTTP/server1.mycompany.lan\nkadmin.local: ktadd -k server1.mycompany.lan.keytab host/server1.mycompany.lan\n12.8.2. Installing and Configuring the Java Security Exten-\nsions\nThe next step is to install and configure the Java security extension known as Java Cryptography Extension\n(JCE). JCE enables the more robust encryption required by Kerberos within the Java Authentication and\nAuthorization Service (JAAS). This is necessary because VoltDB uses JAAS to interact with Kerberos.\nThe JCE that needs to be installed is specific to the version of Java you are running. See the the Java web\nsite for details. Again, you must install JCE on both the VoltDB servers and client nodes\n123Security\nOnce JCE is installed, you create a JAAS login configuration file so Java knows how to authenticate the\ncurrent process. By default, the JAAS login configuration file is $HOME/.java.login.config . 
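If you keep the client's login configuration somewhere other than this default location, the JVM must be told where to find it. The standard mechanism is the java.security.auth.login.config system property — the same property the command line examples later in this section pass with -J-D. The following hedged sketch sets it programmatically before configuring the VoltDB client; the file path is a placeholder, and the "VoltDBClient" module name matches the client configuration shown below.

import org.voltdb.client.Client;
import org.voltdb.client.ClientConfig;
import org.voltdb.client.ClientFactory;

public class KerberosClientSetup {
    public static void main(String[] args) throws Exception {
        // Point JAAS at a non-default login configuration before any
        // Kerberos authentication is attempted.
        System.setProperty("java.security.auth.login.config",
                           "/opt/myapp/myclient.kerberos.conf");

        ClientConfig config = new ClientConfig();
        config.enableKerberosAuthentication("VoltDBClient"); // JAAS module name
        Client client = ClientFactory.createClient(config);
        client.createConnection("voltsvr");
        // ... issue procedure calls ...
        client.close();
    }
}

The contents of the login configuration files themselves are shown next.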
On\nthe database servers, the configuration file must define the VoltDBService module and associate it with\nthe keytab created in the previous section.\nTo enable Kerberos access from the web-based VoltDB Management Center and JSON interface, you must\nalso include entries for the Java Generic Security Service (JGSS) declaring the VoltDB service principle\nand the server's HTTP service principle. For example:\nServer JAAS Login Configuration File\nVoltDBService {\n com.sun.security.auth.module.Krb5LoginModule required\n useKeyTab=true keyTab=\"/home/voltadmin/voltdb.keytab\"\n doNotPrompt=true\n principal=\"service/voltdb@MYCOMPANY.LAN\" storeKey=true;\n};\ncom.sun.security.jgss.initiate { \n com.sun.security.auth.module.Krb5LoginModule required \n principal=\"service/voltdb@MYCOMPANY.LAN\" \n keyTab=\"/home/voltadmin/voltdb.keytab\" \n useKeyTab=true \n storeKey=true \n debug=false; \n}; \n \ncom.sun.security.jgss.accept { \n com.sun.security.auth.module.Krb5LoginModule required \n principal=\"HTTP/server1.mycompany.lan@MYCOMPANY.LAN\" \n useKeyTab=true \n keyTab=\"/etc/krb5.keytab\" \n storeKey=true \n debug=false \n isInitiator=false; \n};\nOn the client nodes, the JAAS login configuration defines the VoltDBClient module.\nClient JAAS Login Configuration File\nVoltDBClient {\n com.sun.security.auth.module.Krb5LoginModule required\n useTicketCache=true renewTGT=true doNotPrompt=true;\n};\n12.8.3. Configuring the VoltDB Servers and Clients\nFinally, once Kerberos and the Java security extensions are installed and configured, you must configure\nthe VoltDB database cluster and client applications to use Kerberos.\nOn the database servers, you enable Kerberos security using the <security> element when you initialize\nthe database root directory, specifying \"kerberos\" as the provider. For example:\n<?xml version=\"1.0\"?>\n124Security\n<deployment>\n <security enabled=\"true\" provider=\"kerberos\"/>\n . . .\n</deployment>\nYou then assign roles to individual users as described in Section 12.3, “Defining Users and Roles” , except\nin place of generic usernames, you specify the Kerberos user — or \"principal\" — names, including their\nrealm. Since Kerberos uses encrypted certificates, the password attribute is ignored and can be filled in\nwith arbitrary text. For example:\n<?xml version=\"1.0\"?>\n<deployment>\n <security enabled=\"true\" provider=\"kerberos\"/>\n . . .\n <users>\n <user name=\"mtwain@MYCOMPANY.LAN\" password=\"n/a\" roles=\"administrator\"/>\n <user name=\"cdickens@MYCOMPANY.LAN\" password=\"n/a\" roles=\"dev\"/>\n <user name=\"hbalzac@MYCOMPANY.LAN\" password=\"n/a\" roles=\"adhoc\"/>\n </users>\n</deployment>\nHaving configured Kerberos in the configuration file, you are ready to initialize and start the VoltDB\ncluster. When starting the VoltDB process, Java must know how to access the Kerberos and JAAS login\nconfiguration files created in the preceding sections. If the files are not in their default locations, you\ncan override the default location using the VOLTDB_OPTS environment variable and setting the flags\njava.security.krb5.conf and java.security.auth.login.config , respectively.1\nIn Java client applications, you specify Kerberos as the security protocol when you create the client con-\nnection, using the enableKerberosAuthentication method as part of the configuration. 
For example:

import org.voltdb.client.Client;
import org.voltdb.client.ClientConfig;
import org.voltdb.client.ClientFactory;

ClientConfig config = new ClientConfig();
// specify the JAAS login module
config.enableKerberosAuthentication("VoltDBClient");

Client client = ClientFactory.createClient(config);
client.createConnection("voltsvr");

Note that the VoltDB client automatically picks up the Kerberos cached credentials of the current process, the user's Kerberos "principal". So you do not need to — and should not — specify a username or password as part of the VoltDB client configuration.

When using the VoltDB JDBC client interface, you can enable Kerberos by setting the kerberos property on the connection to match the settings in the Java API. For example, you can enable Kerberos by setting the property on the connection string as a query parameter:

Class.forName("org.voltdb.jdbc.Driver");
Connection c = DriverManager.getConnection(
 "jdbc:voltdb://svr1:21212,svr2:21212?kerberos=VoltDBClient");

Alternately, you can supply a list of properties, including the kerberos property, when you initialize the connection:

Class.forName("org.voltdb.jdbc.Driver");
Properties props = new Properties();
props.setProperty("kerberos", "VoltDBClient");
Connection c = DriverManager.getConnection(
 "jdbc:voltdb://svr1:21212,svr2:21212", props);

1 On Macintosh systems, you must always specify the java.security.krb5.conf property.

12.8.4. Accessing the Database from the Command Line and the Web

It is also important to note that once the cluster starts using Kerberos authentication, only Java, JDBC, JSON, and Python clients can connect to the cluster and they must use Kerberos authentication to do it. The same is true for the CLI commands, such as sqlcmd and voltadmin. To authenticate to a VoltDB server with Kerberos security enabled using the Java-based utilities sqlcmd and csvloader, you must include the --kerberos flag identifying the name of the Kerberos client service module. For example:

$ sqlcmd --kerberos=VoltDBClient

If the configuration files are not in the default location, you must specify their location on the command line:

$ sqlcmd --kerberos=VoltDBClient \
 -J-Djava.security.auth.login.config=myclient.kerberos.conf

To use the Python API or Python-based voltadmin utility, you must first make sure you have the python-gssapi package installed. Then, log in to your Kerberos account using kinit before invoking the Python client. When using the voltadmin utility, you must also include the --kerberos flag, but you do not need to specify any argument since it picks up the credentials in the Kerberos user's cache. For example:

$ voltadmin shutdown --kerberos

To use the VoltDB Management Center or the JSON interface to access the database, your web browser must be configured to use the Simple and Protected GSS-API Negotiation Mechanism (also known as SPNEGO). See your web browser's help for instructions on configuring SPNEGO.

Chapter 13. Saving & Restoring a VoltDB Database

There are times when it is necessary to save the contents of a VoltDB database to disk and then restore it. For example, if the cluster needs to be shut down for maintenance, you may want to save the current state of the database before shutting down the cluster and then restore the database once the cluster comes back online.
Performing periodic backups of the data can also provide a fallback in case of unexpected failures — either physical failures, such as power outages, or logic errors where a client application mistakenly corrupts the database contents.

VoltDB provides shell commands, system procedures, and an automated snapshot feature that help you perform these operations. The following sections explain how to save and restore a running VoltDB cluster, either manually or automatically.

13.1. Performing a Manual Save and Restore of a VoltDB Cluster

Manually saving and restoring a VoltDB database is useful when you need to modify the database's physical structure or make schema changes that cannot be made to a running database. For example, changing the K-safety value, the number of sites per host, or changing the partitioning column of a partitioned table. The normal way to perform such a maintenance operation using save and restore is as follows:

1.Stop database activities (using pause).
2.Use save to write a snapshot of the current data to disk.
3.Shut down the cluster.
4.Make changes to the VoltDB schema, cluster configuration, and/or configuration file as desired.
5.Reinitialize the database with the modified configuration file, using voltdb init --force.
6.Restart the cluster in admin mode, using voltdb start --pause.
7.Optionally, reload the schema and stored procedures (if you are changing the schema).
8.Restore the previous snapshot.
9.Restart client activity (using resume).

The key is to make sure that all database activity is stopped before the save and shutdown are performed. This ensures that no further changes to the database are made (and therefore lost) after the save and before the shutdown. Similarly, it is important that no client activity starts until the database has started and the restore operation completes.

Also note that Step #7, reloading the schema, is optional. If you are going to reuse the same schema in a new database instance, the restore operation will automatically load the schema from the snapshot itself. If you want to modify the schema in any way, such as changing indexes or tables and columns, you should load the modified schema before restoring the data from the snapshot. If the database schema is not empty (that is, there are tables already defined), only the data is loaded from the snapshot. See Section 13.1.3.2, "Modifying the Database Schema and Stored Procedures" for more information on modifying the schema when restoring snapshots.

Save and restore operations are performed either by calling VoltDB system procedures or using the corresponding voltadmin shell commands. In most cases, the shell commands are simpler since they do not require program code to use. Therefore, this chapter uses voltadmin commands in the examples. If you are interested in programming the save and restore procedures, see Appendix G, System Procedures for more information about the corresponding system procedures.

When you issue a save command, you specify a path where the data will be saved and a unique identifier for tagging the files. VoltDB then saves the current data on each node of the cluster to a set of files at the specified location (using the unique identifier as a prefix to the file names).
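If you prefer to drive the save from program code rather than the shell, the corresponding system procedure is @SnapshotSave (see Appendix G for the authoritative parameter list). The following minimal sketch assumes an already-connected client; the path and identifier mirror the voltadmin example in Section 13.1.1, and the trailing 1 requests a blocking snapshot.

import org.voltdb.VoltTable;
import org.voltdb.client.Client;
import org.voltdb.client.ClientResponse;

public class SaveSnapshot {
    // 'client' is assumed to be a connected org.voltdb.client.Client instance.
    static void saveSnapshot(Client client) throws Exception {
        ClientResponse response = client.callProcedure(
                "@SnapshotSave", "/tmp/voltdb/backup", "TestSnapshot", 1);
        if (response.getStatus() != ClientResponse.SUCCESS) {
            throw new RuntimeException("Snapshot failed: " + response.getStatusString());
        }
        // The results report per-node, per-table status; print them so any
        // partial failures are visible, just as the shell command does.
        for (VoltTable statusTable : response.getResults()) {
            System.out.println(statusTable.toString());
        }
    }
}

Whichever interface you use, the result is the same set of files written on every node.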
This set of files is referred\nto as a snapshot, since it contains a complete record of the database for a given point in time (when the\nsave operation was performed).\nThe --blocking option lets you specify whether the save operation should block other transactions\nuntil it completes. In the case of manual saves, it is a good idea to use this option since you do not want\nadditional changes made to the database during the save operation.\nNote that every node in the cluster uses the same absolute path, so the path specified must be valid, must\nexist on every node, and must not already contain data from any previous saves using the same unique\nidentifier, or the save will fail.\nWhen you issue a restore command, you specify the same absolute path and unique identifier used when\ncreating the snapshot. VoltDB checks to make sure the appropriate save set exists on each node, then\nrestores the data into memory.\n13.1.1. How to Save the Contents of a VoltDB Database\nTo save the contents of a VoltDB database, use the voltadmin save command. The following example\ncreates a snapshot at the path /tmp/voltdb/backup using the unique identifier TestSnapshot .\n$ voltadmin save --blocking /tmp/voltdb/backup \"TestSnapshot\"\nIn this example, the command tells the save operation to block all other transactions until it completes. It is\npossible to save the contents without blocking other transactions (which is what automated snapshots do).\nHowever, when performing a manual save prior to shutting down, it is normal to block other transactions\nto ensure you save a known state of the database.\nNote that it is possible for the save operation to succeed on some nodes of the cluster and not others. When\nyou issue the voltadmin save command, VoltDB displays messages from each partition indicating the\nstatus of the save operation. If there are any issues that would stop the process from starting, such as a\nbad file path, they are displayed on the console. It is a good practice to examine these messages to make\nsure all partitions are saved as expected.\nNote that it is also possible to issue the voltadmin save command without arguments. In that case the\nsnapshot is saved to the default snapshots folder in the database root directory. This can be useful because\nthe voltdb start command can automatically restore the latest snapshot in that directory as described in\nthe next section.\n13.1.2. How to Restore the Contents of a VoltDB Database\nManually\nThe easiest way to restore a snapshot is to let VoltDB do it for you as part of the recover operation. If\nyou are not changing the cluster configuration you can use an automated snapshot or other snapshot saved\n128Saving & Restoring\na VoltDB Database\ninto the voltdbroot/snapshots directory by simply restarting the cluster nodes using the voltdb\nstart command. With the start action VoltDB automatically starts and restores the most recent snapshot.\nIf command logging is enabled, it also replays any logs after the snapshot. This approach has the added\nbenefit that VoltDB automatically loads the previous schema as well as part of the snapshot.\nHowever, you cannot use voltdb start to restore a snapshot if the physical configuration of the cluster has\nchanged or if you want to restore an earlier snapshot or a snapshot stored in an alternate location. 
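When the goal is to go back to an earlier snapshot or one stored in a different directory, it can help to first confirm what is actually available there. The @SnapshotScan system procedure (described further in Section 13.3, "Managing Snapshots") lists the snapshots under a given path; the sketch below assumes a connected client and an illustrative directory, and simply prints whatever the procedure returns.

import org.voltdb.VoltTable;
import org.voltdb.client.Client;
import org.voltdb.client.ClientResponse;

public class ListSnapshots {
    // 'client' is assumed to be a connected org.voltdb.client.Client instance.
    static void listSnapshots(Client client, String directory) throws Exception {
        ClientResponse response = client.callProcedure("@SnapshotScan", directory);
        for (VoltTable snapshots : response.getResults()) {
            // Each result table describes the snapshots (and their component
            // files) found under the specified directory.
            System.out.println(snapshots.toString());
        }
    }
}

Knowing which snapshots exist, and where, makes it easier to pick the one to bring back.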
In these\ncases you must do a manual restore.\nTo manually restore a VoltDB database from a snapshot previously created by a save operation, you can\ncreate a new database instance and use the voltadmin restore command. So, for example, if you modify\nthe configuration, you must re-initialize the root directory with the new configuration file, using the --\nforce flag to overwrite the previous configuration and database content:\n$ voltdb init --config=newconfig.xml --force\nThen you can start the reconfigured database, which creates a new empty database. It is also a good idea\nto start the database in admin mode by including the --pause flag:\n$ voltdb start --pause\nFinally, you restore the previously saved snapshot using the same pathname and unique identifier used\nduring the save. The following example restores the snapshot created by the example in Section 13.1.1\nand resumes normal operation (that is, exits admin mode).\n$ voltadmin restore /tmp/voltdb/backup \"TestSnapshot\"\n$ voltadmin resume\nAs with save operations, it is always a good idea to check the status information displayed by the command\nto ensure the operation completed as expected.\n13.1.3. Changing the Cluster Configuration Using Save and\nRestore\nMost changes to a VoltDB database can be made \"on the fly\" while the database is running. Adding\nand removing tables, enabling and disabling database features such as import and export, and adding or\nupdating stored procedures can all be done while the database is active. However, between a save and a\nrestore, it is possible to make changes to the database and cluster configuration that cannot be made on\na running cluster. For example, you can:\n•Add or remove nodes from the cluster\n•Modify the schema and/or stored procedures that:\n•Change partitioned tables to replicated and vice versa\n•Change the partitioning column on partitioned tables\n•Add unique indexes to tables with existing data\n•Change the number of sites per host\n•Change the K-safety value\nThe following sections discuss these procedures in more detail.\n129Saving & Restoring\na VoltDB Database\n13.1.3.1. Adding and Removing Nodes from the Database\nTo add nodes to the cluster, use the following procedure:\n1.Save the database with the voltadmin save command.\n2.Shutdown and re-initialize the database root directories on each node (including initializing new root\ndirectories for the nodes you are adding).\n3.Start the cluster (including the new nodes) specifying the new server count with the --count argument\nto the voltdb start command.\n4.Restore the database with the voltadmin restore command..\nWhen the snapshot is restored, the database (and partitions) are redistributed over the new cluster config-\nuration.\nIt is also possible to remove nodes from the cluster using this procedure. However, to make sure that no\ndata is lost in the process, you must copy the snapshot files from the nodes that are being removed to one\nof the nodes that is remaining in the cluster. This way, the restore operation can find and restore the data\nfrom partitions on the missing nodes.\n13.1.3.2. Modifying the Database Schema and Stored Procedures\nThe easiest and recommended way to change the database schema is by sending the appropriate SQL\ndatabase definition language (DDL) statements to the sqlcmd utility. Similarly you can update the stored\nprocedures on a running database using the LOAD CLASSES and REMOVE CLASSES directives.\nHowever, there are a few changes that cannot be made to a running database,. 
For example, changing the partitioning column of a table if the table contains data. For these changes, you must use save and restore to change the schema.
To modify the database schema or stored procedures between a save and restore, make the appropriate changes to the source files (that is, the database DDL and the stored procedure Java source files). If you modify the stored procedures, be sure to repackage any Java stored procedures into a JAR file. Then you can:
1. Save the database with the voltadmin save command.
2. Shutdown and re-initialize the database root directories on each node.
3. Start the cluster with the voltdb start command.
4. Load the modified schema and stored procedures using sqlcmd.
5. Restore the database contents with the voltadmin restore command.
Two points to note when modifying the database structure before restoring a snapshot are:
•When existing rows are restored to tables where new columns have been added, the new columns are filled with either the default value (if defined by the schema) or nulls.
•When changing the datatypes of columns, it is possible to decrease the datatype size (for example, going from an INT to a TINYINT). However, if any existing values exceed the capacity of the new datatype (such as an integer value of 5,000 where the datatype has been changed to TINYINT), the entire restore will fail.
If you remove or modify stored procedures (particularly if you change the number and/or datatype of the parameters), you must make sure the corresponding changes are made to client applications as well.
13.2. Scheduling Automated Snapshots
Save and restore are useful when planning for scheduled down times. However, these functions are also important for reducing the risk from unexpected outages. VoltDB assists in contingency planning and recovery from such worst case scenarios as power failures, fatal system errors, or data corruption due to application logic errors.
In these cases, the database stops unexpectedly or becomes unreliable. By automatically generating snapshots at set intervals, VoltDB gives you the ability to restore the database to a previous valid state.
You schedule automated snapshots of the database as part of the configuration file. The <snapshot> tag lets you specify:
•The frequency of the snapshots. You can specify any whole number of seconds, minutes, or hours (using the suffix "s", "m", or "h", respectively, to denote the unit of measure). For example "3600s", "60m", and "1h" are all equivalent. The default frequency is 24 hours.
•The unique identifier to use as a prefix for the snapshot files. The default prefix is "AUTOSNAP".
•The number of snapshots to retain. Snapshots are marked with a timestamp (as part of the file names), so multiple snapshots can be saved. The retain attribute lets you specify how many snapshots to keep. Older snapshots are purged once this limit is reached. The default number of snapshots retained is two.
The following example enables automated snapshots every thirty minutes using the prefix "flightsave" and keeping only the three most recent snapshots.
<snapshot prefix="flightsave" 
          frequency="30m" 
          retain="3"
/>
By default, automated snapshots are stored in a snapshots subfolder of the VoltDB root directory (as described in Section 3.7.2, “Configuring Paths for Runtime Features”).
You can save the snapshots to a\nspecific path by adding the <snapshots> tag within to the <paths>...</paths> tag set. For example, the\nfollowing example defines the path for automated snapshots as /etc/voltdb/autobackup/ .\n<paths>\n <snapshots path=\"/etc/voltdb/autobackup/\" />\n</paths>\n13.3. Managing Snapshots\nVoltDB does not delete snapshots after they are restored; the snapshot files remain on each node of the\ncluster. For automated snapshots, the oldest snapshot files are purged according to the settings in the\nconfiguration file. But if you create snapshots manually or if you change the directory path or the prefix\nfor automated snapshots, the old snapshots will also be left on the cluster.\nTo simplify maintenance, it is a good idea to observe certain guidelines when using save and restore:\n•Create dedicated directories for use as the paths for VoltDB snapshots.\n•Do not store any other files in the directories used for VoltDB snapshots.\n131Saving & Restoring\na VoltDB Database\n•Periodically cleanup the directories by deleting obsolete, unused snapshots.\nYou can delete snapshots manually. To delete a snapshot, use the unique identifier, which is applied as\na filename prefix, to find all of the files in the snapshot. For example, the following commands remove\nthe snapshot with the ID TestSave from the directory /etc/voltdb/backup/. Note that VoltDB separates the\nprefix from the remainder of the file name with a dash for manual snapshots:\n$ rm /etc/voltdb/backup/TestSave-*\nHowever, it is easier if you use the system procedures VoltDB provides for managing snapshots. If you\ndelete snapshots manually, you must make sure you execute the commands on all nodes of the cluster.\nWhen you use the system procedures, VoltDB distributes the operations across the cluster automatically.\nVoltDB provides several system procedures to assist with the management of snapshots:\n•@Statistics \"SNAPSHOTSTATUS\" provides information about the most recently performed snapshots\nfor the current database. The response from @Statistics for this selector includes information about\nup to ten recent snapshots, including their location, when they were created, how long the save took,\nwhether they completed successfully, and the size of the individual files that make up the snapshot. See\nthe reference section on @Statistics for details.\n•@SnapshotScan lists all of the snapshots available in a specified directory path. You can use this system\nprocedure to determine what snapshots exist and, as a consequence, which ought to be deleted. See the\nreference section on @SnapshotScan for details.\n•@SnapshotDelete deletes one or more snapshots based on the paths and prefixes you provide. The\nparameters to the system procedure are two string arrays. The first array specifies one or more directory\npaths. The second array specifies one or more prefixes. The array elements are taken in pairs to determine\nwhich snapshots to delete. For example, if the first array contains paths A, B, and C and the second\narray contains the unique identifiers X, Y, and Z, the following three snapshots will be deleted: A/X,\nB/Y, and C/Z. See the reference section on @SnapshotDelete for details.\n13.4. Special Notes Concerning Save and Restore\nThe following are special considerations concerning save and restore that are important to keep in mind:\n•Save and restore do not check the cluster health (whether all nodes exist and are running) before exe-\ncuting. 
The user can find out what nodes were saved by looking at the messages displayed by the save\noperation.\n•Both the save and restore calls do a pre-check to see if the action is likely to succeed before the actual\nsave/restore is attempted. For save, VoltDB checks to see if the path exists, if there is any data that\nmight be overwritten, and if it has write access to the directory. For restore, VoltDB verifies that the\nsaved data can be restored completely.\n•It is possible to provide additional protection against failure by copying the automated snapshots to\nremote locations. Automated snapshots are saved locally on the cluster. However, you can set up a\nnetwork process to periodically copy the snapshot files to a remote system. (Be sure to copy the files\nfrom all of the cluster nodes.) Another approach would be to save the snapshots to a SAN disk that is\nalready set up to replicate to another location. (For example, using iSCSI.)\n132Chapter 14. Command Logging and\nRecovery\nBy executing transactions in memory, VoltDB, frees itself from much of the management overhead and I/\nO costs of traditional database products. However, accidents do happen and it is important that the contents\nof the database be safeguarded against loss or corruption.\nSnapshots provide one mechanism for safeguarding your data, by creating a point-in-time copy of the\ndatabase contents. But what happens to the transactions that occur between snapshots?\nCommand logging provides a more complete solution to the durability and availability of your VoltDB\ndatabase. Command logging keeps a record of every transaction (that is, stored procedure) as it is execut-\ned. Then, if the servers fail for any reason, the database can restore the last snapshot and \"replay\" the\nsubsequent logs to re-establish the database contents in their entirety.\nThe key to command logging is that it logs the invocations, not the consequences, of the transactions. A\nsingle stored procedure can include many individual SQL statements and each SQL statement can modify\nhundreds or thousands of table rows. By recording only the invocation, the command logs are kept to a\nbare minimum, limiting the impact the disk I/O will have on performance.\nHowever, any additional processing can impact overall performance, especially when it involves disk I/O.\nSo it is important to understand the tradeoffs concerning different aspects of command logging and how\nit interacts with the hardware and any other options you are utilizing. The following sections explain how\ncommand logging works and how to configure it to meet your specific needs.\n14.1. How Command Logging Works\nWhen command logging is enabled, VoltDB keeps a log of every transaction (that is, stored procedure)\ninvocation. At first, the log of the invocations are held in memory. Then, at a set interval the logs are\nphysically written to disk. Of course, at a high transaction rate, even limiting the logs to just invocations,\nthe logs begin to fill up. So at a broader interval, the server initiates a snapshot. Once the snapshot is\ncomplete, the command logging process is able to free up — or \"truncate\" — the log keeping only a record\nof procedure invocations since the last snapshot.\nThis process can continue indefinitely, using snapshots as a baseline and loading and truncating the com-\nmand logs for all transactions since the last snapshot.\nFigure 14.1. 
Command Logging in Action\nThe frequency with which the transactions are written to the command log is configurable (as described in\nSection 14.3, “Configuring Command Logging for Optimal Performance” ). By adjusting the frequency and\n133Command Logging and Recovery\ntype of logging (synchronous or asynchronous) you can balance the performance needs of your application\nagainst the level of durability desired.\nIn reverse, when it is time to \"replay\" the logs, you start the database and the server nodes establish a\nquorum, the first thing the database servers do is restore the most recent snapshot. Then they replay all of\nthe transactions in the log since that snapshot.\nFigure 14.2. Recovery in Action\n14.2. Controlling Command Logging\nCommand logging is enabled by default in the VoltDB Enterprise Edition. Using command logging is\nrecommended to ensure durability of your data. However, you can choose whether to have command\nlogging enabled or not using the <commandlog> element in the configuration file. For example:\n<deployment>\n <cluster kfactor=\"1\" />\n <commandlog enabled=\"true\"/>\n</deployment>\nIn its simplest form, the <commandlog/> tag enables or disables command logging by setting the en-\nabled attribute to \"true\" or \"false\". You can also use other attributes and child elements to control specific\ncharacteristics of command logging. The following section describes those options in detail.\n14.3. Configuring Command Logging for Optimal\nPerformance\nCommand logging can provide complete durability, preserving a record of every transaction that is com-\npleted before the database stops. However, the amount of durability must be balanced against the perfor-\nmance impact and hardware requirements to achieve effective I/O.\nVoltDB provides three settings you can use to optimize command logging:\n•The amount of disk space allocated to the command logs\n•The frequency between writes to the command logs\n•Whether logging is synchronous or asynchronous\nThe following sections describe these options. A fourth section discusses the impact of storage hardware\non the different logging options.\n134Command Logging and Recovery\n14.3.1. Log Size\nThe command log size specifies how much disk space is preallocated for storing the logs on disk. The\nlogs are divided into three \"segments\" Once a segment is full, it is written to a snapshot (as shown in\nFigure 14.1, “Command Logging in Action” ).\nFor most workloads, the default log size of one gigabyte is sufficient. However, if your workload writes\nlarge volumes of data or uses large strings for queries (so the procedure invocations include large parame-\nter values), the log segments fill up very quickly. When this happens, VoltDB can end up snapshotting\ncontinuously, because by the time one snapshot finishes, the next log segment is full.\nTo avoid this situation, you can increase the total log size, to reduce the frequency of snapshots. You define\nthe log size in the configuration file using the logsize attribute of the <commandlog> tag. Specify\nthe desired log size as an integer number of megabytes. 
For example:\n<commandlog enabled=\"true\" logsize=\"3072\" />\nWhen increasing the log size, be aware that the larger the log, the longer it may take to recover the database\nsince any transactions in the log since the last snapshot must be replayed before the recovery is complete.\nSo, while reducing the frequency of snapshots, you also may be increasing the time needed to restart.\nThe minimum log size is three megabytes. Note that the log size specifies the initial size. If the existing\nsegments are filled before a snapshot can truncate the logs, the server will allocate additional segments.\n14.3.2. Log Frequency\nThe log frequency specifies how often transactions are written to the command log. In other words, the\ninterval between writes, as shown in Figure 14.1, “Command Logging in Action” . You can specify the\nfrequency in either or both time and number of transactions.\nFor example, you might specify that the command log is written every 200 milliseconds or every 10,000\ntransactions, whichever comes first. You do this by adding the <frequency> element as a child of\n<commandlog> and specifying the individual frequencies as attributes. For example:\n<commandlog enabled=\"true\">\n <frequency time=\"200\" transactions=\"10000\"/>\n</commandlog>\nTime frequency is specified in milliseconds and transaction frequency is specified as the number of trans-\nactions. You can specify either or both types of frequency. If you specify both, whichever limit is reached\nfirst initiates a write.\n14.3.3. Synchronous vs. Asynchronous Logging\nIf the command logs are being written asynchronously (which is the default), results are returned to the\nclient applications as soon as the transactions are completed. This allows the transactions to execute un-\ninterrupted.\nHowever, with asynchronous logging there is always the possibility that a catastrophic event (such as a\npower failure) could cause the cluster to fail. In that case, any transactions completed since the last write\nand before the failure would be lost. The smaller the frequency, the less data that could be lost. This is how\nyou \"dial up\" the amount of durability you want using the configuration options for command logging.\nIn some cases, no loss of data is acceptable. For those situations, it is best to use synchronous logging . When\nyou select synchronous logging, no results are returned to the client applications until those transactions\n135Command Logging and Recovery\nare written to the log. In other words, the results for all of the transactions since the last write are held on\nthe server until the next write occurs.\nThe advantage of synchronous logging is that no transaction is \"complete\" and reported back to the calling\napplication until it is guaranteed to be logged — no transactions are lost. The obvious disadvantage of\nsynchronous logging is that the interval between writes (i.e. the frequency) while the results are held, adds\nto the latency of the transactions. To reduce the penalty of synchronous logging, you need to reduce the\nfrequency.\nWhen using synchronous logging, it is recommended that the frequency be limited to between 1 and 4 mil-\nliseconds to avoid adding undue latency to the transaction rate. A frequency of 1 or 2 milliseconds should\nhave little or no measurable affect on overall latency. 
However, low frequencies can only be achieved effectively when using appropriate hardware (as discussed in the next section, Section 14.3.4, “Hardware Considerations”).
To select synchronous logging, use the synchronous attribute of the <commandlog> tag. For example:
<commandlog enabled="true" synchronous="true" >
  <frequency time="2"/>
</commandlog>
14.3.4. Hardware Considerations
Clearly, synchronous logging is preferable since it provides complete durability. However, to avoid negatively impacting database performance you must not only use very low frequencies, but you must have storage hardware that is capable of handling frequent, small writes. Attempting to use aggressively low log frequencies with storage devices that cannot keep up will also hurt transaction throughput and latency.
Standard, uncached storage devices can quickly become overwhelmed with frequent writes. So you should not use low frequencies (and therefore synchronous logging) with slower storage devices. Similarly, if the command logs are competing for the device with other disk I/O, performance will suffer. So do not write the command logs to the same device that is being used for other I/O, such as snapshots or export overflow.
On the other hand, fast, cached devices such as disks with a battery-backed cache are capable of handling frequent writes. So it is strongly recommended that you use such devices when using synchronous logging.
To specify where the command logs and their associated snapshots are written, you use tags within the <paths> ... </paths> tag set. For example, the following example specifies that the logs are written to /fastdisk/voltdblog and the snapshots are written to /opt/voltdb/cmdsnaps :
<paths>
  <commandlog path="/fastdisk/voltdblog/" />
  <commandlogsnapshot path="/opt/voltdb/cmdsnaps/" />
</paths>
Note that the default paths for the command logs and the command log snapshots are both subfolders of the voltdbroot directory. To avoid overloading a single device on production servers, it is recommended that you specify an explicit path for the command logs, at a minimum, and preferably for both logs and snapshots.
To summarize, the rules for balancing command logging with performance and throughput on production databases are:
•Use asynchronous logging with slower storage devices.
•Write command logs to a dedicated device. Do not write logs and snapshots to the same device.
•Use low (1-2 millisecond) frequencies when performing synchronous logging.
•Use moderate (100 millisecond or greater) frequencies when performing asynchronous logging.
Chapter 15. Streaming Data: Import, Export, and Migration
Earlier chapters discuss features of VoltDB as a standalone component of your business application. But like most technologies, VoltDB is often used within a diverse and heterogeneous computing ecosystem where it needs to "play well" with other services. This chapter describes features of VoltDB that help integrate it with other databases, systems, and applications to simplify, automate, and speed up your business processes.
Just as VoltDB as a database aims to provide the optimal transaction throughput, VoltDB as a data service aims to efficiently and reliably transfer data to and from other services. Of course, you can always write custom code to integrate VoltDB into your application environment, calling stored procedures to move data in and out of the database.
However, the VoltDB feature set simplifies and automates the process of\nstreaming data into, out of, and through VoltDB allowing your application to focus on the important work\nof analyzing, processing, and modifying the data in flight through secure, reliable transactions. To make\nthis possible, VoltDB introduces five key concepts:\n•Streams\n•Import\n•Export\n•Migration\n•Topics\nStreams operate much like regular database tables. You define them with a CREATE statement like tables,\nthey consist of columns and you insert data into streams the same way you insert data into tables using\nthe INSERT statement. You can define views that aggregate the data as it passes through the stream.\nInteractions with streams within a stored procedure are transactional just like tables. The only difference is\na stream does not store any data in the database. This allows you to use all the consistency and reliability\nof a transactional database and the familiar syntax of SQL to manage data \"in flight\" without necessarily\nhaving to save it to persistent storage. Of course, since there is no storage associated with streams, they are\nfor INSERT only. Any attempt to SELECT, UPDATE, or DELETE data from a stream results in an error.\nImport automates the process of pulling data from external sources and inserting it into the database work-\nflow through the same stored procedures your applications use. The import connectors are declared as part\nof the database configuration and stop and start with the database. The key point being that the database\nmanages the entire import process and ensures the durability of the data while it is within VoltDB. Alter-\nnately, you can use one of the VoltDB data loading utilities to push data into the VoltDB database from\na variety of sources.\nExport automates the reverse process from import: it manages copying any data written to an export table\nor stream and sending it to the associated external target, whether it be a file, a service such as Kafka, or\nanother database. The export targets are defined in the database configuration file, while the connection\nof a table or stream to it specific export target is done in the data definition language (DDL) CREATE\nstatement using the EXPORT TO TARGET clause.\nTopics are similar to import and export in that topics let you stream data into and out of the VoltDB\ndatabase. The differences are that a single topic can perform both import and output, there can be multiple\nconsumers and producers for a single topic, and it is the external producers and consumers that control how\n138Streaming Data: Import,\nExport, and Migration\nand when data is transferred rather than VoltDB pulling from and pushing to individual external targets.\nYou identify the stream to use for output to the topic by specifying EXPORT TO TOPIC in the CREATE\nSTREAM statement. You then configure the topic, including the stored procedure to use for input, in the\nconfiguration file. Another difference between export and topics is that, because topics do not have a single\noutput consumer, there is no single event that determines when the data transfer is complete. 
Instead, you must define a retention/expiration policy (based on time or size) for when data is no longer needed and can be deleted from the queue.
Migration is a special case of export where export is more fully integrated into the business workflow. When you define a table with the MIGRATE TO TARGET clause instead of EXPORT TO TARGET, data is not deleted from the VoltDB table until it is successfully written to the associated target. You trigger a migration of data using an explicit MIGRATE statement or you can declare the table with USING TTL to schedule the migration based on a timestamp within the data records and an expiration time defined as the TTL value.
How you configure these features depends on your specific business goals. The bulk of this chapter describes how to declare and configure import, export and migration in detail. The next two sections provide an overview of how data streaming works and how to use these features to perform common business activities.
15.1. How Data Streaming Works in VoltDB
Import associates incoming data with a stored procedure that determines what is done with the data. Export associates a database object (a table or stream) with an external target, where the external target determines how the exported data is handled. But in both cases the handling of streamed data follows three key principles:
•Interaction with the VoltDB database is transactional, providing the same ACID guarantees as all other transactions.
•Interaction with the external system occurs as a separate asynchronous process, avoiding any negative impact on the latency of ongoing transactions in the VoltDB database.
•The VoltDB server takes care of starting and stopping the import and export subsystems when the database starts and stops. The server also takes responsibility for managing streaming data "in flight" — ensuring that no data is lost once it enters the subsystem and before it reaches its final destination.
The VoltDB database achieves these goals by having separate export and import connectors handle the data as it passes from one system to the next, as shown in Figure 15.1, “Overview of Data Streaming”.
Figure 15.1. Overview of Data Streaming
In the case of topics, there is no specific source or target; multiple producers and consumers can write to and read from the topic. And the stored procedure that receives the incoming data can do whatever you choose with that content: it can write it to the stream as output for the same topic, it can write into other topics, it can write into other database tables, or any combination, providing the ultimate flexibility to meet your business logic needs, as shown in Figure 15.2, “Overview of Topics”.
Figure 15.2. Overview of Topics
Which streaming features you use depends on your business requirements. The key point is that orchestrating multiple disparate systems is complex and error prone and the VoltDB streaming services free you from these complexities by ensuring that all operations start and stop automatically as part of the server process, the data in flight is made durable across database sessions, and that all data is delivered at least once or retained until delivery is possible.
The following sections provide an overview of each service. Later sections describe the services and built-in connectors in more detail.
You can also define your own custom import and export connectors, as\ndescribed in the VoltDB Guide to Performance and Customization .\n140Streaming Data: Import,\nExport, and Migration\n15.1.1. Understanding Import\nTo import data into VoltDB from an external system you have two options: you can use one of the standard\nVoltDB data loading utilities (such as csvloader) or you can define an import connector in the database\nconfiguration file that associates the external source with a stored procedure. The data loading utilities\nare standalone external applications that push data into the VoltDB database. VoltDB import connectors\nuse a pull model. In other words, the connector periodically checks the data source to determine if new\ncontent is available. If so, the connector retrieves the data and passes it to the stored procedure where it\ncan analyze the data, validate it, manipulate it, insert it into the database, or even pass it along to an export\nstream; whatever your application needs.\nThe creation of the import connector is done using the <configuration> tag within the <import> ... </\nimport> element of the configuration file. The attributes of the <configuration> tag specify the type of\nimport connector to use (Kafka, Kinesis, or custom) and, optionally, the input format (CSV by default).\nThe <property> tags within the configuration specify the actual data source, the stored procedure to use\nas a destination, and any other connector-specific attributes you wish to set.\nFor example, to process data from a Kafka topic, the connector definition must specify the type (kafka),\nthe addresses of one or more Kafka brokers as the source, the name of the topic (or topics), and the stored\nprocedure to process the data. If the data does not need additional processing, you can use the default\nstored procedure that VoltDB generates for each table to insert the data directly into the database. The\nfollowing configuration reads the Kafka topics nyse and nasdaq in CSV format and inserts records into\nthe stocks table using the default insert procedure:\n<import>\n <configuration type=\"kafka\" format=\"csv\">\n <property name=\"brokers\">kafkasvr1:9092,kafkasvr2:9092</property>\n <property name=\"topics\">nyse,nasdaq</property>\n <property name=\"procedure\">STOCKS.insert</property>\n </configuration>\n</import>\nHaving the import connectors defined in the configuration file lets VoltDB manage the entire import\nprocess, from starting and stopping the connectors to making sure the specified stored procedure exists,\nfetching the data in batches and ensuring nothing is lost in transit. You can even add, delete, or modify the\nconnector definitions on the fly by updating the database configuration file while the database is running.\nVoltDB provides built-in import connectors for Kafka and Kinesis. Section 15.4, “VoltDB Import Con-\nnectors” describes these built-in connectors and the required and optional properties for each. Section 15.5,\n“VoltDB Import Formatters” provides additional information about the input formatters that prepare the\nincoming data for the stored procedure.\n15.1.2. Understanding Export\nTo export data from VoltDB to an external system you define a database table or stream as the export\nsource by including the EXPORT TO TARGET clause in the DDL definition and associating that data\nsource with a logical target name. 
For example, to associate the stream alerts with a target called systemlog ,\nyou would declare a stream like so:\nCREATE STREAM alerts \n EXPORT TO TARGET systemlog \n ( {column-definition} [,...] );\nFor tables, you can also specify when data is queued for export. By default, data inserted into export tables\nwith the INSERT statement (or UPSERT that results in a new record being inserted) is queued to the\n141Streaming Data: Import,\nExport, and Migration\ntarget, similar to streams. However, you can customize the export to write on any combination of data\nmanipulation language (DML) statements, using the ON clause. For example, to include updates into the\nexport steam, the CREATE TABLE statement might look like this:\nCREATE TABLE orders \n EXPORT TO TARGET orderprocessing ON INSERT, UPDATE\n ( {column-definition} [,...] );\nAs soon as you declare a stream or table as exporting to a target, any data written to that source (or in the\ncase of tables, the export actions you specified in the CREATE TABLE statement) is queued for the export\nstream. You associate the named target with a specific connector and external system in the <export> ...\n</export> section of the database configuration file. Note that you can define the target either before or\nafter declaring the source, and you can add, remove, or modify the export configuration at any time before\nor after the database is started.\nIn the configuration file you define the export connector using the <configuration> element, identifying the\ntarget name and type of connector to use. Within the <configuration> element you then identify the specific\nexternal target to use and any necessary connector-specific attributes in <property> tags. For example, to\nwrite export data to files locally on the database servers, you use the file connector and specify attributes\nsuch as the file prefix, location, and roll-over frequency as properties:\n<export>\n <configuration target=\"systemlog\" type=\"file\">\n <property name=\"type\">csv</property>\n <property name=\"nonce\">syslog</property>\n <property name=\"period\">60</property>\n <!-- roll every hour (60 minutes) -->\n </configuration>\n</export>\nVoltDB supports built-in connectors for five types of external targets: file, HTTP (including Hadoop),\nJDBC, Kafka, and Elasticsearch. Each export connector supports different properties specific to that type\nof target. Section 15.3, “VoltDB Export Connectors” describes the built-in export connectors and the\nrequired and optional properties for each.\n15.1.3. Understanding Migration\nMigration is a special case of export that synchronizes export with the deletion of data in database tables.\nWhen you migrate a record, VoltDB ensures the data is successfully transmitted to (and acknowledged\nby) the target before the data is deleted from the database. This way you can ensure the data is always\navailable from one of the two systems — it cannot temporarily \"disappear\" during the move.\nYou define a VoltDB table as a source of migration using the MIGRATE TO TARGET clause, the same\nway you define an export source with the EXPORT TO TARGET clause. For example, the following\nCREATE TABLE statement defines the orders table as a source for migration to the oldorders target:\nCREATE TABLE orders \n MIGRATE TO TARGET oldorders\n ( {column-definition} [,...] );\nMigration uses the export subsystem to perform the interaction with the external data store. 
So you can use any of the supported connectors to configure the target of the migration; and you do so the exact same way you do for any other export target. The difference is that rather than exporting the data when it is inserted into the table, the data is exported when you initiate migration.
You trigger migration at run time using the MIGRATE SQL statement and a WHERE clause to identify the specific rows to move. For example, to migrate all of the orders for a specific customer, you could use the following MIGRATE statement:
MIGRATE FROM orders
  WHERE customer_id = ? AND NOT MIGRATING;
Note the use of NOT MIGRATING. MIGRATING is a special function that identifies all rows that are currently being migrated; that is, where migration (and deletion) has not yet completed. Although not required — VoltDB will skip rows that are already migrating — adding AND NOT MIGRATING to a MIGRATE statement can improve performance by reducing the number of rows evaluated by the expression. Once the rows are migrated and the external target acknowledges receipt, the rows are deleted from the database.
To further automate the migration of data to external targets, you can use the MIGRATE TO TARGET clause with USING TTL. USING TTL automates the deletion of records based on a TTL value and a TIMESTAMP column in the table. For example, adding the clause USING TTL 12 HOURS ON COLUMN created to a table where the created column defaults to NOW means that records will be deleted from the table 12 hours after they are inserted. By adding the MIGRATE TO TARGET clause, you can tell VoltDB to migrate the data to the specified target before removing it when its TTL expiration is reached.
CREATE TABLE sessions 
  MIGRATE TO TARGET sessionlog
  ( session_id BIGINT NOT NULL,
    created TIMESTAMP DEFAULT NOW [,...] 
  )
  USING TTL 12 HOURS ON COLUMN created;
15.1.4. Understanding Topics
Topics allow you to integrate both import and export into a single stream. They also allow multiple external producers and consumers to access the topic at the same time, keeping track of where each consumer or group of consumers is in the stream of output.
There are actually two distinct and independent components to a topic that you control separately: input and output. You declare a topic having either or both, depending on the schema and configuration file. The schema associates individual streams with topics and the configuration file defines the properties of the topic, including what stored procedure to use for input. For example, you can declare an output-only topic by specifying the topic in the CREATE STREAM... EXPORT TO TOPIC statement but specifying no stored procedure in the configuration file. In this case, any records written to the associated stream are queued for output and available to any consumers of the topic:
CREATE STREAM session EXPORT TO TOPIC sessions ... 
If, on the other hand, you specify a stored procedure in the configuration file, records written to the topic by producers invoke the specified procedure passing the message contents (and, optionally, the key) as arguments:
<topics>
  <topic name="sessions" procedure="ProcessSessions"/>
</topics>
If you include both the EXPORT TO TOPIC clause in the CREATE STREAM statement and the procedure attribute in the <topic> element of the configuration file, the topic is available for both input and output.
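For instance, pairing the stream declaration and the topic configuration shown above gives a topic that both receives records from producers and serves records to consumers. The following is a minimal sketch combining the two pieces; the column list is illustrative only:
Schema
CREATE STREAM session
  EXPORT TO TOPIC sessions
  PARTITION ON COLUMN username
  ( username VARCHAR(128) NOT NULL,
    login TIMESTAMP );
Configuration
<topics>
  <topic name="sessions" procedure="ProcessSessions"/>
</topics>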
What happens to the data as it passes through VoltDB is up to you. You can simply pass it from producers to consumers by taking the data received by the input procedure and inserting it into the associated stream. Or the stored procedure can filter, modify, or redirect the content as needed. For example, the following data definitions create a topic where the input procedure uses an existing table in the database (users) to fill out additional fields based on the matching username in the incoming records while writing the data to the stream for output:
Schema
CREATE TABLE tempuser ( username VARCHAR(128) NOT NULL);
CREATE TABLE users ( username VARCHAR(128) NOT NULL,
  country VARCHAR(32), userrank INTEGER);
PARTITION TABLE tempuser on column username;
PARTITION TABLE users on column username;
CREATE STREAM session
  EXPORT TO TOPIC "sessions"
  PARTITION ON COLUMN username (
  username VARCHAR(128) NOT NULL,
  login TIMESTAMP, country VARCHAR(32), userrank INTEGER);
CREATE PROCEDURE ProcessSessions 
  PARTITION ON TABLE users COLUMN username 
  AS BEGIN
  INSERT INTO tempuser VALUES(CAST(? AS VARCHAR));
  INSERT INTO session SELECT u.username, 
    CAST(? AS TIMESTAMP), u.country, u.userrank
    FROM users AS u, tempuser AS t
    WHERE u.username=t.username;
  TRUNCATE TABLE tempuser;
  END;
Configuration
<topics>
  <topic name="sessions" procedure="ProcessSessions"/>
</topics>
Finally, if you want to create a topic that is not processed but simply flows through VoltDB from producers to consumers, you declare the topic as "opaque" in the configuration file, without either specifying a stored procedure for input or associating a stream with the topic for output.
<topic name="sysmsgs" opaque="true"/>
Opaque topics are useful if you want to have a single set of brokers for all your topics but only need to analyze and process some of the data feeds. Opaque topics let VoltDB handle the additional topics without requiring the stored procedure or stream definitions needed for processed topics.
15.2. The Business Case for Streaming Data
The streaming features of VoltDB provide a robust and flexible set of capabilities for connecting a VoltDB database to external systems. They can be configured in many different ways. At the most basic, they let you automate the import and export of data from a VoltDB database. The following sections demonstrate other ways these capabilities can simplify and automate common business processes, including:
•Section 15.2.1, “Extract, Transform, Load (ETL)”
•Section 15.2.2, “Change Data Capture”
•Section 15.2.3, “Streaming Data Validation”
•Section 15.2.4, “Caching”
•Section 15.2.5, “Archiving”
15.2.1. Extract, Transform, Load (ETL)
Extract, transform, load (ETL) is a common business pattern where you extract data from a database, restructure and repurpose it, then load it into another system. For example, an order processing database might have separate tables for customer data, orders, and product information. When it comes time to ship the order, information from all three tables is needed: the customer ID and product SKU from the order, the name and address from the customer record, and the product name and description from the product table.
This information is merged and passed to the shipping management system.
Rather than writing a separate application to perform these tasks, VoltDB lets you integrate them in a single stored procedure. By creating a stream with the appropriate columns for the transformed data, assigning it as an export source, and defining a target that matches the shipping management system, you can declare a single stored procedure to complete the process:
CREATE STREAM shipping 
  EXPORT TO TARGET shipmgtsystem 
  ( order_number BIGINT,
    prod_sku BIGINT,
    prod_name VARCHAR(64),
    customer_name VARCHAR(64),
    customer_address VARCHAR(128) );
CREATE PROCEDURE shiporder AS
  INSERT INTO shipping SELECT
    o.id, p.sku, p.name, c.name, c.address
    FROM orders AS o, products AS p, customers AS c
    WHERE o.id = ? AND
      o.sku = p.sku AND o.customer_id = c.id;
15.2.2. Change Data Capture
Change Data Capture is the process of recording all changes to the content of a database. Those changes can then be reused by inserting them into another repository for redundancy, logging them to a file, merging them into another database, or whatever the business workflow calls for.
VoltDB simplifies change data capture by allowing you to export all or any subset of data changes to a table to any of the available export targets. When you declare a table as an export source with the EXPORT TO TARGET clause you can specify which actions trigger export using ON. Possible triggers are INSERT, UPDATE, UPDATE_NEW, UPDATE_OLD, and DELETE.
INSERT and DELETE are self-explanatory. UPDATE, on the other hand, generates two export records: one for the row before the update and one for the row after the update. To select only one of these records, you can use the actions UPDATE_OLD or UPDATE_NEW.
For change data capture, you can export all changes by specifying ON INSERT, UPDATE, DELETE. For example, the following schema definitions ensure that all data changes for the tables products and orders are exported to the targets offsiteprod and offsiteorder, respectively:
CREATE TABLE products EXPORT TO TARGET offsiteprod
  ON INSERT, UPDATE, DELETE
  [ ... ];
CREATE TABLE orders EXPORT TO TARGET offsiteorder
  ON INSERT, UPDATE, DELETE
  [ ... ];
Note that the built-in connectors include six columns of metadata at the beginning of the export data by default. For change data capture, the most important piece of metadata is the sixth column, which is a single byte value that indicates which action triggered the export. The external target can use this information to determine what to do with the record. The possible values for the operation indicator are shown in Table 15.2, “Export Metadata”.
15.2.3. Streaming Data Validation
VoltDB provides the necessary speed and features to implement an intelligent data pipeline — where information passing through a high performance stream is analyzed, validated and then accepted, rejected, or modified as necessary and passed on to the next stage of the pipeline. In this use case, the data in VoltDB is used as a reference for comparison with the influx of data in the pipeline. VoltDB import connectors accept the incoming data, where it is submitted to a stored procedure.
The stored procedure analyzes the data against the reference tables, then inserts the validated content into a stream which is in turn declared as a source for an export connector that sends it along to its next target.
For example, VoltDB can be inserted into a Kafka pipeline by using:
•A Kafka import connector as the input
•A VoltDB stream and a Kafka export connector as the output
•A stored procedure analyzing the input and inserting it into the stream
The following schema and configuration illustrate a simple example that checks if the data in a Kafka stream matches an existing user account with appropriate funds. The schema uses a reference table (account), a temporary table (incoming), and an export stream (outgoing). Any data matching the requirements is written to the export target; all other incoming data is dropped.
CREATE TABLE incoming
  ( trans_id BIGINT, amount BIGINT, user_id BIGINT );
CREATE STREAM outgoing EXPORT TO TARGET kafka_output 
  ( trans_id BIGINT, amount BIGINT, user_id BIGINT );
CREATE PROCEDURE validate AS 
  BEGIN
  INSERT INTO incoming VALUES (?,?,?); 
  INSERT INTO outgoing 
    SELECT i.trans_id, i.amount, i.user_id
    FROM incoming AS i, account AS a
    WHERE i.user_id = a.user_id AND a.balance + i.amount > 0;
  TRUNCATE TABLE incoming;
  END;
<import>
  <configuration type="kafka">
    <property name="procedure">validate</property>
    <property name="brokers">kfkasrc1,kfksrc2</property>
    <property name="topics">transactions</property>
  </configuration>
</import>
<export>
  <configuration type="kafka" target="kafka_output">
    <property name="bootstrap.servers">kfkdest1,kfkdest2</property>
    <property name="topic.key">outgoing.transactions</property>
    <property name="skipinternals">true</property>
  </configuration>
</export>
15.2.4. Caching
Because of its architecture, VoltDB is excellent at handling high volume transactions. It is not as well suited for ad hoc analytical processing of extremely large volumes of historical data. But sometimes you need both. Caching allows current, high touch content to be accessible from a fast front-end repository while historical, less frequently accessed content is stored in slower, large back-end repositories (such as Hadoop) sometimes called data lakes.
Export, Time To Live (TTL), and automated tasks help automate the use of VoltDB as a hot cache. By declaring tables in VoltDB as export sources to a large back-end repository, any data added to VoltDB automatically gets added to the historical data lake. Once data in VoltDB is no longer "hot", it can be deleted but remains available from larger back-end servers.
In the simplest case, caching can be done by declaring the VoltDB tables with EXPORT TO TARGET and using ON INSERT, UPDATE_NEW so all data changes except deletes are exported to the data lake. You can then manually delete data from VoltDB when it becomes unnecessary in the cache.
CREATE TABLE sessions 
  EXPORT TO TARGET historical ON INSERT, UPDATE_NEW
  ( id BIGINT NOT NULL, 
    login TIMESTAMP, last_access TIMESTAMP [,...] );
To make it easier, VoltDB can automate the process of aging out old data. If the content is time sensitive, you can add USING TTL to the table declaration to automatically delete records once a column exceeds a certain time limit. You specify the reference column and the time limit in the USING TTL clause.
For\nexample, if you want to automatically delete any sessions that have not been accessed for more than two\nhours, you can change the sessions table declaration like so:\nCREATE TABLE sessions\n EXPORT TO TARGET historical ON INSERT, UPDATE_NEW\n ( id BIGINT NOT NULL, user_id BIGINT, \n login TIMESTAMP, last_access TIMESTAMP [,...] )\n USING TTL 2 hours ON COLUMN last_access;\nIf your expiration criteria is more complex than a single column value, you can use a stored procedure to\nidentify rows that need deleting. To automate this process, you then define a task that executes the stored\nprocedure on a regular basis. For example, if you want to remove sessions more frequently if there is no\naccess after the initial login, you can define a stored procedure GhostSessions to remove inactive sessions,\n147Streaming Data: Import,\nExport, and Migration\nthen execute that procedure periodically with the task RemoveGhosts . Note that the actual time limit can\nbe made adjustable by a parameter passed to the task.\nCREATE PROCEDURE GhostSessions AS\n DELETE FROM sessions \n WHERE login = last_access AND DATEADD(MINUTE,?,login) < NOW;\nCREATE TASK ON SCHEDULE EVERY 2 MINUTES\n PROCEDURE GhostSessions WITH (20); -- 20 minute limit\n15.2.5. Archiving\nArchiving is like caching in that older data is maintained in slower, large-scale repositories. The difference\nis that for archiving, rather than having copies of the current data in both locations, data is not moved to\nthe archive until after it's usefulness in VoltDB expires.\nYou could simply export the data when you delete it from the VoltDB database. But since export is asyn-\nchronous, there will be a short period of time when the data is neither in VoltDB or in the archive. To\navoid this situation, you can use migration rather than export, which ensures the data is not deleted from\nVoltDB until the export target acknowledges receipt of the migrated content.\nFor example, if we are archiving orders, we can include the MIGRATE TO TARGET clause in the table\ndefinition and then use the MIGRATE statement instead of DELETE to clear the records from VoltDB:\nCREATE TABLE orders MIGRATE TO TARGET archive\n [ . . . ];\nIf you are archiving records based on age, you can use MIGRATE TO TARGET with USING TTL to\nautomatically migrate the table rows once a specific column in the table expires. Used alone, USING TTL\nsimply deletes records; used with MIGRATE TO TARGET it initiates a migration for the expired records:\nCREATE TABLE orders MIGRATE TO TARGET archive\n [ . . . ]\n USING TTL 30 DAYS ON COLUMN order_completed;\n15.3. VoltDB Export Connectors\nYou use the EXPORT TO TARGET or MIGRATE TO TARGET clauses to identify the sources of export\nand start queuing export data. To enable the actual transmission of data to an export target at runtime,\nyou include the <export> and <configuration> tags in the configuration file. You can configure\nthe export targets when you initialize the database root directory. Or you can add or modify the export\nconfiguration while the database is running using the voltadmin update command.\nIn the configuration file, the export and configuration tags specify the target you are configuring and which\nexport connector to use (with the type attribute). To export to multiple destinations, you include multiple\n<configuration> tags, each specifying the target it is configuring. For example:\n<export>\n <configuration enabled=\"true\" type=\"file\" target=\"log\" >\n . . 
.\n </configuration>\n <configuration enabled=\"true\" type=\"jdbc\" target=\"archive\" >\n . . .\n </configuration>\n</export>\n148Streaming Data: Import,\nExport, and Migration\nYou configure each export connector by specifying properties as one or more <property> tags within\nthe <configuration> tag. For example, the following XML code enables export to comma-separated\n(CSV) text files using the file prefix \"MyExport\".\n<export>\n <configuration enabled=\"true\" target=\"log\" type=\"file\">\n <property name=\"type\">csv</property>\n <property name=\"nonce\">MyExport</property>\n </configuration>\n</export>\nThe properties that are allowed and/or required depend on the export connector you select. VoltDB comes\nwith five export connectors:\n•Export to file (type=\"file\")\n•Export to HTTP, including Hadoop (type=\"http\")\n•Export to JDBC (type=\"jdbc\")\n•Export to Kafka (type=\"kafka\")\n•Export to Elasticsearch (type=\"elasticsearch\")\nIn addition to the connectors shipped as part of the VoltDB software kit, an export connector for Amazon\nKinesis is available from the VoltDB public Github repository ( https://github.com/VoltDB/export-kine-\nsis).\n15.3.1. How Export Works\nTwo important points about export to keep in mind are:\n•Data is queued for export as soon you declare a stream or table with the EXPORT TO TARGET clause\nand write to it. Even if the export target has not been configured yet. Be careful not to declare export\nsources and forget to configure their targets, or else the export queues could grow and cause disk space\nissues. Similarly, when you drop the stream or table, its export queue is deleted, even if there is data\nwaiting to be delivered to the configured export target.\n•VoltDB will send at least one copy of every export record to the target. It is possible, when recovering\ncommand logs or rejoining nodes, that certain export records are resent. It is up to the downstream target\nto handle these duplicate records. For example, using unique indexes or including a unique record ID\nin the export stream.\nAll nodes in a cluster queue export data, but only one actually writes to the external target. If one or more\nnodes fail, responsibility for writing to the export targets is transferred to another currently active server.\nIt is possible for gaps to appear in the export queues while servers are offline. Normally if a gap is found, it\nis not a problem because another node can take over responsibility for writing (and queuing) export data.\nHowever, in unusual cases where export falls behind and nodes fail and rejoin consecutively, it is possible\nfor gaps to occur in all the available queues. When this happens, VoltDB issues a warning to the console\n(and via SNMP) and waits for the missing data to be resolved. You can also use the @Statistics system\nprocedure with the EXPORT selector to determine exactly what records are and are not present in the\nqueues. If the gap cannot be resolved (usually by rejoining a failed server), you must use the voltadmin\nexport release command to free the queue and resume export at the next available record.\n149Streaming Data: Import,\nExport, and Migration\n15.3.1.1. Export Overflow\nVoltDB uses persistent files on disk to queue export data waiting to be written to its specified target. If\nfor any reason the export target can not keep up with the connector, VoltDB writes the excess data in the\nexport buffer from memory to disk. 
This protects your database in several ways:\n•If the destination target is not configured, is unreachable, or cannot keep up with the data flow, writing\nto disk helps VoltDB avoid consuming too much memory while waiting for the destination to accept\nthe data.\n•If the database stops, the export data is retained across sessions. When the database restarts, the con-\nnector will retrieve the overflow data and reinsert it in the export queue.\nEven when the target does keep up with the flow, some amount of data is written to the overflow directory\nto ensure durability across database sessions. You can specify where VoltDB writes the overflow export\ndata using the <exportoverflow> element in the configuration file. For example:\n<paths>\n <exportoverflow path=\"/tmp/export/\"/>\n</paths>\nIf you do not specify a path for export overflow, VoltDB creates a subfolder in the database root directory.\nSee Section 3.7.2, “Configuring Paths for Runtime Features” for more information about configuring paths\nin the configuration file.\n15.3.1.2. Persistence Across Database Sessions\nIt is important to note that VoltDB only uses the disk storage for overflow data. However, you can force\nVoltDB to write all queued export data to disk using any of the following methods:\n•Calling the @Quiesce system procedure\n•Requesting a blocking snapshot (using voltadmin save --blocking )\n•Performing an orderly shutdown (using voltadmin shutdown )\nThis means that if you perform an orderly shutdown with the voltadmin shutdown command, you can\nrecover the database — and any pending export queue data — by simply restarting the database cluster\nin the same root directories.\nNote that when you initialize or re-initialize a root directory, any subdirectories of the root are purged.1\nSo if your configuration did not specify a different location for the export overflow, and you re-initialize\nthe root directories and then restore the database from a snapshot, the database is restored but the export\noverflow will be lost. If both your original and new configuration use the same, explicit directory outside\nthe root directory for export overflow, you can start a new database and restore a snapshot without losing\nthe overflow data.\n15.3.2. The File Export Connector\nThe file connector receives the serialized data from the export source and writes it out as text files (either\ncomma or tab separated) to disk. The file connector writes the data out one file per source table or stream,\n\"rolling\" over to new files periodically. The filenames of the exported data are constructed from:\n•A unique prefix (specified with the nonce property)\n1Initializing a root directory deletes any files in the command log and overflow directories. The snapshots directory is archived to a named subdi-\nrectory.\n150Streaming Data: Import,\nExport, and Migration\n•A unique value identifying the current version of the database schema\n•The stream or table name\n•A timestamp identifying when the file was started\n•Optionally, the ID of the host server writing the file\nWhile the file is being written, the file name also contains the prefix \"active-\". Once the file is complete\nand a new file started, the \"active-\" prefix is removed. Therefore, any export files without the prefix are\ncomplete and can be copied, moved, deleted, or post-processed as desired.\nThere is only one required property that must be set when using the file connector. 
The nonce property specifies a unique prefix to identify all files that the connector writes out for this database instance. All other properties are optional and have default values.
Table 15.1, “File Export Properties” describes the supported properties for the file connector.
Table 15.1. File Export Properties
Property Allowable Values Description
nonce* string A unique prefix for the output files.
type csv, tsv Specifies whether to create comma-separated (CSV) or tab-delimited (TSV) files. CSV is the default format.
outdir directory path The directory where the files are created. Relative paths are relative to the database root directory on each server. If you do not specify an output path, VoltDB writes the output files into a subfolder of the root directory itself.
period Integer The frequency, in minutes, for "rolling" the output file. The default frequency is 60 minutes.
retention string Specifies how long exported files are retained. You specify the retention period as an integer number and a time unit identifier from the following list:
•s — Seconds
•m — Minutes
•h — Hours
•d — Days
For example, "31d" sets the retention limit at 31 days. After files exceed the specified time limit, they are deleted by the export subsystem. The default is to retain all files indefinitely.
binaryencoding hex, base64 Specifies whether VARBINARY data is encoded in hexadecimal or BASE64 format. The default is hexadecimal.
dateformat format string The format of the date used when constructing the output file names. You specify the date format as a Java SimpleDateFormat string. The default format is "yyyyMMddHHmmss".
timezone string The time zone to use when formatting the timestamp. Specify the time zone as a Java timezone identifier. The default is GMT.
delimiters string Specifies the delimiter characters for CSV output. The text string specifies four characters in the following order: the separator, the quote character, the escape character, and the end-of-line character.
Non-printing characters must be encoded as Java literals. For example, the newline character (ASCII code 10) should be entered as "\n". Alternately, you can use Java Unicode literals, such as "\u000a". You must also encode any XML special characters, such as the ampersand and left angle bracket, as HTML entities for inclusion in the XML configuration file. For example, encoding "<" as "&lt;".
The following property definition matches the default delimiters. That is, the comma, the double quotation character twice (as both the quote and escape delimiters) and the newline character:
<property name="delimiters">,""\n</property>
batched true, false Specifies whether to store the output files in subfolders that are "rolled" according to the frequency specified by the period property. The subfolders are named according to the nonce and the timestamp, with "active-" prefixed to the subfolder currently being written.
skipinternals true, false Specifies whether to include six columns of VoltDB metadata (such as transaction ID and timestamp) in the output. If you specify skipinternals as "true", the output files contain only the exported data.
uniquenames true, false Specifies whether to include the host ID in the file name to ensure that all files written are unique across a cluster. The export files are always unique per server.
But if you plan to write all cluster files to\na network drive or copy them to a single location, set this property\nto true to avoid any possible conflict in the file names. The default is\nfalse.\nwith-schema true, false Specifies whether to write a JSON representation of the source's\nschema as part of the export. The JSON schema files can be used to\nensure the appropriate datatype and precision is maintained if and\nwhen the output files are imported into another system.\n*Required\nWhatever properties you choose, the order and representation of the content within the output files is the\nsame. The export connector writes a separate line of data for every INSERT it receives, including the\nfollowing information:\n•Six columns of metadata generated by the export connector.\n•The remaining columns are the columns of the database source, in the same order as they are listed in\nthe database definition (DDL) file.\nTable 15.2, “Export Metadata” describes the six columns of metadata generated by the export connector\nand the meaning of each column.\nTable 15.2. Export Metadata\nColumn Datatype Description\nTransaction ID BIGINT Identifier uniquely identifying the transaction that generated the ex-\nport record.\nTimestamp TIMESTAMP The time when the export record was generated.\nSequence Number BIGINT For internal use.\n152Streaming Data: Import,\nExport, and Migration\nColumn Datatype Description\nPartition ID BIGINT Identifies the partition that sent the record to the export target.\nSite ID BIGINT Identifies the site that sent the record to the export target.\nExport Operation TINYINT A single byte value identifying the type of transaction that initiated\nthe export. Possible values include:\n•1 — insert\n•2 — delete\n•3 — update (record before update)\n•4 — update (record after update)\n•5 — migration\n15.3.3. The HTTP Export Connector\nThe HTTP connector receives the serialized data from the export streams and writes it out via HTTP\nrequests. The connector is designed to be flexible enough to accommodate most potential targets. For\nexample, the connector can be configured to send out individual records using a GET request or batch\nmultiple records using POST and PUT requests. The connector also contains optimizations to support\nexport to Hadoop via WebHDFS.\n15.3.3.1. Understanding HTTP Properties\nThe HTTP connector is a general purpose export utility that can export to any number of destinations\nfrom simple messaging services to more complex REST APIs. The properties work together to create a\nconsistent export process. However, it is important to understand how the properties interact to configure\nyour export correctly. The four key properties you need to consider are:\n•batch.mode — whether data is exported in batches or one record at a time\n•method — the HTTP request method used to transmit the data\n•type — the format of the output\n•endpoint — the target HTTP URL to which export is written\nThe properties are described in detail in Table 15.3, “HTTP Export Properties” . This section explains the\nrelationship between the properties.\nThere are essentially two types of HTTP export: batch mode and one record at a time. Batch mode is\nappropriate for exporting large volumes of data to targets such as Hadoop. 
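To make that concrete, a batch-mode configuration posting CSV batches to a generic HTTP endpoint might be sketched roughly as follows; the target name and URL are invented for this illustration and are not working values:
<export>
 <configuration enabled="true" target="webcollector" type="http">
 <property name="endpoint">http://collector.example.com/ingest/%t/%p-%g.csv</property>
 <property name="batch.mode">true</property>
 <property name="method">post</property>
 <property name="type">csv</property>
 </configuration>
</export>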
Exporting one record at a time is less efficient for large volumes but can be very useful for writing intermittent messages to other services.
In batch mode, the data is exported using a POST or PUT method, where multiple records are combined in either comma-separated value (CSV) or Avro format in the body of the request. When writing one record at a time, you can choose whether to submit the HTTP request as a POST, PUT or GET (that is, as a querystring attached to the URL). When exporting in batch mode, the method must be either POST or PUT and the type must be either csv or avro. When exporting one record at a time, you can use the GET, POST, or PUT method, but the output type must be form.
Finally, the endpoint property specifies the target URL where data is being sent, using either the http: or https: protocol. Again, the endpoint must be compatible with the possible settings for the other properties. In particular, if the endpoint is a WebHDFS URL, batch mode must be enabled.
The URL can also contain placeholders that are filled in at runtime with metadata associated with the export data. Each placeholder consists of a percent sign (%) and a single ASCII character. The following are the valid placeholders for the HTTP endpoint property:
Placeholder Description
%t The name of the VoltDB export source table or stream. The source name is inserted into the endpoint in all uppercase.
%p The VoltDB partition ID for the partition where the INSERT query to the export source is executing. The partition ID is an integer value assigned by VoltDB internally and can be used to randomly partition data. For example, when exporting to webHDFS, the partition ID can be used to direct data to different HDFS files or directories.
%g The export generation. The generation is an identifier assigned by VoltDB. The generation increments each time the database starts or the database schema is modified in any way.
%d The date and hour of the current export period. Applicable to WebHDFS export only. This placeholder identifies the start of each period and the replacement value remains the same until the period ends, at which point the date and hour is reset for the new period.
You can use this placeholder to "roll over" WebHDFS export destination files on a regular basis, as defined by the period property. The period property defaults to one hour.
When exporting in batch mode, the endpoint must contain at least one instance each of the %t, %p, and %g placeholders. However, beyond that requirement, it can contain as many placeholders as desired and in any order. When not in batch mode, use of the placeholders is optional.
Table 15.3, “HTTP Export Properties” describes the supported properties for the HTTP connector.
Table 15.3. HTTP Export Properties
Property Allowable Values Description
endpoint* string Specifies the target URL. The endpoint can contain placeholders for inserting the source name (%t), the partition ID (%p), the date and hour (%d), and the export generation (%g).
avro.compress true, false Specifies whether Avro output is compressed or not. The default is false and this property is ignored if the type is not Avro.
avro.schema.location string Specifies the location where the Avro schema will be written. The schema location can be either an absolute path name on the local database server or a webHDFS URL and must include at least one instance of the placeholder for the source name (%t).
Optional-\nly it can contain other instances of both %t and %g. The default\nlocation for the Avro schema is the file path export/avro/\n%t_avro_schema.json on the database server under the voltd-\nbroot directory. This property is ignored if the type is not Avro.\nbatch.mode true, false Specifies whether to send multiple rows as a single request or send\neach export row separately. The default is true. Batch mode must be\nenabled for WebHDFS export.\nhttpfs.enable true, false Specifies that the target of WebHDFS export is an Apache HttpFS\n(Hadoop HDFS over HTTP) server. This property must be set to true\nwhen exporting webHDFS to HttpFS targets.\n154Streaming Data: Import,\nExport, and Migration\nProperty Allowable Values Description\nkerberos.enable true, false Specifies whether Kerberos authentication is used when connecting\nto a WebHDFS endpoint. This property is only valid when connect-\ning to WebHDFS servers and is false by default.\nmethod get, post, put Specifies the HTTP method for transmitting the export data. The de-\nfault method is POST. For WebHDFS export, this property is ig-\nnored.\nperiod Integer Specifies the frequency, in hours, for \"rolling\" the WebHDFS output\ndate and time. The default frequency is every hour (1). For WebHD-\nFS export only.\ntimezone string The time zone to use when formatting the timestamp. Specify the\ntime zone as a Java timezone identifier. The default is the local time\nzone.\ntype csv, avro, form Specifies the output format. If batch.mode is true, the default type is\nCSV. If batch.mode is false, the default and only allowable value for\ntype is form. Avro format is supported for WebHDFS export only\n(see Section 15.3.3.2, “Exporting to Hadoop via WebHDFS” for de-\ntails.)\n*Required\n15.3.3.2. Exporting to Hadoop via WebHDFS\nAs mentioned earlier, the HTTP connector contains special optimizations to support exporting data to\nHadoop via the WebHDFS protocol. If the endpoint property contains a WebHDFS URL (identified by\nthe URL path component starting with the string \"/webhdfs/v1/\"), special rules apply.\nFirst, for WebHDFS URLs, the batch.mode property must be enabled. Also, the endpoint must have at\nleast one instance each of the source name (%t), the partition ID (%p), and the export generation (%g)\nplaceholders and those placeholders must be part of the URL path, not the domain or querystring.\nNext, the method property is ignored. For WebHDFS, the HTTP connector uses a combination of POST,\nPUT, and GET requests to perform the necessary operations using the WebHDFS REST API.\nFor example, The following configuration file excerpt exports stream data to WebHDFS using the HTTP\nconnector and writing each stream to a separate directory, with separate files based on the partition ID,\ngeneration, and period timestamp, rolling over every 2 hours:\n<export>\n <configuration target=\"hadoop\" enabled=\"true\" type=\"http\">\n <property name=\"endpoint\">\n http://myhadoopsvr/webhdfs/v1/%t/data%p-%g.%d.csv\n </property>\n <property name=\"batch.mode\">true</property>\n <property name=\"period\">2</property>\n </configuration>\n</export>\nNote that the HTTP connector will create any directories or files in the WebHDFS endpoint path that do\nnot currently exist and then append the data to those files, using the POST or PUT method as appropriate\nfor the WebHDFS REST API.\nYou also have a choice between two formats for the export data when using WebHDFS: comma-separated\nvalues (CSV) and Apache Avro™ format. 
By default, data is written as CSV data with each record on a separate line and batches of records attached as the contents of the HTTP request. However, you can choose to set the output format to Avro by setting the type property, as in the following example:
<export>
 <configuration target="hadoop" enabled="true" type="http">
 <property name="endpoint">
 http://myhadoopsvr/webhdfs/v1/%t/data%p-%g.%d.avro
 </property>
 <property name="type">avro</property>
 <property name="avro.compress">true</property>
 <property name="avro.schema.location">
 http://myhadoopsvr/webhdfs/v1/%t/schema.json
 </property>
 </configuration>
</export>
Avro is a data serialization system that includes a binary format that is used natively by Hadoop utilities such as Pig and Hive. Because it is a binary format, Avro data takes up less network bandwidth than text-based formats such as CSV. In addition, you can choose to compress the data even further by setting the avro.compress property to true, as in the previous example.
When you select Avro as the output format, VoltDB writes out an accompanying schema definition as a JSON document. For compatibility purposes, the source name and column names are converted, removing underscores and changing the resulting words to lowercase with initial capital letters (sometimes called "camelcase"). The source name is given an initial capital letter, while column names start with a lowercase letter. For example, the stream EMPLOYEE_DATA and its column named EMPLOYEE_ID would be converted to EmployeeData and employeeId in the Avro schema.
By default, the Avro schema is written to a local file on the VoltDB database server. However, you can specify an alternate location, including a webHDFS URL. So, for example, you can store the schema in the same HDFS repository as the data by setting the avro.schema.location property, as shown in the preceding example.
See the Apache Avro web site for more details on the Avro format.
15.3.3.3. Exporting to Hadoop Using Kerberos Security
If the WebHDFS service to which you are exporting data is configured to use Kerberos security, the VoltDB servers must be able to authenticate using Kerberos as well. To do this, you must perform the following two extra steps:
•Configure Kerberos security for the VoltDB cluster itself
•Enable Kerberos authentication in the export configuration
The first step is to configure the VoltDB servers to use Kerberos as described in Section 12.8, “Integrating Kerberos Security with VoltDB”. Because use of Kerberos authentication for VoltDB security changes how the clients connect to the database cluster, it is best to set up, enable, and test Kerberos authentication first to ensure your client applications work properly in this environment before trying to enable Kerberos export as well.
Once you have Kerberos authentication working for the VoltDB cluster, you can enable Kerberos authentication in the configuration of the WebHDFS export target as well. Enabling Kerberos authentication in the HTTP connector only requires one additional property, kerberos.enable, to be set. To use Kerberos authentication, set the property to "true".
For example:
<export>
 <configuration target="hadoop" enabled="true" type="http">
 <property name="endpoint">
 http://myhadoopsvr/webhdfs/v1/%t/data%p-%g.%d.csv
 </property>
 <property name="type">csv</property>
 <property name="kerberos.enable">true</property>
 </configuration>
</export>
Note that Kerberos authentication is only supported for WebHDFS endpoints.
15.3.4. The JDBC Export Connector
The JDBC connector receives the serialized data from the export source and writes it, in batches, to another database through the standard JDBC (Java Database Connectivity) protocol.
By default, when the JDBC connector opens the connection to the remote database, it first attempts to create tables in the remote database to match the VoltDB export source by executing CREATE TABLE statements through JDBC. This is important to note because it ensures there are suitable tables to receive the exported data. The tables are created using either the names from the VoltDB schema or (if you do not enable the ignoregenerations property) the name prefixed by the database generation ID.
If the target database has existing tables that match the VoltDB export sources in both name and structure (that is, the number, order, and datatype of the columns), be sure to enable the ignoregenerations property in the export configuration to ensure that VoltDB uses those tables as the export target.
It is also important to note that the JDBC connector exports data through JDBC in batches. That is, multiple INSERT instructions are passed to the target database at a time, in approximately two megabyte batches. There are two consequences of the batching of export data:
•For many databases, such as Netezza, where there is a cost for individual invocations, batching reduces the performance impact on the receiving database and avoids unnecessary latency in the export processing.
•On the other hand, no matter what the target database, if a query fails for any reason the entire batch fails. To avoid errors causing batch inserts to fail, it is strongly recommended that the target database not use unique indexes on the receiving tables that might cause constraint violations.
If any errors do occur when the JDBC connector attempts to submit data to the remote database, VoltDB disconnects and then retries the connection. This process is repeated until the connection succeeds. If the connection does not succeed, VoltDB eventually reduces the retry rate to approximately every eight seconds.
Table 15.4, “JDBC Export Properties” describes the supported properties for the JDBC connector.
Table 15.4. JDBC Export Properties
Property Allowable Values Description
jdbcurl* connection string The JDBC connection string, also known as the URL.
jdbcuser* string The username for accessing the target database.
jdbcpassword string The password for accessing the target database.
jdbcdriver string The class name of the JDBC driver. The JDBC driver class must be accessible to the VoltDB process for the JDBC export process to work. Place the driver JAR files in the lib/extension/ directory where VoltDB is installed to ensure they are accessible at runtime.
You do not need to specify the driver as a property value for several popular databases, including MySQL, Netezza, Oracle, PostgreSQL, and Vertica.
However, you still must provide the driver JAR file.
schema string The schema name for the target database. The use of the schema name is database specific. In some cases you must specify the database name as the schema. In other cases, the schema name is not needed and the connection string contains all the information necessary. See the documentation for the JDBC driver you are using for more information.
minpoolsize integer The minimum number of connections in the pool of connections to the target database. The default value is 10.
maxpoolsize integer The maximum number of connections in the pool. The default value is 100.
maxidletime integer The number of milliseconds a connection can be idle before it is removed from the pool. The default value is 60000 (one minute).
maxstatementcached integer The maximum number of statements cached by the connection pool. The default value is 50.
createtable true, false Specifies whether VoltDB should create the corresponding table in the remote database. By default, VoltDB creates the table(s) to receive the exported data. (That is, the default is true.) If you set this property to false, you must create table(s) with matching names to the VoltDB export sources before starting the export connector.
lowercase true, false Specifies whether VoltDB uses lowercase table and column names or not. By default, VoltDB issues SQL statements using uppercase names. However, some databases (such as PostgreSQL) are case sensitive. When this property is set to true, VoltDB uses all lowercase names rather than uppercase. The default is false.
ignoregenerations true, false Specifies whether a unique ID for the generation of the database is included as part of the output table name(s). The generation ID changes each time a database restarts or the database schema is updated. The default is false.
skipinternals true, false Specifies whether to include six columns of VoltDB metadata (such as transaction ID and timestamp) in the output. If you specify skipinternals as true, the output contains only the exported stream data. The default is false.
*Required
15.3.5. The Kafka Export Connector
The Kafka connector receives serialized data from the export sources and writes it to a message queue using the Apache Kafka version 0.10.2 protocols. Apache Kafka is a distributed messaging service that lets you set up message queues which are written to and read from by "producers" and "consumers", respectively. In the Apache Kafka model, VoltDB export acts as a "producer" capable of writing to any Kafka service using version 0.10.2 or later.
Before using the Kafka connector, we strongly recommend reading the Kafka documentation and becoming familiar with the software, since you will need to set up a Kafka service and appropriate "consumer" clients to make use of VoltDB's Kafka export functionality. The instructions in this section assume a working knowledge of Kafka and the Kafka operational model.
When the Kafka connector receives data from the VoltDB export sources, it establishes a connection to the Kafka messaging service as a Kafka producer. It then writes records to Kafka topics based on the VoltDB stream or table name and certain export connector properties.
The majority of the Kafka export properties are identical in both name and content to the Kafka producer properties listed in the Kafka documentation.
All but one of these properties are optional for the Kafka connector and will use the standard Kafka default value. For example, if you do not specify the queue.buffering.max.ms property it defaults to 5000 milliseconds.
The only required property is bootstrap.servers, which lists the Kafka servers that the VoltDB export connector should connect to. You must include this property so VoltDB knows where to send the export data. Specify each server by its IP address (or hostname) and port; for example, myserver:7777. If there are multiple servers in the list, separate them with commas.
In addition to the standard Kafka producer properties, there are several custom properties specific to VoltDB. The properties binaryencoding, skipinternals, and timezone affect the format of the data. The topic.prefix and topic.key properties affect how the data is written to Kafka.
The topic.prefix property specifies the text that precedes the stream or table name when constructing the Kafka topic. If you do not specify a prefix, it defaults to "voltdbexport". Alternately, you can map individual sources to topics using the topic.key property. In the topic.key property you associate a VoltDB export source name with the corresponding Kafka topic as a named pair separated by a period (.). Multiple named pairs are separated by commas (,). For example:
Employee.EmpTopic,Company.CoTopic,Enterprise.EntTopic
Any mappings in the topic.key property override the automated topic name specified by topic.prefix.
Note that unless you configure the Kafka brokers with the auto.create.topics.enable property set to true, you must create the topics for every export source manually before starting the export process. Enabling auto-creation of topics when setting up the Kafka brokers is recommended.
When configuring the Kafka export connector, it is important to understand the relationship between synchronous versus asynchronous processing and its effect on database latency. If the export data is sent asynchronously, the impact of export on the database is reduced, since the export connector does not wait for the Kafka infrastructure to respond. However, with asynchronous processing, VoltDB is not able to resend the data if the message fails after it is sent.
If export to Kafka is done synchronously, the export connector waits for acknowledgement of each message sent to Kafka before processing the next packet. This allows the connector to resend any packets that fail. The drawback to synchronous processing is that on a heavily loaded database, the latency it introduces means export may not be able to keep up with the influx of export data and have to write to overflow.
You specify the level of synchronicity and durability of the connection using the Kafka acks property. Set acks to "0" for asynchronous processing, "1" for synchronous delivery to the Kafka broker, or "all" to ensure durability on the Kafka broker. See the Kafka documentation for more information.
VoltDB guarantees that at least one copy of all export data is sent by the export connector. But when operating in asynchronous mode, the Kafka connector cannot guarantee that the packet is actually received and accepted by the Kafka broker. By operating in synchronous mode, VoltDB can catch errors returned by the Kafka broker and resend any failed packets.
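As a rough sketch only (the broker addresses and target name are illustrative and not taken from the product documentation), a configuration requesting fully acknowledged, synchronous delivery might look like this:
<export>
 <configuration enabled="true" target="eventstream" type="kafka">
 <property name="bootstrap.servers">kafka1.example.com:9092,kafka2.example.com:9092</property>
 <property name="topic.prefix">voltdb_</property>
 <property name="acks">all</property>
 <property name="skipinternals">true</property>
 </configuration>
</export>
Here acks is set to "all" so the connector waits for the brokers to confirm each packet before moving on.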
However, you pay the penalty of additional latency\nand possible export overflow.\nFinally, the actual export data is sent to Kafka as a comma-separated values (CSV) formatted string. The\nmessage includes six columns of metadata (such as the transaction ID and timestamp) followed by the\ncolumn values of the export stream.\nTable 15.5, “Kafka Export Properties” lists the supported properties for the Kafka connector, including\nthe standard Kafka producer properties and the VoltDB unique properties.\nTable 15.5. Kafka Export Properties\nProperty Allowable Val-\nuesDescription\nbootstrap.servers*string A comma-separated list of Kafka brokers (specified\nas IP-address:port-number). You can use meta-\ndata.broker.list as a synonym for boot-\nstrap.servers .\nacks 0, 1, all Specifies whether export is synchronous ( 1 or all) or\nasynchronous ( 0) and to what extent it ensures delivery.\nThe default is all, which is recommended to avoid pos-\nsibly losing messages when a Kafka server becomes un-\navailable during export. See the Kafka documentation of\nthe producer properties for details.\nacks.retry.timeout integer Specifies how long, in milliseconds, the connector will\nwait for acknowledgment from Kafka for each packet.\nThe retry timeout only applies if acknowledgements are\nenabled. That is, if the acks property is set greater than\nzero. The default timeout is 5,000 milliseconds. When\nthe timeout is reached, the connector will resend the\npacket of messages.\npartition.key {source}.{col-\numn}[,...]Specifies which source column value to use as the Kafka\npartitioning key for each stream. Kafka uses the partition\nkey to distribute messages across multiple servers.\nBy default, the value of the source's partitioning column\nis used as the Kafka partition key. Using this property\nyou can specify a list of column names, where the source\nname and column name are separated by a period and\nthe list of stream references is separated by commas. If a\nstream or table is not partitioned and you do not specify\na key, the server partition ID is used as a default.\nbinaryencoding hex, base64 Specifies whether VARBINARY data is encoded in\nhexadecimal or BASE64 format. The default is hexadec-\nimal.\nskipinternals true, false Specifies whether to include six columns of VoltDB\nmetadata (such as transaction ID and timestamp) in the\noutput. If you specify skipinternals as true, the output\n160Streaming Data: Import,\nExport, and Migration\nProperty Allowable Val-\nuesDescription\ncontains only the exported stream data. The default is\nfalse.\ntimezone string The time zone to use when formatting the timestamp.\nSpecify the time zone as a Java timezone identifier. The\ndefault is GMT.\ntopic.key string A set of named pairs each identifying a VoltDB source\nname and the corresponding Kafka topic name to which\nthe data is written. Separate the names with a period (.)\nand the name pairs with a comma (,).\nThe specific source/topic mappings declared by top-\nic.key override the automated topic names specified by\ntopic.prefix.\ntopic.prefix string The prefix to use when constructing the topic name.\nEach row is sent to a topic identified by {prefix}{source-\nname}. The default prefix is \"voltdbexport\".\nKafka producer properties various You can specify standard Kafka producer properties\nas export connector properties and their values will be\npassed to the Kafka interface. However, you cannot\nmodify the property block.on.buffer.full .\n*Required\n15.3.6. 
The Elasticsearch Export Connector\nThe Elasticsearch connector receives serialized data from the export source and inserts it into an Elastic-\nsearch server or cluster. Elasticsearch is an open-source full-text search engine built on top of Apache\nLucene™. By exporting selected tables and streams from your VoltDB database to Elasticsearch you can\nperform extensive full-text searches on the data not possible with VoltDB alone.\nBefore using the Elasticsearch connector, we recommend reading the Elasticsearch documentation and\nbecoming familiar with the software. The instructions in this section assume a working knowledge of\nElasticsearch, its configuration and its capabilities.\nThe only required property when configuring Elasticsearch is the endpoint, which identifies the location of\nthe Elasticsearch server and what index to use when inserting records into the target system. The structure\nof the Elasticsearch endpoint is the following:\n<protocol>://<server>:<port>//<index-name>//<type-name>\nFor example, if the target Elasticsearch service is on the server esearch.lan using the default port\n(9200) and the exported records are being inserted into the employees index as documents of type\nperson, the endpoint declaration would look like this:\n<property name=\"endpoint\">\n http://esearch.lan:9200/employees/person\n</property>\nYou can use placeholders in the endpoint that are replaced at runtime with information from the export\ndata, such as the source name (%t), the partition ID (%p), the export generation (%g), and the date and hour\n(%d). For example, to use the source name as the index name, the endpoint might look like the following:\n161Streaming Data: Import,\nExport, and Migration\n<property name=\"endpoint\">\n http://esearch.lan:9200/ %t/person\n</property>\nWhen you export to Elasticsearch, the export connector creates the necessary index names and types in\nElasticsearch (if they do not already exist) and inserts each record as a separate document with the appro-\npriate metadata. Table 15.6, “Elasticsearch Export Properties” lists the supported properties for the Elas-\nticsearch connector.\nTable 15.6. Elasticsearch Export Properties\nProperty Allowable Val-\nuesDescription\nendpoint*string Specifies the root URL of the RESTful interface for the\nElasticsearch cluster to which you want to export the da-\nta. The endpoint should include the protocol, host name\nor IP address, port, and path. The path is assumed to in-\nclude an index name and a type identifier.\nThe export connector will use the Elasticsearch RESTful\nAPI to communicate with the server and insert records\ninto the specified locations. You can use placeholders\nto replace portions of the endpoint with data from the\nexported records at runtime, including the source name\n(%t), the partition ID (%p), the date and hour (%d), and\nthe export generation (%g).\nbatch.mode true, false Specifies whether to send multiple rows as a single re-\nquest or send each export row separately. The default is\ntrue.\ntimezone string The time zone to use when formatting timestamps. Spec-\nify the time zone as a Java timezone identifier. The de-\nfault is the local time zone.\n*Required\n15.4. VoltDB Import Connectors\nJust as VoltDB can export data from selected streams and tables to external targets, it supports importing\ndata into selected tables from external sources. Import works in two ways:\n•Bulk loading data using one of several standalone utilities VoltDB provides. 
These data loaders support\nmultiple standard input protocols and can be run from any server, even remotely from the database itself.\n•Streaming import as part of the database server process. For data that is imported on an ongoing basis,\nuse of the built-in import functionality ensures that import starts and stops with the database.\nThe following sections discuss these two approaches to data import.\n15.4.1. Bulk Loading Data Using VoltDB Standalone Utilities\nOften, when migrating data from one database to another or when pre-loading a set of data into VoltDB\nas a starting point, you just want to perform the import once and then use the data natively within VoltDB.\n162Streaming Data: Import,\nExport, and Migration\nFor these one-time uses, or when you prefer to manage the import process externally, VoltDB provides\nseparate data loader utilities.\nEach data loader supports a different source format. You can load data from text files — such as com-\nma-separated value (CSV) files — using the csvloader utility. You can load data from another JDBC-com-\npliant database using the jdbcloader utility. Or you can load data from a streaming message service with\nthe Kafka loader utility, kafkaloader .\nAll of the data loaders operate in much the same way. For each utility you specify the source for the\nimport and either a table that the data will be loaded into or a stored procedure that will be used to load\nthe data. So, for example, to load records from a CSV file named staff.csv into the table EMPLOYEES,\nthe command might be the following:\n$ csvloader employees --file=staff.csv\nIf instead you are copying the data from a JDBC-compliant database, the command might look like this:\n$ jdbcloader employees \\\n --jdbcurl=jdbc:postgresql://remotesvr/corphr \\\n --jdbctable=employees \\\n --jdbcdriver=org.postgresql.Driver\nEach utility has arguments unique to the data source (such as --jdbcurl ) that allow you to properly\nconfigure and connect to the source. See the description of each utility in Appendix D, VoltDB CLI Com-\nmands for details.\n15.4.2. Streaming Import Using Built-in Import Features\nIf importing data is an ongoing business process, rather than a one-time event, then it is desirable to make\nit an integral part of the database system. This can be done by building a custom application to push data\ninto VoltDB using one of its standard APIs, such as the JDBC interface. Or you can take advantage of\nVoltDB's built-in import infrastructure.\nThe built-in importers work in much the same way as the data loading utilities, where incoming data is\nwritten into one or more database tables using an existing stored procedure. The difference is that the built-\nin importers start automatically whenever the database starts and stop when the database stops, making\nimport an integral part of the database process.\nYou configure the built-in importers in the configuration file the same way you configure export connec-\ntions. Within the <import> element, you declare each import stream using separate <configuration> el-\nements. Within the <configuration> tag you use attributes to specify the type and format of data being\nimported and whether the import configuration is enabled or not. Then enclosed within the <configura-\ntion> tags you use <property> elements to provide information required by the specific importer and/or\nformatter. 
For example:\n<import>\n <configuration type=\"kafka\" format=\"csv\" enabled=\"true\">\n <property name=\"brokers\">kafkasvr:9092</property>\n <property name=\"topics\">employees</property>\n <property name=\"procedure\">EMPLOYEE.insert</property>\n </configuration>\n</import>\nWhen the database starts, the import infrastructure starts any enabled configurations. If you are importing\nmultiple streams to separate tables through separate procedures, you must include multiple configurations,\n163Streaming Data: Import,\nExport, and Migration\neven if they come from the same source. For example, the following configuration imports data from two\nKafka topics from the same Kafka servers into separate VoltDB tables.\n<import>\n <configuration type=\"kafka\" enabled=\"true\">\n <property name=\"brokers\">kafkasvr:9092</property>\n <property name=\"topics\">employees</property>\n <property name=\"procedure\">EMPLOYEE.insert</property>\n </configuration>\n <configuration type=\"kafka\" enabled=\"true\">\n <property name=\"brokers\">kafkasvr:9092</property>\n <property name=\"topics\">managers</property>\n <property name=\"procedure\">MANAGER.insert</property>\n </configuration>\n</import>\nVoltDB currently provides support for two types of import:\n•Import from Apache Kafka (type=\"kafka\")\n•Import from Amazon Kinesis (type=\"kinesis\")\nVoltDB also provides support for two import formats: comma-separated values (csv) and tab-separated\nvalues (tsv). Comma-separated values are the default format. So if you are using CSV-formatted input,\nyou can leave out the format attribute, as in the preceding example.\nThe following sections describe each of the importers and the CSV/TSV formatter in more detail.\n15.4.3. The Kafka Importer\nThe Kafka importer connects to the specified Kafka messaging service and imports one or more Kafka\ntopics and writes the records into the VoltDB database. The data is decoded according to the specified\nformat — comma-separated values (CSV) by default — and is inserted into the VoltDB database using\nthe specified stored procedure.\nThe Kafka importer supports Kafka version 0.10 or later. You must specify at least the following properties\nfor each configuration:\n•brokers — Identifies one or more Kafka brokers. That is, servers hosting the Kafka service and desired\ntopics. Specify a single server or a comma-separated list of brokers.\n•topics — Identifies the Kafka topics that will be imported. The property value can be a single topic\nname or a comma-separated list of topics.\n•procedure — Identifies the stored procedure that is invoked to insert the records into the VoltDB data-\nbase.\nWhen import starts, the importer first checks to make sure the specified stored procedure exists in the\ndatabase schema. If not (for example, when you first create a database and before a schema is loaded), the\nimporter issues periodic warnings to the console.\nOnce the specified stored procedure is declared, the importer looks for the specified Kafka brokers and\ntopics. If the specified brokers cannot be found or the specified topics do not exist on the brokers, the\nimporter reports an error and stops. 
You will need to restart import once this error condition is corrected.\nYou can restart import using any of the following methods:\n164Streaming Data: Import,\nExport, and Migration\n•Stop and restart the database\n•Pause and resume the database using the voltadmin pause and voltadmin resume commands\n•Update the configuration using the voltadmin update command or the web-based VoltDB Management\nCenter\nIf the brokers are found and the topics exist, the importer starts fetching data from the Kafka topics and\nsubmitting it to the stored procedure to insert into the database. In the simplest case, you can use the default\ninsert procedure for a table to insert records into a single table. For more complex data you can write your\nown import stored procedure to interpret the data and insert it into the appropriate table(s).\nTable 15.7, “Kafka Import Properties” lists the allowable properties for the Kafka importer. You can also\nspecify properties associated with the formatter, as described in Table 15.9, “CSV and TSV Formatter\nProperties” .\nTable 15.7. Kafka Import Properties\nProperty Allowable Val-\nuesDescription\nbrokers*string A comma-separated list of Kafka brokers.\nprocedure*string The stored procedure to invoke to insert the incoming\ndata into the database.\ntopics*string A comma-separated list of Kafka topics.\ncommit.policy integer Because the importer performs two distinct tasks — re-\ntrieving records from Kafka and then inserting them in-\nto VoltDB — Kafka's automated tracking of the current\noffset may not match what records are successfully in-\nserted into the database. Therefore, by default, the im-\nporter uses a manual commit policy to ensure the Kafka\noffset matches the completed inserts.\nUse of the default commit policy is recommended. How-\never, you can, if you choose, use Kafka's automated\ncommit policy by specifying a commit interval, in mil-\nliseconds, using this property.\ngroupid string A user-defined name for the group that the client belongs\nto. Kafka maintains a single pointer for the current posi-\ntion within the stream for all clients in the same group.\nThe default group ID is \"voltdb\". In the rare case where\nyou have two or more databases importing data from the\nsame Kafka brokers and topics, be sure to set this prop-\nerty to give each database a unique group ID and avoid\nthe databases interfering with each other.\nfetch.max.bytes\nheartbeat.interval.ms\nmax.partition.fetch.bytes\nmax.poll.interval.ms\nmax.poll.records\nrequest.timeout.ms\nsession.timeout.msstring These Kafka consumer properties are supported as im-\nport properties. See the Kafka 0.11 documentation for\ndetails.\n*Required\n165Streaming Data: Import,\nExport, and Migration\n15.4.4. The Kinesis Importer\nThe Kinesis importer connects to the specified Amazon Kinesis stream and writes the records into the\nVoltDB database. Kinesis streams let you aggregate data from multiple sources, such as click streams and\nmedia feeds, which is then pushed as streaming data to the application. The VoltDB Kinesis importer acts\nas a target application for the Kinesis Stream. The data is decoded according to the specified format —\ncomma-separated values (CSV) by default — and is inserted into the VoltDB database using the specified\nstored procedure.\nWhen import starts, the importer first checks to make sure the specified stored procedure exists in the\ndatabase schema. 
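Assuming the stream and credentials are in place, a complete Kinesis import declaration might be sketched roughly as follows; the application name, region, stream, keys, and procedure shown here are placeholders, not working values:
<import>
 <configuration type="kinesis" format="csv" enabled="true">
 <property name="app.name">voltdb-clicks</property>
 <property name="region">us-east-1</property>
 <property name="stream.name">clickstream</property>
 <property name="access.key">YOUR_ACCESS_KEY</property>
 <property name="secret.key">YOUR_SECRET_KEY</property>
 <property name="procedure">CLICKS.insert</property>
 </configuration>
</import>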
If the specified stored procedure does not exist (for example, when you first create a database and before a schema is loaded), the importer issues periodic warnings to the console.
Once the specified stored procedure is declared, the importer looks for the specified Kinesis stream. If the stream cannot be found or accessed (for example, if the keys don't match), the importer reports an error and stops. You will need to restart import once this error condition is corrected. You can restart import using any of the following methods:
•Stop and restart the database
•Pause and resume the database using the voltadmin pause and voltadmin resume commands
•Update the configuration using the voltadmin update command or the web-based VoltDB Management Center
If the stream is found and can be accessed, the importer starts fetching data and submitting it to the stored procedure to insert into the database. In the simplest case, you can use the default insert procedure for a table to insert records into a single table. For more complex data you can write your own import stored procedure to interpret the data and insert it into the appropriate table(s).
Table 15.8, “Kinesis Import Properties” lists the allowable properties for the Kinesis importer. You can also specify properties associated with the formatter, as described in Table 15.9, “CSV and TSV Formatter Properties”.
Table 15.8. Kinesis Import Properties
Property Allowable Values Description
app.name* string A user-defined name that is used by Kinesis to track the application's current position in the stream.
procedure* string The stored procedure to invoke to insert the incoming data into the database.
region* string The Amazon region where the Kinesis stream service is running.
stream.name* string The name of the Kinesis stream.
access.key* string The Amazon access key for permitting access to the stream.
secret.key* string The Amazon secret key for permitting access to the stream.
max.read.batch.size integer The maximum number of records to read in a single batch. The default batch size is 10,000 records.
*Required
15.5. VoltDB Import Formatters
The import infrastructure uses formatters to interpret the incoming data and convert it for insertion into the database. If you use the CSV or TSV formatter, you can control how the data is interpreted by setting additional properties associated with those formatters. For example, the following configuration for the Kafka importer includes the formatter property blank specifying that blank entries should generate an error, rather than being interpreted as null or empty values:
<import>
 <configuration type="kafka" format="csv" enabled="true">
 <property name="brokers">kafkasvr:9092</property>
 <property name="topics">employees</property>
 <property name="procedure">EMPLOYEE.insert</property>
 <property name="blank">error</property>
 </configuration>
</import>
You include the formatter properties in the <configuration> element along with the import type properties. Table 15.9, “CSV and TSV Formatter Properties” lists the allowable properties for the CSV and TSV import formatters.
Table 15.9.
CSV and TSV Formatter Properties\nProperty Allowable Val-\nuesDescription\nblank empty, error, null Specifies what to do with missing values in the input.\nIf you specify empty, missing entries result in the cor-\nresponding \"empty\" value (that is, zero for INTEGER,\na zero-length string for VARCHAR, and so on); if you\nspecify error, missing entries generate an error, if you\nspecify null, missing entries result in a null value. The\ndefault interpretation of missing values is null.\nnowhitespace true, false Specifies whether the input can contain whitespace be-\ntween data values and separators. If you specify true,\nany input lines containing whitespace will generate an\nerror and not be inserted into the database. The default is\nfalse.\nnullstring string Specifies a custom string to be interpreted as a null val-\nue. By default, the following entries are interpreted as\nnull:\n•An empty entry\n•NULL (unquoted, uppercase)\n•\\N (quoted or unquoted, either upper or lowercase)\nIf you specify a custom null string, it overrides all de-\nfault null strings.\ntrimrawtext true, false Specifies whether any white space around unquoted\nstring values is included in the string input or not. If you\nspecify true, surrounding white space is dropped; if\nyou specify false, surrounding white space between\n167Streaming Data: Import,\nExport, and Migration\nProperty Allowable Val-\nuesDescription\nthe string value and the separators is included in the in-\nput value. The default is true.\n15.6. VoltDB Topics\nTopics use the Apache Kafka protocols for producing data for (input) and consuming data from (output) a\nVoltDB database. The configuration file declares the topic and specifies the stored procedure that receives\nthe inbound data. The CREATE STREAM... EXPORT TO TOPIC statement identifies the stream that is\nused to queue outbound data to the specified topic. VoltDB topics operate just like Kafka topics, with the\ndatabase nodes acting as Kafka brokers. However, unlike Kafka, VoltDB topics also have the ability to\nanalyze, act on, or even modify the data as it passes through.\nAs the preceding diagram shows, data submitted to the topic from a Kafka producer (either using the Kafka\nAPI or using a tool such as Kafka Connect) is passed to the stored procedure, which then interprets and\noperates on the data before passing it along to the stream through standard SQL INSERT semantics. Note\nthat the named procedure must exist before input is accepted. Similarly, the stream must be declared using\nthe EXPORT TO TOPIC clause and the topic be defined in the configuration file before any output is\nqueued. So, it is a combination of the database schema and configuration file that establishes the complete\ntopic workflow.\nFor example, the following SQL statements declare the necessary stored procedure and stream and the\nconfiguration file defines a topic eventLogs that integrates them:\nSchema DDL CREATE STREAM eventlog \n PARTITION ON COLUMN e_type \n EXPORT TO TOPIC eventlogs\n ( e_type INTEGER NOT NULL,\n e_time TIMESTAMP NOT NULL,\n e_msg VARCHAR(256)\n );\nCREATE PROCEDURE \n PARTITION ON TABLE eventlog COLUMN e_type \n FROM CLASS mycompany.myprocs.checkEvent;\nConfiguration File <topics>\n <topic name=\"eventLogs\" procedure=\"checkEvent\"/>\n</topics>\nConcerning Case Sensitivity\nThe names of Kafka topics are case sensitive. That means that the name of the topic matches\nexactly how it is specified in the configuration file. 
So in the previous example, the topic name\n168Streaming Data: Import,\nExport, and Migration\neventLogs is all lowercase except for the letter \"L\". This is how the producers and consumers\nmust specify the topic name. But SQL names — such as table and column names — are case\ninsensitive. As a result, the topic name specified in the EXPORT TO TOPIC clause does not have\nto match exactly. In other words, the topic \"eventLogs\" matches any stream that specifies the\ntopic name with the same spelling, regardless of case.\nThe structure of a topic message — that is, the fields included in the message and the message key — is\ndefined in the schema using the EXPORT TO TOPIC... WITH clause. Other characteristics of how the\nmessage is handled, such as the data format, security, and retention policy, are controlled by <property>\ntags in the configuration file. The following sections discuss:\n•Understanding the different types of topics\n•Declaring VoltDB topics\n•Configuring and managing topics\n•Configuring the topic server\n•Calling topics from external consumers and producers\n•Using opaque topics\n15.6.1. Types of VoltDB Topics\nVoltDB supports four different types of topics, depending on how the topic is declared:\n•A fully processed topic is a pipeline that supports both input and output and passes through a stored\nprocedure. This is defined using both the procedure attribute in the configuration file and the EX-\nPORT TO TOPIC clause in the CREATE STREAM statement.\n•An input-only topic only provides for input from Kafka producers. You define an input-only topic\nby specifying the procedure attribute, without any streams including a corresponding EXPORT TO\nTOPIC clause.\n•An output-only topic only provides for output to Kafka consumers but can be written to by VoltDB\nINSERT statements. You define an output-only topic by including the EXPORT TO TOPIC clause, but\nnot specifying a procedure in the topic declaration.\n169Streaming Data: Import,\nExport, and Migration\n•An opaque topic supports input and output but provides for no processing or interpretation. You de-\nfine an opaque topic using the opaque=\"true\" attribute in the configuration file, as described in\nSection 15.6.6, “Using Opaque Topics” .\n15.6.2. Declaring VoltDB Topics\nYou declare and configure topics by combining SQL stored procedures and streams with topic declarations\nin the database configuration file. The topic itself is defined in the configuration file, using the <topics>\nand <topic> elements. The configuration also lets you identify the stored procedure used for input from\nproducers:\n<topics>\n <topic name=\"eventLogs\"\n procedure=\"eventWatch\"/>\n</topics>\n15.6.2.1. Processing Topic Output\nFor output, you include the EXPORT TO TOPIC clause when you declare a stream. Once the stream\nincludes the EXPORT TO TOPIC clause and the topic is defined in the configuration file, any records\nwritten into the stream are made available to consumers through the topic port.\nYou can control what parts of the stream records are sent to the topic, using the WITH KEY/VALUE\nclauses. The WITH VALUE clause specifies which columns of the stream are included in the body of the\ntopic message and their order. The WITH KEY clause lets you specify one or more columns as a key for\nthe message. 
Columns can appear in either the message body or the key, in both, or in neither, as needed.\nIn all cases, the lists of columns are enclosed in parentheses and separated by commas.\nSo, for example, the following stream declaration associates the stream events with the topic eventLogs\nand selects two columns for the body of the topic message and one column as the key:\n170Streaming Data: Import,\nExport, and Migration\nCREATE STREAM events \n PARTITION ON COLUMN event_type \n EXPORT TO TOPIC eventLogs\n WITH KEY (event_type) VALUE (when,what)\n ( event_type INTEGER NOT NULL,\n when TIMESTAMP NOT NULL,\n what VARCHAR(256)\n );\n15.6.2.2. Processing Topic Input\nSince VoltDB does not control what content producers send to the topic, it cannot dictate what columns or\ndatatypes the stored procedure will receive. Instead, VoltDB interprets the content from its format. By de-\nfault, text data is interpreted as comma-separated values. All other data is interpreted as a single value based\non the data itself. On the other hand, if the topic is configured as using either JSON or AVRO formatted\ndata in the configuration file, the incoming data from producers will be interpreted in the specified format.\nAny errors during the decoding of the input fields is recorded in the log file. If the input can be decoded,\nthe message fields are used, in order, as arguments to the store procedure call.\nOnly one key field is allowed for input. By default, the key is not passed to the specified stored procedure;\nonly the message fields of the topic are passed as parameters to the stored procedure. If you want to include\nthe key in the list of parameters to the stored procedure, you can set the property producer.parame-\nters.includeKey to true and the key will be included as the partitioning parameter for the procedure.\nFor example:\n<topics>\n <topic name=\"eventLogs\" procedure=\"eventWatch\">\n <property name=\"producer.parameters.includeKey\">true</property>\n </topic>\n</topics>\n15.6.3. Configuring and Managing Topics\nDeclaring the topic and its stream and/or procedure are the only required elements for creating a topic.\nHowever, there are several other attributes you can specify either as part of the declaration or as clauses\nto the stored procedure and stream declarations. Those attributes include:\n•Permissions — Controlling access to the topic by consumers and producers\n•Retention — Managing how long data is retained in the topic queue before being deleted\n•Data Format — Choosing a format for the data passed to the external clients\n15.6.3.1. Permissions\nWhen security is enabled for the database, the external clients must authenticate using a username and\npassword when they initiate contact with the server. Access to the topic is handled separately for consumers\nand producers.\nFor producers, access to the topic is controlled by the security permissions of the associated stored proce-\ndure, as defined by the CREATE PROCEDURE... ALLOW clause or the generic permissions of the user\naccount's role. (For example, a role with the ALLPROC or ADMIN permissions can write to any topic.)\nFor consumers, access to the topic is restricted by the allow attribute of the topic declaration in the\nconfiguration file. If allow is not specified, any authenticated user can read from the topic. If allow\n171Streaming Data: Import,\nExport, and Migration\nis included in the declaration, only users with the specified role(s) have access. 
You specify permissions\nby providing a comma-separated list of roles that can read from the topic. For example, the following\ndeclaration allows users with the kreader and operator roles to read from the topic eventLogs:\n<topics>\n <topic name=\"eventLogs\" allow=\"kreader,operator\" />\n</topics>\n15.6.3.2. Retention\nUnlike export or import, where there is a single destination or source, topics can have multiple consumers\nand producers. So there is no specific event when the data transfer is complete and can be discarded.\nInstead, you must set a retention policy that defines when data is aged out of the topic queues. You specify\nthe retention policy in terms of either the length of time the data is in queue or the volume of data in the\nqueue.\nFor example, if you specify a retention policy of five days, after a record has been in the queue for five\ndays, it will be deleted. If, instead, you set a retention policy of five gigabytes, as soon as the volume of\ndata in the queue exceeds 5GB, data will deleted until the queue size is under the specified limit. In both\ncases, data aging is a first in, first out process.\nYou specify the retention policy in the retention attribute of the <topic> declaration. The retention\nvalue is a positive integer and a unit, such as \"gb\" for gigabytes or \"dy\" for days. The following is the\nlist of valid retention units:\nTimemn — Minutes\nhr — Hours\ndy — Days\nwk — Weeks\nmo — Months\nyr — Years\nSizemb — Megabytes\ngb — Gigabytes\nIf you do not specify a retention value, the default policy is seven days (7 dy).\n15.6.3.3. Data Format\nVoltDB topics are composed of three elements: a timestamp, a record with one or more fields, and an\noptional set of keys values. The timestamp is generated automatically when the record is inserted into the\nstream. The format of the record and the key depends on the data itself. Or you can specify a format for\nthe record, for the key, or for both using properties of the topic declaration in the configuration file.\nFor single value records and keys, the data is sent in the native Kafka binary format for that datatype. For\nmulti-value records or keys, VoltDB defaults to sending the content as comma-separated values (CSV) in\na text string. Similarly, on input from producers, the topic record is interpreted as a single binary format\nvalue or a CSV string, depending on the datatype of the content.\nYou can control what format is used to send and receive the topic data using either the format attribute\nof the <topic> element, or separate <property> child elements to select the format of individual\ncomponents. For example, to specify the format for the message and the keys for both input and output,\nyou can use the attribute format=\"avro\" :\n<topics>\n <topic name=\"eventLogs\" format=\"avro\" />\n</topics>\n172Streaming Data: Import,\nExport, and Migration\nTo specify individual formats for input versus output, or message versus keys, you can use <property>\nelements as children of the <topic> tag, where the property name is either consumer or producer\nfollowed by format and, optionally, the component type — all separated by periods. 
For example, the\nfollowing declaration specifies Avro for both consumers and producers, and is equivalent to the preceding\nexample using the format attribute:\n<topics>\n <topic name=\"eventLogs\">\n <property name=\"consumer.format\">avro</property>\n <property name=\"producer.format\">avro</property>\n </topic>\n</topics>\nThe following are the valid formatting properties:\n•consumer.format\n•consumer.format.key\n•consumer.format.value\n•producer.format.value\nFor input, note that you cannot specify the format of the key. This is because only a single key value is\nsupported for producers and it is always assumed to be in native binary or string format.\nDepending on what format you choose, you can also control specific aspects of how data is represented\nin that format. For example, you can specify special characters such as the separator, quote, and escape\ncharacter in CSV format. Table 15.10, “Topic Formatting Properties” lists all of the supported formatting\nproperties you can use when declaring topics in the configuration file.\nTable 15.10. Topic Formatting Properties\nProperty Values Description\nconsumer.format avro, csv, json Format of keys and values sent to consumers. Supersedes the\nformat definition in the <topic> deployment element. The de-\nfault is CSV.\nconsumer.for-\nmat.valueavro, csv, json Format of values sent to consumers. Supersedes the format\ndefinition in the <topic> deployment element and the \"con-\nsumer.format\" property. The default is CSV.\nconsumer.for-\nmat.keyavro, csv, json Format of keys sent to consumers. Supersedes the format\ndefinition in the <topic> deployment element and the \"con-\nsumer.format\" property. The default is CSV.\nproducer.for-\nmat.valueavro, csv, json Format of values received from producers. Supersedes the for-\nmat definition in the <topic> deployment element. The default\nis CSV.\nconfig.avro.time-\nstampmicroseconds, mil-\nlisecondsUnit of measure for timestamps in AVRO formatted fields. The\ndefault is microseconds.\nconfig.avro.geogra-\nphyPointbinary, fixed_bina-\nry, stringDatatype for GEOGRAPHY_POINT columns in AVRO for-\nmatted fields. The default is fixed_binary.\nconfig.avro.geogra-\nphybinary,string Datatype for GEOGRAPHY columns in AVRO formatted\nfields. The default is binary.\nconfig.csv.escape character Character used to escape the next character in a quoted string\nin CSV format. The default is the backslash \"\\\".\n173Streaming Data: Import,\nExport, and Migration\nProperty Values Description\nconfig.csv.null character(s) Character(s) representing a null value in CSV format. The de-\nfault is \"\\N\".\nconfig.csv.quote character Character used to enclose quoted strings in CSV format. The\ndefault is the double quotation character (\").\nconfig.csv.separa-\ntorcharacter Character separating values in CSV format. The default is the\ncomma \",\".\nconfig.csv.quoteAll true, false Whether all string values are quoted or only strings with special\ncharacters (such as commas, line breaks, and quotation marks)\nin CSV format. The default is false.\nconfig.csv.stric-\ntQuotestrue, false Whether all string values are expected to be quoted on input.\nIf true, any characters outside of quotation marks in text fields\nare ignored. The default is false.\nconfig.csv.ignore-\nLeadingWhitespacetrue, false Whether leading spaces are included in string values in CSV\nformat. The default is true.\nconfig.json.schema embedded, none Whether the JSON representation contains a property named\n\"schema\" embedded within it or not. 
If embedded, the schema\nproperty describes the layout of the object. The default is none.\nconfig.json.con-\nsumer.attributesstring Specifies the names of outgoing JSON elements. By default,\nJSON elements are named after the table columns they repre-\nsent. This property lets you rename the columns on output.\nconfig.json.produc-\ner.attributesstring Specifies the name and order of the JSON elements that are\ninserted as parameters to the topic input procedure.\nproducer.parame-\nters.includeKeytrue, false Whether the topic key is included as the partitioning parameter\nto the stored procedure call. The default is false.\nopaque.partitioned true, false Whether the opaque topic is partitioned. Ignored if not an\nopaque topic. The default is false\ntopic.store.encoded true, false Whether the topic is stored in the same format as issued by the\nproducer: optimizes transcoding to consumers when producer\nand consumer formats are identical. The default is false.\nWhen using AVRO format, you must also have access to an AVRO schema registry, which is where\nVoltDB stores the schema for AVRO-formatted topics. The URL for the registry is specified in the database\nconfiguration file, as described in the next section.\n15.6.4. Configuring the Topic Server\nCommunication between the VoltDB database and topic clients is handled by a separate server process:\nthe topic server. The topic server process is started whenever VoltDB starts with the <topics> element\ndeclared and enabled in the configuration file.\nBy default, the topic server, when running, listens on port 9092. You can specify a different port with the\nport attribute of the <topics> element. Other aspects of the topic server operation are configured as\nproperties of the <broker> element, which if present must be the first child of the <topics> element.\nThe following are the supported properties of the <broker> element:\n•cluster.id\n•group.initial.rebalance.delay.ms\n•group.load.max.size\n174Streaming Data: Import,\nExport, and Migration\n•group.max.session.timeout.ms\n•group.max.size\n•group.min.session.timeout.ms\n•log.cleaner.dedupe.buffer.size\n•log.cleaner.delete.retention.ms\n•log.cleaner.threads\n•network.thread.count\n•offsets.retention.check.interval.ms\n•offsets.retention.minutes\n•quota.throttle.max_ms\n•quota.request.bytes_per_second\n•quota.request.processing_percent\n•quota.response.bytes_per_second\n•retention.policy.threads\nFor example, this declaration configures the broker using port 9999, a cluster ID of 3, and five network\nthreads:\n<topics port=\"9999\">\n <broker>\n <property name=\"cluster.id\">3</property>\n <property name=\"network.thread.count\">5</property>\n </broker>\n</topics>\nFinally, you can additionally tune the performance of the topic server by adjusting the threads that man-\nage the inbound and outbound connections. You can specify a threadpool for the topic server to use for\nprocessing client requests using the threadpool attribute of the <topics> , then specify a size for the\npool in the <threadpools> element:\n<topics threadpool=\"topics\">\n [ . . . ]\n</topics>\n<threadpools>\n <pool name=\"topics\" size=\"10\"/>\n</threadpools>\n15.6.5. Calling Topics from Consumers and Producers\nOnce the topic has been declared in the database configuration and the appropriate streams and stored\nprocedures created in the schema, the topic is ready for use by external clients. 
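For example, because the topic server speaks the standard Kafka wire protocol, an ordinary Java Kafka consumer can subscribe to the eventLogs topic used throughout this section. The following is a minimal sketch only: the broker address matches the earlier examples, while the group ID, the String deserializers, and the polling loop are illustrative assumptions rather than required settings.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class EventLogReader {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "myvoltdb:9092");   // VoltDB topic server
            props.put("group.id", "eventlog-readers");          // illustrative consumer group
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("eventLogs"));
                while (true) {   // read and print messages until the process is stopped
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("key=%s value=%s%n", record.key(), record.value());
                    }
                }
            }
        }
    }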
Since VoltDB topics use
the Kafka API protocol, any Kafka consumer or producer with the appropriate permissions can access the
topics. For example, you can use the console consumer that comes with Kafka to read topics from VoltDB:
$ bin/kafka-console-consumer.sh --from-beginning \
 --topic eventLogs --bootstrap-server myvoltdb:9092
You can even use the console producer. However, to optimize write operations, Kafka needs to know
the VoltDB partitioning scheme. So it is strongly recommended that you define the Kafka ProducerConfig.PARTITIONER_CLASS_CONFIG property to point to the VoltDB partitioner for Kafka. By defining
the PARTITIONER_CLASS_CONFIG, VoltDB can ensure that the producer sends records to the appropriate cluster node for each partitioning key. For example, a Java-based client application should contain
a producer definition similar to the following:
 Properties props = new Properties();
 props.put("bootstrap.servers", "myvoltdb:9092");
 props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, VoltDBKafkaPartitioner.class.getName());
 props.put("client.id","myConsumer");
 props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
 props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
 Producer<String, String> producer = new KafkaProducer<>(props); 
To access the VoltDB partitioner for Kafka, be sure to include the VoltDB client library JAR file in your
classpath when compiling and running your producer client.
15.6.6. Using Opaque Topics
Opaque topics are a special type of topic that do not receive any interpretation or modification by the database. If you want to create a topic that is not processed but simply flows through VoltDB from producers
to consumers, you declare the topic as "opaque" in the configuration file, without either specifying a stored
procedure for input or associating a stream with the topic for output.
<topic name="sysmsgs" opaque="true" />
Opaque topics allow you to use a single set of brokers for all your topics even if you only need to analyze
and process certain data feeds. Because there is no interpretation, you cannot specify a stored procedure,
a stream, or a format for the topic. However, there are a few properties specific to opaque topics you can
use to control how the data is handled.
One important control is whether the opaque topics are partitioned or not. Partitioning the opaque topics
improves throughput by distributing processing across the cluster. However, you can only partition opaque
topics that have a key. To partition an opaque topic, you set the opaque.partitioned property to true:
<topic name="sysmsgs" opaque="true">
 <property name="opaque.partitioned">true</property>
</topic>
You can specify a retention policy for opaque topics, just like regular topics. In fact, opaque topics have
one additional retention option. Since the content is not analyzed in any way, it can be compressed to save
space while it is stored. By specifying the retention policy as "compact" with a time limit, the records are
stored compressed until the time limit expires. For example, the following configuration compresses the
opaque topic data, then deletes it after two months:
<topic name="sysmsgs" opaque="true" retention="compact 2 mo" >
 <property name="opaque.partitioned">true</property>
</topic>
Appendix A. 
Supported SQL DDL\nStatements\nThis appendix describes the subset of the SQL Data Definition Language (DDL) that VoltDB supports\nwhen defining the schema for a VoltDB database. VoltDB also supports extensions to the standard syntax\nto allow for the declaration of stored procedures and partitioning information related to tables and proce-\ndures.\nThe following sections are not intended as a complete description of the standard SQL DDL. Instead, they\nsummarize the subset of standard SQL DDL statements that are allowed when defining a VoltDB schema\nand any exceptions, extensions, or limitations that application developers should be aware of.\nThe supported standard SQL DDL statements are:\n•ALTER TABLE\n•CREATE INDEX\n•CREATE TABLE\n•CREATE VIEW\n•DROP INDEX\n•DROP TABLE\n•DROP VIEW\nThe supported VoltDB-specific extensions for declaring functions, stored procedures, streams, and parti-\ntioning are:\n•ALTER STREAM\n•ALTER TASK\n•CREATE AGGREGATE FUNCTION\n•CREATE FUNCTION\n•CREATE PROCEDURE AS\n•CREATE PROCEDURE FROM CLASS\n•CREATE ROLE\n•CREATE STREAM\n•CREATE TASK\n•DR TABLE\n•DROP FUNCTION\n•DROP PROCEDURE\n•DROP ROLE\n•DROP STREAM\n•DROP TASK\n•PARTITION PROCEDURE\n•PARTITION TABLE\n177Supported SQL DDL Statements\nALTER STREAM\nALTER STREAM — Modifies an existing stream definition.\nSyntax\nALTER STREAM stream-name DROP [COLUMN] column-name\nALTER STREAM stream-name ADD column-definition [BEFORE column-name ]\nALTER STREAM stream-name ALTER column-definition\nALTER STREAM stream-name ALTER [COLUMN] column-name SET {DEFAULT value | [NOT]\nNULL}\ncolumn-definition: column-name datatype [DEFAULT value ] [ NOT NULL ]\nDescription\nThe ALTER STREAM statement modifies an existing stream by adding, dropping, or modifying a column\nassociated with the stream. You cannot drop or modify the column if there are dependencies on that column.\nFor example, if stored procedure queries reference a dropped or modified column, you cannot make the\nchange. In this case, you must drop the stored procedures before making the change to the stream's schema,\nthen recreate the stored procedures afterwards.\nIf you drop the stream as a whole (using the DROP STREAM statement) and then redefine it using\nCREATE STREAM, any pending data not already sent to the stream's export target is deleted. ALTER\nSTREAM, on the other hand, does not interrupt pending data. 
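For instance, assuming the invoice stream shown in the example below, you could add a column in place without disturbing the export queue (the po_number column and its datatype are illustrative assumptions, not part of the original example):

    ALTER STREAM invoice ADD po_number VARCHAR(32);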
By using ALTER STREAM to modify the
schema of the stream, all previously committed data stays in the queue for the target and any inserts after
the schema change are added to the queue.
Example
The following example modifies an existing stream, invoice, changing the definition of the customer
column.
ALTER STREAM invoice ALTER COLUMN customer SET NOT NULL;
ALTER TABLE
ALTER TABLE — Modifies an existing table definition.
Syntax
ALTER TABLE table-name DROP CONSTRAINT constraint-name
ALTER TABLE table-name DROP [COLUMN] column-name [CASCADE]
ALTER TABLE table-name DROP {PRIMARY KEY | TTL}
ALTER TABLE table-name ADD constraint-definition
ALTER TABLE table-name ADD column-definition [BEFORE column-name ]
ALTER TABLE table-name ADD ttl-definition
ALTER TABLE table-name ALTER column-definition [CASCADE]
ALTER TABLE table-name ALTER [COLUMN] column-name SET {DEFAULT value | [NOT] NULL}
ALTER TABLE table-name ALTER export-definition
ALTER TABLE table-name ALTER ttl-definition
column-definition: [COLUMN] column-name datatype [DEFAULT value ] [ NOT NULL ] [index-type]
constraint-definition: [CONSTRAINT constraint-name ] { index-definition }
export-definition: EXPORT TO TARGET target-name [ON action [,...]]
index-definition: {index-type} ( column-name [,...])
ttl-definition: USING TTL value [time-unit] ON COLUMN column-name
[BATCH_SIZE number-of-rows ] [MAX_FREQUENCY value]
index-type: PRIMARY KEY | UNIQUE | ASSUMEUNIQUE
Description
The ALTER TABLE statement modifies an existing table definition by adding, removing, or modifying a column,
constraint, or clause. There are several different forms of the ALTER TABLE statement, depending on
what attribute you are altering and how you are changing it. The key point to remember is that you only
alter one item at a time. For example, to change two columns or a column and a constraint, you need to
issue two ALTER TABLE statements.
There are three ALTER TABLE operations:
•ALTER TABLE ADD
•ALTER TABLE DROP
•ALTER TABLE ALTER
The syntax of each statement depends on whether you are modifying a column, a constraint, or the TTL
clause. You can ADD or DROP columns, indexes, and the TTL clause, and you can ALTER columns
and the TTL clause. However, you cannot ALTER indexes. To alter an existing constraint you must first
DROP the constraint and then ADD the new definition.
There are two forms of the ALTER TABLE DROP statement. You can drop a column or constraint by
name, or you can drop a PRIMARY KEY or a USING TTL clause by identifying the item to drop, since
there is only one such item for any given table.
The ALTER TABLE ADD statement uses the same syntax to define a new column, constraint, or clause
as that used in the CREATE TABLE command. When adding a column you can also
specify the BEFORE clause to specify where the new column falls in the order of table columns. If you do
not specify BEFORE, the column is added at the end of the list of columns.
When modifying the USING TTL clause, the ALTER TABLE ALTER command specifies the complete
replacement definition for the clause, including either or both of the BATCH_SIZE and MAX_FREQUENCY
clauses.
You cannot alter the MIGRATE TO TARGET attribute of the table. You also cannot alter any attributes
of the table that affect migration. For example, you cannot add, drop, or alter the USING TTL clause if
the table is declared with MIGRATE TO TARGET. 
And if the table has both MIGRATE TO TARGET\nand USING TTL, you cannot add, drop, or alter the TTL column. However, you can alter the TTL value,\nbatch size, and frequency.\nTo add, drop, or alter the MIGRATE action you must drop the table first and redefine it using the CREATE\nTABLE statement.\nWhen modifying columns, the ALTER TABLE ALTER COLUMN statement can have one of two forms.\nYou can alter the column by providing a complete replacement definition, similar to the ALTER TABLE\nADD COLUMN statement, or you can alter a specific attribute using the ALTER TABLE ALTER COL-\nUMN... SET syntax. Use SET DEFAULT to add or modify an existing default. Use SET DEFAULT\nNULL to remove an existing default. You can also use the SET clause to specify whether the column can\nbe null (SET NULL) or must not contain a null value (SET NOT NULL).\nHandling Dependencies\nYou can only alter tables if there are no dependencies on the table, column, or index that would be violated\nby the change. For example, you cannot drop the partitioning column from a partitioned table if there\nare stored procedures partitioned on that table and column as well. You must first drop the partitioned\nstore procedures before dropping the column. Note that by dropping the partitioning column, you are also\nautomatically changing the table into a replicated table.\nThe most common dependency is if the table already has data in it. You can add, delete, and (within\nreasonable bounds) modify the columns of a table with existing data as long as those columns are not\nnamed in an index, view, or PARTITION statement. If a column is referenced in a view or index, you can\nspecify CASCADE when you drop the column to automatically drop the referring indexes and views.\nWhen a table has records in it, data associated with dropped columns is deleted. Added columns are inter-\npreted as null or filled in with the specified default value. (You cannot add a column that is defined as\nNOT NULL, but without a default, if the table has existing data in it.) You can even change the datatype\nof the column within reason. In other words, you can increase the size of the datatype (for example, from\nINTEGER to BIGINT) but you cannot decrease the size (say, from INTEGER to TINYINT) since some\nof the existing data may already violate the size constraint.\n180Supported SQL DDL Statements\nYou can also add non-unique indexes to tables with existing data. However, you cannot add unique con-\nstraints (such as PRIMARY KEY) if data exists.\nIf a table has no records in it, you can make almost any changes you like to it assuming, again, there are\nno dependencies. You can add and remove unique constraints, add, remove, and modify columns, even\nchange column datatypes at will.\nHowever, if there are dependencies, such as stored procedure queries that reference a dropped or modified\ncolumn, you may not be allowed to make the change. 
If there are such dependencies, it is often easier to
drop the stored procedures before making the changes, then recreate the stored procedures afterwards.
Examples
The following example uses ALTER TABLE to drop a unique constraint, add a new column, and then
recreate the constraint adding the new column.
ALTER TABLE Employee DROP CONSTRAINT UniqueNames;
ALTER TABLE Employee ADD COLUMN MiddleInitial VARCHAR(1);
ALTER TABLE Employee ADD CONSTRAINT UniqueNames 
 UNIQUE (FirstName, MiddleInitial, LastName);
ALTER TASK
ALTER TASK — Modifies an existing task schedule.
Syntax
ALTER TASK task-name [ENABLE | DISABLE]
ALTER TASK task-name ALTER ON ERROR {LOG | IGNORE | STOP}
Description
The ALTER TASK statement lets you modify an existing scheduled task. You can enable, disable, or
change the error handling for the task.
Examples
The following example changes the error handling for the task cleanup to log errors and continue, then
enables the task, in case it was previously disabled.
ALTER TASK cleanup ALTER ON ERROR LOG;
ALTER TASK cleanup ENABLE;
CREATE AGGREGATE FUNCTION
CREATE AGGREGATE FUNCTION — Defines an aggregate SQL function and associates it with a Java
class.
Syntax
CREATE AGGREGATE FUNCTION function-name FROM CLASS class-path
Description
The CREATE AGGREGATE FUNCTION statement declares a user-defined aggregate function and associates it with a Java class. Aggregate functions process multiple values based on a query expression and
produce a single result. For example, the built-in AVG aggregate function calculates the average of the
values of a specific column or expression based on the query constraints.
The return value of a user-defined aggregate function matches the datatype of the Java method itself.
Similarly, the number and datatype of the function's arguments are defined by the arguments of the method.
User-defined aggregate functions allow you to extend the functionality of the SQL language by declaring
your own functions that can be used in SQL queries and data manipulation statements. The steps for
creating a user-defined aggregate function are:
1.Write, compile, and debug the program code for a class that performs the function's action. The class
must include the following methods:
•start() — Initializes the function. Called once for each invocation of the function.
•assemble( arg,... ) — Processes the arguments to the function. Called once for each record
matching the constraints of the query in which the function appears.
•combine( class-instance ) — For partitioned queries, combines the results of one partition
into the results of another.
•end() — Finalizes the function and returns the function result. Called once at the completion of
the function invocation.
2.Package the class in a JAR file, just as you would a stored procedure. (Classes for functions and stored
procedures can be packaged in the same JAR file.)
3.Load the JAR file into the database using the LOAD CLASSES statement.
4.Declare and name the user-defined function using the CREATE AGGREGATE FUNCTION statement.
The Java methods that implement the user-defined function must follow the same rules for determinism
as user-defined stored procedures, as outlined in Section 5.1.2.2, “Avoid Introducing Non-deterministic
Values from External Functions”. 
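To make the required methods concrete, the following is a minimal sketch of such a class, modeled on the LongestWord example shown below. The package and class names come from that example; making the class Serializable and the exact signatures used here are assumptions for illustration, not a definitive implementation.

    package myapp.functions;

    import java.io.Serializable;

    public class LongestWord implements Serializable {
        private String longest;                   // running result for this partition

        public void start() {                     // called once per invocation
            longest = null;
        }

        public void assemble(String word) {       // called once per matching record
            if (word != null && (longest == null || word.length() > longest.length())) {
                longest = word;
            }
        }

        public void combine(LongestWord other) {  // merge another partition's result
            if (other.longest != null
                    && (longest == null || other.longest.length() > longest.length())) {
                longest = other.longest;
            }
        }

        public String end() {                     // return the final aggregate value
            return longest;
        }
    }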
See the chapter on \" Creating Custom SQL Functions \" in the VoltDB\nGuide to Performance and Customization for details on designing the Java class and methods necessary\nfor a user-defined aggregate function.\nTo declare a scalar rather than an aggregate function, see the description of the CREATE FUNCTION\nstatement.\n183Supported SQL DDL Statements\nExamples\nThe following example defines an aggregate function called longest_word from the start(), assemble(),\ncombine(), and end() methods in the class LongestWord:\nCREATE AGGREGATE FUNCTION longest_word FROM CLASS myapp.functions.LongestWord;\n184Supported SQL DDL Statements\nCREATE FUNCTION\nCREATE FUNCTION — Defines a SQL scalar function and associates it with a Java method.\nSyntax\nCREATE FUNCTION function-name FROM METHOD class-path .method-name\nDescription\nThe CREATE FUNCTION statement declares a user-defined function and associates it with a Java method.\nThe return value of the function matches the datatype of the Java method itself. Similarly, the number and\ndatatype of the function's arguments are defined by the arguments of the method.\nUser-defined functions allow you to extend the functionality of the SQL language by declaring your own\nfunctions that can be used in SQL queries and data manipulation statements. The steps for creating a user-\ndefined function are:\n1.Write, compile, and debug the program code for the method that will perform the function's action.\n2.Package the class and method in a JAR file, just as you would a stored procedure. (Classes for functions\nand stored procedures can be packaged in the same JAR file.)\n3.Load the JAR file into the database using the LOAD CLASSES statement.\n4.Declare and name the user-defined function using the CREATE FUNCTION statement.\nFor example, let's say you want to create function that decodes an HTML-encoded string. The beginning\nof the Java method might look like this, declaring a method of type String and accepting two arguments:\nthe string to encode and an integer value for the maximum length.\npackage myapp.datatypes;\n public class Html {\n \n public String decode( String html, int maxlength )\n throws VoltAbortException {\nAfter compiling and packaging this class into a JAR file, you can load the class and declare it as a SQL\nfunction:\nsqlcmd\n1> LOAD CLASSES myfunctions.jar;\n2> CREATE FUNCTION html_decode FROM METHOD myapp.datatypes.Html.decode;\nNote that the function name and method name do not have to be identical. Also, the function name is not\ncase sensitive. However, the Java class and method names are case sensitive. Finally, the Java methods\nfor user-defined functions must follow the same rules for determinism as user-defined stored procedures,\nas outlined in Section 5.1.2.2, “Avoid Introducing Non-deterministic Values from External Functions” .\nExamples\nThe following example defines a function called emoticon from a Java method findEmojiCode:\n185Supported SQL DDL Statements\nCREATE FUNCTION emoticon FROM METHOD utils.Charcode.findEmojiCode;\n186Supported SQL DDL Statements\nCREATE INDEX\nCREATE INDEX — Creates an index for faster access to a table.\nSyntax\nCREATE [UNIQUE|ASSUMEUNIQUE] INDEX index-name\nON {table-name | view-name } ( index-column [,...])\n[WHERE [NOT] boolean-expression [ {AND | OR} [NOT] boolean-expression]...]\nDescription\nCreating an index on a table or view makes read access to the data faster when using the columns of the\nindex as a key. 
Note that VoltDB creates an index automatically when you specify a constraint, such as\na primary key, in the CREATE TABLE statement.\nWhen you specify that the index is UNIQUE, VoltDB constrains the table to at most one row for each set\nof index column values. If an INSERT or UPDATE statement attempts to create a row where all the index\ncolumn values match an existing indexed row, the statement fails.\nBecause the uniqueness constraint is enforced separately within each partition, only indexes on replicated\ntables or containing the partitioning column of partitioned tables can ensure global uniqueness for parti-\ntioned tables and therefore support the UNIQUE keyword.\nIf you wish to create an index on a partitioned table that acts like a unique index but does not include the\npartitioning column, use the keyword ASSUMEUNIQUE instead of UNIQUE. Assumed unique indexes\nare treated like unique indexes (VoltDB verifies they are unique within the current partition). However,\nit is your responsibility to ensure these indexes are actually globally unique. Otherwise, it is possible an\nindex will generate a constraint violation during an operation that modifies the partitioning of the database\n(such as adding nodes on the fly or restoring a snapshot to a different cluster configuration).\nThe indexed items ( index-column ) are either columns of the specified table or expressions, including func-\ntions, based on the table. For example, the following statements index a table based on the calculated area\nand its distance from a set location:\nCREATE INDEX areaofplot ON plot (width * height);\nCREATE INDEX distancefrom49 ON plot ( ABS(latitude - 49) );\nYou can create a partial index by including a WHERE clause in the index definition. The WHERE clause\nlimits the number of rows that get indexed. This is useful if certain columns in the index are not evenly\ndistributed. For example, if you are not interested in records where a column is null, you can use a WHERE\nclause to exclude those records and optimize the size and performance of the index.\nThe partial index is utilized by the database when a query's WHERE clause contains the same condition\nas the partial index definition. A special case is if the index condition is {column} IS NOT NULL . In\nthis situation, the index may be applied even in the query does not contain that exact condition, as long as\nthe query contains a WHERE condition that implies the column is not null, such as {column} > 0 .\nVoltDB uses tree indexes. They provide the best general performance for a wide range of operations,\nincluding exact value matches and queries involving a range of values, such as SELECT ... WHERE\nScore > 1 AND Score < 10 .\n187Supported SQL DDL Statements\nExamples\nThe following example creates two indexes on a single table. The first is, by default, a non-unique index\nbased on the departure time The second is a unique index based on the columns for the airline and flight\nnumber.\nCREATE INDEX flightTimeIdx ON FLIGHT ( departtime );\nCREATE UNIQUE INDEX FlightKeyIdx ON FLIGHT ( airline, flightID );\nYou can also use functions in the index definition. 
For example, the following is an index based on the\nelement movie within a JSON-encoded VARCHAR column named favorites and the member's ID.\nCREATE INDEX FavoriteMovie ON MEMBER ( \n FIELD( favorites, 'movie' ), memberID\n);\nThe following example demonstrates the use of a partial index, by including a WHERE clause, to exclude\nrecords with a null column.\nCREATE INDEX completed_tasks \n ON tasks (task_id, startdate, enddate)\n WHERE enddate IS NOT NULL;\n188Supported SQL DDL Statements\nCREATE PROCEDURE AS\nCREATE PROCEDURE AS — Defines a stored procedure composed of one or more SQL statements.\nSyntax\nCREATE PROCEDURE procedure-name\n[PARTITION ON TABLE table-name COLUMN column-name [PARAMETER position]]\n[ALLOW role-name [,...]]\nAS {sql-statement ; | multi-statement-procedure }\nCREATE PROCEDURE procedure-name DIRECTED\n[ALLOW role-name [,...]]\nAS {sql-statement ; | multi-statement-procedure }\nmulti-statement-procedure:\nBEGIN\n sql-statement ; [,...]\nEND;\nDescription\nYou must declare stored procedures as part of the schema to make them accessible at runtime. The CRE-\nATE PROCEDURE AS statement lets you create a procedure from one or more SQL statements directly\nwithin the DDL statement. The SQL statements can contain question marks (?) as placeholders that are\nfilled in at runtime with the arguments to the procedure call.\nThere are two ways to define a procedure as part of the CREATE PROCEDURE AS statement:\n•A single statement procedure where the CREATE PROCEDURE AS statement is followed by one SQL\nstatement terminated by a semi-colon.\n•A multi-statement procedure where the CREATE PROCEDURE AS statement is followed by multiple\nSQL statements enclosed in a BEGIN-END clause.\nFor a single statement, the stored procedure returns the results of the query as a VoltTable. For multi-state-\nment procedures, the results are returned as an array of VoltTable structures, one for each statement.\nFor all CREATE PROCEDURE AS statements, the procedure name must follow the naming conventions\nfor Java class names. For example, the name is case-sensitive and cannot contain any white space.\nYou can create three types of stored procedures:\n•Multi-Partition Procedures — By default, the CREATE PROCEDURE statement declares a multi-par-\ntition procedure. A multi-partition procedure runs as a single transaction and has access to data from\nthe entire database. However, it also means that the procedure will access all of the partitions at once,\nblocking the transaction queues until the procedure is done.\n•Single-Partition Procedures — If you include the PARTITION ON clause, the procedure is partitioned\nand runs on only one partition of the database. The partition it runs on is determined by the value of one\nof the parameters you pass to the procedure at runtime, as described below.\n•Directed Procedures — if you include the DIRECTED clause, the procedure is a directed procedure\nand will run separate transactions on each of the partitions. However, the individual transactions are not\n189Supported SQL DDL Statements\ncoordinated. Directed procedures must be invoked as a scheduled task or using the callAllParti-\ntionProcedure method. 
See Section 7.5, “Directed Procedures: Distributing Transactions to Every\nPartition” and the description of the CREATE TASK statement for more information on directed pro-\ncedures.\nWhen creating single-partitioned procedures, you specify the partitioning in the PARTITION ON clause.\nPartitioning a stored procedure means that the procedure executes within a unique partition of the database.\nThe partition in which the procedure executes is chosen at runtime based on the table and column specified\nby table-name and column-name . By default, VoltDB uses the first parameter to the stored procedure as the\npartitioning value. However, you can use the PARAMETER clause to specify a different parameter. The\nposition value specifies the parameter position, counting from zero. (In other words, position 0 is the first\nparameter, position 1 is the second, and so on.) The specified table must be a partitioned table or stream.\nIf security is enabled at runtime, only those roles named in the ALLOW clause (or with the ALLPROC or\nADMIN permissions) have permission to invoke the procedure. If security is not enabled at runtime, the\nALLOW clause is ignored and all users have access to the stored procedure.\nExamples\nThe following example defines a stored procedure, CountUsersByCountry , as a single SQL query with a\nplaceholder for matching the country column:\nCREATE PROCEDURE CountUsersByCountry AS\n SELECT COUNT(*) FROM Users WHERE country=?;\nThe next example restricts access to the stored procedure to only users with the operator role. It also\npartitions the stored procedure on the userID column of the Accounts table. Note that the PARAMETER\nclause is used since the userID is the second parameter to the procedure:\nCREATE PROCEDURE ChangeUserPassword \n PARTITION ON TABLE Accounts COLUMN userID PARAMETER 1\n ALLOW operator \n AS UPDATE Accounts SET HashedPassword=? WHERE userID=?;\nThe last example uses a BEGIN-END clause to include four SQL statements in the procedure. In this case,\nthe procedure performs two INSERT INTO SELECT statements, a DELETE statement and then selects\nthe total count of records after the operation. The stored procedure returns four VoltTables, one for each\nstatement, with the last one containing the final record count since SELECT is the last statement in the\nprocedure.\nCREATE PROCEDURE MoveOrders \n AS BEGIN \n INSERT INTO enroute SELECT * FROM Orders \n WHERE ship_date < NOW() AND delivery_date > NOW();\n INSERT INTO history SELECT * FROM enroute \n WHERE delivery_date < NOW();\n DELETE FROM enroute \n WHERE delivery_date < NOW();\n SELECT COUNT(*) FROM enroute;\n END;\n190Supported SQL DDL Statements\nCREATE PROCEDURE FROM CLASS\nCREATE PROCEDURE FROM CLASS — Defines a stored procedure associated with a Java class.\nSyntax\nCREATE PROCEDURE\n[PARTITION ON TABLE table-name COLUMN column-name [PARAMETER position]]\n[ALLOW role-name [,...]]\nFROM CLASS class-name\nCREATE PROCEDURE DIRECTED\n[ALLOW role-name [,...]]\nFROM CLASS class-name\nDescription\nYou must declare stored procedures to make them accessible to client applications and the sqlcmd utility.\nCREATE PROCEDURE FROM CLASS lets you declare stored procedures that are written as Java class-\nes.The class-name is the name of the Java class.\nBefore you declare the stored procedure, you must create, compile, and load the associated Java class. 
It\nis usually easiest to do this by compiling all of your Java stored procedures and packaging the resulting\nclass files into a single JAR file that can be loaded once. For example:\n$ javac -d ./obj src/procedures/*.java\n$ jar cvf myprocs.jar –C obj .\n$ sqlcmd\n1> load classes myprocs.jar;\n2> CREATE PROCEDURE FROM CLASS procedures.AddCustomer;\nYou can create three types of stored procedures:\n•Multi-Partition Procedures — By default, the CREATE PROCEDURE statement declares a multi-par-\ntition procedure. A multi-partition procedure runs as a single transaction and has access to data from\nthe entire database. However, it also means that the procedure will access all of the partitions at once,\nblocking the transaction queues until the procedure is done.\n•Single-Partition Procedures — If you include the PARTITION ON clause, the procedure is partitioned\nand runs on only one partition of the database. The partition it runs on is determined by the value of one\nof the parameters you pass to the procedure at runtime, as described below.\n•Directed Procedures — if you include the DIRECTED clause, the procedure is a directed procedure\nand will run separate transactions on each of the partitions. However, the individual transactions are not\ncoordinated. Directed procedures must be invoked as a scheduled task or using the callAllParti-\ntionProcedure method. See Section 7.5, “Directed Procedures: Distributing Transactions to Every\nPartition” and the description of the CREATE TASK statement for more information on directed pro-\ncedures.\nWhen creating single-partitioned procedures, you specify the partitioning in the PARTITION ON clause.\nPartitioning a stored procedure means that the procedure executes within a unique partition of the database.\nThe partition in which the procedure executes is chosen at runtime based on the table and column specified\nby table-name and column-name . By default, VoltDB uses the first parameter to the stored procedure as\n191Supported SQL DDL Statements\nthe partitioning value. However, you can use the PARAMETER clause to specify a different parameter.\nThe position value specifies the parameter position, counting from zero. (In other words, position 0 is the\nfirst parameter, position 1 is the second, and so on.)\nThe specified table must be a partitioned table and cannot be an export stream or replicated table.\nIf security is enabled at runtime, only those roles named in the ALLOW clause (or with the ALLPROC or\nADMIN permissions) have permission to invoke the procedure. If security is not enabled at runtime, the\nALLOW clause is ignored and all users have access to the stored procedure.\nExample\nThe following example declares a stored procedure matching the Java class MakeReservation. Note that\nthe class name includes its location within the current class path (in this case, as a child of flight and\nprocedures ). However, the name itself, MakeReservation , must be unique within the schema because at\nruntime stored procedures are invoked by name only.\nCREATE PROCEDURE FROM CLASS flight.procedures.MakeReservation;\n192Supported SQL DDL Statements\nCREATE ROLE\nCREATE ROLE — Defines a role and the permissions associated with that role.\nSyntax\nCREATE ROLE role-name [WITH permission [,...]]\nDescription\nThe CREATE ROLE statement defines a named role that can be used to assign access rights to specific\nprocedures and functions. 
When security is enabled in the database configuration, the permissions assigned\nin the CREATE ROLE and CREATE PROCEDURE statements specify which users can access which\nfunctions.\nUse the CREATE PROCEDURE statement to assign permissions to named roles for accessing specific\nstored procedures. The CREATE ROLE statement lets you assign certain generic permissions. The fol-\nlowing table describes the permissions that can be assigned the WITH clause.\nPermission Description Inherits\nDEFAULTPROCREAD Access to read-only default procedures ( TABLE.se-\nlect)\nDEFAULTPROC Access to all default procedures ( TABLE.select, TA-\nBLE.insert, TABLE.delete, TABLE.update, and TA-\nBLE.upsert)DEFAULTPROCREAD\nSQLREAD Access to read-only ad hoc SQL queries (SELECT) DEFAULTPROCREAD\nSQL Access to all ad hoc SQL queries and default proce-\nduresSQLREAD, DEFAULT-\nPROC\nALLPROC Access to all user-defined stored procedures\nADMIN Full access to all system procedures, all user-defined\nprocedures, as well as default procedures, ad hoc\nSQL, and DDL statements.ALLPROC, DEFAULT-\nPROC, SQL\nNote: For backwards compatibility, the special permissions ADHOC and SYSPROC are still recognized.\nThey are interpreted as synonyms for SQL and ADMIN, respectively.\nThe generic permissions are denied by default. So you must explicitly enable them for those roles that\nneed them. For example, if users assigned to the \"interactive\" role need to run ad hoc queries, you must\nexplicitly assign that permission in the CREATE ROLE statement:\nCREATE ROLE interactive WITH sql;\nAlso note that the permissions are additive. So if a user is assigned to one role that allows access to\ndefaultproc but not allproc, but that user also is assigned to another role that allows allproc, the user has\nboth permissions.\nExample\nThe following example defines three roles — admin, developer , and batch — each with a different set\nof permissions:\n193Supported SQL DDL Statements\nCREATE ROLE admin WITH admin;\nCREATE ROLE developer WITH sql, allproc;\nCREATE ROLE batch WITH defaultproc;\n194Supported SQL DDL Statements\nCREATE STREAM\nCREATE STREAM — Creates an output stream in the database.\nSyntax\nCREATE STREAM stream-name\n[PARTITION ON COLUMN column-name ]\n[export-definition | topic-definition]\n( column-definition [,...] );\nexport-definition: EXPORT TO TARGET export-target-name\ntopic-definition: EXPORT TO TOPIC topic-name\n[ WITH [KEY ( column-name [,...])] [VALUE ( column-name [,...])]]\ncolumn-definition: column-name datatype [DEFAULT value ] [ NOT NULL ]\nDescription\nThe CREATE STREAM statement defines a stream and its associated columns in the database. A stream\ncan be thought of as a virtual table. It has the same structure as a table, consisting of a list of columns\nand supporting all the same datatypes ( Table A.1, “Supported SQL Datatypes” ) as tables. The columns\nhave the same rules in terms of naming and size. You can also use the INSERT statement to insert data\ninto the stream once it is defined.\nThe three differences between streams and tables are:\n•No data is stored in the database for a stream, it is only used as a passthrough.\n•Because no data is stored, you cannot SELECT, UPDATE, or DELETE the stream contents.\n•No indexes or constraints (such as primary keys) are allowed on a stream.\nData inserted into the stream is not stored in the database. 
The stream is an ephemeral container used only\nfor analysis and/or passing data through VoltDB to other systems via the export function.\nCombining streams with views lets you perform summary analysis on data passing through VoltDB with-\nout having to store all of the underlying data. For example, you might want to know how many times\nusers access a website and their most recent visit. But you do not need to store a record for each visit.\nIn this case, you can create a stream, visits, to capture the event and a view, visit_by_user , to capture the\ncumulative data:\nCREATE STREAM visits PARTITION ON COLUMN user_id (\n user_id BIGINT NOT NULL,\n ip_address VARCHAR(128),\n login TIMESTAMP\n);\nCREATE VIEW visit_by_user \n ( user_id, total_visits, last_visit )\n AS SELECT user_id, COUNT(*), MAX(login)\n FROM visits GROUP BY user_id;\n195Supported SQL DDL Statements\nWhen creating a view on a stream, the stream must be partitioned and the partition column must appear in\nthe view. Another special feature of views on streams is that, because there is no underlying data stored\nfor the view, VoltDB lets you modify the views content manually by issuing UPDATE and DELETE\nstatements on the view. (This ability to manipulate the view is only available for views on streams. You\ncannot UPDATE or DELETE a view on a table; you must modify the data in the underlying table instead.)\nFor example, if you only care about a daily rollup of visits, you can use DELETE with the stream name\nto clear the data at midnight every night:\nDELETE FROM visit_by_user;\nOr if you need to adjust the cumulative analysis to, say, \"reset\" the entry for a specific user, you can use\nUPDATE:\nUPDATE visit_by_user \n SET total_visits = 0, last_visit = NULL\n WHERE user_id = ?;\nExport Streams\nStreams can be used to export data out of VoltDB into other systems, such as Kafka, CSV files, and so\non. To export data into another system, you start by declaring one or more streams defining the data that\nwill be sent to the external system. In the CREATE STREAM statement you specify the named target\nfor the export:\nCREATE STREAM visits\n EXPORT TO TARGET archive\n PARTITION ON COLUMN user_id (\n user_id BIGINT NOT NULL,\n ip_address VARCHAR(128),\n login TIMESTAMP\n);\nAs soon as you declare the EXPORT TO TARGET clause for a stream, any data inserted into the stream\nis queued for export. If the export target is not defined in the database configuration, then the data waits\nin the queue. Once the export target is configured, the export connector begins sending the queued data\nto the configured destination. See Chapter 15, Streaming Data: Import, Export, and Migration for more\ninformation on configuring export targets.\nTopic Streams\nAlternately, you can output a stream to a VoltDB topic. Topics stream data to and from external systems,\nsimilar to import and export, with two distinct differences. First, topics share data written into the stream\nwith multiple external consumers. 
Second, rather than pushing data to a single target the way export does,\ntopics allow multiple consumers to pull the data when they need it or when they are ready for it.\nTo identify a stream as an output source for a topic, you include the EXPORT TO TOPIC clause in the\nCREATE STREAM statement, naming the topic to use:\nCREATE STREAM visits PARTITION ON COLUMN user_id \n EXPORT TO TOPIC visitors\n PARTITION ON COLUMN user_id (\n user_id BIGINT NOT NULL,\n ip_address VARCHAR(128),\n login TIMESTAMP\n196Supported SQL DDL Statements\n);\nThe topic itself is configured in the database configuration file. If the topic is not configured before the\nstream is declared, no data written to the stream is added to the queue until the topic is added to the\nconfiguration. Similarly, if the topic is removed from the configuration, the queue for the topic and its\ncontents are deleted.\nThere are two optional clauses associated with EXPORT TO TOPIC, KEY and VALUE, which are pre-\nceded with the WITH keyword. KEY identifies one or more columns to use as a key for the topic. So, for\nexample, if the column user_id is defined as the key and you execute INSERT INTO visits (123, \"1.2.3.4\",\nNOW() ), the value 123 is used as the key for the topic message. VALUE identifies which columns (and\nin which order) to include in the body of the topic message. In the following example, user_id is used as\nthe key and user_id and login are included in the body of the message (leaving out ip_address ):\nCREATE STREAM visits PARTITION ON COLUMN user_id \n EXPORT TO TOPIC visitors\n WITH KEY (user_id) VALUE (user_id, login)\n PARTITION ON COLUMN user_id (\n user_id BIGINT NOT NULL,\n ip_address VARCHAR(128),\n login TIMESTAMP\n);\nIf you do not specify a key, there is no key for the topic. If you do not specify values, all columns from\nthe stream are included in the order specified in the CREATE STREAM statement. See the section on\nSection 15.6, “VoltDB Topics” for more information on defining and using topics.\nMulti-Purpose Streams\nFinally, you can combine analysis with export by creating a stream with an export target and also creating\na view on that stream. So in our earlier example, if we want to warehouse data about each visit but use\nVoltDB to perform the real-time summary analysis, we would add an export definition, along with the\npartitioning clause, to the CREATE STREAM statement for the visits stream:\nCREATE STREAM visits \n PARTITION ON COLUMN user_id \n EXPORT TO TARGET warehouse (\n user_id BIGINT NOT NULL,\n ip_address VARCHAR(128),\n login TIMESTAMP\n);\nExample\nThe following example defines a stream and a view on that stream. 
Note the use of the PARTITION ON\nclause to ensure the stream is partitioned, since it is being used in a view.\nCREATE STREAM flightdata \n PARTITION ON COLUMN airport (\n flight_id BIGINT NOT NULL,\n airport VARCHAR(3) NOT NULL,\n passengers INTEGER,\n eta TIMESTAMP \n);\n197Supported SQL DDL Statements\nCREATE VIEW all_flights\n (airport, flight_count, passenger_count)\n AS SELECT airport, count(*),sum(passengers)\n FROM flightdata GROUP BY airport;\n198Supported SQL DDL Statements\nCREATE TABLE\nCREATE TABLE — Creates a table in the database.\nSyntax\nCREATE TABLE table-name\n[ export-definition | migration-definition ]\ncolumn-definition [,...]\n[, constraint-definition [,...]]\n) [ttl-definition] ;\nexport-definition: EXPORT TO TARGET target-name [ON action [,...]]\nmigration-definition: MIGRATE TO TARGET target-name\ncolumn-definition: column-name datatype [DEFAULT value ] [ NOT NULL ] [index-type]\nconstraint-definition: [CONSTRAINT constraint-name ] { index-definition }\nindex-definition: {index-type} ( column-name [,...])\nindex-type: PRIMARY KEY | UNIQUE | ASSUMEUNIQUE\nttl-definition: USING TTL value [time-unit] ON COLUMN column-name\n[BATCH_SIZE number-of-rows ] [MAX_FREQUENCY value]\ntime-unit: SECONDS | MINUTES | HOURS | DAYS\nDescription\nThe CREATE TABLE statement creates a table and its associated columns in the database. The supported\ndatatypes are described in Table A.1, “Supported SQL Datatypes” .\nTable A.1. Supported SQL Datatypes\nSQL Datatype Equivalent Ja-\nva DatatypeDescription\nTINYINT byte 1-byte signed integer, -127 to 127a\nSMALLINT short 2-byte signed integer, -32,767 to 32,767\nINTEGER int 4-byte signed integer, -2,147,483,647 to\n2,147,483,647\nBIGINT long 8-byte signed integer, -9,223,372,036,854,775,807\nto 9,223,372,036,854,775,807\nFLOAT double 8-byte numeric, -(2-2-52)·21023 to (2-2-52)·21023\n(Note that values less than or equal to -1.7E+308\nare interpreted as null.)\nDECIMAL BigDecimal 16-byte fixed scale of 12 and precision of 38,\n-99999999999999999999999999.999999999999\nto 99999999999999999999999999.999999999999\n199Supported SQL DDL Statements\nSQL Datatype Equivalent Ja-\nva DatatypeDescription\nGEOGRAPHY or GE-\nOGRAPHY() A geospatial region. The storage requirement for\ngeospatial data varies depending on the geometry.\nThe default maximum size in memory is 32768.\nHowever, you can specify a different value by\nspecifying the maximum size (in bytes) in the dec-\nlaration. For example: GEOGRAPHY(80000). See\nthe section on entering geospatial data in the Volt-\nDB Guide to Performance and Customization for\ndetails.\nGEOGRAPHY_POINT A geospatial location identified by its latitude and\nlongitude. Requires 16 bytes of storage.\nVARCHAR() String Variable length text string, with a maximum length\nspecified in either characters (the default) or bytes.\nTo specify the length in bytes, use the BYTES\nkeyword after the length value. For example:\nVARCHAR(28 BYTES).\nVARBINARY() byte array Variable length binary string (sometimes referred\nto as a \"blob\") with a maximum length specified in\nbytes\nTIMESTAMP long, VoltDB Time-\nstampTypeTime in microseconds\naFor integer and floating-point datatypes, VoltDB reserves the largest possible negative value to denote a null value. 
For example\n-128 is interpreted as null for TINYINT, -32768 for SMALLINT, and so on.\nThe following limitations are important to note when using the CREATE TABLE statement in VoltDB:\n•CHECK and FOREIGN KEY constraints are not supported.\n•VoltDB does not support AUTO_INCREMENT, the automatic incrementing of column values.\n•A table can have up to 1024 columns. Each column has a maximum size of 1 megabyte and the total\ndeclared size of all of the columns in a table cannot exceed 2 megabytes. For VARCHAR columns\nwhere the length is specified in characters, the declared size is calculated as 4 bytes per character to\nallow for the longest potential UTF-8 string.\n•If you intend to use a column to partition a table, that column cannot contain null values. You must\nspecify NOT NULL in the definition of the column or VoltDB issues an error when compiling the\nschema.\n•To specify an index — either for an individual column or as a table constraint — that is globally unique\nacross the database, use the standard SQL keywords UNIQUE and PRIMARY KEY. However, for\npartitioned tables, VoltDB can only ensure uniqueness if the index includes the partitioning column.\nOtherwise, these keywords are not allowed.\nIt can be a performance advantage to define indexes or constraints on non-partitioning columns that you,\nas the developer, know are going to contain unique values. Although VoltDB cannot ensure uniqueness\nacross the entire database, it does allow you to define indexes that are assumed to be unique by using\nthe ASSUMEUNIQUE keyword.\nWhen you define an index on a partitioned table as ASSUMEUNIQUE, VoltDB verifies uniqueness\nwithin the current partition when creating an index entry. However, it is your responsibility as developer\n200Supported SQL DDL Statements\nor administrator to ensure that the values are actually globally unique. If the database is repartitioned due\nto adding new nodes or restoring a snapshot to a different cluster configuration, non-unique ASSUME-\nUNIQUE index entries may collide. When this occurs it results in a constraint violation error and the\ndatabase will not be able to complete its current action.\nTherefore, ASSUMEUNIQUE should be used with caution. Also, it is not necessary and should not\nbe used with replicated tables or indexes that contain the partitioning column, which can be defined\nas UNIQUE.\n•EXPORT TO TARGET allows you to connect a table to an export target, so that by default data written\ninto the table is also sent to the export connector for delivery to the specified target. By default, only\ninsert operations ( INSERT and UPSERT when it inserts a new row) initiate export records. However,\nyou can use the ON clause to specify which actions you want to trigger export. For example, the fol-\nlowing table declaration generates export records whenever rows are created or modified.\nCREATE TABLE RESERVATION\n EXPORT TO TARGET airlines ON INSERT, UPDATE _NEW\n (reserv_id INT NOT NULL,\n flight_id INT NOT NULL,\n . . . 
);\nThe following table defines the actions that you can specify in the ON clause.\nKeyword Description\nINSERT Contents of new record from INSERT, or UPSERT that creates new record\nDELETE Contents of a record that is deleted\nUP-\nDATE_OLDContents of a record before it is updated\nUP-\nDATE_NEWContents of a record after it is updated\nUPDATE Two records are exported, the contents before and after a record is updated (shorthand\nequivalent for specifying both UPDATE_OLD and UPDATE_NEW)\n•The length of VARCHAR columns can be specified in either characters (the default) or bytes. To specify\nthe length in bytes, include the BYTES keyword after the length value; for example VARCHAR(16\nBYTES).\nSpecifying the VARCHAR length in characters is recommended because UTF-8 characters can require\na variable number of bytes to store. By specifying the length in characters you can be sure the column\nhas sufficient space to store any string of the specified length. Specifying the length in bytes is only\nrecommended when all values contain only single byte (ASCII) characters or when conserving space is\nrequired and the strings are less than 64 bytes in length.\n•The VARBINARY datatype provides variable storage for arbitrary strings of binary data and operates\nsimilarly to VARCHAR(n BYTES) strings. You assign byte arrays to a VARBINARY column when\npassing in variables, or you can use a hexidecimal string for assigning literal values in the SQL statement.\n•The VoltDB TIMESTAMP datatype is a long integer representing the number of microseconds since\nthe epoch. Two important points to note about this timestamp:\n•The VoltDB TIMESTAMP is not the same as the Java Timestamp datatype or traditional Linux time\nmeasurements, which are measured in milliseconds rather than microseconds. Appropriate conversion\nis needed when casting values between a VoltDB TIMESTAMP and other timestamp datatypes.\n201Supported SQL DDL Statements\n•The VoltDB TIMESTAMP is interpreted as a Greenwich Meantime (GMT) value. Depending on\nhow time values are created, their value may or may not account for the local machine's default time\nzone. Mixing timestamps from different time zones (for example, in WHERE clause comparisons)\ncan result in unexpected behavior.\n•For TIMESTAMP columns, you can define a default value using the NOW or CURRENT_TIMES-\nTAMP keywords in place of a specific value. For example:\nCREATE TABLE Event (\n Event_Id INTEGER UNIQUE NOT NULL,\n Event_Timestamp TIMESTAMP DEFAULT NOW ,\n Event_Description VARCHAR(128)\n);\nThe default value is evaluated at runtime as an approximation, in milliseconds, of when the transaction\nbegins execution.\nAutomatic Aging and Data Migration\nWhen you define a database table you can also define a \"time to live\" (TTL) when records in the table\nexpire and are automatically deleted. The USING TTL clause specifies a lifetime for each record, based on\nthe difference between the specified TTL value, the value of the specified column, and the current time (in\nGMT microseconds). In the simplest case, you can define a time to live based on a TIMESTAMP column\ndefined as DEFAULT NOW, so the record expires the specified amount of time after it is inserted. 
For\nexample, the records in the following table will be deleted five minutes after they are inserted into the\ndatabase (assuming the default value is used for the created column):\nCREATE TABLE current_alerts (\n id BIGINT NOT NULL,\n message VARCHAR(128),\n created TIMESTAMP DEFAULT NOW NOT NULL,\n) USING TTL 5 MINUTES ON COLUMN created ;\nYou specify the time to live value as an integer number of seconds, minutes, hours, or days. (The default,\nif you do not specify a time unit, is seconds.) The TTL column must be declared as a TIMESTAMP and\nNOT NULL.\nTTL records are evaluated and deleted by a parallel process within the database. As a result, records\nare deleted shortly after the specified time to live arrives, rather than at the exact time specified. But\nthe deletion of records is handled as a proper database transaction, guaranteeing consistency with any\nuser-invoked transactions. One consequence of automating the expiration of database records, is that the\nevaluation and deletion of records produces additional transactions that may impact database performance.\nWhen you define an expiration time for database records, you can also specify an export target using\nMIGRATE TO TARGET. If you specify both USING TTL and MIGRATE TO TARGET, before the data\nis deleted by the TTL process, the data is migrated — through the specified export connector — to the\ntarget location. The combination of TTL and data migration creates an automated archiving process, where\naged data is moved to another repository while VoltDB continues to operate on current data. Since VoltDB\ndoes not delete the records until after the target system acknowledges their receipt, you are assured that\nthe data is always present in at least one of the participating systems.\nFor example, the following table definition establishes an automatic archiving policy that removes sessions\nwith no activity for an hour, migrating old records to a historical repository:\nCREATE TABLE sessions \n202Supported SQL DDL Statements\n MIGRATE TO TARGET oldsessions\n (\n login TIMESTAMP DEFAULT NOW,\n last_update TIMESTAMP NOT NULL,\n user_id BIGINT NOT NULL\n ) USING TTL 1 HOURS ON COLUMN last_update ;\nIt is also possible to migrate data manually. If you add the MIGRATE TO TARGET clause by itself,\nwithout USING TTL, no data is automatically migrated. However, you can explicitly initiate migration\nby invoking the MIGRATE SQL statement with the WHERE clause to specify which rows are migrated.\nUse of MIGRATE TO TARGET without USING TTL is useful when the application logic to select what\ndata to migrate requires multiple or non-numeric variables. For example, if the schedule for archiving a\nrecord varies based on which user created it:\nCREATE TABLE messages\n MIGRATE TO TARGET oldmessages\n (\n posted TIMESTAMP DEFAULT NOW,\n message_text VARCHAR(128),\n user_id BIGINT NOT NULL,\n user_type VARCHAR(5) NOT NULL\n );\nIn this case, no data is migrated until you explicitly initiate migration with the MIGRATE statement:\nMIGRATE FROM messages\n WHERE \n ( (posted < DATEADD(DAY,-3,NOW()) AND user_type='USER') \n OR (posted < DATEADD(DAY,-14,NOW()) AND user_type='ADMIN')\n ) AND NOT MIGRATING;\nYou can also migrate data manually, even if the table declaration includes the USING TTL clause. In this\ncase you can use MIGRATE to preemptively migrate data before the TTL column expires. 
For example,\nusing the sessions table defined above, you might want to migrate all sessions for a user when their account\nis deleted:\nMIGRATE FROM sessions WHERE user_id=? AND NOT MIGRATING;\nNote that use of the MIGRATING function is not required to filter on rows that are not already migrating,\nbecause the MIGRATE statement will not initiate export if rows are already migrating. However, explicitly\ninclude AND NOT MIGRATING in your MIGRATE statement can improve performance.\nThe MIGRATING function is also useful so you can avoid accidentally modifying records that are already\nmarked for deletion, especially since any changes to migrating records will cancel the delete operation but\nnot the export. For example, if you want to update the last_update column of a user's records but only if\nthey are not already being migrated, your UPDATE statement should include NOT MIGRATING:\nUPDATE sessions SET last_update=NOW() WHERE user_id=? AND NOT MIGRATING;\nTime to live and data migration are powerful concepts. However, there are some important details to\nconsider when using these features:\n•There must be a usable index on the TTL column for the table. VoltDB uses that index to optimize the\nevaluation of the TTL values. If not, the USING TTL clause is accepted, but no automated deletion will\noccur at runtime until a usable index is defined.\n203Supported SQL DDL Statements\n•The CREATE TABLE... USING TTL statement is not rejected if the index is missing. This way you can\ndefine the index in a subsequent DDL statement. However, a warning message is issued if the USING\nTTL clause has no supporting index available. A similar warning is issued if you delete the last usable\nindex.\n•When the table definition includes both USING TTL and MIGRATE TO TARGET, there must be an\nindex including the TTL column for the USING TTL clause and a separate index including only the\nTTL column and a WHERE NOT MIGRATING clause. This index is required to effectively find and\nschedule the migration of expired records. For example, the sessions table in the previous example\nwould require the following index. If the index is not present, records for the table will neither be deleted\nnor migrated and a warning will be logged on the server:\nCREATE INDEX sessions_migrate_index ON sessions \n (last_update) WHERE NOT MIGRATING;\n•TTL clauses are most effective when used on partitioned tables. Defining TTL for a replicated table,\nespecially a large replicated table, can have a significant impact on database performance because the\nTTL delete actions must be processed as multi-partition transactions.\n•You can specify the frequency and maximum size of the TTL processing cycle.\n•The BATCH_SIZE argument specifies the maximum number of records that will be deleted during\neach processing cycle. Specify the batch size as a positive integer. The default is 1000 rows.\n•The MAX_FREQUENCY argument specifies how often the TTL clause is evaluated. You specify\nthe frequency in terms of the maximum number of times it is processed per second. For example\na MAX_FREQUENCY of 10 means that the table's TTL value is processed at most 10 times per\nsecond. Specify the frequency as a positive integer. The default frequency is once per second (1).\nUnder extreme loads or sudden bursts of inserts, it is possible for TTL processing to fall behind. Or if\nthe records are extremely large, attempting to delete too many records at one time can cause the TTL\nprocess to exceed the temporary table limit. 
The BATCH_SIZE and MAX_FREQUENCY clauses let\nyou customize the TTL processing per table to meet the specific requirements of your application. The\nTTL selector for the @Statistics system procedure can help you evaluate TTL performance against\nyour application workload to determine what settings you need.\n•Evaluation of the time to live is made against the current value of the TTL column, not its initial value. So\nif a subsequent transaction alters the column value (either increasing or decreasing it) that modification\nwill impact the subsequent lifetime of the record.\n•When using database replication (DR), it is possible for the TTL transaction to exceed the 50MB limit\non the DR binary log. If this happens, a warning is issued and TTL processing is suspended.\n•When using MIGRATION TO TARGET, there is an interval after the TTL value is triggered and before\nthe record is successfully exported and deleted from the VoltDB database. During this interval, the\nrecord is available for read access from SELECT queries. You can also update or delete the record; but\nmodifying the record will cancel the pending delete. So if, for example, you update the record to extend\nthe TTL column, the record will remain in the database until the new TTL column value is reached.\nHowever the update does not cancel the export of the original data to the specified target that had already\nbeen triggered. So two records will eventually be migrated.\n•In most cases, you can ignore whether a record is currently being migrated and scheduled for delete or\nnot. For example, if you delete a record that is currently being migrated, you cancel the pending delete\nbut you delete the record anyway, so the results end up the same. However, if you do want to distinguish\nbetween currently active and currently migrating records, you can use the MIGRATING function, that\n204Supported SQL DDL Statements\nidentifies records that are currently \"in flight\". For example, to select records for a specific user ID and\nonly those records that are not being migrated, you can use the following query:\nSELECT user_id, login FROM sessions WHERE user_id = ? AND NOT MIGRATING;\nExample\nThe following example defines a table with five columns. The first column, Company , is not allowed\nto be null, which is important since it is used as the partitioning column in the following PARTITION\nTABLE statement. That column is also contained in the PRIMARY KEY constraint. 
Again, it is important\nto include the partitioning column in any fully unique indexes for partitioned tables.\nCREATE TABLE Inventory (\n Company VARCHAR(32) NOT NULL,\n ProductID BIGINT NOT NULL, \n Price DECIMAL,\n Category VARCHAR(32),\n Description VARCHAR(256),\n PRIMARY KEY (Company, ProductID)\n);\nPARTITION TABLE Inventory ON COLUMN Company;\n205Supported SQL DDL Statements\nCREATE TASK\nCREATE TASK — Schedules a procedure to run periodically.\nSyntax\nCREATE TASK task-name\nON SCHEDULE {CRON cron-definition | DELAY time-interval | EVERY time-interval |\nFROM CLASS class-path }\nPROCEDURE { procedure-name | FROM CLASS class-path } [WITH ( argument [,...])]\n[ON ERROR {LOG | IGNORE | STOP} ]\n[RUN ON {DATABASE | HOSTS | PARTITIONS} ]\n[AS USER user-name ]\n[ENABLE | DISABLE]\nCREATE TASK task-name\nFROM CLASS class-path [WITH (argument [,...])]\n[ON ERROR {LOG | IGNORE | STOP} ]\n[RUN ON {DATABASE | HOSTS | PARTITIONS } ]\n[AS USER user-name ]\n[ENABLE | DISABLE]\ntime-interval: integer {MILLISECONDS | SECONDS | MINUTES | HOURS | DAYS}\nDescription\nThe CREATE TASK statement schedules a stored procedure to run iteratively on a set schedule. In its\nsimplest form, the CREATE TASK statement schedules a specified stored procedure to be run at a regular\ninterval. The PROCEDURE clause specifies the stored procedure and any arguments it requires. The ON\nSCHEDULE clause specifies when the procedure will be run. You can schedule a procedure to run on\nthree types of schedule:\n•CRON — Specifies a cron-style schedule to run the procedure as set times per day or week.\n•DELAY — Specifies a time interval between each run of the stored procedure, where the time interval\nstarts at the end of each run.\n•EVERY — Specifies a time interval between the start of each run of the stored procedure.\nThe difference between DELAY and EVERY is how the interval is measured. For example, if you specify\nEVERY 5 SECONDS, the stored procedure runs every 5 seconds, no matter how long it takes to execute\n(assuming it does not take more than 5 seconds). If, on the other hand, you specify DELAY 5 SECONDS,\neach run starts 5 seconds after the previous run completes. In other words, EVERY results in invocations\nat a regular interval no matter how long they take, while DELAY results in a regular interval between\nwhen one run ends and the next begins.\nFor DELAY and EVERY you specify the interval as a positive integer and a time unit, where the supported\ntime units are milliseconds, seconds, minutes, hours, and days. For EVERY, if the previous run takes\nlonger than the interval to run, the schedule is reset at the end of the previous run. So, for example, if the\nschedule specifies EVERY 2 SECONDS but the procedure takes 2.5 seconds to run, the next scheduled\ninterval will already be past when the previous run ends. In this case, the next invocation of the task is\nreset to 2 seconds after the previous run ends.\n206Supported SQL DDL Statements\nThe CRON option requires a standard cron schedule, which consists of six values separated by spaces.\nCron schedules set specific times of day, week, or month, rather than an interval. The six values of the\ncron string represent seconds, minutes, hours, day of the month, month, and day of the week. Asterisks\nindicate all possible values. For example, the cron specification ON SCHEDULE CRON 0 0 * * *\n* schedules the task on the hour, every hour of every day. 
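As an illustrative sketch only (the ReapSessions procedure used here is hypothetical and not defined elsewhere in this document), the three schedule types can be compared side by side. The first task starts a new run every 30 seconds regardless of how long each run takes, the second waits 30 seconds after each run finishes before starting the next, and the third uses a cron specification to run at 2:30 AM every day:
CREATE TASK reap_every
 ON SCHEDULE EVERY 30 SECONDS
 PROCEDURE ReapSessions;
CREATE TASK reap_delay
 ON SCHEDULE DELAY 30 SECONDS
 PROCEDURE ReapSessions;
CREATE TASK reap_nightly
 ON SCHEDULE CRON 0 30 2 * * *
 PROCEDURE ReapSessions;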
More information about scheduling tasks with\ncron can be found on the web.\nYou can also specify details about how the procedure is run:\n•ON ERROR specifies how errors are handled. The default is ON ERROR STOP.\n•ON ERROR LOG — The error is logged but the procedure continues to be scheduled and run.\n•ON ERROR IGNORE — The procedure continues to be scheduled and run and the error is ignored\nand not logged.\n•ON ERROR STOP — The error is logged and the scheduling process stops. No further invocations\nof the procedure will occur until the task is explicitly re-enabled (by using ALTER TASK to disable\nand then enable the task) or the database restarts.\n•RUN ON specifies where the procedure executes. The default is RUN ON DATABASE.\n•RUN ON DATABASE — For multi-partitioned procedures, each invocation of the procedure is run\nas a single transaction coordinated across all partitions.\n•RUN ON PARTITIONS — For directed procedures, the procedure is scheduled and run indepen-\ndently on all partitions in the database. Directed procedures are useful for performing distributed\ntasks that are transactional on each partition but do not need to be coordinated and therefore are less\ndisruptive to the ongoing database workload.\n•AS USER specifies the user account under which the procedure is run. When security is enabled, you\nmust specify a valid username and that user must have sufficient privileges to run the procedure.\nWhen using passive database replication (DR), the replica cluster will automatically pause any scheduled\ntasks that might modify the database (that is, procedures that are not read-only). If the cluster is promoted,\nthe tasks are resumed.\nFinally, you can use the ENABLE and DISABLE keywords to specify whether the task is enabled or not.\n(The task is enabled by default.) If the task is disabled, the procedure is not invoked. If the task is enabled,\nthe procedure is invoked according to the schedule until the database shuts down or the task is disabled by\nan ALTER TASK statement or an error while ON ERROR STOP is active.\nCreating Custom Tasks\nIf the standard schedules do not meet your needs — you want to change the interval between runs, modify\nthe arguments to the procedure , or the procedure itself — you can define a custom task using Java classes\nthat implement one of three special interfaces:\n•When you only want to dynamically control the schedule of the procedure but keep the procedure and\nits parameters the same, you can use the ON SCHEDULE FROM CLASS clause specifying a Java class\nthat implements the IntervalGenerator interface.\n•When you want to use a regular schedule but dynamically change the procedure and/or its parameters,\nyou can use the PROCEDURE FROM CLASS clause specifying a Java class that implements the Ac-\ntionGenerator interface.\n207Supported SQL DDL Statements\n•When you want to dynamically control both the schedule and the procedure being invoked, you can\nuse the second form of the CREATE TASK syntax which replaces both the ON SCHEDULE and PRO-\nCEDURE clauses with a single FROM CLASS clause specifying a Java class that implements the Ac-\ntionScheduler interface.\nBefore declaring a custom task, you must load the specified Java class, the same way you load Java classes\nbefore declaring a user-defined stored procedure, by packaging it in a JAR file and using the LOAD\nCLASSES directive in sqlcmd. 
It is also important to note that the classes used for custom tasks are not\nstored procedures and do not run in the normal transactional path for VoltDB transactions. The custom\ntask classes run in a separate thread to identify the characteristics of the next task invocation before the\nspecified stored procedure is run. For all three task interfaces, the task management infrastructure provides\nthe results from the previous run as input to the callback method, which can then use that information to\ndetermine how to modify the next instantiation of the task's procedure, parameters, or run interval.\nMany of the CREATE TASK statement's clauses — ON ERROR, AS USER, and ENABLE|DISABLE —\noperate exactly the same for both custom tasks and the simple case of scheduling a single stored procedure.\nThe two exceptions are the WITH and RUN ON clauses.\nFor custom tasks that alter the procedure and procedure parameters, the arguments in the WITH clause\nare passed to the custom task's initialize() method rather than to the stored procedure that it runs.\nThe custom task can then decide what to do with those arguments. For example, it may use them as initial,\nmaximum, and minimum values for adjusting arguments to the stored procedure.\nThe RUN ON clause for a custom task has one additional option beyond just DATABASE and\nPARTITIONS. Custom tasks can also be RUN ON HOSTS, which means one instance of the task is run\non each server in the cluster.\nExamples\nThe following example declares a procedure to reset the DailyStats view, and a task scheduled as a cron\nevent at midnight every night to run the procedure.\nCREATE PROCEDURE ResetDailyStats AS\n DELETE FROM DailyStats;\nCREATE TASK nightly \n ON SCHEDULE CRON 0 0 0 * * *\n PROCEDURE ResetDailyStats\n RUN ON DATABASE;\nThe next example creates a custom task that dynamically changes the interval between invocations of the\nstored procedure. The example first loads the JAR file containing a custom task class that implements the\nIntervalGenerator interface and then declares the task using PROCEDURE FROM CLASS clause.\nsqlcmd\n1> LOAD CLASSES mytasks.jar;\n2> CREATE TASK DailyNoHolidays \n ON SCHEDULE FROM CLASS mytasks.NoHolidays\n PROCEDURE ResetDailyStats\n RUN ON DATABASE;\n208Supported SQL DDL Statements\nCREATE VIEW\nCREATE VIEW — Creates a view into one or more tables, optimizing access to a summary of their\ncontents.\nSyntax\nCREATE VIEW view-name ( view-column-name [,...] )\nAS SELECT { column-name | selection-expression } [AS alias] [,...]\nFROM table-reference [join-clause...]\n[WHERE [NOT] boolean-expression [ {AND | OR} [NOT] boolean-expression]...]\n[GROUP BY { column-name | selection-expression } [,...]]\ntable-reference:\n{ table-name [AS alias] }\njoin-clause:\n, table-reference\n[INNER] JOIN [{ table-reference }] [join-condition ]\njoin-condition:\nON conditional-expression\nUSING (column-reference [,...])\nDescription\nThe CREATE VIEW statement creates a view of a table, a stream, or joined tables with selected columns\nand aggregates. VoltDB implements views as materialized views. 
In other words, the view is stored as a\nspecial table in the database and is updated each time the corresponding database contents are modified.\nThis means there is a small, incremental performance impact for any inserts or updates to the tables, but\nselects on the view will execute efficiently.\nThe following limitations are important to note when using the CREATE VIEW statement with VoltDB:\n•If the SELECT statement contains a GROUP BY clause, all of the columns and expressions listed in the\nGROUP BY must be listed in the same order at the start of the SELECT statement. Aggregate functions,\nincluding COUNT(*), are allowed following the GROUP BY columns.\n•Views are allowed on individual tables or streams, or joins of multiple tables. Joining streams is not\nsupported.\n•Joins must be inner joins and cannot be self-joins. All other limitations for joins as described in the\nSELECT statement also apply to joins in views.\n•Views that join multiple tables must include a COUNT(*) field listed after all GROUP BY columns.\n•To avoid performance problems when inserting data into a view that joins multiple tables, it is strongly\nrecommended you define indexes on the table columns involved in the join.\nExamples\nThe following example defines a view that counts the number of records for a specific product item grouped\nby its location (that is, the warehouse the item is in).\n209Supported SQL DDL Statements\nCREATE VIEW inventory_count_by_warehouse (\n productID,\n warehouse,\n total_inventory\n) AS SELECT\n productID,\n warehouse,\n COUNT(*)\nFROM inventory GROUP BY productID, warehouse;\nThe next example uses a WHERE clause but no GROUP BY to provide a count and minimum and maxi-\nmum aggregates of all records that meet a certain criteria.\nCREATE VIEW small_towns ( number, minimum, maximum )\n AS SELECT count(*), min(population), max(population) \n FROM TOWNS WHERE population < 10000;\nThe final example demonstrates joining two tables in a view. This definition provides a similar view to\nthe first example, except it uses the productID column to join two tables, Product and Inventory:\nCREATE VIEW inventory_count_by_warehouse (\n productName,\n warehouse,\n total_inventory\n) AS SELECT\n product.productName,\n inventory.warehouse,\n COUNT(*)\nFROM product JOIN inventory \n ON product.productID = inventory.productID\n GROUP BY product.productName, inventory.warehouse;\n210Supported SQL DDL Statements\nDR TABLE\nDR TABLE — Identifies a table as a participant in database replication (DR)\nSyntax\nDR TABLE table-name [DISABLE]\nDescription\nThe DR TABLE statement identifies a table as a participant in database replication (DR). If DR is not\nenabled, the DR TABLE statement has no effect on the operation of the table or the database as a whole.\nHowever, once DR is enabled and if the current cluster is the master database for the DR operation, any\nupdates to the contents of tables identified in the DR TABLE statement are copied and applied to the\nreplica database as well.\nThe DR TABLE ... DISABLE statement reverses the effect of a previous DR TABLE statement, removing\nthe specified table from participation in DR. Because the replica database schema must have DR TABLE\nstatements for any tables being replicated by the master, if DR is actively occurring you must add the\nDR TABLE statements to the replica before adding them to the master. In reverse, you must issue DR\nTABLE... 
DISABLE statements on the master before you issue the matching statements on the replica.\nSee Chapter 11, Database Replication for more information about how database replication works.\nExamples\nThe following example identifies the tables Employee and Department as participants in database repli-\ncation.\nDR TABLE Employee;\nDR TABLE Department;\n211Supported SQL DDL Statements\nDROP FUNCTION\nDROP FUNCTION — Removes the definition of a SQL function.\nSyntax\nDROP FUNCTION function-name [IF EXISTS]\nDescription\nThe DROP FUNCTION statement deletes the definition of the specified user-defined function. Note that,\nfor functions declared using CREATE FUNCTION and a class file, the statement does not delete the class\nthat implements the function, it only deletes the definition. To remove the Java class that contains the\nassociated function method, you must first drop the function definition then use the sqlcmd remove classes\ndirective to remove the class.\nThe IF EXISTS clause allows the statement to succeed even if the specified function name does not exist. If\nthe function does not exist and you do not include the IF EXISTS clause, the statement will return an error.\nExamples\nThe following example removes the definitions of the HTML_ENCODE and HTML_DECODE functions,\nthen uses remove classes to remove the class containing their corresponding methods.\n$ sqlcmd\n1> DROP FUNCTION html_encode;\n1> DROP FUNCTION html_decode;\n2> remove classes \"*.HtmlFunctions\";\n212Supported SQL DDL Statements\nDROP INDEX\nDROP INDEX — Removes an index.\nSyntax\nDROP INDEX index-name [IF EXISTS]\nDescription\nThe DROP INDEX statement deletes the specified index, and any data associated with it, from the data-\nbase. The IF EXISTS clause allows the statement to succeed even if the specified index does not exist. If\nthe index does not exist and you do not include the IF EXISTS clause, the statement will return an error.\nYou must use the name of the index as specified in the original DDL when dropping the index. You cannot\ndrop an index if it was not explicitly named in the CREATE INDEX command. This is why you should\nalways name indexes and other constraints wherever possible.\nExamples\nThe following example removes the index named employee_idx_by_lastname:\nDROP INDEX Employee_idx_by_lastname;\n213Supported SQL DDL Statements\nDROP PROCEDURE\nDROP PROCEDURE — Removes the definition of a stored procedure.\nSyntax\nDROP PROCEDURE procedure-name [IF EXISTS]\nDescription\nThe DROP PROCEDURE statement deletes the definition of the named stored procedure. Note that, for\nprocedures declared using CREATE PROCEDURE FROM and a class file, the statement does not delete\nthe class that implements the procedure, it only deletes the definition and any partitioning information\nassociated with the procedure. To remove the associated stored procedure class, you must first drop the\nprocedure definition then use the sqlcmd remove classes directive to remove the class.\nThe IF EXISTS clause allows the statement to succeed even if the specified procedure name does not\nexist. 
If the stored procedure does not exist and you do not include the IF EXISTS clause, the statement\nwill return an error.\nExamples\nThe following example removes the definition of the FindCanceledReservations stored procedure, then\nuses remove classes to remove the corresponding class.\n$ sqlcmd\n1> DROP PROCEDURE FindCanceledReservations;\n2> remove classes \"*.FindCanceledReservations\";\n214Supported SQL DDL Statements\nDROP ROLE\nDROP ROLE — Removes a role.\nSyntax\nDROP ROLE role-name [IF EXISTS]\nDescription\nThe DROP ROLE statement deletes the specified role. The IF EXISTS clause allows the statement to\nsucceed even if the specified role does not exist. If the role does not exist and you do not include the IF\nEXISTS clause, the statement will return an error.\nExamples\nThe following example removes the role named debug:\nDROP ROLE debug;\n215Supported SQL DDL Statements\nDROP STREAM\nDROP STREAM — Removes a stream and, optionally, any views associated with it.\nSyntax\nDROP STREAM stream-name [IF EXISTS] [CASCADE]\nDescription\nThe DROP STREAM statement deletes the specified stream from the database. The IF EXISTS clause\nallows the statement to succeed even if the specified stream does not exist. If the stream does not exist and\nyou do not include the IF EXISTS clause, the statement will return an error.\nIf you use the CASCADE clause, VoltDB automatically drops any referencing views as well as the stream\nitself.\nIf the stream is associated with an export target (that is, the stream was created with the EXPORT TO\nTARGET clause), dropping the stream also deletes any pending records that were inserted into the stream\nbut have not been committed to the export target yet. If you want to change the stream definition without\nlosing any pending export data, use the ALTER STREAM statement. If you want to remove the stream\nbut ensure all export data is flushed before it is dropped, you can either use the voltadmin pause --wait\ncommand (to flush all queues) or the @Statistics system procedure with the EXPORT selector to check\nthat the specified target has no pending records.\nExample\nThe following example uses DROP STREAM with the IF EXISTS clause to remove the MeterReadings\nstream definition.\nDROP STREAM MeterReadings IF EXISTS;\n216Supported SQL DDL Statements\nDROP TABLE\nDROP TABLE — Removes a table and any data associated with it.\nSyntax\nDROP TABLE table-name [IF EXISTS] [CASCADE]\nDescription\nThe DROP TABLE statement deletes the specified table, and any data associated with it, from the database.\nThe IF EXISTS clause allows the statement to succeed even if the specified tables does not exist. If the\ntable does not exist and you do not include the IF EXISTS clause, the statement will return an error.\nBefore dropping a table, you must first remove any stored procedures that reference the table. For exam-\nple, if the table EMPLOYEE is partitioned and the stored procedure AddEmployee is partitioned on the\nEMPLOYEE table, you must drop the procedure first before dropping the table:\nPARTITION TABLE Employee ON COLUMN EmpID;\nCREATE PROCEDURE \n PARTITION ON TABLE Employee COLUMN EmpID\n FROM CLASS myapp.procedures.AddEmployee;\n [. . . ]\nDROP PROCEDURE AddEmployee;\nDROP TABLE Employee;\nAttempting to drop the table before dropping the procedure will result in an error. The same will normally\nhappen if there are any views or indexes that reference the table. 
However, if you use the CASCADE clause VoltDB will automatically drop any referencing indexes and views as well as the table itself.
Examples
The following example uses DROP TABLE with the IF EXISTS clause to remove any existing UserSignin table definition and data before adding a new definition.
DROP TABLE UserSignin IF EXISTS;
CREATE TABLE UserSignin (
 userID BIGINT NOT NULL,
 lastlogin TIMESTAMP DEFAULT NOW
);
DROP TASK
DROP TASK — Removes a task and cancels any future execution.
Syntax
DROP TASK task-name [IF EXISTS]
Description
The DROP TASK statement deletes the specified task and cancels any future execution. The IF EXISTS clause allows the statement to succeed even if the specified task does not exist. If the task does not exist and you do not include the IF EXISTS clause, the statement will return an error.
Examples
The following example removes the task named cleanup:
DROP TASK cleanup;
DROP VIEW
DROP VIEW — Removes a view and any data associated with it.
Syntax
DROP VIEW view-name [IF EXISTS]
Description
The DROP VIEW statement deletes the specified view, and any data associated with it, from the database. The IF EXISTS clause allows the statement to succeed even if the specified view does not exist. If the view does not exist and you do not include the IF EXISTS clause, the statement will return an error.
Dropping a view has the same constraints as dropping a table, in that you cannot drop a view that is referenced by existing stored procedure queries. Before dropping the view, you must drop any stored procedures that reference it.
Examples
The following example removes the view named Votes_by_state:
DROP VIEW votes_by_state;
PARTITION PROCEDURE
PARTITION PROCEDURE — Specifies that a stored procedure is partitioned.
Syntax
PARTITION PROCEDURE procedure-name ON TABLE table-name COLUMN column-name
[PARAMETER position]
Description
Warning
The PARTITION PROCEDURE statement is deprecated and may be removed in a future release. Please use the PARTITION ON clause of the CREATE PROCEDURE statement to declare and partition the procedure in a single combined statement.
Partitioning a stored procedure means that the procedure executes within a unique partition of the database. The partition in which the procedure executes is chosen at runtime based on the table and column specified by table-name and column-name and the value of the first parameter to the procedure. For example:
PARTITION TABLE Employees ON COLUMN BadgeNumber;
PARTITION PROCEDURE FindEmployee ON TABLE Employees COLUMN BadgeNumber;
The procedure FindEmployee is partitioned on the table Employees, and table Employees is in turn partitioned on the column BadgeNumber. This means that when the stored procedure FindEmployee is invoked, VoltDB determines which partition to run the stored procedure in based on the value of the first parameter to the procedure and the corresponding partitioning value for the column BadgeNumber. So to find the employee with badge number 145303 you would invoke the stored procedure as follows:
ClientResponse response = client.callProcedure("FindEmployee", 145303);
By default, VoltDB uses the first parameter to the stored procedure as the partitioning value. However, if you want to use the value of a different parameter, you can use the PARAMETER clause.
The PARAMETER clause specifies which procedure parameter to use as the partitioning value, with position specifying the parameter position, counting from zero. (In other words, position 0 is the first parameter, position 1 is the second, and so on.)
The specified table must be a partitioned table and cannot be an export stream or replicated table.
You specify the procedure by its simplified class name. Do not include any other parts of the class path. Note that the simple procedure name you specify in the PARTITION PROCEDURE statement may be different than the class name you specify in the CREATE PROCEDURE statement, which can include a relative path. For example, if the class for the stored procedure is mydb.procedures.FindEmployee, the procedure name in the PARTITION PROCEDURE statement should be FindEmployee:
CREATE PROCEDURE FROM CLASS mydb.procedures.FindEmployee;
PARTITION PROCEDURE FindEmployee ON TABLE Employees COLUMN BadgeNumber;
Examples
The following example declares a stored procedure, using an inline SQL query, and then partitions the procedure on the Customer table. Note that the PARTITION PROCEDURE statement includes the PARAMETER clause, since the partitioning column is not the first of the placeholders in the SQL query. Also note that the PARAMETER argument is zero-based, so the value "1" identifies the second placeholder.
CREATE PROCEDURE GetCustomerByName AS
 SELECT * from Customer WHERE FirstName=? AND LastName = ?
 ORDER BY LastName, FirstName, CustomerID;
PARTITION PROCEDURE GetCustomerByName 
 ON TABLE Customer COLUMN LastName
 PARAMETER 1;
The next example declares a stored procedure as a Java class. Since the first argument to the procedure's run method is the value for the LastName column, the PARTITION PROCEDURE statement does not require a PARAMETER clause and can use the default.
CREATE PROCEDURE FROM CLASS org.mycompany.ChangeCustomerAddress;
PARTITION PROCEDURE ChangeCustomerAddress 
 ON TABLE Customer COLUMN LastName;
PARTITION TABLE
PARTITION TABLE — Specifies that a table is partitioned and which is the partitioning column.
Syntax
PARTITION TABLE table-name ON COLUMN column-name
Description
Partitioning a table specifies that different records are stored in different unique partitions, based on the value of the specified column. The table table-name and column column-name must be valid, declared elements in the current DDL file or VoltDB generates an error when compiling the schema.
For a table to be partitioned, the partitioning column must be declared as NOT NULL. If you do not declare a partitioning column of a table in the DDL, the table is assumed to be a replicated table.
Example
The following example partitions the table Employee on the column EmployeeID.
PARTITION TABLE Employee ON COLUMN EmployeeID;
Appendix B. Supported SQL Statements
This appendix describes the SQL syntax that VoltDB supports in stored procedures and ad hoc queries. This is not intended as a complete description of the SQL language and how it operates.
Instead, it summa-\nrizes the subset of standard SQL statements that are allowed in VoltDB and any exceptions or limitations\nthat application developers should be aware of.\nThe supported SQL statements are:\n•DELETE\n•INSERT\n•MIGRATE\n•SELECT\n•TRUNCATE TABLE\n•UPDATE\n•UPSERT\n223Supported SQL Statements\nDELETE\nDELETE — Deletes one or more records from the database.\nSyntax\nDELETE FROM table-name\n[WHERE [NOT] boolean-expression [ {AND | OR} [NOT] boolean-expression]...]\n[ORDER BY { column-name [ ASC | DESC ]}[,...] [LIMIT integer] [OFFSET integer]]\nDescription\nThe DELETE statement deletes rows from the specified table that meet the constraints of the WHERE\nclause. The following limitations are important to note when using the DELETE statement in VoltDB:\n•The DELETE statement can operate on only one table at a time. It does not support joins. However, it\ndoes support subqueries in the WHERE expression.\n•The WHERE expression supports the boolean operators: equals (=), not equals (!= or <>), greater than\n(>), less than (<), greater than or equal to (>=), less than or equal to (<=), IS NULL, AND, OR, and NOT.\nNote, however, although OR is supported syntactically, VoltDB does not optimize these operations and\nuse of OR may impact the performance of your queries.\n•You can use subqueries in the WHERE clause of the DELETE statement, with the following provisions:\n•See the description of subqueries in the SELECT statement for general rules concerning the construc-\ntion of subqueries.\n•In a multi-partition procedure, subqueries of the DELETE statement can only reference replicated\ntables.\n•In single-partitioned procedures, the subquery can reference both partitioned and replicated tables.\n•For ad hoc DELETE statements, the same rules apply except the SQL statement itself determines\nwhether VoltDB executes it as a single-partitoned or multi-partitioned procedure. Statements that\ndelete rows from a partitioned table based on a specific value of the partitioning column are executed\nas single-partitioned procedures. All other statements are multi-partitioned.\n•The ORDER BY clause lets you order the selection results and then select a subset of the ordered\nrecords to delete. For example, you could delete only the five oldest records, chronologically, sorting\nby timestamp:\nDELETE FROM events ORDER BY event_time, event_id ASC LIMIT 5;\nSimilarly, you could choose to keep only the five most recent:\nDELETE FROM events ORDER BY event_time, event_id DESC OFFSET 5;\n•When using ORDER BY, the resulting sort order must be deterministic. In other words, the ORDER\nBY must include enough columns to uniquely identify each row. (For example, listing all columns or\na primary key.)\n•You cannot use ORDER BY to delete rows from a partitioned table in a multi-partitioned query. In other\nwords, for partitioned tables DELETE... 
ORDER BY must be executed as part of a single-partitioned stored procedure or as an ad hoc query with a WHERE clause that uniquely identifies the partitioning column value.
Examples
The following example removes rows from the EMPLOYEE table where the EMPLOYEE_ID column is equal to 145303.
DELETE FROM employee WHERE employee_id = 145303;
The following example removes rows from the BID table where the BIDDERID is 12345 and the BIDPRICE is less than 100.00.
DELETE FROM bid WHERE bidderid=12345 AND bidprice<100.0;
INSERT
INSERT — Creates new rows in the database, using the specified values for the columns.
Syntax
INSERT INTO table-name [( column-name [,...] )] VALUES ( value-expression [,...] )
INSERT INTO table-name [( column-name [,...] )] SELECT select-expression
Description
The INSERT statement creates one or more new rows in the database. There are two forms of the INSERT statement, INSERT INTO... VALUES and INSERT INTO... SELECT. The INSERT INTO... VALUES statement lets you enter specific values for adding a single row to the database. The INSERT INTO... SELECT statement lets you insert multiple rows into the database, depending upon the number of rows returned by the select expression.
The INSERT INTO... SELECT statement is often used for copying rows from one table to another. For example, say you want to export all of the records associated with a particular column value. The following INSERT statement copies all of the records from the table ORDERS with a CustomerID of 25 into the table EXPORT_ORDERS:
INSERT INTO Export_Orders SELECT * FROM Orders WHERE CustomerID=25;
However, the select expression can be more complex, including joining multiple tables. The following limitations currently apply to the INSERT INTO... SELECT statement:
•INSERT INTO... SELECT can join partitioned tables only if they are joined on equality of the partitioning columns. Also, the resulting INSERT must apply to a partitioned table and be inserted using the same partition column value, whether the query is executed in a single-partitioned or multi-partitioned stored procedure.
•INSERT INTO... SELECT does not support UNION statements.
In addition to the preceding limitations, there are certain instances where the select expression is too complex to be processed. Cases of invalid select expressions in INSERT INTO... SELECT include:
•A LIMIT or TOP clause applied to a partitioned table in a multi-partitioned query
•A GROUP BY of a partitioned table where the partitioning column is not in the GROUP BY clause
Deterministic behavior is critical to maintaining the integrity of the data in a K-safe cluster. Because an INSERT INTO... SELECT statement performs both a query and an insert based on the results of that query, if the selection expression would produce non-deterministic results, the VoltDB query planner rejects the statement and returns an error. See Section 5.1.2, “VoltDB Stored Procedures are Deterministic” for more information on the importance of determinism in SQL queries.
If you specify the column names following the table name, the values will be assigned to the columns in the order specified. If you do not specify the column names, values will be assigned to columns based on the order specified in the schema definition.
However, if you specify a subset of the columns, you must\nspecify values for any columns that are explicitly defined in the schema as NOT NULL and do not have\na default value assigned.\n226Supported SQL Statements\nYou can use subqueries within the VALUES clause of the INSERT statement, with the following provi-\nsions:\n•See the description of subqueries in the SELECT statement for general rules concerning the construction\nof subqueries.\n•In a multi-partition procedure, subqueries of the INSERT statement can only reference replicated tables.\n•In single-partitioned procedures, the subquery can reference both partitioned and replicated tables.\n•For ad hoc INSERT statements, the same rules apply except the SQL statement itself determines whether\nVoltDB executes it as a single-partitoned or multi-partitioned procedure. Statements that insert rows into\na partitioned table based on a specific value of the partitioning column are executed as single-partitioned\nprocedures. All other statements are multi-partitioned.\nExamples\nThe following example inserts values into the columns (firstname, mi, lastname, and emp_id) of an EM-\nPLOYEE table:\nINSERT INTO employee VALUES ('Jane', 'Q', 'Public', 145303);\nThe next example performs the same operation with the same results, except this INSERT statement ex-\nplicitly identifies the column names and changes the order:\nINSERT INTO employee (emp_id, lastname, firstname, mi) \n VALUES (145303, 'Public', 'Jane', 'Q');\nThe last example assigns values for the employee ID and the first and last names, but not the middle initial.\nThis query will only succeed if the MI column is nullable or has a default value defined in the database\nschema.\nINSERT INTO employee (emp_id, lastname, firstname) \n VALUES (145304, 'Doe', 'John');\n227Supported SQL Statements\nMIGRATE\nMIGRATE — queues table rows for migration to an export target.\nSyntax\nMIGRATE FROM table-name\n[WHERE [NOT] boolean-expression [ {AND | OR} [NOT] boolean-expression]...]\nDescription\nThe MIGRATE statement selects rows from the specified table for migration to an export target and marks\nthe rows for deletion. When rows are migrated, they are first exported to the export target defined in the\ntable definition (in the MIGRATE TO TARGET clause). Once the export target acknowledges receipt of\nthe data, the rows are deleted from the VoltDB table.\nFor example, assume the reservations table contains information about airline reservations. Once the flight\nis over, you want to archive the reservation records. But you do not want them to be deleted until you are\nsure they reach the archive. To achieve this you can declare the table using the MIGRATE TO TARGET\nclause:\nCREATE TABLE Reservation\n MIGRATE TO TARGET oldreserve\n ( Reserve_ID INT NOT NULL,\n Flight_ID INT NOT NULL,\n Customer_ID INT);\nThen, when the flight is completed, you can migrate all associated reservations to the external system\nassociated with the oldreserve target, ensuring they are not deleted from the VoltDB database until they\nreach the target.\nMIGRATE FROM Reservation WHERE Reserve_ID= ?;\nThe MIGRATE statement applies to any tables declared with a MIGRATE TO TARGET clause. 
You can\nuse MIGRATE to manually migrate rows from tables that do not have an automated \"time to live\" (USING\nTTL) value defined or you can use it to preemptively migrate rows in a table declared with USING TTL.\nExample\nThe following example migrates user accounts if the account type is \"trial\" and the user hasn't logged in\nfor two weeks.\nMIGRATE FROM accounts \n WHERE acct_type=\"TRIAL\" AND last_login < DATEADD(DAY,-14,NOW());\n228Supported SQL Statements\nSELECT\nSELECT — Fetches the specified rows and columns from the database.\nSyntax\n[common-table-expression] Select-statement [{set-operator} Select-statement ] ...\nSelect-statement:\nSELECT [ TOP integer-value ]\n{ * | [ ALL | DISTINCT ] { column-name | selection-expression } [AS alias] [,...] }\nFROM { table-reference } [ join-clause ]...\n[WHERE [NOT] boolean-expression [ {AND | OR} [NOT] boolean-expression]...]\n[clause...]\ntable-reference:\n{ table-name [AS alias] | view-name [AS alias] | sub-query AS alias }\nsub-query:\n(Select-statement )\njoin-clause:\n, table-reference\n[INNER | {LEFT | RIGHT | FULL } [OUTER]] JOIN [{ table-reference }] [join-condition ]\njoin-condition:\nON conditional-expression\nUSING (column-reference [,...])\nclause:\nORDER BY { column-name | alias } [ ASC | DESC ] [,...]\nGROUP BY { column-name | alias } [,...]\nHAVING boolean-expression\nLIMIT integer-value [OFFSET row-count ]\nset-operator:\nUNION [ALL]\nINTERSECT [ALL]\nEXCEPT\ncommon-table-expression:\nWITH common-table-name [(column-name [,...])] AS ( Select-statement )\nWITH RECURSIVE common-table-name [(column-name [,...])] AS (\n Select-statement UNION ALL Select-statement\n)\nDescription\nThe SELECT statement retrieves the specified rows and columns from the database, filtered and sorted\nby any clauses that are included in the statement. In its simplest form, the SELECT statement retrieves\nthe values associated with individual columns. However, the selection expression can be a function such\nas COUNT and SUM.\nThe following features and limitations are important to note when using the SELECT statement with\nVoltDB:\n229Supported SQL Statements\n•See Appendix C, SQL Functions for a full list of the SQL functions that VoltDB supports.\n•VoltDB supports the following operators in expressions: addition (+), subtraction (-), multiplication (*),\ndivision (*) and string concatenation (||).\n•TOP n is a synonym for LIMIT n.\n•The WHERE expression supports the boolean operators: equals (=), not equals (!= or <>), greater than\n(>), less than (<), greater than or equal to (>=), less than or equal to (<=), LIKE, STARTS WITH,\nIS NULL, IS DISTINCT, IS NOT DISTINCT, AND, OR, and NOT. Note, however, although OR is\nsupported syntactically, VoltDB does not optimize these operations and use of OR may impact the\nperformance of your queries.\n•The boolean expression LIKE provides text pattern matching in a VARCHAR column. The syntax of\nthe LIKE expression is {string-expression} LIKE '{pattern}' where the pattern can\ncontain text and wildcards, including the underscore (_) for matching a single character and the percent\nsign (%) for matching zero or more characters. 
The string comparison is case sensitive.\nWhere an index exists on the column being scanned and the pattern starts with a text prefix (rather than\nstarting with a wildcard), VoltDB will attempt to use the index to maximize performance, For example, a\nquery limiting the results to rows from the EMPLOYEE table where the primary index¸ the JOB_CODE\ncolumn, begins with the characters \"Temp\" looks like this:\nSELECT * from EMPLOYEE where JOB_CODE like 'Temp%';\n•The STARTS WITH clause is useful in stored procedures because it uses indexed scans where the\nLIKE clause cannot. The expression STARTS WITH '{string-expression} ' is syntactically\nidentical to LIKE '{string-expression}%' in that it matches any string value that starts with\nstring-expression . The difference is that in a stored procedure, use of the STARTS WITH clause with a\nplaceholder (that is, \"START WITH ?\") utilizes available indexes, whereas LIKE ? requires a sequential\nscan, since the compiler cannot tell if the replacement text ends in a percent sign or not and must plan\nfor any possible string value. For example, if KEYWORD is the primary key for the ENTRY table, then\nVoltDB can use the primary key index to optimize the following stored procedure:\nCREATE PROCEDURE SimpleSearch AS\n SELECT keyword FROM entry WHERE keyword STARTS WITH ?;\n•The boolean expression IN determines if a given value is found within a list of alternatives. For exam-\nple, in the following code fragment the IN expression looks to see if a record is part of Hispaniola by\nevaluating whether the column COUNTRY is equal to either \"Dominican Republic\" or \"Haiti\":\nWHERE Country IN ('Dominican Republic', 'Haiti')\nNote that the list of alternatives must be enclosed in parentheses. The result of an IN expression is\nequivalent to a sequence of equality conditions separated by OR. So the preceding code fragment pro-\nduces the same boolean result as:\nWHERE Country='Dominican Republic' OR Country='Haiti'\nThe advantages are that the IN syntax provides more compact and readable code and can provide im-\nproved performance by using an index on the initial expression where available.\n•The boolean expression BETWEEN determines if a value falls within a given range. The evaluation is\ninclusive of the end points. In this way BETWEEN is a convenient alias for two boolean expressions\ndetermining if a value is greater than or equal to (>=) the starting value and less than or equal to (<=)\nthe end value. For example, the following two WHERE clauses are equivalent:\n230Supported SQL Statements\nWHERE salary BETWEEN ? AND ?\nWHERE salary >= ? AND salary <= ?\n•The boolean expressions IS DISTINCT FROM and IS NOT DISTINCT FROM are similar to the equals\n(\"=\") and not equals (\"<>\") operators respectively, except when evaluating null operands. If either or\nboth operands are null, the equals and not equals operators return a boolean null value, or false. IS\nDISTINCT FROM and IS NOT DISTINCT FROM consider null a valid operand. So if only one operand\nis null IS DISTINCT FROM returns true and IS NOT DISTINCT FROM returns false. If both operands\nare null IS DISTINCT FROM returns false and IS NOT DISTINCT FROM returns true.\n•When using placeholders in SQL statements involving the IN list expression, you can either do replace-\nment of individual values within the list or replace the list as a whole. 
For example, consider the fol-\nlowing statements:\nSELECT * from EMPLOYEE where STATUS IN (?, ?,?);\nSELECT * from EMPLOYEE where STATUS IN ?;\nIn the first statement, there are three parameters that replace individual values in the IN list, allowing\nyou to specify exactly three selection values. In the second statement the placeholder replaces the entire\nlist, including the parentheses. In this case the parameter to the procedure call must be an array and\nallows you to change not only the values of the alternatives but the number of criteria considered.\nThe following Java code fragment demonstrates how these two queries can be used in a stored procedure,\nresulting in equivalent SQL statements being executed:\nString arg1 = \"Salary\";\nString arg2 = \"Hourly\";\nString arg3 = \"Parttime\";\nvoltQueueSQL( query1, arg1, arg2, arg3);\nString listargs[] = new String[3];\nlistargs[0] = arg1;\nlistargs[1] = arg2;\nlistargs[2] = arg3;\nvoltQueueSQL( query2, (Object) listargs);\nNote that when passing arrays as parameters in Java, it is a good practice to explicitly cast them as an\nobject to avoid the array being implicitly expanded into individual call parameters.\n•VoltDB supports the use of CASE-WHEN-THEN-ELSE-END for conditional operations. For exam-\nple, the following SELECT expression uses a CASE statement to return different values based on the\ncontents of the price column:\nSELECT Prod_name, \n CASE WHEN price > 100.00 \n THEN 'Expensive'\n ELSE 'Cheap'\n END \nFROM products ORDER BY Prod_name; \nFor more complex conditional operations with multiple alternatives, use of the DECODE() function is\nrecommended.\n•VoltDB supports both inner and outer joins.\n231Supported SQL Statements\n•The SELECT statement supports subqueries as a table reference in the FROM clause. Subqueries must\nbe enclosed in parentheses and must be assigned a table alias.\n•You can only join two or more partitioned tables if those tables are partitioned on the same value and\njoined on equality of the partitioning column. Joining two partitioned tables on non-partitioned columns\nor on a range of values is not supported. However, there are no limitations on joining to replicated tables.\n•Extremely large result sets (greater than 50 megabytes in size) are not supported. If you execute a\nSELECT statement that generates a result set of more than 50 megabytes, VoltDB will return an error.\nWindow Functions\nWindow functions, which can appear in the selection list, allow you to perform more selective calculations\non the statement results than you can do with plain aggregation functions such as COUNT() or SUM().\nWindow functions execute the specified operation on a subset of the total selection results, controlled by\nthe PARTITION BY and ORDER BY clauses. The overall syntax for a window function is as follows:\nfunction-name ( [expression ] ) \n OVER ( [ PARTITION BY { expression [,...]} ] [ORDER BY { expression [,...]} ] ) \nWhere:\n•The PARTITION BY1 clause defines how the selection results are grouped.\n•The ORDER BY clause defines the order in which the rows are evaluated within each group.\nAn example may help explain the behavior of the two clauses. Say you have a database table that lists\nthe population of individual cities and includes columns for country and state. You can use the window\nfunction COUNT(city) OVER (PARTITION BY state) to include a count of all of the cities\nwithin each state as part of each city record. 
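Written out as a complete query, that example might look like the following sketch, which assumes a table named cities with columns city, state, and city_population (the table itself is not defined in this appendix):
SELECT city, state,
 COUNT(city) OVER (PARTITION BY state) AS cities_in_state
 FROM cities;
Each row of the result contains the city, its state, and the total number of cities recorded for that state.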
You can also control the order the records are evaluated using\nthe ORDER BY clause. Note, however, when you use the ORDER BY clause the window function results\nare calculated sequentially. So rather than show the count of all cities in the state each time, the window\nfunction will return the count of cities incrementally up to the current record in the group. So rather than\nuse COUNT() you can use RANK() to more accurately indicate the values being returned. For example,\nRANK() OVER (PARTITION BY state, ORDER BY city_population) lists the cities for\neach state with a rank value showing their ranking in order of their population.\nPlease be aware of the following limitations when using the window functions:\n•There can be only one window function per SELECT statement.\n•You cannot use a window function and GROUP BY in the same SELECT statement.\n•The argument(s) to the ORDER BY clause can be either integer or TIMESTAMP expressions only.\nThe following list describes the operation and constraints for each window function separately.\nRANK() OVER ( [ PARTITION BY { expression [,...]} ] ORDER BY { expression [,...]} )\nThe RANK() window function generates a BIGINT value (starting at 1) representing the ranking of\nthe current result within the group defined by the PARTITION BY expression(s) or of the entire result\nset if PARTITION BY is not specified. No function argument is allowed and the ORDER BY clause\nis required.\n1Use of the keyword PARTITION is for compatibility with SQL syntax from other databases and is unrelated to the columns used to partition\nsingle-partitioned tables. You can use the RANK() functions with either partitioned or replicated tables and the ranking column does not need to\nbe the same as the partitioning column for VoltDB partitioned tables.\n232Supported SQL Statements\nFor example, if you rank a column (say, city_population ) and use the country column as the parti-\ntioning column for the ranking, the cities of each country will be ranked separately. If you use both\nstate and country as partitioning columns, then the cities for each state in each country will be ranked\nseparately.\nDENSE_RANK() OVER ( [ PARTITION BY { expression [,...]} ] ORDER BY { expression [,...]} )\nThe DENSE_RANK() window function generates a BIGINT value (starting at 1) representing the\nranking of the current result, in the same way the RANK() window function does. The difference\nbetween RANK() and DENSE_RANK() is how they handle ranking when there is more than one row\nwith the same ORDER BY value.\nIf more than one row has the same ORDER BY value, those rows receive the same rank value in\nboth cases. However, with the RANK() function, the next rank value is incremented by the number\nof preceding rows. For example, if the ORDER BY values of four rows are 100, 98, 98, and 73 the\nrespective rank values using RANK() will be 1, 2, 2, and 4. Whereas, with the DENSE_RANK()\nfunction, the next rank value is always only incremented by one. 
So, if the ORDER BY values are\n100, 98, 98, and 73, the respective rank values using DENSE_RANK() will be 1, 2, 2, and 3.\nAs with the RANK() window function, no function argument is allowed for the DENSE_RANK()\nfunction and the ORDER BY clause is required.\nROW_NUMBER() OVER ( [ PARTITION BY { expression [,...]} ] [ ORDER BY { expression [,...]} ] )\nThe ROW_NUMBER() window function generates a BIGINT value representing the ordinal order\nof the current result within the group defined by the PARTITION BY expression(s) or of the entire\nresult set if PARTITION BY is not specified. No function argument is allowed.\nFor example, if you order a column (say, animal) and use the class column as the partitioning column,\nthe animals in each class will be ordered separately. So \"angelfish\" might receive row number 1 in the\ntype \"finned fish\" while \"aardvark\" is row number 1 in the type \"mammal\". But if you do not specify\nPARTITION BY, \"angelfish\" would be numbered after \"aardvark\".\nNote that an ORDER BY clause is not required. However, use of ORDER BY is strongly recommend-\ned, preferably with sufficient columns to make the ordering unique. Without the ORDER BY clause\nthe results of the query are nondeterministic.\nCOUNT( {expression} ) OVER ( [PARTITION BY { expression [,...]}] [ ORDER BY { expression [,...]} ] )\nThe COUNT() window function generates a sub-count of the number of rows within the current result\nset, where the PARTITION BY clause defines how the rows are grouped. The function argument is\nrequired.\nSUM({expression} ) OVER ( [PARTITION BY { expression [,...]}] [ ORDER BY { expression [,...]} ] )\nThe SUM() window function generates a sub-total of the specified column within the current result\nset, where the PARTITION BY clause defines how the rows are grouped. The function argument is\nrequired.\nMAX({expression} ) OVER ( [PARTITION BY { expression [,...]}] [ ORDER BY { expression [,...]} ] )\nThe MAX() window function reports the maximum value of a column within the current result set,\nwhere the PARTITION BY clause defines how the rows are grouped. If the ORDER BY clause is\nspecified, the maximum value is calculated incrementally over the rows in the order specified. The\nfunction argument is required.\nMIN({expression} ) OVER ( [PARTITION BY { expression [,...]}] [ ORDER BY { expression [,...]} ] )\nThe MIN() window function reports the minimum value of a column within the current result set,\nwhere the PARTITION BY clause defines how the rows are grouped. If the ORDER BY clause is\nspecified, the minimum value is calculated incrementally over the rows in the order specified. The\nfunction argument is required.\n233Supported SQL Statements\nSubqueries\nThe SELECT statement can include subqueries. Subqueries are separate SELECT statements, enclosed in\nparentheses, where the results of the subquery are used as values, expressions, or arguments within the\nsurrounding SELECT statement.\nSubqueries, like any SELECT statement, are extremely flexible and can return a wide array of information.\nA subquery might return:\n•A single row with a single column — this is sometimes known as a scalar subquery and represents a\nsingle value\n•A single row with multiple columns — this is also known as a row value expression\n•Multiple rows with one or more columns\nIn general, VoltDB supports subqueries in the FROM clause, in the selection expression, and in boolean\nexpressions in the WHERE clause or in CASE-WHEN-THEN-ELSE-END operations. 
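As a simple illustration of a scalar subquery in the selection expression, the following sketch returns the overall maximum price as an extra column on every row. It reuses the product_list table from other examples in this book and assumes the table is replicated (see the restrictions below):
-- scalar subquery used in place of a single column reference
SELECT product_name, price,
       (SELECT MAX(price) FROM product_list) AS highest_price
  FROM product_list
  ORDER BY product_name;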
However, different\ntypes of subqueries are allowed in different situations, depending on the type of data returned.\n•In the FROM clause, the SELECT statement supports all types of subquery as a table reference. The\nsubquery must be enclosed in parentheses and must be assigned a table alias.\n•In the selection expression, scalar subqueries can be used in place of a single column reference.\n•In the WHERE clause and CASE operations, both scalar and non-scalar subqueries can be used as part\nof boolean expressions. Scalar subqueries can be used in place of any single-valued expression. Non-\nscalar subqueries can be used in the following situations:\n•Row value comparisons — Boolean expressions that compare one row value expression to another\ncan use subqueries that resolve to one row with multiple columns. For example:\nselect * from t1 \n where (a,c) > (select a, c from t2 where b=t1.b);\n•IN and EXISTS — Subqueries that return multiple rows can be used as an argument to the IN or\nEXISTS predicate to determine if a value (or set of values) exists within the rows returned by the\nsubquery. For example:\nselect * from t1 \n where a in (select a from t2);\nselect * from t1\n where (a,c) in (select a, c from t2 where b=t1.b);\nselect * from t1 where c > 3 and \n exists (select a, b from t2 where a=t1.a);\n•ANY and ALL — Multi-row subqueries can also be used as the target of an ANY or ALL comparison,\nusing either a scalar or row expression comparison. For example:\nselect * from t1 \n where a > ALL (select a from t2);\nselect * from t1\n where (a,c) = ANY (select a, c from t2 where b=t1.b);\nNote that VoltDB does not support subqueries in the HAVING, ORDER BY, or GROUP BY clauses.\nSubqueries are also not supported for any of the data manipulation language (DML) statements: DELETE,\nINSERT, UPDATE, and UPSERT.\n234Supported SQL Statements\nFor the initial release of subqueries in selection and boolean expressions, only replicated tables can be\nused in the subquery. Both replicated and partitioned tables can be used in subqueries in place of table\nreferences in the FROM clause.\nSet Operations\nVoltDB also supports the set operations UNION, INTERSECT, and EXCEPT. These keywords let you\nperform set operations on two or more SELECT statements. UNION includes the combined results sets\nfrom the two SELECT statements, INTERSECT includes only those rows that appear in both SELECT\nstatement result sets, and EXCEPT includes only those rows that appear in one result set but not the other.\nNormally, UNION and INTERSECT provide a set including unique rows. That is, if a row appears in\nboth SELECT results, it only appears once in the combined result set. However, if you include the ALL\nmodifier, all matching rows are included. For example, UNION ALL will result in single entries for the\nrows that appear in only one of the SELECT results, but two copies of any rows that appear in both.\nThe UNION, INTERSECT, and EXCEPT operations obey the same rules that apply to joins:\n•You cannot perform set operations on SELECT statements that reference the same table.\n•All tables in the SELECT statements must either be replicated tables or partitioned tables partitioned\non the same column value, using equality of the partitioning column in the WHERE clause.\nCommon Table Expressions\nCommon table expressions let you declare a named subquery that can be used in the main query the same\nway regular tables and columns are used. 
Common expressions are useful for simplifying queries that use\nan expression multiple times or for separating out two distinct aspects of a larger query. You declare a\ncommon table expression by placing the WITH clause before the main SELECT query. The WITH clause:\n•Defines the name of the common table expression\n•Optionally, renames the resulting columns\n•Declares the expression itself using standard SELECT statement syntax\nVoltDB supports two forms of common table expressions:\n•Basic common expressions, with a name, optional column names, and the expression itself\n•Recursive expressions, using the WITH RECURSIVE keywords and merging two expressions with a\nUNION ALL set operation\nYou can use the results of the common table expression in the subsequent SELECT statement the same way\nyou would reference regular tables in the database. For example, the following common table expression\ndetermines how many members live in each city, then uses that information to return a list of members\nwho live in a city with fewer than the specified number of members:\nWITH city_count (city,membercount) AS (\n SELECT cityname, count(*) FROM members \n GROUP BY cityname\n)\nSELECT m.fullname, m.cityname FROM members AS m \n JOIN city_count AS cc ON m.city = cc.city\n WHERE membercount < ?\n235Supported SQL Statements\n ORDER BY m.cityname,m.fullname;\nRecursive common expressions are like regular table expressions, except they are self-referencing, so you\ncan iterate over the results in a recursive fashion. Recursive common expressions are particularly useful for\nevaluating tree or graph structures that cannot be natively represented in flat database records or queries.\nYou declare a recursive expression with the WITH RECURSIVE keywords followed by:\n•The table name and, optionally, alias names for the columns\n•A base query that defines the starting condition\n•A UNION ALL set operator\n•A second, recursive query that iterates over the common table expression results\nFor example, assume you wanted to know all the employees in a specific branch of the company's organi-\nzational structure. However, organizational charts are hierarchical. Each employee record may only record\nthat employee's direct manager. Recursive common expressions let you start at the top of a branch of the\norganizational \"tree\" and iteratively look for any employee reporting to that manager, then employees re-\nporting to that person, and so on. The common table expression might look like this.\nWITH RECURSIVE org (id) AS (\n SELECT mgr_id AS mgr FROM department\n WHERE dept_name=?\n UNION ALL\n SELECT emp_id FROM employee, org\n WHERE employee.mgr_id = org.id \n)\nSELECT e.emp_id, e.emp_name, e.emp_address\n FROM employee AS e, org\n WHERE e.emp_id = org.id;\nWarning\nAs with any recursive programming, you are responsible for ensuring the common table expres-\nsion does not result in an infinite loop. VoltDB cannot determine at compile time whether the\nexpression is sufficiently bounded. The preceding example succeeds because the application en-\nsures all employee/manager relationships are hierarchical — no manager reports to a employee\nlower in the tree. If evaluation of a common table expression results in a loop, VoltDB will even-\ntually exceed some limit (such as the query timeout or maximum temporary table space) and fail\nthe transaction. 
In certain cases, an infinite loop could use up so much memory it exceeds the\nresource limit and pauses the database.\nCommon table expressions in VoltDB have the following limitations:\n•There can be only one common table expression per query.\n•In multi-partition transactions, the common expression can reference replicated tables only.\n•In single-partition transactions, the common expression can reference both replicated and partitioned\ntable, with the caveat that as in any partitioned transaction partitioned tables have access to only that\ndata in the current partition.\n•For basic (non-recursive) common table expressions, the common expression cannot be self-referencing.\nThat is, the SELECT statement within the WITH clause can reference actual database table and view\nnames only, it cannot reference the common expression name itself.\n236Supported SQL Statements\nExamples\nThe following example retrieves all of the columns from the EMPLOYEE table where the last name is\n\"Smith\":\nSELECT * FROM employee WHERE lastname = 'Smith';\nThe following example retrieves selected columns for two tables at once, joined by the employee_id using\nan implicit inner join and sorted by last name:\nSELECT lastname, firstname, salary \n FROM employee AS e, compensation AS c\n WHERE e.employee_id = c.employee_id\n ORDER BY lastname DESC;\n237Supported SQL Statements\nTRUNCATE TABLE\nTRUNCATE TABLE — Deletes all records from the specified table.\nSyntax\nTRUNCATE TABLE table-name\nDescription\nThe TRUNCATE TABLE statement deletes all of the records from the specified table. TRUNCATE TA-\nBLE is the same as the statement DELETE FROM {table-name} with no selection clause. These\nstatements contain optimizations to increase performance and reduce memory usage over an equivalent\nDELETE statement containing a WHERE selection clause.\nThe goal of the TRUNCATE TABLE statement is to remove all records from a table. Since this is not\npossible in a partitioned stored procedure, VoltDB does not allow TRUNCATE TABLE statements within\npartitioned stored procedures. You can perform TRUNCATE TABLE statements in ad hoc or multi-par-\ntition procedures only.\nExamples\nThe following example removes all data from the CURRENT_STANDINGS table:\nTRUNCATE TABLE Current_standings;\n238Supported SQL Statements\nUPDATE\nUPDATE — Updates the values within the specified columns and rows of the database.\nSyntax\nUPDATE table-name SET column-name = value-expression [, ...]\n[WHERE [NOT] boolean-expression [ {AND | OR} [NOT] boolean-expression]...]\nDescription\nThe UPDATE statement changes the values of columns within the specified records. 
The following limitations are important to note when using the UPDATE statement with VoltDB:
•VoltDB supports the following arithmetic operators in expressions: addition (+), subtraction (-), multiplication (*), and division (/).
•The WHERE expression supports the boolean operators: equals (=), not equals (!= or <>), greater than (>), less than (<), greater than or equal to (>=), less than or equal to (<=), IS NULL, AND, OR, and NOT. Note, however, although OR is supported syntactically, VoltDB does not optimize these operations and use of OR may impact the performance of your queries.
•You can use subqueries in place of value expressions within the SET and WHERE clauses of the UPDATE statement, with the following provisions:
•See the description of subqueries in the SELECT statement for general rules concerning the construction of subqueries.
•In a multi-partition procedure, subqueries of the UPDATE statement can only reference replicated tables.
•In single-partitioned procedures, the subquery can reference both partitioned and replicated tables.
•For ad hoc UPDATE statements, the same rules apply except the SQL statement itself determines whether VoltDB executes it as a single-partitioned or multi-partitioned procedure. Statements that modify a partitioned table based on a specific value of the partitioning column are executed as single-partitioned procedures. All other statements are multi-partitioned.
Examples
The following example changes the ADDRESS column of the EMPLOYEE record with an employee ID of 145303:
UPDATE employee 
 SET address = '49 Lavender Sweep' 
 WHERE employee_id = 145303;
The following example increases the starting price by 25% for all ITEM records with a category ID of 7:
UPDATE item SET startprice = startprice * 1.25 WHERE categoryid = 7;
UPSERT
UPSERT — Either inserts new rows or updates existing rows depending on the primary key value.
Syntax
UPSERT INTO table-name [( column-name [,...] )] VALUES ( value-expression [,...] )
UPSERT INTO table-name [( column-name [,...] )] SELECT select-expression
Description
The UPSERT statement has the same syntax as the INSERT statement and will perform the same function, assuming a record with a matching primary key does not already exist in the database. If such a record does exist, UPSERT updates the existing record with the new column values. Note that the UPSERT statement can only be executed on tables that have a primary key.
UPSERT has the same two forms as the INSERT statement: UPSERT INTO... VALUES and UPSERT INTO... SELECT. The UPSERT statement also has similar constraints and limitations as the INSERT statement with regards to joining partitioned tables and overly complex SELECT clauses. (See the description of the INSERT statement for details.)
However, UPSERT INTO... SELECT has an additional limitation: the SELECT statement must produce deterministically ordered results.
That is, the query must not only produce the same rows, they must also be in the same order to ensure the subsequent inserts and updates produce identical results.
You can use subqueries within the VALUES clause of the UPSERT statement, with the following provisions:
•See the description of subqueries in the SELECT statement for general rules concerning the construction of subqueries.
•In a multi-partition procedure, subqueries of the UPSERT statement can only reference replicated tables.
•In single-partitioned procedures, the subquery can reference both partitioned and replicated tables.
•For ad hoc UPSERT statements, the same rules apply except the SQL statement itself determines whether VoltDB executes it as a single-partitioned or multi-partitioned procedure. Statements that modify a partitioned table based on a specific value of the partitioning column are executed as single-partitioned procedures. All other statements are multi-partitioned.
Examples
The following examples use two tables, Employee and Manager, both of which define the column emp_id as a primary key. In the first example, the UPSERT statement either creates a new row with the specified values or updates an existing row with the primary key 145303.
UPSERT INTO employee (emp_id, lastname, firstname, title, department) 
 VALUES (145303, 'Public', 'Jane', 'Manager', 'HR');
The next example copies records from the Employee table to the Manager table, if the employee's title is "Manager". Again, new records will be created or existing records updated depending on whether the employee already has a record in the Manager table. Notice the use of the primary key in an ORDER BY clause to ensure deterministic results from the SELECT statement.
UPSERT INTO Manager (emp_id, lastname, firstname, title, department)
 SELECT * from Employee WHERE title='Manager' ORDER BY emp_id;
Appendix C. SQL Functions
Functions let you aggregate column values and perform other calculations and transformations on data within your SQL queries. This appendix lists the functions alphabetically, describing for each their syntax and purpose.
The functions can also be grouped by the type of data they produce or operate on, as listed\nbelow.\nBitwise Functions\n•BIT_SHIFT_LEFT()\n•BIT_SHIFT_RIGHT()\n•BITAND()\n•BITNOT()\n•BITOR()\n•BITXOR()\nColumn Aggregation Functions\n•APPROX_COUNT_DISTINCT()\n•AVG()\n•COUNT()\n•MAX()\n•MIN()\n•SUM()\nDate and Time Functions\n•CURRENT_TIMESTAMP()\n•DATEADD()\n•DAY(), DAYOFMONTH()\n•DAYOFWEEK()\n•DAYOFYEAR()\n•EXTRACT()\n•FORMAT_TIMESTAMP()\n•FROM_UNIXTIME()\n•HOUR()\n•IS_VALID_TIMESTAMP()\n•MAX_VALID_TIMESTAMP()\n•MIN_VALID_TIMESTAMP()\n•MINUTE()\n•MONTH()\n•NOW()\n•QUARTER()\n•SECOND()\n•SINCE_EPOCH()\n•TO_TIMESTAMP()\n•TRUNCATE()\n•WEEK(), WEEKOFYEAR()\n•WEEKDAY()\n•YEAR()\n242SQL Functions\nGeospatial Functions\n•AREA()\n•ASTEXT()\n•CENTROID()\n•CONTAINS()\n•DISTANCE()\n•DWITHIN()\n•ISINVALIDREASON()\n•ISVALID()\n•LATITUDE()\n•LONGITUDE()\n•MAKEVALIDPOLYGON()\n•NUMINTERIORRINGS()\n•NUMPOINTS()\n•POINTFROMTEXT()\n•POLYGONFROMTEXT()\n•VALIDPOLYGONFROMTEXT()\nJSON Functions\n•ARRAY_ELEMENT()\n•ARRAY_LENGTH()\n•FIELD()\n•SET_FIELD()\nInternet Functions\n•INET6_ATON()\n•INET6_NTOA()\n•INET_ATON()\n•INET_NTOA()\nLogic and Conversion Functions\n•CAST()\n•COALESCE()\n•DECODE()\nMath Functions\n•ABS()\n•CEILING()\n•EXP()\n•FLOOR()\n•LN(), LOG()\n•LOG10()\n•MOD()\n•POWER()\n•ROUND()\n•SQRT()\n243SQL Functions\nString Functions\n•BIN()\n•CHAR()\n•CHAR_LENGTH()\n•CONCAT()\n•FORMAT_CURRENCY()\n•HEX()\n•LEFT()\n•LOWER()\n•OCTET_LENGTH()\n•OVERLAY()\n•POSITION()\n•REGEXP_POSITION()\n•REPEAT()\n•REPLACE()\n•RIGHT()\n•SPACE()\n•STR()\n•SUBSTRING()\n•TRIM()\n•UPPER()\nTrigonometric Functions\n•COS()\n•COT()\n•CSC()\n•DEGREES()\n•PI()\n•RADIANS()\n•SEC()\n•SIN()\n•TAN()\nMiscellaneous Functions\n•MIGRATING()\n244SQL Functions\nABS()\nABS() — Returns the absolute value of a numeric expression.\nSyntax\nABS( numeric-expression )\nDescription\nThe ABS() function returns the absolute value of the specified numeric expression.\nExample\nThe following example sorts the results of a SELECT expression by its proximity to a target value (spec-\nified by a placeholder), using the ABS() function to normalize values both above and below the intended\ntarget.\nSELECT price, product_name FROM product_list\n ORDER BY ABS(price - ?) ASC;\n245SQL Functions\nAPPROX_COUNT_DISTINCT()\nAPPROX_COUNT_DISTINCT() — Returns an approximate count of the number of distinct values for\nthe specified column expression.\nSyntax\nAPPROX_COUNT_DISTINCT( column-expression )\nDescription\nThe APPROX_COUNT_DISTINCT() function returns an approximation of the number of distinct values\nfor the specified column expression. APPROX_COUNT_DISTINCT( column-expression ) is an alternative\nto the SQL expression \" COUNT(DISTINCT column-expression )\".\nThe reason for using APPROX_COUNT_DISTINCT() is because it can be significantly faster and use\nless temporary memory than performing a precise COUNT DISTINCT operation. This is particularly true\nwhen calculating a distinct count of a partitioned table across all of the partitions. The approximation\nusually falls within ±1% of the actual count.\nYou can use the APPROX_COUNT_DISTINCT() function on column expressions of decimal, timestamp,\nor any size integer datatype. 
You cannot use the function on floating point (FLOAT) or variable length\n(VARCHAR and VARBINARY) columns.\nExample\nThe following example returns an approximation of the number of distinct products available in each store.\nSELECT store, APPROX_COUNT_DISTINCT(product_id) FROM catalog\n GROUP BY store ORDER BY store;\n246SQL Functions\nAREA()\nAREA() — Returns the area of a polygon in square meters.\nSyntax\nAREA( polygon )\nDescription\nThe AREA() function returns the area of a GEOGRAPHY value in square meters. The area is the total area\nof the outer ring minus the area of any inner rings within the polygon. The area is returned as a FLOAT\nvalue.\nExample\nThe following example calculates the sum of the areas of multiple polygons representing fields on a farm.\nSELECT farmer, SUM(AREA(field)) FROM farm\n WHERE farmer = 'Old MacDonald' GROUP BY farmer;\n247SQL Functions\nARRAY_ELEMENT()\nARRAY_ELEMENT() — Returns the element at the specified location in a JSON array.\nSyntax\nARRAY_ELEMENT( JSON-array , element-position )\nDescription\nThe ARRAY_ELEMENT() function extracts a single element from a JSON array. The array position is\nzero-based. In other words, the first element in the array is in position \"0\". The function returns the element\nas a string. For example, the following function invocation returns the string \"two\":\nARRAY_ELEMENT('[\"zero\",\"one\",\"two\",\"three\"]',2)\nNote that the array element is always returned as a string. So in the following example, the function returns\n\"2\" as a string rather than an integer:\nARRAY_ELEMENT('[0,1,2,3]',2)\nFinally, the element may itself be a valid JSON-encoded object. For example, the following function\nreturns the string \"[0,1,2,3]\":\nARRAY_ELEMENT('[[0,1,2,3],[\"zero\",\"one\",\"two\",\"three\"]]',0)\nThe ARRAY_ELEMENT() function can be combined with other functions, such as FIELD(), to traverse\nmore complex JSON structures. The function returns a NULL value if any of the following conditions\nare true:\n•The position argument is less than zero\n•The position argument is greater than or equal to the length of the array\n•The JSON string does not represent an array (that is, the string is a valid JSON scalar value or object)\nThe function returns an error if the first argument is not a valid JSON string.\nExample\nThe following example uses the ARRAY_ELEMENT() function along with FIELD() to extract specific\narray elements from one field in a JSON-encoded VARCHAR column:\nSELECT language, \n ARRAY_ELEMENT(FIELD(words,'colors'),1) AS color,\n ARRAY_ELEMENT(FIELD(words,'numbers'),2) AS number\n FROM world_languages WHERE language = 'French'; \nAssuming the column words has the following structure, the query returns the strings \"French', \"vert\",\nand \"trois\".\n{\"colors\":[\"rouge\",\"vert\",\"bleu\"],\n \"numbers\":[\"un\",\"deux\",\"trois\"]}\n248SQL Functions\nARRAY_LENGTH()\nARRAY_LENGTH() — Returns the number of elements in a JSON array.\nSyntax\nARRAY_LENGTH( JSON-array )\nDescription\nThe ARRAY_LENGTH() returns the length of a JSON array; that is, the number of elements the array\ncontains. The length is returned as an integer.\nThe ARRAY_LENGTH() function can be combined with other functions, such as FIELD(), to traverse\nmore complex JSON structures.\nThe function returns NULL if the argument is a valid JSON string but does not represent an array. 
The\nfunction returns an error if the argument is not a valid JSON string.\nExample\nThe following example uses the ARRAY_LENGTH(), ARRAY_ELEMENT(), and FIELD() functions to\nreturn the last element of an array in a larger JSON string. The functions perform the following actions:\n•Innermost, the FIELD() function extracts the JSON field \"alerts\", which is assumed to be an array, from\nthe column messages .\n•ARRAY_LENGTH() determines the number of elements in the array.\n•ARRAY_ELEMENT() returns the last element based on the value of ARRAY_LENGTH() minus one\n(because the array positions are zero-based).\nSELECT ARRAY_ELEMENT(FIELD(messages,'alerts'),\n ARRAY_LENGTH(FIELD(messages,'alerts'))-1) AS last_alert,\n station FROM reportlog \n WHERE station=?;\n249SQL Functions\nASTEXT()\nASTEXT() — Returns the Well Known Text (WKT) representation of a GEOGRAPHY or GEOGRA-\nPHY_POINT value.\nSyntax\nASTEXT( polygon | point )\nDescription\nThe ASTEXT() function returns a text string containing a Well Known Text (WKT) representation of a\nGEOGRAPHY or GEOGRAPHY_POINT value. ASTEXT( value ) produces the same results as calling\nCAST( value AS VARCHAR).\nNote that ASTEXT() does not return the identical text string that was originally input using POINTFROM-\nTEXT() or POLYGONFROMTEXT(). When geospatial data is converted from WKT to its internal repre-\nsentation, the string representations of longitude and latitude are converted to double floating point values.\nRounding and differing levels of precision may result in small differences in the stored values. The use\nof spaces and capitalization may also vary between the original input strings and the computed output of\nthe ASTEXT() function.\nExamples\nThe following SELECT statement uses the ASTEXT() function to return the WKT representation of a\nGEOGRAPHY_POINT value in the column location.\nSELECT name, ASTEXT(location) FROM city\n WHERE state = 'NY' ORDER BY name;\n250SQL Functions\nAVG()\nAVG() — Returns the average of a range of numeric column values.\nSyntax\nAVG( column-expression )\nDescription\nThe AVG() function returns the average of a range of numeric column values. The values being averaged\ndepend on the constraints defined by the WHERE and GROUP BY clauses.\nExample\nThe following example returns the average price for each product category.\nSELECT AVG(price), category FROM product_list\n GROUP BY category ORDER BY category;\n251SQL Functions\nBIN()\nBIN() — Returns the binary representation of a BIGINT value as a string.\nSyntax\nBIN( value )\nDescription\nThe BIN() function returns the binary representation of a BIGINT value as a string. 
The function will\nreturn the shortest valid string representation, truncating any preceding zeros (except in the case of the\nvalue zero, which is returned as the string \"0\").\nExample\nThe following example use the BIN and BITAND functions to return the binary representations of two\nBIGINT values and their binary intersection.\n$ sqlcmd\n1> create table bits (a bigint, b bigint);\n2> insert into bits values(55,99);\n3> select bin(a) as int1, bin(b) as int2, \n4> bin(bitand(a,b)) as intersection from bits;\nINT1 INT2 INTERSECTION \n-------- --------- ------------- \n110111 1100011 100011 \n252SQL Functions\nBIT_SHIFT_LEFT()\nBIT_SHIFT_LEFT() — Shifts the bits of a BIGINT value to the left a specified number of places.\nSyntax\nBIT_SHIFT_LEFT( value, offset )\nDescription\nThe BIT_SHIFT_LEFT() function shifts the bit values of a BIGINT value to the left the number of places\nspecified by offset. The offset must be a positive integer value. The unspecified bits to the right are padded\nwith zeros. So, for example, if the offset is 5, the left-most 5 bits are dropped, the remaining bits are shifted\n5 places to the left, and the right-most 5 bits are set to zero. The result is returned as a new BIGINT value\n— the arguments to the function are not modified.\nThe left-most bit of an integer number is the sign bit, but has no special meaning for bitwise operations.\nHowever, The left-most bit set to 1 followed by all zeros is reserved as the NULL value. If you use a\nNULL value as an argument, you will receive a NULL response. But in all other circumstances (using\nnon-NULL BIGINT arguments), the bitwise functions should never return a NULL result. Consequently\nany bitwise operation that would result in only the left-most bit being set, will generate an error at runtime.\nExamples\nThe following example shifts the bits in a BIGINT value three places to the left and displays the hexadec-\nimal representation of both the initial value and the resulting value.\n$ sqlcmd\n1> create table bits (a bigint);\n2> insert into bits values (112);\n3> select hex(a), hex(bit_shift_left(a,3)) from bits;\nC1 C2 \n-------- --------- \n70 380 \n253SQL Functions\nBIT_SHIFT_RIGHT()\nBIT_SHIFT_RIGHT() — Shifts the bits of a BIGINT value to the right a specified number of places.\nSyntax\nBIT_SHIFT_RIGHT( value, offset )\nDescription\nThe BIT_SHIFT_RIGHT() function shifts the bit values of a BIGINT value to the right the number of\nplaces specified by offset. The offset must be a positive integer value. The unspecified bits to the left are\npadded with zeros. So, for example, if the offset is 5, the right-most 5 bits are dropped, the remaining bits\nare shifted 5 places to the right, and the left-most 5 bits are set to zero. The result is returned as a new\nBIGINT value — the arguments to the function are not modified.\nThe left-most bit of an integer number is the sign bit, but has no special meaning for bitwise operations.\nHowever, The left-most bit set to 1 followed by all zeros is reserved as the NULL value. If you use a\nNULL value as an argument, you will receive a NULL response. But in all other circumstances (using\nnon-NULL BIGINT arguments), the bitwise functions should never return a NULL result. 
Consequently\nany bitwise operation that would result in only the left-most bit being set, will generate an error at runtime.\nExamples\nThe following example shifts the bits in a BIGINT value three places to the right and displays the hexa-\ndecimal representation of both the initial value and the resulting value.\n$ sqlcmd\n1> create table bits (a bigint);\n2> insert into bits values (112);\n3> select hex(a), hex(bit_shift_right(a,3)) from bits;\nC1 C2 \n-------- -------\n70 E \n254SQL Functions\nBITAND()\nBITAND() — Returns the mask of bits set in both of two BIGINT values\nSyntax\nBITAND( value, value )\nDescription\nThe BITAND() function returns the mask of bits set in both of two BIGINT integers. In other words, it\nperforms a bitwise AND operation on the two arguments. The result is returned as a new BIGINT value\n— the arguments to the function are not modified.\nThe left-most bit of an integer number is the sign bit, but has no special meaning for bitwise operations.\nHowever, The left-most bit set to 1 followed by all zeros is reserved as the NULL value. If you use a\nNULL value as an argument, you will receive a NULL response. But in all other circumstances (using\nnon-NULL BIGINT arguments), the bitwise functions should never return a NULL result. Consequently\nany bitwise operation that would result in only the left-most bit being set, will generate an error at runtime.\nExamples\nThe following example writes values into two BIGINT columns of the table bits and then returns the\nbitwise AND of the columns:\n$ sqlcmd\n1> create table bits (a bigint, b bigint);\n2> insert into bits (a,b) values (7,13);\n3> select bitand(a,b) from bits;\nC1 \n---\n 5\n255SQL Functions\nBITNOT()\nBITNOT() — Returns the mask reversing every bit of a BIGINT value.\nSyntax\nBITNOT( value )\nDescription\nThe BITNOT() function returns the mask reversing every bit in a BIGINT value. In other words, it performs\na bitwise NOT operation, returning the complement of the argument. The result is returned as a new\nBIGINT value — the argument to the function is not modified.\nThe left-most bit of an integer number is the sign bit, but has no special meaning for bitwise operations.\nHowever, The left-most bit set to 1 followed by all zeros is reserved as the NULL value. If you use a\nNULL value as an argument, you will receive a NULL response. But in all other circumstances (using\nnon-NULL BIGINT arguments), the bitwise functions should never return a NULL result. Consequently\nany bitwise operation that would result in only the left-most bit being set, will generate an error at runtime.\nExamples\nThe following example writes a value into a BIGINT column of the table bits and then returns the bitwise\nNOT of the column:\n$ sqlcmd\n1> create table bits (a bigint);\n2> insert into bits (a) values (1234567890);\n3> select bitnot(a) from bits;\nC1 \n------------\n -1234567891\n256SQL Functions\nBITOR()\nBITOR() — Returns the mask of bits set in either of two BIGINT values\nSyntax\nBITOR( value, value )\nDescription\nThe BITOR) function returns the mask of bits set in either of two BIGINT integers. In other words, it\nperforms a bitwise OR operation on the two arguments. The result is returned as a new BIGINT value —\nthe arguments to the function are not modified.\nThe left-most bit of an integer number is the sign bit, but has no special meaning for bitwise operations.\nHowever, The left-most bit set to 1 followed by all zeros is reserved as the NULL value. 
If you use a\nNULL value as an argument, you will receive a NULL response. But in all other circumstances (using\nnon-NULL BIGINT arguments), the bitwise functions should never return a NULL result. Consequently\nany bitwise operation that would result in only the left-most bit being set, will generate an error at runtime.\nExamples\nThe following example writes values into two BIGINT columns of the table bits and then returns the\nbitwise OR of the columns:\n$ sqlcmd\n1> create table bits (a bigint, b bigint);\n2> insert into bits (a,b) values (7,13);\n3> select bitor(a,b) from bits;\nC1 \n---\n 15\n257SQL Functions\nBITXOR()\nBITXOR() — Returns the mask of bits set in one but not both of two BIGINT values\nSyntax\nBITXOR( value, value )\nDescription\nThe BITXOR() function returns the mask of bits set in one but not both of two BIGINT integers. In other\nwords, it performs a bitwise XOR operation on the two arguments. The result is returned as a new BIGINT\nvalue — the arguments to the function are not modified.\nThe left-most bit of an integer number is the sign bit, but has no special meaning for bitwise operations.\nHowever, The left-most bit set to 1 followed by all zeros is reserved as the NULL value. If you use a\nNULL value as an argument, you will receive a NULL response. But in all other circumstances (using\nnon-NULL BIGINT arguments), the bitwise functions should never return a NULL result. Consequently\nany bitwise operation that would result in only the left-most bit being set, will generate an error at runtime.\nExamples\nThe following example writes values into two BIGINT columns of the table bits and then returns the\nbitwise XOR of the columns:\n$ sqlcmd\n1> create table bits (a bigint, b bigint);\n2> insert into bits (a,b) values (7,13);\n3> select bitxor(a,b) from bits;\nC1 \n---\n 10\n258SQL Functions\nCAST()\nCAST() — Explicitly converts an expression to the specified datatype.\nSyntax\nCAST( expression AS datatype )\nDescription\nThe CAST() function converts an expression to a specified datatype. Cases where casting is beneficial\ninclude when converting between numeric types (such as integer and float) or when converting a numeric\nvalue to a string.\nAll numeric datatypes can be used as the source and numeric or string datatypes can be the target. When\nconverting from decimal values to integers, values are truncated. You can also cast from a TIMESTAMP\nto a VARCHAR or from a VARCHAR to a TIMESTAMP, assuming the text string is formatted as YYYY-\nMM-DD or YYYY-MM-DD HH:MM:SS.nnnnnnn . Where the runtime value cannot be converted (for ex-\nample, the value exceeds the maximum allowable value of the target datatype) an error is thrown.\nYou cannot use VARBINARY as either the target or the source datatype. To convert between numeric and\nTIMESTAMP values, use the TO_TIMESTAMP(), FROM_UNIXTIME(), and EXTRACT() functions.\nThe result of the CAST() function of a null value is the corresponding null in the target datatype.\nExample\nThe following example uses the CAST() function to ensure the result of an expression is also a floating\npoint number and does not truncate the decimal portion.\nSELECT contestant, CAST( (votes * 100) as FLOAT) / ? 
as percentage \n FROM contest ORDER BY votes, contestant;\n259SQL Functions\nCEILING()\nCEILING() — Returns the smallest integer value greater than or equal to a numeric expression.\nSyntax\nCEILING( numeric-expression )\nDescription\nThe CEILING() function returns the next integer greater than or equal to the specified numeric expression.\nIn other words, the CEILING() function \"rounds up\" numeric values. For example:\nCEILING(3.1415) = 4\nCEILING(2.0) = 2\nCEILING(-5.32) = -5\nExample\nThe following example uses the CEILING function to calculate the shipping costs for a product based on\nits weight in the next whole number of pounds.\nSELECT shipping.cost_per_lb * CEILING(product.weight),\n product.prod_id FROM product\n JOIN shipping ON product.prod_id=shipping.prod_id\n ORDER BY product.prod_id;\n260SQL Functions\nCENTROID()\nCENTROID() — Returns the central point of a polygon.\nSyntax\nCENTROID( polygon )\nDescription\nThe CENTROID() returns the central point of a GEOGRAPHY polygon. The centroid is the point where\nany line passing through the centroid divides the polygon into two segments of equal area. The return value\nof the CENTROID() function is a GEOGRAPHY_POINT value.\nNote that the centroid may fall outside of the polygon itself. For example, if the polygon is a ring (that is,\na circle with an inner circle removed) or a horseshoe shape.\nExample\nThe following example uses the CENTROID() and LATITUDE() functions to return a list of countries\nwhere the majority of the land mass falls above the equator.\nSELECT name, capital FROM country\n WHERE LATITUDE(CENTROID(outline)) > 0 \n ORDER BY name, capital;\n261SQL Functions\nCHAR()\nCHAR() — Returns a string with a single UTF-8 character associated with the specified character code.\nSyntax\nCHAR( integer )\nDescription\nThe CHAR() function returns a string containing a single UTF-8 character that matches the specified\nUNICODE character code. One use of the CHAR() function is to insert non-printing and other hard to\nenter characters into string expressions.\nExample\nThe following example uses CHAR() to add a copyright symbol into a VARCHAR field.\nUPDATE book SET copyright_notice= CHAR(169) || CAST(? AS VARCHAR) \n WHERE isbn=?;\n262SQL Functions\nCHAR_LENGTH()\nCHAR_LENGTH() — Returns the number of characters in a string.\nSyntax\nCHAR_LENGTH( string-expression )\nDescription\nThe CHAR_LENGTH() function returns the number of text characters in a string.\nNote that the number of characters and the amount of physical space required to store those characters can\ndiffer. To measure the length of the string, in bytes, use the OCTET_LENGTH() function.\nExample\nThe following example returns the string in the column LastName as well as the number of characters and\nlength in bytes of that string.\nSELECT LastName, CHAR_LENGTH(LastName), OCTET_LENGTH(LastName)\n FROM Customers ORDER BY LastName, FirstName;\n263SQL Functions\nCOALESCE()\nCOALESCE() — Returns the first non-null argument, or null.\nSyntax\nCOALESCE( expression [, ... 
] )
Description
The COALESCE() function takes multiple arguments and returns the value of the first argument that is not null, or — if all arguments are null — the function returns null.
Examples
The following example uses COALESCE to perform two functions:
•Replace possibly null column values with placeholder text
•Return one of several column values
In the second usage, the SELECT statement returns the value of the column State, Province, or Territory depending on the first that contains a non-null value. Or the function returns a null value if none of the columns are non-null.
SELECT lastname, firstname, 
 COALESCE(address,'[address unknown]'),
 COALESCE(state, province, territory),
 country FROM users ORDER BY lastname;
CONCAT()
CONCAT() — Concatenates two or more strings and returns the result.
Syntax
CONCAT( string-expression { , ... } )
Description
The CONCAT() function concatenates two or more strings and returns the resulting string. The string concatenation operator || performs the same function as CONCAT().
Example
The following example concatenates the contents of two columns as part of a SELECT expression.
SELECT price, CONCAT(category,part_name) AS full_part_name
 FROM product_list ORDER BY price;
The next example does something similar but uses the || operator as a shorthand to concatenate three strings, two columns and a string constant, as part of a SELECT expression.
SELECT lastname || ', ' || firstname AS full_name
 FROM customers ORDER BY lastname, firstname;
CONTAINS()
CONTAINS() — Returns true or false depending on whether a point falls within the specified polygon.
Syntax
CONTAINS( polygon, point )
Description
The CONTAINS() function determines if a given point falls within the specified GEOGRAPHY polygon. If so, the function returns a boolean value of true. If not, it returns false.
Example
The following example uses the CONTAINS function to see if a specific user is within the boundaries of a city or not by evaluating if the user.location GEOGRAPHY_POINT column value falls within the polygon defined by the city.boundary GEOGRAPHY column.
SELECT user.name, user.id, city.name FROM user, city
 WHERE user.id = ? AND CONTAINS(city.boundary,user.location);
COS()
COS() — Returns the cosine of an angle specified in radians.
Syntax
COS( {numeric-expression} )
Description
The COS() function returns the cosine of a specified angle as a FLOAT value. The angle must be specified in radians as a numeric expression.
Example
The following example returns the sine, cosine, and tangent of angles from 0 to 90 degrees (where the angle is specified in radians).
SELECT SIN(radians), COS(radians), TAN(radians) 
 FROM triangles WHERE radians >= 0 AND radians <= PI()/2;
COT()
COT() — Returns the cotangent of an angle specified in radians.
Syntax
COT( {numeric-expression} )
Description
The COT() function returns the cotangent of a specified angle as a FLOAT value. The angle must be
The angle must be\nspecified in radians as a numeric expression.\nExamples\nThe following example returns the secant, cosecant, and cotangent of angles from 0 to 90 degrees (where\nthe angle is specified in radians).\nSELECT SEC(radians), CSC(radians), COT(radians) \n FROM triangles WHERE radians >= 0 AND radians <= PI()/2;\n268SQL Functions\nCOUNT()\nCOUNT() — Returns the number of rows selected containing the specified column.\nSyntax\nCOUNT( column-expression )\nDescription\nThe COUNT() function returns the number of rows selected for the specified column. Since the actual\nvalue of the column is not used to calculate the count, you can use the asterisk (*) as a wildcard for any\ncolumn. For example the query SELECT COUNT(*) FROM widgets returns the number of rows in\nthe table widgets , without needing to know what columns the table contains.\nThe one case where the column name is significant is if you use the DISTINCT clause to constrain the\nselection expression. For example, SELECT COUNT(DISTINCT last_name) FROM customer\nreturns the count of unique last names in the customer table.\nExamples\nThe following example returns the number of rows where the product name starts with the captial letter A.\nSELECT COUNT(*) FROM product_list\n WHERE product_name LIKE 'A%';\nThe next example returns the total number of unique product categories in the product list.\nSELECT COUNT(DISTINCT category) FROM product_list;\n269SQL Functions\nCSC()\nCSC() — Returns the cosecant of an angle specified in radians.\nSyntax\nCSC( {numeric-expression } )\nDescription\nThe CSC() function returns the cosecant of a specified angle as a FLOAT value. The angle must be spec-\nified in radians as a numeric expression.\nExamples\nThe following example returns the secant, cosecant, and cotangent of angles from 0 to 90 degrees (where\nthe angle is specified in radians).\nSELECT SEC(radians), CSC(radians), COT(radians) \n FROM triangles WHERE radians >= 0 AND radians <= PI()/2;\n270SQL Functions\nCURRENT_TIMESTAMP()\nCURRENT_TIMESTAMP() — Returns the current time as a timestamp value.\nSyntax\nCURRENT_TIMESTAMP()\nCURRENT_TIMESTAMP\nDescription\nThe CURRENT_TIMESTAMP() function returns the current time as a VoltDB timestamp. The value of\nthe timestamp is determined when the query or stored procedure is invoked. 
Since there are no arguments\nto the function, the parentheses following the function name are optional.\nSeveral important aspects of how the CURRENT_TIMESTAMP() function operates are:\n•The value returned is guaranteed to be identical for all partitions that execute the query.\n•The value returned is measured in milliseconds then padded to create a timestamp value in microseconds.\n•During command logging, the returned value is stored as part of the log, so when the command log is\nreplayed, the same value is used during the replay of the query.\n•Similarly, for database replication (DR) the value returned is passed and reused by the replica database\nwhen replaying the query.\n•You can specify CURRENT_TIMESTAMP() as a default value in the CREATE TABLE statement\nwhen defining the schema of a VoltDB database.\n•The CURRENT_TIMESTAMP() function cannot be used in the CREATE INDEX or CREATE VIEW\nstatements.\nThe NOW() and CURRENT_TIMESTAMP() functions are synonyms and perform an identical function.\nExample\nThe following example uses CURRENT_TIMESTAMP() in the WHERE clause to delete alert events that\noccurred in the past:\nDELETE FROM Alert_event WHERE event_timestamp < CURRENT_TIMESTAMP;\n271SQL Functions\nDATEADD()\nDATEADD() — Returns a new timestamp value by adding a specified time interval to an existing time-\nstamp value.\nSyntax\nDATEADD( time-unit, interval, timestamp )\nDescription\nThe DATEADD() function creates a new TIMESTAMP value by adding (or subtracting for negative val-\nues) the specified time interval from another TIMESTAMP value. The first argument specifies the time\nunit of the interval. The valid time unit keywords are:\n•MICROSECOND (or MICROS)\n•MILLISECOND (or MILLIS)\n•SECOND\n•MINUTE\n•HOUR\n•DAY\n•MONTH\n•QUARTER\n•YEAR\nThe second argument is an integer value specifying the interval to add to the TIMESTAMP value. A\npositive interval moves the time ahead. A negative interval moves the time value backwards. The third\nargument specifies the TIMESTAMP value to which the interval is applied.\nThe DATEADD function takes into account leap years and the variable number of days in a month. There-\nfore, if the year of either the specified timestamp or the resulting timestamp is a leap year, the day is ad-\njusted to its correct value. For example, DATEADD(YEAR, 1, ‘2008-02-29’) returns ‘2009-02-28’. Sim-\nilarly, if the original timestamp is the last day of a month, then the resulting timestamp will be adjusted as\nnecessary. For example, DATEADD(MONTH, 1, ‘2008-03-31’) returns ‘2008-04-30’.\nExample\nThe following example uses the DATEADD() function to find all records where the TIMESTAMP column,\nincident, occurs within one day before a specified timestamp (entered as a POSIX time value).\nSELECT incident, description FROM securityLog \n WHERE DATEADD(DAY, 1, incident) > FROM_UNIXTIME(?) \n AND incident < FROM_UNIXTIME(?) \n ORDER BY incident, description;\n272SQL Functions\nDAY(), DAYOFMONTH()\nDAY(), DAYOFMONTH() — Returns the day of the month as an integer value.\nSyntax\nDAY( timestamp-value )\nDAYOFMONTH( timestamp-value )\nDescription\nThe DAY() function returns an integer value between 1 and 31 representing the timestamp's day of the\nmonth. The DAY() and DAYOFMONTH() functions are synonyms. 
These functions produce the same result as using the DAY or DAY_OF_MONTH keywords with the EXTRACT() function.
Examples
The following example uses the DAY(), MONTH(), and YEAR() functions to return a timestamp column as a formatted date string.
SELECT CAST( MONTH(starttime) AS VARCHAR) || '/' || 
 CAST( DAY(starttime) AS VARCHAR) || '/' || 
 CAST( YEAR(starttime) AS VARCHAR), title, description 
 FROM event ORDER BY starttime;
DAYOFWEEK()
DAYOFWEEK() — Returns the day of the week as an integer between 1 and 7.
Syntax
DAYOFWEEK( timestamp-value )
Description
The DAYOFWEEK() function returns an integer value between 1 and 7 representing the day of the week in a timestamp value. For the DAYOFWEEK() function, the week starts (1) on Sunday and ends (7) on Saturday.
This function produces the same result as using the DAY_OF_WEEK keyword with the EXTRACT() function.
Examples
The following example uses DAYOFWEEK() and the DECODE() function to return a string value representing the day of the week for the specified TIMESTAMP value.
SELECT eventtime, 
 DECODE(DAYOFWEEK(eventtime),
 1, 'Sunday',
 2, 'Monday',
 3, 'Tuesday',
 4, 'Wednesday',
 5, 'Thursday',
 6, 'Friday',
 7, 'Saturday') AS eventday
 FROM event ORDER BY eventtime;
DAYOFYEAR()
DAYOFYEAR() — Returns the day of the year as an integer between 1 and 366.
Syntax
DAYOFYEAR( timestamp-value )
Description
The DAYOFYEAR() function returns an integer value between 1 and 366 representing the day of the year of a timestamp value. This function produces the same result as using the DAY_OF_YEAR keyword with the EXTRACT() function.
Examples
The following example uses the DAYOFYEAR() function to determine the number of days until an event occurs.
SELECT DECODE(YEAR(NOW), YEAR(starttime), 
 CAST(DAYOFYEAR(starttime) - DAYOFYEAR(NOW) AS VARCHAR) 
 || ' days remaining', 
 CAST(YEAR(starttime) - YEAR(NOW) AS VARCHAR)
 || ' years remaining'),
 eventname FROM event;
DECODE()
DECODE() — Evaluates an expression against one or more alternatives and returns the matching response.
Syntax
DECODE( expression , { comparison-value , result } [,...] [,default-result ] )
Description
The DECODE() function compares an expression against one or more possible comparison values. If the expression matches the comparison-value , the associated result is returned. If the expression does not match any of the comparison values, the default-result is returned.
If the expression does not match any\ncomparison value and no default result is specified, the function returns NULL.\nThe DECODE() function operates the same way an IF-THEN-ELSE, or CASE statement does in other\nlanguages.\nExample\nThe following example uses the DECODE() function to interpret a coded data column and replace it with\nthe appropriate meaning for each code.\nSELECT title, industry, DECODE(salary_range, \n 'A', 'under $25,000',\n 'B', '$25,000 - $34,999',\n 'C', '$35,000 - $49,999',\n 'D', '$50,000 - $74,999',\n 'E', '$75,000 - $99,000',\n 'F', '$100,000 and over',\n 'unspecified') from survey_results\n order by industry, title;\nThe next example tests a value against three columns and returns the name of the column when a match\nis found, or a message indicating no match if none is found.\nSELECT product_name, DECODE(?,product_name,'PRODUCT NAME',\n part_name, 'PART NAME',\n category, 'CATEGORY',\n 'NO MATCH FOUND')\n FROM product_list ORDER BY product_name;\n276SQL Functions\nDEGREES()\nDEGREES() — Converts an angle in radians to degrees\nSyntax\nDEGREES( angle-in-radians )\nDescription\nThe DEGREES() function converts a floating-point value representing an angle measured in radians to\nthe equivalent angle measured in degrees.\nExample\nThe following SELECT statement converts a column value stored in radians to degrees before returning\nit to the user.\nSELECT test_number, distance, DEGREES(angle) as angle_in_degrees\n FROM tests ORDER BY test_number;\n277SQL Functions\nDISTANCE()\nDISTANCE() — Returns the distance between two points or a point and a polygon.\nSyntax\nDISTANCE( point-or-polygon , point-or-polygon )\nDescription\nThe DISTANCE() function returns the distance, measured in meters, between two points or a point and a\npolygon. The arguments to the function can be either two GEOGRAPHY_POINT values or a GEOGRA-\nPHY_POINT and GEOGRAPHY value.\nThe DISTANCE() function accepts multiple datatypes for its two arguments, but there are constraints\non which combination of datatypes are allowed. For example, the two arguments cannot both be of type\nGEOGRAPHY. Consequently, the VoltDB planner must know the datatype of the arguments when the\nstatement is compiled. So using generic, untyped placeholders for these arguments is not valid. This means\nyou cannot use syntax such as DISTANCE(?,?) in a stored procedure. However, you can use placeholders\nas long as they are cast to specific types. For example:\nDISTANCE(POINTFROMTEXT(?),POLYGONFROMTEXT(?))\nExamples\nThe following example finds the closest city to a specified user, using the GEOGRAPHY_POINT column\nuser.location and the GEOGRAPHY column city.boundary.\nSELECT TOP 1 user.name, city.name, \n DISTANCE(user.location, city.boundary) \n FROM user, city WHERE user.id = ?\n ORDER BY DISTANCE(user.location, city.boundary) ASC;\nThe next example finds the distance in kilometers from a truck to stores, listed in order with closest first,\nusing the two GEOGRAPHY_POINT columns truck.loc and store.loc.\nSELECT store.address, \n DISTANCE(store.loc,truck.loc) / 1000 AS distance\n FROM store, truck WHERE truck.id = ?\n ORDER BY DISTANCE(store.loc,truck.loc)/1000 ASC;\n278SQL Functions\nDWITHIN()\nDWITHIN() — Returns true or false depending whether two geospatial entities are within a specified\ndistance of each other.\nSyntax\nDWITHIN( polygon-or-point , polygon-or-point , distance )\nDescription\nThe DWITHIN() function determines if two geospatial values are within the specified distance of each\nother. 
The values can be two points (GEOGRAPHY_POINT) or a point and a polygon (GEOGRAPHY).\nThe maximum distance is specified as a numeric value measured in meters. If the distance between the\ntwo geospatial values is less than or equal to the specified distance, the function returns true. If not, it\nreturns false.\nThe DWITHIN() function accepts multiple datatypes for its first two arguments, but there are constraints\non which combination of datatypes are allowed. For example, the two arguments cannot both be of type\nGEOGRAPHY. Consequently, the VoltDB planner must know the datatype of the arguments when the\nstatement is compiled. So using generic, untyped placeholders for these arguments is not valid. This means\nyou cannot use syntax such as DWITHIN(?,?,?) in a stored procedure. However, you can use placeholders\nas long as they are cast to specific types. For example:\nDWITHIN(POINTFROMTEXT(?),POLYGONFROMTEXT(?), ?)\nExamples\nThe following example finds all the cities within five kilometers of a given user, by evaluating the distance\nbetween the GEOGRAPHY_POINT column user.loc and the GEOGRAPHY column city.boundary.\nSELECT user.name, city.name, DISTANCE(user.loc, city.boundary)\n FROM user, city WHERE user.id=?\n AND DWITHIN(user.loc, city.boundary, 5000) \n ORDER BY DISTANCE(user.loc, city.boundary) ASC;\nThe next is a more generalized example, where the query returns all delivery trucks within a specified\ndistance of a store, where both the distance and the store ID are parameterized and can be input at runtime.\nSELECT store.address, truck.license_number,\n DISTANCE(store.loc, truck.loc)/1000 AS distance_in_km\n FROM store, truck \n WHERE DWITHIN(store.loc, truck.loc, ?) and store.id=?\n ORDER BY DISTANCE(store.loc,truck.loc)/1000 ASC;\n279SQL Functions\nEXP()\nEXP() — Returns the exponential of the specified numeric expression.\nSyntax\nEXP( numeric-expression )\nDescription\nThe EXP() function returns the exponential of the specified numeric expression. In other words, EXP(x)\nis the equivalent of the mathematical expression ex.\nExample\nThe following example uses the EXP function to calculate the potential population of certain species of\nanimal projecting out ten years.\nSELECT species, population AS current,\n (population/2.0) * EXP(10*(gestation/365.0)*litter) AS future\n FROM animals \n WHERE species = 'rabbit'\n ORDER BY population;\n280SQL Functions\nEXTRACT()\nEXTRACT() — Returns the value of a selected portion of a timestamp.\nSyntax\nEXTRACT( selection-keyword FROM timestamp-expression )\nEXTRACT( selection-keyword, timestamp-expression )\nDescription\nThe EXTRACT() function returns the value of the selected portion of a timestamp. Table C.1, “Selectable\nValues for the EXTRACT Function” lists the supported keywords, the datatype of the value returned by\nthe function, and a description of its contents.\nTable C.1. 
Selectable Values for the EXTRACT Function\nKeyword Datatype Description\nYEAR INTEGER The year as a numeric value.\nQUARTER TINYINT The quarter of the year as a single numeric value between 1\nand 4.\nMONTH TINYINT The month of the year as a numeric value between 1 and 12.\nDAY TINYINT The day of the month as a numeric value between 1 and 31.\nDAY_OF_MONTH TINYINT The day of the month as a numeric value between 1 and 31\n(same as DAY).\nDAY_OF_WEEK TINYINT The day of the week as a numeric value between 1 and 7, start-\ning with Sunday.\nDAY_OF_YEAR SMALLINT The day of the year as a numeric value between 1 and 366.\nWEEK TINYINT The week of the year as a numeric value between 1 and 52.\nWEEK_OF_YEAR TINYINT The week of the year as a numeric value between 1 and 52\n(same as WEEK).\nWEEKDAY TINYINT The day of the week as a numeric value between 0 and 6, start-\ning with Monday.\nHOUR TINYINT The hour of the day as a numeric value between 0 and 23.\nMINUTE TINYINT The minute of the hour as a numeric value between 0 and 59.\nSECOND DECIMAL The whole and fractional part of the number of seconds within\nthe minute as a floating point value between 0 and 60.\nThe timestamp expression is interpreted as a VoltDB timestamp; That is, time measured in microseconds.\nExample\nThe following example lists all the contacts by name and birthday, listing the birthday as three separate\nfields for month, day, and year.\nSELECT Last_name, first_name, EXTRACT(MONTH FROM dateofbirth),\n281SQL Functions\n EXTRACT(DAY FROM dateofbirth), EXTRACT(YEAR FROM dateofbirth)\n FROM contact_list\n ORDER BY last_name, first_name;\n282SQL Functions\nFIELD()\nFIELD() — Extracts a field value from a JSON-encoded string column.\nSyntax\nFIELD( column, field-name-path )\nDescription\nThe FIELD() function extracts a field value from a JSON-encoded string. For example, assume the VAR-\nCHAR column Profile contains the following JSON string:\n{\"first\":\"Charles\",\"last\":\"Dickens\",\"birth\":1812,\n \"description\":{\"genre\":\"fiction\",\n \"period\":\"Victorian\",\n \"output\":\"prolific\",\n \"children\":[\"Charles\",\"Mary\",\"Kate\",\"Walter\",\"Francis\",\n \"Alfred\",\"Sydney\",\"Henry\",\"Dora\",\"Edward\"]\n }\n}\nIt is possible to extract individual field values using the FIELD() function, as in the following SELECT\nstatement:\nSELECT FIELD(profile,'first') AS firstname, \n FIELD(profile,'last') AS lastname FROM Authors;\nIt is also possible to find records based on individual JSON fields by using the FIELD() function in the\nWHERE clause. For example, the following query retrieves all records from the Authors table where the\nJSON field birth is 1812. Note that the FIELD() function always returns a string, even if the JSON type is\nnumeric. The comparison must match the string datatype, so the constant '1812' is in quotation marks:\nSELECT * FROM Authors WHERE FIELD(profile,'birth') = '1812';\nThe second argument to the FIELD() function can be a simple field name, as in the previous examples.\nIn which case the function returns a first-level field matching the specified name. Alternately, you can\nspecify a path representing a hierarchy of names separated by periods. For example, you can specify the\ngenre element of the description field by specifying \"description.genre\" as the second argument, like so\nSELECT * FROM Authors WHERE \n FIELD(profile,'description.genre') = 'fiction';\nYou can also use array notation — with square brackets and an integer value — to identify array elements\nby their position. 
So, for example, the function can return "Kate", the third child, by using the path specifier "description.children[2]", where "[2]" identifies the third array element because JSON arrays are zero-based.
Two important points to note concerning input to the FIELD() function:
•If the requested field name does not exist, the function returns a null value.
•The first argument to the FIELD() function must be a valid JSON-encoded string. However, the content is not evaluated until the function is invoked at runtime. Therefore, it is the responsibility of the database application to ensure the validity of the content. If the FIELD() function encounters invalid content, the query will fail.
Example
The following example uses the FIELD() function to both return specific JSON fields within a VARCHAR column and filter the results based on the value of a third JSON field:
SELECT product_name, sku, 
 FIELD(specification,'color') AS color,
 FIELD(specification,'weight') AS weight FROM Inventory 
 WHERE FIELD(specification, 'category') = 'housewares' 
 ORDER BY product_name, sku;
FLOOR()
FLOOR() — Returns the largest integer value less than or equal to a numeric expression.
Syntax
FLOOR( numeric-expression )
Description
The FLOOR() function returns the largest integer less than or equal to the specified numeric expression. In other words, the FLOOR() function truncates fractional numeric values. For example:
FLOOR(3.1415) = 3
FLOOR(2.0) = 2
FLOOR(-5.32) = -6
Example
The following example uses the FLOOR function to calculate the whole number of stocks owned by a specific shareholder.
SELECT customer, company, 
 FLOOR(num_of_stocks) AS stocks_available_for_sale
 FROM shareholders WHERE customer_id = ?
 ORDER BY company;
FORMAT_CURRENCY()
FORMAT_CURRENCY() — Converts a DECIMAL to a text string as a monetary value.
Syntax
FORMAT_CURRENCY( decimal-value , rounding-position )
Description
The FORMAT_CURRENCY() function converts a DECIMAL value to its string representation, rounding to the specified position. The resulting string is formatted with commas separating every three digits of the whole portion of the number (indicating thousands, millions, and so on) and a decimal point before the fractional portion, as needed.
The rounding-position argument must be an integer between 12 and -25 and indicates the place to which the numeric value should be rounded. Positive values indicate a decimal place; for example, 2 means round to 2 decimal places. Negative values indicate rounding to a whole number position; for example, -2 indicates the number should be rounded to the nearest hundred. A zero indicates that the value should be rounded to the nearest whole number.
Rounding is performed using "banker's rounding", in that any fractional half is rounded to the nearest even number. So, for example, if the rounding-position is 2, the value 22.225 is rounded to 22.22, but the value 33.335 is rounded to 33.34. 
The following list demonstrates some sample results.
FORMAT_CURRENCY( .123456789, 4) = 0.1235
FORMAT_CURRENCY( 123456789.123, 2 ) = 123,456,789.12
FORMAT_CURRENCY( 123456789.123, 0 ) = 123,456,789
FORMAT_CURRENCY( 123456789.123, -2 ) = 123,456,800
FORMAT_CURRENCY( 123456789.123, -6 ) = 123,000,000
FORMAT_CURRENCY( 123456789.123, 6 ) = 123,456,789.123000
Example
The following example uses the FORMAT_CURRENCY() function to return a DECIMAL column as a string representation of its monetary value, rounding to two decimal places and appending the appropriate currency symbol from a VARCHAR column.
SELECT country, 
 currency_symbol || format_currency(budget,2) AS annual_budget 
 FROM world_economy ORDER BY country;
FORMAT_TIMESTAMP()
FORMAT_TIMESTAMP() — Takes a timestamp as input and returns a formatted string in the specified timezone.
Syntax
FORMAT_TIMESTAMP( timestamp-value , timezone-or-offset )
Description
The FORMAT_TIMESTAMP() function returns the timestamp input value as a formatted string in the specified timezone. VoltDB stores timestamps as a time value in Greenwich Mean Time (GMT). The FORMAT_TIMESTAMP() function lets you return that value as a date and time string in a different timezone.
You can specify the timezone either as an offset from GMT or as a region as described by the Internet Assigned Numbers Authority (IANA) time zone database (tz). You can find a list of the IANA time zones on the Wikipedia tz page.
Time zone names are case sensitive. Time offsets are specified as a time value preceded by a plus or minus sign. The offset time value can be specified in hours (one or two digits); hours and minutes (four digits); or hours, minutes, and seconds (six digits). You can optionally use colons to separate the time units. For example, all of the following offsets specify the same amount of time — a positive offset of five hours:
+5 
+05 
+0500 
+05:00 
+050000 
+05:00:00
Examples
The following example uses the FORMAT_TIMESTAMP() function to return a timestamp column as an Eastern United States timezone date and time:
SELECT FORMAT_TIMESTAMP(e_time,'America/New_York'), e_log FROM event;
The next example uses an offset to go back 15 minutes:
SELECT FORMAT_TIMESTAMP(alarm.expires,'-00:15') AS warning FROM alarm;
The last example uses the FORMAT_TIMESTAMP() function to return timestamp values in the customer's chosen timezone.
CREATE PROCEDURE get_reservation AS
 SELECT r.id, 
 FORMAT_TIMESTAMP(r.departure,c.timezone) AS departure,
 FORMAT_TIMESTAMP(r.arrival,c.timezone) AS arrival
 FROM reservation AS r, customer AS c
 WHERE r.id = ?;
FROM_UNIXTIME()
FROM_UNIXTIME() — Converts a UNIX time value to a VoltDB timestamp.
Syntax
FROM_UNIXTIME( integer-expression )
Description
The FROM_UNIXTIME() function converts an integer expression to a VoltDB timestamp, interpreting the integer value as a POSIX time value; that is, the number of seconds since the epoch (00:00.00 on January 1, 1970 Coordinated Universal Time). 
This function is a synonym for TO_TIMESTAMP(second, integer-expression).
Example
The following example inserts a record using FROM_UNIXTIME to convert the first argument, a POSIX time value, into a VoltDB timestamp:
INSERT INTO event (e_when, e_what, e_where) 
 VALUES (FROM_UNIXTIME(?),?,?);
HEX()
HEX() — Returns the hexadecimal representation of a BIGINT value as a string.
Syntax
HEX( value )
Description
The HEX() function returns the hexadecimal representation of a BIGINT value as a string. The function will return the shortest valid string representation, truncating any preceding zeros (except in the case of the value zero, which is returned as the string "0").
Examples
The following example uses the HEX and BITAND functions to return the hexadecimal representations of two BIGINT values and their binary intersection.
$ sqlcmd
1> create table bits (a bigint, b bigint);
2> insert into bits values(555,999);
3> select hex(a) as int1, hex(b) as int2, 
4> hex(bitand(a,b)) as intersection from bits;
INT1 INT2 INTERSECTION 
-------- --------- ------------- 
22B 3E7 223 
HOUR()
HOUR() — Returns the hour of the day as an integer value.
Syntax
HOUR( timestamp-value )
Description
The HOUR() function returns an integer value between 0 and 23 representing the hour of the day in a timestamp value. This function produces the same result as using the HOUR keyword with the EXTRACT() function.
Examples
The following example uses the HOUR(), MINUTE(), and SECOND() functions to return the time portion of a TIMESTAMP value in a formatted string.
SELECT eventname,
 CAST(HOUR(starttime) AS VARCHAR) || ' hours, ' ||
 CAST(MINUTE(starttime) AS VARCHAR) || ' minutes, and ' ||
 CAST(SECOND(starttime) AS VARCHAR) || ' seconds.' 
 AS timestring FROM event;
INET6_ATON()
INET6_ATON() — Converts an IPv6 internet address from a string to a VARBINARY(16) value
Syntax
INET6_ATON( {string} )
Description
The INET6_ATON() function converts a VARCHAR value representing an IPv6 internet address in hexadecimal notation to a 16-byte VARBINARY value in network byte order. The VARCHAR value must consist of up to eight hexadecimal values separated by colons, such as "2600:141b:4:290::2add", or a null value. Note that in IPv6 addresses, two colons together ("::") can and should be used in place of two or more consecutive zero values in the sequence.
You can use the INET6_NTOA() function to reverse the conversion or you can use the INET_ATON and INET_NTOA functions to perform similar conversions on IPv4 addresses.
Example
The following example converts a string representation of an IPv6 internet address to a VARBINARY(16) value before storing it in the Address table.
INSERT INTO Address (v6ip, owner, date) VALUES (INET6_ATON(?),?,?);
INET6_NTOA()
INET6_NTOA() — Converts an IPv6 internet address from a VARBINARY(16) value to a string
Syntax
INET6_NTOA( {binary-value} )
Description
The INET6_NTOA() function converts a 16-byte VARBINARY value representing an IPv6 internet address to its corresponding string representation as a VARCHAR value. 
Or, if the argument is null, the function returns a null VARCHAR as the result.
You can use the INET6_ATON() function to perform the reverse operation, from a VARCHAR IPv6 address to a VARBINARY(16) value, or you can use the INET_ATON and INET_NTOA functions to perform similar operations on IPv4 addresses.
Examples
The following example converts a VARBINARY(16) representation of an IPv6 internet address into its string representation for output.
SELECT INET6_NTOA(v6ip), owner FROM Address
 WHERE owner=? ORDER BY v6ip;
INET_ATON()
INET_ATON() — Converts an IPv4 internet address from a string to a numeric value
Syntax
INET_ATON( {string} )
Description
The INET_ATON() function converts a VARCHAR value representing an IPv4 internet address in dot notation to a single BIGINT value. The VARCHAR value must consist of four integer values separated by dots, such as "104.112.152.119", or a null value. The string representations of the integer values must be between 0 and 255 and cannot contain any spaces or leading zeros. For example, string values of "0" and "12" are valid but "012" is not.
You can use the INET_NTOA() function to reverse the conversion or you can use the INET6_ATON and INET6_NTOA functions to perform similar conversions on IPv6 addresses.
Example
The following example converts a string representation of an internet address to a BIGINT value before storing it in the Address table.
INSERT INTO Address (ip, owner, date) VALUES (INET_ATON(?),?,?);
INET_NTOA()
INET_NTOA() — Converts an IPv4 internet address from a numeric value to a string
Syntax
INET_NTOA( {numeric-value} )
Description
The INET_NTOA() function converts a BIGINT value representing an IPv4 internet address to its corresponding dot representation as a VARCHAR value. Or, if the argument is null, the function returns a null VARCHAR as the result.
You can use the INET_ATON() function to perform the reverse operation, from a VARCHAR IPv4 address in dot notation to a BIGINT value, or you can use the INET6_ATON and INET6_NTOA functions to perform similar operations on IPv6 addresses.
Examples
The following example converts a BIGINT representation of an internet address into its string representation for output.
SELECT INET_NTOA(ip), owner FROM Address
 WHERE owner=? ORDER BY ip;
ISINVALIDREASON()
ISINVALIDREASON() — Explains why a GEOGRAPHY polygon is invalid
Syntax
ISINVALIDREASON( polygon )
Description
The ISINVALIDREASON() function returns a text string explaining if the specified GEOGRAPHY value is valid or not and, if not, why not. The argument to the ISINVALIDREASON() function must be a GEOGRAPHY value describing a polygon. This function is especially useful when validating geospatial data.
Example
The following example uses the ISVALID() and ISINVALIDREASON() functions to report on any invalid polygons in the border column of the country table.
SELECT country_name, ISINVALIDREASON(border) 
 FROM Country WHERE NOT ISVALID(border);
ISVALID()
ISVALID() — Determines if the specified GEOGRAPHY value is a valid polygon.
Syntax
ISVALID( polygon )
Description
The ISVALID() function returns true or false depending on whether the specified GEOGRAPHY value is a valid polygon or not. Polygons must follow rules defined by the Open Geospatial Consortium (OGC) standard for Well Known Text (WKT). 
Specifically:\n•A GEOGRAPHY polygon consists of one or more rings, where a ring is a closed boundary described\nby a sequence of vertices and the lines, or edges, between those vertices.\n•The first ring must be the outer ring and the vertices must be listed in counter clockwise order.\n•All subsequent rings represent \"holes\" in the outer ring. The inner rings must be wholly contained within\nthe outer ring and their vertices must be listed in clockwise order.\n•Rings cannot intersect or have adjacent edges.\n•The edges of an individual ring cannot cross (for example, a figure \"8\" is invalid).\n•For each ring, the first vertex is listed twice: as both the first and last vertex.\nIf the specified GEOGRAPHY value is a valid polygon, the function returns true. If not, it returns false.\nTo maximize performance, VoltDB does not validate the GEOGRAPHY values when they are inserted.\nHowever, if you are not sure the WKT strings are valid, you can use ISVALID() to validate the resulting\nGEOGRAPHY values before inserting them or after they are inserted into the database.\nExamples\nThe first example shows an UPDATE statement that uses the ISVALID() function to remove the contents\nof a GEOGRAPHY column (by setting it to NULL), if the current contents are invalid.\nUPDATE REGION SET border = NULL WHERE NOT ISVALID(border);\nThe next example shows part of a stored procedure that uses ISVALID() to conditionally set the value of a\ncolumn, mustbevalid , that is defined as NOT NULL. By setting the column valid to NULL, the procedure\nensures that the INSERT statement fails and the stored procedure rolls back if the WKT border column\nis invalid.\npublic class ValidateBorders extends VoltProcedure {\n public final SQLStmt insertrec = new SQLStmt(\n \"INSERT INTO REGION (name, border, mustbevalid)\" +\n \" SELECT name, border,\" +\n \" CASE WHEN ISVALID(border) THEN 1 ELSE NULL END\" +\n \" FROM anothertable WHERE name = ? LIMIT 1;\"\n296SQL Functions\n );\n public VoltTable[] run( String name)\n throws VoltAbortException\n { voltQueueSQL( insertrec, name); return voltExecuteSQL(); }\n }\n297SQL Functions\nIS_VALID_TIMESTAMP()\nIS_VALID_TIMESTAMP() — Identifies whether a given value is a valid timestamp.\nSyntax\nIS_VALID_TIMESTAMP( value )\nDescription\nThe IS_VALID_TIMESTAMP() function returns either true or false depending on whether the specified\nvalue is a valid timestamp or not. The minimum valid timestamp equates to the beginning of the year 1583.\nThat is, the first microsecond of that year. The maximum valid timestamp equates to the last microsecond\nof the year 9999.\nBecause TIMESTAMP values are stored and can be entered as an eight byte integer, it is possible to\nenter a numeric value that is not actually a valid timestamp. The functions MIN_VALID_TIMESTAMP()\nand MAX_VALID_TIMESTAMP() give you access to the valid minimum and maximum values. 
The\nfunction IS_VALID_TIMESTAMP() compares a TIMESTAMP value and returns true or false depending\non whether the value falls within the valid range or not.\nExample\nThe following example uses the TIMESTAMP functions to return an informational string for any event\nrecords that contain an invalid timestamp value.\nSELECT 'TIMESTAMP must be between ' || \n CAST(MIN_VALID_TIMESTAMP() as VARCHAR) ||\n ' and ' ||\n CAST(MAX_VALID_TIMESTAMP() as VARCHAR),\n log_time,\n log_event\n FROM events WHERE NOT IS_VALID_TIMESTAMP(log_time);\n298SQL Functions\nLATITUDE()\nLATITUDE() — Returns the latitude of a GEOGRAPHY_POINT value.\nSyntax\nLATITUDE( point )\nDescription\nThe LATITUDE() function returns the latitude, as a floating point value, from a GEOGRAPHY_POINT\nexpression.\nExample\nThe following example returns all ships that are located in the northern hemisphere by examining the\nlatitude of their current location.\nSELECT ship.number, ship.country FROM ship\n WHERE LATITUDE(ship.location) > 0;\n299SQL Functions\nLEFT()\nLEFT() — Returns a substring from the beginning of a string.\nSyntax\nLEFT( string-expression , numeric-expression )\nDescription\nThe LEFT() function returns the first n characters from a string expression, where n is the second argument\nto the function.\nExample\nThe following example uses the LEFT function to return an abbreviation (the first three characters) of the\nproduct category as part of the SELECT expression.\nSELECT LEFT(category,3), product_name, price FROM product_list\n ORDER BY category, product_name;\n300SQL Functions\nLN(), LOG()\nLN(), LOG() — Returns the natural logarithm of a numeric value.\nSyntax\nLN( numeric-value )\nLOG( numeric-value )\nDescription\nThe LN() function returns the natural logarithm of the specified input value. The log is returned as a\nfloating point (FLOAT) value. LN() and LOG() are synonyms and perform the same function.\nExample\nThe following example uses the LN() function to calculate the rate of population growth from census data.\nSELECT city, current_population,\n ( ( LN(current_population) - LN(base_population) )\n / (current_year - base_year) \n ) * 100.0 AS percent_growth\n FROM census ORDER BY city;\n301SQL Functions\nLOG10()\nLOG10() — Returns the base-10 logarithm of a numeric value.\nSyntax\nLOG10( numeric-value )\nDescription\nThe LOG10() function returns the base-10, or decimal, logarithm of the specified input value. The log is\nreturned as a floating point (FLOAT) value.\nExample\nThe following example uses the LOG10() function to calculate the magnitude of difference between two\nvalues.\nSELECT LOG10(YR2.profit/YR1.profit) AS Magnitude_of_growth\n FROM account AS YR1, account AS YR2\n WHERE YR1.fiscalyear=? 
AND YR2.fiscalyear=?;\n302SQL Functions\nLONGITUDE()\nLONGITUDE() — Returns the longitude of a GEOGRAPHY_POINT value.\nSyntax\nLONGITUDE( point )\nDescription\nThe LONGITUDE() function returns the longitude, as a floating point value, from a GEOGRA-\nPHY_POINT expression.\nExample\nThe following example returns all ships that are located in the western hemisphere by examining the\nlongitude of their current location.\nSELECT ship.number, ship.country FROM ship\n WHERE LONGITUDE(ship.location) < 0\n AND LONGITUDE(ship.location) > -180;\n303SQL Functions\nLOWER()\nLOWER() — Returns a string converted to all lowercase characters.\nSyntax\nLOWER( string-expression )\nDescription\nThe LOWER() function returns a copy of the input string converted to all lowercase characters.\nExample\nThe following example uses the LOWER function to perform a case-insensitive search of a VARCHAR\nfield.\nSELECT product_name, product_id FROM product_list\n WHERE LOWER(product_name) LIKE 'acme%'\n ORDER BY product_name, product_id;\n304SQL Functions\nMAKEVALIDPOLYGON()\nMAKEVALIDPOLYGON() — Attempts to return a valid GEOGRAPHY value from a GEOGRAPHY\npolygon\nSyntax\nMAKEVALIDPOLYGON( polygon )\nDescription\nA common problem when generating polygons from Well Known Text (WKT) is listing the rings within\nthe polygon in the correct orientation. The vertices of the outer ring must be listed counter-clockwise,\nwhile the vertices of the inner rings must be listed in a clockwise direction.\nIf you use the POLYGONFROMTEXT() function to create GEOGRAPHY values from WKT strings, the\nrings can be individually correct but, if they are not oriented properly, the resulting polygon will not match\nyour intended geographic region. As a consequence, using the polygon in VoltDB geospatial functions,\nsuch as CONTAINS() and DISTANCE(), will produce unexpected answers. You can use ISVALID() to\ntest if the polygon is valid, but ISVALID() simply tests correctness, it does not fix simple errors, such\nas ring orientation.\nMAKEVALIDPOLYGON() both tests the polygon and corrects any errors in ring orientation. The argu-\nment to the MAKEVALIDPOLYGON() function is a GEOGRAPHY object representing a polygon. The\noutput is another GEOGRAPHY object, identical to the input if the input is valid, or with the orientation\nof the rings corrected if they are listed in the wrong direction. If there are any other issues with the polygon\nthat cannot be corrected (such as an incomplete ring or crossed lines), the function throws an error.\nExample\nThe following example uses the MAKEVALIDPOLYGON() function to correct any potential orientation\nissues with the location column in the country table.\nUPDATE country SET boundaries = MAKEVALIDPOLYGON(boundaries);\n305SQL Functions\nMAX()\nMAX() — Returns the maximum value from a range of column values.\nSyntax\nMAX( column-expression )\nDescription\nThe MAX() function returns the highest value from a range of column values. 
The range of values depends\non the constraints defined by the WHERE and GROUP BY clauses.\nExample\nThe following example returns the highest price in the product list.\nSELECT MAX(price) FROM product_list;\nThe next example returns the highest price for each product category.\nSELECT category, MAX(price) FROM product_list\n GROUP BY category\n ORDER BY category;\n306SQL Functions\nMAX_VALID_TIMESTAMP()\nMAX_VALID_TIMESTAMP() — Returns the maximum valid timestamp.\nSyntax\nMAX_VALID_TIMESTAMP()\nMAX_VALID_TIMESTAMP\nDescription\nThe MAX_VALID_TIMESTAMP() function returns the maximum valid value for the VoltDB TIMES-\nTAMP datatype. The minimum valid timestamp equates to the beginning of the year 1583. That is, the first\nmicrosecond of that year. The maximum valid timestamp equates to the last microsecond of the year 9999.\nBecause TIMESTAMP values are stored and can be entered as an eight byte integer, it is possible to\nenter a numeric value that is not actually a valid timestamp. The functions MIN_VALID_TIMESTAMP()\nand MAX_VALID_TIMESTAMP() give you access to the valid minimum and maximum values. The\nfunction IS_VALID_TIMESTAMP() compares a TIMESTAMP value and returns true or false depending\non whether the value falls within the valid range or not.\nSince there are no arguments to the function, the parentheses following the function name are optional.\nExample\nThe following example uses the TIMESTAMP functions to return an informational string for any event\nrecords that contain an invalid timestamp value.\nSELECT 'TIMESTAMP must be between ' || \n CAST(MIN_VALID_TIMESTAMP() as VARCHAR) ||\n ' and ' ||\n CAST(MAX_VALID_TIMESTAMP() as VARCHAR),\n log_time,\n log_event\n FROM events WHERE NOT IS_VALID_TIMESTAMP(log_time);\n307SQL Functions\nMIGRATING()\nMIGRATING() — Identifies table rows currently migrating to an export target.\nSyntax\nMIGRATING()\nMIGRATING\nDescription\nThe MIGRATING function identifies rows of a table that are currently being migrated to an export target.\nIf a table declaration includes the MIGRATE TO TARGET clause, when the migration is triggered (either\nby the USING TTL clause or an explicit MIGRATE statement), the row's contents are queued for export\nto the specified export target. Until the export is completed and acknowledged, the row remains in the\ndatabase but marked for deletion. Once the export is acknowledged, the row is deleted. The MIGRATING\nfunction lets you identify rows that have expired but not completed the export action.\nThe MIGRATING function can only be used in the WHERE clause under the following conditions:\n•The selection expression selects from only one table.\n•The table in question is declared with MIGRATE TO TARGET.\nExamples\nThe following example selects records for a particular customer where the records are currently being\nmigrated to an export target.\nSELECT * FROM Requests WHERE customer=? AND MIGRATING;\nThe next example performs the opposite operation — selecting only those records that are not currently\nbeing migrated.\nSELECT * FROM Requests WHERE customer=? AND NOT MIGRATING;\n308SQL Functions\nMIN()\nMIN() — Returns the minimum value from a range of column values.\nSyntax\nMIN( column-expression )\nDescription\nThe MIN() function returns the lowest value from a range of column values. 
The range of values depends\non the constraints defined by the WHERE and GROUP BY clauses.\nExample\nThe following example returns the lowest price in the product list.\nSELECT MIN(price) FROM product_list;\nThe next example returns the lowest price for each product category.\nSELECT category, MIN(price) FROM product_list\n GROUP BY category\n ORDER BY category;\n309SQL Functions\nMIN_VALID_TIMESTAMP()\nMIN_VALID_TIMESTAMP() — Returns the minimum valid timestamp.\nSyntax\nMIN_VALID_TIMESTAMP()\nMIN_VALID_TIMESTAMP\nDescription\nThe MIN_VALID_TIMESTAMP() function returns the minimum valid value for the VoltDB TIMES-\nTAMP datatype. The minimum valid timestamp equates to the beginning of the year 1583. That is, the first\nmicrosecond of that year. The maximum valid timestamp equates to the last microsecond of the year 9999.\nBecause TIMESTAMP values are stored and can be entered as an eight byte integer, it is possible to\nenter a numeric value that is not actually a valid timestamp. The functions MIN_VALID_TIMESTAMP()\nand MAX_VALID_TIMESTAMP() give you access to the valid minimum and maximum values. The\nfunction IS_VALID_TIMESTAMP() compares a TIMESTAMP value and returns true or false depending\non whether the value falls within the valid range or not.\nSince there are no arguments to the function, the parentheses following the function name are optional.\nExample\nThe following example uses the TIMESTAMP functions to return an informational string for any event\nrecords that contain an invalid timestamp value.\nSELECT 'TIMESTAMP must be between ' || \n CAST(MIN_VALID_TIMESTAMP() as VARCHAR) ||\n ' and ' ||\n CAST(MAX_VALID_TIMESTAMP() as VARCHAR),\n log_time,\n log_event\n FROM events WHERE NOT IS_VALID_TIMESTAMP(log_time);\n310SQL Functions\nMINUTE()\nMINUTE() — Returns the minute of the hour as an integer value.\nSyntax\nMINUTE( timestamp-value )\nDescription\nThe MINUTE() function returns an integer value between 0 and 59 representing the minute of the hour\nin a timestamp value. This function produces the same result as using the MINUTE keyword with the\nEXTRACT() function.\nExamples\nThe following example uses the HOUR(), MINUTE(), and SECOND() functions to return the time portion\nof a TIMESTAMP value in a formatted string.\nSELECT eventname,\n CAST(HOUR(starttime) AS VARCHAR) || ' hours, ' ||\n CAST(MINUTE(starttime) AS VARCHAR) || ' minutes, and ' ||\n CAST(SECOND(starttime) AS VARCHAR) || ' seconds.' \n AS timestring FROM event;\n311SQL Functions\nMOD()\nMOD() — Returns the result of a modulo operation.\nSyntax\nMOD( dividend, divisor )\nDescription\nThe MOD() function performs a modulo operation. That is, it divides one value, the dividend, by another\nvalue, the divisor, and returns the remainder of the division operation as a new integer value. The sign of\nthe result, whether positive or negative, matches the sign of the first argument, the dividend.\nBoth the dividend and the divisor must either both be integer values or both be DECIMAL values and the\ndivisor must not be zero. Use of mixed input datatypes or a divisor of zero will result in a runtime error.\nWhen using DECIMAL input values, the result is the integer portion of the remainder. 
In other words, the decimal result is truncated and returned as an integer using the following formula:
MOD(a,b) = a - INT(a/b) * b
Example
The following example uses the HOUR() and MOD() functions to return the hour of a timestamp in 12-hour format.
SELECT event, 
 MOD(HOUR(eventtime)+11, 12)+1,
 CASE WHEN HOUR(eventtime)/12 < 1
 THEN 'AM'
 ELSE 'PM'
 END
 FROM schedule ORDER BY 3, 2;
MONTH()
MONTH() — Returns the month of the year as an integer value.
Syntax
MONTH( timestamp-value )
Description
The MONTH() function returns an integer value between 1 and 12 representing the timestamp's month of the year. The MONTH() function produces the same result as using the MONTH keyword with the EXTRACT() function.
Examples
The following example uses the DAY(), MONTH(), and YEAR() functions to return a timestamp column as a formatted date string.
SELECT CAST( MONTH(starttime) AS VARCHAR) || '/' || 
 CAST( DAY(starttime) AS VARCHAR) || '/' || 
 CAST( YEAR(starttime) AS VARCHAR), title, description 
 FROM event ORDER BY starttime;
NOW()
NOW() — Returns the current time as a timestamp value.
Syntax
NOW()
NOW
Description
The NOW() function returns the current time as a VoltDB timestamp. The value of the timestamp is determined when the query or stored procedure is invoked. Since there are no arguments to the function, the parentheses following the function name are optional.
Several important aspects of how the NOW() function operates are:
•The value returned is guaranteed to be identical for all partitions that execute the query.
•The value returned is measured in milliseconds then padded to create a timestamp value in microseconds.
•During command logging, the returned value is stored as part of the log, so when the command log is replayed, the same value is used during the replay of the query.
•Similarly, for database replication (DR) the value returned is passed and reused by the replica database when replaying the query.
•You can specify NOW() as a default value in the CREATE TABLE statement when defining the schema of a VoltDB database.
•The NOW() function cannot be used in the CREATE INDEX or CREATE VIEW statements.
The NOW() and CURRENT_TIMESTAMP() functions are synonyms and perform an identical function.
Example
The following example uses NOW() in the WHERE clause to delete alert events that occurred in the past:
DELETE FROM Alert_event WHERE event_timestamp < NOW;
NUMINTERIORRINGS()
NUMINTERIORRINGS() — Returns the number of interior rings within a polygon GEOGRAPHY value.
Syntax
NUMINTERIORRINGS( polygon )
Description
The NUMINTERIORRINGS() function returns the number of interior rings within a polygon GEOGRAPHY value. Polygon GEOGRAPHY values can contain multiple polygons: one and only one outer polygon and one or more optional inner polygons that define "holes" in the outer polygon. The NUMINTERIORRINGS() function counts the number of inner polygons and returns the result as an integer value.
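For instance, the following sketch shows the behavior with a self-contained value (the region table and its boundary column are hypothetical and not part of the surrounding examples). The WKT string describes a square outer ring with a single square hole, so NUMINTERIORRINGS() should return 1 for the resulting GEOGRAPHY value:
-- Outer ring listed counterclockwise, inner ring (the hole) listed clockwise
INSERT INTO region (boundary) VALUES
 (POLYGONFROMTEXT('POLYGON((0 0, 10 0, 10 10, 0 10, 0 0), (2 2, 2 4, 4 4, 4 2, 2 2))'));
-- Returns 1, the number of interior rings
SELECT NUMINTERIORRINGS(boundary) FROM region;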
Example
The following example lists the countries of the world based on the number of interior polygons within the outline GEOGRAPHY column.
SELECT NUMINTERIORRINGS(outline), name, capital FROM country
 ORDER BY NUMINTERIORRINGS(outline);
NUMPOINTS()
NUMPOINTS() — Returns the number of points within a polygon GEOGRAPHY value.
Syntax
NUMPOINTS( polygon )
Description
The NUMPOINTS() function returns the total number of points that comprise a polygon GEOGRAPHY value. The number of points includes the points from both the outer polygon and any inner polygons. It also includes all of the points defining the polygon, which means the starting point for each polygon is counted twice — once as the starting point and once as the ending point — because this is required in the WKT representation of a polygon.
Example
The following example lists the countries of the world based on the number of points in their outlines.
SELECT NUMPOINTS(outline), name, capital FROM country
 ORDER BY NUMPOINTS(outline);
OCTET_LENGTH()
OCTET_LENGTH() — Returns the number of bytes in a string.
Syntax
OCTET_LENGTH( string-expression )
Description
The OCTET_LENGTH() function returns the number of bytes of data in a string.
Note that the number of bytes required to store a string and the actual characters that make up the string can differ. To count the number of characters in the string, use the CHAR_LENGTH() function.
Example
The following example returns the string in the column LastName as well as the number of characters and length in bytes of that string.
SELECT LastName, CHAR_LENGTH(LastName), OCTET_LENGTH(LastName)
 FROM Customers ORDER BY LastName, FirstName;
OVERLAY()
OVERLAY() — Returns a string overwriting a portion of the original string with the specified replacement.
Syntax
OVERLAY( string PLACING replacement-string FROM position [FOR length] )
Description
The OVERLAY() function overwrites a portion of the original string with the replacement string and returns the result. 
The replacement starts at the specified position in the original string and either replaces\nthe characters one-for-one for the length of the replacement string or, if a FOR length is specified, replaces\nthe specified number of characters.\nFor example, if the original string is 12 characters in length, the replacement string is 3 characters in length\nand starts at position 4, and the FOR clause is left off, the resulting string consists of the first 3 characters\nof the original string, the replacement string, and the last 6 characters of the original string:\nOVERLAY('abcdefghijkl' PLACING 'XYZ' FROM 4) = 'abcXYZghijkl'\nIf the FOR clause is included specifying that the replacement string replaces 6 characters, the result is the\nfirst 3 characters of the original string, the replacement string, and the last 3 characters of the original string:\nOVERLAY('abcdefghijkl' PLACING 'XYZ' FROM 4 FOR 6) = 'abcXYZjkl'\nIf the combination of the starting position and the replacement length exceed the length of the original\nstring, the resulting output is extended as necessary to include all of the replacement string:\nOVERLAY('abcdef' PLACING 'XYZ' FROM 5) = 'abcdXYZ'\nIf the starting position is greater than the length of the original string, the replacement string is appended\nto the original string:\nOVERLAY('abcdef' PLACING 'XYZ' FROM 20) = 'abcdefXYZ'\nSimilarly, if the combination of the starting position and the FOR length is greater than the length of the\noriginal string, the replacement string simply overwrites the remainder of the original string:\nOVERLAY('abcdef' PLACING 'XYZ' FROM 2 FOR 20) = 'aXYZ'\nThe starting position and length must be specified as non-negative integers. The starting position must be\ngreater than zero and the length can be zero or greater.\nExample\nThe following example uses the OVERLAY function to redact part of a name.\nSELECT OVERLAY( fullname PLACING '****' FROM 2 \n FOR CHAR_LENGTH(fullname)-2\n ) FROM users ORDER BY fullname;\n318SQL Functions\nPI()\nPI() — Returns the value of the mathematical constant pi (π) as a FLOAT value.\nSyntax\nPI()\nPI\nDescription\nThe PI() function returns the value of the mathematical constant pi (π) as a double floating point (FLOAT)\nvalue. Since there are no arguments to the function, the parentheses following the function name are op-\ntional.\nExample\nThe following example uses the PI() function to return the surface area of a sphere.\nSELECT radius, 4*PI*POWER(radius, 2) FROM Sphere ORDER BY radius; \n319SQL Functions\nPOINTFROMTEXT()\nPOINTFROMTEXT() — Returns a GEOGRAPHY_POINT value from the corresponding WKT\nSyntax\nPOINTFROMTEXT( string )\nDescription\nThe POINTFROMTEXT() function generates a GEOGRAPHY_POINT value from a string containing\na well known text (WKT) representation of a geographic point. The WKT string must be in the form\n'POINT( longitude latitude )' where longitude and latitude are floating point values.\nif the argument is not a valid WKT representation of a point, the function generates an error.\nExample\nThe following example uses the POINTFROMTEXT() function to update a record containing a GEOG-\nRAPHY_POINT column using two floating point input values (representing longitude and latitude).\nUPDATE user SET location = \n POINTFROMTEXT( \n CONCAT('POINT(',CAST(? AS VARCHAR),' ',CAST(? 
AS VARCHAR),')') \n )\n WHERE id = ?;\n320SQL Functions\nPOLYGONFROMTEXT()\nPOLYGONFROMTEXT() — Returns a GEOGRAPHY value from the corresponding WKT\nSyntax\nPOLYGONFROMTEXT( text )\nDescription\nThe POLYGONFROMTEXT() function generates a GEOGRAPHY value from a string containing a well\nknown text (WKT) representation of a geographic polygon. The WKT string must be a valid representation\nof a polygon with only one outer polygon and, optionally, one or more inner polygons.\nif the argument is not a valid WKT representation of a polygon, the function generates an error.\nExample\nThe following example uses the POLYGONFROMTEXT() function to insert a record containing a GE-\nOGRAPHY column using a text input value containing the WKT representation of a geographic polygon.\nINSERT INTO city (name, state, boundary) VALUES(?, ?, POLYGONFROMTEXT(?));\n321SQL Functions\nPOSITION()\nPOSITION() — Returns the starting position of a substring in another string.\nSyntax\nPOSITION( substring-expression IN string-expression )\nDescription\nThe POSITION() function returns the starting position of a substring in another string. The position, if a\nmatch is found, is an integer number between one and the length of the string being searched. If no match\nis found, the function returns zero.\nExample\nThe following example selects all books where the title contains the word \"poodle\" and returns the book's\ntitle and the position of the substring \"poodle\" in the title.\nSELECT Title, POSITION('poodle' IN Title) FROM Books\n WHERE Title LIKE '%poodle%' ORDER BY Title;\n322SQL Functions\nPOWER()\nPOWER() — Returns the value of the first argument raised to the power of the second argument.\nSyntax\nPOWER( numeric-expression , numeric-expression )\nDescription\nThe POWER() function takes two numeric expressions and returns the value of the first raised to the power\nof the second. In other words, POWER(x,y) is the equivalent of the mathematical expression xy.\nExample\nThe following example uses the POWER function to return the surface area and volume of a cube.\nSELECT length, 6 * POWER(length,2) AS surface,\n POWER(length,3) AS volume FROM Cube\n ORDER BY length;\n323SQL Functions\nQUARTER()\nQUARTER() — Returns the quarter of the year as an integer value\nSyntax\nQUARTER( timestamp-value )\nDescription\nThe QUARTER() function returns an integer value between 1 and 4 representing the quarter of the year\nin a TIMESTAMP value. The QUARTER() function produces the same result as using the QUARTER\nkeyword with the EXTRACT() function.\nExamples\nThe following example uses the QUARTER() and YEAR() functions to group and sort records containing\na timestamp.\nSELECT year(starttime), quarter(starttime), \n count(*) as eventsperquarter\n FROM event\n GROUP BY year(starttime), quarter(starttime) \n ORDER BY year(starttime), quarter(starttime);\n324SQL Functions\nRADIANS()\nRADIANS() — Converts an angle in degrees to radians\nSyntax\nRADIANS( angle-in-degrees )\nDescription\nThe RADIANS() function converts a floating-point value representing an angle measured in degrees to\nthe equivalent angle measured in radians.\nExample\nThe following INSERT statement converts input entered in degrees to radians before inserting the record\ninto the database.\nINSERT INTO tests (test_number, distance, angle)\n VALUES (?, ?, RADIANS(?) 
);
REGEXP_POSITION()
REGEXP_POSITION() — Returns the starting position of a regular expression within a text string.
Syntax
REGEXP_POSITION( string, pattern [, flag] )
Description
The REGEXP_POSITION() function returns the starting position of the first instance of the specified regular expression within a text string. The position value starts at one (1) for the first position in the string and the function returns zero (0) if the regular expression is not found.
The first argument to the function is the VARCHAR character string to be searched. The second argument is the regular expression pattern to look for. The third argument is an optional flag that specifies whether the search is case sensitive or not. The flag must be a single-character VARCHAR with one of the following values:
Flag Description
c Case-sensitive matching (default)
i Case-insensitive matching
There are several different formats for regular expressions. The REGEXP_POSITION() function uses the revised Perl compatible regular expression (PCRE2) syntax, which is described in detail on the PCRE website.
Examples
The following example uses REGEXP_POSITION() to filter all records where the column description matches a specific pattern. The example uses the optional flag argument to make the pattern match text regardless of case.
SELECT incident, description FROM securityLog 
 WHERE REGEXP_POSITION(description, 
 'host:\s*10\.186\.[0-9]+\.[0-9]+', 
 'i') > 0 
 ORDER BY incident;
REPEAT()
REPEAT() — Returns a string composed of a substring repeated the specified number of times.
Syntax
REPEAT( string-expression , numeric-expression )
Description
The REPEAT() function returns a string composed of the substring string-expression repeated n times, where n is defined by the second argument to the function.
Example
The following example uses the REPEAT and the CHAR_LENGTH functions to replace a column's actual contents with a mask composed of the letter "X" the same length as the original column value.
SELECT username, REPEAT('X', CHAR_LENGTH(password)) as Password 
 FROM accounts ORDER BY username;
REPLACE()
REPLACE() — Returns a string replacing the specified substring of the original string with new text.
Syntax
REPLACE( string, substring, replacement-string )
Description
The REPLACE() function returns a copy of the first argument, replacing all instances of the substring identified by the second argument with the third argument. If the substring is not found, no changes are made and a copy of the original string is returned.
Example
The following example uses the REPLACE function to update the Address column, replacing the string "Ceylon" with "Sri Lanka".
UPDATE Customers SET address=REPLACE( address,'Ceylon', 'Sri Lanka')
 WHERE address LIKE '%Ceylon%';
RIGHT()
RIGHT() — Returns a substring from the end of a string.
Syntax
RIGHT( string-expression , numeric-expression )
Description
The RIGHT() function returns the last n characters from a string expression, where n is the second argument to the function.
Example
The following example uses the LEFT() and RIGHT() functions to return an abbreviated summary of the Description column, ensuring the result fits within 20 characters.
SELECT product_name, 
 LEFT(description,10) || '...' 
 || RIGHT(description,7) 
 FROM product_list ORDER BY product_name;
ROUND()
ROUND() — Returns a numeric value rounded to the specified decimal place
Syntax
ROUND( numeric-value , rounding-position )
Description
The ROUND() function returns the input value rounded to the specified decimal place. The result is returned as a DECIMAL value.
The numeric-value argument must be a FLOAT or DECIMAL value. The rounding-position argument must be an integer between 12 and -25 and indicates the place to which the numeric value should be rounded. Positive values indicate a decimal place; for example, 2 means round to 2 decimal places. Negative values indicate rounding to a whole number position; for example, -2 indicates the number should be rounded to the nearest hundred. A zero indicates that the value should be rounded to the nearest whole number.
Rounding is performed using "banker's rounding", in that any fractional half is rounded to the nearest even number. So, for example, if the rounding-position is 2, the value 22.225 is rounded to 22.22, but the value 33.335 is rounded to 33.34. The following list demonstrates some sample results.
ROUND (.123456789, 4) = 0.123500000000
ROUND( 123456789.123, 2 ) = 123456789.120000000000
ROUND( 123456789.123, 0 ) = 123456789.000000000000
ROUND( 123456789.123, -2 ) = 123456800.000000000000
ROUND( 123456789.123, -6 ) = 123000000.000000000000
ROUND( 123456789.123, 6 ) = 123456789.123000000000
Examples
The following example uses the ROUND() function to return a DECIMAL value, rounding the value of the budget column to two decimal places.
SELECT country, ROUND(budget,2) AS annual_budget 
 FROM world_economy ORDER BY country;
SEC()
SEC() — Returns the secant of an angle specified in radians.
Syntax
SEC( {numeric-expression } )
Description
The SEC() function returns the secant of a specified angle as a FLOAT value. The angle must be specified in radians as a numeric expression.
Examples
The following example returns the secant, cosecant, and cotangent of angles from 0 to 90 degrees (where the angle is specified in radians).
SELECT SEC(radians), CSC(radians), COT(radians) 
 FROM triangles WHERE radians >= 0 AND radians <= PI()/2;
SECOND()
SECOND() — Returns the seconds of the minute as a floating point value.
Syntax
SECOND( timestamp-value )
Description
The SECOND() function returns a floating point value between 0 and 60 representing the whole and fractional part of the number of seconds in the minute of a timestamp value. This function produces the same result as using the SECOND keyword with the EXTRACT() function.
Examples
The following example uses the HOUR(), MINUTE(), and SECOND() functions to return the time portion of a TIMESTAMP value in a formatted string.
SELECT eventname,
 CAST(HOUR(starttime) AS VARCHAR) || ' hours, ' ||
 CAST(MINUTE(starttime) AS VARCHAR) || ' minutes, and ' ||
 CAST(SECOND(starttime) AS VARCHAR) || ' seconds.' 
 AS timestring FROM event;
SET_FIELD()
SET_FIELD() — Returns a copy of a JSON-encoded string, replacing the specified field value.
Syntax
SET_FIELD( column, field-name-path, string-value )
Description
The SET_FIELD() function finds the specified field within a JSON-encoded string and returns a copy of the string with the new value replacing that field's previous value. 
Note that the SET_FIELD() function\nreturns an altered copy of the JSON-encoded string — it does not change any column values in place. So\nto change existing database columns, you must use SET_FIELD() with an UPDATE statement.\nFor example, assume the Product table contains a VARCHAR column Productinfo which for one row\ncontains the following JSON string:\n{\"product\":\"Acme widget\",\n \"availability\":\"plenty\",\n \"info\": { \"description\": \"A fancy widget.\",\n \"sku\":\"ABCXYZ\",\n \"part_number\":1234},\n \"warehouse\":[{\"location\":\"Dallas\",\"units\":25},\n {\"location\":\"Chicago\",\"units\":14},\n {\"location\":\"Troy\",\"units\":67}]\n}\nIt is possible to change the value of the availability field using the SET_FIELD function, like so:\nUPDATE Product SET Productinfo = \n SET_FIELD(Productinfo,'availability','\"limited\"')\n WHERE FIELD(Productinfo,'product') = 'Acme widget';\nThe second argument to the SET_FIELD() function can be a simple field name, as in the previous example,\nIn which case the function replaces the value of the top field matching the specified name. Alternately, you\ncan specify a path representing a hierarchy of names separated by periods. For example, you can replace\nthe SKU number by specifying \"info.sku\" as the second argument, or you can replace the number of units\nin the second warehouse by specifying the field path \"warehouse[1].units\". For example, the following\nUPDATE statement does both by nesting SET_FIELD commands:\nUPDATE Product SET Productinfo = \n SET_FIELD(\n SET_FIELD(Productinfo,'info.sku','\"DEFGHI\"'),\n 'warehouse[1].units', '128')\n WHERE FIELD(Productinfo,'product') = 'Acme widget';\nNote that the third argument is the string value that will be inserted into the JSON-encoded string. To insert\na numeric value, you enclose the value in single quotation marks, as in the preceding example where '128'\nis used as the replacement value for the warehouse[1].units field. To insert a string value, you must\ninclude the string quotation marks within the replacement string itself. For example, the preceding code\nuses the SQL string constant '\"DEFGHI\"' to specify the replacement value for the text field info.sku .\n333SQL Functions\nFinally, the replacement string value can be any valid JSON value, including another JSON-encoded object\nor array. It does not have to be a scalar string or numeric value.\nExample\nThe following example uses the SET_FIELD() function to add a new array element to the warehouse field.\nUPDATE Product SET Productinfo = \n SET_FIELD(Productinfo,'warehouse',\n '[{\"location\":\"Dallas\",\"units\":25},\n {\"location\":\"Chicago\",\"units\":14},\n {\"location\":\"Troy\",\"units\":67},\n {\"location\":\"Phoenix\",\"units\":23}]')\n WHERE FIELD(Productinfo,'product') = 'Acme widget';\n334SQL Functions\nSIN()\nSIN() — Returns the sine of an angle specified in radians.\nSyntax\nSIN( {numeric-expression } )\nDescription\nThe SIN() function returns the sine of a specified angle as a FLOAT value. 
The angle must be specified in radians as a numeric expression.
Example
The following example returns the sine, cosine, and tangent of angles from 0 to 90 degrees (where the angle is specified in radians).
SELECT SIN(radians), COS(radians), TAN(radians) 
 FROM triangles WHERE radians >= 0 AND radians <= PI()/2;
SINCE_EPOCH()
SINCE_EPOCH() — Converts a VoltDB timestamp to an integer number of time units since the POSIX epoch.
Syntax
SINCE_EPOCH( time-unit, timestamp-expression )
Description
The SINCE_EPOCH() function converts a VoltDB timestamp into a 64-bit integer value (BIGINT) representing the equivalent number of the specified time units since the POSIX epoch. POSIX time is usually represented as the number of seconds since the epoch; that is, since 00:00.00 on January 1, 1970 Coordinated Universal Time (UTC). So the function SINCE_EPOCH(SECOND, timestamp) returns the POSIX time equivalent for the value of timestamp. However, you can also request the number of milliseconds or microseconds since the epoch. The valid keywords for specifying the time units are:
•SECOND — Seconds since the epoch
•MILLISECOND, MILLIS — Milliseconds since the epoch
•MICROSECOND, MICROS — Microseconds since the epoch
You cannot perform arithmetic on timestamps directly. So SINCE_EPOCH() is useful for performing calculations on timestamp values in SQL expressions. For example, the following SQL statement looks for events that are less than a minute in length, based on the timestamp columns STARTTIME and ENDTIME:
SELECT * FROM Event
 WHERE ( SINCE_EPOCH(Second, endtime) 
 - SINCE_EPOCH(Second, starttime) ) < 60;
The TO_TIMESTAMP() function performs the inverse of SINCE_EPOCH(), by converting an integer value to a VoltDB timestamp based on the specified time units.
Example
The following example returns a timestamp column as the equivalent POSIX time value.
SELECT event_id, event_name, 
 SINCE_EPOCH(Second, starttime) as posix_time FROM Event
 ORDER BY event_id;
The next example uses SINCE_EPOCH() to return the length of an event, in microseconds, by calculating the difference between two timestamp columns.
SELECT event_id, event_type,
 SINCE_EPOCH(Microsecond, endtime)
 -SINCE_EPOCH(Microsecond, starttime) AS delta
 FROM Event ORDER BY event_id;
SPACE()
SPACE() — Returns a string of spaces of the specified length.
Syntax
SPACE( numeric-expression )
Description
The SPACE() function returns a string composed of n spaces where the string length n is specified by the function's argument. 
SPACE(n) is a synonym for REPEAT(' ', n).
Example
The following example uses the SPACE and CHAR_LENGTH functions to ensure the result is a fixed length, padded with blank spaces.
 SELECT product_name || SPACE(80 - CHAR_LENGTH(product_name))
 FROM product_list ORDER BY product_name;
SQRT()
SQRT() — Returns the square root of a numeric expression.
Syntax
SQRT( numeric-expression )
Description
The SQRT() function returns the square root of the specified numeric expression.
Example
The following example uses the SQRT and POWER functions to return the distance of a graph point from the origin.
SELECT location, x, y, 
 SQRT(POWER(x,2) + POWER(y,2)) AS distance
 FROM points ORDER BY location;
STR()
STR() — Returns the string representation of a numeric value.
Syntax
STR( numeric-value [string-length [decimal-precision ]] )
Description
The STR() function returns a string representation of the numeric input. The first argument can be either a FLOAT or DECIMAL value.
The optional second argument specifies the maximum size of the output string and must be an integer between 0 and 38. If the maximum string length is less than the number of characters required to represent the numeric value, the resulting string is filled with asterisk (*) characters. The default length is 10 characters.
The optional third argument specifies the number of decimal places included in the output, which is specified as an integer between 0 and 12. If the numeric value requires more decimal places than specified, the value is rounded using "banker's rounding". (See the description of the FORMAT_CURRENCY() function for a description of banker's rounding.) If the decimal precision is not specified, the value is rounded and only the integer portion is returned.
Example
The following example uses STR() to return a percentage, rounded to two decimal places and including the percent sign.
SELECT STR( 100.0 * c.population / total_pop, 6, 2 ) || '%'
 FROM countries AS c,
 (SELECT SUM(population) AS total_pop FROM countries) as w
 WHERE c.name=?;
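As a further illustration of the optional arguments, the following sketch assumes a hypothetical FLOAT column named price in the product_list table. STR(price, 6, 2) returns each value as a string of at most six characters with two decimal places (for example, '149.95'), or a string of asterisks if the value cannot be represented in six characters, while STR(price) with no length or precision returns only the rounded integer portion:
SELECT product_name, STR(price, 6, 2) AS list_price, STR(price) AS rounded_price
 FROM product_list ORDER BY product_name;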
SUBSTRING()
SUBSTRING() — Returns the specified portion of a string expression.
Syntax
SUBSTRING( string-expression FROM position [FOR length] )
SUBSTRING( string-expression , position [, length] )
Description
The SUBSTRING() function returns a specified portion of the string expression, where position specifies the starting position of the substring (starting at position 1) and length specifies the maximum length of the substring. The length of the returned substring is the lower of the remaining characters in the string expression or the value specified by length.
For example, if the string expression is "ABCDEF" and position is specified as 3, the substring starts with the character "C". If length is also specified as 3, the return value is "CDE". If, however, the length is specified as 5, only the remaining four characters "CDEF" are returned.
If length is not specified, the remainder of the string, starting from the specified position, is returned. For example, SUBSTRING("ABCDEF",3) and SUBSTRING("ABCDEF",3,4) return the same value.
Example
The following example uses the SUBSTRING function to return the month of the year, which is a VARCHAR column, as a three letter abbreviation.
SELECT event, SUBSTRING(month,1,3), day, year FROM calendar
 ORDER BY event ASC;
SUM()
SUM() — Returns the sum of a range of numeric column values.
Syntax
SUM( column-expression )
Description
The SUM() function returns the sum of a range of numeric column values. The values being added together depend on the constraints defined by the WHERE and GROUP BY clauses.
Example
The following example uses the SUM() function to determine how much inventory exists for each product type in the catalog.
SELECT category, SUM(quantity) AS inventory FROM product_list
 GROUP BY category ORDER BY category;
TAN()
TAN() — Returns the tangent of an angle specified in radians.
Syntax
TAN( {numeric-expression } )
Description
The TAN() function returns the tangent of a specified angle as a FLOAT value. The angle must be specified in radians as a numeric expression.
Example
The following example returns the sine, cosine, and tangent of angles from 0 to 90 degrees (where the angle is specified in radians).
SELECT SIN(radians), COS(radians), TAN(radians) 
 FROM triangles WHERE radians >= 0 AND radians <= PI()/2;
TO_TIMESTAMP()
TO_TIMESTAMP() — Converts an integer value to a VoltDB timestamp based on the time unit specified.
Syntax
TO_TIMESTAMP( time-unit, integer-expression )
Description
The TO_TIMESTAMP() function converts an integer expression to a VoltDB timestamp, interpreting the integer value as the number of specified time units since the POSIX epoch. POSIX time is usually represented as the number of seconds since the epoch; that is, since 00:00:00 on January 1, 1970 Coordinated Universal Time (UTC). So the function TO_TIMESTAMP(Second, timeinsecs) returns the VoltDB TIMESTAMP equivalent of timeinsecs interpreted as a POSIX time value. However, you can also request the integer value be interpreted as milliseconds or microseconds since the epoch. The valid keywords for specifying the time units are:
•SECOND — Seconds since the epoch
•MILLISECOND, MILLIS — Milliseconds since the epoch
•MICROSECOND, MICROS — Microseconds since the epoch
You cannot perform arithmetic on timestamps directly. So TO_TIMESTAMP() is useful for converting the results of arithmetic expressions to VoltDB TIMESTAMP values.
For example, the following SQL statement uses TO_TIMESTAMP to convert a POSIX time value before inserting it into a VoltDB TIMESTAMP column:
INSERT INTO Event 
 (event_id,event_name,event_type, starttime) 
 VALUES(?,?,?,TO_TIMESTAMP(Second, ?));
The SINCE_EPOCH() function performs the inverse of TO_TIMESTAMP(), by converting a VoltDB TIMESTAMP to an integer value based on the specified time units.
Example
The following example updates a TIMESTAMP column, adding one hour (in seconds) to the current value using SINCE_EPOCH() and TO_TIMESTAMP() to perform the conversion and arithmetic:
UPDATE Contest 
 SET deadline=TO_TIMESTAMP(Second, SINCE_EPOCH(Second,deadline) + 3600)
 WHERE expired=1;
TRIM()
TRIM() — Returns a string with leading and/or trailing spaces removed.
Syntax
TRIM( [[ LEADING | TRAILING | BOTH ] [ character] FROM] string-expression )
Description
The TRIM() function returns a string with leading and/or trailing spaces removed. By default, the TRIM function removes spaces from both the beginning and end of the string. If you specify the LEADING or TRAILING clause, spaces are removed from either the beginning or end of the string only.
You can also specify an alternate character to remove. By default only spaces (UTF-8 character code 32) are removed. If you specify a different character, only that character will be removed. For example, the following INSERT statement uses the TRIM function to remove any TAB characters from the beginning of the string input for the ADDRESS column:
INSERT INTO Customers (first, last, address) 
 VALUES(?, ?, 
 TRIM( LEADING CHAR(9) FROM CAST(? AS VARCHAR) ) 
 );
Example
The following example uses TRIM() to remove extraneous leading and trailing spaces from the output for three VARCHAR columns:
SELECT TRIM(first), TRIM(last), TRIM(address) FROM Customer 
 ORDER BY last, first;
TRUNCATE()
TRUNCATE() — Truncates a VoltDB timestamp to the specified time unit.
Syntax
TRUNCATE( time-unit, timestamp )
Description
The TRUNCATE() function truncates a timestamp value to the specified time unit. For example, if the timestamp column Apollo has the value July 20, 1969 4:17:40 P.M., then using the function TRUNCATE(hour,apollo) would return the value July 20, 1969 4:00:00 P.M.
Allowable time units for truncation include the following:
•YEAR
•QUARTER
•MONTH
•DAY
•HOUR
•MINUTE
•SECOND
•MILLISECOND, MILLIS
Example
The following example uses the TRUNCATE function to find records where the timestamp column, incident, falls within a specific day, entered as a POSIX time value.
SELECT incident, description FROM securitylog 
 WHERE TRUNCATE(DAY, incident) = TRUNCATE(DAY,FROM_UNIXTIME(?))
 ORDER BY incident, description;
UPPER()
UPPER() — Returns a string converted to all uppercase characters.
Syntax
UPPER( string-expression )
Description
The UPPER() function returns a copy of the input string converted to all uppercase characters.
Example
The following example uses the UPPER function to return results alphabetically regardless of case.
SELECT UPPER(product_name), product_id FROM product_list
 ORDER BY UPPER(product_name);
VALIDPOLYGONFROMTEXT()
VALIDPOLYGONFROMTEXT() — Returns a validated GEOGRAPHY value from the corresponding WKT.
Syntax
VALIDPOLYGONFROMTEXT( text )
Description
The VALIDPOLYGONFROMTEXT() function generates a valid GEOGRAPHY value from a string containing a well known text (WKT) representation of a geographic polygon. If the GEOGRAPHY value resulting from the WKT string cannot be made into a valid representation of a polygon, the function returns an error. The error message includes an explanation of why the WKT is not valid. If the polygon is valid except the rings are drawn in the wrong direction (that is, the outer ring is clockwise or the inner rings are counterclockwise), the VALIDPOLYGONFROMTEXT() function will correct the rings and generate a valid polygon.
The difference between the POLYGONFROMTEXT() function and the VALIDPOLYGONFROMTEXT() function is that the VALIDPOLYGONFROMTEXT() verifies that the resulting polygon meets all of the requirements for use by VoltDB. If a valid polygon cannot be generated, the VALIDPOLYGONFROMTEXT() function returns an error. The POLYGONFROMTEXT() function, on the other hand, simply constructs a GEOGRAPHY value without validating all of the requirements of a VoltDB polygon and may need separate validation (using the ISVALID() or MAKEVALIDPOLYGON() function) before it can be used effectively with other geospatial functions. See the description of the ISVALID() function for a description of the requirements for a valid polygon.
Example
The following example uses the VALIDPOLYGONFROMTEXT() function to insert a record containing a GEOGRAPHY column using a text input value containing the WKT representation of a geographic polygon. Note that if the input text cannot be converted to a valid polygon, the function returns an error and the INSERT fails.
INSERT INTO city (name, state, boundary) 
 VALUES(?, ?, VALIDPOLYGONFROMTEXT(?));
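As a concrete sketch of the same statement with a literal value in place of the placeholder, the following INSERT uses a hypothetical city name and set of coordinates, with the outer ring listed counterclockwise so the polygon passes validation:
INSERT INTO city (name, state, boundary) 
 VALUES('Squaretown', 'NY',
 VALIDPOLYGONFROMTEXT('POLYGON((-73.9 42.7, -73.8 42.7, -73.8 42.8, -73.9 42.8, -73.9 42.7))'));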
WEEK(), WEEKOFYEAR()
WEEK(), WEEKOFYEAR() — Returns the week of the year as an integer value.
Syntax
WEEK( timestamp-value )
WEEKOFYEAR( timestamp-value )
Description
The WEEK() and WEEKOFYEAR() functions are synonyms and return an integer value between 1 and 52 representing the timestamp's week of the year. These functions produce the same result as using the WEEK_OF_YEAR keyword with the EXTRACT() function.
Examples
The following example uses the WEEK() function to group and sort records containing a timestamp.
SELECT week(starttime), count(*) as eventsperweek
 FROM event GROUP BY week(starttime) ORDER BY week(starttime);
WEEKDAY()
WEEKDAY() — Returns the day of the week as an integer between 0 and 6.
Syntax
WEEKDAY( timestamp-value )
Description
The WEEKDAY() function returns an integer value between 0 and 6 representing the day of the week in a timestamp value. For the WEEKDAY() function, the week starts (0) on Monday and ends (6) on Sunday. This function is provided for compatibility with MySQL and produces the same result as using the WEEKDAY keyword with the EXTRACT() function.
Examples
The following example uses WEEKDAY() and the DECODE() function to return a string value representing the day of the week for the specified TIMESTAMP value.
SELECT eventtime, 
 DECODE(WEEKDAY(eventtime),
 0, 'Monday',
 1, 'Tuesday',
 2, 'Wednesday',
 3, 'Thursday',
 4, 'Friday',
 5, 'Saturday',
 6, 'Sunday') AS eventday
 FROM event ORDER BY eventtime;
YEAR()
YEAR() — Returns the year as an integer value.
Syntax
YEAR( timestamp-value )
Description
The YEAR() function returns an integer value representing the year of a TIMESTAMP value. The YEAR() function produces the same result as using the YEAR keyword with the EXTRACT() function.
Examples
The following example uses the DAY(), MONTH(), and YEAR() functions to return a timestamp column as a formatted date string.
SELECT CAST( MONTH(starttime) AS VARCHAR) || '/' || 
 CAST( DAY(starttime) AS VARCHAR) || '/' || 
 CAST( YEAR(starttime) AS VARCHAR), title, description 
 FROM event ORDER BY starttime;
Appendix D. VoltDB CLI Commands
VoltDB provides shell or CLI (command line interpreter) commands to perform common functions for developing, starting, and managing VoltDB applications and databases. This appendix describes those shell commands in detail.
The commands are listed in alphabetical order.
•csvloader
•jdbcloader
•kafkaloader
•sqlcmd
•voltadmin
•voltdb
Using CLI Commands with TLS/SSL
When TLS (Transport Layer Security) encryption is enabled for the cluster external ports (that is, the client and admin ports, not just the httpd port), you must explicitly tell VoltDB CLI commands that interact with a running cluster to use TLS. The simplest way to do this, if you are using a certificate from an external certificate authority, is to include the --ssl flag on the command line. For example:
$ sqlcmd --ssl
Alternately, if you are using a locally generated certificate, you must specify a Java properties file that points to the trust store as an argument to the flag, like so:
$ sqlcmd --ssl=mytruststore.conf
The properties file verifies that the database server is passing credentials that you trust. That is, credentials that match the trust store you reference.
The format of file is a Java properties files declaring two properties,\none per line, that identify the trust store and trust store password:\ntrustStore={path-to-trust-store}\ntrustStorePassword={trust-store password}\nSee Section 12.7, “Encrypting VoltDB Communication Using TLS/SSL” for more information on config-\nuring TLS encryption on the external ports.\n351VoltDB CLI Commands\ncsvloader\ncsvloader — Imports the contents of a CSV file and inserts it into a VoltDB table.\nSyntax\ncsvloader table-name [arguments]\ncsvloader -p procedure-name [arguments]\nDescription\nThe csvloader command reads comma-separated values and inserts each valid line of data into the specified\ntable in a VoltDB database. The most common way to use csvloader is to specify the database table to be\nloaded and a CSV file containing the data, like so:\n$ csvloader employees -f acme_employees.csv\nAlternately, you can use standard input as the source of the data:\n$ csvloader employees < acme_employees.csv\nIn addition to inserting all valid content into the specified database table, csvloader creates three output\nfiles:\n•Error log — The error log provides details concerning any errors that occur while processing the input\nfile. This includes errors in the format of the input as well as errors that occur attempting the insert into\nVoltDB. For example, if two rows contain the same value for a column that is declared as unique, the\nerror log indicates that the second insert fails due to a constraint violation.\n•Failed input — A separate file contains the contents of each line that failed to load. This file is useful\nbecause it allows you to correct any formatting issues and retry just the failed content, rather than having\nto restart and reload the entire table.\n•Summary report — Once all input lines are processed, csvloader generates a summary report listing\nhow many lines were read, how many were successfully loaded and how long the operation took.\nAll three files are created, by default, in the current working directory using \"csvloader\" and the table\nname as prefixes. For example, using csvloader to insert contestants into the sample voter database creates\nthe following files:\ncsvloader_contestants_insert_log.log\ncsvloader_contestants_invalidrows.csv\ncsvloader_contestants_insert_report.log\nIt is possible to use csvloader to load text files other than CSV files, using the --separator , --\nquotechar , and --escape flags. Note that csvloader uses Python to process the command line argu-\nments. So to enter certain non-alphanumeric characters, you must use the appropriate escaping mechanism\nfor Python command lines. For example, to use a tab-delimited file as input, you need to use the --sep-\narator flag, escaping the tab character like so:\n$ csvloader --separator=$'\\t' \\\n352VoltDB CLI Commands\n -f employees.tab employees\nIt is also important to note that, unlike VoltDB native clients, when interpreting string values for TIMES-\nTAMP columns, csvloader evaluates the value in the local timezone. That is, the timezone set by the local\nsystem. To have string values interpreted as Greenwich Mean Time, set the system variable TZ to \"GMT\"\nprior to invoking the csvloader. For example:\n$ export TZ=GMT; csvloader employees -f employees.csv\nArguments\n--batch {integer}\nSpecifies the number of rows to submit in a batch. If you do not specify an insert procedure, rows of\ninput are sent in batches to maximize overall throughput. 
You can specify how many rows are sent\nin each batch using the --batch flag. The default batch size is 200. If you use the --procedure\nflag, no batching occurs and each row is sent separately.\n--blank {error | null | empty }\nSpecifies what to do with missing values in the input. By default, if a line contains a missing value,\nit is interpreted as a null value in the appropriate datatype. If you do not want missing values to\nbe interpreted as nulls, you can use the --blank argument to specify other behaviors. Specifying --\nblank error results in an error if a line contains any missing values and the line is not inserted.\nSpecifying --blank empty returns the corresponding \"empty\" value in the appropriate datatype.\nAn empty value is interpreted as the following:\n•Zero for all numeric columns\n•Zero, or the Unix epoch value, for timestamp columns\n•An empty or zero-length string for VARCHAR and VARBINARY columns\n-c, --charset {character-set}\nSpecifies the character set of the input file. The default character set is UTF-8.\n--columnsizelimit {integer}\nSpecifies the maximum size of quoted column input, in bytes. Mismatched quotation marks in the\ninput can cause csvloader to read all subsequent input — including line breaks — as part of the column.\nTo avoid excessive memory use in this situation, the flag sets a limit on the maximum number of bytes\nthat will be accepted as input for a column that is enclosed in quotation marks and spans multiple\nlines. The default is 16777216 (that is, 16MB).\n--credentials= {properties-file}\nSpecifies a file that lists the username and password of the account to use when connecting to a\ndatabase with security enabled. This is useful when writing shell scripts because it avoids having to\nhardcode the password as plain text in the script. The credentials file is interpreted as a Java properties\nfile defining the properties username and password . For example:\nusername: johndoe\npassword: 4tUn8\nBecause it is a Java properties file, you must escape certain special characters in the username or\npassword, including the colon or equals sign.\n--escape {character}\nSpecifies the escape character that must precede a separator or quotation character that is supposed to\nbe interpreted as a literal character in the CSV input. The default escape character is the backslash (\\).\n353VoltDB CLI Commands\n-f, --file {file-specification}\nSpecifies the location of a CSV file to read as input. If you do not specify an input file, csvloader\nreads input from standard input.\n--header\nSpecifies that the first line of the CSV file is a header row, containing the names of the columns. The\ncolumn names must match columns in the VoltDB table. However, by using --header, the columns\ncan appear in a different order in the CSV file from the order in the database schema. Note that you\nmust specify all of the table column names in the header. The arguments --header and --procedure\nare mutually exclusive.\n--kerberos= {service-name}\nSpecifies the use of kerberos authentication when connecting to the database server(s). The service\nname identifies the Kerberos client service module, as defined by the JAAS login configuration file.\n--limitrows {integer}\nSpecifies the maximum number of rows to be read from the input stream. This argument (along with\n--skip) lets you load a subset of a larger CSV file.\n-m, --maxerrors {integer}\nSpecifies the target number of errors before csvloader stops processing input. 
Once csvloader encounters the specified number of errors while trying to insert rows, it will stop reading input and end the process. Note that, since csvloader performs inserts asynchronously, it often attempts more inserts before the target number of exceptions is returned from the database. So it is possible more errors could be returned after the target is met. This argument lets you conditionally stop a large loading process if more than an acceptable number of errors occur.
--noquotechar
Disables the interpretation of quotation characters in the CSV input. All input other than the separator character and line break will be treated as literal input characters.
--nowhitespace
Specifies that the CSV input must not contain any whitespace between data values and separators. By default, csvloader ignores extra space between values, quotation marks, and the value separators. If you use this argument, any input lines containing whitespace will generate an error and not be inserted into the database.
--password {text}
Specifies the password to use when connecting to the database. You must specify a username and password if security is enabled for the database. If you specify a username with the --user argument but not the --password argument, VoltDB prompts for the password. This is useful when writing shell scripts because it avoids having to hardcode passwords as plain text in the script.
--port {port-number}
Specifies the network port to use when connecting to the database. If you do not specify a port, csvloader uses the default client port 21212.
-p, --procedure {procedure-name}
Specifies a stored procedure to use for loading each record from the data file. The named procedure must exist in the database schema and must accept the fields of the data record as input parameters. By default, csvloader uses a custom procedure to batch multiple rows into a single insert operation. If you explicitly name a procedure, batching does not occur.
--quotechar {character}
Specifies the quotation character that is used to enclose values. By default, the quotation character is the double quotation mark (").
-r, --reportdir {directory}
Specifies the directory where csvloader writes the three output files. By default, csvloader writes output files to the current working directory. This argument lets you redirect output to an alternative location.
-s, --servers {server-id}[,...]
Specifies the network address of one or more nodes of a database cluster. When specifying an IPv6 address, enclose the address in square brackets. By default, csvloader attempts to insert the CSV data into a database on the local system (localhost). To load data into a remote database, use the --servers argument to specify the database nodes the loader should connect to.
--separator {character}
Specifies the character used to separate individual values in the input. By default, the separator character is the comma (,).
--skip {integer}
Specifies the number of lines from the input stream to skip before inserting rows into the database. This argument (along with --limitrows) lets you load a subset of a larger CSV file.
--ssl[=ssl-config-file]
Specifies the use of TLS encryption when communicating with the server. Only necessary if the cluster is configured to use TLS encryption for the external ports.
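For example, a hypothetical invocation using a locally generated certificate and the trust store properties file shown earlier in this appendix might look like:
$ csvloader --ssl=mytruststore.conf employees -f acme_employees.csv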
See Section D, “Using CLI Commands\nwith TLS/SSL” for more information.\n--stopondisconnect\nSpecifies that if connections to all of the VoltDB servers are broken, the loader will stop. Normally,\nif the connection to the database is lost, csvloader periodically attempts to reconnect until the servers\ncome back online and it can complete the loading process. However, you can use this argument to\nhave the loader process stop if the VoltDB cluster becomes unavailable.\n--strictquotes\nSpecifies that all values in the CSV input must be enclosed in quotation marks. If you use this argu-\nment, any input lines containing unquoted values will generate an error and not be inserted into the\ndatabase.\n--update\nSpecifies that existing records with a matching primary key are updated, rather than being rejected. By\ndefault, csvloader attempts to create new records. The --update flag lets you load updates to existing\nrecords — and create new records where the primary key does not already exist. To use --update, the\ntable must have a primary key.\n--user {text}\nSpecifies the username to use when connecting to the database. You must specify a username and\npassword if security is enabled for the database.\nExamples\nThe following example loads the data from a CSV file, languages.csv , into the helloworld table from\nthe Hello World example database and redirects the output files to the ./logs subfolder.\n$ csvloader helloworld -f languages.csv -r ./logs\nThe following example performs the same function, providing the input interactively.\n$ csvloader helloworld -r ./logs\n355VoltDB CLI Commands\n\"Hello\", \"World\", \"English\"\n\"Bonjour\", \"Monde\", \"French\"\n\"Hola\", \"Mundo\", \"Spanish\"\n\"Hej\", \"Verden\", \"Danish\"\n\"Ciao\", \"Mondo\", \"Italian\"\nCTRL-D\n356VoltDB CLI Commands\njdbcloader\njdbcloader — Extracts a table from another database via JDBC and inserts it into a VoltDB table.\nSyntax\njdbcloader table-name [arguments]\njdbcloader -p procedure-name [arguments]\nDescription\nThe jdbcloader command uses the JDBC interface to fetch all records from the specified table in a remote\ndatabase and then insert those records into a matching table in VoltDB. The most common way to use\njdbcloader is to copy matching tables from another database to VoltDB. In this case, you specify the name\nof the table, plus any JDBC-specific arguments that are needed. Usually, the required arguments are the\nJDBC connection URL, the source table, the username, password, and local JDBC driver. For example:\n$ jdbcloader employees \\\n --jdbcurl=jdbc:postgresql://remotesvr/corphr \\\n --jdbctable=employees \\\n --jdbcuser=charlesdickens \\\n --jdbcpassword=bleakhouse \\\n --jdbcdriver=org.postgresql.Driver\nIn addition to inserting all valid content into the specified database table, jdbcloader creates three output\nfiles:\n•Error log — The error log provides details concerning any errors that occur while processing the input\nfile. This includes errors that occur attempting the insert into VoltDB. For example, if two rows contain\nthe same value for a column that is declared as unique, the error log indicates that the second insert fails\ndue to a constraint violation.\n•Failed input — A separate file contains the contents of each record that failed to load. The records are\nstored in CSV (comma-separated value) format. 
This file is useful because it allows you to correct any\nformatting issues and retry just the failed content using the csvloader.\n•Summary report — Once all input records are processed, jdbcloader generates a summary report listing\nhow many records were read, how many were successfully loaded and how long the operation took.\nAll three files are created, by default, in the current working directory using \"jdbcloader\" and the table\nname as prefixes. For example, using jdbcloader to insert contestants into the sample voter database creates\nthe following files:\njdbcloader_contestants_insert_log.log\njdbcloader_contestants_insert_invalidrows.csv\njdbcloader_contestants_insert_report.log\nIt is possible to use jdbcloader to perform other input operations. For example, if the source table does\nnot have the same structure as the target table, you can use a custom stored procedure to perform the\nnecessary translation from one to the other by specifying the procedure name on the command line with\nthe --procedure flag:\n357VoltDB CLI Commands\n$ jdbcloader --procedure translateEmpRecords \\\n --jdbcurl=jdbc:postgresql://remotesvr/corphr \\\n --jdbctable=employees \\\n --jdbcuser=charlesdickens \\\n --jdbcpassword=bleakhouse \\\n --jdbcdriver=org.postgresql.Driver\nArguments\n--batch {integer}\nSpecifies the number of rows to submit in a batch to the target VoltDB database. If you do not specify\nan insert procedure, rows of input are sent in batches to maximize overall throughput. You can specify\nhow many rows are sent in each batch using the --batch flag. The default batch size is 200. If you\nuse the --procedure flag, no batching occurs and each row is sent separately.\n--credentials= {properties-file}\nSpecifies a file that lists the username and password of the account to use when connecting to a\ndatabase with security enabled. This is useful when writing shell scripts because it avoids having to\nhardcode the password as plain text in the script. The credentials file is interpreted as a Java properties\nfile defining the properties username and password . For example:\nusername: johndoe\npassword: 4tUn8\nBecause it is a Java properties file, you must escape certain special characters in the username or\npassword, including the colon or equals sign.\n--fetchsize {integer}\nSpecifies the number of records to fetch in each JDBC call to the source database. The default fetch\nsize is 100 records,\n--jdbcdriver {class-name}\nSpecifies the class name of the JDBC driver to invoke. The driver must exist locally and be accessible\neither from the CLASSPATH environment variable or in the lib/extension directory where\nVoltDB is installed.\n--jdbcpassword {text}\nSpecifies the password to use when connecting to the source database via JDBC. You must specify a\nusername and password if security is enabled on the source database.\n--jdbctable {table-name}\nSpecifies the name of source table on the remote database. By default, jdbcloader assumes the source\ntable has the same name as the target VoltDB table.\n--jdbcurl {connection-URL}\nSpecifies the JDBC connection URL for the source database. This argument is required.\n--jdbcuser {text}\nSpecifies the username to use when connecting to the source database via JDBC. You must specify a\nusername and password if security is enabled on the source database.\n--limitrows {integer}\nSpecifies the maximum number of rows to be read from the input stream. 
This argument lets you load\na subset of a remote database table.\n358VoltDB CLI Commands\n-m, --maxerrors {integer}\nSpecifies the target number of errors before jdbcloader stops processing input. Once jdbcloader en-\ncounters the specified number of errors while trying to insert rows, it will stop reading input and end\nthe process. Note that, since jdbcloader performs inserts asynchronously, it often attempts more inserts\nbefore the target number of exceptions are returned from the database. So it is possible more errors\ncould be returned after the target is met. This argument lets you conditionally stop a large loading\nprocess if more than an acceptable number of errors occur.\n--password {text}\nSpecifies the password to use when connecting to the database. You must specify a username and\npassword if security is enabled for the database. If you specify a username with the --user argument\nbut not the --password argument, VoltDB prompts for the password. This is useful when writing shell\nscripts because it avoids having to hardcode passwords as plain text in the script.\n--port {port-number}\nSpecifies the network port to use when connecting to the VoltDB database. If you do not specify a\nport, jdbcloader uses the default client port 21212.\n-p, --procedure {procedure-name}\nSpecifies a stored procedure to use for loading each record from the input source. The named procedure\nmust exist in the VoltDB database schema and must accept the fields of the data record as input\nparameters. By default, jdbcloader uses a custom procedure to batch multiple rows into a single insert\noperation. If you explicitly name a procedure, batching does not occur.\n-r, --reportdir {directory}\nSpecifies the directory where jdbcloader writes the three output files. By default, jdbcloader writes\noutput files to the current working directory. This argument lets you redirect output to an alternative\nlocation.\n--s, --servers {server-id}[,...]\nSpecifies the network address of one or more nodes of a VoltDB cluster. When specifying an IPv6\naddress, enclose the address in square brackets. By default, jdbcloader attempts to insert the data into\na VoltDB database on the local system (localhost). To load data into a remote database, use the --\nservers argument to specify the VoltDB database nodes the loader should connect to.\n--ssl[=ssl-config-file]\nSpecifies the use of TLS encryption when communicating with the server. Only necessary if the cluster\nis configured to use TLS encryption for the external ports. See Section D, “Using CLI Commands\nwith TLS/SSL” for more information.\n--stopondisconnect\nSpecifies that if connections to all of the VoltDB servers are broken, the loader will stop. Normally, if\nthe connection to the database is lost, jdbcloader periodically attempts to reconnect until the servers\ncome back online and it can complete the loading process. However, you can use this argument to\nhave the loader process stop if the VoltDB cluster becomes unavailable.\n--user {text}\nSpecifies the username to use when connecting to the VoltDB database. You must specify a username\nand password if security is enabled on the target database.\nExample\nThe following example loads records from the Products table of the Warehouse database on server hq.my-\ncompany.com and writes the records into the Products table of the VoltDB database on servers svrA, svrB,\n359VoltDB CLI Commands\nand svrC, using the MySQL JDBC driver to access to source database. 
Note that the --jdbctable flag is not needed since the source and target tables have the same name.
$ jdbcloader Products --servers="svrA,svrB,svrC" \
 --jdbcurl="jdbc:mysql://hq.mycompany.com/warehouse" \
 --jdbcdriver="com.mysql.jdbc.Driver" \
 --jdbcuser="ceo" \
 --jdbcpassword="headhoncho"
kafkaloader
kafkaloader — Imports data from a Kafka message queue into the specified database table.
Syntax
kafkaloader table-name [arguments]
Description
The kafkaloader utility loads data from a Kafka message queue and inserts each message as a separate record into the specified database table. Apache Kafka is a distributed messaging service that lets you set up message queues which are written to and read from by "producers" and "consumers", respectively. In the Apache Kafka model, the kafkaloader acts as a "consumer".
When you start the kafkaloader, you must specify at least three arguments:
•The database table
•The Kafka server to read messages from, specified using the --brokers flag
•The Kafka "topic" where the messages are stored, specified using the --topic flag
For example:
$ kafkaloader --brokers=quesvr:2181 --topic=voltdb_customer customer
Note that Kafka does not impose any specific format on the messages it manages. The format of the messages is application specific. In the case of kafkaloader, VoltDB assumes the messages are encoded as standard comma-separated value (CSV) strings, with the values representing the columns of the table in the order listed in the schema definition. Each Kafka message contains a single row to be inserted into the database table.
It is also important to note that, unlike the csvloader which reads a static file, the kafkaloader is reading from a queue where messages can be written at any time, on an ongoing basis. Therefore, the kafkaloader process does not stop when it reads the last message on the queue; instead it continues to monitor the queue and process any new messages it receives. The kafkaloader process will continue to read from the queue until one of the following events occurs:
•The maximum number of errors (specified by --maxerrors ) is reached.
•The user explicitly stops the process.
•If --stopondisconnect is specified and connection to all of the VoltDB servers is broken (that is, kafkaloader can no longer access the VoltDB database).
The kafkaloader will not terminate if it loses its connection to the Kafka zookeeper. Therefore, it is important to monitor the Kafka service and restart the kafkaloader if and when the Kafka service is interrupted. Similarly, the kafkaloader will not stop if it loses connection to the VoltDB database, unless you include the --stopondisconnect argument on the command line.
Arguments
Note
The arguments --servers and --port are deprecated in favor of the new, more flexible argument --host. Also, the argument --zookeeper is deprecated in favor of the new argument --brokers. The deprecated arguments continue to work but may be removed in a future major release.
--batch {integer}
Specifies the number of rows to submit in a batch. By default, rows of input are sent in batches to maximize overall throughput. You can specify how many rows are sent in each batch using the --batch flag. The default batch size is 200.
Note that --batch and --flush work together.
Whichever limit is reached first triggers an insert to the database.
-b, --brokers {kafka-broker[:port]}[,...]
Specifies one or more Kafka brokers to connect to. Specify multiple brokers as a comma-separated list. The Kafka service must be running Kafka 0.10.2 or later (including 1.0.0).
-c, --config {file}
Specifies a Kafka configuration file that lets you set Kafka consumer properties, such as group.id. The file should contain the names of the properties you want to set, one per line, followed by an equals sign and the desired value. For example:
group.id=mydb
client.id=myappname
--commitpolicy {interval}
Because the loader performs two distinct tasks — retrieving records from Kafka and then inserting them into VoltDB — Kafka's automated tracking of the current offset may not match what records are successfully inserted into the database. Therefore, by default, the importer uses a manual commit policy to ensure the Kafka offset matches the completed inserts.
Use of the default commit policy is recommended. However, you can, if you choose, use Kafka's automated commit policy by specifying a commit interval, in milliseconds, using this property.
--credentials= {properties-file}
Specifies a file that lists the username and password of the account to use when connecting to a database with security enabled. This is useful when writing shell scripts because it avoids having to hardcode the password as plain text in the script. The credentials file is interpreted as a Java properties file defining the properties username and password. For example:
username: johndoe
password: 4tUn8
Because it is a Java properties file, you must escape certain special characters in the username or password, including the colon or equals sign.
-f, --flush {integer}
Specifies the maximum number of seconds before pending data is written to the database. The default flush period is 10 seconds.
If data is inserted into the Kafka queue intermittently, there could be a long delay between when data is read from the queue and when enough records have been read to meet the --batch limit. The flush value avoids unnecessary delays in this situation by periodically writing all pending data. If the flush limit is reached, all pending records are written to the database, even if the --batch limit has not been satisfied.
--formatter {file}
Specifies a configuration file identifying properties for a custom formatter. The file must set the property formatter to the class for the custom implementation of the Formatter interface. (Note, this is different than the attribute set when declaring a formatter for a built-in import connector. For the kafkaloader utility you specify the Formatter class, not the Formatter Factory.) You can also declare additional custom properties used by the formatter itself. For example:
formatter=myformatter.MyFormatter
column_width=12
Before running kafkaloader with a custom formatter, you must define two environment variables: ZK_LIB pointing to the location of the Apache Zookeeper libraries and FORMATTER_LIB pointing to the location of your custom formatter JAR file. See the chapter on "Custom Importers, Exporters, and Formatters" in the VoltDB Guide to Performance and Customization for more information about using custom formatters.
-H, --host {server[:port]}[,...]
Specifies one or more nodes of the database cluster where the records are to be inserted.
You can specify servers as a network address or hostname, plus an optional port number. When specifying an IPv6 address, enclose the address (exclusive of the optional colon and port number) in square brackets. By default, kafkaloader attempts to connect to the default client port on the local system (localhost). To load data into a remote database, use the --host argument to specify one or more VoltDB servers the loader should connect to. Once kafkaloader connects to at least one cluster node, it will automatically connect to the other servers in the cluster.
-m, --maxerrors {integer}
Specifies the target number of input errors before kafkaloader stops processing input. Once kafkaloader encounters the specified number of errors while trying to insert rows, it will stop reading input and end the process.
The default maximum error count is 100. Since Kafka import can be a persistent process, you can avoid having input errors cancel ongoing import by setting the maximum error count to zero, which means that the loader will continue to run no matter how many input errors are generated.
--maxpollinterval {integer}
Specifies the maximum time (in milliseconds) allowed between polls of the Kafka brokers before Kafka assumes the kafkaloader client has failed and drops it from the client group. The default poll interval is 300 seconds (5 minutes).
--maxpollrecords {integer}
Specifies the maximum number of records fetched in each batch from the Kafka brokers. The default maximum is 500 records.
--maxrequesttimeout {integer}
Specifies the maximum length of time (in milliseconds) VoltDB waits for a response from the Kafka brokers before retrying the request or timing out the session. The default time out is 305 seconds (just over 5 minutes).
--maxsessiontimeout {integer}
Specifies the maximum interval between heartbeats from the consumer (kafkaloader) and the Kafka brokers before Kafka drops the kafkaloader from the client group identified by group.id. The default time out is 20 seconds.
-n, --consumercount {integer}
Specifies the number of concurrent Kafka consumers kafkaloader uses to pull data from the brokers. The default is one consumer.
--password {text}
Specifies the password to use when connecting to the database. You must specify a username and password if security is enabled for the database. If you specify a username with the --user argument but not the --password argument, VoltDB prompts for the password. This is useful when writing shell scripts because it avoids having to hardcode passwords as plain text in the script.
-p, --procedure {procedure-name}
Specifies a stored procedure to use for loading each record from the data file. The named procedure must exist in the database schema and must accept the fields of the data record as input parameters. By default, kafkaloader uses a custom procedure to batch multiple rows into a single insert operation. If you explicitly name a procedure, batching does not occur.
--ssl[=ssl-config-file]
Specifies the use of TLS encryption when communicating with the server. Only necessary if the cluster is configured to use TLS encryption for the external ports. See Section D, “Using CLI Commands with TLS/SSL” for more information.
--stopondisconnect
Specifies that if connections to all of the VoltDB servers are broken, the kafkaloader process will stop. The kafkaloader connects to servers automatically as the topology of the cluster changes.
Normally, if\nall connections are broken, kafkaloader will periodically attempt to reconnect until the servers come\nback online. However, you can use this argument to have the loader process stop when the VoltDB\ncluster becomes unavailable.\n-t, --topic {kafka-topic}\nSpecifies the Kafka topic to read from the Kafka queue.\n--update\nSpecifies that existing records with a matching primary key are updated, rather than being rejected.\nBy default, kafkaloader attempts to create new records. The --update flag lets you load updates to\nexisting records — and create new records where the primary key does not already exist. To use --\nupdate, the table must have a primary key.\n--user {text}\nSpecifies the username to use when connecting to the database. You must specify a username and\npassword if security is enabled for the database.\nExamples\nThe following example starts the kafkaloader to read messages from the voltdb_customer topic on the\nKafka broker quebkr:9092, inserting the resulting records into the CUSTOMER table in the VoltDB cluster\nthat includes the servers dbsvr1, dbsvr2, and dbsvr3. The process will continue, regardless of errors, until\nthe user explicitly ends the process.\n$ kafkaloader --maxerrors=0 customer \\\n --brokers=quebkr:2181 --topic=voltdb_customer \\\n --host=dbsvr1,dbsvr2,dbsvr3 \n364VoltDB CLI Commands\nsqlcmd\nsqlcmd — Starts an interactive command prompt for issuing SQL queries to a running VoltDB database\nSyntax\nsqlcmd [args...]\nDescription\nThe sqlcmd command lets you query a VoltDB database interactively. You can execute SQL statements,\ninvoke stored procedures, or use directives to examine the structure of the database. When sqlcmd starts\nit provides its own command line prompt until you exit the session. When you start the session, you can\noptionally specify one or more database servers to access. By default, sqlcmd accesses the database on\nthe local system via localhost.\nAt the sqlcmd prompt, you have several options:\n•SQL queries — You can enter ad hoc SQL queries that are run against the database and the results\ndisplayed. You must terminate the query with a semi-colon and carriage return.\n•Procedure calls — You can have sqlcmd execute a stored procedure. You identify a procedure call\nwith the exec directive, followed by the procedure class name, the procedure parameters, and a closing\nsemi-colon. For example, the following sqlcmd directive executes the @SystemCatalog system proce-\ndure requesting information about the stored procedures.\n$ sqlcmd\n1> exec @SystemCatalog procedures;\nNote that string values can be entered as plain text or enclosed in single quotation marks. Also, the exec\ndirective must be terminated by a semi-colon.\n•Echo directives — The echo and echoerror directives let you add comments or informational messages\nto the sqlcmd output. Any text following the directive up to and including the line break or carriage\nreturn is repeated verbatim:\n•ECHO [text] — Writes the specified text, as is, to standard output (stdout).\n•ECHOERROR [text] — Writes the specified text, as is, to standard error (stderr).\n•Show, Describe, and Explain directives — The show, describe, and explain directives let you exam-\nine the structure of the schema and user-defined stored procedures. Valid directives are:\n•SHOW CLASSES — Lists the user-defined classes in the database. 
Classes are grouped into proce-\ndures classes (those that can be invoked as a stored procedure) and non-procedure classes (shared\nclasses that cannot themselves be called as stored procedures but can be invoked from within stored\nprocedures).\n•SHOW PROCEDURES — Lists the user-defined, default, and system procedures for the current\ndatabase, including the type and number of arguments for each.\n•SHOW TABLES — Lists the tables in the schema.\n•DESCRIBE {table-name} — Lists the columns of a table, stream, or view.\n365VoltDB CLI Commands\n•EXPLAIN {sql-query} — Displays the execution plan for the specified SQL statement.\n•EXPLAINPROC {procedure-name} — Displays the execution plans for the specified stored proce-\ndure.\n•EXPLAINVIEW {view-name} — Displays the execution plans for the components of the specified\nview.\n•Query statistics directive — The querystats directive lets you select and format the output of the\n@Statistics system procedure using SQL-like syntax. In the directive you specify a SELECT statement\nidentifying the columns you want returned, using FROM STATISTICS(selector, delta-flag) in place of\nthe table name. You can also use the WHERE, ORDER BY, and GROUP BY clauses to filter the results\nas desired. For example, the following directive returns the total number of rows in each table:\n$ sqlcmd \n1> querystats select table_name, sum(tuple_count) from statistics(table,0) group by table_name;\nKnown Limitations\n•Column aliases are not supported.\n•Query must be on a single line.\n•Errors are reported on the console but not returned to the user.\n•Class management directives — The load classes and remove classes directives let you add and\nremove Java classes from the database:\n•LOAD CLASSES —Loads any classes or resource files in the specified JAR file. If a class or resource\nalready exists in the database, it is replaced by the new definition from the JAR file.\n•REMOVE CLASSES — Removes any classes that match the specified class name string. The class\nspecification can include wildcards.\n•Command recall — You can recall previous commands using the up and down arrow keys. Or you\ncan recall a specific command by line number (the command prompt shows the line number) using the\nrecall command. For example:\n$ sqlcmd\n1> select * from votes;\n2> show procedures;\n3> recall 1\nselect * from votes;\nOnce recalled, you can edit the command before reissuing it using typical editing keys, such as the left\nand right arrow keys and backspace and delete.\n•Script files — You can run multiple queries or stored procedures in a single command using the file\ndirective. The file directive takes one or more text files as an argument and executes all of the SQL\nqueries and exec directives in the file(s) as if they were entered interactively. (Do not use control di-\nrectives such as recall and exit in script files.) Separate multiple script files with spaces. Enclose file\nnames that contain spaces with single quotation marks. For example, the first command in the following\nexample processes all of the SQL queries and procedure invocations in the file myscript.sql . 
The\nsecond command processes the SQL queries from two files:\n$ sqlcmd\n1> file myscript.sql;\n2> file yourscript.sql 'their script.sql';\n366VoltDB CLI Commands\nIf the file(s) contain only data definition language (DDL) statements, you can also have the files\nprocessed as a single batch by including the -batch argument:\n$ sqlcmd\n1> file -batch myscript.sql;\nIf a file or set of statements includes both DDL and DML statements, you can still batch process a\ngroup of DDL statements by enclosing the statements in a file -inlinebatch directive and the\nspecified end marker. For example, in the following code the three CREATE PROCEDURE statements\nare processed as a batch:\nload classes myprocs.jar;\nfile -inlinebatch END_OF_BATCH\nCREATE PROCEDURE FROM CLASS procs.AddEmployee;\nCREATE PROCEDURE FROM CLASS procs.ChangeDept;\nCREATE PROCEDURE FROM CLASS procs.PromoteEmployee;\nEND_OF_BATCH\nBatch processing the DDL statements has two effects:\n•Batch processing can significantly improve performance since all of the schema changes are\nprocessed and distributed to the cluster nodes at one time, rather than individually for each statement.\n•The batch operates as a transaction, succeeding or failing as a unit. If any statement fails, all of the\nschema changes are rolled back.\n•Exit — When you are done with your interactive session, enter the exit directive to end the session and\nreturn to the shell prompt.\nTo run a sqlcmd command without starting the interactive prompt, you can pipe the command through\nstandard input to the sqlcmd command. For example:\n$ echo \"select * from contestants;\" | sqlcmd\nIn general, the sqlcmd commands are not case sensitive and must be terminated by a semi-colon. However,\nthe semi-colon is optional for the exit, file, and recall directives. Also, list and quit are supported as\nsynonyms for the show and exit directives, respectively.\nArguments\n--help\nDisplays the sqlcmd help text then returns to the shell prompt.\n--servers= server-id[,...]\nSpecifies the network address of one or more nodes in the database cluster. When specifying an IPv6\naddress, enclose the address in square brackets. By default, sqlcmd attempts to connect to a database\non localhost.\n--port=port-num\nSpecifies the port number to use when connecting to the database servers. All servers must be using\nthe same port number. By default, sqlcmd connects to the standard client port (21212).\n--user=user-id\nSpecifies the username to use for authenticating to the database. The username is required if the\ndatabase has security enabled.\n367VoltDB CLI Commands\n--password= {text}\nSpecifies the password to use when connecting to the database. You must specify a username and\npassword if security is enabled for the database. If you specify a username with the --user argument\nbut not the --password argument, VoltDB prompts for the password. \\\n--credentials= {properties-file}\nSpecifies a file that lists the username and password of the account to use when connecting to a\ndatabase with security enabled. This is useful when writing shell scripts because it avoids having to\nhardcode the password as plain text in the script. The credentials file is interpreted as a Java properties\nfile defining the properties username and password . 
For example:
username: johndoe
password: 4tUn8
Because it is a Java properties file, you must escape certain special characters in the username or password, including the colon or equals sign.
--kerberos= {service-name}
Specifies the use of kerberos authentication when connecting to the database server(s). The service name identifies the Kerberos client service module, as defined by the JAAS login configuration file.
--output-format={csv | fixed | tab}
Specifies the format of the output of query results. Output can be formatted as comma-separated values (csv), fixed monospaced text (fixed), or tab-separated text fields (tab). By default, the output is in fixed monospaced text.
--output-skip-metadata
Specifies that the column headings and other metadata associated with query results are not displayed. By default, the output includes such metadata. However, you can use this argument, along with the --output-format argument, to write just the data itself to an output file.
--query-timeout= time-limit
Specifies a time limit for read-only queries. Any read-only queries that exceed the time limit are canceled and control returned to the user. Specify the time out as an integer number of milliseconds. The default timeout is set in the cluster configuration (or set to 10 seconds by default). Only users with ADMIN privileges can set a sqlcmd timeout longer than the cluster-wide setting.
--ssl[=ssl-config-file]
Specifies the use of TLS encryption when communicating with the server. Only necessary if the cluster is configured to use TLS encryption for the external ports. See Section D, “Using CLI Commands with TLS/SSL” for more information.
Example
The following example demonstrates an sqlcmd session, accessing the voter sample database running on node zeus.
$ sqlcmd --servers=zeus
SQL Command :: zeus:21212
1> select * from contestants;
 1 Edwina Burnam 
 2 Tabatha Gehling 
 3 Kelly Clauss 
 4 Jessie Alloway 
 5 Alana Bregman 
 6 Jessie Eichman 
(6 row(s) affected)
2> select sum(num_votes) as total, contestant_number from 
v_votes_by_contestant_number_State group by contestant_number 
order by total desc;
TOTAL CONTESTANT_NUMBER 
------- ------------------
 757240 1
 630429 6
 442962 5
 390353 4
 384743 2
 375260 3
(6 row(s) affected)
3> exit
$
voltadmin
voltadmin — Performs administrative functions on a VoltDB database.
Syntax
voltadmin dr drop
voltadmin dr reset [--all | --cluster={cluster-id} [--force]]
voltadmin export release --source={source-table} --target={export-target}
voltadmin help [command]
voltadmin inspect
voltadmin jstack [server-id]
voltadmin license {license-file}
voltadmin log4j {log4j-configuration-file}
voltadmin note {text}
voltadmin pause [--wait [--timeout={seconds}]]
voltadmin promote
voltadmin resize [--ignore=disabled_export] [--retry] [--test] [--yes]
voltadmin restore [--skiptables={table-name[,..]}] [--tables={table-name[,..]}]
voltadmin resume
voltadmin save [{directory} {unique-id}] [--format={csv|native}] [--blocking]
 [--skiptables={table-name[,..]}] [--tables={table-name[,..]}]
voltadmin show [license|snapshots]
voltadmin shutdown [--cancel|--force|--save] [--timeout={seconds}]
voltadmin status [--continuous] [--dr] [--json]
voltadmin stop {server-id} [--force]
voltadmin update {configuration-file}
qualifiers:
--credentials={properties-file}
--help
--host={server-id}
--kerberos
--password={text}
--ssl={ssl-config-file}
--user={user-id}
Description
The voltadmin command allows you to perform administrative tasks on a VoltDB database. You specify
the database server to access and, optionally, authentication credentials using arguments to the voltadmin
command. Individual administrative commands may have their own unique arguments as well.
Arguments
The following global arguments are available for all voltadmin commands.
--credentials={properties-file}
Specifies a file that lists the username and password of the account to use when connecting to a
database with security enabled. This is useful when writing shell scripts because it avoids having to
hardcode the password as plain text in the script. The credentials file is interpreted as a Java properties
file defining the properties username and password. For example:
username: johndoe
password: 4tUn8
Because it is a Java properties file, you must escape certain special characters in the username or
password, including the colon or equals sign.
-h, --help
Displays information about how to use a command. The --help flag and the help command perform
the same function.
-H, --host=server-id[:port]
Specifies which database server to connect to. You can specify the server as a network address or
hostname. When specifying an IPv6 address, enclose the address (exclusive of the optional colon and
port number) in square brackets. By default, voltadmin attempts to connect to a database on localhost.
You can optionally specify the port number. If you do not specify a port, voltadmin uses the default
admin port.
--kerberos
Specifies the use of Kerberos authentication when connecting to the database. You must log in to your
Kerberos account using kinit before issuing the voltadmin command with this argument.
-p, --password={text}
Specifies the password to use when connecting to the database. You must specify a username and
password if security is enabled for the database. If you specify a username with the --user argument
but not the --password argument, VoltDB prompts for the password. This is useful when writing shell
scripts because it avoids having to hardcode passwords as plain text in the script.
--ssl[=ssl-config-file]
Specifies the use of TLS encryption when communicating with the server. Only necessary if the cluster
is configured to use TLS encryption for the external ports. See Section D, “Using CLI Commands
with TLS/SSL” for more information.
-u, --user=user-id
Specifies the username to use for authenticating to the database. The username is required if the
database has security enabled.
-v, --verbose
Displays additional information about the specific commands being executed.
Commands
The following are the administrative functions that you can invoke using voltadmin.
help [command]
Displays information about the usage of individual commands or, if you do not specify a command,
summarizes usage information for all commands. The help command and --help qualifier are
synonymous.
dr drop
Removes the current cluster from an XDCR environment. Performing a drop breaks existing DR
connections, deletes pending binary logs and stops the queuing of DR data on the current cluster.
It also tells all other clusters in the XDCR relationship to drop their connection to the current cluster
and remove any associated binary logs for that cluster.
This command will wait until all other clusters respond before returning to the shell prompt. If one
(or more) of the clusters are unreachable, the command will periodically report which clusters it is
waiting for. Be aware that if you CTRL-C out of the command before it returns to the shell prompt,
one or more of the remote clusters will not have received the appropriate message and will not have
cleared their logs for the targeted cluster. In that case, you need to clear that cluster's queues manually
after it comes back online using the dr reset --cluster command.
The dr drop command lets you effectively remove a single cluster — the cluster on which the
command is executed — from a multi-cluster XDCR environment in a single command.
dr reset
Resets the database replication (DR) connection(s) for the database. Performing a reset breaks existing
DR connections, deletes pending binary logs and stops the queuing of DR data on the current cluster.
This command is useful in passive DR for eliminating unnecessary resource usage on a master database
after the replica stops or is promoted. Note, however, that after a reset DR must start over from scratch;
it cannot be restarted where it left off. Similarly, if there are two clusters in an XDCR environment,
you can use dr reset for one cluster to drop the connection to the other cluster.
If you are using multiple XDCR clusters, the dr drop command is the recommended way to remove a
running cluster from the environment. Otherwise, if you use the dr reset command you must choose
between removing the connections to all other clusters or just one cluster using the following options.
You must issue the appropriate command on all applicable clusters:
--all
Resets DR connections and queues to all other clusters on the current cluster. Choose this option
if you want the current cluster to survive and then restart DR from scratch on the other remaining
clusters.
--cluster={remote-cluster-ID}
Drops the connection to just one cluster. Specify the ID of the remote cluster you wish to drop
from the XDCR environment as an argument to the --cluster option. For example, if one cluster
has stopped and you want to remove it from the XDCR environment, you can reset the connections
to that cluster by issuing the dr reset --cluster={id} command on all the remaining clusters. You
must also specify --force when you specify --cluster.
--force
Verifies that you want to drop one cluster from a multi-cluster XDCR environment. There is a
risk, when a cluster fails, that it has not sent the same binary logs to all other clusters. In this
situation, if you drop the one cluster from the XDCR environment, the remaining clusters can
diverge, which is why you must confirm that you really want to drop just one cluster.
Stopping the remote cluster with an orderly shutdown (voltadmin shutdown) ensures that all
binary logs are delivered. So it is then safe to do a dr reset with --cluster and --force. Otherwise,
the recommended approach is to choose one cluster as the source, stop all DR connections from
that cluster, then restart DR from scratch on the remaining clusters.
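As a minimal sketch of the single-cluster case, assuming the departing cluster has a DR cluster ID of 3
(the ID is illustrative) and was stopped with an orderly shutdown, each remaining cluster could drop its
connection to it with:
$ voltadmin dr reset --cluster=3 --force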
However, you can, if you\nchoose, use --force to drop the one cluster if you are sure no divergence has occurred.\nThe --all and --cluster options are mutually exclusive.\nexport release --source= {source-table} --target= {export-target}\nResets any blocked export queues, resuming export at the next available export record. You must\nspecify both of the export release qualifiers:\n-s, --source= {source-table}\nSpecifies the source stream of the table whose queue you want to reset.\n-t, --target= {export-target}\nSpecifies the export target you want to reset.\ninspect\nDisplays information about the software, license, and cluster operating environment. Primarily used\nwhen communicating with customer support.\njstack [ server-id[:port] ]\nSaves the current state of all Java threads on one or more of the cluster nodes. If you specify a server\non the command line, Jstacks are taken for that node only. If you do not specify a server, Jstacks are\nsaved on all nodes of the cluster. The Jstack files are saved in the thread_dumps subfolder under\nthe database root directory. This command is primarily for use when working with VoltDB support\nto debug application or database issues.\nlicense {license-file}\nUpdates the software license for the database. After validating the license matches the current config-\nuration, the license is saved to the database root directory for each node in the cluster.\nlog4j {configuration-file}\nUpdates the logging configuration. You specify the new configuration as a Log4j XML configuration\nfile.\nnote {text}\nWrites the specified text message to the VoltDB log file. When security is enabled, the user must have\nadmin permissions to write to the log file.\npause [ --wait [ --timeout= {seconds} ] ]\nPauses the database, stopping any additional activity on the client port. Normally, pause returns im-\nmediately. However, you can use the --wait flag to have the command wait until all pending transac-\ntions are processed and all database replication (DR) and export queues are flushed. Use of --wait is\nrecommended if you are shutting down the database and do not intend to restart with recover, since\n--wait ensures all associated DR or export data is delivered prior to shutdown.\nSince it is possible that lost connections to external systems or other abnormal conditions can cause\nqueues to hang, the pause --wait command waits for up to two minutes if transactions are pending but\nnot being cleared. After two minutes of inactivity, the command times out and stops waiting, leaving\nthe database in a paused state. You can change the timeout period by using the --timeout (or -t) flag\nand specifying a different timeout period in seconds.\nIf the pause --wait command times out, review any error messages to determine the cause of the\ndelay. Once you correct the problem, you can either reissue the pause --wait command or check the\n@Statistics system procedure results to make sure all pending transactions and queues are clear.\n373VoltDB CLI Commands\npromote\nPromotes a replica database, stopping replication and enabling read/write queries on the client port.\nresize\nBegins the resize process for reducing the size of a K-safe cluster. See Section 9.3.2, “Removing\nNodes with Elastic Scaling” for more information on resizing clusters. The following qualifiers affect\nwhat actions are taken. 
Without qualifiers, the command tests to ensure a resize is possible, reports\nwhich nodes will be removed, and prompts before starting the resize process.\n--ignore=disabled_export\nIgnores any pending data for export targets that are disabled when performing the resize process.\nNormally, resize waits for all export queues to drain before starting the resize process, even if the\ntarget is currently disabled in the database configuration.\n-r, --retry\nRestarts the resize process after an unexpected failure.\n-t, --test\nTests to see if the cluster has enough nodes to perform a resize operation while retaining its K-\nsafety factor. If so, it reports which nodes would be removed during resizing.\n-y, --yes\nSkips the prompt when starting the resize process. This qualifier is useful when including the\nvoltadmin resize command in scripts where you are sure you want to start the process and do\nnot want interactive prompts.\nresume\nResumes normal database operation after a pause.\nsave [ {directory} {unique-ID} ]\nCreates a snapshot containing the current database contents. Snapshot files are saved to each server in\nthe cluster. If you use save without any arguments, the snapshot is saved into the database's snapshots\ndirectory where it can automatically be restored the next time the database starts. If you specify an\nalternate directory and ID, the snapshot files are saved to the specified path using the unique ID as\na file prefix.\nWhen saving into the default snapshots directory, VoltDB automatically performs a full snapshot\nin native mode. The following are additional arguments you can specify when saving to a specific\nlocation and unique ID. (Only the --blocking argument is allowed when saving to the default snapshots\ndirectory.)\n--format={ csv | native }\nSpecifies the format of the snapshot files. The allowable formats are CSV (comma-separated\nvalue) and native formats. Native format snapshots can be used for restoring the database. CSV\nfiles can be used by other utilities (such as spreadsheets or the VoltDB CSV loader) but cannot\nbe restored using the voltadmin restore command.\n--blocking\nSpecifies that the snapshot will block all other transactions until the snapshot is complete. The\nadvantage of blocking snapshots is that once the command completes you know the snapshot is\nfinished. The disadvantage is that the snapshot blocks ongoing use of the database.\nBy default, voltadmin performs non-blocking snapshots so as not to interfere with ongoing data-\nbase operation. However, note that the non-blocking save command only starts the snapshot. You\nmust use show snapshots to determine when the snapshot process is finished if you want to know\nwhen it is safe, for example, to shutdown the database.\n374VoltDB CLI Commands\n--skiptables={ table-name [,...] }\nSpecifies one or more tables to leave out of the snapshot. Separate multiple table names with\ncommas.\n--tables={ table-name [,...] }\nSpecifies what table(s) to include in the snapshot. Only the specified tables will be included.\nSeparate multiple table names with commas.\nrestore {directory} {unique-ID}\nRestores the data from a snapshot to the database. The data is read from a snapshot using the same\nunique ID and directory path that were used when the snapshot was created. 
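For example, a snapshot saved earlier with an explicit directory and unique ID can be restored by passing
the same two values back to the restore command (the path and prefix shown here are purely illustrative):
$ voltadmin save /opt/voltdb/backups nightly
$ voltadmin restore /opt/voltdb/backups nightly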
If no tables exist in the\ndatabase (that is, no schema has been defined) the restore command will also restore the original\nschema, including stored procedure classes, before restoring the data.\nThe following arguments let you selectively include or exclude data from certain tables during the\nrestore operation.\n--skiptables={ table-name [,...] }\nData for the specified tables is not restored. All other tables are restored. Separate multiple table\nnames with commas.\n--tables={ table-name [,...] }\nOnly data for the specified tables is restored. Data for all other tables is ignored. Separate multiple\ntable names with commas.\nNote that if the database is empty (that is, has no existing schema), the full schema from the snapshot\nis always loaded even if you choose not to load the data for certain tables. Also, you can specify either\n--skiptables or --tables but not both on the same command.\nshow license\nDisplays information about the cluster's current license.\nshow snapshots\nDisplays information about up to ten previous snapshots. This command is useful for determining the\nsuccess or failure of snapshots started with the save command.\nstatus\nDisplays information on the state of the cluster, such as the number of nodes and uptime. You can use\nthe following options to customize the content and presentation of the status information:\n--dr\nAdds information about the current status of data replication to the display.\n-j, --json\nOutputs the information in JSON format.\n--continuous\nSpecifies that the information be continuously updated until you interrupt the command (with\nCTRL-C, for example).\nupdate {configuration}\nUpdates the configuration on a running database. There are limitations on what changes can be made\nto the configuration of a running database cluster. Allowable changes include the following:\n•Security settings, including user accounts\n•Import and export settings\n375VoltDB CLI Commands\n•Database replication settings (except the DR cluster ID)\n•Automated snapshots\n•System settings:\nFlush interval\nHeartbeat timeout\nQuery Timeout\nResource Limit — Disk Limit\nResource Limit — Memory Limit\nYou cannot use the update command to change paths, ports, command logging, partition detection,\nor <cluster> attributes (such as K safety or sites per host).\nstop [--force] {server-id}\nStops an individual node in the cluster. The voltadmin stop command can only be used on a K-safe\ncluster and will not intentionally shutdown the database. That is, the command will only stop a node\nif there are enough nodes left for the cluster to remain viable.\nBy default, the stop command waits for all partition and export leadership on the specified node to\nbe redistributed to other nodes in the cluster in an orderly fashion before stopping the node. You\ncan use the --force argument to stop the node immediately. However, if you force the node to stop,\nthe remainder of the cluster must negotiate leadership after the node stops, which can have several\nnegative effects. The advantages of using the default, orderly stop command are:\n•In-flight transactions queued to the stopped node are completed and returned to the client. 
A forced\nstop interrupts these transactions resulting in lost connection and other errors being returned to the\nclients.\n•Stopping the node has reduced impact on the ongoing transactions and workload for the database.\nA forced stop disrupts ongoing transactions while the cluster negotiates the migration of partition\nleadership.\n•Export queues are transitioned correctly, avoiding gaps and potentially lost export data that can\nresult if nodes are interrupted and restarted in quick succession.\nshutdown [ --force | --save | --cancel ] [ --timeout= {seconds} ]\nShuts down the database process on all nodes of the cluster. By default, voltadmin shutdown per-\nforms an orderly shutdown, pausing the database, completing all pending transactions and writing any\nqueued export, import, or DR data to disk before shutting down the database. You can also use one\nof the following arguments to modify the behavior of shutdown:\n--force\nStops the database immediately. If you do not need to save any in-process work, you can use the\n--force argument to stop the database immediately.\n--save\nSpecifies that not only will all data be made durable, all pending DR and export data will be\nsent to the corresponding external systems and a final snapshot will be taken before the cluster\nis shutdown. The resulting snapshot will be used, in place of command logs, the next time the\ndatabase is started with the voltdb start command. Using the final snapshot on startup permits\nchanges not normally allowed by command logs, such as upgrading the VoltDB software.\n--cancel\nCancels a pending shutdown. The shutdown --save command can be blocked if the targets for\npending DR or export are currently unavailable. If this happens, you can do a CTRL-C to interrupt\n376VoltDB CLI Commands\nthe shutdown --save command, but that does not cancel the shutdown itself and your database\nis not operational. The shutdown --cancel command cancels the shutdown operation and returns\nthe database to an operational state.\nSince it is possible that lost connections to external systems or other abnormal conditions can cause\nqueues to hang, the shutdown command (without the --force flag) waits for up to two minutes if\ntransactions are pending but not being cleared. After two minutes of inactivity, the command times\nout, leaving the database in a paused state but not shutdown. You can change the timeout period by\nusing the --timeout (or -t) flag and specifying a different timeout period in seconds.\nIf the shutdown command times out, review any error messages to determine the cause of the delay.\nYou can:\n•Do a shutdown --cancel to cancel the shutdown, correct the problem, then reissue the shutdown\ncommand\n•Do a shutdown --cancel to cancel the shutdown and resume normal database operations\n•Do a shutdown --force to initiate an immediate shutdown\nNote that if you do a shutdown --force after a shutdown --save command, the system will not have\ncreated a final snapshot.\nExamples\nThe following example performs an orderly shutdown.\n$ voltadmin shutdown\nThe next example uses pause and save to create a snapshot of the database contents as a backup before\nshutting down.\n$ voltadmin pause --wait\n$ voltadmin save --blocking ./ mydb \n$ voltadmin shutdown\nThe last example uses the shutdown --save command to create a snapshot of the database contents, similar\nto the previous example. 
However, in this case, the snapshot that is created will be used automatically to
restore the database on the next start command.
$ voltadmin shutdown --save
voltdb
voltdb — Performs management tasks on the current server, such as starting and recovering the database.
Syntax
voltdb collect [args]
voltdb get classes [args]
voltdb get deployment [args]
voltdb get schema [args]
voltdb mask [args] source-configuration-file [new-configuration-file]
voltdb init [args]
voltdb start [args]
Description
The voltdb command performs local management functions on the current system, including:
•Initializing the database root directory and setting configuration options
•Starting the database process
•Collecting log files into a single compressed file
•Retrieving the classes, deployment, or schema from a database root directory
•Hiding passwords in the configuration file
The action that is performed depends on which start action you specify to the voltdb command:
•collect — the collect option collects system and process logs related to the VoltDB database process
on the current system and compresses them into a single file. This command is helpful when reporting
problems to VoltDB support.
•get — the get option retrieves the current configuration, procedure classes, or schema from the database
root directory. The requested item is then written to a file. This command can be used whether the
database is running or not. You can use options to specify the parent of the root directory (--dir), the
name and location of the output file (--output), or both. Note that the get option can only be
used on databases created using init and start.
•mask — the mask option disguises the passwords associated with user accounts in the security section
of the configuration file. The voltdb mask command either writes a new configuration file with
hashed passwords or, if you do not specify an output file, modifies the original input file in place.
•init — the init option initializes the root directory VoltDB uses for storing the configuration, logs, and
other disk-based information (such as snapshots and command logs) for the database process. You only
need to initialize the root directory once. After that, VoltDB manages the content and selects the
appropriate start actions to maintain the database state. If you choose to re-initialize an existing root
directory, you can use the --force argument to delete any previous data.1
•start — the start option starts the database process after the root directory has been initialized. The
actual action that VoltDB takes depends on the current state of the database cluster:
•If this is the first time the database has started, it creates a new database.
•If the database has run before and is configured to use command logs or there is at least one snapshot
in the snapshots directory, the database is restarted and previous data recovered.
•If the cluster is already running and a server is missing (assuming the use of K-safety) the current
node will rejoin the running cluster.
•If the cluster is already running with all servers present, the current node will be added to expand the
size of the cluster — as long as you use the --add argument on the start command.
The voltdb start command uses Java to instantiate the process.
It is possible to customize the Java envi-\nronment, if necessary, by passing command line arguments to Java through the following environment\nvariables:\n•LOG4J_CONFIG_PATH — Specifies an alternate Log4J configuration file.\n•VOLTDB_HEAPMAX — Specifies the maximum heap size for the Java process. Specify the value\nas an integer number of megabytes. By default, the maximum heap size is set to 2048.\n•VOLTDB_OPTS — Specifies all other Java command line arguments. You must include both the\ncommand line flag and argument. For example, this environment variable can be used to specify system\nproperties using the -D flag:\nexport VOLTDB_OPTS=\"-DmyApp.DebugFlag=true\"\nLog Collection (voltdb collect) Arguments\nThe following arguments apply specifically to the collect action.\n-D --dir={directory}\nSpecifies the parent location for the database root directory from which to collect information. The\ndefault, if you do not specify a directory, is the current working directory.\n--days={integer}\nSpecifies the number of days of log files to collect. For example, using --days=1 will collect data\nfrom the last 24 hours. By default, VoltDB collects 14 days (2 weeks) worth of logs.\n--dry-run\nLists the actions that will be taken, including the files that will be collected, but does not actually\nperform the collection or upload.\n--no-prompt\nSpecifies that the process will not prompt for input, such as whether to delete the output file after\nuploading is complete. This argument is useful when starting the collect action from within a script.\n1The init --force command deletes command logs and overflow subfolders within the database root directory. However, to avoid accidentally\ndeleting backups, the snapshots subfolder is renamed rather than deleted. This way, it is possible to restore a snapshot in case of an unintended re-\ninitialization. On the other hand, this means you should periodically check your database root directories and purge any archived snapshots folders\n(named snapshots.nn) that are no longer needed.\n379VoltDB CLI Commands\n--output= {file}\nSpecifies the name and location of the resulting output file. The default output file name starts with\n\"voltdb_collect_\" and includes the current server IP or hostname, with a file extension of \".zip\" saved\nto the current working directory.\n--skip-heap-dump\nSpecifies that the heap dump not be included in the collection. The heap dump is usually significantly\nlarger than the other log files and can be excluded to save space.\nGet Resource (voltdb get) Arguments\nThe following arguments apply specifically to the get classes , get deployment , and get schema actions.\n-D --dir={directory}\nSpecifies the parent location for the database root directory. The default, if you do not specify a\ndirectory, is the current working directory.\n-f, --force\nAllows the command to overwrite an existing file. By default, the get actions will not overwrite ex-\nisting files.\n-o --output= {file-path}\nSpecifies the name and, optionally, location for the resulting output file. The default location is the\ncurrent working directory. The default file depends on the resource being requested:\n•procedures.jar for get classes\n•deployment.xml for get deployment\n•schema.sql for get schema\nInitialization (voltdb init) Arguments\nThe following arguments apply to the voltdb init command.\n-C, --config= {configuration-file}\nSpecifies the location of the database configuration file. 
The configuration file is an XML file that\ndefines the database configuration, including which options to enable when the database starts. See\nAppendix E, Configuration File (deployment.xml) for a complete description of the syntax of the\nconfiguration file.\nThe default, if you do not specify a configuration file, is a default configuration that includes command\nlogging (where available), no K-safety, and eight sites per host.\n-D --dir={directory}\nSpecifies the parent location for the database root directory. The root directory is named voltdb-\nroot and is created if it does not already exist in the specified location. If a voltdbroot directory\ndoes already exist, you must use the --force argument to override any existing data. The default, if\nyou do not specify a directory, is the current working directory.\n-f, --force\nInitializes the database root directory, even if files (such as command logs or snapshots) already exist\nin the specified directory. Initializing the root directory after previously running a database could\noverwrite and therefore erase old command logs. Therefore, VoltDB will not, by default, initialize the\ndatabase if such files exist. If you do not need the files from the previous session, you can use the --\nforce argument to overwrite these files.\n380VoltDB CLI Commands\n-j, --classes= {JAR-file} [, ...]\nSpecifies the location of one or more JAR files containing classes used to declare user-defined stored\nprocedures. The JAR files (and any schema definitions included with the --schema argument) are\nloaded automatically when the database starts. Separate multiple file names with commas. You can\nalso use asterisk (*) as a wildcard character in the file specification. If durability is enabled (through\ncommand logs or a shutdown snapshot) the classes specified on the init command are loaded only the\nfirst time the database starts and the command logs are used for subsequent starts. If no durability is\nprovided, the initialized classes are loaded on every start.\n-l, --license= {license-file}\nSpecifies the location of the license file, which is required when using the VoltDB Enterprise Edition.\nThe argument is ignored when using the community edition.\n-retain={integer}\nSpecifies the maximum number of snapshot directories to save when performing a voltdb init\n--force . When initializing a root directory with --force , VoltDB deletes all previous files in the\ndirectory except the snapshot subfolder, which is renamed snapshots.1 , snapshots.2 , and so on. By\ndefault, VoltDB saves only two older snapshot folders. The --retain argument lets you specify a\ndifferent maximum number of folders to save.\n-s, --schema= {schema-file} [, ...]\nSpecifies the location of one or more files containing database definition language (DDL) statements.\nThe DDL statements (and any classes included with the --classes argument) are loaded automatically\nwhen the database starts. Separate multiple file names with commas. You can also use asterisk (*)\nas a wildcard character in the file specification. If durability is enabled (through command logs or\na shutdown snapshot) the schema specified on the init command is loaded only the first time the\ndatabase starts and the command logs are used for subsequent starts. 
If no durability is provided, the\ninitialized schema is loaded on every start.\nDatabase Startup (voltdb start) Arguments\nThe following arguments apply to the voltdb start command.\n-D --dir={directory}\nSpecifies the parent location for the database root directory. This is the same directory specified on\nthe voltdb init command. (You must initialize the root directory before you can start the database.)\nThe default, if you do not specify a directory, is the current working directory.\n-H, --host= { host-id [,...] }\nSpecifies the network address of one or more nodes in the database cluster. VoltDB selects one of\nthese nodes to coordinate the start of the database or the adding or rejoining of servers. When starting\na database, all nodes must specify the same list of host addresses. Note that once the database starts\nand the cluster is complete, the role of the host node is complete and all nodes become peers.\nWhen rejoining or adding a server to a running cluster, you can specify any node(s) still in the cluster.\nThe host for an add or rejoin operation does not have to be the same node specified when the database\nstarted.\nThe default if you do not specify a host when creating or recovering the database is localhost .\nIn other words, a single node cluster running on the current system. You must specify a host on the\ncommand line when adding or rejoining a node or when starting a cluster.\nIf the host node is using an internal port other than the default (3021), you must specify the port as\npart of the host string, in the format host:port.\n381VoltDB CLI Commands\nWhen used in conjunction with the --missing flag, the first host in the list must be one of the current\nhosts, not one of the missing nodes.\n-c, --count= {number-of-nodes}\nSpecifies the number of nodes in the database cluster.\n--add\nWhen joining a running cluster, specifies that the new node can be \"added\", elastically expanding the\nsize of the cluster. The --add flag only takes affect when a node is joining a complete, running cluster.\nIf the cluster is starting or if a node is missing from a K-safe cluster, the current node will join the\ncluster as normal. But if the cluster is already running and has its full complement of members, you\nmust specify --add if you want to increase the size of the cluster.\n-B, --background\nStarts the server process in the background (as a daemon process).\n-g, --placement-group= {group-name}\nSpecifies the location of the server. When the K-safety value is greater than zero, VoltDB uses this\nargument to assist in rack-aware partitioning. The cluster will attempt to place multiple copies of each\npartition on different nodes to keep them physically as far apart as possible. The physical location is\nspecified by the group-name , which is an alphanumeric name. The names might represent physical\nservers, racks, switches, or anything meaningful to the user to avoid multiple copies failing at the\nsame time.\nTo be effective, placement groups must adhere to the following rules:\n•There must be more than one placement group specified for the cluster\n•The same number of nodes must be included in each placement group\n•The number of partition copies (that is, K+1) must be a multiple of the number of placement groups\nOtherwise, VoltDB issues a warning on startup and there are no guarantees the partitions will be\nevenly distributed.\n--ignore=thp\nFor Linux systems, allows the database to start even if the server is configured to use Transparent\nHuge Pages (THP). 
THP is a known problem for memory-intense applications like VoltDB. So under\nnormal conditions VoltDB will not start if the use of THP is enabled. This flag allows you to ignore\nthat restriction for test purposes. Do not use this flag on production systems.\n--missing={number-of-nodes}\nAllows a K-safe cluster to start without the full complement of nodes. This argument specifies how\nmany nodes are missing from the cluster at startup. For example, if the arguments are --count=5\nand --missing=2, then the database will start once three nodes join the cluster, assuming those nodes\ncan support at least one copy of each partition. Note that use of the --missing option means that\nthe cluster is not fully K-safe until the specified number of missing nodes rejoin the cluster after the\ndatabase starts. Also, the --hosts flag should list currently available hosts, not the missing nodes.\n--pause\nFor the create and recover operations only, starts the database in admin mode. Admin mode stops\napplications from performing write operations to the database through the client interface. This is\nuseful when performing administrative functions such as restoring a snapshot before allowing client\naccess. Once all administrative operations are complete, you can use the voltadmin resume command\nto resume normal operation for the database. If any nodes in the cluster start with the --pause switch,\nthe entire cluster starts paused.\n382VoltDB CLI Commands\n--safemode\nWhen using command logs to recover an existing database that cannot recover under normal circum-\nstances, the --safemode argument recovers the database to the last valid transaction. This argument\nshould only be used when troubleshooting a failed recovery. See the description of safe mode recovery\nin the VoltDB Administrator's Guide for details.\nNetwork Configuration Arguments\nIn addition to the arguments listed above for the voltdb start command, there are additional arguments\nthat specify the network configuration for server ports and interfaces when starting a VoltDB database.\nIn most cases, the default values can and should be accepted for these settings. The exceptions are the\nexternal and internal interfaces that should be specified whenever there are multiple network interfaces\non a single machine.\nYou can also, optionally, specify a unique network interface for individual ports by preceding the port\nnumber with the interface's IP address (or hostname) followed by a colon. Specifying the network interface\nas part of an individual port setting overrides the default interface for that port set by --externalinterface\nor --internalinterface.\nThe network configuration arguments to the voltdb start command are listed below. See the appendix\non server configuration options in the VoltDB Administrator's Guide for more information about network\nconfiguration options.\n--externalinterface= {ip-address}\nSpecifies the default network interface to use for external ports, such as the admin and client ports.\n--internalinterface = {ip-address}\nSpecifies the default network interface to use for internal communication, such as the internal port.\n--publicinterface= {ip-address}\nSpecifies the public network interface. This argument is useful for hosted systems where the internal\nand external interfaces may not be generally reachable from the Internet. 
In which case, specifying\nthe public interface helps the VoltDB Management Center provide publicly accessible links for the\ncluster nodes.\n--drpublic= {ip-address[:port-number]}\nSpecifies the publicly advertised network interface and, optionally, port number for database replica-\ntion (DR) communication. This is the address that is sent from the producer cluster to consumers. This\nargument is useful for hosted systems where the internal interfaces are not reachable from outside the\nhosted environment and the producer cluster must return an externally mapped port as the public DR\ninterface to remote consumers.\n--admin=[ip-address:] {port-number}\nSpecifies the admin port. The --admin flag overrides the admin port setting in the configuration file.\n--client=[ip-address:] {port-number}\nSpecifies the client port.\n--http=[ip-address:] {port-number}\nSpecifies the http port. The --http flag both sets the port number (and optionally the interface) and\nenables the http port, overriding the http setting, if any, in the configuration file.\n--internal= [ip-address:] {port-number}\nSpecifies the internal port used to communicate between cluster nodes.\n383VoltDB CLI Commands\n--replication= [ip-address:] {port-number}\nSpecifies the replication port used for database replication. The --replication flag overrides the repli-\ncation port setting in the configuration file.\n--zookeeper= [ip-address:] {port-number}\nSpecifies the zookeeper port. By default, the zookeeper port is bound to the server's internal interface\n(127.0.0.1).\nExamples\nThe first example shows the commands for initializing and starting a three-node database cluster using a\ncustom configuration file, deploy.xml , and the node zeus as the host.\n$ voltdb init --dir=~/mydb --config=deploy.xml \n$ voltdb start --dir=~/mydb --count=3 --host=zeus\nThe second example takes advantage of the defaults for the host and configuration arguments to initialize\nand start a single-node database in the current directory.\n$ voltdb init\n$ voltdb start\nThe next example shows the use of the --force argument to re-initialize the directory used in the first\nexample, to delete old data and set new configuration options from a different configuration file.\n$ voltdb init --dir=~/mydb --config=newdeploy.xml --force\n384Appendix E. Configuraon File\n(deployment.xml)\nThe configuration file describes the physical configuration of a VoltDB database cluster at runtime, in-\ncluding the number of sites per hosts and K-safety value, among other things. This appendix describes the\nsyntax for each component within the configuration file.\nThe configuration file is a fully-conformant XML file. If you are unfamiliar with XML, see Section E.1,\n“Understanding XML Syntax” for a brief explanation of XML syntax.\nE.1. Understanding XML Syntax\nThe configuration file is a fully-conformant XML file. XML files consist of a series of nested elements\nidentified by beginning and ending \"tags\". The beginning tag is the element name enclosed in angle brack-\nets and the ending tag is the same except that the element name is preceded by a slash. For example:\n<deployment>\n <cluster>\n </cluster>\n</deployment>\nElements can be nested. In the preceding example cluster is a child of the element deployment .\nElements can also have attributes that are specified within the starting tag by the attribute name, an equals\nsign, and its value enclosed in single or double quotes. 
In the following example the kfactor and\nsitesperhost attributes of the cluster element are assigned values of \"1\" and \"12\", respectively.\n<deployment>\n <cluster kfactor=\"1\" sitesperhost=\"12\">\n </cluster>\n</deployment>\nFinally, as a shorthand, elements that do not contain any children can be entered without an ending tag by\nadding the slash to the end of the initial tag. In the following example, the cluster and heartbeat\ntags use this form of shorthand:\n<deployment>\n <cluster kfactor=\"1\" sitesperhost=\"12\"/>\n <heartbeat timeout=\"10\"/>\n</deployment>\nFor complete information about the XML standard and XML syntax, see the official XML site at http://\nwww.w3.org/XML/ .\nE.2. The Structure of the Configuration File\nThe configuration file starts with the XML declaration. After the XML declaration, the root element of the\nconfiguration file is the deployment element. The remainder of the XML document consists of elements\nthat are children of the deployment element.\n385Configuration File (deployment.xml)\nFigure E.1, “Configuration XML Structure” shows the structure of the configuration file. The indentation\nindicates the hierarchical parent-child relationships of the elements and an ellipsis (...) shows where an\nelement may appear multiple times.\n386Configuration File (deployment.xml)\nFigure E.1. Configuration XML Structure<deployment>\n <cluster/>\n <paths>\n <commandlog/>\n <commandlogsnapshot/>\n <exportoverflow/>\n <snapshots/>\n <voltdbroot/>\n </paths>\n <commandlog>\n <frequency/>\n </commandlog>\n <dr>\n <connection/>\n </dr>\n <export>\n <configuration>\n <property/>...\n </configuration>...\n </export>\n <heartbeat/>\n <httpd/>\n <import>\n <configuration>\n <property/>...\n </configuration>...\n </import>\n <partition-detection/>\n <security/>\n <snapshot/>\n <ssl>\n <keystore/>\n <truststore/>\n </ssl>\n <snmp/>\n <systemsettings>\n <elastic/>\n <flushinterval>\n <dr/>\n <export/>\n </flushinterval>\n <procedure/>\n <query/>\n <resourcemonitor>\n <disklimit>\n <feature/>...\n </disklimit>\n <memorylimit/>\n </resourcemonitor>\n <snapshot/>\n <temptables/>\n </systemsettings>\n <topics>\n <broker>\n <property/>...\n </broker>\n <topic/>...\n </topics>\n <users>\n <user/>...\n </users>\n</deployment>\n387Configuration File (deployment.xml)\nTable E.1, “Configuration File Elements and Attributes” provides further detail on the elements, including\ntheir relationships (as child or parent) and the allowable attributes for each.\nTable E.1. 
Configuration File Elements and Attributes\nElement Child of Parent of Attributes\ndeployment*(root element) avro, cluster, com-\nmandlog, dr, export,\nheartbeat, httpd, im-\nport, partition-detec-\ntion, paths, security,\nsnapshot, snmp, ssl,\nsystemsettings, topics,\nusers\navro deployment registry={url}*\nnamespace={text}\nprefix={text}\ncluster*deployment kfactor={int}\nsitesperhost={int}\nheartbeat deployment timeout={int}*\npartition-detection deployment enabled={true|false}\ncommandlog deployment frequency enabled={true|false}\nlogsize={int}\nsynchronous={true|false}\nfrequency commandlog time={int}\ntransactions={int}\ndr deployment connection id={int}*\nrole={master|replica|xdcr}\nconnection dr source={server[,...]}*\nenabled={true|false}\npreferred-source={int}\nssl=[file-path]\nexport deployment configuration\nconfiguration*export property target={text}*\nenabled={true|false}\nexportconnectorclass={class-name}\ntype={file|http|jdbc|kafka|custom}\nproperty configuration name={text}*\nimport deployment configuration\nconfiguration*import property type={kafka|custom}*\nenabled={true|false}\nformat={csv|tsv}\nmodule={text}\npriority={int}\nproperty configuration name={text}\nhttpd deployment enabled={true|false}\npaths deployment commandlog, com-\nmandlogsnapshot,\n388Configuration File (deployment.xml)\nElement Child of Parent of Attributes\ndroverflow, expor-\ntoverflow, snapshots,\nvoltdbroot\ncommandlog paths path={directory-path}*\ncommandlogsnapshot paths path={directory-path}*\ndroverflow paths path={directory-path}*\nexportoverflow paths path={directory-path}*\nsnapshots paths path={directory-path}*\nvoltdbroot paths path={directory-path}*\nsecurity deployment enabled={true|false}\nprovider={hash|kerberos}\nsnapshot deployment enabled={true|false}\nfrequency={int}{s|m|h}\nprefix={text}\nretain={int}\nssl deployment keystore, truststore enabled={true|false}\nexternal={true|false}\ninternal={true|false}\nkeystore*ssl path={file-path}*\npassword={text}*\ntruststore ssl path={file-path}*\npassword={text}\nsnmp deployment target={IP-address}*\nauthkey={text}\nauthprotocol={SHA|MD5|NoAuth}\ncommunity={text}\nenabled={true|false}\nprivacykey={text}\nprivacyprotocol={text}\nusername={text}\nsystemsettings deployment elastic, flushinterval,\npriorities, procedure,\nquery, resourcemoni-\ntor, snapshot, tempta-\nbles\nelastic systemsettings duration={int}\nthroughput={int}\nflushinterval systemsettings dr, export minimum={int}\ndr flushinterval interval={int}\nexport flushinterval interval={int}\npriorities systemsettings dr, snapshot batchsize={int}\nenabled={true|false}\nmaxwait={int}\ndr priorities priority={int}\n389Configuration File (deployment.xml)\nElement Child of Parent of Attributes\nsnapshot priorities priority={int}\nprocedure systemsettings loginfo={int}\ncopyparameters={true|false}\nquery systemsettings timeout={int}*\nresourcemonitor systemsettings disklimit, memo-\nrylimitfrequency={int}\ndisklimit resourcemonitor feature\nfeature disklimit name={text}*\nsize={int[%]}*\nalert={int[%]}\nmemorylimit resourcemonitor size={int[%]}*\nalert={int[%]}\nsnapshot systemsettings priority={int}*\ntemptables systemsettings maxsize={int}*\nthreadpools deployment pool\npool threadpools name={text}*\nsize={text}*\ntopics deployment broker, topic enabled={true|false}\nthreadpool={text}\nbroker topics property\ntopic topics property name={text}*\nallow={role-name[,..]}\nformat={avro|csv|json}\nopaque={true|false}\npriority={int}\nprocedure={text}\nretention={text}\nproperty broker,topic 
name={text}\nusers deployment user\nuser users name={text}*\npassword={text}*\nroles={role-name[,..]}\n*Required\n390Appendix F. VoltDB Datatype\nCompability\nVoltDB supports eleven datatypes. When invoking stored procedures from different programming lan-\nguages or queuing SQL statements within a Java stored procedure, you must use an appropriate lan-\nguage-specific value and datatype for arguments corresponding to placeholders in the query. This appen-\ndix provides the mapping of language-specific datatypes to the corresponding VoltDB datatype.\nIn several cases, there are multiple possible language-specific datatypes that can be used. The following\ntables highlight the best possible matches in bold.\nF.1. Java and VoltDB Datatype Compatibility\nTable F.1, “Java and VoltDB Datatype Compatibility” shows the compatible Java datatypes for each Volt-\nDB datatype when:\n•Calling simple stored procedures defined using the CREATE PROCEDURE AS statement\n•Calling default stored procedures created for each table in the schema\nIn reverse, the table also shows which SQL datatype is used for the arguments and return values of user-\ndefined functions written in Java. The highlighted Java datatype listed in the second column results in the\ncorresponding SQL datatype being accepted or returned at runtime.\nNote that when calling user-defined stored procedures written in Java, you can use additional datatypes,\nincluding arrays and the VoltTable object, as arguments to the stored procedure, as long as the actual\nquery invocations within the stored procedure use the following datatypes. Within the stored procedure,\nwhen queuing SQL statements using the voltdbQueueSql method, implicit type casting is not guaranteed\nso using the highlighted Java type is recommended.\nVoltDB accepts both primitive numeric types (byte, short, int, and so on) and their reference type equiv-\nalents (Byte, Short, Integer, etc.). The reference types can be useful, especially when passing null values,\nwhere you can send a Java null. In most cases when using the primitive types, you must pass the largest\npossible negative value for the type in place of null.\nTable F.1. Java and VoltDB Datatype Compatibility\nSQL Datatype Compatible Java Datatypes Notes\nTINYINT byte/Byte\nshort/Short\nint/Integer\nlong/Long\nBigDecimal\nStringLarger datatypes (short, int, long, and BigDec-\nimal) are valid input types. However, VoltDB\nthrows a runtime error if the value exceeds the al-\nlowable range of a TINYINT.\nString input must be a properly formatted text\nrepresentation of an integer value in the correct\nrange.\nSMALLINT byte/Byte\nshort/Short\nint/Integer\nlong/Long\nBigDecimal\nStringLarger datatypes (int, long, and BigDecimal) are\nvalid input types. However, VoltDB throws a run-\ntime error if the value exceeds the allowable range\nof a SMALLINT.\n391VoltDB Datatype Compatibility\nSQL Datatype Compatible Java Datatypes Notes\nString input must be a properly formatted text\nrepresentation of an integer value in the correct\nrange.\nINTEGER byte/Byte\nshort/Short\nint/Integer\nlong/Long\nBigDecimal\nStringLarger datatypes (long and BigDecimal) are valid\ninput type. 
However, VoltDB throws a runtime er-\nror if the value exceeds the allowable range of an\nINTEGER.\nString input must be a properly formatted text\nrepresentation of an integer value in the correct\nrange.\nBIGINT byte/Byte\nshort/Short\nint/Integer\nlong/Long\nBigDecimal\nStringString input must be a properly formatted text\nrepresentation of an integer value in the correct\nrange.\nFLOAT double/Double\nbyte/Byte\nshort/Short\nint/Integer\nlong/Long\nBigDecimal\nStringBecause of the difference in how numbers are rep-\nresented in the two types, there can be a loss of\nprecision when using BigDecimal as input to a\nFLOAT value.\nString input must be a properly formatted text rep-\nresentation of a floating point value.\nDECIMAL BigDecimal\ndouble/Double\nbyte/Byte\nshort/Short\nint/Integer\nlong/Long\nStringString input must be a properly formatted text rep-\nresentation of a decimal number.\nGEOGRAPHY (none) Geospatial input should be converted from Well\nKnown Text (WKT) to a VoltDB native for-\nmat either using the GeographyValue.fromWK-\nT() method or by passing a String and using the\nPOLYGONFROMTEXT function within the SQL\nstatement.\nGEOGRAPHY_POINT (none) Geospatial input should be converted from Well\nKnown Text (WKT) to a VoltDB native format\neither using the GeographyPointValue.fromWK-\nT() method or by passing a String and using the\nPOINTFROMTEXT function within the SQL\nstatement.\nVARCHAR() String\nbyte[]\nbyte/Byte\nshort/Short\nint/Integer\nlong/Long\nBigDecimalByte arrays are interpreted as UTF-8 encoded\nstring values. String objects can use other encod-\nings.\nNumeric and timestamp values are converted to\ntheir string representation. For example, the dou-\n392VoltDB Datatype Compatibility\nSQL Datatype Compatible Java Datatypes Notes\nVoltDB TimestampType ble value 13.25 is interpreted as \"13.25\" when\nconverted to a VARCHAR.\nVARBINARY() String\nbyte[]\nByteBufferString input is interpreted as a hex-encoded binary\nvalue.\nFor ByteBuffers, the starting position and limit\nproperties are ignored and the entire buffer is in-\nterpreted as a byte array starting in position zero.\nTIMESTAMP VoltDB TimestampType\nint/Integer\nlong/Long\nStringFor String variables, the text must be formatted as\neither YYYY-MM-DD hh.mm.ss.nnnnnn or\njust the date portion YYYY-MM-DD .\n393Appendix G. System Procedures\nVoltDB provides system procedures that perform system-wide administrative functions. You can invoke\nsystem procedures interactively using the sqlcmd utility, or you can invoke them programmatically like\nother stored procedures, using the VoltDB client method callProcedure.\nThis appendix describes the following system procedures.\n•@AdHoc\n•@Explain\n•@ExplainProc\n•@ExplainView\n•@GetPartitionKeys\n•@Note\n•@Pause\n•@Ping\n•@Promote\n•@QueryStats\n•@Quiesce\n•@Resume\n•@Shutdown\n•@SnapshotDelete\n•@SnapshotRestore\n•@SnapshotSave\n•@SnapshotScan\n•@Statistics\n•@StopNode\n•@SwapTables\n•@SystemCatalog\n•@SystemInformation\n•@UpdateApplicationCatalog\n•@UpdateClasses\n•@UpdateLogging\n394System Procedures\n@AdHoc\n@AdHoc — Executes an SQL statement specified at runtime.\nSyntax\n@AdHoc String SQL-statement [, statement-parameter ... ]\nDescription\nThe @AdHoc system procedure lets you perform arbitrary SQL statements on a running VoltDB database.\nYou can execute multiple SQL statements — either queries or data definition language (DDL) statements\n— in a single call to @AdHoc by separating the individual statements with semicolons. 
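As a minimal sketch from the sqlcmd prompt, where the entire quoted string is passed to @AdHoc as a
single parameter (the table name and values are purely illustrative):
$ sqlcmd
1> exec @AdHoc 'INSERT INTO votes VALUES (1); INSERT INTO votes VALUES (2)';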
When you do this,
the statements are performed as a single transaction. That is, the statements all succeed as a group or they
all roll back if any of them fail. You cannot mix SQL queries and DDL in a single @AdHoc call.
You can use question marks in the SQL statement as placeholders that are replaced by parameters you
provide as additional arguments to the call. For example:
sql = "SELECT * FROM Products WHERE partnumber=? AND vendor=?;"
results = client.callProcedure("@AdHoc",sql, productid, vendorid);
Performance of ad hoc queries is optimized, where possible. However, it is important to note that ad hoc
queries are not pre-compiled, like queries in stored procedures. Therefore, use of stored procedures is
recommended over @AdHoc for frequent, repetitive, or performance-sensitive queries.
Return Values
Returns one VoltTable for each statement, with as many rows as there are records returned by the statement.
The column names and datatypes match the names and datatypes of the fields returned by the query.
Examples
The following program example uses @AdHoc to execute an SQL SELECT statement and display the
number of reservations for a specific customer in the flight reservation database.
try {
    VoltTable[] results = client.callProcedure("@AdHoc",
        "SELECT COUNT(*) FROM RESERVATION " +
        "WHERE CUSTOMERID=" + custid).getResults();
    System.out.printf("%d reservations found.\n",
        results[0].fetchRow(0).getLong(0));
}
catch (Exception e) {
    e.printStackTrace();
}
Note that you do not need to explicitly invoke @AdHoc when using sqlcmd. You can type your statement
directly into the sqlcmd prompt, like so:
$ sqlcmd
1> SELECT COUNT(*) FROM RESERVATION WHERE CUSTOMERID=12345;
@Explain
@Explain — Returns the execution plan for the specified SQL query.
Syntax
@Explain String SQL-statement
Description
The @Explain system procedure evaluates the specified SQL query and returns the resulting execution
plan. Execution, or explain, plans describe how VoltDB expects to execute the query at runtime, including
what indexes are used, the order the tables are joined, and so on. Execution plans are useful for identifying
performance issues in query design. See the chapter on execution plans in the VoltDB Guide to
Performance and Customization for information on how to interpret the plans.
Return Values
Returns one VoltTable with one row and one column.
Name Datatype Description
EXECUTION_PLAN VARCHAR The execution plan as text.
Examples
The following program example uses @Explain to evaluate an ad hoc SQL SELECT statement against
the voter sample application.
try {
    String query = "SELECT COUNT(*) FROM CONTESTANTS;";
    VoltTable[] results = client.callProcedure("@Explain",
        query).getResults();
    System.out.printf("Query: %s\nPlan:\n%s",
        query, results[0].fetchRow(0).getString(0));
}
catch (Exception e) {
    e.printStackTrace();
}
In the sqlcmd utility, the "explain" command is a shortcut for "exec @Explain". So the following two
commands are equivalent:
$ sqlcmd
1> exec @Explain 'SELECT COUNT(*) FROM CONTESTANTS';
2> explain SELECT COUNT(*) FROM CONTESTANTS;
@ExplainProc
@ExplainProc — Returns the execution plans for all SQL queries in the specified stored procedure.
Syntax
@ExplainProc String procedure-name
Description
The @ExplainProc system procedure returns the execution plans for all of the SQL queries within the
specified stored procedure.
Execution, or explain, plans describe how VoltDB expects to execute the queries at runtime, including what indexes are used, the order the tables are joined, and so on. Execution plans are useful for identifying performance issues in query and stored procedure design. See the chapter on execution plans in the VoltDB Guide to Performance and Customization for information on how to interpret the plans.
Return Values
Returns one VoltTable with one row for each query in the stored procedure.
Name Datatype Description
SQL_STATEMENT VARCHAR The SQL query.
EXECUTION_PLAN VARCHAR The execution plan as text.
Examples
The following example uses @ExplainProc to evaluate the execution plans associated with the ContestantWinningStates stored procedure in the voter sample application.
try {
    VoltTable[] results = client.callProcedure("@ExplainProc",
        "ContestantWinningStates").getResults();
    results[0].resetRowPosition();
    while (results[0].advanceRow()) {
        System.out.printf("Query: %s\nPlan:\n%s",
            results[0].getString(0), results[0].getString(1));
    }
}
catch (Exception e) {
    e.printStackTrace();
}
In the sqlcmd utility, the "explainproc" command is a shortcut for "exec @ExplainProc". So the following two commands are equivalent:
$ sqlcmd
1> exec @ExplainProc 'ContestantWinningStates';
2> explainproc ContestantWinningStates;
@ExplainView
@ExplainView — Returns the execution plans for the components of the specified view.
Syntax
@ExplainView String view-name
Description
The @ExplainView system procedure returns the execution plans for certain components of the specified view. Execution plans describe how VoltDB expects to calculate the component values as the referenced tables or streams are updated. The plans include what indexes are used, the order the tables are joined, and so on. Execution plans are useful for identifying performance issues in the design of the view statement. See the chapter on execution plans in the VoltDB Guide to Performance and Customization for information on how to interpret the plans.
For views, execution plans are listed for the calculation of MIN() and MAX() functions and multi-table joins only. For simple views — that is, views on a single table with aggregate functions other than MIN() or MAX() — the system procedure returns no rows.
Return Values
Returns one VoltTable with one row for each component of the view.
Name Datatype Description
TASK VARCHAR The function or join statement.
EXECUTION_PLAN VARCHAR The execution plan as text.
Examples
The following example uses @ExplainView to evaluate the execution plans associated with a view that joins two tables.
try {
    VoltTable[] results = client.callProcedure("@ExplainView",
        "stats_by_city_and_state").getResults();
    results[0].resetRowPosition();
    while (results[0].advanceRow()) {
        System.out.printf("Task: %s\nPlan:\n%s",
            results[0].getString(0), results[0].getString(1));
    }
}
catch (Exception e) {
    e.printStackTrace();
}
In the sqlcmd utility, the "explainview" command is a shortcut for "exec @ExplainView".
So the following two commands are equivalent:
$ sqlcmd
1> exec @ExplainView 'stats_by_city_and_state';
2> explainview stats_by_city_and_state;
@GetPartitionKeys
@GetPartitionKeys — Returns a list of partition values, one for every partition in the database.
Syntax
@GetPartitionKeys String datatype
Description
The @GetPartitionKeys system procedure returns a set of partition values that you can use to reach every partition in the database. This procedure is useful when you want to run a stored procedure in every partition but you do not want to use a multi-partition procedure. By running multiple single-partition procedures, you avoid the impact on latency and throughput that can result from a multi-partition procedure. This is particularly true for longer running procedures. Using multiple, smaller procedures can also help for queries that modify large volumes of data, such as large deletes.
When you call @GetPartitionKeys, you specify the datatype of the keys to return as the procedure's argument. You specify the datatype as a case-insensitive string. Valid options are "INTEGER", "STRING", and "VARCHAR" (where "STRING" and "VARCHAR" are synonyms).
Note that the results of the system procedure are valid at the time they are generated. If the cluster is static (that is, no nodes are being added and any rebalancing is complete), the results remain valid until the next elastic event. However, during rebalancing, the distribution of partitions is likely to change. So it is a good idea to call @GetPartitionKeys once to get the keys, act on them, then call the system procedure again to verify that the partitions have not changed.
Return Values
Returns one VoltTable with a row for every unique partition in the cluster.
Name Datatype Description
PARTITION_ID INTEGER The numeric ID of the partition.
PARTITION_KEY INTEGER or STRING A valid partition key for the partition. The datatype of the key matches the type requested in the procedure call.
Examples
The following example shows the use of sqlcmd to get integer key values from @GetPartitionKeys:
$ sqlcmd
1> exec @GetPartitionKeys integer;
The next example shows a Java program using @GetPartitionKeys to execute a stored procedure to clear out old records, one partition at a time.
VoltTable[] results = client.callProcedure("@GetPartitionKeys",
    "INTEGER").getResults();
VoltTable keys = results[0];
for (int k=0;k<keys.getRowCount();k++) {
    long key = keys.fetchRow(k).getLong(1);
    client.callProcedure("PurgeOldData", key);
}
@Note
@Note — Writes a message into the VoltDB log file.
Syntax
@Note String message
Description
The @Note system procedure lets you write arbitrary text into the VoltDB log file as an INFO level message. Adding messages to the log file can be useful when debugging issues with your application. For example, you can write a message when a new process starts and again when it ends, to see if it generates any unusual error messages between the two notes. You can also use this procedure to document administrative actions taken on the system, such as updating the configuration or changing the schema.
When security is enabled, the process or user must have admin permissions to be able to write to the log file.
Otherwise, the procedure call returns an error indicating you do not have permission to complete the\nrequest.\nThe @Note system procedure lets you write to the log from within your application. To perform the same\ntask interactively, you can execute @Note from the sqlcmd prompt or by using the voltadmin note com-\nmand.\nReturn Values\nReturns one VoltTable with one row\nName Datatype Description\nSTATUS BIGINT Always returns the value zero (0) indicating success.\nExample\nThe following program example uses @Note to write a message into the log file before and after perform-\ning a large series of delete operations.\nclient.callProcedure(\"@Note\", \"Starting database cleanup...\");\n /* Code to delete obsolete records */\n [ . . . ]\nclient.callProcedure(\"@Note\", \"Database cleanup completed.\");\nThe following example uses the voltadmin note command to perform the same function when running a\ncleanup program interactively. (Note the use of single quotes to enclose the message text to avoid punc-\ntuation being accidentally interpreted by the command shell).\n$ voltadmin note 'Starting database cleanup...'\n$ java MyCleanupProgram\n$ voltadmin note 'Database cleanup completed.'\n403System Procedures\n@Pause\n@Pause — Initiates read-only mode on the cluster.\nSyntax\n@Pause\nDescription\nThe @Pause system procedure initiates admin mode on the cluster. Admin mode puts the database into\nread-only mode and ensures no further changes to the database can be made through the client port when\nperforming sensitive administrative operations, such as taking a snapshot before shutting down.\nWhile in admin mode, any write transactions on the client port are rejected and return an error status. Read-\nonly transactions, including system procedures, are allowed. However, write transactions such as inserts,\ndeletes, or schema changes are only allowed through the admin port.\nSeveral important points to consider concerning @Pause are:\n•@Pause must be called through the admin port, not the standard client port.\n•Although write transactions on the client port are rejected in admin mode, existing connections from\nclient applications are not removed.\n•To return to normal database operation, you must call the system procedure @Resume on the admin port.\nReturn Values\nReturns one VoltTable with one row.\nName Datatype Description\nSTATUS BIGINT Always returns the value zero (0) indicating success.\nExamples\nIt is possible to call @Pause using the sqlcmd utility. However, you must explicitly connect to the admin\nport when starting sqlcmd to do this. Also, it is often easier to use the voltadmin utility, which connects\nto the admin port by default. 
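The procedure can also be invoked programmatically once the client is connected to the admin port. The following sketch is illustrative only: it assumes the default admin port (21211), and the server name and credentials shown are placeholders rather than values taken from the examples in this section.
ClientConfig config = new ClientConfig("adminuser", "adminpassword");
Client adminClient = ClientFactory.createClient(config);
try {
    // Connect to the admin port (21211 by default), not the standard client port.
    adminClient.createConnection("voltsvr1", 21211);
    adminClient.callProcedure("@Pause");
}
catch (Exception e) {
    e.printStackTrace();
}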
For example, the following commands demonstrate pausing and resuming\nthe database using both sqlcmd and voltadmin :\n$ sqlcmd --port=21211\n1> exec @Pause;\n2> exec @Resume;\n$ voltadmin pause\n$ voltadmin resume\nThe following program example, if called through the admin port, initiates admin mode on the database\ncluster.\nclient.callProcedure(\"@Pause\");\n404System Procedures\n@Ping\n@Ping — Indicates whether the database is currently running.\nSyntax\n@Ping\nDescription\nThe @Ping system procedure returns a value of zero (0) indicating that the database is up and running.\nThe system procedure does not respond until the database completes its startup process.\nThe @Ping system procedure is a lightweight procedure and does not require any interaction between\ncluster nodes, which makes it a better choice than other system procedures (such as @Statistics) if all you\nneed to do is check if the database is running.\nReturn Values\nReturns one VoltTable with one row\nName Datatype Description\nSTATUS BIGINT Always returns the value zero (0) indicating success.\nExamples\nThe following program fragment calls @Ping to determine if the database is still available. For example,\nif the client application has not sent any data for awhile, it can check to see if the server is running before\nsending any new transactions. Note that you can either check the response value or the status value to\ndetermine if the call succeeded or not.\nClientResponse response = client.callProcedure(\"@Ping\");\nif (response.getStatus() == ClientResponse.SUCCESS) {\n /* We can continue */\n} else {\n /* Server is not ready */\n}\n405System Procedures\n@Promote\n@Promote — Promotes a replica database to normal operation.\nSyntax\n@Promote\nDescription\nThe @Promote system procedure promotes a replica database to normal operation. During database repli-\ncation, the replica database only accepts input from the master database. If, for any reason, the master\ndatabase fails and replication stops, you can use @Promote to change the replica database from a replica\nto a normal database. When you invoke the @Promote system procedure, the replica exits read-only mode\nand becomes a fully operational VoltDB database that can receive and execute both read-only and read/\nwrite queries.\nNote that once a database is promoted, it cannot return to its original role as the receiving end of database\nreplication without first stopping and reinitializing the database as a replica. If the database is not a replica,\ninvoking @Promote returns an error.\nReturn Values\nReturns one VoltTable with one row.\nName Datatype Description\nSTATUS BIGINT Always returns the value zero (0) indicating success.\nExamples\nThe following programming example promotes a database cluster.\nclient.callProcedure(\"@Promote\");\nIt is also possible to promote a replica database using sqlcmd or the voltadmin promote command. The\nfollowing commands are equivalent:\n$ sqlcmd\n1> exec @Promote;\n$ voltadmin promote\n406System Procedures\n@QueryStats\n@QueryStats — Queries statistics like a SQL table.\nSyntax\n@QueryStats String query-statement\nDescription\nThe @QueryStats system procedure lets you query the results of the @Statistics system procedure as if it\nwere a database table, filtering, aggregating, and ordering the results as you wish. 
You specify the query\nin a SQL-like statement as an argument to the procedure.\nThe query string is formatted as a SELECT statement, using the statistics result columns as selection ex-\npressions and FROM STATISTICS(selector,delta-flag) as the table specifier. For example, the following\nquery string returns the names of all export targets and their associated tables:\nSELECT target, source from STATISTICS(EXPORT,0);\nYou can also use standard SQL functions and clauses such as WHERE, GROUP BY, and ORDER BY\nto filter, aggregate, and re-order the output. For example, the following query reports the total number of\npending rows for each currently active export target, sorted in descending order:\nSELECT target, SUM(tuple_pending) from STATISTICS(EXPORT,0)\n WHERE active = 'TRUE' GROUP BY target ORDER BY SUM(tuple_pending) DESC;\nNote that although the query string is SQL like, it is not a true SQL statement and not all SQL expres-\nsions are supported. For instance, you cannot use complex arithmetic expressions or all forms of joins or\nsubclauses. However, you can join multiple results as long as you separate the \"tables\" with commas and\nspecify the join constraints in the WHERE clause. For example, the following example joins information\nabout transaction invocations with the resulting output size per host and connection:\nSELECT a.hostname, a.connection_id, \n SUM(a.invocations), SUM(b.bytes_written)\n from STATISTICS(initiator,0) AS a, STATISTICS(iostats,0) as b\n WHERE a.connection_id = b.connection_id \n GROUP BY a.hostname, a.connection_id;\nReturn Values\nReturns one VoltTable with the results of the query. The name, number, and datatype of the columns\ncorrespond to the columns in the query. The number of rows matches the number of @Statistics results\nmatching the query.\nExamples\nThe following program example uses @QueryStats to determine how many clients are connected to each\nhost of the cluster.\ntry {\n String query = \"SELECT hostname, count(*)\" +\n407System Procedures\n \" from statistics(liveclients,0) group by hostname;\"; \n VoltTable[] results = client.callProcedure(\"@QueryStats\",\n query ).getResults();\n return results;\n}\ncatch (Exception e) {\n e.printStackTrace();\n}\nIn the sqlcmd utility, you can use the querystats directive as a short cut for invoking the @QueryStats\nsystem procedure.\n$ sqlcmd\n1> querystats SELECT hostname,count(*) from statistics(liveclients,0) group by hostname;\n408System Procedures\n@Quiesce\n@Quiesce — Waits for all queued export and DR data to be processed or saved to disk\nSyntax\n@Quiesce\nDescription\nThe @Quiesce system procedure waits for any pending export or database replication (DR) data to be\ncleared before returning to the calling application. @Quiesce waits for export data to either be written to\nthe export connector or to the export overflow on disk. Similarly, it waits for pending DR data to be written\nto DR overflow then does an fsync to ensure all disk I/O is completed. 
Note that a graceful shutdown\n(using the voltadmin shutdown command without the --force argument) performs a quiesce as part of\nthe shutdown process.\nIf export and DR are not enabled, the procedure returns immediately.\nReturn Values\nReturns one VoltTable with one row.\nName Datatype Description\nSTATUS BIGINT Always returns the value zero (0) indicating success.\nExamples\nThe following example calls @Quiesce using sqlcmd:\n$ sqlcmd\n1> exec @Quiesce;\nThe following program example uses drain and @Quiesce to complete any asynchronous transactions and\nclear the export and DR queues before shutting down the database.\n // Complete all outstanding activities\ntry {\n client.drain();\n client.callProcedure(\"@Quiesce\");\n}\ncatch (Exception e) {\n e.printStackTrace();\n}\n // Shutdown the database.\ntry {\n client.callProcedure(\"@Shutdown\");\n}\n // We expect an exception when the connection drops.\n // Report any other exception.\n409System Procedures\ncatch (org.voltdb.client.ProcCallException e) { }\ncatch (Exception e) { e.printStackTrace(); }\n410System Procedures\n@Resume\n@Resume — Returns a paused database to normal operating mode.\nSyntax\n@Resume\nDescription\nThe @Resume system procedure switches all nodes in a database cluster from admin mode to normal\noperating mode. In other words, @Resume is the opposite of @Pause.\nAfter calling this procedure, the cluster returns to accepting read/write ad hoc queries and stored procedure\ninvocations from clients connected to the standard client port.\n@Resume must be invoked from a connection to the admin port.\nReturn Values\nReturns one VoltTable with one row.\nName Datatype Description\nSTATUS BIGINT Always returns the value zero (0) indicating success.\nExamples\nYou can call @Resume using the sqlcmd utility. However, you must explicitly connect to the admin port\nwhen starting sqlcmd to do this. It is often easier to use the voltadmin resume command, which connects\nto the admin port by default. For example, the following commands are equivalent:\n$ sqlcmd --port=21211\n1> exec @Resume;\n$ voltadmin resume\nThe following program example uses @Resume to return the cluster to normal operation.\nclient.callProcedure(\"@Resume\");\n411System Procedures\n@Shutdown\n@Shutdown — Shuts down the database.\nSyntax\n@Shutdown\nDescription\nThe @Shutdown system procedure performs an immediate shut down of a VoltDB database on all nodes\nof the cluster.\nNote\nThe @Shutdown system procedure does not wait for running transactions to complete or queued\ndata to be written to disk before stopping the database process, which can result in loss of data.\nTherefore, using the voltadmin shutdown command to perform an orderly shutdown, making\nall data durable, is the recommended method for stopping a VoltDB database.\nNote that once the database shuts down, the client connection is lost and the calling program cannot make\nany further requests to the server.\nExamples\nThe first example shows the recommended way to shutdown a VoltDB database, using the voltadmin\nshutdown command to perform an orderly shutdown:\n$ voltadmin shutdown\nThe next example shows calling @Shutdown from sqlcmd. This is equivalent to the voltdb shutdown\n--force command:\n$ sqlcmd\n1> exec @Shutdown;\nThe following program example uses @Shutdown to stop the database cluster programmatically. 
Note\nthe use of catch to separate out a VoltDB call procedure exception (which is expected) from any other\nexception.\ntry {\n client.callProcedure(\"@Shutdown\"); \n}\n // we expect an exception when the connection drops.\ncatch (org.voltdb.client.ProcCallException e) {\n System.out.println(\"Database shutdown initiated.\");\n}\n // report any other exception.\ncatch (Exception e) {\n e.printStackTrace();\n}\n412System Procedures\n@SnapshotDelete\n@SnapshotDelete — Deletes one or more native snapshots.\nSyntax\n@SnapshotDelete String[] directory-path s, String[] Unique-IDs\nDescription\nThe @SnapshotDelete system procedure deletes native snapshots from the database cluster. This is a clus-\nter-wide operation and a single invocation will remove the snapshot files from all of the nodes.\nThe procedure takes two parameters: a String array of directory paths and a String array of unique IDs\n(prefixes).\nThe two arrays are read as a series of value pairs, so that the first element of the directory path array and\nthe first element of the unique ID array will be used to identify the first snapshot to delete. The second\nelement of each array will identify the second snapshot to delete. And so on.\n@SnapshotDelete can delete native format snapshots only. The procedure cannot delete CSV format snap-\nshots.\nReturn Values\nReturns one VoltTable with a row for every snapshot file affected by the operation.\nName Datatype Description\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nPATH STRING The directory path where the snapshot file resides.\nNONCE STRING The unique identifier for the snapshot.\nNAME STRING The file name.\nSIZE BIGINT The total size, in bytes, of the file.\nDELETED STRING String value indicating whether the file was successfully\ndeleted (\"TRUE\") or not (\"FALSE\").\nRESULT STRING String value indicating the success (\"SUCCESS\") or failure\n(\"FAILURE\") of the request.\nERR_MSG STRING If the result is FAILURE, this column contains a message\nexplaining the cause of the failure.\nExample\nThe following example uses @SnapshotScan to identify all of the snapshots in the directory /tmp/volt-\ndb/backup/ . This information is then used by @SnapshotDelete to delete those snapshots.\ntry {\n results = client.callProcedure(\"@SnapshotScan\",\n413System Procedures\n \"/tmp/voltdb/backup/\").getResults();\n}\ncatch (Exception e) { e.printStackTrace(); }\nVoltTable table = results[0];\nint numofsnapshots = table.getRowCount();\nint i = 0;\nif (numofsnapshots > 0) {\n String[] paths = new String[numofsnapshots];\n String[] nonces = new String[numofsnapshots];\n for (i=0;i<numofsnapshots;i++) { paths[i] = \"/etc/voltdb/backup/\"; }\n table.resetRowPosition();\n i = 0;\n while (table.advanceRow()) { \n nonces[i] = table.getString(\"NONCE\");\n i++; \n } \n try {\n client.callProcedure(\"@SnapshotDelete\",paths,nonces);\n }\n catch (Exception e) { e.printStackTrace(); }\n}\n414System Procedures\n@SnapshotRestore\n@SnapshotRestore — Restores a database from disk using a native format snapshot.\nSyntax\n@SnapshotRestore String directory-path , String unique-ID\n@SnapshotRestore String json-encoded-options\nDescription\nThe @SnapshotRestore system procedure restores a previously saved database from disk to memory. The\nsnapshot must be in native format. 
(You cannot restore a CSV format snapshot using @SnapshotRestore.)\nThe restore request is propagated to all nodes of the cluster, so a single call to @SnapshotRestore will\nrestore the entire database cluster.\nIf the database is empty — that is, there is no data or schema defined in the database — the restore operation\nrestores the schema and the requested data. If a schema is already defined, the restore operation restores\ndata only for those tables defined in the current schema.\nThere are two forms of the @SnapshotRestroe procedure, as described below. See Chapter 13, Saving &\nRestoring a VoltDB Database for more information about saving and restoring VoltDB databases.\nIndividual Arguments\nWhen you specify the arguments as individual parameters, you must specify two arguments:\n1.The directory path where the snapshot files are stored\n2.An identifier that uniquely identifies the snapshot\nJSON-Encoded Arguments\nWhen you specify the arguments as a JSON-encoded string, you can specify not only what snapshot to\nuse, but what data to restore from that snapshot. Table G.1, “@SnapshotRestoreOptions” describes the\npossible options when restoring a snapshot using JSON-encoded arguments.\nTable G.1. @SnapshotRestoreOptions\nOption Description\npath Specifies the path where the snapshot files are stored.\nnonce Specifies the unique identifier for the snapshot.\nskiptables Specifies tables to leave out when restoring the snapshot. Use of tables or skiptables\nallows you to restore part of a snapshot. Specify the list of tables as a JSON array.\nFor example, the following JSON argument excludes the Areacode and Country\ntables from the restore operation:\n\"skiptables\":[\"areacode\",\"country\"]\ntables Specifies tables to include when restoring the snapshot. Use of tables or skiptables\nallows you to restore part of a snapshot. Specify the list of tables as a JSON array. For\n415System Procedures\nOption Description\nexample, the following JSON argument includes only the Employee and Company\ntables in the restore operation:\n\"tables\":[\"employee\",\"company\"]\nFor example, the JSON-encoded arguments to restore the tables Employee and Company from the \"mydb\"\nsnapshot in the /tmp directory is the following:\n{path:\"/tmp\",nonce:\"mydb\",tables:[\"employee\",\"company\"]}\nReturn Values\nReturns one VoltTable with a row for every table restored at each execution site.\nName Datatype Description\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the host node.\nTABLE STRING The name of the table being restored.\nPARTITION_ID INTEGER The numeric ID for the logical partition that this site rep-\nresents. 
When using a K value greater than zero, there are\nmultiple copies of each logical partition.\nRESULT STRING String value indicating the success (\"SUCCESS\") or failure\n(\"FAILURE\") of the request.\nERR_MSG STRING If the result is FAILURE, this column contains a message\nexplaining the cause of the failure.\nExamples\nThe following example uses @SnapshotRestore to restore previously saved database content from the path\n/tmp/voltdb/backup/ using the unique identifier flight.\n$ sqlcmd\n1> exec @SnapshotRestore '/tmp/voltdb/backup/', 'flight';\nAlternately, you can use the voltadmin restore command to perform the same function:\n$ voltadmin restore /tmp/voltdb/backup/ flight\nSince there are a number of situations that impact what data is restored, it is a good idea to review the return\nvalues to see what tables and partitions were affected. In the following program example, the contents of\nthe VoltTable array is written to standard output so the operator can confirm that the restore completed\nas expected.\nVoltTable[] results = null;\ntry {\n results = client.callProcedure(\"@SnapshotRestore\",\n \"/tmp/voltdb/backup/\",\n \"flight\").getResults();\n416System Procedures\n}\ncatch (Exception e) {\n e.printStackTrace();\n}\nfor (int t=0; t<results.length; t++) {\n VoltTable table = results[t];\n for (int r=0;r<table.getRowCount();r++) {\n VoltTableRow row = table.fetchRow(r);\n System.out.printf(\"Node %d Site %d restoring \" +\n \"table %s partition %d.\\n\",\n row.getLong(\"HOST_ID\"), row.getLong(\"SITE_ID\"),\n row.getString(\"TABLE\"),row.getLong(\"PARTITION\"));\n }\n}\n417System Procedures\n@SnapshotSave\n@SnapshotSave — Saves the current database contents to disk.\nSyntax\n@SnapshotSave String directory-path , String unique-ID , Integer blocking-flag\n@SnapshotSave String json-encoded-options\n@SnapshotSave\nDescription\nThe @SnapshotSave system procedure saves the contents of the current in-memory database to disk. Each\nnode of the database cluster saves its portion of the database locally.\nThere are three forms of the @SnapshotSave stored procedure: a procedure call with individual argument\nparameters, with all arguments in a single JSON-encoded string, or with no arguments. When you specify\nthe arguments as individual parameters, VoltDB creates a native mode snapshot that can be used to recover\nor restore the database. When you specify the arguments as a JSON-encoded string, you can request a\ndifferent format for the snapshot, including CSV (comma-separated value) files that can be used for import\ninto other databases or utilities. When you specify no arguments a full, native snapshot is saved into the\ndefault snapshots directory in the database root directory.\nIndividual Arguments\nWhen you specify the arguments as individual parameters, you must specify three arguments:\n1.The directory path where the snapshot files are stored\n2.An identifier that is included in the file names to uniquely identify the files that make up a single\nsnapshot\n3.A flag value indicating whether the snapshot should block other transactions until it is complete or not\nThe resulting snapshot consists of multiple files saved to the directory specified by directory-path using\nunique-ID as a filename prefix. 
The third argument, blocking-flag , specifies whether the save is performed\nsynchronously (thereby blocking any following transactions until the save completes) or asynchronously.\nIf this parameter is set to any non-zero value, the save operation will block any following transactions. If\nit is zero, others transactions will be executed in parallel.\nThe files created using this invocation are in native VoltDB snapshot format and can be used to restore\nor recover the database at some later time. This is the same format used for automatic snapshots. See\nChapter 13, Saving & Restoring a VoltDB Database for more information about saving and restoring\nVoltDB databases.\nJSON-Encoded Arguments\nWhen you specify the arguments as a JSON-encoded string, you can specify what snapshot format you\nwant to create. Table G.2, “@SnapshotSave Options” describes all possible options when creating a snap-\nshot using JSON-encoded arguments.\n418System Procedures\nTable G.2. @SnapshotSave Options\nOption Description\nuripath Specifies the path where the snapshot files are created. Note that, as a JSON-encoded\nargument, the path must be specified as a URI, not just a system directory path.\nTherefore, a local directory must be specified using the file:// identifier, such\nas \"file:///tmp \", and the path must exist on all nodes of the cluster.\nnonce Specifies the unique identifier for the snapshot.\nblock Specifies whether the snapshot should be synchronous (true) and block other trans-\nactions or asynchronous (false).\nformat Specifies the format of the snapshot. Valid formats are \"csv\" and \"native\".\nWhen you save a snapshot in CSV format, the resulting files are in standard com-\nma-separated value format, with only one file for each table. In other words, dupli-\ncates (from replicated tables or duplicate partitions due to K-safety) are eliminated.\nCSV formatted snapshots are useful for import or reuse by other databases or utili-\nties. However, they cannot be used to restore or recover a VoltDB database.\nWhen you save a snapshot in native format, each node and partition saves its contents\nto separate files. These files can then be used to restore or recover the database. It\nis also possible to later convert native format snapshots to CSV using the snapshot\nutilities described in the VoltDB Administrator's Guide .\nskiptables Specifies tables to leave out of the snapshot. Use of tables or skiptables allows you\nto create a partial snapshot of the larger database. Specify the list of tables as a\nJSON array. For example, the following JSON argument excludes the Areacode and\nCountry tables from the snapshot:\n\"skiptables\":[\"areacode\",\"country\"]\ntables Specifies tables to include in the snapshot. Use of tables or skiptables allows you to\ncreate a partial snapshot of the larger database. Specify the list of tables as a JSON\narray. For example, the following JSON argument includes only the Employee and\nCompany tables in the snapshot:\n\"tables\":[\"employee\",\"company\"]\nFor example, the JSON-encoded arguments to synchronously save a CSV formatted snapshot to /tmp using\nthe unique identifier \"mydb\" is the following:\n{uripath:\"file:///tmp\",nonce:\"mydb\",block:true,format:\"csv\"}\nThe block and format arguments are optional. If you do not specify them they default to block:false\nand format:\"native\" . The arguments uripath and nonce are required. 
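As an illustrative sketch (reusing the sample JSON string above and assuming an existing client connection), the JSON-encoded options are passed to the procedure as its single String parameter:
String options = "{uripath:\"file:///tmp\",nonce:\"mydb\",block:true,format:\"csv\"}";
try {
    // The entire JSON object is the one and only procedure parameter.
    VoltTable[] results = client.callProcedure("@SnapshotSave", options).getResults();
    System.out.println(results[0].toString());
}
catch (Exception e) {
    e.printStackTrace();
}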
The tables and skiptables\narguments are mutually exclusive.\nBecause the unique identifier is used in the resulting filenames, the identifier can contain only characters\nthat are valid for Linux file names. In addition, hyphens (\"-\") and commas (\",\") are not permitted.\nNote that it is normal to perform manual saves synchronously, to ensure the snapshot represents a known\nstate of the database. However, automatic snapshots are performed asynchronously to reduce the impact\non ongoing database activity.\n419System Procedures\nReturn Values\nThe @SnapshotSave system procedure returns two different VoltTables, depending on the outcome of\nthe request.\nOption #1: one VoltTable with a row for every execution site. (That is, the number of hosts multiplied\nby the number of sites per host.).\nName Datatype Description\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the host node.\nRESULT STRING String value indicating the success (\"SUCCESS\") or failure\n(\"FAILURE\") of the request.\nERR_MSG STRING If the result is FAILURE, this column contains a message\nexplaining the cause of the failure.\nOption #2: one VoltTable with a variable number of rows.\nName Datatype Description\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nTABLE STRING The name of the database table. The contents of each table\nis saved to a separate file. Therefore it is possible for the\nsnapshot of each table to succeed or fail independently.\nRESULT STRING String value indicating the success (\"SUCCESS\") or failure\n(\"FAILURE\") of the request.\nERR_MSG STRING If the result is FAILURE, this column contains a message\nexplaining the cause of the failure.\nExamples\nThe following example uses @SnapshotSave to save the current database content in native snapshot format\nto the path /tmp/voltdb/backup/ using the unique identifier flight on each node of the cluster.\n$ sqlcmd\n1> exec @SnapshotSave '/tmp/voltdb/backup/', 'flight', 1;\nAlternately, you can use the voltadmin save command to perform the same function. When using the\nvoltadmin save command, you use the --blocking flag instead of a third parameter to request a block-\ning save:\n$ voltadmin save --blocking /tmp/voltdb/backup/ flight \nNote that the procedure call will return successfully even if the save was not entirely successful. 
The\ninformation returned in the VoltTable array tells you what parts of the operation were successful or not.\nFor example, save may succeed on one node but not on another.\nThe following code sample performs the same function, but also checks the return values and notifies the\noperator when portions of the save operation are not successful.\n420System Procedures\nVoltTable[] results = null;\ntry { results = client.callProcedure(\"@SnapshotSave\",\n \"/tmp/voltdb/backup/\",\n \"flight\", 1).getResults(); }\ncatch (Exception e) { e.printStackTrace(); }\nfor (int table=0; table<results.length; table++) {\n for (int r=0;r<results[table].getRowCount();r++) {\n VoltTableRow row = results[table].fetchRow(r);\n if (row.getString(\"RESULT\").compareTo(\"SUCCESS\") != 0) {\n System.out.printf(\"Site %s failed to write \" +\n \"table %s because %s.\\n\",\n row.getString(\"HOSTNAME\"), row.getString(\"TABLE\"),\n row.getString(\"ERR_MSG\"));\n }\n }\n}\n421System Procedures\n@SnapshotScan\n@SnapshotScan — Lists information about existing native snapshots in a given directory path.\nSyntax\n@SnapshotScan String directory-path\nDescription\nThe @SnapshotScan system procedure provides information about any native snapshots that exist within\nthe specified directory path for all nodes on the cluster. The procedure reports the name (prefix) of the\nsnapshot, when it was created, how long it took to create, and the size of the individual files that make\nup the snapshot(s).\n@SnapshotScan does not include CSV format snapshots in its output. Only native format snapshots are\nlisted.\nReturn Values\nOn successful completion, this system procedure returns three VoltTables providing the following infor-\nmation:\n•A summary of the snapshots found\n•Available space in the directories scanned\n•Details concerning the Individual files that make up the snapshots\nThe first table contains one row for every snapshot found.\nName Datatype Description\nPATH STRING The directory path where the snapshot resides.\nNONCE STRING The unique identifier for the snapshot.\nTXNID BIGINT The transaction ID of the snapshot.\nCREATED BIGINT The timestamp when the snapshot was created (in millisec-\nonds).\nSIZE BIGINT The total size, in bytes, of all the snapshot data.\nTABLES_REQUIRED STRING A comma-separated list of all the table names listed in the\nsnapshot digest file. In other words, all of the tables that\nmake up the snapshot.\nTABLES_MISSING STRING A comma-separated list of database tables for which no data\ncan be found. (That is, the corresponding files are missing\nor unreadable.)\nTABLES_INCOMPLETE STRING A comma-separated list of database tables with only partial\ndata saved in the snapshot. (That is, data from some parti-\ntions is missing.)\nCOMPLETE STRING A string value indicating whether the snapshot as a whole is\ncomplete (\"TRUE\") or incomplete (\"FALSE\"). 
If this col-\n422System Procedures\nName Datatype Description\numn is \"FALSE\", the preceding two columns provide addi-\ntional information concerning what is missing.\nPATHTYPE STRING A string value indicating the type of snapshot and its lo-\ncation, where the type can be \"SNAP_PATH\" for manu-\nal snapshots, \"SNAP_CL\" for command log snapshots, and\n\"SNAP_AUTO\" for automated snapshots.\nThe second table contains one row for every host.\nName Datatype Description\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nPATH STRING The directory path specified in the call to the procedure.\nTOTAL BIGINT The total space (in bytes) on the device.\nFREE BIGINT The available free space (in bytes) on the device.\nUSED BIGINT The total space currently in use (in bytes) on the device.\nRESULT STRING String value indicating the success (\"SUCCESS\") or failure\n(\"FAILURE\") of the request.\nERR_MSG STRING If the result is FAILURE, this column contains a message\nexplaining the cause of the failure.\nThe third table contains one row for every file in the snapshot collection.\nName Datatype Description\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nPATH STRING The directory path where the snapshot file resides.\nNAME STRING The file name.\nTXNID BIGINT The transaction ID of the snapshot.\nCREATED BIGINT The timestamp when the snapshot was created (in millisec-\nonds).\nTABLE STRING The name of the database table the data comes from.\nCOMPLETED STRING A string indicating whether all of the data was successfully\nwritten to the file (\"TRUE\") or not (\"FALSE\").\nSIZE BIGINT The total size, in bytes, of the file.\nIS_REPLICATED STRING A string indicating whether the table in question is replicat-\ned (\"TRUE\") or partitioned (\"FALSE\").\nPARTITIONS STRING A comma-separated string of partition (or site) IDs from\nwhich data was taken during the snapshot. For partitioned\ntables where there are multiple sites per host, there can be\ndata from multiple partitions in each snapshot file. 
For replicated tables, data from only one copy (and therefore one partition) is required.
TOTAL_PARTITIONS BIGINT The total number of partitions from which data was taken.
READABLE STRING A string indicating whether the file is accessible ("TRUE") or not ("FALSE").
RESULT STRING String value indicating the success ("SUCCESS") or failure ("FAILURE") of the request.
ERR_MSG STRING If the result is FAILURE, this column contains a message explaining the cause of the failure.
If the system procedure fails because it cannot access the specified path, it returns a single VoltTable with one row and one column.
Name Datatype Description
ERR_MSG STRING A message explaining the cause of the failure.
Examples
The following example uses @SnapshotScan to list information about the snapshots in the directory /tmp/voltdb/backup/.
$ sqlcmd
1> exec @SnapshotScan /tmp/voltdb/backup/;
The following program example performs the same function, using the VoltTable toString() method to display the results of the procedure call:
VoltTable[] results = null;
try { results = client.callProcedure("@SnapshotScan",
    "/tmp/voltdb/backup/").getResults();
}
catch (Exception e) { e.printStackTrace(); }
for (VoltTable t: results) {
    System.out.println(t.toString());
}
In the return value, the first VoltTable in the array lists the snapshots and certain status information. The second element of the array provides information about the directory itself (such as used, free, and total disk space). The third element of the array lists specific information about the individual files in the snapshot(s).
@Statistics
@Statistics — Returns statistics about the usage of the VoltDB database.
Syntax
@Statistics String component, Integer delta-flag
Description
The @Statistics system procedure returns information about the VoltDB database. The first argument, component, specifies what aspect of VoltDB to return statistics about. The second argument, delta-flag, specifies whether statistics are reported from when the database started or since the last call to @Statistics where the flag was set.
Some components report statistics at the moment, collected when you make the procedure call, while other components report statistics over time. For cumulative components, if the delta flag is set to zero, the system procedure returns statistics since the database started. If the delta flag is non-zero, the system procedure returns statistics for the interval since the last time @Statistics was called with a non-zero flag. (If @Statistics has not been called with a non-zero flag before, the first call with the flag set returns statistics since startup.) The statistics that are affected by the delta flag are EXPORT, GC, IDLETIME, IMPORT, INITIATOR, IOSTATS, PLANNER, QUEUE, TTL, and the procedure statistics (PROCEDURE, PROCEDUREDETAIL, PROCEDUREINPUT, PROCEDUREOUTPUT, and PROCEDUREPROFILE).
Note that in a cluster with K-safety, if a node fails, the statistics reported by this procedure are reset to zero for the node when it rejoins the cluster.
The following are the allowable values of component:
"COMMANDLOG" Returns information about the progress of command logging, including the number of segment files in use and the amount of command log data waiting to be written to disk.
"CPU" Returns information about the amount of CPU used by each VoltDB server process.
CPU usage is returned as a number between 0 and 100 representing\nthe amount of CPU used by the VoltDB process out of the total CPU avail-\nable for that server.\n\"DRCONSUMER \" Returns information about the status of database replication on a DR con-\nsumer, including the status and data replication rate of each partition. This\ninformation is available only if the database is licensed for database replica-\ntion and operating as a passive DR replica or an active XDCR database.\n\"DRPRODUCER \" Returns information about the status of database replication on a producer\ndatabase, including how much data is waiting to be sent to the consumer.\nThis information is available only if the database is licensed for database\nreplication and is operating as a passive master or an active XDCR database.\n\"DRROLE \" Returns information about the current state of database replication (DR), in-\ncluding the role of the cluster (master, replica, or XDCR) and whether DR\nhas started, is running, stopped, or been disabled.\n\"EXPORT \" Returns statistics on the export streams and targets, including how many\nrecords have be written, how many are pending, and the status of the export\nconnection.\n425System Procedures\n\"GC\" Returns statistics on Java garbage collection associated with the server\nprocess on each host.\n\"IDLETIME \" Returns statistics on how busy the partitions are. For each execution site, the\nresults provide a minimum, maximum, and average amount of time the site\nwaited without any transactions to process, as well as the overall percentage\nof time the site was waiting (that is, the partition was \"idle\").\n\"IMPORT \" Returns statistics on the import streams, including how many import trans-\nactions have succeeded, failed, and been retried and how many rows have\nbeen read but not applied yet.\n\"INDEX\" Returns information about the indexes in the database, including the number\nof keys for each index and the estimated amount of memory used to store\nthose keys. Separate information is returned for each partition in the data-\nbase.\n\"INITIATOR \" Returns information on the number of procedure invocations for each stored\nprocedure (including system and import procedures). The count of invoca-\ntions is reported for each connection to the database.\n\"IOSTATS \" Returns information on the number of messages and amount of data (in\nbytes) sent to and from each connection to the database.\n\"LATENCY \" Returns statistics on the latency of transactions. The information reports on\nmedian, percentage (99% through 99.999%), and maximum latency over the\nmost recent five second sampling period.\n\"LIVECLIENTS \" Returns information about the number of outstanding requests per client.\nYou can use this information to determine how much work is waiting in the\nexecution queues.\n\"MANAGEMENT\" Returns the same information as MEMORY , INITIATOR , PROCEDURE ,\nIOSTATS , TABLE, INDEX, IDLETIME [434], QUEUE [442], and\nCPU, except all in a single procedure call.\n\"MEMORY \" Returns statistics on the use of memory for each node in the cluster. MEMO-\nRY statistics include the current resident set size (RSS) of the VoltDB server\nprocess; the amount of memory used for Java temporary storage, database\ntables, indexes, and string (including varbinary) storage; as well as other in-\nformation.\n\"PARTITIONCOUNT \"Returns information on the number of unique partitions in the cluster. 
The\nVoltDB cluster creates multiple partitions based on the number of servers\nand the number of sites per host requested. So, for example, a 2 node cluster\nwith 4 sites per host will have 8 partitions. However, when you define a\ncluster with K-safety, there are duplicate partitions. PARTITIONCOUNT\nonly reports the number of unique partitions available in the cluster.\n\"PLANNER \" Returns information on the use of cached plans within each partition. Queries\nin stored procedures are planned when the procedure is declared in the\nschema. However, ad hoc queries must be planned at runtime. To improve\nperformance, VoltDB caches plans for ad hoc queries so they can be reused\nwhen a similar query is encountered later. There are two caches: the level\n1 cache performs exact matches on queries and the level 2 cache parameter-\nizes constants so it can match queries with the same plan but different input.\n426System Procedures\nThe planner statistics provide information about the size of each cache, how\nfrequently it is used, and the minimum, maximum, and average execution\ntime of ad hoc queries as a result.\n\"PROCEDURE \" Returns information on the usage of stored procedures for each site within\nthe database cluster sorted by partition. The information includes the name\nof the procedure, the number of invocations (for each site), and selected per-\nformance information on minimum, maximum, and average execution time.\n\"PROCEDUREDETAIL \"Returns detailed performance information about the individual statements\nwithin each stored procedure. PROCEDUREDETAIL returns information\nfor each statement in each procedure, grouped by site and partition within\nthe database cluster. The information includes the name of the procedure, the\nname of the statement, the number of invocations (for each site), and selected\nperformance information on minimum, maximum, and average execution\ntime.\n\"PROCEDUREINPUT \"Returns summary information on the size of the input data submitted with\nstored procedure invocations. PROCEDUREINPUT uses information from\nPROCEDURE, except it focuses on the input parameters and aggregates data\nfor the entire cluster.\n\"PROCEDUREOUTPUT \"Returns summary information on the size of the result sets returned by\nstored procedure invocations. PROCEDUREOUTPUT uses information\nfrom PROCEDURE, except it focuses on the result sets and aggregates data\nfor the entire cluster.\n\"PROCEDURE-\nPROFILE \"Returns summary information on the usage of stored procedures averaged\nacross all partitions in the cluster. The information from PROCEDURE-\nPROFILE is similar to the information from PROCEDURE, except it focus-\nes on the performance of the individual procedures rather than on procedures\nby partition. The weighted average across partitions is helpful for determin-\ning which stored procedures the application is spending most of its time in.\n\"QUEUE\" Returns statistics on the number of tasks in each partition's process queue\nand the average and maximum time tasks were waiting in the queue.\n\"QUEUEPRIORITY \"Returns statistics on the number of tasks in each priority queue for each par-\ntition and the average and maximum time tasks were waiting in the queue.\n\"REBALANCE \" Returns information on the current progress of rebalancing on the cluster.\nRebalancing occurs when one or more nodes are added \"on the fly\" to an\nelastic cluster. If no rebalancing is occurring, no data is returned. 
During a\nrebalance, this selector returns information about the speed of migration of\nthe data, the latency of rebalance tasks, and the estimated time until comple-\ntion. All rebalance statistics are cumulative for the current rebalance activity.\n\"SNAPSHOTSTATUS \"Returns information about the individual files of up to ten recent snapshots\nperformed by the database. The results include the directory path and prefix\nfor the snapshot, when it occurred, how long it took, and whether the snap-\nshot was completed successfully or not. The results report on both native and\nCSV snapshots, as well as manual, automated, and command log snapshots.\nNote that this selector does not tell you whether the snapshot files still exist,\nonly that the snapshot was performed. Use the @SnapshotScan procedure to\ndetermine what snapshots are available.\n427System Procedures\n\"SNAPSHOTSUMMA-\nRY\"Returns information about up to ten recent snapshots performed by the\ndatabase. Unlike SNAPSHOTSTATUS, which reports on individual files,\nSNAPSHOTSUMMARY provides a one row summary for each snapshot.\nThe results include the directory path and prefix, when the snapshot start-\ned, how long it took, and whether the snapshot was completed successfully\nor not. If the snapshot is still in progress, the results include the percentage\ncomplete. Note that this selector does not tell you whether the snapshot files\nstill exist, only that the snapshot was started. Use the @SnapshotScan pro-\ncedure to determine what snapshot files are available.\n\"TABLE\" Returns information about the database tables, including the number of rows\nper site for each table. This information can be useful for seeing how well\nthe rows are distributed across the cluster for partitioned tables.\n\"TASK\" Returns information about scheduled tasks and their current status. There are\nseparate queues for the task schedulers and the task procedures, which run\non the standard transactional partition queues. The TASK selector reports\non both. Separate selectors for TASK_SCHEDULER and TASK_PROCE-\nDURE report on a corresponding subset of the columns.\n\"TOPIC\" Returns statistics about the queues for outbound topics (that is, data fetched\nby consumers) including throughput, offsets, and retention policy, as well as\nhow much data is currently stored on disk.\n\"TTL\" Returns information about the processing of expired data in \"time to\nlive\" (TTL) tables, including how recently and how many records have been\ndeleted.\nNote that INITIATOR and PROCEDURE report information on both user-declared stored procedures and\nsystem procedures. These include certain system procedures that are used internally by VoltDB and are\nnot intended to be called by client applications. Only the system procedures documented in this appendix\nare intended for client invocation.\nReturn Values\nReturns different VoltTables depending on which component is requested. The following tables identify\nthe structure of the return values for each component. (Note that the MANAGEMENT component returns\nseven VoltTables.)\nCOMMANDLOG — Returns a row for every server in the cluster.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nOUTSTANDING_BYTES BIGINT The size, in bytes, of pending command log data. That is,\ndata for transactions that have been initiated but the log has\nyet to be written to disk. 
For synchronous logging, this value\nis always zero.\nOUTSTANDING_TXNS BIGINT The size, in number of transactions, of pending command\nlog data. That is, the number of transactions that have been\n428System Procedures\nName Datatype Description\ninitiated for which the log has yet to be written to disk. For\nsynchronous logging, this value is always zero.\nIN_USE_SEGMENT\n_COUNTINTEGER The total number of segment files currently in use for com-\nmand logging.\nSEGMENT_COUNT INTEGER The number of segment files allocated, including currently\nunused segments.\nFSYNC_INTERVAL INTEGER The average interval, in milliseconds, between the last 10\nfsync system calls.\nCPU — Returns a row for every server in the cluster.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nPERCENT_USED BIGINT The percentage of total CPU available used by the database\nserver process.\nDRCONSUMER — Returns two VoltTables. The first table returns a row for every host in the cluster,\nshowing whether a replication snapshot is in progress and if it is, the status of transmission to the consumer.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nCLUSTER_ID INTEGER The numeric ID of the current cluster.\nREMOTE_CLUSTER_ID INTEGER The numeric ID of the producer cluster.\nSTATE STRING A text string indicating the current state of replication. Pos-\nsible values are:\n•UNINITIALIZED — DR has not begun yet or has\nstopped\n•INITIALIZE — DR is enabled and the replica is attempt-\ning to contact the producer\n•SYNC — DR has started and the consumer is synchro-\nnizing by receiving snapshots of existing data from the\nmaster\n•RECEIVE — DR is underway and the consumer is re-\nceiving binary logs from the master\n•DISABLE — DR has been canceled for some reason and\nthe consumer is stopping DR\nREPLI-\nCATION_RATE_1MBIGINT The average rate of replication over the past minute. The\ndata rate is measured in bytes per second.\nREPLI-\nCATION_RATE_5MBIGINT The average rate of replication over the past five minutes.\nThe data rate is measured in bytes per second.\n429System Procedures\nThe second table contains information about the replication streams, which consist of a row per partition\nfor each server. The data shows the current state of replication and how much data has been received by\nthe consumer from each producer.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nCLUSTER_ID INTEGER The numeric ID of the current cluster.\nREMOTE_CLUSTER_ID INTEGER The numeric ID of the producer cluster.\nPARTITION_ID INTEGER The numeric ID for the logical partition.\nIS_COVERED STRING A text string of \"true\" or \"false\" indicating whether this par-\ntition is currently connected to and receiving data from a\nmatching partition on the producer cluster.\nCOVERING_HOST STRING The host name of the server in the producer cluster that\nis providing DR data to this partition. 
If IS_COVERED is\n\"false\", this field is empty.\nLAST_RECEIVED\n_TIMESTAMPTIMES-\nTAMPThe timestamp of the last transaction received from the pro-\nducer.\nLAST_APPLIED\n_TIMESTAMPTIMES-\nTAMPThe timestamp of the last transaction successfully applied\nto this partition on the consumer.\nIS_PAUSED STRING A text string of \"true\" or \"false\" indicating whether this par-\ntition is paused. A partition \"pauses\" when the schema of\nthe DR tables on the producer change to no longer match\nthe consumer and all binary logs prior to the change have\nbeen processed.\nDUPLICATE_BUFFERS BIGINT The number of repeated buffers received after the initial\npackets were dropped because the queue was full.\nIGNORED_BUFFERS BIGINT The number of buffers received but dropped because the\nqueue was full.\nAVAILABLE_BYTES INTEGER The number of free bytes left in the DR queue.\nAVAILABLE_BUFFERS INTEGER The number of free buffers left in the DR queue.\nCONSUMER_LIMIT_TYPE STRING The type of limit on the DR queue. The response is either\nBYTES or BUFFERS.\nDRPRODUCER — Returns two VoltTables. The first table contains information about the replication\nstreams, which consist of a row per partition for each server. The data shows the current state of replication\nand how much data is currently queued for each consumer.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nCLUSTER_ID INTEGER The numeric ID of the current cluster.\n430System Procedures\nName Datatype Description\nREMOTE_CLUSTER_ID INTEGER The numeric ID of the consumer cluster.\nPARTITION_ID INTEGER The numeric ID for the logical partition.\nSTREAMTYPE STRING The type of stream, which can either be \"TRANSAC-\nTIONS\" or \"SNAPSHOT\".\nTOTALBYTES BIGINT The total number of bytes currently queued for transmission\nto the replica.\nTOTALBYTESIN\nMEMORYBIGINT The total number of bytes of queued data currently held\nin memory. If the amount of total bytes is larger than the\namount in memory, the remainder is kept in overflow stor-\nage on disk.\nTOTALBUFFERS BIGINT The total number of buffers in this partition currently wait-\ning for acknowledgement from the replica. The partitions\nbuffer the binary logs to reduce overhead and optimize net-\nwork transfers.\nLASTQUEUEDDRID BIGINT The ID of the last transaction queued for transmission to the\nconsumer.\nLASTACKDRID BIGINT The ID of the last transaction acknowledged by the con-\nsumer.\nLASTQUEUEDTIMES-\nTAMPTIMES-\nTAMPThe timestamp of the last transaction queued for transmis-\nsion to the consumer.\nLASTACKTIMESTAMP TIMES-\nTAMPThe timestamp of the last transaction acknowledged by the\nconsumer.\nISSYNCED STRING A text string indicating whether the database is currently be-\ning replicated. If replication has not started, or the overflow\ncapacity has been exceeded (that is, replication has failed),\nthe value of ISSYNCED is \"false\". If replication is current-\nly in progress, the value is \"true\".\nMODE STRING A text string indicating whether this particular partition\nis replicating data to the consumer (\"NORMAL\") or not\n(\"PAUSED\"). Only one copy of each logical partition actu-\nally sends data during replication. 
So for clusters with a K-\nsafety value greater than zero, not all physical partitions will\nreport \"NORMAL\" even when replication is in progress.\nQUEUE_GAP BIGINT The number of missing transactions between those already\nacknowledged by the consumer and the next available for\ntransmission. Under normal operating conditions, this value\nis zero.\nCONNECTION_STATUS STRING A text string indicating whether the connection to the con-\nsumer is operational (\"UP\") or not (\"DOWN\"). If the con-\nnection between the producer and consumer is broken or if\nthe producer does not hear from the consumer for more than\n30 seconds, the connection is marked as \"DOWN\".\nAVAILABLE_BYTES INTEGER The number of bytes waiting to be sent to the consumer\nAVAILABLE_BUFFERS INTEGER The number of buffers waiting to be sent to the consumer.\nCONSUMER_LIMIT_TYPE STRING The type of limit on the DR queue. The response is either\nBYTES or BUFFERS.\n431System Procedures\nThe second table returns a row for every host in the cluster, showing whether a replication snapshot is in\nprogress and if it is, the status of transmission to the consumer.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nCLUSTER_ID INTEGER The numeric ID of the current cluster.\nREMOTE_CLUSTER_ID INTEGER The numeric ID of the consumer cluster.\nSTATE STRING A text string indicating the current state of replication.\nPossible values are \"OFF\" (replication is not enabled),\n\"PENDING\" (replication is enabled but not occurring), and\n\"ACTIVE\" (replication is enabled and a replica database\nhas initiated DR).\nSYNCSNAPSHOTSTATE STRING A text string indicating the current state of the synchroniza-\ntion snapshot that begins replication. During normal opera-\ntion, this value is \"NONE\" indicating either that replication\nis not active or that transactions are actively being replicat-\ned. If a synchronization snapshot is in progress, this value\nprovides additional information about the specific activity\nunderway.\nROWSINSYNC\nSNAPSHOTBIGINT Reserved for future use.\nROWSACKEDFORSYNC\nSNAPSHOTBIGINT Reserved for future use.\nDRROLE — Returns one row per connection showing the current status of DR for that cluster.\nName Datatype Description\nROLE STRING The role of the current cluster in a DR relationship. Possi-\nble values are NONE, MASTER, REPLICA, and XDCR.\n(None indicates that no DR ID is defined and the cluster\ncannot participate in DR.)\nSTATE STRING The current state of the DR relationship. 
Possible values are\nthe following:\n•DISABLED — DR is not enabled for the cluster\n•PENDING — DR is enabled but communication with the\nother cluster has not begun\n•ACTIVE — Communication with the other cluster has\nbegun\n•STOPPED — Communication with the other cluster has\nstopped due to a failure of some kind\nNote that if DR stops, issuing the voltadmin dr reset com-\nmand will return the cluster to the PENDING state.\nREMOTE_CLUSTER_ID INTEGER The DR ID of the other DR cluster, or -1 if not available\n(for example, when DR is disabled or communication has\nnot begun).\n432System Procedures\nEXPORT — Returns a separate row for each export stream per partition.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the host node.\nPARTITION_ID INTEGER The numeric ID for the logical partition.\nSOURCE STRING The name of the export stream.\nTARGET STRING The name of the export target.\nACTIVE STRING Whether this site is currently actively exporting data. For\nnormal export in a K-safe cluster, only one copy of each\npartition actively exports data at any given time. Possible\nvalues for user export are \"TRUE\" and \"FALSE\".\nNote that cross datacenter replication (XDCR) uses the ex-\nport infrastructure to write the DR conflict logs. In this case,\nall copies of the partition write the logs and the ACTIVE\ncolumn is marked as \"XDCR\" to distinguish it from user-\ndefined export streams.\nTUPLE_COUNT BIGINT The total number of records queued to the export target.\nTUPLE_PENDING BIGINT The number of records out of TUPLE_COUNT still waiting\nto be written to or acknowledged by the target.\nLAST_QUEUED_TIMES-\nTAMPTIMES-\nTAMPThe timestamp when the most recent tuple was added to the\nexport queue for this partition (in milliseconds).\nLAST_ACKED_TIMES-\nTAMPTIMES-\nTAMPThe timestamp when the last tuple was acknowledged as\nreceived by the target (in milliseconds).\nAVERAGE_LATENCY BIGINT The average time between when records are inserted and\nthey are acknowledged by the target.\nMAXIMUM_LATENCY BIGINT The maximum time between when a record was inserted\nand it was acknowledged by the target.\nQUEUE_GAP BIGINT The number of records missing from the queue for the cur-\nrent stream and partition.\nSTATUS STRING The current status of the export connection. 
Possible values\nare the following:\n•ACTIVE — Queue is currently exporting to the target\n•BLOCKED — There is a gap in the queue and export\nis waiting to see if the missing records become available\nwhen a missing node rejoins\n•DROPPED — either the source stream has been dropped\nfrom the schema or the export configuration has been re-\nmoved from the configuration and queue is draining any\nremaining records\nNote that if the queue is blocked, the voltadmin export re-\nlease command returns the queue to the ACTIVE state.\n433System Procedures\nGC — Returns a separate row for each host.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nNEWGEN_GC_COUNT INTEGER The number of times \"young generation\" garbage collection\nwas performed.\nNEW-\nGEN_AVG_GC_TIMEBIGINT The average time (in milliseconds) taken by young genera-\ntion collections.\nOLDGEN_GC_COUNT INTEGER The number of times \"old generation\" garbage collection\nwas performed.\nOLD-\nGEN_AVG_GC_TIMEBIGINT The average time (in milliseconds) taken by old generation\ncollections.\nIDLETIME — Returns a separate row for each execution site and host.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the host node.\nCOUNT BIGINT The number of times the execution site had to wait for a new\ntask (that is, the queue was empty).\nPERCENT FLOAT The percentage of time the execution site was waiting for a\nnew task (that is, the site was \"idle\").\nAVG BIGINT The average amount of time the execution site had to wait\nfor a new task (in microseconds).\nMIN BIGINT The minimum amount of time the execution site had to wait\nfor a new task (in nanoseconds).\nMAX BIGINT The maximum amount of time the execution site had to wait\nfor a new task (in nanoseconds).\nSTDDEV BIGINT The standard deviation of the waiting time (in microsec-\nonds).\nIMPORT — Returns a separate row for each import stream and each server.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the host node.\nIMPORTER_NAME STRING The name of the import stream.\n434System Procedures\nName Datatype Description\nPROCEDURE_NAME STRING The name of the stored procedure invoked by the import\nstream to insert the incoming data.\nSUCCESSES BIGINT The number of import transactions that succeeded.\nFAILURES BIGINT The number of import transactions that failed.\nOUTSTANDING\n_REQUESTSBIGINT The number of records read from the import stream and\nwaiting to be inserted into the database.\nRETRIES BIGINT The number of attempts to replay failed transactions.\nINDEX — Returns a row for every index in every execution site.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the host node.\nPARTITION_ID BIGINT The numeric ID for the logical partition that this site rep-\nresents. 
When using a K value greater than zero, there are\nmultiple copies of each logical partition.\nINDEX_NAME STRING The name of the index.\nTABLE_NAME STRING The name of the database table to which the index applies.\nINDEX_TYPE STRING A text string identifying whether the index is unique or not.\nPossible values include the following:\nCompactingTreeMultiMapIndex\nCompactingTreeUniqueIndex\nIS_UNIQUE TINYINT A byte value specifying whether the index is unique (1) or\nnot (0).\nIS_COUNTABLE TINYINT A byte value specifying whether the index maintains a\ncounter to optimize COUNT(*) queries.\nENTRY_COUNT BIGINT The number of index entries currently in the partition.\nMEMORY_ESTIMATE BIGINT The estimated amount of memory (in kilobytes) consumed\nby the current index entries.\nINITIATOR — Returns a separate row for each connection and the stored procedures initiated by that\nconnection.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the host node.\nCONNECTION_ID BIGINT Numeric ID of the client connection invoking the proce-\ndure.\n435System Procedures\nName Datatype Description\nCONNECTION_HOST\nNAMESTRING The server name of the node from which the client connec-\ntion originates. In the case of import procedures, the name\nof the importer is reported here.\nPROCEDURE_NAME STRING The name of the stored procedure. If import is enabled, im-\nport procedures are included as well.\nINVOCATIONS BIGINT The number of times the stored procedure has been invoked\nby this connection on this host node.\nAVG_EXECUTION_TIME INTEGER The average length of time (in milliseconds) it took to exe-\ncute the stored procedure.\nMIN_EXECUTION_TIME INTEGER The minimum length of time (in milliseconds) it took to ex-\necute the stored procedure.\nMAX_EXECUTION_TIME INTEGER The maximum length of time (in milliseconds) it took to\nexecute the stored procedure.\nABORTS BIGINT The number of times the procedure was aborted.\nFAILURES BIGINT The number of times the procedure failed unexpectedly. (As\nopposed to user aborts or expected errors, such as constraint\nviolations.)\nIOSTATS — Returns one row for every client connection on the cluster.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nCONNECTION_ID BIGINT Numeric ID of the client connection invoking the proce-\ndure.\nCONNECTION_HOST\nNAMESTRING The server name of the node from which the client connec-\ntion originates.\nBYTES_READ BIGINT The number of bytes of data sent from the client to the host.\nMESSAGES_READ BIGINT The number of individual messages sent from the client to\nthe host.\nBYTES_WRITTEN BIGINT The number of bytes of data sent from the host to the client.\nMESSAGES_WRITTEN BIGINT The number of individual messages sent from the host to\nthe client.\nLATENCY — Returns a row for every server in the cluster.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp, in milliseconds, when the data was collect-\ned (not when the call was processed). 
If two calls to this\nselector return the same timestamp, the data being returned\nis identical.\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\n436System Procedures\nName Datatype Description\nINTERVAL INTEGER The length of the measurement interval, in milliseconds.\nThe interval is five seconds (5000).\nCOUNT INTEGER The total number of transactions during the interval.\nTPS INTEGER The number of transactions per second during the interval.\nP50 BIGINT The 50th percentile latency, in microseconds. This value\nmeasures the median latency.\nP95 BIGINT The 95h percentile latency, in microseconds.\nP99 BIGINT The 99th percentile latency, in microseconds.\nP99.9 BIGINT The 99.9th percentile latency, in microseconds.\nP99.99 BIGINT The 99.99th percentile latency, in microseconds.\nP99.999 BIGINT The 99.999th percentile latency, in microseconds.\nMAX BIGINT The maximum latency during the interval, in microseconds.\nLIVECLIENTS — Returns a row for every client connection currently active on the cluster.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nCONNECTION_ID BIGINT Numeric ID of the client connection invoking the proce-\ndure.\nCLIENT_HOSTNAME STRING The server name of the node from which the client connec-\ntion originates.\nADMIN TINYINT A byte value specifying whether the connection is to the\nclient port (0) or the admin port (1).\nOUTSTANDING_\nREQUEST_BYTESBIGINT The number of bytes of data sent from the client currently\npending on the host.\nOUTSTANDING_\nRESPONSE_MESSAGESBIGINT The number of messages on the host queue waiting to be\nretrieved by the client.\nOUTSTANDING_\nTRANSACTIONSBIGINT The number of transactions (that is, stored procedures) ini-\ntiated on behalf of the client that have yet to be completed.\nMEMORY — Returns a row for every server in the cluster.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nRSS INTEGER The current resident set size. That is, the total amount of\nmemory allocated to the VoltDB processes on the server.\nJAVAUSED INTEGER The amount of memory (in kilobytes) allocated by Java and\ncurrently in use by VoltDB.\n437System Procedures\nName Datatype Description\nJAVAUNUSED INTEGER The amount of memory (in kilobytes) allocated by Java but\nunused. (In other words, free space in the Java heap.)\nTUPLEDATA BIGINT The amount of memory (in kilobytes) currently in use for\nstoring database records.\nTUPLEALLOCATED BIGINT The amount of memory (in kilobytes) allocated for the stor-\nage of database records (including free space).\nINDEXMEMORY BIGINT The amount of memory (in kilobytes) currently in use for\nstoring database indexes.\nSTRINGMEMORY BIGINT The amount of memory (in kilobytes) currently in use for\nstoring string, binary, and geospatial data that is not stored\n\"in-line\" in the database record.\nTUPLECOUNT BIGINT The total number of database records currently in memory.\nPOOLEDMEMORY BIGINT The total size of memory (in kilobytes) allocated for tasks\nother than database records, indexes, and strings. 
(For ex-\nample, pooled memory is used for temporary tables while\nprocessing stored procedures.)\nPHYSICALMEMORY BIGINT The total size of physical memory (in kilobytes) on the serv-\ner.\nJAVAMAXHEAP INTEGER The maximum heap size (in kilobytes) of the Java runtime\nenvironment.\nPARTITIONCOUNT — Returns one row identifying the total number of partitions and the host that\nprovided that information.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nPARTITION_COUNT INTEGER The number of unique or logical partitions on the cluster.\nWhen using a K value greater than zero, there are multiple\ncopies of each logical partition.\nPLANNER — Returns a row for every planner cache. That is, one cache per execution site, plus one\nglobal cache per server. (The global cache is identified by a site and partition ID of minus one.)\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the host node.\nPARTITION_ID INTEGER The numeric ID for the logical partition that this site rep-\nresents. When using a K value greater than zero, there are\nmultiple copies of each logical partition.\nCACHE1_LEVEL INTEGER The number of query plans in the level 1 cache.\n438System Procedures\nName Datatype Description\nCACHE2_LEVEL INTEGER The number of query plans in the level 2 cache.\nCACHE1_HITS BIGINT The number of queries that matched and reused a plan in\nthe level 1 cache.\nCACHE2_HITS BIGINT The number of queries that matched and reused a plan in\nthe level 2 cache.\nCACHE_MISSES BIGINT The number of queries that had no match in the cache and\nhad to be planned from scratch\nPLAN_TIME_MIN BIGINT The minimum length of time (in nanoseconds) it took to\ncomplete the planning of ad hoc queries.\nPLAN_TIME_MAX BIGINT The maximum length of time (in nanoseconds) it took to\ncomplete the planning of ad hoc queries.\nPLAN_TIME_AVG BIGINT The average length of time (in nanoseconds) it took to com-\nplete the planning of ad hoc queries.\nFAILURES BIGINT The number of times planning for an ad hoc query failed.\nPROCEDURE — Returns a row for every stored procedure that has been executed on the cluster, grouped\nby execution site.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the host node.\nPARTITION_ID INTEGER The numeric ID for the logical partition that this site rep-\nresents. 
When using a K value greater than zero, there are\nmultiple copies of each logical partition.\nPROCEDURE STRING The class name of the stored procedure.\nINVOCATIONS BIGINT The total number of invocations of this procedure at this\nsite.\nTIMED_INVOCATIONS BIGINT The number of invocations used to measure the minimum,\nmaximum, and average execution time.\nMIN_EXECUTION_TIME BIGINT The minimum length of time (in nanoseconds) it took to\nexecute the stored procedure.\nMAX_EXECUTION_TIME BIGINT The maximum length of time (in nanoseconds) it took to\nexecute the stored procedure.\nAVG_EXECUTION_TIME BIGINT The average length of time (in nanoseconds) it took to exe-\ncute the stored procedure.\nMIN_RESULT_SIZE INTEGER The minimum size (in bytes) of the results returned by the\nprocedure.\nMAX_RESULT_SIZE INTEGER The maximum size (in bytes) of the results returned by the\nprocedure.\nAVG_RESULT_SIZE INTEGER The average size (in bytes) of the results returned by the\nprocedure.\n439System Procedures\nName Datatype Description\nMIN_PARAMETER\n_SET_SIZEINTEGER The minimum size (in bytes) of the parameters passed as\ninput to the procedure.\nMAX_PARAMETER\n_SET_SIZEINTEGER The maximum size (in bytes) of the parameters passed as\ninput to the procedure.\nAVG_PARAMETER\n_SET_SIZEINTEGER The average size (in bytes) of the parameters passed as input\nto the procedure.\nABORTS BIGINT The number of times the procedure was aborted.\nFAILURES BIGINT The number of times the procedure failed unexpectedly. (As\nopposed to user aborts or expected errors, such as constraint\nviolations.)\nTRANSACTIONAL TINYINT 0 or 1. Reserved for future use.\nPROCEDUREDETAIL — Returns a row for every statement in every stored procedure that has been\nexecuted on the cluster, grouped by execution site.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the host node.\nPARTITION_ID INTEGER The numeric ID for the logical partition that this site rep-\nresents. When using a K value greater than zero, there are\nmultiple copies of each logical partition.\nPROCEDURE STRING The class name of the stored procedure.\nSTATEMENT STRING The name of the statement in the stored procedure. 
Cumula-\ntive statistics for all statements in the procedure are includ-\ned in a separate row labeled \"<ALL>\".\nINVOCATIONS BIGINT The total number of invocations of the statement at this site.\nTIMED_INVOCATIONS BIGINT The number of invocations used to measure the minimum,\nmaximum, and average execution time.\nMIN_EXECUTION_TIME BIGINT The minimum length of time (in nanoseconds) it took to\nexecute the statement.\nMAX_EXECUTION_TIME BIGINT The maximum length of time (in nanoseconds) it took to\nexecute the statement.\nAVG_EXECUTION_TIME BIGINT The average length of time (in nanoseconds) it took to exe-\ncute the statement.\nMIN_RESULT_SIZE INTEGER The minimum size (in bytes) of the results returned by the\nstatement.\nMAX_RESULT_SIZE INTEGER The maximum size (in bytes) of the results returned by the\nstatement.\nAVG_RESULT_SIZE INTEGER The average size (in bytes) of the results returned by the\nstatement.\n440System Procedures\nName Datatype Description\nMIN_PARAMETER\n_SET_SIZEINTEGER The minimum size (in bytes) of the parameters passed as\ninput to the statement.\nMAX_PARAMETER\n_SET_SIZEINTEGER The maximum size (in bytes) of the parameters passed as\ninput to the statement.\nAVG_PARAMETER\n_SET_SIZEINTEGER The average size (in bytes) of the parameters passed as input\nto the statement.\nABORTS BIGINT In the cumulative row for each procedure (\"<ALL>\"), the\nnumber of times the procedure was aborted. For individual\nstatements, this column is set to zero.\nFAILURES BIGINT The number of times the statement failed unexpectedly. (As\nopposed to user aborts or expected errors, such as constraint\nviolations.)\nPROCEDUREINPUT — Returns a row for every stored procedure that has been executed on the cluster,\nsummarized across the cluster.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nPROCEDURE STRING The class name of the stored procedure.\nWEIGHTED_PERC BIGINT A weighted average expressed as a percentage of the para-\nmeter set size for invocations of this stored procedure com-\npared to all stored procedure invocations.\nINVOCATIONS BIGINT The total number of invocations of this procedure.\nMIN_PARAMETER\n_SET_SIZEBIGINT The minimum parameter set size in bytes.\nMAX_PARAMETER\n_SET_SIZEBIGINT The maximum parameter set size in bytes.\nAVG_PARAMETER\n_SET_SIZEBIGINT The average parameter set size in bytes.\nTOTAL_PARAMETER\n_SET_SIZE_MBBIGINT The total input for all invocations of this stored procedure\nmeasured in megabytes.\nPROCEDUREOUTPUT — Returns a row for every stored procedure that has been executed on the\ncluster, summarized across the cluster.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nPROCEDURE STRING The class name of the stored procedure.\nWEIGHTED_PERC BIGINT A weighted average expressed as a percentage of the re-\nsult set size returned by invocations of this stored procedure\ncompared to all stored procedure invocations.\nINVOCATIONS BIGINT The total number of invocations of this procedure.\nMIN_RESULT_SIZE BIGINT The minimum result set size in bytes.\nMAX_RESULT_SIZE BIGINT The maximum result set size in bytes.\n441System Procedures\nName Datatype Description\nAVG_RESULT_SIZE BIGINT The average result set size in bytes.\nTOTAL_RESULT\n_SIZE_MBBIGINT The total output returned by all invocations of this stored\nprocedure measured in megabytes.\nPROCEDUREPROFILE — Returns a row for every stored procedure that has been executed on 
the\ncluster, summarized across the cluster.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nPROCEDURE STRING The class name of the stored procedure.\nWEIGHTED_PERC BIGINT A weighted average expressed as a percentage of the exe-\ncution time for this stored procedure compared to all stored\nprocedure invocations.\nINVOCATIONS BIGINT The total number of invocations of this procedure.\nAVG BIGINT The average length of time (in nanoseconds) it took to exe-\ncute the stored procedure.\nMIN BIGINT The minimum length of time (in nanoseconds) it took to\nexecute the stored procedure.\nMAX BIGINT The maximum length of time (in nanoseconds) it took to\nexecute the stored procedure.\nABORTS BIGINT The number of times the procedure was aborted.\nFAILURES BIGINT The number of times the procedure failed unexpectedly. (As\nopposed to user aborts or expected errors, such as constraint\nviolations.)\nQUEUE — Returns a separate row for each partition and host listing the current state of the process queue\nfor that execution site.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the host node.\nCURRENT_DEPTH INTEGER The number of tasks currently in the queue.\nPOLL_COUNT BIGINT The number of tasks that left the queue (and started execut-\ning) in the past five seconds.\nAVG_WAIT BIGINT The average length of time (in microseconds) tasks were\nwaiting in the queue in the last five seconds.\nMAX_WAIT BIGINT The maximum length of time (in microseconds) tasks were\nwaiting in the queue in the last five seconds.\nQUEUEPRIORITY — Returns a separate row for each partition and host listing the current state of the\npriority queues for that execution site (if prioritization is enabled).\n442System Procedures\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the host node.\nPRIORITY INTEGER The priority of the tasks in the queue.\nCURRENT_DEPTH INTEGER The number of tasks currently in the queue.\nPOLL_COUNT BIGINT The number of tasks that left the queue (and started execut-\ning) in the past five seconds.\nAVG_WAIT BIGINT The average length of time (in microseconds) tasks were\nwaiting in the queue in the last five seconds.\nMAX_WAIT BIGINT The maximum length of time (in microseconds) tasks were\nwaiting in the queue in the last five seconds.\nREBALANCE — Returns one row if the cluster is rebalancing. 
No data is returned if the cluster is not\nrebalancing.\nName Datatype Description\nTOTAL_RANGES BIGINT The total number of partition segments to be migrated.\nPERCENTAGE_MOVED FLOAT The percentage of the total segments that have already been\nmoved.\nMOVED_ROWS BIGINT The number of rows of data that have been moved.\nROWS_PER_SECOND FLOAT The average number of rows moved per second.\nESTIMATED\n_REMAININGBIGINT The estimated time remaining until the rebalance is com-\nplete, in milliseconds.\nMEGABYTES_PER\n_SECONDFLOAT The average volume of data moved per second, measured\nin megabytes.\nCALLS_PER_SECOND FLOAT The average number of rebalance work units, or transac-\ntions, executed per second.\nCALLS_LATENCY FLOAT The average execution time for rebalance transactions, in\nmilliseconds.\nSNAPSHOTSTATUS — Returns a row for every snapshot file in the recent snapshots performed on the\ncluster.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nTABLE STRING The name of the database table whose data the file contains.\nPATH STRING The directory path where the snapshot file resides.\nFILENAME STRING The file name.\n443System Procedures\nName Datatype Description\nNONCE STRING The unique identifier for the snapshot.\nTXNID BIGINT The transaction ID of the snapshot.\nSTART_TIME BIGINT The timestamp when the snapshot began (in milliseconds).\nEND_TIME BIGINT The timestamp when the snapshot was completed (in mil-\nliseconds).\nSIZE BIGINT The total size, in bytes, of the file.\nDURATION BIGINT The length of time (in seconds) it took to complete the snap-\nshot.\nTHROUGHPUT FLOAT The average number of bytes per second written to the file\nduring the snapshot process.\nRESULT STRING String value indicating whether the writing of the snapshot\nfile was successful (\"SUCCESS\") or not (\"FAILURE\").\nTYPE STRING String value indicating how the snapshot was initiated. Pos-\nsible values are:\n•AUTO — an automated snapshot as defined by the con-\nfiguration file\n•COMMANDLOG — a command log snapshot\n•MANUAL — a manual snapshot initiated by a user\nSNAPSHOTSUMMARY — Returns a row for every snapshot performed by the cluster, including up\nto ten snapshots\nName Datatype Description\nNONCE STRING The unique identifier for the snapshot.\nTXNID BIGINT The transaction ID of the snapshot.\nTYPE STRING String value indicating how the snapshot was initiated. Pos-\nsible values are:\n•AUTO — an automated snapshot as defined by the con-\nfiguration file\n•COMMANDLOG — a command log snapshot\n•MANUAL — a manual snapshot initiated by a user\nPATH STRING The target directory path for the snapshot files.\nSTART_TIME BIGINT The timestamp when the snapshot began (in milliseconds).\nEND_TIME BIGINT The timestamp when the snapshot was completed (in mil-\nliseconds).\nDURATION BIGINT The length of time (in seconds) it took to complete the snap-\nshot.\nPROGRESS_PCT FLOAT For snapshots currently in progress, the percent complete at\nthe time of the call.\nRESULT STRING String value indicating whether the writing of the snapshot\nwas successful (\"SUCCESS\") or not (\"FAILURE\").\nTABLE — Returns a row for every table, per partition. 
In other words, the number of tables, multiplied\nby the number of sites per host and the number of hosts.\n444System Procedures\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID BIGINT Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID BIGINT Numeric ID of the execution site on the host node.\nPARTITION_ID BIGINT The numeric ID for the logical partition that this site rep-\nresents. When using a K value greater than zero, there are\nmultiple copies of each logical partition.\nTABLE_NAME STRING The name of the database table.\nTABLE_TYPE STRING The type of the table. Values returned include \"Persistent-\nTable\" for normal data tables and views and \"Streamed-\nTable\" for streams.\nTUPLE_COUNT BIGINT The number of rows currently stored for this table in the\ncurrent partition. For streams, the cumulative total number\nof rows inserted into the stream.\nTUPLE_ALLOCATED\n_MEMORYBIGINT The total size of memory, in kilobytes, allocated for storing\ninline data associated with this table in this partition. The\nallocated memory can exceed the currently used memory\n(TUPLE_DATA_MEMORY). For streams, this field iden-\ntifies the amount of memory currently in use to queue ex-\nport data (both in memory and as export overflow) prior to\nits being passed to the export target.\nTUPLE_DATA_MEMORY BIGINT The total memory, in kilobytes, used for storing inline data\nassociated with this table in this partition. The total memo-\nry used for storing data for this table is the combination of\nmemory used for inline (tuple) and non-inline (string) data.\nSTRING_DATA\n_MEMORYBIGINT The total memory, in kilobytes, used for storing non-inline\nvariable length data (VARCHAR, VARBINARY, and GE-\nOGRAPHY) associated with this table in this partition. The\ntotal memory used for storing data for this table is the com-\nbination of memory used for inline (tuple) and non-inline\n(string) data.\nTUPLE_LIMIT INTEGER The row limit for this table. Row limits are optional and are\ndefined in the schema as a maximum number of rows that\nany partition can contain. If no row limit is set, this value\nis null.\nPERCENT_FULL INTEGER The percentage of the row limit currently in use by table\nrows in this partition. If no row limit is set, this value is zero.\nDR STRING A text string of \"true\" or \"false\" indicating whether the table\nis a DR table or not.\nEXPORT STRING If the table or stream is configured with MIGRATE or EX-\nPORT TO TARGET, the name of the associated export tar-\nget.\nTASK — Returns a row for every task, per logical partition.\n445System Procedures\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID BIGINT Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nPARTITION_ID BIGINT The numeric ID for the logical partition running the task\nprocedure. Directed procedures run on each logical parti-\ntion. Multi-partition procedures run on the multi-partition\ninitiator.\nTASK_NAME STRING The name of the task.\nSTATE STRING The current status of the task. 
Possible values include:\n•RUNNING — The task is enabled and running normally.\n•DISABLED — The task is disabled and not running.\n•ERROR — The task returned an error and was stopped\ndue to the ON ERROR STOP attribute.\n•PAUSED — The database is paused or is running on a\nDR replica, so the task is not currently running but will\nrestart when the database resumes or is promoted.\nSCOPE STRING The execution scope of the task, which matches the RUN\nON attribute. Possible values are DATABASE, HOSTS, or\nPARTITIONS.\nSCHEDULER_INVO-\nCATIONSBIGINT The total number of invocations of the task's schedule. For\nexample, if the task is scheduled for every 5 minutes, after\n20 minutes of normal operation you would expect 4 invo-\ncations.\nSCHEDULER_TO-\nTAL_EXECUTIONBIGINT The total time, in nanoseconds, consumed by the scheduler\nfor scheduling the task.\nSCHEDULER_MIN_EXE-\nCUTIONBIGINT The minimum amount of time, in nanoseconds, taken by the\nscheduler to schedule an instance of the task.\nSCHEDULER_MAX_EXE-\nCUTIONBIGINT The maximum amount of time, in nanoseconds, taken by\nthe scheduler to schedule an instance of the task.\nSCHEDULER_AVG_EXE-\nCUTIONBIGINT The average amount of time, in nanoseconds, taken by the\nscheduler to schedule instances of the task.\nSCHEDULER_TO-\nTAL_WAIT_TIMEBIGINT The total time, in nanoseconds, between the task's sched-\nuled start and when the scheduler was invoked.\nSCHED-\nULER_MIN_WAIT_TIMEBIGINT The minimum difference, in nanoseconds, between when\nthe task was scheduled to run and when the scheduler was\ninvoked.\nSCHED-\nULER_MAX_WAIT_TIMEBIGINT The maximum difference, in nanoseconds, between when\nthe task was scheduled to run and when the scheduler was\ninvoked.\nSCHED-\nULER_AVG_WAIT_TIMEBIGINT The average difference, in nanoseconds, between when the\ntask was scheduled to run and when the scheduler was in-\nvoked.\nSCHEDULER_STATUS STRING For future use.\n446System Procedures\nName Datatype Description\nPROCEDURE_INVO-\nCATIONSBIGINT The total number of invocations of the task's procedure.\nPROCEDURE_TO-\nTAL_EXECUTIONBIGINT The total time, in nanoseconds, consumed by the task for\nall of its invocations.\nPROCEDURE_MIN_EXE-\nCUTIONBIGINT The minimum amount of time, in nanoseconds, an instance\nof the task took to execute.\nPROCEDURE_MAX_EX-\nECUTIONBIGINT The maximum amount of time, in nanoseconds, an instance\nof the task took to execute.\nPROCEDURE_AVG_EXE-\nCUTIONBIGINT The average amount of time, in nanoseconds, instances of\nthe task took to execute.\nPROCEDURE_TO-\nTAL_WAIT_TIMEBIGINT The total time, in nanoseconds, between when the proce-\ndure was scheduled to run and when it was invoked.\nPROCE-\nDURE_MIN_WAIT_TIMEBIGINT The minimum difference, in nanoseconds, between when\nthe procedure was scheduled to run and when it was in-\nvoked.\nPROCE-\nDURE_MAX_WAIT_TIMEBIGINT The maximum difference, in nanoseconds, between when\nthe procedure was scheduled to run and when it was in-\nvoked.\nPROCE-\nDURE_AVG_WAIT_TIMEBIGINT The average difference, in nanoseconds, between when the\nprocedure was scheduled to run and when it was invoked.\nPROCEDURE_FAILURES BIGINT The number of times the procedure generated an error when\nrun.\nTOPIC — Returns a separate row for each topic and partition.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nHOST_ID INTEGER Numeric ID for the host node.\nHOSTNAME STRING Server name of the host node.\nSITE_ID INTEGER Numeric ID of the execution site on the 
host node.\nTOPIC STRING The name of the topic.\nPARTITION_ID INTEGER The numeric ID for the logical partition.\nFIRST_OFFSET BIGINT The value of the first offset currently available in the topic.\nLAST_OFFSET BIGINT The value of the last offset in the topic.\nFIRST_OFFSET_TIMES-\nTAMPTIMES-\nTAMPThe timestamp when the first offset was inserted into the\nqueue.\nLAST_OFFSET_TIMES-\nTAMPTIMES-\nTAMPThe timestamp when the most recent message (the last off-\nset) was inserted into the queue.\nBYTES_ON_DISK BIGINT The size, in bytes, of data stored on disk for this partition\nand topic.\nBYTES_FETCHED BIGINT The size, in bytes, of data sent to consumers for this partition\nand topic.\n447System Procedures\nName Datatype Description\nSTATE STRING The current level of completeness for this topic in this par-\ntition. If the server was down at any point, it may be miss-\ning records that were queued while the partition was offline.\nPossible values are:\n•STABLE — the queue is complete.\n•BACKFILLING — records are missing but are being re-\ntrieved from other servers.\n•BLOCKED — records are missing from all copies of the\npartition.\n•ORPHANED — the queue is no longer being served by\nthis partition, but is saved in case other copies of the\nqueue are blocked or backfilling and need the data. This\nis a transitional state and the queue is deleted as soon as\nno other copies need its records.\nIf a topic becomes blocked and the cluster is complete (has\nno missing nodes), you can use the voltadmin topic release\ncommand to \"jump\" past the missing offsets.\nMASTER STRING A text string of \"true\" or \"false\" indicating whether the cur-\nrent site is the master for the logical partition.\nRETENTION_POLICY STRING The retention policy for this topic.\nROWS_SKIPPED BIGINT Reserved for future use.\nERROR_OFFSET BIGINT If an error occurs while encoding a message for consumers,\nan error is returned to the consumer, the offset of the mes-\nsage is recoded here, and a description of the error stored\nin the next column.\nERROR_MESSAGE BIGINT A description of the last error that occurred while encoding\nmessages for consumers.\nTTL — Returns a separate row for each table in the database where TTL processing is currently active.\nIt does not list tables that do not have TTL defined or where TTL processing has been cancelled due to\nan error or lack of a suitable index.\nName Datatype Description\nTIMESTAMP BIGINT The timestamp when the information was collected (in mil-\nliseconds).\nTABLE_NAME STRING The name of the table.\nROWS_DELETED BIGINT The total number of rows expired and deleted by the TTL\nattribute.\nROWS_DELETED_LAST\n_ROUNDBIGINT The number of rows expired and deleted during the last TTL\nprocessing.\nROWS_REMAINING BIGINT The number of expired rows not deleted during the last TTL\nprocessing due to batch size limits. If TTL processing is\nkeeping up with the throughput, this value should tend to-\nwards zero.\nLAST_DELETE\n_TIMESTAMPBIGINT The timestamp when the last round of TTL processing oc-\ncurred (in milliseconds).\n448System Procedures\nExamples\nThe following example uses @Statistics to gather information about the distribution of table rows within\nthe cluster:\n$ sqlcmd\n1> exec @Statistics TABLE, 0;\nThe next program example shows a procedure that collects and displays the number of transactions (i.e.\nstored procedures) during a given interval, by setting the delta flag to a non-zero value. 
By calling this procedure iteratively (for example, every five minutes), it is possible to identify fluctuations in the database workload over time (as measured by the number of transactions processed).
void measureWorkload() {
    VoltTable[] results = null;
    String procName;
    int procCount = 0;
    int sysprocCount = 0;
    try { results = client.callProcedure("@Statistics",
                      "INITIATOR",1).getResults(); }
    catch (Exception e) { e.printStackTrace(); }
    for (VoltTable t: results) {
        for (int r=0;r<t.getRowCount();r++) {
            VoltTableRow row = t.fetchRow(r);
            procName = row.getString("PROCEDURE_NAME");
            /* Count system procedures separately */
            if (procName.substring(0,1).compareTo("@") == 0)
                { sysprocCount += row.getLong("INVOCATIONS"); }
            else
                { procCount += row.getLong("INVOCATIONS"); }
        }
    }
    System.out.printf("System procedures: %d\n" +
                      "User-defined procedures: %d\n",
                      sysprocCount, procCount);
}
@StopNode
@StopNode — Stops a VoltDB server process, removing the node from the cluster.
Syntax
@StopNode Integer host-ID
Description
The @StopNode system procedure lets you stop a specific server in a K-safe cluster. You specify which node to stop using the host ID, which is the unique identifier for the node assigned by VoltDB when the server joins the cluster.
Note that if you call the @StopNode procedure on a node other than the node being stopped, you receive a return status indicating the success or failure of the call. If you call the procedure on the node that you are requesting to stop, the return status can only indicate that the call was interrupted (by the VoltDB process on the node stopping), not whether it was successfully completed or not.
If you call @StopNode on a node or cluster that is not K-safe — either because the cluster was started with a K-safety value of zero, or because one or more nodes have already failed so that any further failure could crash the database — the @StopNode procedure will not be executed. You can only stop nodes on a cluster that will remain viable after the node stops. To stop the entire cluster, use the voltadmin shutdown command.
Return Values
Returns one VoltTable with one row.
Name Datatype Description
STATUS BIGINT Always returns the value zero (0) indicating success.
Examples
The following program example uses grep, sqlcmd, and the @SystemInformation stored procedure to identify the host ID for a specific node (doodah) of the cluster. The example then uses that host ID (2) to call @StopNode and stop the desired node.
$ echo "exec @SystemInformation overview;" | sqlcmd | grep "doodah"
    2 HOSTNAME doodah
$ sqlcmd
1> exec @StopNode 2;
The next example uses the voltadmin stop command to perform the same action.
Note that voltadmin stop performs the translation from a network name to a host ID for you.
$ voltadmin stop doodah
The following Java code fragment performs the same function programmatically.
VoltTable[] results = null;
String targetHostName = "doodah";
try {
    results = client.callProcedure("@SystemInformation",
                                   "overview").getResults();
}
catch (Exception e) { e.printStackTrace(); }
VoltTable table = results[0];
table.resetRowPosition();
int targetHostId = -1;
while (table.advanceRow() && targetHostId < 0) {
    if ( table.getString("KEY").equals("HOSTNAME") &&
         table.getString("VALUE").equals(targetHostName) ) {
        targetHostId = (int) table.getLong("HOST_ID");
    }
}
try {
    client.callProcedure("@StopNode",
                         targetHostId).getResults();
}
catch (Exception e) { e.printStackTrace(); }
@SwapTables
@SwapTables — Swaps the contents of one table for another
Syntax
@SwapTables String[] table-name, String[] table-name
Description
The @SwapTables system procedure swaps the contents of one table for another. So, for example, if table A has 2 rows and table B has 10 rows, after executing the following system procedure call table A would have 10 rows and table B would have 2 rows:
sqlcmd> exec @SwapTables 'A' 'B';
The tables being swapped must have identical schemas. That is, the names, datatypes, and order of the columns must be the same, and the tables must have the same indexes and other constraints. Also, there cannot be any views on either table. If these requirements are not met, or if either of the named tables does not exist, the system procedure returns an error.
The system procedure provides a significant performance advantage over any comparable SQL statements when swapping large tables because the operation does not actually move any data. The pointers for the two tables are switched, eliminating any need for excessive temporary storage or data movement.
When using database replication (DR), the @SwapTables procedure is treated like a schema change and will pause replication. To use @SwapTables in a DR environment, follow the procedures for schema changes. That is:
•When using passive DR:
1.Pause the master database with voltadmin pause --wait.
2.Invoke @SwapTables on the master database.
3.Resume the master database with voltadmin resume.
4.Invoke the same @SwapTables call on the replica.
•When using XDCR:
1.Pause all of the clusters in the XDCR relationship with voltadmin pause --wait.
2.Invoke the same @SwapTables call on all of the databases.
3.Resume all the databases with voltadmin resume.
Return Values
Returns one VoltTable with one row and one column.
Name Datatype Description
MODIFIED_TUPLES BIGINT The number of tuples affected by the swap. In other words, the sum of the tuples in both tables.
Examples
The following example uses the @SwapTables system procedure to replace a lookup table of hot topics with an updated list in a single statement.
sqlcmd> exec @SwapTables Hot_Topics Hot_Topics_Update;
@SystemCatalog
@SystemCatalog — Returns metadata about the database schema.
Syntax
@SystemCatalog String component
Description
The @SystemCatalog system procedure returns information about the schema of the VoltDB database, depending upon the component keyword you specify.
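For instance, the following minimal sketch (an illustration only, not part of the reference material in this section) shows one way a Java client might use the COLUMNS component to list the partitioning column of each partitioned table. It assumes a connected client object, as in the other program examples in this chapter, and relies on the REMARKS value documented under the COLUMNS component below.
VoltTable[] results = null;
try {
    results = client.callProcedure("@SystemCatalog",
                                   "COLUMNS").getResults();
    for (VoltTable t : results) {
        while (t.advanceRow()) {
            /* REMARKS is "PARTITION_COLUMN" for a table's partitioning key */
            if ("PARTITION_COLUMN".equals(t.getString("REMARKS"))) {
                System.out.printf("Table %s is partitioned on column %s\n",
                    t.getString("TABLE_NAME"),
                    t.getString("COLUMN_NAME"));
            }
        }
    }
}
catch (Exception e) { e.printStackTrace(); }
The same pattern applies to the other components; only the column names of the returned VoltTable change.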
The following are the allowable values of component :\n\"COLUMNS \" Returns a list of columns for all of the tables in the database.\n\"FUNCTIONS \" Returns information about user-defined functions in the database.\n\"INDEXINFO \" Returns information about the indexes in the database schema. Note that the\nprocedure returns information for each column in the index. In other words,\nif an index is composed of three columns, the result set will include three\nseparate entries for the index, one for each column.\n\"PRIMARYKEYS \" Returns information about the primary keys in the database schema. Note\nthat the procedure returns information for each column in the primary key.\nIf an primary key is composed of three columns, the result set will include\nthree separate entries.\n\"PROCEDURECOLUM-\nNS\"Returns information about the arguments to the stored procedures.\n\"PROCEDURES \" Returns information about the stored procedures defined in the database.\n\"ROLES\" Returns information about the roles defined in the database and their asso-\nciated permissions.\n\"TABLES \" Returns information about the tables in the database.\n\"USERS\" Returns information about the users defined in the database and their asso-\nciated roles.\nReturn Values\nReturns a different VoltTable for each component. The layout of the VoltTables is designed to match the\ncorresponding JDBC data structures. Columns are provided for all JDBC properties, but where VoltDB\nhas no corresponding element the column is unused and a null value is returned.\nFor the COLUMNS component, the VoltTable has the following columns:\nName Datatype Description\nTABLE_CAT STRING Unused.\nTABLE_SCHEM STRING Unused.\nTABLE_NAME STRING The name of the database table the column belongs to.\n454System Procedures\nName Datatype Description\nCOLUMN_NAME STRING The name of the column.\nDATA_TYPE INTEGER An enumerated value specifying the corresponding Java\nSQL datatype of the column.\nTYPE_NAME STRING A string value specifying the datatype of the column.\nCOLUMN_SIZE INTEGER The length of the column in bits, characters, or digits, de-\npending on the datatype.\nBUFFER_LENGTH INTEGER Unused.\nDECIMAL_DIGITS INTEGER The number of fractional digits in a DECIMAL datatype\ncolumn. (Null for all other datatypes.)\nNUM_PREC_RADIX INTEGER Specifies the radix, or numeric base, for calculating the col-\numn size. A radix of 2 indicates the column size is measured\nin bits while a radix of 10 indicates a measurement in bytes\nor digits.\nNULLABLE INTEGER Indicates whether the column value can be null (1) or not\n(0).\nREMARKS STRING Contains the string \"PARTITION_COLUMN\" if the col-\numn is the partitioning key for a partitioned table. Other-\nwise null.\nCOLUMN_DEF STRING The default value for the column.\nSQL_DATA_TYPE INTEGER Unused.\nSQL_DATETIME_SUB INTEGER Unused.\nCHAR_OCTET_LENGTH INTEGER For variable length columns (VARCHAR and VARBI-\nNARY), the maximum length of the column. 
Null for all\nother datatypes.\nORDINAL_POSITION INTEGER An index specifying the position of the column in the list of\ncolumns for the table, starting at 1.\nIS_NULLABLE STRING Specifies whether the column can contain a null value\n(\"YES\") or not (\"NO\").\nSCOPE_CATALOG STRING Unused.\nSCOPE_SCHEMA STRING Unused.\nSCOPE_TABLE STRING Unused.\nSOURCE_DATE_TYPE SMALLINT Unused.\nIS_AUTOINCREMENT STRING Specifies whether the column is auto-incrementing or not.\n(Always returns \"NO\").\nFor the FUNCTIONS component, the VoltTable has the following columns:\nName Datatype Description\nFUNCTION_TYPE STRING The function type is always \"scalar\".\nFUNCTION_NAME STRING The name of the user-defined function.\nCLASS_NAME STRING The Java class name that contains the user-defined function\nmethod.\nMETHOD_NAME STRING The name of the method that implements the user-defined\nfunction.\n455System Procedures\nFor the INDEXINFO component, the VoltTable has the following columns:\nName Datatype Description\nTABLE_CAT STRING Unused.\nTABLE_SCHEM STRING Unused.\nTABLE_NAME STRING The name of the database table the index applies to.\nNON_UNIQUE TINYINT Value specifying whether the index is unique (0) or not (1).\nINDEX_QUALIFIER STRING Unused.\nINDEX_NAME STRING The name of the index that includes the current column.\nTYPE SMALLINT A value indicating the type of index, which is always three\n(3).\nORDINAL_POSITION SMALLINT An index specifying the position of the column in the index,\nstarting at 1.\nCOLUMN_NAME STRING The name of the column.\nASC_OR_DESC STRING A string value specifying the sort order of the index. Pos-\nsible values are \"A\" for ascending or null for unsorted in-\ndexes.\nCARDINALITY INTEGER Unused.\nPAGES INTEGER Unused.\nFILTER_CONDITION STRING Unused.\nFor the PRIMARYKEYS component, the VoltTable has the following columns:\nName Datatype Description\nTABLE_CAT STRING Unused.\nTABLE_SCHEM STRING Unused.\nTABLE_NAME STRING The name of the database table.\nCOLUMN_NAME STRING The name of the column in the primary key.\nKEY_SEQ SMALLINT An index specifying the position of the column in the pri-\nmary key, starting at 1.\nPK_NAME STRING The name of the primary key.\nFor the PROCEDURECOLUMNS component, the VoltTable has the following columns:\nName Datatype Description\nPROCEDURE_CAT STRING Unused.\nPROCEDURE_SCHEM STRING Unused.\nPROCEDURE_NAME STRING The name of the stored procedure.\nCOLUMN_NAME STRING The name of the procedure parameter.\nCOLUMN_TYPE SMALLINT An enumerated value specifying the parameter type. Al-\nways returns 1, corresponding to procedureColumnIn.\nDATA_TYPE INTEGER An enumerated value specifying the corresponding Java\nSQL datatype of the column.\n456System Procedures\nName Datatype Description\nTYPE_NAME STRING A string value specifying the datatype of the parameter.\nPRECISION INTEGER The length of the parameter in bits, characters, or digits,\ndepending on the datatype.\nLENGTH INTEGER The length of the parameter in bytes. For variable length\ndatatypes (VARCHAR and VARBINARY), this value\nspecifies the maximum possible length.\nSCALE SMALLINT The number of fractional digits in a DECIMAL datatype\nparameter. (Null for all other datatypes.)\nRADIX SMALLINT Specifies the radix, or numeric base, for calculating the pre-\ncision. 
A radix of 2 indicates the precision is measured in\nbits while a radix of 10 indicates a measurement in bytes\nor digits.\nNULLABLE SMALLINT Unused.\nREMARKS STRING If this column contains the string \"PARTITION_PARA-\nMETER\", the parameter is the partitioning key for a sin-\ngle-partitioned procedure. If the column contains the string\n\"ARRAY_PARAMETER\" the parameter is a native Java\narray. Otherwise this column is null.\nCOLUMN_DEF STRING Unused.\nSQL_DATA_TYPE INTEGER Unused.\nSQL_DATETIME_SUB INTEGER Unused.\nCHAR_OCTET_LENGTH INTEGER For variable length columns (VARCHAR and VARBI-\nNARY), the maximum length of the column. Null for all\nother datatypes.\nORDINAL_POSITION INTEGER An index specifying the position in the parameter list for the\nprocedure, starting at 1.\nIS_NULLABLE STRING Unused.\nSPECIFIC_NAME STRING Same as COLUMN_NAME\nFor the PROCEDURES component, the VoltTable has the following columns:\nName Datatype Description\nPROCEDURE_CAT STRING Unused.\nPROCEDURE_SCHEM STRING Unused.\nPROCEDURE_NAME STRING The name of the stored procedure.\nRESERVED1 STRING Unused.\nRESERVED2 STRING Unused.\nRESERVED3 STRING Unused.\nREMARKS STRING Unused.\nPROCEDURE_TYPE SMALLINT An enumerated value that specifies the type of procedure.\nAlways returns zero (0), indicating \"unknown\".\nSPECIFIC_NAME STRING Same as PROCEDURE_NAME.\nFor the ROLES component, the VoltTable has the following columns:\n457System Procedures\nName Datatype Description\nROLE STRING The name of the role.\nPERMISSIONS STRING A comma-separated list of permissions associated with the\nrole.\nFor the TABLES component, the VoltTable has the following columns:\nName Datatype Description\nTABLE_CAT STRING Unused.\nTABLE_SCHEM STRING Unused.\nTABLE_NAME STRING The name of the database table.\nTABLE_TYPE STRING Specifies whether the table is a data table (\"TABLE\"), a ma-\nterialized view (\"VIEW\"), or a stream ('EXPORT\").\nREMARKS STRING Unused.\nTYPE_CAT STRING Unused.\nTYPE_SCHEM STRING Unused.\nTYPE_NAME STRING Unused.\nSELF_REFERENCING\n_COL_NAMESTRING Unused.\nREF_GENERATION STRING Unused.\nFor the USERS component, the VoltTable has the following columns:\nName Datatype Description\nUSER STRING The name of the user.\nROLES STRING A comma-separated list of roles assigned to the user.\nExamples\nThe following example calls @SystemCatalog to list the stored procedures in the active database schema:\n$ sqlcmd\n1> exec @SystemCatalog procedures;\nThe next program example uses @SystemCatalog to display information about the tables in the database\nschema.\nVoltTable[] results = null;\ntry {\n results = client.callProcedure(\"@SystemCatalog\",\n \"TABLES\").getResults();\n System.out.println(\"Information about the database schema:\");\n for (VoltTable node : results) System.out.println(node.toString());\n}\ncatch (Exception e) {\n e.printStackTrace();\n}\n458System Procedures\n@SystemInformaon\n@SystemInformation — Returns configuration information about VoltDB and the individual nodes of the\ndatabase cluster.\nSyntax\n@SystemInformation\n@SystemInformation String component\nDescription\nThe @SystemInformation system procedure returns information about the configuration of the VoltDB\ndatabase or the individual nodes of the database cluster, depending upon the component keyword you\nspecify. The following are the allowable values of component :\n\"DEPLOY-\nMENT\"Returns information about the configuration of the database. 
In particular, this key-\nword returns information about the various features and settings enabled through the\nconfiguration file, such as export, snapshots, K-safety, and so on. These properties\nare returned in a single VoltTable of name/value pairs.\n\"LICENSE \"Returns information about the license in use by the database. The license properties\nare returned in a single VoltTable of name/value pairs.\n\"OVERVIEW \"Returns information about the individual servers in the database cluster, including the\nhost name, the IP address, the version of VoltDB running on the server, as well as the\npath to the configuration file in use. The overview also includes entries for the start\ntime of the server and length of time the server has been running.\nIf you do not specify a component, @SystemInformation returns the results of the OVERVIEW component\n(to provide compatibility with previous versions of the procedure).\nReturn Values\nReturns one of two VoltTables depending upon which component is requested.\nDEPLOYMENT — returns one row for each configuration property.\nName Datatype Description\nPROPERTY STRING The name of the configuration property. The system procedure reports\nthe following properties, depending on what features are enabled in\nthe database configuration:\n•adminport — admin port number\n•commandlogenabled — whether command logging is enabled or\nnot\n•commandlogfreqtime — frequency of command logging in mil-\nliseconds\n•commandlogfreqtxns — frequency of command logging in number\nof transactions\n•commandlogmode — command logging mode, sync or async\n459System Procedures\nName Datatype Description\n•commandlogpath — directory path to command log segments\n•commandlogsnapshotpath — directory path to command log snap-\nshots\n•droverflowpath — directory path for DR overflow\n•elasticduration — target duration of rebalance transactions in mil-\nliseconds\n•elasticthroughput — target throughput of rebalance transactions in\nMB/second\n•export — whether export is enabled or not\n•exportcursorpath — directory path for storing the current cursor\nlocation for export queues\n•exportoverflowpath — directory path to export overflow\n•heartbeattimeout — heartbeat timeout setting in seconds\n•hostcount — full number of servers in the cluster\n•httpenabled — whether the httpd port is enabled or not\n•httpport — httpd port number\n•jsonenabled — whether JSON is enabled on the httpd port or not\n•kfactor — K-safety value\n•largequeryswappath — directory path used as temporary storage\nfor processing large queries\n•partitiondetection — whether network partition detection is en-\nabled or not\n•querytimeout — default query timeout in seconds\n•sitesperhost — number of sites per host\n•snapshotenabled — whether automatic snapshots are enabled or not\n•snapshotfrequency — frequency of automatic snapshots\n•snapshotpath — directory path to automatic snapshot files\n•snapshotprefix — unique file prefix for automatic snapshots\n•snapshotpriority — system priority of automatic snapshots\n•snapshotretain — number of automatic snapshots to retain\n•temptablesmaxsize — maximum size of the temp tables\n•users — list of user names and their roles\n•voltdbroot — path of database root directory\nVALUE STRING The corresponding value of that property in the configuration file (ei-\nther explicitly or by default).\nLICENSE — returns one row for each license property.\nName Datatype Description\nPROPERTY STRING The name of the license property. 
The system procedure reports the\nfollowing properties, depending on what license features are enabled:\n•PERMIT_VERSION — for internal use\n•PERMIT_SCHEME — for internal use\n•TYPE — the type of license in use\n•ISSUER_COMPANY — the company that issued the license\n•ISSUER_EMAIL — the email address of the company that issued\nthe license\n•ISSUER_URL — the website of the company that issued the li-\ncense\n•ISSUER_PHONE — the telephone number of the company that\nissued the license\n460System Procedures\nName Datatype Description\n•ISSUE_DATE — the date the license was issued\n•LICENSEE — who the license was issued to\n•EXPIRATION — the date the license expires\n•HOSTCOUNT_MAX — the maximum number of hosts supported\nby the license\n•FEATURES_TRIAL — whether this is a trial license or not\n•FEATURES_UNRESTRICTED — whether the license allows un-\nrestricted access to all features\n•FEATURES_COMMANDLOGGING — whether the license sup-\nports the use of command logging\n•FEATURES_DRREPLICATION — whether the license supports\nthe use of passive database replication (DR)\n•FEATURES_DRACTIVEACTIVE — whether the license sup-\nports the use of cross data center replication (XDCR)\n•NOTE — notes regarding the license\n•SIGNATURE — an encrypted signature identifying the license (for\ninternal use)\nVALUE STRING The corresponding value of the property (either set explicitly by the\nlicense or by default).\nOVERVIEW — returns a row for every system attribute on every node of the cluster. The rows contain\nan additional column to identify the host node associated with the attribute.\nName Datatype Description\nHOST_ID INTEGER A numeric identifier for the host node.\nKEY STRING The name of the system attribute.\nVALUE STRING The corresponding value of that attribute for the specified host. 
The\nsystem procedure reports the following properties:\n•ADMININTERFACE — admin network interface\n•ADMINPORT — admin port number\n•BUILDSTRING — VoltDB software build string including version\nnumber\n•CATALOG — for internal use\n•CATALOGCRC — for internal use\n•CLIENTINTERFACE — client network interface\n•CLUSTERID — DR cluster ID\n•CLUSTERSAFETY — whether the database is running in normal\nK-safe mode (FULL), or if it has reduced k-safety due to non-de-\nterministic results from a transaction (REDUCED)\n•CLUSTERSTATE — whether the database is running normally or\npaused (that is, in admin mode)\n•DEPLOYMENT — path to current configuration in root directory\n•DRINTERFACE — database replication (DR) network interface\n•DRPORT — database replication (DR) port number\n•DRPUBLICINTERFACE — network interface advertised to mem-\nber clusters as DR interface\n•DRPUBLICPORT — port advertised to member clusters as DR\nport\n•FULLCLUSTERSIZE — initial cluster size\n•HOSTNAME — server host name\n461System Procedures\nName Datatype Description\n•HTTPINTERFACE — httpd network interface\n•HTTPPORT — httpd port number\n•INITIALIZED — for internal use\n•INTERNALINTERFACE — internal network interface\n•INTERNALPORT — internal port number\n•IPADDRESS — server IP address\n•IV2ENABLED — for internal use\n•KUBERNETES — whether the database is running under Kuber-\nnetes (true or false)\n•LAST_UPDATECORE_DURATION — length of last schema\nchange transaction\n•LICENSE — product license information\n•LOG4JPORT — log4J port number\n•PARTITIONGROUP — for internal use\n•PLACEMENTGROUP — name of the placement group the server\nbelongs to\n•PUBLICINTERFACE — network interface advertised by VoltDB\nManagement Center as the external interface\n•REPLICATIONROLE — database replication (DR) role,\nMASTER, REPLICA, XDCR, or none\n•STARTTIME — timestamp when cluster started\n•TOPICSPUBLICINTERFACE — network interface advertised to\nclients as the topic broker interface\n•TOPICSPUBLICPORT — network port advertised to clients as the\ntopic broker port\n•TOPICSPORT — Topic broker port number\n•UPTIME — how long the cluster has been running\n•VERSION — VoltDB software version number\n•VOLTDBROOT — path of database root directory\n•ZKINTERFACE — ZooKeeper network interface\n•ZKPORT — ZooKeeper port number\nExamples\nThe first example displays information about the individual servers in the database cluster:\n$ sqlcmd\n1> exec @SystemInformation overview;\nThe following program example uses @SystemInformation to display information about the nodes in the\ncluster and then about the database itself.\nVoltTable[] results = null;\ntry {\n results = client.callProcedure(\"@SystemInformation\",\n \"OVERVIEW\").getResults();\n System.out.println(\"Information about the database cluster:\");\n for (VoltTable node : results) System.out.println(node.toString());\n results = client.callProcedure(\"@SystemInformation\",\n \"DEPLOYMENT\").getResults();\n System.out.println(\"Information about the database configuration:\");\n for (VoltTable node : results) System.out.println(node.toString());\n462System Procedures\n}\ncatch (Exception e) {\n e.printStackTrace();\n}\n463System Procedures\n@UpdateApplicaonCatalog\n@UpdateApplicationCatalog — Reconfigures the database by replacing the configuration file.\nSyntax\n@UpdateApplicationCatalog byte[] null, String configuration\nDescription\nThe @UpdateApplicationCatalog system procedure lets you modify the configuration of a running data-\nbase without having to shutdown and restart.\nNote\nThe 
@UpdateApplicationCatalog system procedure originally supported updating a precompiled\nschema called an application catalog . Application catalogs are no longer supported in favor of\ninteractive DDL statements. However, the first argument is still required and should be sent as\na null value. See the voltadmin update command for an easier way to update the configuration\nfrom the command line.\nThe arguments to the system procedure are a null value and a string containing the contents of the config-\nuration file. That is, you pass the actual contents of the configuration file as a string. The new settings\nmust not contain any changes other than the allowed modifications listed in the description of the voltad-\nmin update command. If there are any other changes, the procedure returns an error indicating that an\nincompatible change has been found.\nTo simplify the process of encoding the configuration file contents, the Java client interface includes two\nhelper methods (one synchronous and one asynchronous) to encode the file and issue the stored procedure\nrequest:\nClientResponse client.updateApplicationCatalog( null, File configuration-file )\nClientResponse client.updateApplicationCatalog( clientCallback callback, null, File configura-\ntion-file)\nSimilarly, the sqlcmd utility interprets the configuration argument as a filename.\nExamples\nThe following example uses sqlcmd to update the configuration using the file myconfig.xml :\n$ sqlcmd\n1> exec @UpdateApplicationCatalog null, myconfig.xml;\nAn alternative is to use the voltadmin update command. In which case, the following command performs\nthe same function as the preceding sqlcmd example:\n$ voltadmin update myconfig.xml\nThe following program example uses the @UpdateApplicationCatalog procedure to update the current\ndatabase catalog, using the configuration file at project/production.xml with the synchronous\nhelper method.\n464System Procedures\nString newconfig = \"project/production.xml\";\ntry {\n client.updateApplicationCatalog(null, new File(newconfig));\n}\ncatch (Exception e) { e.printStackTrace(); }\n465System Procedures\n@UpdateClasses\n@UpdateClasses — Adds and removes Java classes from the database.\nSyntax\n@UpdateClasses byte[] JAR-file, String class-selector\nDescription\nThe @UpdateClasses system procedure performs two functions:\n•Loads into the database any Java classes and resources in the JAR file passed as the first parameter\n•Removes any classes matching the class selector string passed as the second parameter\nYou need to compile and pack your stored procedure classes into a JAR file and load them into the database\nusing @UpdateClasses before entering the CREATE PROCEDURE statements that define those classes\nas VoltDB stored procedures. Note that, for interactive use, the sqlcmd utility has two directives, load\nclasses and remove classes , that perform these actions in separate steps.\nTo remove classes, you specify the class names in the second parameter, the class selector. You can include\nmultiple class selectors using a comma-separated list. You can also use Ant-style wildcards in the class\nspecification to identify multiple classes. For example, the following command deletes all classes that are\nchildren of org.mycompany.utils as well as *.DebugHandler:\nsqlcmd\n1> exec @UpdateClasses NULL \"org.mycompany.utils.*,*.DebugHandler\";\nYou can also use the @UpdateClasses system procedure to include reusable code that is accessed by\nmultiple stored procedures. 
Reusable code includes both resource files, such as XML or other data files,\nas well as classes accessed by stored procedures.\n•Resource files must be stored in a subfolder within the JAR. Resources in the root directory are ignored.\n•Any classes and methods called by stored procedures must follow the same rules for deterministic be-\nhavior that stored procedures follow, as described in Section 5.1.2, “VoltDB Stored Procedures are De-\nterministic” .\n•Use of @UpdateClasses is not recommended for large, established libraries of classes used by stored\nprocedures. For larger, static libraries that do not need to be modified on the fly, the preferred approach\nis to include the code by placing JAR files in the /lib directory where VoltDB is installed on the database\nservers.\nClasses can be overwritten by loading a new class with the same path. Similarly, resource files can be\nupdated by reloading a file with the same name and location. Classes can be removed using the second\nargument to the system procedure (or the remove classes directive). However, there is no mechanism for\nremoving resources files other than classes once they are loaded.\nExamples\nThe following example compiles and packs Java stored procedures into the file myapp.jar. The example\nthen uses @UpdateClasses to load the classes from the JAR file, then defines and partitions a stored pro-\ncedure based on the uploaded classes.\n466System Procedures\n$ javac -cp \"/opt/voltdb/voltdb/*\" -d obj src/myapp/*.java\n$ jar cvf myapp.jar -C obj .\n$ sqlcmd\n1> exec @UpdateClasses myapp.jar \"\";\n2> CREATE PROCEDURE \n3> PARTITION ON TABLE Customer COLUMN CustomerID\n4> FROM CLASS myapp.procedures.AddCustomer;\nThe second example removes the class added and defined in the preceding example. Note that you must\ndrop the procedure definition first; you cannot delete classes that are referenced by defined stored proce-\ndures.\n$ sqlcmd\n1> DROP PROCEDURE AddCustomer;\n2> exec @UpdateClasses NULL \"myapp.procedures.AddCustomer\";\nAs an alternative, the loading and removing of classes can be performed using native sqlcmd directives\nload classes and remove classes . So the previous tasks can be performed using the following commands:\n$ sqlcmd\n1> load classes myapp.jar \"\";\n2> CREATE PROCEDURE \n3> PARTITION ON TABLE Customer COLUMN CustomerID\n4> FROM CLASS myapp.procedures.AddCustomer;\n [ . . . ]\n1> DROP PROCEDURE AddCustomer;\n2> remove classes \"myapp.procedures.AddCustomer\";\n467System Procedures\n@UpdateLogging\n@UpdateLogging — Changes the logging configuration for a running database.\nSyntax\n@UpdateLogging CString configuration\nDescription\nThe @UpdateLogging system procedure lets you change the logging configuration for VoltDB. The second\nargument, configuration , is a text string containing the Log4J XML configuration definition.\nReturn Values\nReturns one VoltTable with one row.\nName Datatype Description\nSTATUS BIGINT Always returns the value zero (0) indicating success.\nExamples\nFrom the command line, the recommended way to update the Log4J configuration is to use the voltadmin\nlog4j command, specifying the location of an updated configuration file. 
For example:\n$ voltadmin log4j mylog4j.xml\nThe following program example demonstrates another way to update the logging, using the system pro-\ncedure and passing the contents of an XML file as the argument:\ntry {\n Scanner scan = new Scanner(new File(xmlfilename)); \n scan.useDelimiter(\"\\\\Z\"); \n String content = scan.next(); \n client.callProcedure(\"@UpdateLogging\",content);\n}\ncatch (Exception e) {\n e.printStackTrace();\n}\n468" } ]
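For completeness, here is a small variation of the program example above (a sketch, not taken from the product documentation) that also inspects the single-row STATUS column described under Return Values. It assumes the same already-connected client object and xmlfilename variable as the example, and the standard VoltTable row accessors (advanceRow, getLong):

try {
    Scanner scan = new Scanner(new File(xmlfilename));
    scan.useDelimiter("\\Z");
    String content = scan.next();
    // Call @UpdateLogging and read back the single-row result table.
    VoltTable status = client.callProcedure("@UpdateLogging", content)
                             .getResults()[0];
    // Per the Return Values table, STATUS is always zero on success.
    if (status.advanceRow() && status.getLong("STATUS") == 0) {
        System.out.println("Log4J configuration updated.");
    }
}
catch (Exception e) {
    e.printStackTrace();
}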
{ "category": "App Definition and Development", "file_name": "UsingVoltDB.pdf", "project_name": "VoltDB", "subcategory": "Database" }
[ { "data": "The Spreadsort High-Performance General-Case \nSorting Algorithm \n \nSteven J. Ross \nP. O. Box 513 \nClinton, NY 13323 \n \nAbstract \nA high-performance nearly in-place genera l-case sorting algorithm named SpreadSort is \ndemonstrated. It is approximatel y 4X as fast as Quicksort in normal cases, and up to 18X as fast \nwith distributions of limited variation (much like Bucketsort). The technique is mixed \ndistributional and comparison-based, merging many of the advantages of both techniques. Spreadsort can operate recursively, but is O(n) for continuous integr able functions, and has \nbetter than O(nlog(n)) worst-case performance when used with distributions where the keys have \nfinite length, so recursion past the second iteration is rare. This algorithm can be modified to be \nin-place with a modest speed loss. \nKeywords: Sorting, Quicksort, In-pl ace, Algorithm, Bucketsort, Radixsort \n \n1. Introduction \n“Sorting represents one of the most basic \noperations in computer science”[1] as it is the \nordering of a set in ascending or descending \norder, an operation with broad applicability. Along with significant theoretical interest, it has enormous practical application, using approximately 20 percent of computing \nresources today[2]. It is time-critical for \napplications varying from Databases[3] to Data Compression and web searches[4]. Prof. \nKnuth argues that θ(nlog(n)) performance is \nthe best to be expected of general-case \nsorting, depending on comparison-based methods in his proof[5]. \n Knuth was wrong about sorting. His proof is correct, based on his assumptions, but his assumptions and therefore \nconclusions are wrong. A sorting algorithm \ndoes not need to be purely comparison-based to perform well in all cases. Additionally, a comparison is not always a constant-time operation. The Spreadsort sorting algorithm \ndescribed here will work on any problem \nwith a total ordering (including ones with duplicates), which Knuth has stated as a requirement to consider the problem sorting[5]. It will also perform faster, with better than or equal average case and worst \ncase computational order as opposed to comparison-based sorting for all \ndistributions. In this paper, the weaknesses in Knuth’s assumptions are identified and discussed. Then, the Spreadsort algorithm is described. Finally, an analysis is made of the relative performance of Spreadsort and \ncomparison-based sorting, including such \ntechniques as Quicksort and Mergesort.\n \n \n1.1 Sorting Assumptions 1: \nComparison as a Constant Time Operation \nOne implicit assumption that is common in \nthe field of sorting is that a comparison is a \nconstant-time operation. This assumption is made in [5], in the discussion of Shellsort, algorithm D, when O(nlog(n)\n2) comparisons \nare considered equivalent to O(nlog(n)2) \nrunning time. Knuth avoids stating this \nassumption explicitly[5]. It is an assumption that is used when O(nlog(n)\n2) comparisons \nare considered equivalent to O(nlog(n)2) \noperations during a sort, where n is the number of items being sorted. \n A comparison does not take constant \ntime in the worst case. A comparison is an attempt to determine which of two variables is greater, or if they are equal. It will not stop until one of these results is determined. \nThere is only a finite length of the keys that \ncan be compared at one time, without shifting memory to load in another section. On a \nmodern microprocessor, this usually corresponds to the bus width. 
Thus a 32-bit \nprocessor can only compare 4 bytes in one \noperation; it needs at least one more operation to compare 8 bytes if the first four bytes are equal and the end hasn’t been reached. This comes down to a worst-case \nperformance of \nOb\nw \n   \n  where b is the length \nin bits of the key and w is the width in bits of \nthe maximum section the processor can \nhandle at a time. In the case of long strings \nthat differ at a random point in their length, this worst-case performance will be achieved. This means that a comparison-based algorithm that takes O(n log(n)) comparisons \ntakes \nOnblogn()\nw \n    \n   time. \n \n1.2 Sorting Assumpti ons 2: General-\ncase sorting must be Comparison-based \n“Now if we set N = n!, we get a lower bound \nfor the average number of comparisons in \nany sorting scheme. Asymptotically \nspeaking, this lower bound is \nlgn!+O1()=nlgn−n\nln2+Ologn()() .” [5] \npg. 193. \nThe above quotation assumes that a \npurely comparison-based algorithm obtains the minimum number of comparisons. It may seem necessary to assume that an algorithm \nbe purely comparison based to evaluate its \nrelative performance, but if it uses a small constant number of O(n) operations, it does not add to the computational order of the comparisons. Spreadsort can obtain significantly superior average-case \nperformance by splitting up a problem for \neasier comparison sorting. “Studies of the inherent complexity of sorting have usually been directed towards minimizing the number of times we make \ncomparisons between keys while sorting n \nitems.” [5] pg. 181. Partially due to the above logic, it is common to assume that algorithms having any distributional basis are not useful in the \ngeneral case[5]. While this is clearly true of Bucketsort, where the number of memory \nlocations goes up exponentially with the \nlength of the key in bytes, that does not make the assumption true in general. \nFor example, Radixsort, takes O(nb) \ntime, but can be used on any distribution. If log(n) is comparable in size to the bit width \nw, then Radixsort will have comparable \nworst-case performance to a comparison-based technique, and it can be even faster for very large n. Radixsort is a high-performance general-case distributional \nsorting algorithm that is usually slower than \nthe best know comparison-based algorithm Quicksort[6], but not badly so[7]. It is thus a general-case distributional sorting algorithm that refutes the common assumption. \nSpreadsort, described below, is a mixed \ndistributional and comparison-based algorithm that has high average-case performance and excellent worst-case performance. The code used in testing exhibits is fully generalized with a user-\ndefined value() method, much like \nQuicksort’s compare() method. The value() \nmethod must take in an object and return its corresponding integer value. For large byte widths, a position must also be passed in, and \nthere must be a method to return the length in \nbytes of an object.\n \n \n2 SpreadSort \nMost modern comparison-based algorithms \nare based on some variant of the divide and conquer technique, where the list is recursively split in half until each of the \npieces is small enough to be quickly sorted, \nand then the list is reformed fully sorted. Mergesort implements this process in reverse, but the same logarithmic progression is apparent. One question rarely asked about \nthese splittings is: why divide by two, and not \nby 3, 4, or some much larger number? 
What is the optimal number of pieces to split into at each step? It is easiest to implement splitting by factors of two, but that doesn't necessarily \ncorrelate with the best performance. \nSpreadsort uses the theo ry that the optimum \nnumber of bins is a fraction of n. This optimum number is determined by \nminimizing the sum of the average bin overhead time and the average bin subsorting \ntime, both of these times being functions of \nthe number of bins. This bin count has been empirically found to be in the range of n/4 to \nn/8 on most systems. \n The Spreadsort algorithm is a different divide and conquer algorithm, \ndividing by a fraction of n instead of by 2. It \nis a recursive algorithm, as with other divide and conquer algorithms. The recursive part begins by calculating the maximum and minimum values of the distribution (a quick \nO(n) task), then evenly splitting up the \ndistribution of possible key values ( m) in \nbetween these values into ( n/c) bins, where c \nis a small positive integer. Note that a similar technique can be used on keys of \nindeterminate length, if the key is assumed to \nbe followed by an infinite succession of minimum values. Each item's key value is then divided by a previously calculated factor (m/(n/c)) to decide which bin to put it in. n \nitems are thus mapped to ( n/c) bins in an \nO(n) operation. By doing this, the \ndistribution size for each bin is cut by ( n/c), \nand the average number of items per bin is c. Then a series of tests is applied: If the number of bins is greater than or equal to the \nrange of keys, then the data is already sorted \n(see Bucketsort), and no bin-by-bin tests are necessary. If the bins aren’t already fully sorted, then the comparison below is checked: \nlogn2()()2\n2<logm2() \n \nIf this comparison is true, then the worst case for the recursive application of Spreadsort \n(assuming constant-time comparisons for the \nmoment) is worse than the normal case for Quicksort, so Quicksort is selected for the bin. Otherwise, recursive application of Spreadsort can continue cutting up the problem both in terms of key size and \nconcentration. This recursive application has \na worst case performance that can be calculated using the assumption of the branching tree structure, with a division into \ntwo equal-sized pieces per recursive operation: \n \nx is the number of Spreadsort recursive operations necessary to sort a list in worst case. x is the smallest integer such that \nm\nn\nc20⋅n\nc21⋅...⋅n\nc2x \n   \n  ≤1 \nTaking the case where the sides are equal and \nmultiplying the series, x can be solved for. \nm=nx\ncx2xx−1()()\nlog2m()=xlog2n()−xx−1() log22()−xlog2c()\n−x2+xlog2n()−1−log2c() () −log2m()=0\n−x2+xlog2n\nc \n   \n  −1 \n   \n  −log2m()=0\nx=−log2n\nc \n   \n  −1 \n   \n  ±log2n\nc \n   \n  −1 \n   \n  2\n−4log2m()\n−2\nx=log2n\nc \n   \n  −1 \n   \n  −log2n\nc \n   \n  −1 \n   \n  2\n−4log2m()\n2\nThis requires: \nlog2n\nc \n   \n  −1 \n   \n  2\n≥4log2m() \nIt is notable that if this condition is true, x is \nalways less than log 2(n), as by inspection: \nlog2n\nc \n   \n  −1 \n   \n  \n2<log2n() \n As long as \nlog2n\nc \n   \n  −1 \n   \n  2\n≥4log2m() is true, then the \ndata is fastest sorted by Spreadsort. When it \nis not true, Quicksort is used. Another \nalgorithm, such as Mergesort can be used \ninstead of Quicksort if strict O(nlog(n)) comparisons performance is considered necessary. 
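To make the splitting step described above concrete, the following is a minimal sketch of one Spreadsort-style pass over 32-bit integer keys. It is not the paper's implementation: the bin count of n/8, the small-list cutoff, and the recursion cap are illustrative stand-ins for the tests just derived, and Collections.sort plays the role of the comparison-based fallback (Quicksort or Mergesort).

// Illustrative sketch only: one distribution pass, then recurse or fall back.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SpreadsortSketch {

    public static void sort(List<Integer> keys) {
        sortRange(keys, 0);
    }

    private static void sortRange(List<Integer> keys, int depth) {
        // Small bins and deep recursions are handed to a comparison sort,
        // standing in for the Quicksort/Mergesort fallback in the paper.
        if (keys.size() < 16 || depth > 2) {
            Collections.sort(keys);
            return;
        }
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (int k : keys) {                 // O(n) scan for the key range
            min = Math.min(min, k);
            max = Math.max(max, k);
        }
        if (min == max) return;              // all keys equal: nothing to do
        int binCount = Math.max(2, keys.size() / 8);   // a fraction of n
        long range = max - min + 1;
        List<List<Integer>> bins = new ArrayList<>(binCount);
        for (int i = 0; i < binCount; i++) bins.add(new ArrayList<>());
        for (int k : keys) {
            // Map each key to one of binCount evenly spaced bins in O(1).
            int b = (int) ((k - min) * binCount / range);
            bins.get(b).add(k);
        }
        keys.clear();
        for (List<Integer> bin : bins) {
            // If the bins cover every possible key value, the pass above has
            // already sorted the data (the Bucketsort case); otherwise recurse
            // on a bin whose key range has been cut by a factor of binCount.
            if (bin.size() > 1 && range > binCount) {
                sortRange(bin, depth + 1);
            }
            keys.addAll(bin);
        }
    }
}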
The decision stop because the data is already sorted, continue with \nSpreadsort, or stop and use Quicksort is made with every bin created. This gives Spreadsort \nthe same absolute worst-case performance as \nthe comparison-based algorithm it is used with. Distributions where \nlog2n\nc \n   \n  −1 \n   \n  2\n>4log2m(), which are \nrelatively common, have a better worst-case \nperformance than O(nlog(n)) comparisons: \nx=log2n\nc \n   \n  −1 \n   \n  −log2n\nc \n   \n  −1 \n   \n  2\n−4log2m()\n2\nx=log2n\nc \n   \n  −1 \n   \n  1−1−4log2m()\nlog2n\nc \n   \n  −1 \n   \n  2\n2 \n  \n \n  \n \n  \n \n  \n  \n \n  \n \n  \n \n \nx≈\nlog2m()\nlog2n\nc \n   \n  −1 \n   \n  \nUsing the approximation that the square root \nof 1 minus a small value is 1 minus half that \nvalue. It is notable that if the small value equals 1, the below result is still correct. The number of operations is n plus n times the number of recursive calls x: \nOn+nx()\nOn+nlognm() () \nThe major advantage of Spreadsort is \nthat each separation takes only O( n) time and \nsplits the problem into θ(n) pieces. If the \ndistribution is random, then the bins can be sorted separately with a net O( n) time, \nassuming a generally small constant number \nof items , c, per bin. If the distribution is \nGaussian, then it will actually operate much like a simple random distribution for this case, as the tails of a Gaussian taper off rapidly, limiting the total size of the \ndistribution being cut up. For the more spiky \ndistributions, the large spikes can be recursively cut down in key size to the point \nthat they are fully distributionally sorted. Additionally, unless the spike has a \ndiscontinuous shape, the first iteration will \nturn each spike into its own bin, which is then sorted normally. With distributions of signification size (1MB+), no recursive calls after the second are usually necessary on high peaks. \n Worst case performance occurs for \nSpreadsort when the key size is large and the distribution branches with each application of the sort into just a few branches (two, worst case), and each of these branches are at the \nedges of the previous distribution bin. This \nwill limit each recursive distribution size cut, forcing many recursive calls. With a large enough key size and relatively small n, this \ntype of problem will force Spreadsort to fall \nback on an O( nlog(n)) comparisons \ntechnique.\n \n \n3 Other Algorithms: \nA description of the main types of sorting \nalgorithms currently in use can be found in reference 5. Short descriptions of the primary algorithms pertinent to Spreadsort \nare provided in Appendix A. These include \nQuicksort, Mergesort, Bucketsort, and Radixsort. On an Altivec processor, only Bucketsort was capable of outperforming Spreadsort under any circumstances, and that \nbeing when the range of possible keys is \nsmaller than the number of items being sorted. Mergesort is capable of operations on serial media, but there is a high-performance version of Spreadsort that also works with serial media and is much faster than \nMergesort. \n \n4 Performance Comparison: \nSpreadsort reduces most sorting problems to \none or two distribution-based steps, followed by comparison-based sorting of small subbins, taking about 4 comparisons per item. 
In contrast a comparison-based algorithm \nwill take 20 to 30 operations per item for a \nlist of a few million items, but only the first few operations will be in main memory and the rest will move onto the cache. The net effect of this difference in number of \noperations is to give a factor of four to six times speed advantage to the Spreadsort \nalgorithm over Quicksort, for large key \nranges. This advantage is maximized for distributions that consist of only evenly spread (random) numbers and large, thin clumps. It is smaller for distributions with more mixed groupings of medium-sized \nclumps and empty sections. In cases where \nthe range of key values is comparable to or less than the number of items being sorted, the Spreadsort speed ranges between seven \nand eighteen times as fast as Quicksort. \n The latest version of Spreadsort uses \nabout 20% more memory than Quicksort, to hold bin information. By increasing the bin size from 8-16 (a good average size) to 32-64, and more complex rearranging of \nelements, the speed degrades about 10%, but \nthe memory usage becomes less than 1.05n, only a minor increase over Quicksort. If truly in-place sorting is desired, using bin counts that are between the 3/4 and 9/10 power of the number of items works well. \nThe test shown in Figure 1 for semirandom \ndata sets of variable size illustrates this speed \nperformance advantage with tests that include \nread, sort, and write time on a Pentium II 266MHz running LINUX with 64MB of \nRAM. The spikes leading upwards are where \nthe computer ran out of memory, noticeably early with the simple Spreadsort due to its memory requirements. Many of the algorithms slow down significantly on virtual memory right before running out of memory. \nIt should be noted that the file I/O time for SpreadSort and Quicksort in these examples \nis an equal .4s/MB. In the interest of using exact data, this compensation was not \nincluded in Figure 1, but makes the speed \nimprovement for Spreadsort more visible in Figure 2. It should be noted that the file I/O time for the commercial application being tested against is unknown, but “sorting” time is nearly identical for already sorted data, and \nthat the out of memory algorithm is \ndependent on hard drive speed. \nA limited ANSI C version of \nSpreadSort is up to 5.2X as fast as Quicksort \nfor sorting on an Altivec, but uses a little more than twice the memory necessary to hold the data. Figure 3 shows the results for the fully-optimized n early in-place (up to \n1.2n memory) SpreadSort operating on an \nAltivec, as the key range varies. The \ntransition from a bitlength of 24 to 23 is due to the onset of bucketsorting. The transition at bitlength 31 is probably an anomaly due to the start of the test. It is notable that on the \nAltivec Spreadsort never drops below 4X as \nfast as Quicksort. 
Figure 3: Relative Algorithm Performance for 250MB on a 400MHz Altivec (Qsort time, SpreadSort time, and their ratio plotted against key bitlength in bits, 0 to 30).
Figure 1: SpreadSort Speed Comparison (time versus file size, 4 to 256 MB on a log scale, for Quicksort, Best Commercial, Simple Spreadsort, and Out of Memory Spreadsort).
Figure 2: Algorithm Comparison Compensated for File I/O (time versus file size, 4 to 512 MB on a log scale, for the same four sorts).

Spreadsort defeats the θ(nlog(n)) comparison-based limit by combining two aspects of the problem, the distribution and the bin size. By solving both problems simultaneously, it keeps cutting down until one or the other is ready for a quick O(n) solution. With most distributions, this should be just one or two iterations, which has been proven for a similar technique[8]. The simple generalization of Quicksort is obtained for Spreadsort by using a value function that returns a value for a key. This ends up bringing Spreadsort to the point where it can solve the vast majority of sorting problems within 2 iterations plus the time it takes to comparison-sort an 8 item bin.

5 Conclusion
Spreadsort is a practical general-case sorting algorithm with θ(n) average-case performance and good worst-case performance, being O(n log(n)) in comparisons and O(n log_n(m)) in time. It can be used in any situation where a definite ordering can be applied to all possible values, even values of infinite key length. Because the core Spreadsort technique divides the problem both distributionally and numerically (smaller bucket sizes), it makes the problem simpler to solve for both subsidiary comparison-based and distributionally-based algorithms. Each splitting operation takes O(n) time, but will cut the remaining key length by a fraction of n, while cutting the bucket size. With all distributions, each operation will divide the distribution into multiple pieces, commonly a large number. If the distribution is not cut into many pieces, then it is well set up for recursive application and eventual final distributional sorting. If the distribution is cut into many pieces, then a comparison-based sort can easily sort the small subsidiary bins. This improved divide-and-conquer technique provides a significant real performance enhancement over conventional θ(nlog(n)) techniques, such as Quicksort. This performance enhancement is gained by using a normally constant number of time-consuming operations instead of a log(n) number of quick operations. In practical applications processor caches and memory consumption influence speed, but Spreadsort shows a clear improvement. This improvement has been verified by experiment, and shows a general-case distributional algorithm that has superior performance to Quicksort.

References
[1] J. D. Bright, G. F. Sullivan, and G. M. Masson, "A Formally Verified Sorting Certifier," IEEE Transactions on Computers, Vol. 46, No. 12, December 1997.
[2] M. H. Nodine and J. S. Vitter, "Large-Scale Sorting in Parallel Memories," 3rd ACM Symp. on Parallel Algorithms and Architectures, pp. 29-39, 1991.
[3] V. Markl and R. Bayer, "A Cost Function for Uniformly Partitioned UB-Trees," Database Engineering and Applications Symposium, 2000, pp. 410-416.
[4] K.
Sadakane, “A Fast Algorithm for \nMaking Suffix Arrays and for Burrows-Wheeler \nTransformation,” Data Compression Conference \n1998. pp 129-138. \n[5] Donald E. Knuth, The Art of Computer \nProgramming -- Sorting and Searching , vol. 3, \n1997. [6] C.A.R. Hoare, “Quicksort,” Computer J., \nvol. 6, no. 1, pp. 10-15, 1962. \n[7] A. Andersson and S. Nilsson. A New \nEfficient Radix Sort. In 35\nth Symp. On \nFoundations of Computer Science, pp. 714-721, \n1994. \n[8] Markku Tamminen, “Two Levels are as \nGood as Any” J. Algorithms 6, pp. 138-144, 1985\n " } ]
{ "category": "App Definition and Development", "file_name": "original_spreadsort06_2002.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "ZLIB(3) ZLIB(3)\nNAME\nzlib − compression/decompression libr ar y\nSYNOPSIS\n[see zlib.h forfull description]\nDESCRIPTION\nThezliblibrar yis a general purpose data compression libr ar y.The code is thread saf e, assuming\nthat the standard libr ar yfunctions used are thread saf e, such as memor yallocation routines .It\nprovides in-memor ycompression and decompression functions ,including integ rity checks of the\nuncompressed data. This version of the libr ar ysuppor ts only one compression method (defla-\ntion) but other algorithms ma yb ea dded later with the same stream interface.\nCompression can be done in a single step if the b uffers are large enough or can be done b y\nrepeated calls of the compression function. In the latter case ,the application must provide more\ninput and/or consume the output (providing more output space) before each call.\nThe libr ar yalso supports reading and writing files in gzip(1) (.gz) f or mat with an interface similar\nto that of stdio.\nThe libr ar ydoes not install an ysignal handler .The decoder checks the consistency of the com-\npressed data, so the libr ar yshould ne vercrash eveni nt he case of corrupted input.\nAll functions of the compression libr ar yare documented in the file zlib.h .The distr ibution source\nincludes examples of use of the libr ar yin the files test/example.c andtest/minigzip.c, as well as\nother examples in the examples/ director y .\nChanges to this version are documented in the file ChangeLog that accompanies the source.\nzlibis built in to man ylanguages and operating systems ,including but not limited to J ava, Python,\n.NET ,PHP,Per l,Ruby, Swift, and Go.\nAn exper imental package to read and write files in the .zip f or mat, wr itten on top of zlibby G illes\nVollant (info@winimage.com), is a vailable at:\nhttp://www.winimage.com/zLibDll/minizip .html and also in the contr ib/minizip director y of\nthe main zlibsource distribution.\nSEE ALSO\nThezlibwebsite can be found at:\nhttp://zlib.net/\nThe data f or mat used b ythezliblibrar yis described b yRFC (Request for Comments) 1950 to\n1952 in the files:\nhttp://tools.ietf.org/html/rfc1950 (for the zlib header and trailer f or mat)\nhttp://tools.ietf.org/html/rfc1951 (for the deflate compressed data f or mat)\nhttp://tools.ietf.org/html/rfc1952 (for the gzip header and trailer f or mat)\nMar k Nelson wrote an article about zlibforthe Jan. 1997 issue of Dr.Dobb’sJour nal; acopyof\nthe article is a vailable at:\nhttp://mar knelson.us/1997/01/01/zlib-engine/\nREPORTING PROBLEMS\nBefore reporting a problem, please chec kthezlibwebsite to v er ify that you ha ve the latest v er-\nsion of zlib;otherwise ,obtain the latest version and see if the problem still e xists.Please read\nthezlibFA Q at:\nhttp://zlib.net/zlib_faq.html\nbefore asking for help .Send questions and/or comments to zlib@gzip .org, or (for the Windo ws\nDLL version) to Gilles Vollant (info@winimage.com).\n13 Oct 2022 1ZLIB(3) ZLIB(3)\nAUTHORS AND LICENSE\nVersion 1.2.13\nCopyr ight (C) 1995-2022 Jean-loup Gailly and Mar kAdler\nThis software is provided ’as-is’, without an yexpress or implied w arranty .Inn oe vent will the\nauthors be held liable for an ydamages arising from the use of this software.\nPermission is g ranted to an yone to use this software for an ypur pose ,including commercial appli-\ncations ,and to alter it and redistribute it freely ,subject to the following restrictions:\n1. 
The or igin of this software must not be misrepresented; you must not claim that you wrote the\nor iginal software .I fyou use this software in a product, an ac knowledgment in the product doc-\numentation would be appreciated but is not required.\n2. Altered source versions must be plainly mar keda ss uch, and must not be misrepresented as\nbeing the original software.\n3. This notice ma ynot be remo vedo ra ltered from an ysource distribution.\nJean-loup Gailly Mar k Adler\njloup@gzip .org madler@alumni.caltech.edu\nThe deflate f or mat used b yzlibwasdefined b yPhil Katz. The deflate and zlibspecifications\nwere written b yL .P eter Deutsch. Thanks to all the people who reported problems and suggested\nvarious impro vements in zlib;who are too numerous to cite here.\nUNIX manual page b yR .P .C .R odgers ,U .S.N ational Libr ar y of Medicine\n(rodgers@nlm.nih.gov).\n13 Oct 2022 2" } ]
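Since the DESCRIPTION above notes both that compression can be done in a single step when the buffers are large enough and that zlib is built in to Java, the following sketch (not part of the manual page) round-trips a buffer through the zlib data format of RFC 1950/1951 using Java's built-in java.util.zip.Deflater and Inflater classes; the buffer sizes are chosen generously for illustration only.

// Illustrative sketch: single-step in-memory round trip through the zlib format.
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ZlibRoundTrip {
    public static void main(String[] args) throws DataFormatException {
        byte[] input = "hello, hello, hello zlib".getBytes();

        // Compress in one step; the output buffer is oversized so a single
        // deflate() call can finish the stream.
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] compressed = new byte[input.length + 64];
        int compressedLength = deflater.deflate(compressed);
        deflater.end();

        // Decompress; the inflater verifies the integrity of the stream.
        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, compressedLength);
        byte[] restored = new byte[input.length];
        int restoredLength = inflater.inflate(restored);
        inflater.end();

        System.out.println(compressedLength + " -> " + restoredLength + " bytes: "
                + new String(restored, 0, restoredLength));
    }
}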
{ "category": "App Definition and Development", "file_name": "zlib.3.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "E/#0Ecien t Algorithms for Optim um Cycle Mean and Optim um Cost to Time RatioProblemsAli Dasdan Sandy S/. Irani and Ra jesh K/. GuptaDept/. of Computer Science Dept/. of Information and Computer ScienceUniv ersit y of Illinois/, Urbana/, IL /6/1/8/0/1 Univ ersit y of California/, Irvine/, CA /9/2/6/9/7dasdan/@cs/.uiuc/.edu f irani/,rgupta g /@ics/.uci/.ed uAbstractThe goal of this pap er is to iden tify the most e/#0Ecien t al/-gorithms for the optim um mean cycle and optim um costto time ratio problems and compare them with the p opu/-lar ones in the CAD comm unit y /. These problems ha v en u/-merous imp ortan t application s in CAD/, graph theory /, dis/-crete ev en t system theory /, and man ufacturing systems/. Inparticular/, they are fundamen tal to the p erformance anal/-ysis of digital systems suc h as sync hronous/, async hronous/,data/#0Do w/, and em b edded real/-time systems/. F or instance/,algorithms for these problems are used to compute the cyclep erio d of an y cyclic digital system/. Without loss of gen/-eralit y /,w e discuss these algorithms in the con text of theminim um mean cycle problem /#28MCMP/#29/. W e p erformed acomprehensiv e exp erimen tal study of ten leading algorithmsfor MCMP /.W e programmed these algorithms uniformly ande/#0Ecien tly /.W e systematically compared them on a test suitecomp osed of random graphs as w ell as b enc hmark circuits/.Ab o v e all/, our results pro vide imp ortan t insigh ti n to the p er/-formance of these algorithms in practice/. One of the mostsurprising results of this pap er is that Ho w ard/'s algorithm/,kno wn primarily in the sto c hastic con trol comm unit y /,i s b yfar the fastest algorithm on our test suite although the onlykno wn b ound on its running time is exp onen tial/. W e pro videt w o stronger b ounds on its running time/./1 Intro ductionConsider a digraph G /=/#28 V/; E /#29 with n no des and m arcs/.Asso ciate with eac h arc e in E t w on um b ers/: a w eigh t /#28orcost/#29 w /#28 e /#29 and a transit time t /#28 e /#29/. The w eigh t and tran/-sit time of a path in G is equal to the sum of the w eigh tsand transit times of the arcs on the path/, resp ectiv ely /. Thelength of a path is equal to the n um b er of arcs on the path/.Let w /#28 C /#29/, t /#28 C /#29/, and j C j denote the w eigh t/, transit time/, andlength of a cycle C in G /.The /#28cycle/#29 r atio /#1A /#28 C /#29 and the /#28cycle/#29 me an /#15 /#28 C /#29 of cycleC are de/#0Cned as/#1A /#28 C /#29/=\nw /#28 C /#29t /#28 C /#29\n/;t /#28 C /#29 /#3E /0 /; and /#15 /#28 C /#29/=\nw /#28 C /#29j C j\n/;Published in Pro c/. /3/6th Design Automation Conf/./#28D A C/#29/, pp/. /3/7/-/4/2/, Jun/. /1/9/9/9/.\nresp ectiv ely /. Note that /#1A /#28 C /#29 giv es the a v erage w eigh tp e rtransit time/, and /#15 /#28 C /#29 giv es the a v erage arc w eigh to n C /.The cycle ratio is historically called the cost to time ratio/.The mean of C is a sp ecial case of its ratio in that its meanis obtained from its ratio b y setting the transit time of ev eryarc on C to unit y /.The minimum cycle r atio /#28MCR/#29 /#1A\n/#03and the minim umcycle mean /#28MCM/#29 /#15\n/#03of G are de/#0Cned as/#1A\n/#03/= minC /2 G\nf /#1A /#28 C /#29 g and /#15\n/#03/= minC /2 G\nf /#15 /#28 C /#29 g /;resp ectiv ely /. In b oth cases/, C ranges o v er all the cycles inG /. The resp ectiv e problems are called the minimum cycler atio pr oblem /#28MCRP/#29 and the minimum me an cycle pr ob/-lem /#28MCMP/#29\n/1/. 
The de/#0Cnitions of the maxim um v ersions ofb oth of these problems are analogous/./1/./1 Applications in CADThe applicatio ns of b oth MCRP and MCMP are imp ortan tand n umerous/. See /#5B/1/2/#5D for the applications in graph the/-ory /. Our fo cus is on the applications in the CAD of digitalsystems/. These problems ha v e fundamental imp ortance tothe p erformance analysis of discrete ev en t systems /#5B/3/#5D/, whic hcan also mo del digital systems/. These problems are appli/-cable to the p erformance analysis of suc h digital systems assync hronous /#5B/2/3/#5D/, async hronous /#5B/4/#5D/, DSP /#5B/1/4/#5D/, or em b eddedreal/-time /#5B/1/8/#5D/. Simply put/, the algorithms for these problemsare essen tial to ols to /#0Cnd the cycle p erio d of a giv en cyclicdiscrete ev en t system/. Once determined/, the cycle p erio dis used to describ e the b eha vior of the system analyticall yo v er an in/#0Cnite time p erio d/. F or instance/, the algorithmsfor these problems are used to compute the iteration b oundof a data/#0Do w graph /#5B/1/4/#5D/, the time separation b et w een ev en to ccurrences in a cyclic ev en t graph /#5B/1/3/#5D/, and the optimalclo c ks c hedules for circuits /#5B/2/2 /#5D/./1/./2 Related w o rkThere are man y algorithms prop osed for b oth MCRP andMCMP /.W e giv e a comprehensiv e classi/#0Ccation of the fastestand the most common ones in T able /1/. References to a fewold algorithms can b e found in /#5B/1/2/#5D/. Note that as MCMP isa sp ecial case of MCRP /,a n y algorithm for the latter prob/-lem can b e used to solv e the former problem/. Con v ersely /,it is also p ossible to solv e MCRP using an algorithm for/1W ei n ten tionally use MCMP to refer to /#5Cthe minim um mean cycleproblem/\"/./1T able /1 /: Minim um mean cycle and minim um cost to time ratio algorithms for a graph G with n no des and marcs/. /#28 W /, the maxim um arc w eigh t/; T /, the total transit time of G /; N is the pro duct of the out/-degrees of allthe no des in G /./#29Minim um mean cycle algorithmsName Source Y ear Running time Result Complexit y/1 DG Dasdan /& Gupta /#5B/8/#5D /1/9/9/7 O /#28 nm /#29 Exact P olynomial/2 HO Hartmann /& Orlin /#5B/1/2 /#5D /1/9/9/3 O /#28 nm /#29 Exact P olynomial/3 Karp/'s Karp /#5B/1/5/#5D /1/9/7/8 /#02/#28 nm /#29 Exact P olynomial/4 Hartmann /& Orlin /#5B/1/2 /#5D /1/9/9/3 O /#28 nm /+ n\n/2lg n /#29 Exact P olynomial/5 YTO Y oung/, T arjan/, /& Orlin /#5B/2/5 /#5D /1/9/9/1 O /#28 nm /+ n\n/2lg n /#29 Exact P olynomial/6 Karp /& Orlin /#5B/1/6 /#5D /1/9/8/1 /#02/#28 n\n/3/#29 Exact P olynomial/7 K O Karp /& Orlin /#5B/1/6 /#5D /1/9/8/1 O /#28 nm lg n /#29 Exact P olynomial/8 O A/1 Orlin /& Ah uja /#5B/2/1 /#5D /1/9/9/2 O /#28\npn m lg /#28 nW /#29/#29 Appro ximate Pseudop oly /./9 O A/2 Orlin /& Ah uja /#5B/2/1 /#5D /1/9/9/2 O /#28\npn m lg\n/2/#28 nW /#29/#29 Appro ximate Pseudop oly /./1/0 Cuninghame/-Green /& Yixun /#5B/7/#5D /1/9/9/6 O /#28 n\n/4/#29 Exact P olynomialMinim um cost to time ratio algorithmsName Source Y ear Running time Result Complexit y/1/1 Burns/' Burns /#5B/4/#5D /1/9/9/1 O /#28 n\n/2m /#29 Exact P olynomial/1/2 Megiddo /#5B/1/9 /#5D /1/9/7/9 O /#28 n\n/2m lg n /#29 Exact P olynomial/1/3 Hartmann /& Orlin /#5B/1/2 /#5D /1/9/9/3 O /#28 Tm /#29 Exact Pseudop oly /./1/4 La wler/'s La wler /#5B/1/7/#5D /1/9/7/6 O /#28 nm lg/#28 nW /#29/#29 Appro ximate Pseudop oly /./1/5 I t o/&P arhi /#5B/1/4/#5D /1/9/9/5 O /#28 Tm /+ T\n/3/#29 Exact Pseudop oly /./1/6 Gerez et al/. 
/#5B/1/0/#5D /1/9/9/2 O /#28 Tm /+ T\n/3lg T /#29 Appro ximate Pseudop oly /./1/7 Gerez et al/. /#5B/1/0/#5D /1/9/9/2 O /#28 Tm /+ T\n/4/#29 Exact Pseudop oly /./1/8 Ho w ard/'s Co c het/-T errasson et al/. /#5B/6/#5D /1/9/9/7 O /#28 Nm /#29 Exact Pseudop oly /.MCMP /#5B/1/1/#5D/. Th us/, w e will fo cus only on MCMP in thispap er/.In T able /1/, the p olynomial and pseudop olyno mia l algo/-rithms are resp ectiv ely ordered according to their w orst/-caserunning times/. Those with the same running time are pre/-sen ted in alphab etical order of their in v en tors/' names/. Somereferences are cited more than once b ecause they con tainmore than one algorithm/. Some algorithms cannot pro ducethe exact MCM or MCR/, in whic h case they are said toreturn appro ximate results/. The amoun t of error that canb e tolerated in the appro ximate results can usually b e con/-trolled/. This amoun t of error is denoted b y /#0F /, and called theprecision of the algorithm/./1/./3 Motivations/, contributions/, and metho dologyOur main goal in this pap er is to iden tify the most e/#0Ecien talgorithms for MCMP and MCRP and compare them withthe curren t practice in the CAD comm unit y for solving sim/-ilar problems/. W eh a v e realized this goal with the follo wingmotiv ations and con tributions/:/#28/1/#29 Despite the imp ortance of MCMP and MCRP /, mostof the w ork in the CAD comm unit y that deals with theseproblems are not a w are of most of the algorithms in T a/-ble /1/. The most p opular algorithms in the CAD comm u/-nit y are Karp/'s algorithm /#5B/1/5/#5D/, La wler/'s algorithm /#5B/1/7/#5D/, andto some exten t/, Burns/' algorithm /#5B/4/#5D/. W e sho w that thereare far more e/#0Ecien t algorithms than these p opular ones/.In particular/, w e sho w that Ho w ard/'s algorithm is signi/#0C/-can tly faster than all the others/. In addition/, w e pro vide animpro v ed v ersion of this algorithm as w ell as t w o strongerb ounds on its running time/./#28/2/#29 Most of the earlier w ork do es not presen ta n y exp er/-imen tal analysis of ev en the algorithms that they in tro duce/.There has not b een a clear understanding of the p erformanceof an y of the algorithms studied in this pap er although the/-oretical b ounds on their running times ha v e b een pro v en/.Finding more e/#0Ecien t implemen tation of these algorithmsis v ery imp ortan t b ecause their applicati ons require that\nthey b e run man y times/, e/.g/./, see /#5B/8/#5D/. This pap er is the /#0Crststudy that systematically compares their p erformance and\npro vides a great deal of insigh ti n to their b eha vior/. W e alsopro vide some implemen tational impro v emen ts for most ofthe algorithms/.In this study /,w e fo cus on the ten leading MCM algo/-rithms and the MCM v ersions of the MCR algorithms fromT able /1/, all of whic h are named in the table/. The remain/-ing algorithms in this table are not included in our studyb ecause they are v ery similar to the c hosen ones/. W e imple/-men ted eac h algorithm in a uniform and e/#0Ecien t manner/.W e tested them on a series of random graphs/, obtained us/-ing one generator from /#5B/5/#5D/, and real b enc hmark circuits/, ob/-tained from logic syn thesis b enc hmarks/. The running timeas w ell as represen tativ e op eration coun ts/, as adv o cated in/#5B/2/#5D/, are measured and compared/. 
W en o w giv e a review ofthe these algorithms and then presen t the exp erimen tal re/-sults and our observ ations/./2 Minimum Mean Cycle Algo rithmsW e /#0Crst giv e a di/#0Beren t form ulation of MCMP that is moreuseful to explain the b eha vior of the minim um mean cyclealgorithms/.The minim um cycle mean /#15\n/#03of a graph G /=/#28 V/; E /#29 canb e de/#0Cned as the optim um v alue of /#15 in the follo wing linearprogram/:max /#15 s/.t/. d /#28 v /#29 /, d /#28 u /#29 /#14 w /#28 u/; v /#29 /, /#15/; /8 /#28 u/; v /#29 /2 E/; /#28/1/#29where d /#28 v /#29 is called the distanc e /#28or the no de p oten tial/#29 ofv /. The maxim um is c hosen o v er all v alues for d /#28 /#01 /#29/. Whenthe inequaliti es are all satis/#0Ced/, d /#28 v /#29 is equal to the w eigh tof the shortest path from s to v in G when /#15\n/#03is subtractedfrom ev ery arc w eigh t/. The no de s is arbitrarily c hosen asthe sour c e in adv ance/. Let G/#15\ndenote the graph obtainedfrom G b y subtracting /#15 from the w eigh to f e v ery arc/. Theminim um cycle mean /#15\n/#03is the largest v alue of /#15 for whic hG/#15\nhas no negativ e cycles/./2W es a y that an arc /#28 u/; v /#29 /2 E is critic al if d /#28 v /#29 /, d /#28 u /#29/=w /#28 u/; v /#29 /, /#15 /, whic hw e refer to as the critic ality criterion /.W es a y that a no de is critic al if it is adjacen t to a criticalarc/, and that a graph is critic al if all of its arcs are critical/.The critical subgraph of G/#15\n/#03con tains all the minim um meancycles of G /, as implied b y Equation /1/. The critical subgraphis imp ortan t to compute b ecause the critical subgraph of agraph G con tains all the arcs and no des that determine thep erformance of the system mo deled b y G /. After running anMCM or MCR algorithm on G /, the critical subgraph of Gcan easily b e computed using its de/#0Cnition/. As a result/, w epresen t eac h algorithm in the con text of computing /#15\n/#03only /.W e assume that the input graph G to the algorithm incon text is cyclic and strongly connected/. This assumptionsimpli/#0Ces most of the algorithms and generally impro v estheir running times in practice/. Note that if G is not stronglyconnected/, its minim um cycle mean can b e found easily/: /#0Crstpartition G in to its strongly connected comp onen ts/, run thealgorithm on eac h strongly connected comp onen t/, and thentak e as the minim um cycle mean of G the minim um of thecycle means returned b y the algorithm/. This is the w a yw eimplemen ted all of the algorithms/.W en o w review the minim um mean cycle algorithms inour study /. More detailed discussion of these algorithms to/-gether with their pseudo co de is giv en in /#5B/9 /#5D/./2/./1 Burns/' algo rithmBurns/' algorithm /#5B/4/#5D is actually the minim um mean cyclev ersion of the original Burns/' algorithm for MCRP /.W eh a v edisco v ered that the algorithm in /#5B/7/#5D is iden tical to Burns/'algorithm/. Burns/' algorithm is based on linear program/-ming/. It is an iterativ e algorithm constructed b y apply/-ing the primal/-dual metho d/. It solv es the ab o v e linear pro/-gram /#28Equation /1/#29 and its dual sim ultaneousl y /. In essence/,the b eha vior of Burns/' algorithm is v ery similar to that ofthe parametric shortest path algorithms b elo w suc ha st h eK O algorithm/: The K O algorithm impro v es up on an ini/-tial acyclic critical subgraph of G un til the critical subgraphb ecomes cyclic/, at whic h p oin t the minim um cycle mean isfound/. 
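As an illustration of this characterization (a sketch, not code from the paper or its LEDA-based test programs), one can approximate the minimum cycle mean by bisecting on λ and testing G_λ for a negative cycle with Bellman-Ford relaxations; this is essentially the idea behind Lawler's algorithm reviewed in Section 2.4. The search bounds and the precision ε below are illustrative choices.

// Illustrative sketch: bisection on lambda with a negative-cycle test.
public class CycleMeanBisection {

    /** True if a cycle has negative weight after subtracting lambda from
     *  every arc weight. edges[i] = {from, to, weight}; nodes are 0..n-1. */
    static boolean hasNegativeCycle(int n, int[][] edges, double lambda) {
        double[] dist = new double[n];           // distances from a virtual source
        for (int i = 0; i < n; i++) {            // n rounds of relaxations
            for (int[] e : edges) {
                double candidate = dist[e[0]] + e[2] - lambda;
                if (candidate < dist[e[1]]) {
                    if (i == n - 1) return true; // still improving: negative cycle
                    dist[e[1]] = candidate;
                }
            }
        }
        return false;
    }

    /** Approximates the minimum cycle mean to within eps by binary search. */
    static double minimumCycleMean(int n, int[][] edges,
                                   double lo, double hi, double eps) {
        while (hi - lo > eps) {
            double mid = (lo + hi) / 2.0;
            if (hasNegativeCycle(n, edges, mid)) {
                hi = mid;                        // mid is larger than lambda*
            } else {
                lo = mid;                        // mid is still feasible
            }
        }
        return hi;
    }

    public static void main(String[] args) {
        // A 2-cycle of mean 1 and a 3-cycle of mean 2: lambda* = 1.
        // The search interval [1, 3] spans the minimum and maximum arc weights.
        int[][] edges = { {0, 1, 1}, {1, 2, 2}, {2, 0, 3}, {1, 0, 1} };
        System.out.println(minimumCycleMean(3, edges, 1.0, 3.0, 1e-6));
    }
}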
Burns/' algorithm also op erates on the critical sub/-graph and terminates when it b ecomes cyclic/. It di/#0Bers fromthe K O algorithm in that at ev ery iteration/, it reconstructsthe critical subgraph from scratc h/./2/./2 Ka rp/'s algo rithm and its va riantsDe/#0Cne Dk\n/#28 v /#29 to b e the w eigh t of the shortest path of length kfrom s /, the source/, to v /;i f n os u c h path exists/, then Dk\n/#28 v /#29/=/+ /1 /. Karp/'s algorithm /#5B/1/5/#5D is based on his observ ation that/#15\n/#03/= minv /2 V\nmax/0 /#14 k /#14 n /, /1\nDn\n/#28 v /#29 /, Dk\n/#28 v /#29n /, k\n/;whic h is called Karp/'s theorem/. Karp/'s algorithm computeseac h Dk\n/#28 v /#29b y the recurrenceDk\n/#28 v /#29/= min/#28 u/;v /#29 /2 E\nf Dk /, /1\n/#28 u /#29/+ w /#28 u/; v /#29 g /;k /=/1 /; /2 /;/:/:/:/;n /;D/0\n/#28 s /#29/= /0 /;D/0\n/#28 v /#29/= /+ /1 /;v /6/= s/:Note that d /#28 v /#29 and D /#28 v /#29 are related to eac h other b y theequation d /#28 v /#29 /= min/0 /#14 k /#14 n /, /1\nf Dk\n/#28 v /#29 /, k/#15 g /. As observ ed in/#5B/8/, /1/2 /, /2/5/#5D/, this recurrence/, whic h is not computed recur/-siv ely /, mak es the b est and w orst cases of Karp/'s algorithmthe same/, whic hi sw h y it runs in /#02/#28 nm /#29/.W eh a v e three impro v emen ts on Karp/'s algorithm/: theDG algorithm/, the HO algorithm/, and the Karp/2 algorithm/.\nThe DG algorithm /#5B/8/#5D impro v es up on Karp/'s algorithm b yeliminatin g unnecessary w ork in tro duced b y the ab o v e re/-currence/. It w orks in a breadth/-/#0Crst manner in that startingfrom the source/, it visits the successors of no des rather thantheir predecessors/, as done in the recurrence/. This pro cesscreates an unfolding of G /, and when the algorithm is imple/-men ted using link ed lists/, its running time b ecomes equal tothe size of the /#5Cunfolded/\" graph/. Dep ending on the struc/-ture of G /, the running time ranges from /#02/#28 m /#29t o O /#28 mn /#29/.The HO algorithm /#5B/1/2/#5D also impro v es up on Karp/'s algo/-rithm/. It helps to terminate Karp/'s algorithm early with/-out c hanging its structure/, i/.e/./, it still uses the ab o v e re/-currence/. It is based on the observ ation that man y of theshortest paths computed b y Karp/'s algorithm will con taincycles/. If one of these cycles is critical/, then the minim umcycle mean is found/, whic h su/#0Eces to terminate the algo/-rithm/. The HO algorithm essen tially c hec ks the critical/-it y of eac h cycle on the shortest paths computed/. If theearly termination is not p ossible/, this algorithm can add ano v erhead of O /#28 n\n/2/+ m lg n /#29 in total to the running time ofKarp/'s algorithm although it do es not c hange the runningtime asymptotically /.The Karp/2 algorithm is a space e/#0Ecien tv ersion of Karp/'salgorithm\n/2/. Karp/'s algorithm tak es up /#02/#28 n\n/2/#29 space in orderto store the D /-v alues/. The Karp/2 algorithm reduces thisspace requiremen tt o /#02 /#28 n /#29/. The Karp/2 algorithm p erformst w o passes/. In the /#0Crst pass/, it computes Dn\n/#28 v /#29 for eac hno de v without storing Dk\n/#28 v /#29 for k/#3C n /. In the second pass/,it computes the fraction in Karp/'s theorem as it computeseac h Dk\n/#28 v /#29/, k/#3Cn /. The DG and HO algorithms also su/#0Berfrom this large space complexit y problem/. 
F ortunately /, thetec hnique used in the Karp/2 algorithm is also applicable tothese v arian ts/./2/./3 P a rametric sho rtest path algo rithmsThe K O algorithm /#5B/1/6/#5D and the YTO algorithm /#5B/2/5/#5D are inthe category of parametric shortest path algorithms/. TheYTO algorithm is essen tially an e/#0Ecien t implemen tation ofthe K O algorithm/. These algorithms are based on the ob/-serv ation that the minim um cycle mean /#15\n/#03is the largest /#15suc h that G/#15\ndo es not ha v ea n y negativ e cycles/. Th us/, thesealgorithms start with /#15 /= /,/1 and alw a ys main tain a tree ofshortest paths to a source no de s /. These algorithms c hange/#15 incremen tally so that the shortest path tree c hanges b yone arc in eac h iteration/. When a cycle of w eigh t zero isdetected in G/#15\n/, that cycle is the cycle with the minim ummean/./2/./4 La wler/'s algo rithmLa wler/'s algorithm /#5B/1/7/#5D is based on the same observ ation asthe parametric shortest path algorithms/. It also uses the fact\nthat /#15\n/#03of G lies b et w een the minim um and the maxim umarc w eigh ts in G /.L a wler/'s algorithm do es a binary searc ho v er the p ossible v alues of /#15\n/#03and c hec ks for a negativ e cyclein G/#15\nev ery iteration/. If one is found/, then the c hosen /#15 isto o large so it is decreased/; if not/, it is to o small so it isincreased/. La wler/'s algorithm terminates when the in terv alfor the p ossible v alues of /#15\n/#03b ecomes to o small/. The size ofthat in terv al/, /#0F /, determines the precision of the algorithm/./2Suggested b y S/. Gaub ert of INRIA/, F rance/./3Input /: A strongly connected digraph G /=/#28 V/; E /#29/.Output /: The minim um cycle mean /#15\n/#03of G /./1 for eac hn o d e u /2 V do d /#28 u /#29 /#20 /+ /1/2 for eac h arc /#28 u/; v /#29 /2 E do/3 if /#28 w /#28 u/; v /#29 /#3Cd /#28 u /#29/#29 then/4 d /#28 u /#29 /#20 w /#28 u/; v /#29/; /#19 /#28 u /#29 /#20 v /#2F/* /#19 is the p olicy /*/#2F/5 while /#28true/#29 do /#2F/* Main lo op /- Iterate /*/#2F/6 E/#19\n/#20f /#28 u/; /#19 /#28 u /#29/#29 /2 E g /#2F/* Find the set E/#19\nof p olicy arcs /*/#2F/#2F/* Compute /#15 in the p olicy graph G/#19\n/*/#2F/7 Examine ev ery cycle in G/#19\n/=/#28 V/; E/#19\n/#29/./8 Let C b e the cycle with the smallest mean in G/#19\n/./9 Let /#15 /#20 w /#28 C /#29 /= j C j/1/0 Select an arbitrary no de s /2 C /./#2F/* Compute the no de distances using the rev erse BFS /*/#2F/1/1 if /#28there is a path from v to s in G/#19\n/#29 then/1/2 d /#28 v /#29 /#20 d /#28 /#19 /#28 v /#29/#29 /+ w /#28 v/; /#19 /#28 v /#29/#29 /, /#15/#2F/* Impro v e the no de distances /*/#2F/1/3 impr ov ed /#20 f alse/1/4 for eac h arc /#28 u/; v /#29 /2 E do/1/5 /#0E /#28 u /#29 /#20 d /#28 u /#29 /, /#28 d /#28 v /#29/+ w /#28 u/; v /#29 /, /#15 /#29/1/6 if /#28 /#0E /#28 u /#29 /#3E /0/#29 then/1/7 if /#28 /#0E /#28 u /#29 /#3E/#0F /#29 then impr ov ed /#20 tr ue/1/8 d /#28 u /#29 /#20 d /#28 v /#29/+ w /#28 u/; v /#29 /, /#15 /; /#19 /#28 u /#29 /#20 v/#2F/* If not m uc h impro v emen t in the no de distances/, exit /*/#2F/1/9 if /#28 N O T impr ov ed /#29 then return /#15Figure /1 /: An impro v ed v ersion of Ho w ard/'s mini/-m um mean cycle algorithm/./2/./5 Ho w a rd/'s Algo rithmAn impro v ed v ersion of Ho w ard/'s algorithm /#5B/6/#5D is giv en inFigure /1/. 
Input: A strongly connected digraph G = (V, E).
Output: The minimum cycle mean λ* of G.
 1  for each node u ∈ V do d(u) ← +∞
 2  for each arc (u, v) ∈ E do
 3    if (w(u, v) < d(u)) then
 4      d(u) ← w(u, v); π(u) ← v                  /* π is the policy */
 5  while (true) do                               /* Main loop - Iterate */
 6    E_π ← { (u, π(u)) ∈ E }                     /* Find the set E_π of policy arcs */
      /* Compute λ in the policy graph G_π */
 7    Examine every cycle in G_π = (V, E_π).
 8    Let C be the cycle with the smallest mean in G_π.
 9    Let λ ← w(C) / |C|
10    Select an arbitrary node s ∈ C.
      /* Compute the node distances using the reverse BFS */
11    if (there is a path from v to s in G_π) then
12      d(v) ← d(π(v)) + w(v, π(v)) − λ
      /* Improve the node distances */
13    improved ← false
14    for each arc (u, v) ∈ E do
15      δ(u) ← d(u) − (d(v) + w(u, v) − λ)
16      if (δ(u) > 0) then
17        if (δ(u) > ε) then improved ← true
18        d(u) ← d(v) + w(u, v) − λ; π(u) ← v
      /* If not much improvement in the node distances, exit */
19    if (NOT improved) then return λ

Figure 1: An improved version of Howard's minimum mean cycle algorithm.

2.5 Howard's Algorithm

An improved version of Howard's algorithm [6] is given in Figure 1. It is similar in style to the parametric shortest path algorithms except that it starts with a large λ and decreases λ until the shortest paths in G_λ are well defined. It computes λ on the policy graph, which is simply a subgraph of G such that the out-degree of each node is exactly one. Note that the policy graph has n arcs. For a given λ, the algorithm attempts to find the shortest paths from every node to a chosen node s using the breadth-first search (BFS) algorithm. In doing so, it either discovers that the shortest paths are well defined, in which case the correct λ has been found, or it discovers a negative cycle in G_λ. In the latter case, the negative cycle has a smaller mean weight than the current λ. In this case, λ can be updated to the mean weight of the new cycle and the process continues.

The beauty of Howard's algorithm is that each iteration is extremely simple and requires only Θ(m) time. Meanwhile, although it ensures only that the value of λ is non-increasing from one iteration to another, it usually manages to make significant progress in decreasing the value of λ in very few iterations. In [9], we have proved that λ decreases by at least ε/n at least every n iterations of the main loop of Howard's algorithm, where ε is the precision of the algorithm. This result leads to two stronger bounds on the running time of Howard's algorithm: (1) its running time is at most O(nmα), where α is the number of simple cycles in G, or (2) its running time is at most O(n²m(w_max − w_min)/ε), where w_max and w_min are the maximum and minimum arc weights in G.

2.6 Scaling algorithms

The OA1 and OA2 algorithms [21] are in this category. They assume that the arc weights are integers bounded by W. If W is polynomial in n, then these algorithms are asymptotically the fastest algorithms. The OA2 algorithm applies scaling to a hybrid version of an assignment algorithm, called the auction algorithm, and the successive shortest path algorithm. It uses an approximate binary search technique. The OA1 algorithm is the same as the OA2 algorithm except that it does not use the successive shortest path algorithm.

3 Experimental Framework

We programmed the algorithms in C++ using the LEDA library version 3.4.1. This library is a template library for efficient data types and algorithms [20]. In order to ensure uniformity of implementation, all the algorithms were implemented in the same style by one of us. We also flattened each algorithm in that we manually inlined all the functions other than the functions needed by the LEDA data types. This eliminated the overhead of function invocations. The total size of the programs is approximately 2700 lines of C++ code.

We compiled and linked each program using the Sun C++ compiler CC version 3.0.1 under the O4 optimization option. We conducted the experiments on a Sun Sparc 20 Model 512 with two CPUs, 64 MB of main memory, and 105 MB of swap space. The operating system was SunOS version 5.5.1.

We did two sets of experiments: one to measure the running time of each algorithm and another to count the key operations of each algorithm, as advocated in [2].
Our test suite contained random graphs, generated using SPRAND [5], and cyclic sequential multi-level logic benchmark circuits, obtained from the 1991 Logic Synthesis and Optimization Benchmarks [24]. SPRAND produces a graph with n nodes and m arcs by first building a Hamiltonian cycle on the nodes and then adding m − n arcs at random. This cycle makes the graph strongly connected. We generated 10 versions of each random graph. The experimental data reported for these graphs in this paper are the average over these 10 runs. The arc weights in the random graphs were uniformly distributed in [1, 10000], which is the default weight interval in SPRAND. Due to lack of space, we do not present the experimental results for the benchmark circuits: they can be found in [9].

The properties of the random graphs in our test suite are given together with the running times in Table 2. We used sparse random graphs in our test suite because real circuits are sparse and we wanted our random graphs to represent them as closely as possible. We did more experiments than were reported in this paper. However, since the trend for the dependence of the performance on the graph parameters is evident from the results that we included in this paper, we did not see any need to include more experimental results. When doing our experiments, we tried to follow the guidelines in [1]. When comparing the algorithms using their operation counts, we compared only the relevant ones because not all the algorithms have the same kinds of operations. For instance, we compared only the KO and YTO algorithms for the number of heap operations.

4 Experimental Results and Observations

4.1 The minimum cycle mean and the graph parameters

For the random graphs, the minimum cycle mean is almost independent of the number of nodes, and it changes inversely with the density of the graph. This observation is expected because as the density of a graph increases, the graph contains more cycles and the critical cycles get smaller. This simple observation will be used to explain the behavior of some of the algorithms.

4.2 KO versus YTO

In our implementation, both algorithms use Fibonacci heaps, which is the default heap data structure in LEDA. Since the YTO algorithm is essentially an implementation of the KO algorithm using Fibonacci heaps, their use in the YTO algorithm was a natural choice. Their use in the KO algorithm was preferred to make these two algorithms comparable. From the experimental results reported in [9], we can see that both algorithms perform almost the same number of iterations on each test case; however, the YTO algorithm provides savings in the number of heap operations, especially in the number of insertions. The savings are more pronounced on the random graphs, and they get better as the density increases because the rate of increase in these numbers is larger in the KO algorithm. Their running times are comparable, but the YTO algorithm performs a bit faster when the density increases. This is expected because the YTO algorithm performs fewer heap operations.

4.3 Number of iterations

Burns', KO, YTO, and Howard's algorithms perform a number of iterations before they converge.
An upper bound on the number of iterations for the first three algorithms is n². An upper bound for Howard's algorithm is the product of the out-degrees of all the nodes. We also measured the value of k when the HO algorithm terminates. We refer to this value as “the number of iterations” of the HO algorithm although it is not one in the sense of the other algorithms. It is always less than n.

From the experimental results reported in [9], the number of iterations is always less than the number of nodes for each algorithm, although there are a few anomalies with Howard's algorithm. It seems that unless n = m, the number of iterations for the first three algorithms is around n/2 on the random graphs, each of which is strongly connected. Moreover, Burns' algorithm performs fewer iterations than the KO algorithm, and the KO and YTO algorithms perform the same number of iterations. The number of iterations of Howard's algorithm is drastically smaller than that of the other algorithms. In [6], it is conjectured that the number of iterations is O(lg n) on the average and O(m) in the worst case. Our experiments support the worst-case conjecture. They also show that the number of iterations for Howard's algorithm and the HO algorithm gets smaller as the density of the graph increases, although some anomalies exist. This can be explained by the first observation.

4.4 Karp's algorithm and its variants

From the experimental results reported in [9], it seems that the improvement achieved by the DG algorithm in the number of arcs visited during the computation of D_k(v) for each v is very small on the random graphs, indicating that it is not effective for dense graphs. The improvement on the circuits is far better, which explains the better performance of the DG algorithm over Karp's algorithm.

The space-efficient version of Karp's algorithm, the Karp2 algorithm, roughly doubles its running time, as expected. The space efficiency of the Karp2 algorithm is directly applicable to the DG and HO algorithms. The most effective improvement on Karp's algorithm is the HO algorithm. Its running time is even better than those of the algorithms that are asymptotically faster than it. Extrapolating from the Karp2 algorithm, we can say that a space-efficient version of the HO algorithm would double its running time, which still maintains its superiority over most of the other algorithms.

4.5 Running times

The running time comparisons are given in Table 2. The results show the following: Howard's algorithm is the fastest by a great margin. The HO algorithm ranks second, which indicates that the early termination scheme in the HO algorithm is very effective. The slowest algorithm is Lawler's algorithm.

The good performance of Karp's algorithm, especially on small test cases, is mostly due to its simplicity; it contains three simple nested loops. Its simplicity facilitates its optimization by a compiler; e.g., when compiled without optimization, the DG algorithm almost always beats it.
However, as the number of nodes gets larger, its performance degrades more rapidly.

Burns' algorithm is slower than the KO and YTO algorithms although it performs fewer iterations and does not perform expensive operations such as heap operations. We attribute this behavior to the fact that it is not incremental; every iteration builds from scratch.

The OA1 and OA2 algorithms are not as fast as their running times imply. They are in general slower than Karp's algorithm. We attribute much of this to their complexity; they are more difficult to optimize than the other algorithms.

5 Conclusions and Future Work

We have presented efficient algorithms for the minimum mean cycle problem. This paper is the first study that brings them to the attention of the CAD community. We have systematically compared these algorithms on random graphs as well as benchmark circuits and provide important insights into their individual performance as well as their relative performance in practice. One of the most surprising results of this study is that Howard's algorithm is by far the fastest algorithm on the graphs tested in this study. Unfortunately, the known bounds on the running time of this algorithm, including our bounds, are exponential. We are working on improving these algorithms based on the insight that we have obtained from this study. So far, we have developed improved versions of Howard's algorithm and Lawler's algorithm.

Acknowledgments

The authors would like to acknowledge support from the following awards: NSF MIP 95-01615 (CAREER), NSF CCR-9806898, NSF CCR-9625844, DARPA DABT63-98-C-0045, the University of California MICRO program, the Interstate Electronics Fellowship, and the DAC Design Automation Graduate Scholarship.

Table 2: The running time comparisons of Burns', KO, YTO, Howard's, HO, Karp's, DG, Lawler's, Karp2, and OA1 algorithms on the random graphs with n nodes and m arcs.
For the cases marked with N/A, either we could not get a result in a day, or we ran out of memory because of the quadratic space complexity of the algorithm in question.

    n      m     Burns     KO      YTO    Howard    HO     Karp     DG     Lawler    Karp2      OA1
  512    512      3.48    1.51    1.67     0.01    1.00    0.79    0.06     11.09     1.41    328.88
  512    768      2.34    1.04    1.12     0.16    0.32    0.98    1.03      6.51     1.83      5.80
  512   1024      2.72    1.21    1.21     6.75    0.29    1.17    1.26      9.26     2.25      5.66
  512   1280      4.11    1.82    1.73     0.17    0.31    1.37    1.47     10.62     2.71      6.98
  512   1536      3.52    1.59    1.52     0.13    0.27    1.57    1.69     10.98     2.87      6.51
 1024   1024     13.98    5.87    6.50     0.02    4.03    3.36    0.25     44.82     6.72   2790.12
 1024   1536     10.17    4.41    4.61     0.34    1.07    4.17    4.66     34.67     7.87     12.34
 1024   2048     11.32    4.98    4.99     0.21    0.84    5.05    5.64     30.33     9.04     13.78
 1024   2560     15.16    6.74    6.62     0.23    0.94    5.91    6.63     54.77    10.82     23.67
 1024   3072     13.91    6.25    5.90     0.22    0.87    6.77    7.56     51.91    14.60     17.13
 2048   2048     55.88   23.13   25.46     0.04   16.45   13.48    1.02    186.35    21.80  20110.28
 2048   3072     44.55   20.37   22.19     0.64    4.26   17.14   19.45    178.86    29.65     62.81
 2048   4096     42.88   20.59   20.31     0.88    3.14   21.87   24.96    165.61    42.25     37.04
 2048   5120     63.22   30.95   29.95     0.76    3.56   27.10   30.83    221.90    53.30     80.97
 2048   6144     73.92   36.56   34.61     0.80    3.53   32.86   37.05    244.05    64.89     85.87
 4096   4096    218.31   91.50  100.40     0.07     N/A   55.76    4.56    659.74    89.59       N/A
 4096   6144    161.07   79.09   81.05     7.00     N/A   76.82   86.64    736.16   135.46       N/A
 4096   8192    167.63   88.86   88.01     1.47     N/A  103.13  115.81    781.84   195.35       N/A
 4096  10240    242.75  132.26  130.01     1.62     N/A  129.03  144.75   1305.47   259.19       N/A
 4096  12288    236.71  139.22  137.87    13.84     N/A  156.70  173.94   1132.57   313.06       N/A
 8192   8192    826.08  363.45  398.11     0.14     N/A     N/A     N/A   2819.30   355.00       N/A
 8192  12288    559.88  306.52  329.73     4.09     N/A     N/A     N/A   2949.25   595.02       N/A
 8192  16384    626.65  382.82  380.58     4.53     N/A     N/A     N/A   3708.98   858.01       N/A
 8192  20480    840.50  536.50  524.82     4.73     N/A     N/A     N/A   5112.67  1110.84       N/A
 8192  24576    874.21  609.60  587.51     5.57     N/A     N/A     N/A   5417.58  1905.20       N/A

References

[1] Ahuja, R. K., Kodialam, M., Mishra, A. K., and Orlin, J. B. Computational investigation of maximum flow algorithms. European J. of Operational Research 97 (1997), 509–542.

[2] Ahuja, R. K., Magnanti, T. L., and Orlin, J. B. Network Flows. Prentice Hall, Upper Saddle River, NJ, USA, 1993.
[3] Bacelli, F., Cohen, G., Olsder, G. J., and Quadrat, J.-P. Synchronization and Linearity. John Wiley & Sons, New York, NY, USA, 1992.

[4] Burns, S. M. Performance analysis and optimization of asynchronous circuits. PhD thesis, California Institute of Technology, 1991.

[5] Cherkassky, B. V., Goldberg, A. V., and Radzik, T. Shortest path algorithms: Theory and experimental evaluation. In Proc. 5th ACM-SIAM Symp. on Discrete Algorithms (1994), pp. 516–525.

[6] Cochet-Terrasson, J., Cohen, G., Gaubert, S., McGettrick, M., and Quadrat, J.-P. Numerical computation of spectral elements in max-plus algebra. In Proc. IFAC Conf. on Syst. Structure and Control (1998).

[7] Cuninghame-Green, R. A., and Yixun, L. Maximum cycle-means of weighted digraphs. Applied Math.-JCU 11 (1996), 225–34.

[8] Dasdan, A., and Gupta, R. K. Faster maximum and minimum mean cycle algorithms for system performance analysis. IEEE Trans. Computer-Aided Design 17, 10 (Oct. 1998).

[9] Dasdan, A., Irani, S., and Gupta, R. K. An experimental study of minimum mean cycle algorithms. Tech. rep. #98-32, Univ. of California, Irvine, July 1998.

[10] Gerez, S. H., de Groot, S. M. H., and Herrmann, O. E. A polynomial-time algorithm for the computation of the iteration-period bound in recursive data-flow graphs. IEEE Trans. on Circuits and Syst.-I 39, 1 (Jan. 1992), 49–52.

[11] Gondran, M., and Minoux, M. Graphs and Algorithms. John Wiley and Sons, New York, NY, USA, 1984.

[12] Hartmann, M., and Orlin, J. B. Finding minimum cost to time ratio cycles with small integral transit times. Networks 23 (1993), 567–74.

[13] Hulgaard, H., Burns, S. M., Amon, T., and Borriello, G. An algorithm for exact bounds on the time separation of events in concurrent systems. IEEE Trans. Comput. 44, 11 (Nov. 1995), 1306–17.

[14] Ito, K., and Parhi, K. K. Determining the minimum iteration period of an algorithm. J. VLSI Signal Processing 11, 3 (Dec. 1995), 229–44.

[15] Karp, R. M. A characterization of the minimum cycle mean in a digraph. Discrete Mathematics 23 (1978), 309–11.

[16] Karp, R. M., and Orlin, J. B. Parametric shortest path algorithms with an application to cyclic staffing. Discrete Applied Mathematics 3 (1981), 37–45.

[17] Lawler, E. L. Combinatorial Optimization: Networks and Matroids. Holt, Reinhart, and Winston, New York, NY, USA, 1976.

[18] Mathur, A., Dasdan, A., and Gupta, R. K. Rate analysis of embedded systems. ACM Trans. on Design Automation of Electronic Systems 3, 3 (July 1998).

[19] Megiddo, N. Combinatorial optimization with rational objective functions. Mathematics of Operations Research 4, 4 (Nov. 1979), 414–424.

[20] Mehlhorn, K., and Naher, S. LEDA: A platform for combinatorial and geometric computing. Comm. of the ACM 38, 1 (1995), 96–102.
[21] Orlin, J. B., and Ahuja, R. K. New scaling algorithms for the assignment and minimum mean cycle problems. Mathematical Programming 54 (1992), 41–56.

[22] Szymanski, T. G. Computing optimal clock schedules. In Proc. 29th Design Automation Conf. (1992), ACM/IEEE, pp. 399–404.

[23] Teich, J., Sriram, S., Thiele, L., and Martin, M. Performance analysis and optimization of mixed asynchronous synchronous systems. IEEE Trans. Computer-Aided Design 16, 5 (May 1997), 473–84.

[24] Yang, S. Logic synthesis and optimization benchmarks user guide version 3.0. Tech. rep., Microelectronics Center of North Carolina, Jan. 1991.

[25] Young, N. E., Tarjan, R. E., and Orlin, J. B. Faster parametric shortest path and minimum-balance algorithms. Networks 21 (1991), 205–21.
{ "category": "App Definition and Development", "file_name": "dasdan-dac99.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "count_star_intervalsum_allsum_all_filtersum_all_yearsum_pricetop_100_commitdatetop_100_partstop_100_parts_detailstop_100_parts_filter\n1 2 3 4 5 6Speedup FactorQuery\nCores 8 (1 node) 48 (6 nodes)Druid Scaling ... 100GB" } ]
{ "category": "App Definition and Development", "file_name": "tpch_scaling_factor.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "Iterator Traits\nAuthor : David Abrahams\nContact : dave@boost-consulting.com\nOrganization :Boost Consulting\nDate : 2004-11-01\nCopyright : Copyright David Abrahams 2004.\nabstract: Header <boost/iterator/iterator_traits.hpp> provides the ability to access\nan iterator’s associated types using MPL-compatible metafunctions .\nOverview\nstd::iterator_traits provides access to five associated types of any iterator: its value_type ,refer-\nence,pointer ,iterator_category , and difference_type . Unfortunately, such a“multi-valued”traits\ntemplate can be difficult to use in a metaprogramming context. <boost/iterator/iterator_traits.hpp>\nprovides access to these types using a standard metafunctions .\nSummary\nHeader <boost/iterator/iterator_traits.hpp> :\ntemplate <class Iterator>\nstruct iterator_value\n{\ntypedef typename\nstd::iterator_traits<Iterator>::value_type\ntype;\n};\ntemplate <class Iterator>\nstruct iterator_reference\n{\ntypedef typename\nstd::iterator_traits<Iterator>::reference\ntype;\n};\ntemplate <class Iterator>\nstruct iterator_pointer\n{\ntypedef typename\nstd::iterator_traits<Iterator>::pointer\ntype;\n1};\ntemplate <class Iterator>\nstruct iterator_difference\n{\ntypedef typename\ndetail::iterator_traits<Iterator>::difference_type\ntype;\n};\ntemplate <class Iterator>\nstruct iterator_category\n{\ntypedef typename\ndetail::iterator_traits<Iterator>::iterator_category\ntype;\n};\nBroken Compiler Notes\nBecause of workarounds in Boost, you may find that these metafunctions actually work better than the\nfacilities provided by your compiler’s standard library.\nOn compilers that don’t support partial specialization, such as Microsoft Visual C++ 6.0 or 7.0, you\nmay need to manually invoke BOOST BROKEN COMPILER TYPE TRAITS SPECIALIZATION on\nthevalue_type of pointers that are passed to these metafunctions.\nBecause of bugs in the implementation of GCC-2.9x, the name of iterator_category is changed to\niterator_category_ on that compiler. A macro, BOOST_ITERATOR_CATEGORY , that expands to either\niterator_category oriterator_category_ , as appropriate to the platform, is provided for portability.\n2" } ]
{ "category": "App Definition and Development", "file_name": "iterator_traits.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "A Japanese translation of an earlier version of thi s tutorial can be found at \nhttp://prdownloads.sourceforge.jp/jyugem/7127/fsm -tutorial -jp.pdf . Kindly contributed by Mitsuo \nFukasawa. \nContents \nIntroduction \nHow to read this tutorial \nHello World! \nBasic topics: A stop watch \nDefining states and events \nAdding reactions \nState -local storage \nGetting state information out of the machine \nIntermediate topics: A digital camera \nSpreading a state machine over multiple translation units \nDeferring events \nGuards \nIn -state reactions \nTransition actions \nAdvanced topics \nSpecifying multiple reactions for a state \nPosting events \nHistory \nOrthogonal states \nState queries \nState type information \nException handling \nSubmachines & Parametrized States \nAsynchronous state machines \nIntroduction \nThe Boost Statechart library is a framework that al lows you to quickly transform a UML statechart \ninto executable C++ code, without needing to use a code generator. Thanks to support for almost all \nUML features the transformation is straight-forward and the resulting C++ code is a nearly \nredundancy-free textual description of the statecha rt. \nHow to read this tutorial \nThis tutorial was designed to be read linearly. Fir st time users should start reading right at the \nbeginning and stop as soon as they know enough for the task at hand. Specifically: \nThe Boost Statechart \nLibrary \nTutorial Page 1 of 32 The Boost Statechart Library - Tutorial \n2006/12/03/circle6Small and simple machines with just a handful of st ates can be implemented reasonably well by \nusing the features described under Basic topics: A stop watch \n/circle6For larger machines with up to roughly a dozen stat es the features described under Intermediate \ntopics: A digital camera are often helpful \n/circle6Finally, users wanting to create even more complex machines and project architects evaluating \nBoost.Statechart should also read the Advanced topics section at the end. Moreover, reading the \nLimitations section in the Rationale is strongly suggested \nHello World! \nWe will use the simplest possible program to make o ur first steps. The statechart ... \n \n... is implemented with the following code: \n#include <boost/statechart/state_machine.hpp> \n#include <boost/statechart/simple_state.hpp> \n#include <iostream> \n \nnamespace sc = boost::statechart; \n \n// We are declaring all types as structs only to av oid having to \n// type public. If you don't mind doing so, you can just as well \n// use class. \n \n// We need to forward-declare the initial state bec ause it can \n// only be defined at a point where the state machi ne is \n// defined. \nstruct Greeting; \n \n// Boost.Statechart makes heavy use of the curiousl y recurring \n// template pattern. The deriving class must always be passed as \n// the first parameter to all base class templates. \n// \n// The state machine must be informed which state i t has to \n// enter when the machine is initiated. That's why Greeting is \n// passed as the second template parameter. \nstruct Machine : sc::state_machine< Machine, Greeti ng > {}; \n \n// For each state we need to define which state mac hine it \n// belongs to and where it is located in the statec hart. Both is \n// specified with Context argument that is passed t o \n// simple_state<>. For a flat state machine as we h ave it here, \n// the context is always the state machine. 
Consequ ently, \n// Machine must be passed as the second template pa rameter to \n// Greeting's base (the Context parameter is explai ned in more \n// detail in the next example). \nstruct Greeting : sc::simple_state< Greeting, Machi ne > \n{ \nPage 2 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 // Whenever the state machine enters a state, it creates an \n // object of the corresponding state class. The o bject is then \n // kept alive as long as the machine remains in t he state. \n // Finally, the object is destroyed when the stat e machine \n // exits the state. Therefore, a state entry acti on can be \n // defined by adding a constructor and a state ex it action can \n // be defined by adding a destructor. \n Greeting() { std::cout << \"Hello World!\\n\"; } // entry \n ~Greeting() { std::cout << \"Bye Bye World!\\n\"; } // exit \n}; \n \nint main() \n{ \n Machine myMachine; \n // The machine is not yet running after construct ion. We start \n // it by calling initiate(). This triggers the co nstruction of \n // the initial state Greeting \n myMachine.initiate(); \n // When we leave main(), myMachine is destructed what leads to \n // the destruction of all currently active states . \n return 0; \n} \nThis prints Hello World! and Bye Bye World! before exiting. \nBasic topics: A stop watch \nNext we will model a simple mechanical stop watch w ith a state machine. Such watches typically have \ntwo buttons: \n/circle6Start/Stop \n/circle6Reset \nAnd two states: \n/circle6Stopped: The hands reside in the position where the y were last stopped: \n/ring2Pressing the reset button moves the hands back to t he 0 position. The watch remains in the \nStopped state \n/ring2Pressing the start/stop button leads to a transitio n to the Running state \n/circle6Running: The hands of the watch are in motion and c ontinually show the elapsed time \n/ring2Pressing the reset button moves the hands back to t he 0 position and leads to a transition to \nthe Stopped state \n/ring2Pressing the start/stop button leads to a transitio n to the Stopped state \nHere is one way to specify this in UML: Page 3 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 \nDefining states and events \nThe two buttons are modeled by two events. Moreover , we also define the necessary states and the \ninitial state. The following code is our starting point, subsequen t code snippets must be inserted : \n#include <boost/statechart/event.hpp> \n#include <boost/statechart/state_machine.hpp> \n#include <boost/statechart/simple_state.hpp> \n \nnamespace sc = boost::statechart; \n \nstruct EvStartStop : sc::event< EvStartStop > {}; \nstruct EvReset : sc::event< EvReset > {}; \n \nstruct Active; \nstruct StopWatch : sc::state_machine< StopWatch, Ac tive > {}; \n \nstruct Stopped; \n \n// The simple_state class template accepts up to fo ur parameters: \n// - The third parameter specifies the inner initia l state, if \n// there is one. 
Here, only Active has inner stat es, which is \n// why it needs to pass its inner initial state S topped to its \n// base \n// - The fourth parameter specifies whether and wha t kind of \n// history is kept \n \n// Active is the outermost state and therefore need s to pass the \n// state machine class it belongs to \nstruct Active : sc::simple_state< \n Active, StopWatch, Stopped > {}; \n \n// Stopped and Running both specify Active as their Context, \n// which makes them nested inside Active \nstruct Running : sc::simple_state< Running, Active > {}; \nstruct Stopped : sc::simple_state< Stopped, Active > {}; \n \n// Because the context of a state must be a complet e type (i.e. \n// not forward declared), a machine must be defined from \n// \"outside to inside\". That is, we always start wi th the state \n// machine, followed by outermost states, followed by the direct \n// inner states of outermost states and so on. We c an do so in a \n// breadth-first or depth-first way or employ a mix ture of the \nPage 4 of 32 The Boost Statechart Library - Tutorial \n2006/12/03// two. \n \nint main() \n{ \n StopWatch myWatch; \n myWatch.initiate(); \n return 0; \n} \nThis compiles but doesn't do anything observable ye t. \nAdding reactions \nFor the moment we will use only one type of reactio n: transitions. We insert the bold parts of the \nfollowing code: \n#include <boost/statechart/transition.hpp> \n \n// ... \n \nstruct Stopped; \nstruct Active : sc::simple_state< Active, StopWatch , Stopped > \n{ \n typedef sc::transition< EvReset, Active > reactions ; \n}; \n \nstruct Running : sc::simple_state< Running, Active > \n{ \n typedef sc::transition< EvStartStop, Stopped > reac tions; \n}; \n \nstruct Stopped : sc::simple_state< Stopped, Active > \n{ \n typedef sc::transition< EvStartStop, Running > reac tions; \n}; \n \n// A state can define an arbitrary number of reacti ons. That's \n// why we have to put them into an mpl::list<> as s oon as there \n// is more than one of them \n// (see Specifying multiple reactions for a state ). \n \nint main() \n{ \n StopWatch myWatch; \n myWatch.initiate(); \n myWatch.process_event( EvStartStop() ); \n myWatch.process_event( EvStartStop() ); \n myWatch.process_event( EvStartStop() ); \n myWatch.process_event( EvReset() ); \n return 0; \n} \nNow we have all the states and all the transitions in place and a number of events are also sent to th e \nstop watch. The machine dutifully makes the transit ions we would expect, but no actions are executed Page 5 of 32 The Boost Statechart Library - Tutorial \n2006/12/03yet. \nState-local storage \nNext we'll make the stop watch actually measure tim e. Depending on the state the stop watch is in, we \nneed different variables: \n/circle6Stopped: One variable holding the elapsed time \n/circle6Running: One variable holding the elapsed time and one variable storing the point in time at \nwhich the watch was last started. \nWe observe that the elapsed time variable is needed no matter what state the machine is in. Moreover, \nthis variable should be reset to 0 when we send an EvReset event to the machine. The other variable \nis only needed while the machine is in the Running state. It should be set to the current time of the \nsystem clock whenever we enter the Running state. U pon exit we simply subtract the start time from \nthe current system clock time and add the result to the elapsed time. \n#include <ctime> \n \n// ... 
\n \nstruct Stopped; \nstruct Active : sc::simple_state< Active, StopWatch , Stopped > \n{ \n public: \n typedef sc::transition< EvReset, Active > react ions; \n \n Active() : elapsedTime_( 0.0 ) {} \n double ElapsedTime() const { return elapsedTime_; } \n double & ElapsedTime() { return elapsedTime_; } \n private: \n double elapsedTime_; \n}; \n \nstruct Running : sc::simple_state< Running, Active > \n{ \n public: \n typedef sc::transition< EvStartStop, Stopped > reactions; \n \n Running() : startTime_( std::time( 0 ) ) {} \n ~Running() \n { \n // Similar to when a derived class object acc esses its \n // base class portion, context<>() is used to gain \n // access to the direct or indirect context o f a state. \n // This can either be a direct or indirect ou ter state \n // or the state machine itself \n // (e.g. here: context< StopWatch >()). \n context< Active >().ElapsedTime() += \n std::difftime( std::time( 0 ), startTime_ ); \n } \n private: \n std::time_t startTime_; \n}; Page 6 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 \n// ... \nThe machine now measures the time, but we cannot ye t retrieve it from the main program. \nAt this point, the advantages of state-local storag e (which is still a relatively little-known feature ) may \nnot yet have become apparent. The FAQ item \" What's so cool about state -local storage? \" tries to \nexplain them in more detail by comparing this StopW atch with one that does not make use of state-\nlocal storage. \nGetting state information out of the machine \nTo retrieve the measured time, we need a mechanism to get state information out of the machine. With \nour current machine design there are two ways to do that. For the sake of simplicity we use the less \nefficient one: state_cast<>() (StopWatch2.cpp shows the slightly more complex al ternative). As \nthe name suggests, the semantics are very similar t o the ones of dynamic_cast . For example, when \nwe call myWatch.state_cast< const Stopped & >() and the machine is currently in the \nStopped state, we get a reference to the Stopped state. Otherwise std::bad_cast is thrown. We \ncan use this functionality to implement a StopWatch member function that returns the elapsed time. \nHowever, rather than ask the machine in which state it is and then switch to different calculations fo r \nthe elapsed time, we put the calculation into the S topped and Running states and use an interface to \nretrieve the elapsed time: \n#include <iostream> \n \n// ... \n \nstruct IElapsedTime \n{ \n virtual double ElapsedTime() const = 0; \n}; \n \nstruct Active; \nstruct StopWatch : sc::state_machine< StopWatch, Ac tive > \n{ \n double ElapsedTime() const \n { \n return state_cast< const IElapsedTime & >().Elapsed Time(); \n } \n}; \n \n// ... 
\n \nstruct Running : IElapsedTime, \n sc::simple_state< Running, Active > \n{ \n public: \n typedef sc::transition< EvStartStop, Stopped > reactions; \n \n Running() : startTime_( std::time( 0 ) ) {} \n ~Running() \n { \n context< Active >().ElapsedTime() = ElapsedTime(); \n } Page 7 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 \n virtual double ElapsedTime() const \n { \n return context< Active >().ElapsedTime() + \n std::difftime( std::time( 0 ), startTime_ ); \n } \n private: \n std::time_t startTime_; \n}; \n \nstruct Stopped : IElapsedTime, \n sc::simple_state< Stopped, Active > \n{ \n typedef sc::transition< EvStartStop, Running > re actions; \n \n virtual double ElapsedTime() const \n { \n return context< Active >().ElapsedTime(); \n } \n}; \n \nint main() \n{ \n StopWatch myWatch; \n myWatch.initiate(); \n std::cout << myWatch.ElapsedTime() << \"\\n\"; \n myWatch.process_event( EvStartStop() ); \n std::cout << myWatch.ElapsedTime() << \"\\n\"; \n myWatch.process_event( EvStartStop() ); \n std::cout << myWatch.ElapsedTime() << \"\\n\"; \n myWatch.process_event( EvStartStop() ); \n std::cout << myWatch.ElapsedTime() << \"\\n\"; \n myWatch.process_event( EvReset() ); \n std::cout << myWatch.ElapsedTime() << \"\\n\"; \n return 0; \n} \nTo actually see time being measured, you might want to single-step through the statements in main \n() . The StopWatch example extends this program to an interactive console application. \nIntermediate topics: A digital camera \nSo far so good. However, the approach presented abo ve has a few limitations: \n/circle6Bad scalability: As soon as the compiler reaches th e point where \nstate_machine::initiate() is called, a number of template instantiations tak e place, \nwhich can only succeed if the full declaration of e ach and every state of the machine is known. \nThat is, the whole layout of a state machine must b e implemented in one single translation unit \n(actions can be compiled separately, but this is of no importance here). For bigger (and more \nreal-world) state machines, this leads to the follo wing limitations: \n/ring2At some point compilers reach their internal templa te instantiation limits and give up. This \ncan happen even for moderately- sized machines. For example, in debug mode one popu lar \ncompiler refused to compile earlier versions of the BitMachine example for anything \nabove 3 bits. This means that the compiler reached its limits somewhere between 8 states, Page 8 of 32 The Boost Statechart Library - Tutorial \n2006/12/0324 transitions and 16 states, 64 transitions \n/ring2Multiple programmers can hardly work on the same st ate machine simultaneously because \nevery layout change will inevitably lead to a recom pilation of the whole state machine \n/circle6Maximum one reaction per event: According to UML a state can have multiple reactions \ntriggered by the same event. This makes sense when all reactions have mutually exclusive \nguards. The interface we used above only allows for at most one unguarded reaction for each \nevent. Moreover, the UML concepts junction and choi ce point are not directly supported \nAll these limitations can be overcome with custom r eactions. Warning: It is easy to abuse custom \nreactions up to the point of invoking undefined beh avior. Please study the documentation before \nemploying them! \nSpreading a state machine over multiple translation units \nLet's say your company would like to develop a digi tal camera. 
The camera has the following controls: \n/circle6Shutter button, which can be half-pressed and fully -pressed. The associated events are \nEvShutterHalf , EvShutterFull and EvShutterReleased \n/circle6Config button, represented by the EvConfig event \n/circle6A number of other buttons that are not of interest here \nOne use case for the camera says that the photograp her can half-press the shutter anywhere in the \nconfiguration mode and the camera will immediately go into shooting mode. The following statechart \nis one way to achieve this behavior: \n \nThe Configuring and Shooting states will contain nu merous nested states while the Idle state is \nrelatively simple. It was therefore decided to buil d two teams. One will implement the shooting mode \nwhile the other will implement the configuration mo de. The two teams have already agreed on the \ninterface that the shooting team will use to retrie ve the configuration settings. We would like to ens ure \nthat the two teams can work with the least possible interference. So, we put the two states in their o wn \ntranslation units so that machine layout changes wi thin the Configuring state will never lead to a \nrecompilation of the inner workings of the Shooting state and vice versa. \nUnlike in the previous example, the excerpts presen ted here often outline different options to \nachieve the same effect. That's why the code is oft en not equal to the Camera example code. \nComments mark the parts where this is the case. \nPage 9 of 32 The Boost Statechart Library - Tutorial \n2006/12/03Camera.hpp: \n#ifndef CAMERA_HPP_INCLUDED \n#define CAMERA_HPP_INCLUDED \n \n#include <boost/statechart/event.hpp> \n#include <boost/statechart/state_machine.hpp> \n#include <boost/statechart/simple_state.hpp> \n#include <boost/statechart/custom_reaction.hpp> \n \nnamespace sc = boost::statechart; \n \nstruct EvShutterHalf : sc::event< EvShutterHalf > { }; \nstruct EvShutterFull : sc::event< EvShutterFull > { }; \nstruct EvShutterRelease : sc::event< EvShutterRelea se > {}; \nstruct EvConfig : sc::event< EvConfig > {}; \n \nstruct NotShooting; \nstruct Camera : sc::state_machine< Camera, NotShoot ing > \n{ \n bool IsMemoryAvailable() const { return true; } \n bool IsBatteryLow() const { return false; } \n}; \n \nstruct Idle; \nstruct NotShooting : sc::simple_state< \n NotShooting, Camera, Idle > \n{ \n // With a custom reaction we only specify that we might do \n // something with a particular event, but the act ual reaction \n // is defined in the react member function, which can be \n // implemented in the .cpp file. \n typedef sc::custom_reaction< EvShutterHalf > reacti ons; \n \n // ... \n sc::result react( const EvShutterHalf & ); \n}; \n \nstruct Idle : sc::simple_state< Idle, NotShooting > \n{ \n typedef sc::custom_reaction< EvConfig > reactions; \n \n // ... \n sc::result react( const EvConfig & ); \n}; \n \n#endif \nCamera.cpp: \n#include \"Camera.hpp\" \n \n// The following includes are only made here but no t in \n// Camera.hpp Page 10 of 32 The Boost Statechart Library - Tutorial \n2006/12/03// The Shooting and Configuring states can themselv es apply the \n// same pattern to hide their inner implementation, which \n// ensures that the two teams working on the Camera state \n// machine will never need to disturb each other. \n#include \"Configuring.hpp\" \n#include \"Shooting.hpp\" \n \n// ... 
\n \n// not part of the Camera example \nsc::result NotShooting::react( const EvShutterHalf & ) \n{ \n return transit< Shooting >(); \n} \n \nsc::result Idle::react( const EvConfig & ) \n{ \n return transit< Configuring >(); \n} \nCaution: Any call to simple_state<>::transit<>() or \nsimple_state<>::terminate() (see reference ) will inevitably destruct the state object \n(similar to delete this; )! That is, code executed after any of these calls may invoke \nundefined behavior! That's why these functions should only be called a s part of a return statement. \nDeferring events \nThe inner workings of the Shooting state could look as follows: \n \nPage 11 of 32 The Boost Statechart Library - Tutorial \n2006/12/03When the user half-presses the shutter, Shooting an d its inner initial state Focusing are entered. In the \nFocusing entry action the camera instructs the focu sing circuit to bring the subject into focus. The \nfocusing circuit then moves the lenses accordingly and sends the EvInFocus event as soon as it is done . \nOf course, the user can fully-press the shutter whi le the lenses are still in motion. Without any \nprecautions, the resulting EvShutterFull event woul d simply be lost because the Focusing state does \nnot define a reaction for this event. As a result, the user would have to fully-press the shutter agai n \nafter the camera has finished focusing. To prevent this, the EvShutterFull event is deferred inside th e \nFocusing state. This means that all events of this type are stored in a separate queue, which is empti ed \ninto the main queue when the Focusing state is exit ed. \nstruct Focusing : sc::state< Focusing, Shooting > \n{ \n typedef mpl::list< \n sc::custom_reaction< EvInFocus >, \n sc::deferral< EvShutterFull > \n > reactions; \n \n Focusing( my_context ctx ); \n sc::result react( const EvInFocus & ); \n}; \nGuards \nBoth transitions originating at the Focused state a re triggered by the same event but they have mutual ly \nexclusive guards. Here is an appropriate custom rea ction: \n// not part of the Camera example \nsc::result Focused::react( const EvShutterFull & ) \n{ \n if ( context< Camera >().IsMemoryAvailable() ) \n { \n return transit< Storing >(); \n } \n else \n { \n // The following is actually a mixture between an in-state \n // reaction and a transition. See later on how to implement \n // proper transition actions. \n std::cout << \"Cache memory full. Please wait... \\n\"; \n return transit< Focused >(); \n } \n} \nCustom reactions can of course also be implemented directly in the state declaration, which is often \npreferable for easier browsing. \nNext we will use a guard to prevent a transition an d let outer states react to the event if the batter y is \nlow: \nCamera.cpp: \n// ... \nsc::result NotShooting::react( const EvShutterHalf & ) \n{ Page 12 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 if ( context< Camera >().IsBatteryLow() ) \n { \n // We cannot react to the event ourselves, so w e forward it \n // to our outer state (this is also the default if a state \n // defines no reaction for a given event). \n return forward_event(); \n } \n else \n { \n return transit< Shooting >(); \n } \n} \n// ... \nIn-state reactions \nThe self-transition of the Focused state could also be implemented as an in -state reaction , which has \nthe same effect as long as Focused does not have an y entry or exit actions: \nShooting.cpp: \n// ... 
\nsc::result Focused::react( const EvShutterFull & ) \n{ \n if ( context< Camera >().IsMemoryAvailable() ) \n { \n return transit< Storing >(); \n } \n else \n { \n std::cout << \"Cache memory full. Please wait... \\n\"; \n // Indicate that the event can be discarded. So , the \n // dispatch algorithm will stop looking for a r eaction \n // and the machine remains in the Focused state . \n return discard_event(); \n } \n} \n// ... \nBecause the in-state reaction is guarded, we need t o employ a custom_reaction<> here. For \nunguarded in-state reactions in_state_reaction <> should be used for better code-readability. \nTransition actions \nAs an effect of every transition, actions are execu ted in the following order: \n1. Starting from the innermost active state, all exi t actions up to but excluding the innermost \ncommon context \n2. The transition action (if present) \n3. Starting from the innermost common context, all e ntry actions down to the target state followed \nby the entry actions of the initial states \nExample: Page 13 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 \nHere the order is as follows: ~D(), ~C(), ~B(), ~A( ), t(), X(), Y(), Z(). The transition action t() is \ntherefore executed in the context of the InnermostC ommonOuter state because the source state has \nalready been left (destructed) and the target state has not yet been entered (constructed). \nWith Boost.Statechart, a transition action can be a member of any common outer context. That is, the \ntransition between Focusing and Focused could be im plemented as follows: \nShooting.hpp: \n// ... \nstruct Focusing; \nstruct Shooting : sc::simple_state< Shooting, Camer a, Focusing > \n{ \n typedef sc::transition< \n EvShutterRelease, NotShooting > reactions; \n \n // ... \n void DisplayFocused( const EvInFocus & ); \n}; \n \n// ... \n \n// not part of the Camera example \nstruct Focusing : sc::simple_state< Focusing, Shoot ing > \n{ \n typedef sc::transition< EvInFocus, Focused , \n Shooting, &Shooting::DisplayFocused > reactions; \n}; \nOr , the following is also possible (here the state ma chine itself serves as the outermost context): \n// not part of the Camera example \nstruct Camera : sc::state_machine< Camera, NotShoot ing > \n{ \n void DisplayFocused( const EvInFocus & ); \n}; \nPage 14 of 32 The Boost Statechart Library - Tutorial \n2006/12/03// not part of the Camera example \nstruct Focusing : sc::simple_state< Focusing, Shoot ing > \n{ \n typedef sc::transition< EvInFocus, Focused , \n Camera, &Camera::DisplayFocused > reactions; \n}; \nNaturally, transition actions can also be invoked f rom custom reactions: \nShooting.cpp: \n// ... \nsc::result Focusing::react( const EvInFocus & evt ) \n{ \n // We have to manually forward evt \n return transit< Focused >( &Shooting::DisplayFocused , evt ); \n} \nAdvanced topics \nSpecifying multiple reactions for a state \nOften a state must define reactions for more than o ne event. In this case, an mpl::list<> must be \nused as outlined below: \n// ... \n \n#include <boost/mpl/list.hpp> \n \nnamespace mpl = boost::mpl; \n \n// ... \n \nstruct Playing : sc::simple_state< Playing, Mp3Play er > \n{ \n typdef mpl::list< \n sc::custom_reaction< EvFastForward >, \n sc::transition< EvStop, Stopped > > reactions; \n \n /* ... */ \n}; \nPosting events \nNon -trivial state machines often need to post internal events. Here's an example of how to do this: \nPumping::~Pumping() \n{ \n post_event( EvPumpingFinished() ); \n} \nThe event is pushed into the main queue. 
The events in the queue are processed as soon as the current \nreaction is completed. Events can be posted from in side react functions, entry-, exit- and transition Page 15 of 32 The Boost Statechart Library - Tutorial \n2006/12/03actions. However, posting from inside entry actions is a bit more complicated (see e.g. \nFocusing::Focusing() in Shooting.cpp in the Camera example): \nstruct Pumping : sc::state < Pumping, Purifier > \n{ \n Pumping( my_context ctx ) : my_base( ctx ) \n { \n post_event( EvPumpingStarted() ); \n } \n // ... \n}; \nAs soon as an entry action of a state needs to cont act the \"outside world\" (here: the event queue in t he \nstate machine), the state must derive from state<> rather than from simple_state<> and must \nimplement a forwarding constructor as outlined abov e (apart from the constructor, state<> offers \nthe same interface as simple_state<> ). Hence, this must be done whenever an entry actio n makes \none or more calls to the following functions: \n/circle6simple_state<>::post_event() \n/circle6simple_state<>::clear_shallow_history<>() \n/circle6simple_state<>::clear_deep_history<>() \n/circle6simple_state<>::outermost_context() \n/circle6simple_state<>::context<>() \n/circle6simple_state<>::state_cast<>() \n/circle6simple_state<>::state_downcast<>() \n/circle6simple_state<>::state_begin() \n/circle6simple_state<>::state_end() \nIn my experience, these functions are needed only r arely in entry actions so this workaround should \nnot uglify user code too much. \nHistory \nPhotographers testing beta versions of our digital camera said that they really liked that half-pressing \nthe shutter anytime (even while the camera is being configured) immediately readies the camera for \npicture-taking. However, most of them found it unin tuitive that the camera always goes into the idle \nmode after releasing the shutter. They would rather see the camera go back into the state it had befor e \nhalf-pressing the shutter. This way they can easily test the influence of a configuration setting by \nmodifying it, half- and then fully-pressing the shu tter to take a picture. Finally, releasing the shut ter \nwill bring them back to the screen where they have modified the setting. To implement this behavior \nwe'd change the state chart as follows: Page 16 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 \nAs mentioned earlier, the Configuring state contain s a fairly complex and deeply nested inner machine. \nNaturally, we'd like to restore the previous state down to the innermost state (s) in Configuring, that's \nwhy we use a deep history pseudo state. The associa ted code looks as follows: \n// not part of the Camera example \nstruct NotShooting : sc::simple_state< \n NotShooting, Camera, Idle, sc::has_deep_history > \n{ \n // ... \n}; \n \n// ... \n \nstruct Shooting : sc::simple_state< Shooting, Camer a, Focusing > \n{ \n typedef sc::transition< \n EvShutterRelease, sc::deep_history< Idle > > reactions; \n \n // ... \n}; \nHistory has two phases: Firstly, when the state con taining the history pseudo state is exited, \ninformation about the previously active inner state hierarchy must be saved. Secondly, when a \ntransition to the history pseudo state is made late r, the saved state hierarchy information must be \nretrieved and the appropriate states entered. 
The f ormer is expressed by passing either \nhas_shallow_history , has_deep_history or has_full_history (which combines \nshallow and deep history) as the last parameter to the simple_state and state class templates. \nThe latter is expressed by specifying either shallow_history<> or deep_history<> as a \ntransition destination or, as we'll see in an insta nt, as an inner initial state. Because it is possib le that a \nstate containing a history pseudo state has never b een entered before a transition to history is made, \nboth class templates demand a parameter specifying the default state to enter in such situations. \nThe redundancy necessary for using history is check ed for consistency at compile time. That is, the \nPage 17 of 32 The Boost Statechart Library - Tutorial \n2006/12/03state machine wouldn't have compiled had we forgott en to pass has_deep_history to the base of \nNotShooting . \nAnother change request filed by a few beta testers says that they would like to see the camera go back \ninto the state it had before turning it off when th ey turn it back on. Here's the implementation: \n \n// ... \n \n// not part of the Camera example \nstruct NotShooting : sc::simple_state< NotShooting, Camera, \n mpl::list< sc::deep_history< Idle > > , \n sc::has_deep_history > \n{ \n // ... \n}; \n \n// ... \nUnfortunately, there is a small inconvenience due t o some template-related implementation details. \nWhen the inner initial state is a class template in stantiation we always have to put it into an \nmpl::list<> , although there is only one inner initial state. M oreover, the current deep history \nimplementation has some limitations . \nOrthogonal states \nPage 18 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 \nTo implement this statechart you simply specify mor e than one inner initial state (see the Keyboard \nexample): \nstruct Active; \nstruct Keyboard : sc::state_machine< Keyboard, Acti ve > {}; \n \nstruct NumLockOff; \nstruct CapsLockOff; \nstruct ScrollLockOff; \nstruct Active: sc::simple_state< Active, Keyboard, \n mpl::list< NumLockOff, CapsLockOff, ScrollLockOff > > {}; \nActive's inner states must declare which orthogonal region they belong to: \nstruct EvNumLockPressed : sc::event< EvNumLockPress ed > {}; \nstruct EvCapsLockPressed : sc::event< EvCapsLockPre ssed > {}; \nstruct EvScrollLockPressed : \n sc::event< EvScrollLockPressed > {}; \n \nstruct NumLockOn : sc::simple_state< \n NumLockOn, Active ::orthogonal< 0 > > \n{ \n typedef sc::transition< \n EvNumLockPressed, NumLockOff > reactions; \n}; \n \nstruct NumLockOff : sc::simple_state< \n NumLockOff, Active ::orthogonal< 0 > > \n{ \n typedef sc::transition< \n EvNumLockPressed, NumLockOn > reactions; \n}; \nPage 19 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 \nstruct CapsLockOn : sc::simple_state< \n CapsLockOn, Active ::orthogonal< 1 > > \n{ \n typedef sc::transition< \n EvCapsLockPressed, CapsLockOff > reactions; \n}; \n \nstruct CapsLockOff : sc::simple_state< \n CapsLockOff, Active ::orthogonal< 1 > > \n{ \n typedef sc::transition< \n EvCapsLockPressed, CapsLockOn > reactions; \n}; \n \nstruct ScrollLockOn : sc::simple_state< \n ScrollLockOn, Active ::orthogonal< 2 > > \n{ \n typedef sc::transition< \n EvScrollLockPressed, ScrollLockOff > reactions; \n}; \n \nstruct ScrollLockOff : sc::simple_state< \n ScrollLockOff, Active ::orthogonal< 2 > > \n{ \n typedef sc::transition< \n EvScrollLockPressed, ScrollLockOn > reactions; \n}; \northogonal< 0 > is the default, so NumLockOn and 
NumLockOff could just as well pass \nActive instead of Active::orthogonal< 0 > to specify their context. The numbers passed to \nthe orthogonal member template must correspond to the list positi on in the outer state. Moreover, \nthe orthogonal position of the source state of a tr ansition must correspond to the orthogonal position of \nthe target state. Any violations of these rules lea d to compile time errors. Examples: \n// Example 1: does not compile because Active speci fies \n// only 3 orthogonal regions \nstruct WhateverLockOn: sc::simple_state< \n WhateverLockOn, Active :: orthogonal< 3 > > {}; \n \n// Example 2: does not compile because Active speci fies \n// that NumLockOff is part of the \"0th\" orthogonal region \nstruct NumLockOff : sc::simple_state< \n NumLockOff, Active :: orthogonal< 1 > > {}; \n \n// Example 3: does not compile because a transition between \n// different orthogonal regions is not permitted \nstruct CapsLockOn : sc::simple_state< \n CapsLockOn, Active :: orthogonal< 1 > > \n{ \n typedef sc::transition< \n EvCapsLockPressed, CapsLockOff > reactions; \n}; \n Page 20 of 32 The Boost Statechart Library - Tutorial \n2006/12/03struct CapsLockOff : sc::simple_state< \n CapsLockOff, Active :: orthogonal< 2 > > \n{ \n typedef sc::transition< \n EvCapsLockPressed, CapsLockOn > reactions; \n}; \nState queries \nOften reactions in a state machine depend on the ac tive state in one or more orthogonal regions. This is \nbecause orthogonal regions are not completely ortho gonal or a certain reaction in an outer state can \nonly take place if the inner orthogonal regions are in particular states. For this purpose, the \nstate_cast<> function introduced under Getting state information out of the machine is also \navailable within states. \nAs a somewhat far-fetched example, let's assume tha t our keyboard also accepts \nEvRequestShutdown events, the reception of which makes the keyboard terminate only if all lock \nkeys are in the off state. We would then modify the Keyboard state machine as follows: \nstruct EvRequestShutdown : sc::event< EvRequestShut down > {}; \n \nstruct NumLockOff; \nstruct CapsLockOff; \nstruct ScrollLockOff; \nstruct Active: sc::simple_state< Active, Keyboard, \n mpl::list< NumLockOff, CapsLockOff, ScrollLockOff > > \n{ \n typedef sc::custom_reaction< EvRequestShutdown > reactions; \n \n sc::result react( const EvRequestShutdown & ) \n { \n if ( ( state_downcast< const NumLockOff * >() ! = 0 ) && \n ( state_downcast< const CapsLockOff * >() != 0 ) && \n ( state_downcast< const ScrollLockOff * >( ) != 0 ) ) \n { \n return terminate(); \n } \n else \n { \n return discard_event(); \n } \n } \n}; \nPassing a pointer type instead of reference type re sults in 0 pointers being returned instead of \nstd::bad_cast being thrown when the cast fails. Note also the us e of state_downcast<>() \ninstead of state_cast<>() . Similar to the differences between \nboost::polymorphic_downcast<>() and dynamic_cast , state_downcast<>() is a \nmuch faster variant of state_cast<>() and can only be used when the passed type is a mos t-\nderived type. state_cast<>() should only be used if you want to query an additi onal base. \nCustom state queries \nIt is often desirable to find out exactly which sta te(s) a machine currently resides in. 
To some exten t Page 21 of 32 The Boost Statechart Library - Tutorial \n2006/12/03this is already possible with state_cast<>() and state_downcast<>() but their utility is \nrather limited because both only return a yes/no an swer to the question \"Are you in state X?\". It is \npossible to ask more sophisticated questions when y ou pass an additional base class rather than a stat e \nclass to state_cast<>() but this involves more work (all states need to de rive from and \nimplement the additional base), is slow (under the hood state_cast<>() uses dynamic_cast ), \nforces projects to compile with C++ RTTI turned on and has a negative impact on state entry/exit \nspeed. \nEspecially for debugging it would be so much more u seful being able to ask \"In which state(s) are \nyou?\". For this purpose it is possible to iterate o ver all active innermost states with \nstate_machine<>::state_begin() and state_machine<>::state_end() . \nDereferencing the returned iterator returns a refer ence to const \nstate_machine<>::state_base_type , the common base of all states. We can thus print the \ncurrently active state configuration as follows (se e the Keyboard example for the complete code): \nvoid DisplayStateConfiguration( const Keyboard & kb d ) \n{ \n char region = 'a'; \n \n for ( \n Keyboard::state_iterator pLeafState = kbd.state _begin(); \n pLeafState != kbd.state_end(); ++pLeafState ) \n { \n std::cout << \"Orthogonal region \" << region << \": \"; \n // The following use of typeid assumes that \n // BOOST_STATECHART_USE_NATIVE_RTTI is defined \n std::cout << typeid( *pLeafState ).name() << \"\\ n\"; \n ++region; \n } \n} \nIf necessary, the outer states can be accessed with \nstate_machine<>::state_base_type::outer_state_ptr() , which returns a pointer \nto const state_machine<>::state_base_type . When called on an outermost state this \nfunction simply returns 0. \nState type information \nTo cut down on executable size some applications mu st be compiled with C++ RTTI turned off. This \nwould render the ability to iterate over all active states pretty much useless if it weren't for the \nfollowing two functions: \n/circle6static unspecified_type simple_state<>::static_type() \n/circle6unspecified_type \n state_machine<>::state_base_type::dynamic_type() const \nBoth return a value that is comparable via operator==() and std::less<> . This alone would \nbe enough to implement the DisplayStateConfiguration function above without the help of \ntypeid but it is still somewhat cumbersome as a map must be used to associate the type information \nvalues with the state names. \nCustom state type information Page 22 of 32 The Boost Statechart Library - Tutorial \n2006/12/03That's why the following functions are also provide d (only available when \nBOOST_STATECHART_USE_NATIVE_RTTI is not defined): \n/circle6template< class T > \nstatic void simple_state<>::custom_static_type_ptr( const T * ); \n/circle6template< class T > \nstatic const T * simple_state<>::custom_static_type _ptr(); \n/circle6template< class T > \nconst T * state_machine<>:: \n state_base_type::custom_dynamic_type_ptr() const; \nThese allow us to directly associate arbitrary stat e type information with each state ... \n// ... 
\n \nint main() \n{ \n NumLockOn::custom_static_type_ptr( \"NumLockOn\" ); \n NumLockOff::custom_static_type_ptr( \"NumLockOff\" ); \n CapsLockOn::custom_static_type_ptr( \"CapsLockOn\" ); \n CapsLockOff::custom_static_type_ptr( \"CapsLockOff \" ); \n ScrollLockOn::custom_static_type_ptr( \"ScrollLock On\" ); \n ScrollLockOff::custom_static_type_ptr( \"ScrollLoc kOff\" ); \n \n // ... \n} \n... and rewrite the display function as follows: \nvoid DisplayStateConfiguration( const Keyboard & kb d ) \n{ \n char region = 'a'; \n \n for ( \n Keyboard::state_iterator pLeafState = kbd.state _begin(); \n pLeafState != kbd.state_end(); ++pLeafState ) \n { \n std::cout << \"Orthogonal region \" << region << \": \"; \n std::cout << \n pLeafState->custom_dynamic_type_ptr< char >() << \"\\n\"; \n ++region; \n } \n} \nException handling \nExceptions can be propagated from all user code exc ept from state destructors. Out of the box, the sta te \nmachine framework is configured for simple exceptio n handling and does not catch any of these \nexceptions, so they are immediately propagated to t he state machine client. A scope guard inside the \nstate_machine<> ensures that all state objects are destructed befo re the exception is caught by the \nclient. The scope guard does not attempt to call an y exit functions (see Two stage exit below) that \nstates might define as these could themselves throw other exceptions which would mask the original \nexception. Consequently, if a state machine should do something more sensible when exceptions are \nthrown, it has to catch them before they are propag ated into the Boost.Statechart framework. This Page 23 of 32 The Boost Statechart Library - Tutorial \n2006/12/03exception handling scheme is often appropriate but it can lead to considerable code duplication in sta te \nmachines where many actions can trigger exceptions that need to be handled inside the state machine \n(see Error handling in the Rationale). \nThat's why exception handling can be customized thr ough the ExceptionTranslator parameter \nof the state_machine class template. Since the out-of-the box behavior is to not translate any \nexceptions, the default argument for this parameter is null_exception_translator . A \nstate_machine<> subtype can be configured for advanced exception h andling by specifying the \nlibrary-supplied exception_translator<> instead. This way, the following happens when an \nexception is propagated from user code: \n1. The exception is caught inside the framework \n2. In the catch block, an exception_thrown event is allocated on the stack \n3. Also in the catch block, an immediate dispatch of the exception_thrown event is \nattempted. That is, possibly remaining events in th e queue are dispatched only after the \nexception has been handled successfully \n4. If the exception was handled successfully, the st ate machine returns to the client normally. If the \nexception could not be handled successfully, the or iginal exception is rethrown so that the client \nof the state machine can handle the exception \nOn platforms with buggy exception handling implemen tations users would probably want to \nimplement their own model of the ExceptionTranslator concept (see also Discriminating exceptions ). \nSuccessful exception handling \nAn exception is considered handled successfully, if : \n/circle6an appropriate reaction for the exception_thrown event has been found, and \n/circle6the state machine is in a stable state after the re action has completed. 
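As a side note before looking at these two conditions in more detail: switching a machine to advanced exception handling only requires naming exception_translator<> as the fourth template argument of state_machine<>. The following is a minimal sketch, reusing the Purifier and Idle names that appear in the other examples of this tutorial; the allocator argument merely restates the default so that the ExceptionTranslator position can be reached: 
 
// requires <boost/statechart/exception_translator.hpp> 
struct Idle; 
struct Purifier : sc::state_machine< 
  Purifier, Idle, 
  std::allocator< void >,       // restates the default allocator ... 
  sc::exception_translator<> >  // ... to reach the ExceptionTranslator parameter 
{ 
}; 
 
// Pretend Idle has entry actions and reactions that might throw 
struct Idle : sc::simple_state< Idle, Purifier > {}; 
 
Everything else (states, reactions and custom reactions for exception_thrown events) is written exactly as before.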
\nThe second condition is important for scenarios 2 a nd 3 in the next section. In these scenarios, the s tate \nmachine is in the middle of a transition when the e xception is handled. The machine would be left in \nan invalid state, should the reaction simply discar d the event without doing anything else. \nexception_translator<> simply rethrows the original exception if the exce ption handling was \nunsuccessful. Just as with simple exception handlin g, in this case a scope guard inside the \nstate_machine<> ensures that all state objects are destructed befo re the exception is caught by the \nclient. \nWhich states can react to an exception_thrown event? \nShort answer: If the state machine is stable when t he exception is thrown, the state that caused the \nexception is first tried for a reaction. Otherwise the outermost unstable state is first tried for a reaction. \nLonger answer: There are three scenarios: \n1. A react member function propagates an exception before calling any of the reaction functions \nor the action executed during an in-state reaction propagates an exception. The state that caused \nthe exception is first tried for a reaction, so the following machine will transit to Defective after \nreceiving an EvStart event: \n Page 24 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 \n \n2. A state entry action (constructor) propagates an exception: \n/ring2If there are no orthogonal regions, the direct oute r state of the state that caused the \nexception is first tried for a reaction, so the fol lowing machine will transit to Defective \nafter trying to enter Stopped: \n \n \n/ring2If there are orthogonal regions, the outermost unstable state is first tried for a reaction. The \noutermost unstable state is found by first selectin g the direct outer state of the state that \ncaused the exception and then moving outward until a state is found that is unstable but \nhas no direct or indirect outer states that are uns table. This more complex rule is necessary \nbecause only reactions associated with the outermos t unstable state (or any of its direct or \nindirect outer states) are able to bring the machin e back into a stable state. Consider the \nfollowing statechart: \n \nPage 25 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 \n \nWhether this state machine will ultimately transiti on to E or F after initiation depends on \nwhich of the two orthogonal regions is initiated fi rst. If the upper orthogonal region is \ninitiated first, the entry sequence is as follows: A, D, B, (exception is thrown). Both D and \nB were successfully entered, so B is the outermost unstable state when the exception is \nthrown and the machine will therefore transition to F. However, if the lower orthogonal \nregion is initiated first, the sequence is as follo ws: A, B, (exception is thrown). D was \nnever entered so A is the outermost unstable state when the exception is thrown and the \nmachine will therefore transition to E. \nIn practice these differences rarely matter as top- level error recovery is adequate for most \nstate machines. However, since the sequence of init iation is clearly defined (orthogonal \nregion 0 is always initiated first, then region 1 a nd so forth), users can accurately control \nwhen and where they want to handle exceptions \n3. 
A transition action propagates an exception: The innermost common outer state of the source \nand the target state is first tried for a reaction, so the following machine will transit to Defective \nafter receiving an EvStartStop event: \n \nPage 26 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 \nAs with a normal event, the dispatch algorithm will move outward to find a reaction if the first tried \nstate does not provide one (or if the reaction expl icitly returned forward_event(); ). However, in \ncontrast to normal events, it will give up once it has unsuccessfully tried an outermost state , so \nthe following machine will not transit to Defective after receiving an EvNumLockP ressed event: \n \nInstead, the machine is terminated and the original exception rethrown. \nDiscriminating exceptions \nBecause the exception_thrown event is dispatched from within the catch block, w e can rethrow \nand catch the exception in a custom reaction: \nstruct Defective : sc::simple_state< \n Defective, Purifier > {}; \nPage 27 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 \n// Pretend this is a state deeply nested in the Pur ifier \n// state machine \nstruct Idle : sc::simple_state< Idle, Purifier > \n{ \n typedef mpl::list< \n sc::custom_reaction< EvStart >, \n sc::custom_reaction< sc::exception_thrown > \n > reactions; \n \n sc::result react( const EvStart & ) \n { \n throw std::runtime_error( \"\" ); \n } \n \n sc::result react( const sc::exception_thrown & ) \n { \n try \n { \n throw; \n } \n catch ( const std::runtime_error & ) \n { \n // only std::runtime_errors will lead to a tr ansition \n // to Defective ... \n return transit< Defective >(); \n } \n catch ( ... ) \n { \n // ... all other exceptions are forwarded to our outer \n // state(s). The state machine is terminated and the \n // exception rethrown if the outer state(s) c an't \n // handle it either... \n return forward_event(); \n } \n \n // Alternatively, if we want to terminate the m achine \n // immediately, we can also either rethrow or t hrow \n // a different exception. \n } \n}; \nUnfortunately, this idiom (using throw; inside a try block nested inside a catch block) does \nnot work on at least one very popular compiler. If you have to use one of these platforms, you can \npass a customized exception translator class to the state_machine class template. This will allow \nyou to generate different events depending on the t ype of the exception. \nTwo stage exit \nIf a simple_state<> or state<> subtype declares a public member function with the signature \nvoid exit() then this function is called just before the state object is destructed. As explained \nunder Error handling in the Rationale, this is useful for two things th at would otherwise be difficult or \ncumbersome to achieve with destructors only: Page 28 of 32 The Boost Statechart Library - Tutorial \n2006/12/031. To signal a failure in an exit action \n2. To execute certain exit actions only during a transition or a termination but not when the state \nmachine object is destructed \nA few points to consider before employing exit() : \n/circle6There is no guarantee that exit() will be called: \n/ring2If the client destructs the state machine object wi thout calling terminate() beforehand \nthen the currently active states are destructed wit hout calling exit() . 
This is necessary \nbecause an exception that is possibly thrown from exit() could not be propagated on to \nthe state machine client \n/ring2exit() is not called when a previously executed action pr opagated an exception and that \nexception has not (yet) been handled successfully. This is because a new exception that \ncould possibly be thrown from exit() would mask the original exception \n/circle6A state is considered exited, even if its exit function propagated an exception. That is, the sta te \nobject is inevitably destructed right after calling exit() , regardless of whether exit() \npropagated an exception or not. A state machine con figured for advanced exception handling is \ntherefore always unstable while handling an excepti on propagated from an exit function \n/circle6In a state machine configured for advanced exceptio n handling the processing rules for an \nexception event resulting from an exception propaga ted from exit() are analogous to the ones \ndefined for exceptions propagated from state constr uctors. That is, the outermost unstable state is \nfirst tried for a reaction and the dispatcher then moves outward until an appropriate reaction is \nfound \nSubmachines & parameterized states \nSubmachines are to event-driven programming what fu nctions are to procedural programming, \nreusable building blocks implementing often needed functionality. The associated UML notation is not \nentirely clear to me. It seems to be severely limit ed (e.g. the same submachine cannot appear in \ndifferent orthogonal regions) and does not seem to account for obvious stuff like e.g. parameters. \nBoost.Statechart is completely unaware of submachin es but they can be implemented quite nicely with \ntemplates. Here, a submachine is used to improve th e copy-paste implementation of the keyboard \nmachine discussed under Orthogonal states : \nenum LockType \n{ \n NUM_LOCK, \n CAPS_LOCK, \n SCROLL_LOCK \n}; \n \ntemplate< LockType lockType > \nstruct Off; \nstruct Active : sc::simple_state< \n Active, Keyboard, mpl::list< \n Off< NUM_LOCK >, Off< CAPS_LOCK >, Off< SCROLL_LO CK > > > {}; \n \ntemplate< LockType lockType > \nstruct EvPressed : sc::event< EvPressed< lockType > > {}; \n \ntemplate< LockType lockType > \nstruct On : sc::simple_state< \n On< lockType >, Active::orthogonal< lockType > > Page 29 of 32 The Boost Statechart Library - Tutorial \n2006/12/03{ \n typedef sc::transition< \n EvPressed< lockType >, Off< lockType > > reacti ons; \n}; \n \ntemplate< LockType lockType > \nstruct Off : sc::simple_state< \n Off< lockType >, Active::orthogonal< lockType > > \n{ \n typedef sc::transition< \n EvPressed< lockType >, On< lockType > > reactio ns; \n}; \nAsynchronous state machines \nWhy asynchronous state machines are necessary \nAs the name suggests, a synchronous state machine p rocesses each event synchronously. This behavior \nis implemented by the state_machine class template, whose process_event function only \nreturns after having executed all reactions (includ ing the ones provoked by internal events that actio ns \nmight have posted). This function is strictly non-r eentrant (just like all other member functions, so \nstate_machine<> is not thread-safe). This makes it difficult for t wo state_machine<> \nsubtype objects to communicate via events in a bi-d irectional fashion correctly, even in a single-\nthreaded program . For example, state machine A is in the middle of processing an external event. 
\nInside an action, it decides to send a new event to state machine B (by calling B::process_event \n() ). It then \"waits\" for B to send back an answer via a boost::function<> -like call-back, which \nreferences A::process_event() and was passed as a data member of the event. Howe ver, while \nA is \"waiting\" for B to send back an event, A::process_event() has not yet returned from \nprocessing the external event and as soon as B answers via the call-back, A::process_event() is \nunavoidably reentered. This all really happens in a single thr ead, that's why \"wait\" is in quotes. \nHow it works \nThe asynchronous_state_machine class template has none of the member functions th e \nstate_machine class template has. Moreover, asynchronous_state_machine<> subtype \nobjects cannot even be created or destroyed directl y. Instead, all these operations must be performed \nthrough the Scheduler object each asynchronous state machine is associat ed with. All these \nScheduler member functions only push an appropriate item int o the schedulers' queue and then \nreturn immediately. A dedicated thread will later p op the items out of the queue to have them \nprocessed. \nApplications will usually first create a fifo_scheduler<> object and then call \nfifo_scheduler<>::create_processor<>() and \nfifo_scheduler<>::initiate_processor() to schedule the creation and initiation of one \nor more asynchronous_state_machine<> subtype objects. Finally, \nfifo_scheduler<>::operator()() is either called directly to let the machine(s) ru n in the \ncurrent thread, or, a boost::function<> object referencing operator()() is passed to a new \nboost::thread . Alternatively, the latter could also be done righ t after constructing the \nfifo_scheduler<> object. In the following code, we are running one state machine in a new \nboost::thread and the other in the main thread (see the PingPong example for the full source \ncode): \nstruct Waiting; Page 30 of 32 The Boost Statechart Library - Tutorial \n2006/12/03struct Player : \n sc::asynchronous_state_machine< Player, Waiting > \n{ \n // ... \n}; \n \n// ... \n \nint main() \n{ \n // Create two schedulers that will wait for new e vents \n // when their event queue runs empty \n sc::fifo_scheduler<> scheduler1( true ); \n sc::fifo_scheduler<> scheduler2( true ); \n \n // Each player is serviced by its own scheduler \n sc::fifo_scheduler<>::processor_handle player1 = \n scheduler1.create_processor< Player >( /* ... * / ); \n scheduler1.initiate_processor( player1 ); \n sc::fifo_scheduler<>::processor_handle player2 = \n scheduler2.create_processor< Player >( /* ... * / ); \n scheduler2.initiate_processor( player2 ); \n \n // the initial event that will start the game \n boost::intrusive_ptr< BallReturned > pInitialBall = \n new BallReturned(); \n \n // ... \n \n scheduler2.queue_event( player2, pInitialBall ); \n \n // ... \n \n // Up until here no state machines exist yet. The y \n // will be created when operator()() is called \n \n // Run first scheduler in a new thread \n boost::thread otherThread( boost::bind( \n &sc::fifo_scheduler<>::operator(), &scheduler1, 0 ) ); \n scheduler2(); // Run second scheduler in this thr ead \n otherThread.join(); \n \n return 0; \n} \nWe could just as well use two boost::threads: \nint main() \n{ \n // ... 
\n \n boost::thread thread1( boost::bind( \n &sc::fifo_scheduler<>::operator(), &scheduler1, 0 ) ); \n boost::thread thread2( boost::bind( Page 31 of 32 The Boost Statechart Library - Tutorial \n2006/12/03 &sc::fifo_scheduler<>::operator(), &scheduler2, 0 ) ); \n \n // do something else ... \n \n thread1.join(); \n thread2.join(); \n \n return 0; \n} \nOr, run both machines in the same thread: \nint main() \n{ \n sc::fifo_scheduler<> scheduler1( true ); \n \n sc::fifo_scheduler<>::processor_handle player1 = \n scheduler1.create_processor< Player >( /* ... * / ); \n sc::fifo_scheduler<>::processor_handle player2 = \n scheduler1.create_processor< Player >( /* ... * / ); \n \n // ... \n \n scheduler1(); \n \n return 0; \n} \nIn all the examples above, fifo_scheduler<>::operator()() waits on an empty event queue \nand will only return after a call to fifo_scheduler<>::terminate() . The Player state \nmachine calls this function on its scheduler object right before terminating. \n \nRevised 03 December, 2006 \nCopyright © 2003-2006 Andreas Huber Dönni \nDistributed under the Boost Software License, Versi on 1.0. (See accompanying file LICENSE_1_0.txt \nor copy at http://www.boost.org/LICENSE_1_0.txt ) \nPage 32 of 32 The Boost Statechart Library - Tutorial \n2006/12/03" } ]
{ "category": "App Definition and Development", "file_name": "tutorial.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "0200,000400,000600,000800,000\nJan 01 Jan 15 Feb 01 Feb 15 Mar 01 Mar 15 Apr 01\ntimeevents / sEvents per second − hourly average" } ]
{ "category": "App Definition and Development", "file_name": "radstack-event-throughput.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "aggregation top−n\n0200400600\n02500500075001000012500count_star_interval\nsum_all\nsum_all_filter\nsum_all_year\nsum_price\ntop_100_commitdate\ntop_100_parts\ntop_100_parts_details\ntop_100_parts_filter\nQueryTime (seconds)engine\nDruid\nMySQLMedian Query Time (3+ runs) − 100GB data − single node" } ]
{ "category": "App Definition and Development", "file_name": "tpch_100gb.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "2009-05-08Lecture\nheld at the Boost Library Conference 2009Joachim Faulhaber\nSlide Design by Chih-Hao Tsaihttp://www.chtsai.orgCopyright © Joachim Faulhaber 2009Distributed under Boost Software Licence 1.0An Introduction to the \nInterval Template Library2\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Lecture Outline\nBackground and Motivation\nDesign\nExamples\nSemantics\nImplementation\nFuture Works\nAvailability3\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Background and Motivation\nInterval containers simplified the implementation of \ndate and time related tasks\nDecomposing “histories” of attributed events into \nsegments with constant attributes.\nWorking with time grids, e.g. a grid of months.\nAggregations of values associated to date or time \nintervals.\n… that occurred frequently in programs like\nBilling modules\nTherapy scheduling programs\nHospital and controlling statistics4\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nBackground is the date time problem domain ...\n… but the scope of the Itl as a generic library is more \ngeneral: \nan interval_set is a set\n that is implemented as a set of intervals \nan interval_map is a map\n that is implemented as a map of interval value pairs5\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Aspects\nThere are two aspects in the design of interval \ncontainers\nConceptual aspect\ninterval_set <int> mySet;\nmySet.insert(42);\nbool has_answer = mySet.contains(42);\nOn the conceptual aspect an interval_set can be used \njust as a set of elements\nexcept for . . .\n. . . iteration over elements\nconsider interval_set<double> or interval_set<string>\nIterative Aspect\nIteration is always done over intervals6\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nAddability and Subtractability\nAll of itl's (interval) containers are Addable and \nSubtractable \nThey implement operators += , +, -= and -\n+= -=\nsets set union set difference\nmaps ? ?\nA possible implementation for maps\nPropagate addition/subtraction to the associated values \n. . . or aggregate on overlap\n. . . 
or aggregate on collision7\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nAggregate on overlap\n→ a\n→ b\n+→ a\n→ (a + b)\n→ b\nDecompositional \neffect on Intervals\nAccumulative effect \non associated values\nI\nJJ-II-J\nI∩J\nI, J: intervals, a,b: associated values8\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nAggregate on overlap, a minimal example\ntypedef itl::set<string> guests;\ninterval_map <time, guests> party;\n \nparty += make_pair(\n interval< time>::rightopen (20:00, 22:00), guests( \"Mary\"));\nparty += make_pair(\n interval< time>::rightopen (21:00, 23:00), guests( \"Harry\")); \n// party now contains\n[20:00, 21:00)->{ \"Mary\"} \n[21:00, 22:00)->{ \"Harry\",\"Mary\"} //guest sets aggregated \n[22:00, 23:00)->{ \"Harry\"}9\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Granu\n-larityStyle Sets Maps\ninterval interval\njoining interval_set interval_map\nseparating separate_interval_set\nsplitting split_interval_set split_interval_map\nelement set mapDesign\nThe Itl's class templates10\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nInterval Combining Styles: Joining\nIntervals are joined on overlap or on touch\n. . . for maps , if associated values are equal\nKeeps interval_maps and sets in a minimal form\n interval_set\n \n {[1 3) }\n + [2 4)\n + [4 5)\n = {[1 4) }\n \n = {[1 5)} interval_map\n \n {[1 3) ->1 } \n + [2 4) ->1\n + [4 5) ->1\n ={[1 2)[2 3)[3 4) }\n ->1 ->2 ->1 \n ={[1 2)[2 3)[3 5) }\n ->1 ->2 ->1 11\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nInterval Combining Styles: Splitting\nIntervals are split on overlap and kept separate on touch\nAll interval borders are preserved (insertion memory)\n split_interval_set\n \n {[1 3) }\n + [2 4)\n + [4 5)\n = {[1 2)[2 3)[3 4) }\n \n = {[1 2)[2 3)[3 4)[4 5)} split_interval_map\n \n {[1 3) ->1 } \n + [2 4) ->1\n + [4 5) ->1\n ={[1 2)[2 3)[3 4) }\n ->1 ->2 ->1 \n ={[1 2)[2 3)[3 4)[4 5) }\n ->1 ->2 ->1 ->1 12\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nInterval Combining Styles: Separating\nIntervals are joined on overlap but kept separate on \ntouch\nPreserves borders that are never crossed (preserves a \nhidden grid).\n separate_interval_set\n \n {[1 3) }\n + [2 4)\n + [4 5)\n = {[1 4) }\n \n = {[1 4)[4 5)} 13\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nA few instances of intervals (interval.cpp)\ninterval< int> int_interval = interval< int>::closed(3,7);\ninterval< double> sqrt_interval\n = interval< double>::rightopen (1/sqrt(2.0), sqrt(2.0));\ninterval< std::string > city_interval\n = interval<std::string>:: leftopen(\"Barcelona\" , \"Boston\");\ninterval< boost::ptime> time_interval\n = interval< boost::ptime>::open(\n time_from_string( \"2008-05-20 19:30\" ),\n time_from_string( \"2008-05-20 23:00\" )\n );14\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nA way to iterate over months and weeks \n(month_and_week_grid.cpp )\n#include <boost/itl/gregorian.hpp> //boost::gregorian plus adapter code \n#include <boost/itl/split_interval_set.hpp>\n// A split_interval_set of gregorian dates as date_grid.\ntypedef split_interval_set<boost::gregorian::date> date_grid;\n// Compute a date_grid of months using 
boost::gregorian.\ndate_grid month_grid( const interval<date>& scope)\n{\n date_grid month_grid;\n // Compute a date_grid of months using boost::gregorian.\n . . .\n return month_grid;\n}\n// Compute a date_grid of weeks using boost::gregorian.\ndate_grid week_grid( const interval<date>& scope)\n{\n date_grid week_grid;\n // Compute a date_grid of weeks using boost::gregorian.\n . . .\n return week_grid;\n}15\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nA way to iterate over months and weeks\nvoid month_and_time_grid()\n{\n date someday = day_clock::local_day();\n date thenday = someday + months(2);\n interval<date> scope = interval<date>::rightopen(someday, thenday);\n // An intersection of the month and week grids ...\n date_grid month_and_week_grid \n = month_grid(scope) & week_grid(scope);\n // ... allows to iterate months and weeks. Whenever a month\n // or a week changes there is a new interval.\n for(date_grid::iterator it = month_and_week_grid.begin(); \n it != month_and_week_grid.end(); it++)\n { . . . }\n // We can also intersect the grid into an interval_map to make\n // shure that all intervals are within months and week bounds.\n interval_map< boost::gregorian::date, some_type> accrual;\n compute_some_result(accrual, scope);\n accrual &= month_and_week_grid;\n}16\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nAggregating with interval_maps\nComputing averages via implementing operator +=\n(partys_guest_average.cpp)\nclass counted_sum\n{\npublic:\ncounted_sum() :_sum(0),_count(0){}\ncounted_sum( int sum):_sum(sum),_count(1){}\nint sum()const {return _sum;}\nint count()const{return _count;}\ndouble average() const\n { return _count==0 ? 0.0 : _sum/ static_cast <double>(_count); }\ncounted_sum& operator += (const counted_sum& right)\n{ _sum += right.sum(); _count += right.count(); return *this; }\nprivate:\nint _sum;\nint _count;\n};\nbool operator == (const counted_sum& left, const counted_sum& right)\n{ return left.sum()==right.sum() && left.count()==right.count(); } 17\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nAggregating with interval_maps\nComputing averages via implementing operator +=\nvoid partys_height_average()\n{\n interval_map<ptime, counted_sum > height_sums;\n height_sums += (\n make_pair(\n interval<ptime>::rightopen(\n time_from_string( \"2008-05-20 19:30\" ), \n time_from_string( \"2008-05-20 23:00\" )), \n counted_sum(165) ) // Mary is 1,65 m tall.\n );\n // Add height of more pary guests . . . \n interval_map<ptime, counted_sum>::iterator height_sum_ =\n height_sums.begin();\n while(height_sum_ != height_sums.end())\n {\n interval<ptime> when = height_sum_->first;\n double height_average = (*height_sum_++).second. 
average();\n cout << \"[\" << when.first() << \" - \" << when.upper() << \")\"\n << \": \" << height_average << \" cm\" << endl;\n }\n}18\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nInterval containers allow to express a variety of date \nand time operations in an easy way.\nExample man_power.cpp ...\nSubtract weekends and holidays from an interval_set\nworktime -= weekends(scope)\nworktime -= german_reunification_day\nIntersect an interval_map with an interval_set\nclaudias_working_hours &= worktime\nSubtract and interval_set from an interval map\nclaudias_working_hours -= claudias_absense_times\nAdding interval_maps\ninterval_map<date, int> manpower;\nmanpower += claudias_working_hours;\nmanpower += bodos_working_hours;19\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Examples\nInterval_maps can also be intersected\nExample user_groups.cpp\ntypedef boost::itl::set<string> MemberSetT;\ntypedef interval_map<date, MemberSetT> MembershipT;\nvoid user_groups()\n{\n . . .\n MembershipT med_users;\n // Compute membership of medical staff\n med_users += make_pair( member_interval_1, MemberSetT( \"Dr.Jekyll\" ));\n med_users += . . . \n MembershipT admin_users;\n // Compute membership of administation staff\n med_users += make_pair( member_interval_2, MemberSetT( \"Mr.Hyde\"));\n . . .\n MembershipT all_users = med_users + admin_users;\n MembershipT super_users = med_users & admin_users;\n . . .\n}20\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nThe semantics of itl sets is based on a concept itl::Set\nitl::set , interval_set , split_interval_set \nand separate_interval_set are models of concept \nitl::Set\n// Abstract part\nempty set: Set::Set()\nsubset relation: bool Set::contained_in (const Set& s2)const\nequality: bool is_element_equal (const Set& s1, const Set& s2)\nset union: Set& operator += (Set& s1, const Set& s2)\n Set operator + (const Set& s1, const Set& s2)\nset difference: Set& operator -= (Set& s1, const Set& s2)\n Set operator - (const Set& s1, const Set& s2)\nset intersection: Set& operator &= (Set& s1, const Set& s2)\n Set operator & (const Set& s1, const Set& s2) \n// Part related to sequential ordering\nsorting order: bool operator < (const Set& s1, const Set& s2)\nlexicographical equality:\n bool operator == (const Set& s1, const Set& s2)\n 21\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nThe semantics of itl maps is based on a concept itl::Map\nitl::map , interval_map and split_interval_map \nare models of concept \nitl::Map\n// Abstract part\nempty map: Map::Map()\nsubmap relation: bool Map::contained_in (const Map& m2)const\nequality: bool is_element_equal (const Map& m1, const Map& m2)\nmap union: Map& operator += (Map& m1, const Map& m2)\n Map operator + (const Map& m1, const Map& m2)\nmap difference: Map& operator -= (Map& m1, const Map& m2)\n Map operator - (const Map& m1, const Map& m2)\nmap intersection: Map& operator &= (Map& m1, const Map& m2)\n Map operator & (const Map& m1, const Map& m2) \n// Part related to sequential ordering\nsorting order: bool operator < (const Map& m1, const Map& m2)\nlexicographical equality:\n bool operator == (const Map& m1, const Map& m2)\n 22\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nDefining semantics of itl concepts via sets of laws\naka c++0x axioms\nChecking law sets via 
automatic testing:\nA Law Based Test Automaton LaBatea\nGenerate\nlaw instance\napply law to instance\ncollect violations\nCommutativity<T a, U b, +>:\n a + b = b + a;23\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nLexicographical Ordering and Equality\nFor all itl containers operator < implements a strict \nweak ordering . \nThe induced equivalence of this ordering is \nlexicographical equality which is implemented as \noperator ==\nThis is in line with the semantics of \nSortedAssociativeContainers24\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nSubset Ordering and Element Equality\nFor all itl containers function contained_in \nimplements a partial ordering .\nThe induced equivalence of this ordering is \nequality of elements which is implemented as \nfunction is_element_equal .25\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nitl::Sets\nAll itl sets implement a Set Algebra , which is to say \nsatisfy a “ classical” set of laws . . .\n. . . using is_element_equal as equality\nAssociativity, Neutrality, Commutativity (for + and &)\nDistributivity, DeMorgan, Symmetric Difference\nMost of the itl sets satisfy the classical set of laws \neven if . . .\n. . . lexicographical equality: operator == is used\nThe differences reflect proper inequalities in sequence \nthat occur for separate_interval_set and \nsplit_interval_set . 26\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Semantics\nConcepts induction / concept transition\nThe semantics of itl Maps appears to be determined by \nthe codomain type of the map\nItl Maps are mapping the semantics of the codomain \ntype on themselves .\n is model of example\nMap<D,Monoid> Monoid interval_map<int, string>\nMap<D,CommutMonoid > CommutMonoid interval_map<int, unsigned >\nMap<D,AbelianGroup> AbelianGroup interval_map<int, int>\n \nMap<D,Set> Set interval_map<int, set<int> >\n 27\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Implementation\nItl containers are implemented simply based on\nstd::set and std::map\nBasic operations like adding and subtracting intervals \nhave a best case complexity of O(lg n) , if the added or \nsubtracted intervals are relatively small .\nWorst case complexity of adding or subtracting \nintervals for interval_set is O(n).\nFor all other interval containers adding or subtracting \nintervals has a worst case performance of O(n lg(n)) .\nThere is a potential for optimization . . .28\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Implementation\nA segment_tree implementaion: A balanced tree,\nwhere . . 
.\nan interval represents a perfectly balanced subtree\nlarge intervals are rotated towards the root\nFirst results\nmuch better worst case performance: O(n) instead of O(n lg(n))\nbut slower for best case due to heavier bookkeeping and recursive algorithms.\nFuture Works\nCompleting and optimizing the segment_tree implementation of interval containers\nImplementing interval_maps of sets more efficiently\nRevision of features of the extended itl (itl_plus.zip)\nDecomposition of histories: k histories hk with attribute types A1, ..., Ak are “decomposed” to a product history of tuples of attribute sets:\n(h1<T,A1>, ..., hk<T,Ak>) → h<T, (set<A1>, …, set<Ak>)>\nCubes (generalized crosstables): Applying aggregate on collision to maps of tuple value pairs in order to organize hierarchical data and their aggregates.\nAvailability\nItl project on sourceforge (version 2.0.1)\nhttp://sourceforge.net/projects/itl\nLatest version on boost vault/Containers (3.0.0)\nhttp://www.boostpro.com/vault/ → containers\nitl.zip: Core itl in preparation for boost\nitl_plus.zip: Extended itl including product histories, cubes and automatic validation (LaBatea).\nOnline documentation at\nhttp://www.herold-faulhaber.de/\nDoxygen generated docs (version 2.0.1)\nhttp://www.herold-faulhaber.de/itl/\nLatest boost style documentation (version 3.0.0)\nhttp://www.herold-faulhaber.de/boost_itl/doc/libs/itl/doc/html/\nAvailability\nBoost sandbox\nhttps://svn.boost.org/svn/boost/sandbox/itl/\nCore itl: Interval containers preparing for boost\nhttps://svn.boost.org/svn/boost/sandbox/itl/boost/itl/\nhttps://svn.boost.org/svn/boost/sandbox/itl/libs/itl/\nExtended itl_xt: “histories” and cubes\nhttps://svn.boost.org/svn/boost/sandbox/itl/boost/itl_xt/\nhttps://svn.boost.org/svn/boost/sandbox/itl/libs/itl_xt/\nValidater LaBatea: Currently only vc8 or newer\nhttps://svn.boost.org/svn/boost/sandbox/itl/boost/validate/\nhttps://svn.boost.org/svn/boost/sandbox/itl/libs/validate/
" } ]
{ "category": "App Definition and Development", "file_name": "intro_to_itl_3_0_0_bc09.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "Dgraph: Synchronously Replicated, Transactional and Distributed Graph\nDatabase\nManish Jain\nmanish@dgraph.io\nDgraph Labs, Inc.\nVersion: 0.8 Last Updated: March 1, 2021\nAbstract\nDgraph is a distributed graph database which provides hori-\nzontal scalability, distributed cluster-wide ACID transactions,\nlow-latency arbitrary-depth joins, synchronous replication,\nhigh availability and crash resilience. Aimed at real-time trans-\nactional workloads, Dgraph shards and stores data in a way\nto optimize joins and traversals, while still providing data\nretrieval and aggregation. Dgraph’s unique take is to provide\nlow-latency arbitrary-depth joins in a constant number of net-\nwork calls (typically, just one network call) that would be\nrequired to execute a single join, irrespective of the size of\nthe cluster or the size of the result set.\n1 Introduction\nDistributed systems or databases tend to suffer from join depth\nproblem. That is, as the number of traversals of relationships\nincrease within a query, the number of network calls required\n(in a sufficiently sharded dataset) increase. This is typically\ndue to entity-based data sharding, where entities are randomly\n(sometimes with a heuristic) distributed across servers con-\ntaining all the relationships and attributes along with them.\nThis approach suffers from high-fanout result set in interme-\ndiate steps of a graph query causing them to do a broadcast\nacross the cluster to perform joins on the entities. Thus, a sin-\ngle graph query results in network broadcasts, hence causing\na jump in the query latency as the cluster grows.\nDgraph is a distributed database with a native graph back-\nend. It is the only native graph database to be horizontally\nscalable and support full ACID-compliant cluster-wide dis-\ntributed transactions. In fact, Dgraph is the first graph database\nto have been Jepsen [ ?] tested for transactional consistency.\nDgraph automatically shards data into machines, as the\namount of data or the number of servers change, and auto-\nmatically reshards data to move it across servers to balance\nthe load. It also supports synchronous replication backed\nby Raft [ ?] protocol, which allows the queries to seamlessly\nfailover to provide high availability.Dgraph solves the join depth problem with a unique shard-\ning mechanism. Instead of sharding by entities, as most sys-\ntems do, Dgraph shards by relationships. Dgraph’s unique\nway of sharding data is inspired by research at Google [ ?],\nwhich shows that the overall latency of a query is greater than\nthe latency of the slowest component. The more servers a\nquery touches to execute, the slower the query latency would\nbe. By doing relationship based sharding, Dgraph can execute\na join or traversal in a single network call (with a backup\nnetwork call to replica if the first is slow), irrespective of the\nsize of the cluster or the input set of entities. Dgraph executes\narbitrary-depth joins without network broadcasts or collecting\ndata in a central place. This allows the queries to be fast and\nlatencies to be low and predictable.\n2 Dgraph Architecture\nDgraph consists of Zeros and Alphas, each representing a\ngroup that they are serving. Zeros serve group zero and Alphas\nserve group one, group two and onwards. Each group forms\na Raft cluster of 1, 3 or 5 members configurable by a human\noperator (henceforth, referred to as the operator). 
All updates made to the group are serialized via the Raft consensus algorithm and applied in that order to the leader and followers.

Zeros store and propagate metadata about the cluster while Alphas store user data. In particular, Zeros are responsible for membership information, which keeps track of the group each Alpha server is serving, its internal IP address for communication within the cluster, the shards it is serving, etc. Zeros do not keep track of the health of the Alphas and take actions on them – that is considered the job of the operator. Using this information, Zero can tell the new Alpha to either join and serve an existing group, or form a new group.

The membership information is streamed out from Zero to all the Alphas. Alphas can use this membership information to route queries (or mutations) which hit the cluster. Every instance in the cluster forms a connection with every other instance (thus forming 2 × (N choose 2) open connections, where N = number of Dgraph instances in the cluster); however, the usage of this connection depends on their relationship. For example, a Raft leader-follower relationship would have heartbeats (every 100 ms) and data flowing, while an Alpha would only talk to an Alpha in another group when it needs to do so for processing queries or mutations. Every open connection does have light-weight health checks to avoid stalling on a target server which has become unresponsive (died, partitioned, etc.). Both Alphas and Zeros expose one port for intra-cluster communication over Grpc [?] and one for external communication with clients over HTTP. Alphas additionally expose an external Grpc port for communication with Grpc based clients – all official clients run over Grpc.

Figure 1: Dgraph Architecture: There is one Zero group and multiple Alpha groups. Each group is a Raft group consisting of one or more members.

Zero also runs an oracle which hands out monotonically-increasing logical timestamps for transactions in the cluster (no relation to system time). A Zero leader would typically lease out a bandwidth of timestamps upfront via Raft proposal and then service timestamp requests strictly from memory without any further coordination. The Zero oracle tracks additional things for aiding with transaction commits, which would be elaborated in section ??.

Zero gets information about the size of data in each group from the Alpha leaders, which it uses to make decisions about shard movement, which would be elaborated in section ??.

2.1 Data Format

Dgraph can input data in a JSON format or (slightly modified) RDF NQuad format. Dgraph would break down a JSON map into smaller chunks, with each JSON key-value forming one record equivalent of a single RDF triple record. When parsing RDF Triple or JSON, data is directly converted into an internal protocol buffer [?] data format and not interchanged among the two.

{
"uid" : "0xab",
"type" : "Astronaut",
"name" : "Mark Watney",
"birth" : "2005/01/02",
"follower": { "uid": "0xbc", ... },
}

<0xab> <type> "Astronaut" .
<0xab> <name> "Mark Watney" .
<0xab> <birth> "2005/01/02" .
<0xab> <follower> <0xbc> .

A triple is typically expressed as a subject-predicate-object or a subject-predicate-value. Subject is a node, predicate is a relationship, and object can be another node or a primitive data type. One points from a node to another node, the other points from a node to a value.
In the above example, the triple with name is a type of subject-predicate-value (typically referred to as an attribute), while the triple with follower is a type of subject-predicate-object. Dgraph makes no difference in how it handles these two types of records (to avoid confusion over these two types, we'll refer to them as object-values). Dgraph considers this as the unit of record, and a typical JSON map would be broken into multiple such records.

Data can be retrieved from Dgraph using GraphQL [?] and a modified version of GraphQL, called GraphQL+- [?]. GraphQL+- has most of the same properties as GraphQL, but adds various properties which are important for a database, like query variables, functions and blocks. More information about how the query language came to be and the differences between GraphQL and GraphQL+- can be found in this blog post [?].

As mentioned in section ??, all internal and external communication in Dgraph runs via Grpc and Protocol Buffers. Dgraph also exposes HTTP endpoints to allow building client libraries in languages which are not supported by these two. There is functionality parity between the HTTP endpoints and the APIs exposed via Grpc.

In accordance with the GraphQL spec, query responses from Dgraph are in JSON format, both over HTTP and Grpc.

2.2 Data Storage

Dgraph data is stored in an embeddable key-value database called Badger [?] for data input-output on disk. Badger is an LSM-tree based design, but differs from others in how it can optionally store values separately from keys to generate a much smaller LSM tree, which results in both lower write and read amplification. Various benchmarks run by the team show Badger to provide equivalent or faster writes than other LSM based DBs, while providing equivalent read latencies compared to B+-tree based DBs (which tend to provide much faster reads than LSM trees).

As mentioned above, all records with the same predicate form one shard. Within a shard, records sharing the same subject-predicate are grouped and condensed into one single key-value pair in Badger. This value is referred to as a posting list, a terminology commonly used in search engines to refer to a sorted list of doc ids containing a search term. A posting list is stored as a value in Badger, with the key being derived from subject and predicate.

<0x01> <follower> <0xab> .
<0x01> <follower> <0xbc> .
<0x01> <follower> <0xcd> .
...

key = <follower, 0x01>
value = <0xab, 0xbc, 0xcd, ...>

All subjects in Dgraph are assigned a globally unique id, called a uid. A uid is stored as a 64-bit unsigned integer (uint64) to allow efficient, native treatment by the Go language in the code base. Zero is responsible for handing out uids as needed by the Alphas and does it in the same monotonically increasing fashion as timestamps (section ??). A uid once allocated is never reallocated or reassigned. Thus, every node in the graph can be referenced by a unique integer.

Object-values are stored in postings. Each posting has an integer id. When the posting holds an object, the id is the uid assigned to that object. When a posting holds a value, the integer id for the value is determined based upon the schema of the predicate. If the predicate allows multiple values, the integer id for the value would be a fingerprint of the value. If the predicate stores values with language, the integer id would be a fingerprint of the language tag. Otherwise, the integer id would be set to the maximum possible uint64 (2^64 - 1).
Neither the uid nor the integer id is ever set to zero.

The value could be one of the many supported data types: int, float, string, datetime, geo, etc. The data is converted into binary format and stored in a posting along with the information about the original type. A posting can also hold facets. Facets are key-value labels on an edge, treated like attachments.

In a common case where the predicate only has objects (and no values, like the follower edge), a posting list would consist largely of sorted uids. These are optimized by doing integer compression. The uids are grouped in blocks of 256 integers (configurable), where each block has a base uid and a binary blob. The blob is generated by taking a difference of the current uid with the last and storing the difference in bytes encoded using group varint. This generates a data compression ratio of 10. When doing intersections, we can use these blocks to do binary searches or block jumps to avoid decoding all the blocks. Sorted integer encoding is a hotly researched topic and there is a lot of room for optimization here in terms of performance. Work is going on currently to use Roaring Bitmaps [?] instead to represent this data.

Figure 2: Posting list structure stored in group varint-encoded blocks.

Thanks to these techniques, a single edge traversal corresponds to only a single Badger lookup. For example, finding a list of all of X's followers would involve doing a lookup on the <follower, X> key, which would give a posting list containing all of their followers' uids. Further lookups can be made to get a list of posts made by followers. Common followers between X and Y can be found by doing two lookups followed by intersecting the sorted int lists of <follower, X> and <follower, Y>. Note that distributed joins and (object based) traversals only require uids to be transmitted over the network, which is also very efficient. All this allows Dgraph to be very efficient on these operations, without compromising on the typical select * from table where X=Y style record lookups.

This type of data storage has benefits in joins and traversals, but comes with an additional problem of high fan-out. If there are too many records with the same <subject, predicate>, the overall posting list could grow to an untenable size. This is typically only a problem for objects (not so much for values). We solve this by binary splitting a posting list as soon as its on-disk size hits a certain threshold. A split posting list would be stored as multiple keys in Badger, with optimizations made to avoid retrieving the splits until the operation needs them. Despite storage differences, the posting list continues to provide the same sorted iteration via APIs as an unsplit list.

2.3 Data Sharding

While Dgraph shares a lot of features of NoSQL and distributed SQL databases, it is quite different in how it handles its records. In other databases, a row or document would be the smallest unit of storage (guaranteed to be located together), while sharding could be as simple as generating equal sized chunks consisting of many of these records.

Dgraph's smallest unit of record is a triple (subject-predicate-object, described above), with each predicate in its entirety forming a shard. In other words, Dgraph logically groups all the triples with the same predicate and considers them one shard.
Each shard is then assigned a group (1..N)\nwhich can then be served by all the Alphas serving that group,\nas explained in section ??.\nThis data sharding model allows Dgraph to execute a com-\nplete join in a single network call and without any data fetch-\ning across servers by the caller. This combined with grouping\nof records in a unique way on disk to convert operations which\nwould typically be executed by expensive disk iterations, into\nfewer, cheaper disk seeks makes Dgraph internal working\nquite efficient.\nTo elaborate this further, consider a dataset which contains\ninformation about where people live (predicate: \"lives-in\")\nand what they eat (predicate: \"eats\"). Data might look some-\nthing like this:\n<person-a> <lives-in> <sf> .\n<person-a> <eats> <sushi> .\n<person-a> <eats> <indian> .\n...\n<person-b> <lives-in> <nyc> .\n<person-b> <eats> <thai> .\nIn this case, we’ll have two shards: lives-in andeats. As-\nsuming the worst case scenario where the cluster is so big that\neach shard lives on a separate server. For a query which asks\nfor[people who live in SF and eat Sushi] , Dgraph\nwould execute one network call to server containing lives-\ninand do a single lookup for all the people who live in\nSF (* <lives-in> <sf> ). In the second step, it would take\nthose results and send them over to server containing eats,\ndo a single lookup to get all the people who eat Sushi ( *\n<eats> <sushi> ), and intersect with the previous step’s re-\nsultset to generate the final list of people from SF who eat\nSushi. In a similar fashion, this result set can then be further\nfiltered/joined, each join executing in one network call.\nAs we learnt in section ??, the result set is a list of sorted\n64-bit unsigned integers, which make the retrieval and inter-\nsection operations very efficient.\nFigure 3: Data sharding\n2.4 Data Rebalancing\nAs explained above, each shard contains a whole predicate\nin its entirety which means Dgraph shards can be of uneven\nsize. The shards not only contain the original data, but also\nall of their indices. Dgraph groups contain many shards, so\nthe groups can also be of uneven size. The group and shard\nsizes are periodically communicated to Zero. Zero uses this\ninformation to try to achieve a balance among groups, using\nheuristics. Current one being used is just data size, with the\nidea that equal sized groups would allow similar resource\nusage across servers serving those groups. Other heuristics,\nparticularly around query traffic, could be added later.\nTo achieve balance, Zero would move shards from one\ngroup to another. It does so by marking the shard read-only,\nthen asking the source group to iterate over the underlying key-\nvalues concurrently and streaming them over to the leader of\nthe destination group. The destination group leader proposes\nthese key-values via Raft, gaining all the correctness that\ncomes with it. Once all the proposals have been successfully\napplied by the destination group, Zero would mark the shard\nas being served by the destination group. Zero would then\ntell source group to delete the shard from its storage, thus\nfinalizing the process.\nWhile this process sounds pretty straighforward, there are\nmany race and edge conditions here which can cause transac-\ntional correctness to be violated as shown by Jepsen tests [ ?].\nWe’ll showcase some of these violations here:\n1. 
A violation can occur when a slightly behind Alpha\n4server would think that it is still serving the shard (despite the\nshard having moved to another group) and allow mutations\nto be run on itself. To avoid this, all transactions states keep\nthe shard and the group info for the writes (along with their\nconflict keys as we’ll see in section ??). The shard-group\ninformation is then checked by Zero to ensure that what the\ntransaction observes (via Alpha it talked to) and what Zero\nhas is the same – a mismatch would cause a transaction abort.\n2. Another violation happens when a transaction commits\nafter the shard was put into read-only mode – this would cause\nthat commit to be ignored during the shard transfer. Zero\ncatches this by assigning a timestamp to the move operation.\nAny commits (on this shard) at a higher timestamp would be\naborted, until the shard move has completed and the shard is\nbrought back to the read-write mode.\n3. Yet another violation can occur when the destination\ngroup receives a read below the move timestamp, or a source\ngroup receives a read after it has deleted the shard. In both\ncases, no data exists which can cause the reads to incorrectly\nreturn back nil values. Dgraph avoids this by informing the\ndestination group of the move timestamp, which it can use\nto reject any reads for that shard below it. Similarly, Zero\nincludes a membership mark at which the source Alpha must\nreach before the group can delete the shard, thus, every Alpha\nmember of the group would know that it is no longer servig\nthe data before deleting it.\nOverall, the mechanism of membership information syn-\nchronization during a shard move proved the hardest to get\nright with respect to transactional correctness.\n3 Indexing\nDgraph is designed to be a primary database for applications.\nAs such, it supports most of the commonly needed indices. In\nparticular, for strings, it supports regular expressions, full-text\nsearch, term matching, exact and hash matching index. For\ndatetime, it supports year, month, day and hour level indices.\nFor geo, it supports nearby, within, etc. operations, and so\non...\nAll these indices are stored by Dgraph using the same post-\ning list format described above. The difference between an\nindex and data is the key. A data key is typically <predicate,\nuid> , while an index key is <predicate, token> . A token\nis derived from the value of the data, using an index tokenizer.\nEach index tokenizer supports this interface:\ntype Tokenizer interface {\nName() string\n// Type returns the string representation of\n// the typeID that we care about.\nType() string\n// Tokens return tokens for a given value. The// tokens shouldn’t be encoded with the byte\n// identifier.\nTokens(interface{}) ([]string, error)\n// Identifier returns the prefix byte for this\n// token type. This should be unique. The range\n// 0x80 to 0xff (inclusive) is reserved for\n// user-provided custom tokenizers.\nIdentifier() byte\n// IsSortable returns true if the tokenizer can\n// be used for sorting/ordering.\nIsSortable() bool\n// IsLossy() returns true if we don’t store the\n// values directly as index keys during\n// tokenization. If a predicate is tokenized\n// using a lossy tokenizer, we need to fetch\n// the actual value and compare.\nIsLossy() bool\n}\nEvery tokenizer has a globally unique identifier\n(Identifier() byte ), including custom tokenizers pro-\nvided by operators. 
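To make the interface above concrete, here is a minimal sketch of what a user-provided custom tokenizer could look like: a hypothetical tokenizer that indexes datetime values by year. Only the Tokenizer interface itself comes from the text above; the YearTokenizer name, the chosen identifier byte and all other details are illustrative assumptions, not part of Dgraph's actual code.

package main

import (
	"fmt"
	"time"
)

// YearTokenizer is a hypothetical custom tokenizer that indexes
// datetime values by their year, so that a query can cheaply find
// all subjects whose value falls in a given year.
type YearTokenizer struct{}

func (YearTokenizer) Name() string { return "year" }

// Type names the value type this tokenizer accepts.
func (YearTokenizer) Type() string { return "datetime" }

// Tokens maps a value to the index tokens derived from it. The tokens
// are not prefixed with the identifier byte; that is done by the caller.
func (YearTokenizer) Tokens(v interface{}) ([]string, error) {
	t, ok := v.(time.Time)
	if !ok {
		return nil, fmt.Errorf("year tokenizer expects a time.Time, got %T", v)
	}
	return []string{fmt.Sprintf("%04d", t.Year())}, nil
}

// Identifier picks a prefix byte from the 0x80-0xff range reserved
// for user-provided custom tokenizers.
func (YearTokenizer) Identifier() byte { return 0x80 }

// Year tokens sort correctly as strings, so inequality queries work.
func (YearTokenizer) IsSortable() bool { return true }

// The token is not the full value, so exact matches must fetch and
// compare the actual value.
func (YearTokenizer) IsLossy() bool { return true }

func main() {
	toks, _ := YearTokenizer{}.Tokens(time.Date(2005, 1, 2, 0, 0, 0, 0, time.UTC))
	fmt.Println(toks) // [2005]
}

A lossy but sortable tokenizer like this one supports range queries on the year while deferring exact-value comparison to the data itself.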
The tokens generated are prefixed with a\ntokenizer identifier to be able to traverse through all tokens\nbelonging to only that tokenizer. This is useful when doing\niteration for inequality queries (greater than, less than, etc.).\nNote that inequality queries can only be done if a tokenizer is\nsortable ( IsSortable() bool ). For example, in strings, an\nexact index is sortable, but a hash index is not.\nDepending upon which index a predicate has set in the\nschema, every mutation in that predicate would invoke one\nor more of these tokenizers to generate the tokens. Note that\nindices only operate on values, not objects. A set of tokens\nwould be generated with the before mutation value and an-\nother set with the after mutation value. Mutations would be\nadded to delete the subject uid from the posting lists of before\ntokens and to add the subject uid to the after tokens.\nNote that all indices have object values, so they largely deal\nonly in uids. Indices in particular can suffer from high fan-out\nproblem and are solved using posting list splits described in\nthe section ??.\n4 Multiple Version Concurrency Control\nAs described in section ??, data is stored in posting list format,\nwhich consists of postings sorted by integer ids. All posting\nlist writes are stored as deltas to Badger on commit, using the\ncommit timestamp. Note that timestamps are monotonically\nincreasing globally across the DB, so any future commits are\nguaranteed to have a higher timestamp.\nIt is not possible to update this list in-place, for multiple\nreasons. One is that Badger (and most LSM trees) writes are\n5immutable, which plays very well with filesystems and rsync.\nSecond is that adding an entry within a sorted list requires\nmoving following entries, which depending upon the position\nof the entry can be expensive. Third, as the posting list grows,\nwe want to avoid rewriting a large value every time a mutation\nhappens (for indices, it can happen quite frequently).\nDgraph considers a posting list as a state. Every future\nwrite is then stored as a delta with a higher timestamp. A delta\nwould typically consist of postings with an operation (set or\ndelete). To generate a posting list, Badger would iterate the\nversions in descending order, starting from the read timestamp,\npicking all deltas until it finds the latest state. To run a posting\nlist iteration, the right postings for a transaction would be\npicked, sorted by integer ids, and then merge-sort operation is\nrun between these delta postings and the underlying posting\nlist state.\nEarlier iterations of this mechanism were aimed at keep-\ning the delta layer sorted by integer ids as well, overlaying it\non top of the state to avoid doing sorting during the reads —\nany addition or deletion made would be consolidated based\non what was already in the delta layer and the state. These\niterations proved too complex to maintain for the team and\nsuffered from hard to find bugs. Ultimately, that concept was\ndropped in favor of a simple understandable solution of pick-\ning the right postings for a read and sorting them before itera-\ntion. Additionally, earlier APIs implemented both forward and\nbackward iteration adding complexity. Over time, it became\nclear that only forward iteration was required, simplifying the\ndesign.\nThere are many benefits in avoiding having to regenerate\nthe posting list state on every write. 
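To illustrate the read path just described, here is a simplified, hypothetical sketch of rebuilding a posting list at a read timestamp from the latest state plus the deltas committed above it. The types and merge rules are deliberately reduced (a real posting also carries values, facets and split handling), and none of the names below are Dgraph's actual ones.

package main

import (
	"fmt"
	"sort"
)

type Op int

const (
	OpSet Op = iota
	OpDelete
)

// Posting is reduced to just a uid and an operation for this sketch.
type Posting struct {
	Uid uint64
	Op  Op
}

// Version is either a full state (the complete posting list) or a
// delta (the postings written by one commit), stored at a commit ts.
type Version struct {
	CommitTs uint64
	IsState  bool
	Postings []Posting
}

// ReadAt rebuilds the posting list visible at readTs. versions must be
// sorted by CommitTs in descending order, mirroring how an MVCC store
// iterates versions from newest to oldest.
func ReadAt(versions []Version, readTs uint64) []uint64 {
	effect := map[uint64]Op{} // the newest visible delta wins per uid
	var state []Posting

	for _, v := range versions {
		if v.CommitTs > readTs {
			continue // not visible to this read
		}
		if v.IsState {
			state = v.Postings
			break // older versions are already folded into the state
		}
		for _, p := range v.Postings {
			if _, seen := effect[p.Uid]; !seen {
				effect[p.Uid] = p.Op
			}
		}
	}

	out := map[uint64]bool{}
	for _, p := range state {
		out[p.Uid] = true
	}
	for uid, op := range effect {
		if op == OpSet {
			out[uid] = true
		} else {
			delete(out, uid)
		}
	}

	uids := make([]uint64, 0, len(out))
	for uid := range out {
		uids = append(uids, uid)
	}
	sort.Slice(uids, func(i, j int) bool { return uids[i] < uids[j] })
	return uids
}

func main() {
	versions := []Version{
		{CommitTs: 9, Postings: []Posting{{Uid: 7, Op: OpDelete}}},
		{CommitTs: 7, Postings: []Posting{{Uid: 9, Op: OpSet}}},
		{CommitTs: 5, IsState: true, Postings: []Posting{{Uid: 3, Op: OpSet}, {Uid: 7, Op: OpSet}}},
	}
	fmt.Println(ReadAt(versions, 8)) // [3 7 9]: the delete at ts 9 is not yet visible
}

Because committed writes always land at strictly higher timestamps, a reader can stop at the first state it meets while walking versions from newest to oldest.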
At the same time, as\ndeltas accumulate, the work of list regeneration gets delegated\nto the readers, which can slow down the reads. To find a\nbalance and avoid gaining deltas indefinitely, we added a\nrollup mechanism.\nRollups: As keys get read, Dgraph would selectively re-\ngenerate the posting lists which have a minimum number of\ndeltas, or haven’t been regenerated for a while. The regener-\nation is done by starting from the latest state, then iterating\nover the deltas in order and merging them with the state. The\nfinal state is then written back at the latest delta timestamp, re-\nplacing the delta and forming a new state. All previous deltas\nand states for that key can then be discarded to reclaim space.\nThis system allows Dgraph to provide MVCC. Each read\nis operating upon an immutable version of the DB. Newer\ndeltas are being generated at higher timestamps and would be\nskipped during a read at a lower timestamp.\n5 Transactions\nDgraph has a design goal of being simple to operate. As\nsuch, one of the goals is to not depend upon any third party\nsystem. This proved quite hard to achieve while providing\nhigh availability for not only data but also transactions.\nFigure 4: MVCC\nWhile designing transactions in Dgraph, we looked at pa-\npers from Spanner [ ?], HBase [ ?], Percolator [ ?] and others.\nSpanner most famously uses atomic clocks to assign times-\ntamps to transactions. This comes at the cost of lower write\nthroughput on commodity servers which don’t have GPS\nbased clock sync mechanism. So, we rejected that idea in fa-\nvor of having a single Zero server, which can hand out logical\ntimestamps at a much faster pace.\nTo avoid Zero becoming a single point of failure, we run\nmultiple Zero instances forming a Raft group. But, this comes\nwith a unique challenge of how to do handover in case of\nleader relection. Omid, Reloaded [ ?] (referenced as Omid2)\npaper handles this problem by utilizing external system. In\nOmid2, they run a standby timestamp server to take over in\ncase the leader fails. This standby server doesn’t need to get\nthe latest transaction state information, because Omid2 uses\nZookeeper [ ?], a centralized service for maintaining transac-\ntion logs. Similarly, TiDB built TiKV , which uses a Raft-based\nreplication model for the key-values. This allows every write\nby TiDB to automatically be considered highly-available. Sim-\nilarly, Bigtable [ ?], uses Google Filesystem [ ?] for distributed\nstorage. Thus, no direct information transfer needs to happen\namong the multiple servers forming the quorum.\nWhile this concept achieves simplicity in the database, we\nwere not entirely thrilled with this idea due to two reasons.\nOne, we had an explicit goal of non-reliance on any third-\nparty system to make running Dgraph operationally easier,\nand felt that a solution should be possible without pushing\n6synchronous replication within Badger (storage). Second, we\nwanted to avoid touching disk unless necessary. By having\nRaft be part of the Dgraph process, we can find-tune when\nthings get written to state to achieve better efficiency. In fact,\nour implementation of transactions don’t write to DB state on\ndisk until they are committed (still written to Raft WAL).\nWe closely looked at HBase papers ( [ ?], [?]) for other\nideas, but they didn’t directly fit our needs. 
For example, HBase pushed a lot of transaction information back to the client, giving them critical information about what they should or should not read to maintain the transactional guarantees. This, however, makes the client libraries harder to build and maintain, something we did not like. On top of that, a graph query can touch millions of keys in the intermediate steps; it's expensive to keep track of all that information and propagate it to the client.

The aim for Dgraph client libraries was to keep as minimal state as possible, to allow open-source users unfamiliar with the internals of Dgraph to build and maintain libraries in languages unfamiliar to us (for example, Elixir).

// TODO: Do I describe the first iteration?

We simply could not find a paper at the time which described how to build a simple-to-understand, highly-available transactional system which could be run without assuming that the storage layer is highly available. So, we had to come up with a new solution. Our second iteration still faced many issues as proven by Jepsen tests. So, we simplified our second iteration to a third one, which is as follows.

5.1 Lock-Free High Availability Transaction Processing

Dgraph follows a lock-free transaction model. Each transaction pursues its course concurrently, never blocking on other transactions, while reading the committed data at or below its start timestamp. As mentioned before, the Zero leader maintains an Oracle which hands out logical transaction timestamps to Alphas. The Oracle also keeps track of a commit map, storing a conflict key → latest commit timestamp. As shown in algorithm ??, every transaction provides the Oracle the list of conflict keys, along with the start timestamp of the transaction. Conflict keys are derived from the modified keys, but are not the same. For each write, a conflict key is calculated depending upon the schema. When a transaction requests a commit, Zero would check if any of those keys has a commit timestamp higher than the start timestamp of the transaction. If the condition is met, the transaction is aborted. Otherwise, a new timestamp is leased by the Oracle, set as the commit timestamp, and the conflict keys in the map are updated.

The Zero leader then proposes this status update (commit or abort) in the form of a start → commit ts (where commit ts = 0 for abort) to the followers and achieves quorum. Once quorum is achieved, the Zero leader streams out this update to the subscribers, which are Alpha leaders. To keep the design simple, Zero does not push to any Alpha leader. It is the job of (whoever is) the latest Alpha leader to establish an open stream from Zero to receive transaction status updates.

Algorithm 1 Commit(Ts, Keys)
1:  for each key k ∈ Keys do
2:    if lastCommit(k) > Ts then
3:      Propose(Ts → abort)
4:      return
5:    end if
6:  end for
7:  Tc ← GetTimestamps(1)
8:  for each key k ∈ Keys do
9:    lastCommit(k) ← Tc
10: end for
11: Propose(Ts → Tc)

Algorithm 2 Watermark: Calculate DoneUntil(T, isPending)
1:  if T ∉ MinHeap then
2:    MinHeap ← T
3:  end if
4:  pending(T) ← isPending
5:  curDoneTs ← DoneUntil
6:  for each minTs ∈ MinHeap.Peek() do
7:    if pending(minTs) then
8:      break
9:    end if
10:   MinHeap.Pop()
11:   curDoneTs ← minTs
12: end for
13: DoneUntil ← curDoneTs

Along with the transaction status update, the Zero leader also sends out a MaxAssigned timestamp. MaxAssigned is calculated using a Watermark algorithm ??, which maintains a min-heap of all allocated timestamps, both start and commit timestamps.
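A minimal sketch of this watermark bookkeeping, mirroring Algorithm 2 with a binary min-heap and a pending map, is shown below. The names and structure are illustrative only, not Dgraph's actual implementation.

package main

import (
	"container/heap"
	"fmt"
)

// tsHeap is a min-heap of timestamps that have been handed out but
// whose fate (done or still pending) is tracked in the pending map.
type tsHeap []uint64

func (h tsHeap) Len() int            { return len(h) }
func (h tsHeap) Less(i, j int) bool  { return h[i] < h[j] }
func (h tsHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *tsHeap) Push(x interface{}) { *h = append(*h, x.(uint64)) }
func (h *tsHeap) Pop() interface{} {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// Watermark tracks the highest timestamp at and below which every
// allocated timestamp has been marked done, as in Algorithm 2.
type Watermark struct {
	minHeap   tsHeap
	pending   map[uint64]bool
	doneUntil uint64
}

func NewWatermark() *Watermark {
	return &Watermark{pending: map[uint64]bool{}}
}

// Process records ts as pending or done, then advances doneUntil past
// every leading timestamp that is no longer pending.
func (w *Watermark) Process(ts uint64, isPending bool) uint64 {
	if _, ok := w.pending[ts]; !ok {
		heap.Push(&w.minHeap, ts)
	}
	w.pending[ts] = isPending

	for w.minHeap.Len() > 0 {
		min := w.minHeap[0]
		if w.pending[min] {
			break
		}
		heap.Pop(&w.minHeap)
		delete(w.pending, min)
		w.doneUntil = min
	}
	return w.doneUntil
}

func main() {
	w := NewWatermark()
	w.Process(1, false)              // start ts: done immediately
	w.Process(3, true)               // commit ts: pending until consensus
	fmt.Println(w.Process(2, false)) // 2: ts 3 still blocks further advance
	fmt.Println(w.Process(3, false)) // 3: consensus reached, watermark moves
}

In this sketch, Process(ts, true) corresponds to handing out a commit timestamp that still awaits consensus, while Process(ts, false) marks a timestamp as done.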
As consensus is achieved, the timestamps are marked as done, and MaxAssigned gets advanced to the maximum timestamp up until which everything has achieved consensus as needed. Note that start timestamps don't typically need a consensus (unless the lease needs to be updated) and get marked as done immediately. Commit timestamps always need a consensus to ensure that the Zero group achieves quorum on the status of the transaction. This allows a Zero follower to become a leader and have full knowledge of transaction statuses. This ordering is crucial to achieve the transactional guarantees, as we will see below.

Once Alpha leaders receive this update, they would propose it to their followers, applying the updates in the same order. All Raft proposal applications in Alphas are done serially. Alphas also have an Oracle, which keeps track of the pending transactions. They maintain the start timestamp, along with a transaction cache which keeps all the updated posting lists in memory. On a transaction abort, the cache is simply dropped. On a transaction commit, the posting lists are written to Badger using the commit timestamp. Finally, the MaxAssigned timestamp is updated.

Figure 5: MaxAssigned watermark. Open circles represent pending and filled circles represent done. Start timestamps 1, 2, and 4 are immediately marked as done. Commit timestamp 3 begins and must have consensus before it is done. The watermark keeps track of the highest timestamp at and below which everything is done.

Figure 6: The MaxAssigned system ensures linearizable reads. Reads at timestamps higher than the current MaxAssigned (MA) must block to ensure the writes up until the read timestamp are applied. Txn 2 receives start ts 3, and a read at ts 3 must acknowledge any writes up to ts 2.

Every read or write operation must have a start timestamp. When a new query or mutation hits an Alpha, it would ask Zero to assign a timestamp. This operation is typically batched to only allow one pending assignment call to the Zero leader per Alpha. If the start timestamp of a newly received query is higher than the MaxAssigned registered by that Alpha, it would block the query until its MaxAssigned reaches or exceeds the start ts. This solution nicely tackles a wide array of edge case scenarios, including an Alpha falling behind or going behind a network partition from its peers or just restarting after a crash, etc. In all those cases, the queries would be blocked until the Alpha has seen all updates up until the timestamp of the query, thus maintaining the guarantee of transactions and linearizable reads.

For correctness, only the Zero leader is allowed to assign timestamps, uids, etc. There are edge cases where Zero followers would mistakenly think they're the leaders and serve stale data — Dgraph does multiple things to avoid these scenarios.

1. If the Zero leadership changes, the new leader would lease out a range of timestamps higher than the previous leader has seen. However, an older commit proposal stuck with the older leader can get forwarded to the new one. This can allow a commit to happen at an older timestamp, causing failure of transactional guarantees. We avoid this by disallowing Zero followers from forwarding requests to the leader and rejecting those proposals.

// TODO: We should have a membership section, which explains how membership works and is transmitted to Alphas.

2.
Every membership state update streamed from Zero re-\nquires a read-quorum (check with Zero peers to find the latest\nRaft index update seen by the group). If the Zero is behind\na partition, for example, it wouldn’t be able to achieve this\nquorum and send out a membership update. Alphas expect an\nupdate periodically and if they don’t hear from the Zero leader\nafter a few cycles, they’d consider the Zero leader defunct,\nabolish connection and retry to establish connection with a\n(potentially different) healthy leader.\n6 Consistency Model\nDgraph supports MVCC, Read Snapshots and Distributed\nACID transactions. The transactions are cluster-wide across\nuniversal dataset – not limited by any key level or server\nlevel restrictions. Transactions are also lockless. They don’t\nblock/wait on seeing pending writes by uncommitted trans-\nactions. They can all proceed concurrently and Zero would\nchoose to commit or abort them depending on conflicts.\nConsidering the expense of tracking all the data read by a\nsingle graph query (could be millions of keys), Dgraph does\nnot provide Serializable Snapshot Isolation. Instead, Dgraph\nprovides Snapshot Isolation, tracking writes which is a much\nmore contained set than reads.\nDgraph hands out monotonically increasing timestamps\n(represented by T) for transactions (represented by Tx).\nErgo, if any transaction Txicommits before Txjstarts, then\nTTxi\ncommit<TTxj\nstart. Any commit at Tcommit is guaranteed to be\nseen by a read at timestamp Treadby any client, if Tread>\nTcommit . Thus, Dgraph reads are linearizable. Also, all reads\nare snapshots across the entire cluster, seeing all previously\ncommitted transactions in full.\nAs mentioned, Dgraph reads are linearizable. While this is\ngreat for correctness, it can cause performance issues when a\nlot of reads and writes are going on simultaneously. All reads\nare supposed to block until the Alpha has seen all the writes\nup until the read timestamp. In many cases, operators would\nopt for performance over achieving linearizablity. Dgraph\n8provides two options for speeding up reads:\n1. A typical read-write transaction would allocate a new\ntimestamp to the client. This would update MaxAssigned\nwhich would then flow via Zero leader to Alpha leaders and\nthen get proposed. Until that happens, a read can’t proceed.\nRead-only transactions would still require a read timestamp\nfrom Zero, but Zero would opportunistically hand out the\nsame read timestamp to multiple callers, allowing Alpha to\namortize the cost of reaching MaxAssigned across multiple\nqueries.\n2. Best-effort transactions are a variant of read-only trans-\nactions, which would use an Alpha’s observed MaxAssigned\ntimestamp as the read timestamp. Thus, the receiver Alpha\ndoes not have to block at all and can continue to process the\nquery. This is the equivalent of eventual consistency model\ntypical in other databases. Ultimately, every Dgraph read is a\nsnapshot over the entire distributed database and none of the\nreads would violate the snapshot guarantee.1\n7 Replication\nMost updates to Dgraph are done via Raft. Let’s start with\nAlphas which can push a lot of data through the system. All\nmutations and transaction updates are proposed via Raft and\nare made part of the Raft write-ahead logs. On a crash and\nrestart, the Raft logs are replayed from the last snapshot to\nbring the state machine back up to the correct latest state. 
On the flip side, the longer the logs, the longer it takes for an Alpha to replay them on a restart, causing a start delay. So, the logs must be trimmed by taking a snapshot, which indicates that the state up until that point has been persisted and does not need to be replayed on a restart.

As mentioned above, Alphas write mutations to the Raft WAL, but keep them in memory in a transaction cache. When a transaction is committed, the mutations are written to the state at the commit timestamp. This means that on a restart, all the pending transactions must be brought back to memory via the Raft WAL. This requires a calculation to pick the right Raft index to trim the logs at, which would keep all the pending transactions in their entirety in the logs.

One of the lessons we learnt while fixing Jepsen issues was that, to improve debuggability of a complex distributed system, the system should run like clockwork. In other words, once an event in one system has happened, events in other systems should almost be predictable. This guiding principle determined how we take snapshots.

The Raft paper allows leaders and followers to take snapshots independently of each other. Dgraph used to do that, but that brought unpredictability to the system and made debugging much harder. So, keeping with the hard-learnt lesson of the predictability principle, we changed it to make the leader calculate the snapshot index and propose this result. This allowed the leader and followers to all take a snapshot at the same index, at exactly the same time (if they're generally caught up). Furthermore, this group-level snapshot event is then communicated to Zero to allow it to trim the conflict map by removing all entries below the snapshot timestamp. Following this chain of events in the logs has improved debuggability of the system dramatically.

Dgraph only keeps metadata in Raft snapshots; the actual data is stored separately. Dgraph does not make a copy of that data during a snapshot. When a follower falls behind and needs a snapshot, it asks the leader for it, and the leader would stream the snapshot from its state (Badger, just like Dgraph, supports MVCC and, when doing a read at a certain timestamp, is operating upon a logical snapshot of the DB). In the previous versions, the follower would wipe out its current state before accepting the updates from the leader. In the newer versions, the leader can choose to send only the delta state update to the follower, which can decrease the data transmitted considerably.

1 Note however that a typical Dgraph query could hit multiple Alphas in various groups — some of these Alphas might not have reached the read timestamp (the initial Alpha's MaxAssigned timestamp) yet. In those cases, the query could still block until those Alphas catch up.

8 High Availability and Scalability

Dgraph's architecture revolves around Raft groups for update log serialization and replication. In the CAP theorem, this follows CP, i.e. in a network partition, Dgraph would choose consistency over availability. However, the concepts of the CAP theorem should not be confused with high availability, which is determined by how many instances can be lost without the service getting affected.

In a three-node group, Dgraph can lose one instance per group without causing any measurable impact on the functionality of the database. However, losing two instances from the same group would cause Dgraph to block, considering all updates go through Raft.
In a five-node group, the number of\ninstances that can be lost without affecting functionality is\ntwo. We do not recommend running more than five replicas\nper group.\nGiven the central managerial role of Dgraph Zero, one\nmight assume that Zero would be the single point of failure.\nHowever, that’s not the case. In the scenario where Zero\nfollower dies, nothing changes really. If the Zero leader dies,\none of the Zero followers would become the leader, renew its\ntimestamp and uid assignment lease, pick up the transaction\nstatus logs (stored via Raft) and start accepting requests from\nAlphas. The only thing that could be lost during this transition\nare transactions which were trying to commit with the lost\nZero. They might error out, but could be retried. Same goes\nfor Alphas. All Alpha followers have the same information\nas the Alpha leader and any of the members of the group can\nbe lost without losing any state.\nDgraph can support as many groups as can be represented\n9by 32-bit integer (even that is an artificial limit). Each group\ncan have one, three, five (potentially more, but not recom-\nmended) replicas. The number of uids (graph nodes) that can\nbe present in the system are limited by 64-bit unsigned integer,\nsame goes for transaction timestamps. All of these are very\ngenerous limits and not a cause of concern for scalability.\n9 Queries\nA typical Dgraph query can hit many Alphas, depending upon\nwhere the predicates lie. Each query is sub-divided into tasks,\neach task responsible for one predicate.\n9.1 Traversals\nDgraph query tasks (henceforth referred to as tasks) are gen-\nerally built around the mechanism of converting uid list to\nmatrix during traversal. The query can have a list of uids to\ntraverse, the execution engine would do lookups in Badger\nconcurrently to get the posting lists for each Uid (note that\npredicate is always part of the task), converting each uid to\na list. Thus, a task query would return a list of Uid lists, aka\nUidMatrix. If the predicate holds a value (example, predicate\nname), the UidList returns a list of values, aka ValueMatrix.\nA predicate could allow only one uid/value, or allow mul-\ntiple uids/value. This mechanism works correctly in either\nof those scenarios. If the posting list only has one uid/value,\nthe resulting list would only have one element. A matrix in\nthis case would have a list of lists, each list with zero or one\nelement. Note that there’s parity between the index of the Uid\nin list and the index of the list in UidMatrix. So, Dgraph can\naccurately maintain the relationships.\nA ValueMatrix is typically the leaf in the task tree. Once\nwe have values, we just need to encode them in the results.\nHowever, a task with UidMatrix result would typically have\nsub-tasks. Those sub-tasks would need a query UidList for\nprocessing. Dgraph would merge-sort the UidMatrix into a\nsingle, sorted list of Uids, which would be copied over to the\nsub-tasks. Each sub-task could similarly run expand on the\nsame or other predicates.\n9.2 Functions\nDgraph also supports functions. These functions provide an\neasy way to query Dgraph when the global uid space needs\nto be restricted to a small set (or even a single uid). Functions\nalso provide advanced functionality like regular expressions,\nfull-text search, equality and inequality over sortable data\ntypes, geo-spatial searches, etc. These functions are also en-\ncoded into a task query, except this time they don’t start with a\nUidList. 
The task query instead contains tokens, derived from\nthe tokenizers corresponding to the index these functions are\nusing (as explained above). Most functions require some sort\nof index to operate, for example, regular expression queriesuse trigram indexing, geo-spatial queries uses S2-cell based\ngeo indexing and so on... As described in section above, in-\ndexing keys encode predicate and token, instead of a predicate\nand uid. So, the mechanism to fill up the matrix is the same as\nin any other task query. Only this time, we use list of tokens\ninstead of a list of Uids as the query set.\n9.3 Filters\nThe technique described above works for traversals. But, fil-\nters (intersections) are a big part of user queries. Each task\ncontains a UidList as a query and a matrix as a result. Task\nalso stores a resulting uid list, which can store a uid set from\nthe resulting UidMatrix. Depending upon whether filters are\napplied or not, this uid set can be the same as merge-sorted\nUidMatrix or a subset of it.\nFilters are a tree in their own right. Dgraph supports AND,\nOR and NOT filters, which can be further combined to create\na complex filter tree. Filters typically consist of functions\nwhich can ask for more information and are represented as\ntasks. These tasks execute in the same mechanism described\nabove, but do one additional thing. The tasks also contain the\nsource list of Uids (the resulting set from the parent task to\nwhich the filter is being applied to). This list of uids is sent as\npart of the filter task. The task uses these uids to perform any\nintersections at the destination server, returning only a subset\nof the results, instead of retrieving all results for the task. This\ncan significantly cut down the result payload size while also\nallowing optimizations during filter task execution to speed\nthings up. Once the results are returned, the co-ordinator\nserver would stitch up the results using the AND, OR or NOT\noperators.\n9.4 Intersections\nThe uid intersection itself uses three modes of integer inter-\nsection, choosing between linear scan, block jump or binary\nsearch depending upon the ratio of the size of the results and\nthe size of the source UidList to provide the best performance.\nWhen the two lists are of the same size, Dgraph uses linear\nscan over both the lists. When one list is much longer than\nother, Dgraph would iterate over the shorter list and do bi-\nnary lookups over the longer. For some range in between,\nDgraph would iterate over the shorter and do forward seeking\nblock jumps over the longer list. Dgraph’s block based integer\nencoding mechanism makes all this quite efficient.\nTODO: Talk about ACID.\n10 Future Work\nWe had removed data caching from Dgraph due to heavy read-\nwrite contention, and built a new, contention-free Go cache\nlibrary to aid our reads. Work is underway in integrating that\nwith Dgraph. Dgraph does not have any query or response\n10caching — such a cache would be difficult to maintain in\nan MVCC environment where each read can have different\nresults, based on its timestamp.\nSorted integer encoding and intersection is a hotly re-\nsearched topic and there is a lot of room for optimization\nhere in terms of performance. 
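As a concrete illustration of the intersection strategies described in the Intersections section above, here is a simplified sketch that picks between a linear merge and binary-search probing based on the size ratio of the two sorted uid lists. It operates on plain uint64 slices rather than Dgraph's group-varint-encoded blocks, omits the block-jump mode, and uses an arbitrary cut-over ratio; the names are illustrative, not Dgraph's.

package main

import (
	"fmt"
	"sort"
)

// intersectLinear merges two sorted lists of comparable length.
func intersectLinear(a, b []uint64) []uint64 {
	var out []uint64
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		switch {
		case a[i] < b[j]:
			i++
		case a[i] > b[j]:
			j++
		default:
			out = append(out, a[i])
			i++
			j++
		}
	}
	return out
}

// intersectBinary walks the shorter list and binary-searches the longer
// one, which wins when one list is much smaller than the other.
func intersectBinary(short, long []uint64) []uint64 {
	var out []uint64
	for _, u := range short {
		k := sort.Search(len(long), func(i int) bool { return long[i] >= u })
		if k < len(long) && long[k] == u {
			out = append(out, u)
		}
	}
	return out
}

// Intersect picks a strategy from the size ratio, loosely following the
// idea of switching between linear scans and search-based probing.
func Intersect(a, b []uint64) []uint64 {
	if len(a) > len(b) {
		a, b = b, a
	}
	const ratio = 32 // arbitrary cut-over for this sketch
	if len(a)*ratio < len(b) {
		return intersectBinary(a, b)
	}
	return intersectLinear(a, b)
}

func main() {
	followersOfX := []uint64{2, 5, 9, 12, 40}
	followersOfY := []uint64{5, 12, 13, 40, 77}
	fmt.Println(Intersect(followersOfX, followersOfY)) // [5 12 40]
}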
As mentioned earlier, work is\nunderway in experimenting a switch to Roaring Bitmaps.\nWe also plan to work on a query optimizer, which can\nbetter determine the right sequence in which to execute query.\nSo far, the simple nature of GraphQL has let the operators\nmanually optimize their queries — but surely Dgraph can do\na better job knowing the state of data.\nFuture work here is to allow writes during the shard move,\nwhich depending upon the size of the shard can take some\ntime.\nTODO: Add a conclusion.11 Acknowledgments\nDgraph wouldn’t have been possible without the tireless con-\ntributions of its core dev team and extended community. This\nwork also wouldn’t have been possible without funding from\nour investors. A full list of contributors is present here:\ngithub.com/dgraph-io/dgraph/graphs/contributors\nDgraph is an open source software, available on\nhttps://github.com/dgraph-io/dgraph\nMore information about Dgraph is available on\nhttps://dgraph.io\n11" } ]
{ "category": "App Definition and Development", "file_name": "dgraph.pdf", "project_name": "Dgraph", "subcategory": "Database" }
[ { "data": "Cover of this issue: Loreto Prado, Enrique Chicote and Francisco Melgares, of ¡A divorciarse tocan!

JACINTO CAPELLA AND JOSÉ DE LUCIO
¡A DIVORCIARSE TOCAN!
An original comic farce (juguete cómico) in three acts.
Premiered at the Teatro Cómico, Madrid, on 10 December.
Year VI. 5 March 1932. Madrid. No. 234.

CAST
Characters: Actors
Valentina: Loreto Prado.
Cristina: Consuelo Nieva.
Esmeraldina: Carmen L. Solía.
Jenara: Julia Medero.
Liberada: Pepita del Cid.
Petra: Luisa Melchor.
Teresa: Josefina Infiesta.
Blasa: Emilia del Cid.
Nicasia: Amalia Anchorena.
Lucía: Natividad Rodríguez.
Pascasio: Enrique Chicote.
Teodoro: Francisco Melgares.
Amador: José Cuenca.
Ciríaco: José Sampietro.
Manolo: Rodolfo Recober.
Benigno: José Lucio.
Federico: Antonio Martínez.
Miguel: José Delgado.
Benito: Juan Jiménez Romero.

The action takes place in Madrid, in the present day. Stage directions are given from the actor's side.
Note: The last four women and the last five men may be doubled.
" } ]
{ "category": "App Definition and Development", "file_name": "adivorciarsetoca00cape.pdf", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "Iterator Facade and Adaptor\nAuthor : David Abrahams, Jeremy Siek, Thomas Witt\nContact : dave@boost-consulting.com ,jsiek@osl.iu.edu ,witt@styleadvisor.com\nOrganization :Boost Consulting , Indiana University Open Systems Lab ,Zephyr Asso-\nciates, Inc.\nDate : 2004-11-01\nNumber : This is a revised version of N1530=03-0113, which was accepted for\nTechnical Report 1 by the C++ standard committee’s library working\ngroup.\ncopyright: Copyright David Abrahams, Jeremy Siek, and Thomas Witt 2003.\nabstract: We propose a set of class templates that help programmers build standard-\nconforming iterators, both from scratch and by adapting other iterators.\nTable of Contents\nMotivation\nImpact on the Standard\nDesign\nIterator Concepts\nInteroperability\nIterator Facade\nUsage\nIterator Core Access\noperator[]\noperator->\nIterator Adaptor\nSpecialized Adaptors\nProposed Text\nHeader <iterator_helper> synopsis [lib.iterator.helper.synopsis]\nIterator facade [lib.iterator.facade]\nClass template iterator_facade\niterator_facade Requirements\niterator_facade operations\nIterator adaptor [lib.iterator.adaptor]\nClass template iterator_adaptor\niterator_adaptor requirements\n1iterator_adaptor base class parameters\niterator_adaptor public operations\niterator_adaptor protected member functions\niterator_adaptor private member functions\nSpecialized adaptors [lib.iterator.special.adaptors]\nIndirect iterator\nClass template pointee\nClass template indirect_reference\nClass template indirect_iterator\nindirect_iterator requirements\nindirect_iterator models\nindirect_iterator operations\nReverse iterator\nClass template reverse_iterator\nreverse_iterator requirements\nreverse_iterator models\nreverse_iterator operations\nTransform iterator\nClass template transform_iterator\ntransform_iterator requirements\ntransform_iterator models\ntransform_iterator operations\nFilter iterator\nClass template filter_iterator\nfilter_iterator requirements\nfilter_iterator models\nfilter_iterator operations\nCounting iterator\nClass template counting_iterator\ncounting_iterator requirements\ncounting_iterator models\ncounting_iterator operations\nFunction output iterator\nClass template function_output_iterator\nHeader\nfunction_output_iterator requirements\nfunction_output_iterator models\nfunction_output_iterator operations\nMotivation\nIterators play an important role in modern C++ programming. The iterator is the central abstrac-\ntion of the algorithms of the Standard Library, allowing algorithms to be re-used in in a wide va-\nriety of contexts. The C++ Standard Library contains a wide variety of useful iterators. Every\none of the standard containers comes with constant and mutable iterators2, and also reverse ver-\nsions of those same iterators which traverse the container in the opposite direction. The Standard\n2also supplies istream_iterator and ostream_iterator for reading from and writing to streams,\ninsert_iterator ,front_insert_iterator and back_insert_iterator for inserting elements into\ncontainers, and raw_storage_iterator for initializing raw memory [7].\nDespite the many iterators supplied by the Standard Library, obvious and useful iterators are missing,\nand creating new iterator types is still a common task for C++ programmers. The literature documents\nseveral of these, for example line iterator [3] and Constant iterator [9]. 
The iterator abstraction is so\npowerful that we expect programmers will always need to invent new iterator types.\nAlthough it is easy to create iterators that almost conform to the standard, the iterator requirements\ncontain subtleties which can make creating an iterator which actually conforms quite difficult. Further,\nthe iterator interface is rich, containing many operators that are technically redundant and tedious to\nimplement. To automate the repetitive work of constructing iterators, we propose iterator_facade ,\nan iterator base class template which provides the rich interface of standard iterators and delegates\nits implementation to member functions of the derived class. In addition to reducing the amount of\ncode necessary to create an iterator, the iterator_facade also provides compile-time error detection.\nIterator implementation mistakes that often go unnoticed are turned into compile-time errors because\nthe derived class implementation must match the expectations of the iterator_facade .\nA common pattern of iterator construction is the adaptation of one iterator to form a new one.\nThe functionality of an iterator is composed of four orthogonal aspects: traversal, indirection, equality\ncomparison and distance measurement. Adapting an old iterator to create a new one often saves\nwork because one can reuse one aspect of functionality while redefining the other. For example, the\nStandard provides reverse_iterator , which adapts any Bidirectional Iterator by inverting its direction\nof traversal. As with plain iterators, iterator adaptors defined outside the Standard have become\ncommonplace in the literature:\n•Checked iter[13] adds bounds-checking to an existing iterator.\n•The iterators of the View Template Library[14], which adapts containers, are themselves adaptors\nover the underlying iterators.\n•Smart iterators [5] adapt an iterator’s dereferencing behavior by applying a function object to the\nobject being referenced and returning the result.\n•Custom iterators [4], in which a variety of adaptor types are enumerated.\n•Compound iterators [1], which access a slice out of a container of containers.\n•Several iterator adaptors from the MTL [12]. The MTL contains a strided iterator, where each\ncall to operator++() moves the iterator ahead by some constant factor, and a scaled iterator,\nwhich multiplies the dereferenced value by some constant.\nTo fulfill the need for constructing adaptors, we propose the iterator_adaptor class template.\nInstantiations of iterator_adaptor serve as a base classes for new iterators, providing the default\nbehavior of forwarding all operations to the underlying iterator. The user can selectively replace these\nfeatures in the derived iterator class. This proposal also includes a number of more specialized adaptors,\nsuch as the transform_iterator that applies some user-specified function during the dereference of\nthe iterator.\n1We use the term concept to mean a set of requirements that a type must satisfy to be used with a\nparticular template parameter.\n2The term mutable iterator refers to iterators over objects that can be changed by assigning to the\ndereferenced iterator, while constant iterator refers to iterators over objects that cannot be modified.\n3Impact on the Standard\nThis proposal is purely an addition to the C++ standard library. 
However, note that this proposal\nrelies on the proposal for New Iterator Concepts.\nDesign\nIterator Concepts\nThis proposal is formulated in terms of the new iterator concepts as proposed in n1550 , since user-\ndefined and especially adapted iterators suffer from the well known categorization problems that are\ninherent to the current iterator categories.\nThis proposal does not strictly depend on proposal n1550 , as there is a direct mapping between new\nand old categories. This proposal could be reformulated using this mapping if n1550 was not accepted.\nInteroperability\nThe question of iterator interoperability is poorly addressed in the current standard. There are currently\ntwo defect reports that are concerned with interoperability issues.\nIssue 179concerns the fact that mutable container iterator types are only required to be convertible\nto the corresponding constant iterator types, but objects of these types are not required to interoperate\nin comparison or subtraction expressions. This situation is tedious in practice and out of line with\nthe way built in types work. This proposal implements the proposed resolution to issue 179, as most\nstandard library implementations do nowadays. In other words, if an iterator type A has an implicit or\nuser defined conversion to an iterator type B, the iterator types are interoperable and the usual set of\noperators are available.\nIssue 280concerns the current lack of interoperability between reverse iterator types. The proposed\nnew reverse iterator template fixes the issues raised in 280. It provides the desired interoperability\nwithout introducing unwanted overloads.\nIterator Facade\nWhile the iterator interface is rich, there is a core subset of the interface that is necessary for all the\nfunctionality. We have identified the following core behaviors for iterators:\n•dereferencing\n•incrementing\n•decrementing\n•equality comparison\n•random-access motion\n•distance measurement\nIn addition to the behaviors listed above, the core interface elements include the associated types\nexposed through iterator traits: value_type ,reference ,difference_type , and iterator_category .\nIterator facade uses the Curiously Recurring Template Pattern (CRTP) [ Cop95 ] so that the user\ncan specify the behavior of iterator_facade in a derived class. Former designs used policy objects to\nspecify the behavior, but that approach was discarded for several reasons:\n1.the creation and eventual copying of the policy object may create overhead that\ncan be avoided with the current approach.\n42.The policy object approach does not allow for custom constructors on the created\niterator types, an essential feature if iterator_facade should be used in other\nlibrary implementations.\n3.Without the use of CRTP, the standard requirement that an iterator’s opera-\ntor++ returns the iterator type itself would mean that all iterators built with the\nlibrary would have to be specializations of iterator_facade<...> , rather than\nsomething more descriptive like indirect_iterator<T*> . Cumbersome type gen-\nerator metafunctions would be needed to build new parameterized iterators, and\na separate iterator_adaptor layer would be impossible.\nUsage\nThe user of iterator_facade derives his iterator class from a specialization of iterator_facade and\npasses the derived iterator class as iterator_facade ’s first template parameter. The order of the other\ntemplate parameters have been carefully chosen to take advantage of useful defaults. 
For example,\nwhen defining a constant lvalue iterator, the user can pass a const-qualified version of the iterator’s\nvalue_type asiterator_facade ’sValue parameter and omit the Reference parameter which follows.\nThe derived iterator class must define member functions implementing the iterator’s core behaviors.\nThe following table describes expressions which are required to be valid depending on the category of\nthe derived iterator type. These member functions are described briefly below and in more detail in the\niterator facade requirements.\nExpression Effects\ni.dereference() Access the value referred to\ni.equal(j) Compare for equality with j\ni.increment() Advance by one position\ni.decrement() Retreat by one position\ni.advance(n) Advance by npositions\ni.distance_to(j) Measure the distance to j\nIn addition to implementing the core interface functions, an iterator derived from iterator_facade\ntypically defines several constructors. To model any of the standard iterator concepts, the iterator must\nat least have a copy constructor. Also, if the iterator type Xis meant to be automatically interoperate\nwith another iterator type Y(as with constant and mutable iterators) then there must be an implicit\nconversion from XtoYor from YtoX(but not both), typically implemented as a conversion constructor.\nFinally, if the iterator is to model Forward Traversal Iterator or a more-refined iterator concept, a default\nconstructor is required.\nIterator Core Access\niterator_facade and the operator implementations need to be able to access the core member functions\nin the derived class. Making the core member functions public would expose an implementation detail\nto the user. The design used here ensures that implementation details do not appear in the public\ninterface of the derived iterator type.\nPreventing direct access to the core member functions has two advantages. First, there is no possi-\nbility for the user to accidently use a member function of the iterator when a member of the value type\nwas intended. This has been an issue with smart pointer implementations in the past. The second and\nmain advantage is that library implementers can freely exchange a hand-rolled iterator implementation\nfor one based on iterator_facade without fear of breaking code that was accessing the public core\nmember functions directly.\n5In a naive implementation, keeping the derived class’ core member functions private would require\nit to grant friendship to iterator_facade and each of the seven operators. In order to reduce the\nburden of limiting access, iterator_core_access is provided, a class that acts as a gateway to the\ncore member functions in the derived iterator class. The author of the derived class only needs to grant\nfriendship to iterator_core_access to make his core member functions available to the library.\niterator_core_access will be typically implemented as an empty class containing only private\nstatic member functions which invoke the iterator core member functions. There is, however, no need\nto standardize the gateway protocol. Note that even if iterator_core_access used public member\nfunctions it would not open a safety loophole, as every core member function preserves the invariants\nof the iterator.\noperator[]\nThe indexing operator for a generalized iterator presents special challenges. A random access iterator’s\noperator[] is only required to return something convertible to its value_type . 
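As an illustration of the usage pattern and core access mechanism described above (this example is not part of the proposed text), a minimal forward iterator over a hypothetical singly linked list might look as follows. The names list_node and node_iter are invented for the example; only the friend declaration for iterator_core_access and the three core member functions are significant.

struct list_node
{
    int data;
    list_node* next;
};

class node_iter
  : public iterator_facade<
        node_iter
      , int
      , forward_traversal_tag
    >
{
 public:
    node_iter() : m_node(0) {}
    explicit node_iter(list_node* p) : m_node(p) {}

 private:
    friend class iterator_core_access;

    // The three core behaviors needed for a forward iterator.
    int& dereference() const { return m_node->data; }
    bool equal(node_iter const& other) const { return m_node == other.m_node; }
    void increment() { m_node = m_node->next; }

    list_node* m_node; // exposition only
};

Given only these three core functions, iterator_facade supplies operator*, operator->, both forms of operator++, and the == and != comparisons; because reference is int& and the traversal category is forward_traversal_tag, the resulting iterator_category is convertible to forward_iterator_tag.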
Requiring that it return\nan lvalue would rule out currently-legal random-access iterators which hold the referenced value in a\ndata member (e.g. counting_iterator ), because *(p+n) is a reference into the temporary iterator\np+n, which is destroyed when operator[] returns.\nWritable iterators built with iterator_facade implement the semantics required by the preferred\nresolution to issue 299 and adopted by proposal n1550 : the result of p[n] is an object convertible\nto the iterator’s value_type , and p[n] = x is equivalent to *(p + n) = x (Note: This result object\nmay be implemented as a proxy containing a copy of p+n). This approach will work properly for any\nrandom-access iterator regardless of the other details of its implementation. A user who knows more\nabout the implementation of her iterator is free to implement an operator[] that returns an lvalue in\nthe derived iterator class; it will hide the one supplied by iterator_facade from clients of her iterator.\noperator->\nThe reference type of a readable iterator (and today’s input iterator) need not in fact be a reference,\nso long as it is convertible to the iterator’s value_type . When the value_type is a class, however, it\nmust still be possible to access members through operator-> . Therefore, an iterator whose reference\ntype is not in fact a reference must return a proxy containing a copy of the referenced value from its\noperator-> .\nThe return types for iterator_facade ’soperator-> and operator[] are not explicitly specified.\nInstead, those types are described in terms of a set of requirements, which must be satisfied by the\niterator_facade implementation.\nIterator Adaptor\nTheiterator_adaptor class template adapts some Base3type to create a new iterator. Instantiations of\niterator_adaptor are derived from a corresponding instantiation of iterator_facade and implement\nthe core behaviors in terms of the Base type. In essence, iterator_adaptor merely forwards all\noperations to an instance of the Base type, which it stores as a member.\nThe user of iterator_adaptor creates a class derived from an instantiation of iterator_adaptor\nand then selectively redefines some of the core member functions described in the iterator_facade\ncore requirements table. The Base type need not meet the full requirements for an iterator; it need\n[Cop95] [Coplien, 1995] Coplien, J., Curiously Recurring Template Patterns, C++ Report, February\n1995, pp. 24-27.\n3The term “Base” here does not refer to a base class and is not meant to imply the use of derivation. We\nhave followed the lead of the standard library, which provides a base() function to access the underlying\niterator object of a reverse_iterator adaptor.\n6only support the operations used by the core interface functions of iterator_adaptor that have not\nbeen redefined in the user’s derived class.\nSeveral of the template parameters of iterator_adaptor default to use_default . This allows\nthe user to make use of a default parameter even when she wants to specify a parameter later in the\nparameter list. Also, the defaults for the corresponding associated types are somewhat complicated,\nso metaprogramming is required to compute them, and use_default can help to simplify the imple-\nmentation. 
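As an illustration of how use_default is employed in practice (this example is not part of the proposed text), a derived adaptor can accept the defaults for the Value, CategoryOrTraversal, and Reference parameters while still specifying the later Difference parameter explicitly. The name wrapped_iter and the choice of std::list<int>::iterator as the Base type are invented for the example and assume <list>; the iterator_adaptor parameter list itself appears in the synopsis below.

class wrapped_iter
  : public iterator_adaptor<
        wrapped_iter                 // Derived
      , std::list<int>::iterator     // Base
      , use_default                  // Value: taken from the Base iterator
      , use_default                  // CategoryOrTraversal: taken from the Base iterator
      , use_default                  // Reference: taken from the Base iterator
      , long                         // Difference: specified explicitly
    >
{
 public:
    wrapped_iter() {}
    explicit wrapped_iter(std::list<int>::iterator it)
      : wrapped_iter::iterator_adaptor_(it) {}   // forward to the protected base typedef
};

Here every core operation is forwarded unchanged to the stored std::list<int>::iterator; only the difference_type differs from the Base iterator's.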
Finally, the identity of the use_default type is not left unspecified because specification\nhelps to highlight that the Reference template parameter may not always be identical to the iterator’s\nreference type, and will keep users from making mistakes based on that assumption.\nSpecialized Adaptors\nThis proposal also contains several examples of specialized adaptors which were easily implemented\nusing iterator_adaptor :\n•indirect_iterator , which iterates over iterators, pointers, or smart pointers and applies an extra\nlevel of dereferencing.\n•A new reverse_iterator , which inverts the direction of a Base iterator’s motion, while allowing\nadapted constant and mutable iterators to interact in the expected ways (unlike those in most\nimplementations of C++98).\n•transform_iterator , which applies a user-defined function object to the underlying values when\ndereferenced.\n•filter_iterator , which provides a view of an iterator range in which some elements of the\nunderlying range are skipped.\n•counting_iterator , which adapts any incrementable type (e.g. integers, iterators) so that incre-\nmenting/decrementing the adapted iterator and dereferencing it produces successive values of the\nBase type.\n•function_output_iterator , which makes it easier to create custom output iterators.\nBased on examples in the Boost library, users have generated many new adaptors, among them\na permutation adaptor which applies some permutation to a random access iterator, and a strided\nadaptor, which adapts a random access iterator by multiplying its unit of motion by a constant factor.\nIn addition, the Boost Graph Library (BGL) uses iterator adaptors to adapt other graph libraries,\nsuch as LEDA [10] and Stanford GraphBase [8], to the BGL interface (which requires C++ Standard\ncompliant iterators).\nProposed Text\nHeader <iterator_helper> synopsis [lib.iterator.helper.synopsis]\nstruct use_default;\nstruct iterator_core_access { /* implementation detail */ };\ntemplate <\nclass Derived\n, class Value\n, class CategoryOrTraversal\n, class Reference = Value&\n, class Difference = ptrdiff_t\n7>\nclass iterator_facade;\ntemplate <\nclass Derived\n, class Base\n, class Value = use_default\n, class CategoryOrTraversal = use_default\n, class Reference = use_default\n, class Difference = use_default\n>\nclass iterator_adaptor;\ntemplate <\nclass Iterator\n, class Value = use_default\n, class CategoryOrTraversal = use_default\n, class Reference = use_default\n, class Difference = use_default\n>\nclass indirect_iterator;\ntemplate <class Dereferenceable>\nstruct pointee;\ntemplate <class Dereferenceable>\nstruct indirect_reference;\ntemplate <class Iterator>\nclass reverse_iterator;\ntemplate <\nclass UnaryFunction\n, class Iterator\n, class Reference = use_default\n, class Value = use_default\n>\nclass transform_iterator;\ntemplate <class Predicate, class Iterator>\nclass filter_iterator;\ntemplate <\nclass Incrementable\n, class CategoryOrTraversal = use_default\n, class Difference = use_default\n>\nclass counting_iterator;\ntemplate <class UnaryFunction>\nclass function_output_iterator;\n8Iterator facade [lib.iterator.facade]\niterator_facade is a base class template that implements the interface of standard iterators in terms\nof a few core functions and associated types, to be supplied by a derived iterator class.\nClass template iterator_facade\ntemplate <\nclass Derived\n, class Value\n, class CategoryOrTraversal\n, class Reference = Value&\n, class Difference = ptrdiff_t\n>\nclass iterator_facade 
{\npublic:\ntypedef remove_const<Value>::type value_type;\ntypedef Reference reference;\ntypedef Value* pointer;\ntypedef Difference difference_type;\ntypedef /* see below */ iterator_category;\nreference operator*() const;\n/* see below */ operator->() const;\n/* see below */ operator[](difference_type n) const;\nDerived& operator++();\nDerived operator++(int);\nDerived& operator--();\nDerived operator--(int);\nDerived& operator+=(difference_type n);\nDerived& operator-=(difference_type n);\nDerived operator-(difference_type n) const;\nprotected:\ntypedef iterator_facade iterator_facade_;\n};\n// Comparison operators\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename enable_if_interoperable<Dr1,Dr2,bool>::type // exposition\noperator ==(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename enable_if_interoperable<Dr1,Dr2,bool>::type\noperator !=(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename enable_if_interoperable<Dr1,Dr2,bool>::type\noperator <(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\n9iterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename enable_if_interoperable<Dr1,Dr2,bool>::type\noperator <=(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename enable_if_interoperable<Dr1,Dr2,bool>::type\noperator >(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename enable_if_interoperable<Dr1,Dr2,bool>::type\noperator >=(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\n// Iterator difference\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\n/* see below */\noperator-(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\n// Iterator addition\ntemplate <class Dr, class V, class TC, class R, class D>\nDerived operator+ (iterator_facade<Dr,V,TC,R,D> const&,\ntypename Derived::difference_type n);\ntemplate <class Dr, class V, class TC, class R, class D>\nDerived operator+ (typename Derived::difference_type n,\niterator_facade<Dr,V,TC,R,D> const&);\nThe iterator_category member of iterator_facade is\niterator-category (CategoryOrTraversal, value_type, reference)\nwhere iterator-category is defined as follows:\niterator-category (C,R,V) :=\nif (C is convertible to std::input_iterator_tag\n|| C is convertible to std::output_iterator_tag\n)\nreturn C\nelse if (C is not convertible to incrementable_traversal_tag)\nthe program is ill-formed\nelse return a type X satisfying the following two constraints:\n1. 
X is convertible to X1, and not to any more-derived\n10type, where X1 is defined by:\nif (R is a reference type\n&& C is convertible to forward_traversal_tag)\n{\nif (C is convertible to random_access_traversal_tag)\nX1 = random_access_iterator_tag\nelse if (C is convertible to bidirectional_traversal_tag)\nX1 = bidirectional_iterator_tag\nelse\nX1 = forward_iterator_tag\n}\nelse\n{\nif (C is convertible to single_pass_traversal_tag\n&& R is convertible to V)\nX1 = input_iterator_tag\nelse\nX1 = C\n}\n2.category-to-traversal (X) is convertible to the most\nderived traversal tag type to which X is also\nconvertible, and not to any more-derived traversal tag\ntype.\n[Note: the intention is to allow iterator_category to be one of the five original category tags when\nconvertibility to one of the traversal tags would add no information]\nThe enable_if_interoperable template used above is for exposition purposes. The member op-\nerators should only be in an overload set provided the derived types Dr1and Dr2are interoperable,\nmeaning that at least one of the types is convertible to the other. The enable_if_interoperable ap-\nproach uses SFINAE to take the operators out of the overload set when the types are not interoperable.\nThe operators should behave as-if enable_if_interoperable were defined to be:\ntemplate <bool, typename> enable_if_interoperable_impl\n{};\ntemplate <typename T> enable_if_interoperable_impl<true,T>\n{ typedef T type; };\ntemplate<typename Dr1, typename Dr2, typename T>\nstruct enable_if_interoperable\n: enable_if_interoperable_impl<\nis_convertible<Dr1,Dr2>::value || is_convertible<Dr2,Dr1>::value\n, T\n>\n{};\niterator_facade Requirements\nThe following table describes the typical valid expressions on iterator_facade ’sDerived parameter,\ndepending on the iterator concept(s) it will model. The operations in the first column must be made ac-\ncessible to member functions of class iterator_core_access . In addition, static_cast<Derived*>(iterator_facade*)\nshall be well-formed.\n11In the table below, Fisiterator_facade<X,V,C,R,D> ,ais an object of type X,bandcare objects of\ntype const X ,nis an object of F::difference_type ,yis a constant object of a single pass iterator type\ninteroperable with X, and zis a constant object of a random access traversal iterator type interoperable\nwith X.\niterator_facade Core Operations\nExpression Return Type Assertion/Note Used to implement It-\nerator Concept(s)\nc.dereference() F::reference Readable Iterator, Writable\nIterator\nc.equal(y) convertible to bool true iff candyrefer to the\nsame position.Single Pass Iterator\na.increment() unused Incrementable Iterator\na.decrement() unused Bidirectional Traversal Iter-\nator\na.advance(n) unused Random Access Traversal\nIterator\nc.distance_to(z) convertible to\nF::difference_typeequivalent to dis-\ntance(c, X(z)) .Random Access Traversal\nIterator\niterator_facade operations\nThe operations in this section are described in terms of operations on the core interface of Derived\nwhich may be inaccessible (i.e. private). 
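To illustrate the interoperability machinery described above (this example is not part of the proposed text), the following sketch defines a pair of interoperable iterators over a contiguous buffer of objects: buffer_iter<T> is the mutable iterator and buffer_iter<T const> the constant one. All names are invented for the example; the one-directional converting constructor is written in terms of the enable_if_convertible utility specified with the specialized adaptors below, and ptrdiff_t is assumed from <cstddef>.

template <class Value>
class buffer_iter
  : public iterator_facade<
        buffer_iter<Value>
      , Value
      , random_access_traversal_tag
    >
{
 public:
    buffer_iter() : m_p(0) {}
    explicit buffer_iter(Value* p) : m_p(p) {}

    // One-directional converting constructor: buffer_iter<T> converts to
    // buffer_iter<T const>, but not the other way around.
    template <class OtherValue>
    buffer_iter(buffer_iter<OtherValue> const& other
              , typename enable_if_convertible<OtherValue*, Value*>::type* = 0) // exposition
      : m_p(other.m_p) {}

 private:
    friend class iterator_core_access;
    template <class OtherValue> friend class buffer_iter;

    Value& dereference() const { return *m_p; }

    template <class OtherValue>
    bool equal(buffer_iter<OtherValue> const& other) const
    { return m_p == other.m_p; }

    void increment() { ++m_p; }
    void decrement() { --m_p; }
    void advance(ptrdiff_t n) { m_p += n; }

    template <class OtherValue>
    ptrdiff_t distance_to(buffer_iter<OtherValue> const& other) const
    { return other.m_p - m_p; }

    Value* m_p; // exposition only
};

Because buffer_iter<T> is convertible to buffer_iter<T const> but not vice versa, the two types are interoperable in the sense used above, and the facade-supplied equality, comparison, and difference operators accept mixed operands in either order.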
The implementation should access these operations through\nmember functions of class iterator_core_access .\nreference operator*() const;\nReturns: static_cast<Derived const*>(this)->dereference()\noperator->() const; (seebelow )\nReturns: Ifreference is a reference type, an object of type pointer equal to:\n&static_cast<Derived const*>(this)->dereference()\nOtherwise returns an object of unspecified type such that, (*static_cast<Derived\nconst*>(this))->m is equivalent to (w = **static_cast<Derived const*>(this),\nw.m) for some temporary object wof type value_type .\nunspecified operator[](difference_type n) const;\nReturns: an object convertible to value_type . For constant objects vof type value_type ,\nandnof type difference_type ,(*this)[n] = v is equivalent to *(*this + n) = v ,\nandstatic_cast<value_type const&>((*this)[n]) is equivalent to static_cast<value_type\nconst&>(*(*this + n))\nDerived& operator++();\nEffects: static_cast<Derived*>(this)->increment();\nreturn *static_cast<Derived*>(this);\nDerived operator++(int);\nEffects: Derived tmp(static_cast<Derived const*>(this));\n++*this;\nreturn tmp;\n12Derived& operator--();\nEffects: static_cast<Derived*>(this)->decrement();\nreturn *static_cast<Derived*>(this);\nDerived operator--(int);\nEffects: Derived tmp(static_cast<Derived const*>(this));\n--*this;\nreturn tmp;\nDerived& operator+=(difference_type n);\nEffects: static_cast<Derived*>(this)->advance(n);\nreturn *static_cast<Derived*>(this);\nDerived& operator-=(difference_type n);\nEffects: static_cast<Derived*>(this)->advance(-n);\nreturn *static_cast<Derived*>(this);\nDerived operator-(difference_type n) const;\nEffects: Derived tmp(static_cast<Derived const*>(this));\nreturn tmp -= n;\ntemplate <class Dr, class V, class TC, class R, class D>\nDerived operator+ (iterator_facade<Dr,V,TC,R,D> const&,\ntypename Derived::difference_type n);\ntemplate <class Dr, class V, class TC, class R, class D>\nDerived operator+ (typename Derived::difference_type n,\niterator_facade<Dr,V,TC,R,D> const&);\nEffects: Derived tmp(static_cast<Derived const*>(this));\nreturn tmp += n;\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename enable_if_interoperable<Dr1,Dr2,bool>::type\noperator ==(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\nReturns: ifis_convertible<Dr2,Dr1>::value\nthen ((Dr1 const&)lhs).equal((Dr2 const&)rhs) .\nOtherwise, ((Dr2 const&)rhs).equal((Dr1 const&)lhs) .\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename enable_if_interoperable<Dr1,Dr2,bool>::type\noperator !=(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\nReturns: ifis_convertible<Dr2,Dr1>::value\nthen !((Dr1 const&)lhs).equal((Dr2 const&)rhs) .\nOtherwise, !((Dr2 const&)rhs).equal((Dr1 const&)lhs) .\n13template <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename enable_if_interoperable<Dr1,Dr2,bool>::type\noperator <(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\nReturns: ifis_convertible<Dr2,Dr1>::value\nthen ((Dr1 const&)lhs).distance_to((Dr2 const&)rhs) < 0 .\nOtherwise, ((Dr2 const&)rhs).distance_to((Dr1 const&)lhs) > 0 .\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename 
enable_if_interoperable<Dr1,Dr2,bool>::type\noperator <=(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\nReturns: ifis_convertible<Dr2,Dr1>::value\nthen ((Dr1 const&)lhs).distance_to((Dr2 const&)rhs) <= 0 .\nOtherwise, ((Dr2 const&)rhs).distance_to((Dr1 const&)lhs) >= 0 .\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename enable_if_interoperable<Dr1,Dr2,bool>::type\noperator >(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\nReturns: ifis_convertible<Dr2,Dr1>::value\nthen ((Dr1 const&)lhs).distance_to((Dr2 const&)rhs) > 0 .\nOtherwise, ((Dr2 const&)rhs).distance_to((Dr1 const&)lhs) < 0 .\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename enable_if_interoperable<Dr1,Dr2,bool>::type\noperator >=(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\nReturns: ifis_convertible<Dr2,Dr1>::value\nthen ((Dr1 const&)lhs).distance_to((Dr2 const&)rhs) >= 0 .\nOtherwise, ((Dr2 const&)rhs).distance_to((Dr1 const&)lhs) <= 0 .\ntemplate <class Dr1, class V1, class TC1, class R1, class D1,\nclass Dr2, class V2, class TC2, class R2, class D2>\ntypename enable_if_interoperable<Dr1,Dr2,difference>::type\noperator -(iterator_facade<Dr1,V1,TC1,R1,D1> const& lhs,\niterator_facade<Dr2,V2,TC2,R2,D2> const& rhs);\nReturn Type: ifis_convertible<Dr2,Dr1>::value\nthen difference shall be iterator_traits<Dr1>::difference_type .\nOtherwise difference shall be iterator_traits<Dr2>::difference_type\nReturns: ifis_convertible<Dr2,Dr1>::value\nthen -((Dr1 const&)lhs).distance_to((Dr2 const&)rhs) .\nOtherwise, ((Dr2 const&)rhs).distance_to((Dr1 const&)lhs) .\n14Iterator adaptor [lib.iterator.adaptor]\nEach specialization of the iterator_adaptor class template is derived from a specialization of itera-\ntor_facade . The core interface functions expected by iterator_facade are implemented in terms of\ntheiterator_adaptor ’sBase template parameter. A class derived from iterator_adaptor typically\nredefines some of the core interface functions to adapt the behavior of the Base type. 
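As an illustration of this pattern (not part of the proposed text), the following sketch adapts any Base iterator whose value type supports multiplication, redefining only dereference() so that the adapted iterator yields twice the underlying value; every other core operation is supplied by iterator_adaptor's forwarding defaults. The name doubling_iter is invented for the example, and iterator_traits refers to std::iterator_traits. The Reference parameter is overridden to a non-reference type because the doubled value is computed on the fly and cannot be returned as an lvalue.

template <class Base>
class doubling_iter
  : public iterator_adaptor<
        doubling_iter<Base>                            // Derived
      , Base                                           // Base
      , use_default                                    // Value
      , use_default                                    // CategoryOrTraversal
      , typename iterator_traits<Base>::value_type     // Reference (by value)
    >
{
 public:
    doubling_iter() {}
    explicit doubling_iter(Base it)
      : doubling_iter::iterator_adaptor_(it) {}

 private:
    friend class iterator_core_access;

    // The only redefined core function; all others forward to the Base iterator.
    typename iterator_traits<Base>::value_type dereference() const
    {
        return 2 * *this->base();
    }
};

Because reference is no longer a true reference, the adapted iterator is a Readable Iterator but not an Lvalue Iterator, and its iterator_category degrades accordingly, as described in the iterator_facade specification above.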
Whether the\nderived class models any of the standard iterator concepts depends on the operations supported by the\nBase type and which core interface functions of iterator_facade are redefined in the Derived class.\nClass template iterator_adaptor\ntemplate <\nclass Derived\n, class Base\n, class Value = use_default\n, class CategoryOrTraversal = use_default\n, class Reference = use_default\n, class Difference = use_default\n>\nclass iterator_adaptor\n: public iterator_facade<Derived, V’,C’,R’,D’> // see details\n{\nfriend class iterator_core_access;\npublic:\niterator_adaptor();\nexplicit iterator_adaptor(Base const& iter);\ntypedef Base base_type;\nBase const& base() const;\nprotected:\ntypedef iterator_adaptor iterator_adaptor_;\nBase const& base_reference() const;\nBase& base_reference();\nprivate: // Core iterator interface for iterator_facade.\ntypename iterator_adaptor::reference dereference() const;\ntemplate <\nclass OtherDerived, class OtherItera-\ntor, class V, class C, class R, class D\n>\nbool equal(iterator_adaptor<OtherDerived, OtherItera-\ntor, V, C, R, D> const& x) const;\nvoid advance(typename iterator_adaptor::difference_type n);\nvoid increment();\nvoid decrement();\ntemplate <\nclass OtherDerived, class OtherItera-\ntor, class V, class C, class R, class D\n>\ntypename iterator_adaptor::difference_type distance_to(\niterator_adaptor<OtherDerived, OtherItera-\ntor, V, C, R, D> const& y) const;\n15private:\nBase m_iterator; // exposition only\n};\niterator_adaptor requirements\nstatic_cast<Derived*>(iterator_adaptor*) shall be well-formed. The Base argument shall be\nAssignable and Copy Constructible.\niterator_adaptor base class parameters\nThe V’,C’,R’, and D’parameters of the iterator_facade used as a base class in the summary of\niterator_adaptor above are defined as follows:\nV’= if (Value is use_default)\nreturn iterator_traits<Base>::value_type\nelse\nreturn Value\nC’= if (CategoryOrTraversal is use_default)\nreturn iterator_traversal<Base>::type\nelse\nreturn CategoryOrTraversal\nR’= if (Reference is use_default)\nif (Value is use_default)\nreturn iterator_traits<Base>::reference\nelse\nreturn Value&\nelse\nreturn Reference\nD’= if (Difference is use_default)\nreturn iterator_traits<Base>::difference_type\nelse\nreturn Difference\niterator_adaptor public operations\niterator_adaptor();\nRequires: The Base type must be Default Constructible.\nReturns: An instance of iterator_adaptor with m_iterator default constructed.\nexplicit iterator_adaptor(Base const& iter);\nReturns: An instance of iterator_adaptor with m_iterator copy constructed from iter .\nBase const& base() const;\nReturns: m_iterator\n16iterator_adaptor protected member functions\nBase const& base_reference() const;\nReturns: A const reference to m_iterator .\nBase& base_reference();\nReturns: A non-const reference to m_iterator .\niterator_adaptor private member functions\ntypename iterator_adaptor::reference dereference() const;\nReturns: *m_iterator\ntemplate <\nclass OtherDerived, class OtherIterator, class V, class C, class R, class D\n>\nbool equal(iterator_adaptor<OtherDerived, OtherIterator, V, C, R, D> const& x) const;\nReturns: m_iterator == x.base()\nvoid advance(typename iterator_adaptor::difference_type n);\nEffects: m_iterator += n;\nvoid increment();\nEffects: ++m_iterator;\nvoid decrement();\nEffects: --m_iterator;\ntemplate <\nclass OtherDerived, class OtherItera-\ntor, class V, class C, class R, class D\n>\ntypename iterator_adaptor::difference_type 
distance_to(\niterator_adaptor<OtherDerived, OtherIterator, V, C, R, D> const& y) const;\nReturns: y.base() - m_iterator\nSpecialized adaptors [lib.iterator.special.adaptors]\nThe enable_if_convertible<X,Y>::type expression used in this section is for exposition purposes.\nThe converting constructors for specialized adaptors should be only be in an overload set provided\nthat an object of type Xis implicitly convertible to an object of type Y. The signatures involving\nenable_if_convertible should behave as-if enable_if_convertible were defined to be:\ntemplate <bool> enable_if_convertible_impl\n{};\ntemplate <> enable_if_convertible_impl<true>\n{ struct type; };\ntemplate<typename From, typename To>\nstruct enable_if_convertible\n: enable_if_convertible_impl<is_convertible<From,To>::value>\n{};\n17If an expression other than the default argument is used to supply the value of a function parameter\nwhose type is written in terms of enable_if_convertible , the program is ill-formed, no diagnostic\nrequired.\n[Note: The enable_if_convertible approach uses SFINAE to take the constructor out of the\noverload set when the types are not implicitly convertible. ]\nIndirect iterator\nindirect_iterator adapts an iterator by applying an extra dereference inside of operator*() . For\nexample, this iterator adaptor makes it possible to view a container of pointers (e.g. list<foo*> ) as\nif it were a container of the pointed-to type (e.g. list<foo> ).indirect_iterator depends on two\nauxiliary traits, pointee andindirect_reference , to provide support for underlying iterators whose\nvalue_type is not an iterator.\nClass template pointee\ntemplate <class Dereferenceable>\nstruct pointee\n{\ntypedef /* see below */ type;\n};\nRequires: For an object xof type Dereferenceable ,*xis well-formed. If ++xis ill-formed\nit shall neither be ambiguous nor shall it violate access control, and Dereference-\nable::element_type shall be an accessible type. Otherwise iterator_traits<Dereferenceable>::value_type\nshall be well formed. [Note: These requirements need not apply to explicit or partial\nspecializations of pointee ]\ntype is determined according to the following algorithm, where xis an object of type Dereference-\nable :\nif ( ++x is ill-formed )\n{\nreturn ‘‘Dereferenceable::element_type‘‘\n}\nelse if (‘‘*x‘‘ is a mutable reference to\nstd::iterator_traits<Dereferenceable>::value_type)\n{\nreturn iterator_traits<Dereferenceable>::value_type\n}\nelse\n{\nreturn iterator_traits<Dereferenceable>::value_type const\n}\nClass template indirect_reference\ntemplate <class Dereferenceable>\nstruct indirect_reference\n{\ntypedef /* see below */ type;\n};\n18Requires: For an object xof type Dereferenceable ,*xis well-formed. If ++xis ill-formed\nit shall neither be ambiguous nor shall it violate access control, and pointee<Dereferenceable>::type&\nshall be well-formed. Otherwise iterator_traits<Dereferenceable>::reference\nshall be well formed. 
[Note: These requirements need not apply to explicit or partial\nspecializations of indirect_reference ]\ntype is determined according to the following algorithm, where xis an object of type Dereference-\nable :\nif ( ++x is ill-formed )\nreturn ‘‘pointee<Dereferenceable>::type&‘‘\nelse\nstd::iterator_traits<Dereferenceable>::reference\nClass template indirect_iterator\ntemplate <\nclass Iterator\n, class Value = use_default\n, class CategoryOrTraversal = use_default\n, class Reference = use_default\n, class Difference = use_default\n>\nclass indirect_iterator\n{\npublic:\ntypedef /* see below */ value_type;\ntypedef /* see below */ reference;\ntypedef /* see below */ pointer;\ntypedef /* see below */ difference_type;\ntypedef /* see below */ iterator_category;\nindirect_iterator();\nindirect_iterator(Iterator x);\ntemplate <\nclass Iterator2, class Value2, class Category2\n, class Reference2, class Difference2\n>\nindirect_iterator(\nindirect_iterator<\nIterator2, Value2, Category2, Reference2, Difference2\n> const& y\n, typename enable_if_convertible<Iterator2, Itera-\ntor>::type* = 0 // exposition\n);\nIterator const& base() const;\nreference operator*() const;\nindirect_iterator& operator++();\nindirect_iterator& operator--();\nprivate:\nIterator m_iterator; // exposition\n};\n19The member types of indirect_iterator are defined according to the following pseudo-code, where\nVisiterator_traits<Iterator>::value_type\nif (Value is use_default) then\ntypedef remove_const<pointee<V>::type>::type value_type;\nelse\ntypedef remove_const<Value>::type value_type;\nif (Reference is use_default) then\nif (Value is use_default) then\ntypedef indirect_reference<V>::type reference;\nelse\ntypedef Value& reference;\nelse\ntypedef Reference reference;\nif (Value is use_default) then\ntypedef pointee<V>::type* pointer;\nelse\ntypedef Value* pointer;\nif (Difference is use_default)\ntypedef iterator_traits<Iterator>::difference_type difference_type;\nelse\ntypedef Difference difference_type;\nif (CategoryOrTraversal is use_default)\ntypedef iterator-category (\niterator_traversal<Iterator>::type,‘‘reference‘‘,‘‘value_type‘‘\n) iterator_category;\nelse\ntypedef iterator-category (\nCategoryOrTraversal,‘‘reference‘‘,‘‘value_type‘‘\n) iterator_category;\nindirect_iterator requirements\nThe expression *v, where vis an object of iterator_traits<Iterator>::value_type , shall be valid\nexpression and convertible to reference .Iterator shall model the traversal concept indicated by it-\nerator_category .Value ,Reference , and Difference shall be chosen so that value_type ,reference ,\nanddifference_type meet the requirements indicated by iterator_category .\n[Note: there are further requirements on the iterator_traits<Iterator>::value_type if the\nValue parameter is not use_default , as implied by the algorithm for deducing the default for the\nvalue_type member.]\nindirect_iterator models\nIn addition to the concepts indicated by iterator_category and by iterator_traversal<indirect_iterator>::type ,\na specialization of indirect_iterator models the following concepts, Where vis an object of itera-\ntor_traits<Iterator>::value_type :\n•Readable Iterator if reference(*v) is convertible to value_type .\n•Writable Iterator if reference(*v) = t is a valid expression (where tis an object of\ntype indirect_iterator::value_type )\n20•Lvalue Iterator if reference is a reference type.\nindirect_iterator<X,V1,C1,R1,D1> is interoperable with indirect_iterator<Y,V2,C2,R2,D2>\nif and only if Xis interoperable with 
Y.\nindirect_iterator operations\nIn addition to the operations required by the concepts described above, specializations of indirect_iterator\nprovide the following operations.\nindirect_iterator();\nRequires: Iterator must be Default Constructible.\nEffects: Constructs an instance of indirect_iterator with a default-constructed m_iterator .\nindirect_iterator(Iterator x);\nEffects: Constructs an instance of indirect_iterator with m_iterator copy constructed\nfrom x.\ntemplate <\nclass Iterator2, class Value2, unsigned Access, class Traversal\n, class Reference2, class Difference2\n>\nindirect_iterator(\nindirect_iterator<\nIterator2, Value2, Access, Traversal, Reference2, Difference2\n> const& y\n, typename enable_if_convertible<Iterator2, Iterator>::type* = 0 // expo-\nsition\n);\nRequires: Iterator2 is implicitly convertible to Iterator .\nEffects: Constructs an instance of indirect_iterator whose m_iterator subobject is\nconstructed from y.base() .\nIterator const& base() const;\nReturns: m_iterator\nreference operator*() const;\nReturns: **m_iterator\nindirect_iterator& operator++();\nEffects: ++m_iterator\nReturns: *this\nindirect_iterator& operator--();\nEffects: --m_iterator\nReturns: *this\nReverse iterator\nThe reverse iterator adaptor iterates through the adapted iterator range in the opposite direction.\n21Class template reverse_iterator\ntemplate <class Iterator>\nclass reverse_iterator\n{\npublic:\ntypedef iterator_traits<Iterator>::value_type value_type;\ntypedef iterator_traits<Iterator>::reference reference;\ntypedef iterator_traits<Iterator>::pointer pointer;\ntypedef iterator_traits<Iterator>::difference_type difference_type;\ntypedef /* see below */ iterator_category;\nreverse_iterator() {}\nexplicit reverse_iterator(Iterator x) ;\ntemplate<class OtherIterator>\nreverse_iterator(\nreverse_iterator<OtherIterator> const& r\n, typename enable_if_convertible<OtherIterator, Itera-\ntor>::type* = 0 // exposition\n);\nIterator const& base() const;\nreference operator*() const;\nreverse_iterator& operator++();\nreverse_iterator& operator--();\nprivate:\nIterator m_iterator; // exposition\n};\nIfIterator models Random Access Traversal Iterator and Readable Lvalue Iterator, then itera-\ntor_category is convertible to random_access_iterator_tag . Otherwise, if Iterator models Bidirec-\ntional Traversal Iterator and Readable Lvalue Iterator, then iterator_category is convertible to bidi-\nrectional_iterator_tag . Otherwise, iterator_category is convertible to input_iterator_tag .\nreverse_iterator requirements\nIterator must be a model of Bidirectional Traversal Iterator. The type iterator_traits<Iterator>::reference\nmust be the type of *i, where iis an object of type Iterator .\nreverse_iterator models\nA specialization of reverse_iterator models the same iterator traversal and iterator access concepts\nmodeled by its Iterator argument. 
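As a usage illustration (not part of the proposed text), the following fragment traverses a vector backwards with the reverse_iterator proposed here, mixing a reversed mutable iterator with a reversed constant iterator — the interoperability called for by issue 280. It assumes <vector> has been included.

std::vector<int> v(3, 7);
reverse_iterator<std::vector<int>::iterator>       first(v.end());
reverse_iterator<std::vector<int>::const_iterator> last(v.begin());
for (; first != last; ++first)   // mixed comparison is well-formed
    *first += 1;                 // visits v[2], v[1], v[0] in that order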
In addition, it may model old iterator concepts specified in the\nfollowing table:\nIfImodels then reverse_iterator<I> models\nReadable Lvalue Iterator, Bidirectional Traversal\nIteratorBidirectional Iterator\nWritable Lvalue Iterator, Bidirectional Traversal\nIteratorMutable Bidirectional Iterator\nReadable Lvalue Iterator, Random Access\nTraversal IteratorRandom Access Iterator\nWritable Lvalue Iterator, Random Access\nTraversal IteratorMutable Random Access Iterator\n22reverse_iterator<X> is interoperable with reverse_iterator<Y> if and only if Xis interoperable\nwith Y.\nreverse_iterator operations\nIn addition to the operations required by the concepts modeled by reverse_iterator ,reverse_iterator\nprovides the following operations.\nreverse_iterator();\nRequires: Iterator must be Default Constructible.\nEffects: Constructs an instance of reverse_iterator with m_iterator default constructed.\nexplicit reverse_iterator(Iterator x);\nEffects: Constructs an instance of reverse_iterator with m_iterator copy constructed\nfrom x.\ntemplate<class OtherIterator>\nreverse_iterator(\nreverse_iterator<OtherIterator> const& r\n, typename enable_if_convertible<OtherIterator, Itera-\ntor>::type* = 0 // exposition\n);\nRequires: OtherIterator is implicitly convertible to Iterator .\nEffects: Constructs instance of reverse_iterator whose m_iterator subobject is con-\nstructed from y.base() .\nIterator const& base() const;\nReturns: m_iterator\nreference operator*() const;\nEffects:\nIterator tmp = m_iterator;\nreturn *--tmp;\nreverse_iterator& operator++();\nEffects: --m_iterator\nReturns: *this\nreverse_iterator& operator--();\nEffects: ++m_iterator\nReturns: *this\nTransform iterator\nThe transform iterator adapts an iterator by modifying the operator* to apply a function object to\nthe result of dereferencing the iterator and returning the result.\n23Class template transform_iterator\ntemplate <class UnaryFunction,\nclass Iterator,\nclass Reference = use_default,\nclass Value = use_default>\nclass transform_iterator\n{\npublic:\ntypedef /* see below */ value_type;\ntypedef /* see below */ reference;\ntypedef /* see below */ pointer;\ntypedef iterator_traits<Iterator>::difference_type difference_type;\ntypedef /* see below */ iterator_category;\ntransform_iterator();\ntransform_iterator(Iterator const& x, UnaryFunction f);\ntemplate<class F2, class I2, class R2, class V2>\ntransform_iterator(\ntransform_iterator<F2, I2, R2, V2> const& t\n, typename enable_if_convertible<I2, Iterator>::type* = 0 // ex-\nposition only\n, typename enable_if_convertible<F2, UnaryFunction>::type* = 0 // ex-\nposition only\n);\nUnaryFunction functor() const;\nIterator const& base() const;\nreference operator*() const;\ntransform_iterator& operator++();\ntransform_iterator& operator--();\nprivate:\nIterator m_iterator; // exposition only\nUnaryFunction m_f; // exposition only\n};\nIfReference isuse_default then the reference member of transform_iterator isresult_of<UnaryFunction(iterator_traits<Iterator>::reference)>::type .\nOtherwise, reference isReference .\nIfValue isuse_default then the value_type member is remove_cv<remove_reference<reference>\n>::type . Otherwise, value_type isValue .\nIfIterator models Readable Lvalue Iterator and if Iterator models Random Access Traver-\nsal Iterator, then iterator_category is convertible to random_access_iterator_tag . Otherwise, if\nIterator models Bidirectional Traversal Iterator, then iterator_category is convertible to bidi-\nrectional_iterator_tag . 
Otherwise iterator_category is convertible to forward_iterator_tag .\nIfIterator does not model Readable Lvalue Iterator then iterator_category is convertible to in-\nput_iterator_tag .\ntransform_iterator requirements\nThe type UnaryFunction must be Assignable, Copy Constructible, and the expression f(*i) must be\nvalid where fis an object of type UnaryFunction ,iis an object of type Iterator , and where the type\noff(*i) must be result_of<UnaryFunction(iterator_traits<Iterator>::reference)>::type .\nThe argument Iterator shall model Readable Iterator.\n24transform_iterator models\nThe resulting transform_iterator models the most refined of the following that is also modeled by\nIterator .\n•Writable Lvalue Iterator if transform_iterator::reference is a non-const reference.\n•Readable Lvalue Iterator if transform_iterator::reference is a const reference.\n•Readable Iterator otherwise.\nThe transform_iterator models the most refined standard traversal concept that is modeled by\ntheIterator argument.\nIftransform_iterator is a model of Readable Lvalue Iterator then it models the following original\niterator concepts depending on what the Iterator argument models.\nIfIterator models then transform_iterator models\nSingle Pass Iterator Input Iterator\nForward Traversal Iterator Forward Iterator\nBidirectional Traversal Iterator Bidirectional Iterator\nRandom Access Traversal Iterator Random Access Iterator\nIftransform_iterator models Writable Lvalue Iterator then it is a mutable iterator (as defined in\nthe old iterator requirements).\ntransform_iterator<F1, X, R1, V1> is interoperable with transform_iterator<F2, Y, R2, V2>\nif and only if Xis interoperable with Y.\ntransform_iterator operations\nIn addition to the operations required by the concepts modeled by transform_iterator ,trans-\nform_iterator provides the following operations.\ntransform_iterator();\nReturns: An instance of transform_iterator with m_f and m_iterator default con-\nstructed.\ntransform_iterator(Iterator const& x, UnaryFunction f);\nReturns: An instance of transform_iterator with m_finitialized to fand m_iterator\ninitialized to x.\ntemplate<class F2, class I2, class R2, class V2>\ntransform_iterator(\ntransform_iterator<F2, I2, R2, V2> const& t\n, typename enable_if_convertible<I2, Iterator>::type* = 0 // expo-\nsition only\n, typename enable_if_convertible<F2, UnaryFunction>::type* = 0 // expo-\nsition only\n);\nReturns: An instance of transform_iterator with m_finitialized to t.functor() and\nm_iterator initialized to t.base() .\nRequires: OtherIterator is implicitly convertible to Iterator .\nUnaryFunction functor() const;\nReturns: m_f\n25Iterator const& base() const;\nReturns: m_iterator\nreference operator*() const;\nReturns: m_f(*m_iterator)\ntransform_iterator& operator++();\nEffects: ++m_iterator\nReturns: *this\ntransform_iterator& operator--();\nEffects: --m_iterator\nReturns: *this\nFilter iterator\nThe filter iterator adaptor creates a view of an iterator range in which some elements of the range are\nskipped. A predicate function object controls which elements are skipped. When the predicate is applied\nto an element, if it returns true then the element is retained and if it returns false then the element\nis skipped over. When skipping over elements, it is necessary for the filter adaptor to know when to\nstop so as to avoid going past the end of the underlying range. 
A filter iterator is therefore constructed\nwith pair of iterators indicating the range of elements in the unfiltered sequence to be traversed.\nClass template filter_iterator\ntemplate <class Predicate, class Iterator>\nclass filter_iterator\n{\npublic:\ntypedef iterator_traits<Iterator>::value_type value_type;\ntypedef iterator_traits<Iterator>::reference reference;\ntypedef iterator_traits<Iterator>::pointer pointer;\ntypedef iterator_traits<Iterator>::difference_type difference_type;\ntypedef /* see below */ iterator_category;\nfilter_iterator();\nfilter_iterator(Predicate f, Iterator x, Iterator end = Iterator());\nfilter_iterator(Iterator x, Iterator end = Iterator());\ntemplate<class OtherIterator>\nfilter_iterator(\nfilter_iterator<Predicate, OtherIterator> const& t\n, typename enable_if_convertible<OtherIterator, Itera-\ntor>::type* = 0 // exposition\n);\nPredicate predicate() const;\nIterator end() const;\nIterator const& base() const;\nreference operator*() const;\nfilter_iterator& operator++();\nprivate:\nPredicate m_pred; // exposition only\n26Iterator m_iter; // exposition only\nIterator m_end; // exposition only\n};\nIfIterator models Readable Lvalue Iterator and Bidirectional Traversal Iterator then itera-\ntor_category is convertible to std::bidirectional_iterator_tag . Otherwise, if Iterator models\nReadable Lvalue Iterator and Forward Traversal Iterator then iterator_category is convertible to\nstd::forward_iterator_tag . Otherwise iterator_category is convertible to std::input_iterator_tag .\nfilter_iterator requirements\nThe Iterator argument shall meet the requirements of Readable Iterator and Single Pass Iterator or\nit shall meet the requirements of Input Iterator.\nThe Predicate argument must be Assignable, Copy Constructible, and the expression p(x) must be\nvalid where pis an object of type Predicate ,xis an object of type iterator_traits<Iterator>::value_type ,\nand where the type of p(x) must be convertible to bool .\nfilter_iterator models\nThe concepts that filter_iterator models are dependent on which concepts the Iterator argument\nmodels, as specified in the following tables.\nIfIterator models then filter_iterator models\nSingle Pass Iterator Single Pass Iterator\nForward Traversal Iterator Forward Traversal Iterator\nBidirectional Traversal Iterator Bidirectional Traversal Iterator\nIfIterator models then filter_iterator models\nReadable Iterator Readable Iterator\nWritable Iterator Writable Iterator\nLvalue Iterator Lvalue Iterator\nIfIterator models then filter_iterator models\nReadable Iterator, Single Pass Iterator Input Iterator\nReadable Lvalue Iterator, Forward Traversal Iterator Forward Iterator\nWritable Lvalue Iterator, Forward Traversal Iterator Mutable Forward Iterator\nWritable Lvalue Iterator, Bidirectional Iterator Mutable Bidirectional Iterator\nfilter_iterator<P1, X> is interoperable with filter_iterator<P2, Y> if and only if Xis inter-\noperable with Y.\nfilter_iterator operations\nIn addition to those operations required by the concepts that filter_iterator models, filter_iterator\nprovides the following operations.\nfilter_iterator();\nRequires: Predicate andIterator must be Default Constructible.\nEffects: Constructs a filter_iterator whose“m pred“, m_iter , and m_end members are\na default constructed.\nfilter_iterator(Predicate f, Iterator x, Iterator end = Iterator());\n27Effects: Constructs a filter_iterator where m_iter is either the first position in the\nrange [x,end) such that f(*m_iter) == true or else“m iter == end“. 
The member\nm_pred is constructed from fandm_end from end.\nfilter_iterator(Iterator x, Iterator end = Iterator());\nRequires: Predicate must be Default Constructible and Predicate is a class type (not a\nfunction pointer).\nEffects: Constructs a filter_iterator where m_iter is either the first position in the\nrange [x,end) such that m_pred(*m_iter) == true or else“m iter == end“. The\nmember m_pred is default constructed.\ntemplate <class OtherIterator>\nfilter_iterator(\nfilter_iterator<Predicate, OtherIterator> const& t\n, typename enable_if_convertible<OtherIterator, Itera-\ntor>::type* = 0 // exposition\n);‘‘\nRequires: OtherIterator is implicitly convertible to Iterator .\nEffects: Constructs a filter iterator whose members are copied from t.\nPredicate predicate() const;\nReturns: m_pred\nIterator end() const;\nReturns: m_end\nIterator const& base() const;\nReturns: m_iterator\nreference operator*() const;\nReturns: *m_iter\nfilter_iterator& operator++();\nEffects: Increments m_iter and then continues to increment m_iter until either m_iter\n== m_end orm_pred(*m_iter) == true .\nReturns: *this\nCounting iterator\ncounting_iterator adapts an object by adding an operator* that returns the current value of the\nobject. All other iterator operations are forwarded to the adapted object.\nClass template counting_iterator\ntemplate <\nclass Incrementable\n, class CategoryOrTraversal = use_default\n, class Difference = use_default\n>\nclass counting_iterator\n28{\npublic:\ntypedef Incrementable value_type;\ntypedef const Incrementable& reference;\ntypedef const Incrementable* pointer;\ntypedef /* see below */ difference_type;\ntypedef /* see below */ iterator_category;\ncounting_iterator();\ncounting_iterator(counting_iterator const& rhs);\nexplicit counting_iterator(Incrementable x);\nIncrementable const& base() const;\nreference operator*() const;\ncounting_iterator& operator++();\ncounting_iterator& operator--();\nprivate:\nIncrementable m_inc; // exposition\n};\nIf the Difference argument is use_default then difference_type is an unspecified signed integral\ntype. Otherwise difference_type isDifference .\niterator_category is determined according to the following algorithm:\nif (CategoryOrTraversal is not use_default)\nreturn CategoryOrTraversal\nelse if (numeric_limits<Incrementable>::is_specialized)\nreturn iterator-category (\nrandom_access_traversal_tag, Incrementable, const Incrementable&)\nelse\nreturn iterator-category (\niterator_traversal<Incrementable>::type,\nIncrementable, const Incrementable&)\n[Note: implementers are encouraged to provide an implementation of operator- and a dif-\nference_type that avoids overflows in the cases where std::numeric_limits<Incrementable>::is_specialized\nis true.]\ncounting_iterator requirements\nThe Incrementable argument shall be Copy Constructible and Assignable.\nIfiterator_category is convertible to forward_iterator_tag orforward_traversal_tag , the\nfollowing must be well-formed:\nIncrementable i, j;\n++i; // pre-increment\ni == j; // operator equal\nIfiterator_category is convertible to bidirectional_iterator_tag orbidirectional_traversal_tag ,\nthe following expression must also be well-formed:\n--i\nIfiterator_category is convertible to random_access_iterator_tag orrandom_access_traversal_tag ,\nthe following must must also be valid:\n29counting_iterator::difference_type n;\ni += n;\nn = i - j;\ni < j;\ncounting_iterator models\nSpecializations of counting_iterator model Readable Lvalue Iterator. 
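As a usage illustration (not part of the proposed text), counting_iterator turns a range of integers into an iterator range, so that standard algorithms can consume successive values without a handwritten loop. The fragment assumes <vector>, <algorithm>, and <iterator> have been included.

std::vector<int> v;
std::copy(counting_iterator<int>(0), counting_iterator<int>(10),
          std::back_inserter(v));   // v now holds 0, 1, ..., 9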
In addition, they model the con-\ncepts corresponding to the iterator tags to which their iterator_category is convertible. Also, if Cat-\negoryOrTraversal is not use_default then counting_iterator models the concept corresponding to\nthe iterator tag CategoryOrTraversal . Otherwise, if numeric_limits<Incrementable>::is_specialized ,\nthen counting_iterator models Random Access Traversal Iterator. Otherwise, counting_iterator\nmodels the same iterator traversal concepts modeled by Incrementable .\ncounting_iterator<X,C1,D1> is interoperable with counting_iterator<Y,C2,D2> if and only if\nXis interoperable with Y.\ncounting_iterator operations\nIn addition to the operations required by the concepts modeled by counting_iterator ,counting_iterator\nprovides the following operations.\ncounting_iterator();\nRequires: Incrementable is Default Constructible.\nEffects: Default construct the member m_inc .\ncounting_iterator(counting_iterator const& rhs);\nEffects: Construct member m_inc from rhs.m_inc .\nexplicit counting_iterator(Incrementable x);\nEffects: Construct member m_inc from x.\nreference operator*() const;\nReturns: m_inc\ncounting_iterator& operator++();\nEffects: ++m_inc\nReturns: *this\ncounting_iterator& operator--();\nEffects: --m_inc\nReturns: *this\nIncrementable const& base() const;\nReturns: m_inc\nFunction output iterator\nThe function output iterator adaptor makes it easier to create custom output iterators. The adaptor\ntakes a unary function and creates a model of Output Iterator. Each item assigned to the output\niterator is passed as an argument to the unary function. The motivation for this iterator is that\ncreating a conforming output iterator is non-trivial, particularly because the proper implementation\nusually requires a proxy object.\n30Class template function_output_iterator\nHeader\n#include <boost/function_output_iterator.hpp>\ntemplate <class UnaryFunction>\nclass function_output_iterator {\npublic:\ntypedef std::output_iterator_tag iterator_category;\ntypedef void value_type;\ntypedef void difference_type;\ntypedef void pointer;\ntypedef void reference;\nexplicit function_output_iterator();\nexplicit function_output_iterator(const UnaryFunction& f);\n/* see below */ operator*();\nfunction_output_iterator& operator++();\nfunction_output_iterator& operator++(int);\nprivate:\nUnaryFunction m_f; // exposition only\n};\nfunction_output_iterator requirements\nUnaryFunction must be Assignable and Copy Constructible.\nfunction_output_iterator models\nfunction_output_iterator is a model of the Writable and Incrementable Iterator concepts.\nfunction_output_iterator operations\nexplicit function_output_iterator(const UnaryFunction& f = UnaryFunction());\nEffects: Constructs an instance of function_output_iterator with m_fconstructed from\nf.\noperator*();\nReturns: An object rof unspecified type such that r = t is equivalent to m_f(t) for all\nt.\nfunction_output_iterator& operator++();\nReturns: *this\nfunction_output_iterator& operator++(int);\nReturns: *this\n31" } ]
{ "category": "App Definition and Development", "file_name": "facade-and-adaptor.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "The RADStack: Open Source Lambda Architecture for\nInteractive Analytics\nFangjin Y ang, Gian Merlino, Nelson Ray, Xavier Léauté, Himanshu Gupta, Eric Tschetter\n{fangjinyang, gianmerlino, ncray86, xavier.leaute, g.himanshu, echeddar}@gmail.com\nABSTRACT\nThe Real-time Analytics Data Stack, colloquially referred to\nas the RADStack, is an open-source data analytics stack de-\nsigned to provide fast, flexible queries over up-to-the-second\ndata. It is designed to overcome the limitations of either\na purely batch processing system (it takes too long to sur-\nface new events) or a purely real-time system (it’s difficult\nto ensure that no data is left behind and there is often no\nway to correct data after initial processing). It will seam-\nlessly return best-effort results on very recent data combined\nwith guaranteed-correct results on older data. In this paper,\nwe introduce the architecture of the RADStack and discuss\nour methods of providing interactive analytics and a flexible\ndata processing environment to handle a variety of real-world\nworkloads.\n1. INTRODUCTION\nThe rapid growth of the Hadoop[16] ecosystem has en-\nabled many organizations to flexibly process and gain in-\nsights from large quantities of data. These insights are typ-\nically generated from business intelligence, or OnLine Ana-\nlyticalProcessing(OLAP)queries. Hadoophasproventobe\nan extremely effective framework capable of providing many\nanalytical insights and is able to solve a wide range of dis-\ntributed computing problems. However, as much as Hadoop\nis lauded for its wide range of use cases, it is derided for its\nhigh latency in processing and returning results. A common\napproach to surface data insights is to run MapReduce jobs\nthat may take several hours to complete.\nData analysis and data-driven applications are becoming\nincreasingly important in industry, and the long query times\nencountered with using batch frameworks such as Hadoop\nare becoming increasingly intolerable. User facing appli-\ncations are replacing traditional reporting interfaces as the\npreferred means for organizations to derive value from their\ndatasets. In order to provide an interactive user experi-\nence with data applications, queries must complete in an\norder of milliseconds. Because most of these interactions\nrevolve around data exploration and computation, organi-\nzations quickly realized that in order to support low latency\nqueries, dedicated serving layers were necessary. Today,\nmost of these serving layers are Relational Database Man-\nagement Systems (RDBMS) or NoSQL key/value stores.\nNeither RDBMS nor NoSQL key/value stores are partic-\nularly designed for analytics [19], but these technologies are\nstill frequently selected as serving layers. Solutions that in-\nvolve these broad-focus technologies can be inflexible once\ntailored to the analytics use case, or suffer from architecture\ndrawbacks that prevent them from returning queries fast\nenough to power interactive, user-facing applications [20].An ideal data serving layer alone is often not sufficient\nas a complete analytics solution. In most real-world use\ncases, rawdatacannotbedirectlystoredintheservinglayer.\nRaw data suffers from many imperfections and must first\nbe processed (transformed, or cleaned) before it is usable\n[17]. 
The drawback of this requirement is that loading and\nprocessing batch data is slow, and insights on events cannot\nbe obtained until hours after the events have occurred.\nTo address the delays in data freshness caused by batch\nprocessing frameworks, numerous open-source stream pro-\ncessingframeworkssuchasApacheStorm[12], ApacheSpark\nStreaming[25], and Apache Samza[1] have gained popular-\nity for offering a low-latency model to ingest and process\nevent streams at near real-time speeds. The drawback of\nalmost all stream processors is that they do not necessarily\nprovide the same correctness guarantees as batch processing\nframeworks. Events can come in days late, and may need\nto be corrected after the fact. Large batches of data may\nalso need to be reprocessed when new columns are added or\nremoved.\nCombining batch processing, streaming processing, and\na serving layer in a single technology stack is known as a\nlambda architecture[9]. In lambda architectures, data en-\ntering the system is concurrently fed to both the batch and\nstreaming processing layer. The streaming layer is responsi-\nble for immediately processing incoming data, however, the\nprocessed data may suffer from duplicated events and other\nimperfections in data accuracy. The batch layer processes\nincoming data much slower than the streaming layer, but is\nable to provide accurate views of data. The serving layer\nmerges the results from the batch and streaming layers and\nprovides an interface for queries. Although each individual\ncomponent in a lambda architecture has their own limita-\ntions, the pieces complement each other extremely well and\nthe overall stack is robust enough to handle a wide array of\ndata processing and querying challenges at scale.\nThe RADStack is an open source lambda architecture im-\nplementation meant to offer flexible, low-latency analytic\nqueries on near real-time data. The solution combines the\nlow latency guarantees of stream processors and the correct-\nness and flexibility guarantees of batch processors. It also\nintroduces a serving layer specifically designed for interac-\ntive analytics. The stack’s main building blocks are Apache\nKafka[11], Apache Samza, Apache Hadoop, and Druid [23],\nand we have found that the combination of technologies is\nflexible enough to handle a wide variety of processing re-\nquirements and query loads. Each piece of the stack is de-\nsigned to do a specific set of things very well. This paper\nwill cover the details and design principles of the RADStack.\nOur contributions are around the architecture of the stack\nitself, the introduction of Druid as a serving layer, and our\n1Figure 1: The components of the RADStack. Kafka\nacts as the event delivery endpoints. Samza and\nHadoop process data to load data into Druid. Druid\nacts as the endpoint for queries.\nmodel for unifying real-time and historical workflows.\nThe structure of the paper is as follows: Section 2 de-\nscribes the problems and use cases that led to the creation\nof the RADStack. Section 3 describes Druid, the serving\nlayer of the stack, and how Druid is built for real-time and\nbatch data ingestion, as well as exploratory analytics. Sec-\ntion 4 covers the role of Samza and Hadoop for data pro-\ncessing, and Section 5 describes the role of Kafka for event\ndelivery. In Section 6, we present our production metrics.\nSection 7 presents our experiences with running the RAD-\nStack in production, and in Section 8 we discuss the related\nsolutions.\n2. 
BACKGROUND
The RADStack was first developed to address problems in online advertising. In online advertising, automated systems from different organizations will place bids against one another to display ads to users in the milliseconds before a webpage loads. These actions generate a tremendous volume of data. The data shown in Table 1 is an example of such data. Each event is comprised of three components: a timestamp indicating when the event occurred; a set of dimensions indicating various attributes about the event; and a set of metrics concerning the event. Organizations frequently serve insights from this data to ad publishers through visualizations and data applications. These applications must rapidly compute drill-downs and aggregates over this data, and answer questions such as "How many clicks occurred over the span of one week for publisher google.com?" or "How many impressions were seen over the last quarter in San Francisco?". Queries over any arbitrary number of dimensions should return in a few hundred milliseconds.

As an additional requirement, user-facing applications often face highly concurrent workloads, and good applications need to provide relatively consistent performance to all users. Of course, backend infrastructure also needs to be highly available. Downtime is costly and many businesses cannot afford to wait if a system is unavailable in the face of software upgrades or network failure.

To address these requirements of scale, stability, and performance, we created Druid. Druid was designed from the ground up to provide arbitrary data exploration, low latency aggregations, and fast data ingestion. Druid was also designed to accept fully denormalized data, and moves away from the traditional relational model. Since most raw data is not denormalized, it must be processed before it can be ingested and queried. Multiple streams of data had to be joined, cleaned up, and transformed before they were usable in Druid, but that was the trade-off we were willing to make in order to get the performance necessary to power an interactive data application. We introduced stream processing to our stack to provide the processing required before raw data could be loaded into Druid. Our stream processing jobs range from simple data transformations, such as id to name lookups, up to complex operations such as multi-stream joins. Pairing Druid with a stream processor enabled flexible data processing and querying, but we still had problems with event delivery. Our events were delivered from many different locations and sources, and peaked at several million events per second. We required a high throughput message bus that could hold these events for consumption by our stream processor. To simplify data transmission for our clients, we wanted the message bus to be the single delivery endpoint for events entering our cluster.

Our stack would be complete here if real-time processing were perfect, but the open source stream processing space is still young. Processing jobs can go down for extended periods of time and events may be delivered more than once. These are realities of any production data pipeline. To overcome these issues, we included Hadoop in our stack to periodically clean up any data generated by the real-time pipeline. We stored a copy of the raw events we received in a distributed file system, and periodically ran batch processing jobs over this data.
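To make the kinds of drill-down questions above concrete, the following is a small, hypothetical sketch of how such a query could be expressed against the serving layer: a Druid-style native JSON aggregation posted over HTTP. The broker endpoint, data source name, and column names simply mirror the sample schema in Table 1 and are illustrative assumptions, not details taken from the production systems described in this paper.

# Sketch: a "clicks per day for publisher google.com over one week" style query,
# expressed as a Druid-native timeseries aggregation and posted to a broker.
import json
import urllib.request

query = {
    "queryType": "timeseries",
    "dataSource": "ad_events",                       # hypothetical data source name
    "granularity": "day",
    "intervals": ["2011-01-01/2011-01-08"],          # one week of data
    "filter": {"type": "selector", "dimension": "publisher", "value": "google.com"},
    "aggregations": [{"type": "longSum", "name": "clicks", "fieldName": "click"}],
}

req = urllib.request.Request(
    "http://broker:8082/druid/v2/",                  # assumed broker query endpoint
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))                   # one aggregated row per day

A query of this shape touches only the filtered dimension and the aggregated metric, which is the access pattern the column-oriented serving layer described in the next section is designed for.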
The high level architecture of our\nsetup is shown in Figure 1. Each component is designed\nto do a specific set of things well, and there is isolation in\nterms of functionality. Individual components can entirely\nfail without impacting the services of the other components.\n3. THE SERVING LAYER\nDruid is a column-oriented data store designed for ex-\nploratory analytics and is the serving layer in the RAD-\nStack. A Druid cluster consists of different types of nodes\nand, similar to the overall design of the RADStack, each\nnode type is instrumented to perform a specific set of things\nwell. We believe this design separates concerns and simpli-\nfies the complexity of the overall system. To solve complex\ndata analysis problems, the different node types come to-\ngether to form a fully working system. The composition of\nand flow of data in a Druid cluster are shown in Figure 2.\n3.1 Segments\nData tables in Druid (called ”data sources”) are collec-\ntions of timestamped events and partitioned into a set of\nsegments, where each segment is typically 5–10 million rows.\nSegments represent the fundamental storage unit in Druid\nand Druid queries only understand how to scan segments.\nDruid always requires a timestamp column as a method\nof simplifying data distribution policies, data retention poli-\ncies, and first level query pruning. Druid partitions its data\nsources into well defined time intervals, typically an hour\nor a day, and may further partition on values from other\ncolumns to achieve the desired segment size. The time gran-\nularity to partition segments is a function of data volume\nand time range. A data set with timestamps spread over a\nyear is better partitioned by day, and a data set with times-\ntamps spread over a day is better partitioned by hour.\nSegments are uniquely identified by a data source iden-\ntifier, the time interval of the data, and a version string\n2Timestamp Publisher Advertiser Gender City Click Price\n2011-01-01T01:01:35Z bieberfever.com google.com Male San Francisco 0 0.65\n2011-01-01T01:03:63Z bieberfever.com google.com Male Waterloo 0 0.62\n2011-01-01T01:04:51Z bieberfever.com google.com Male Calgary 1 0.45\n2011-01-01T01:00:00Z ultratrimfast.com google.com Female Taiyuan 0 0.87\n2011-01-01T02:00:00Z ultratrimfast.com google.com Female New York 0 0.99\n2011-01-01T02:00:00Z ultratrimfast.com google.com Female Vancouver 1 1.53\nTable 1: Sample ad data. These events are created when users views or clicks on ads.\nReal-time \nNodes \nCoordinator \nNodes Broker Nodes \nHistorical \nNodes Metadata \nStorage \nDistributed \nCoordination \nDeep \nStorage Streaming \nData \nBatch \nData Client \nQueries \nQueries \nMetadata \nData/Segments Druid Nodes \nExternal Dependencies \nFigure 2: An overview of a Druid cluster and the flow of data through the cluster.\nthat increases whenever a new segment is created. The ver-\nsion string indicates the freshness of segment data; segments\nwith later versions have newer views of data (over some\ntime range) than segments with older versions. This seg-\nment metadata is used by the system for concurrency con-\ntrol; read operations always access data in a particular time\nrange from the segments with the latest version identifiers\nfor that time range.\nDruid segments are stored in a column orientation. Given\nthat Druid is best used for aggregating event streams, the\nadvantagesofstoringaggregateinformationascolumnsrather\nthan rows are well documented [2]. 
Column storage allows\nfor more efficient CPU usage as only what is needed is ac-\ntually loaded and scanned. In a row oriented data store,\nall columns associated with a row must be scanned as part\nof an aggregation. The additional scan time can introduce\nsignificant performance degradations [2].\nDruid nodes use one thread to scan one segment at a time,\nand the amount of data that can be scanned in parallel is\ndirectly correlated to the number of available cores in the\ncluster. Segments are immutable, and hence, this no con-\ntention between reads and writes in a segment.\nA single query may scan thousands of segments concur-\nrently, and many queries may run at the same time. We\nwant to ensure that the entire cluster is not starved out\nwhile a single expensive query is executing. Thus, segments\nhave an upper limit in how much data they can hold, and\nare sized to be scanned in a few milliseconds. By keeping\nsegment computation very fast, cores and other resources\nare constantly being yielded. This ensures segments from\ndifferent queries are always being scanned.\nDruid segments are very self-contained for the time inter-\nval of data that they hold. Column data is stored directlyin the segment. Druid has multiple column types to repre-\nsent various data formats. Timestamps are stored in long\ncolumns, dimensions are stored in string columns, and met-\nrics are stored in int, float, long or double columns. Depend-\ning on the column type, different compression methods may\nbe used. Metric columns are compressed using LZ4[3] com-\npression. String columns are dictionary encoded, similar to\nother data stores such as PowerDrill[8]. Additional indexes\nmay be created for particular columns. For example, Druid\nwill by default create inverted indexes for string columns.\n3.2 Streaming Data Ingestion\nDruid real-time nodes encapsulate the functionality to in-\ngest, query, and create segments from event streams. Events\nindexed via these nodes are immediately available for query-\ning. The nodes are only concerned with events for a rela-\ntively small time range (e.g. hours) and periodically hand\noff immutable batches of events they have collected over\nthis small time range to other nodes in the Druid cluster\nthat are specialized in dealing with batches of immutable\nevents. The nodes announce their online state and the data\nthey serve using a distributed coordination service (this is\ncurrently Zookeeper[10]).\nReal-time nodes employ a log structured merge tree[14]\nfor recently ingested data. Incoming events are first stored\nin an in-memory buffer. The in-memory buffer is directly\nqueryable and Druid behaves as a key/value store for queries\non events that exist in this JVM heap-based store. The in-\nmemory buffer is heavily write optimized, and given that\nDruid is really designed for heavy concurrent reads, events\ndo not remain in the in-memory buffer for very long. Real-\ntimenodespersisttheirin-memoryindexes todiskeitherpe-\nriodically or after some maximum row limit is reached. 
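The write path described above — roll events up in a heap-based, write-optimized buffer and persist once a configurable row limit is reached — can be summarized with a short sketch. This is an illustrative simplification rather than Druid's actual implementation; the row limit, the roll-up key, and the column layout are stand-in assumptions chosen to match the sample ad schema.

# Sketch of a write-optimized in-memory buffer that rolls events up as they arrive
# and persists an immutable, column-oriented batch once a row limit is reached.
from collections import defaultdict

MAX_ROWS = 75_000                       # hypothetical configurable row limit
persisted_segments = []                 # stand-in for on-disk, read-optimized segments

class RealtimeBuffer:
    def __init__(self):
        self.rows = defaultdict(lambda: {"impressions": 0, "clicks": 0})

    def add(self, event):
        # Aggregate on (hour, publisher, gender) in the write-optimized map.
        key = (event["timestamp"][:13], event["publisher"], event["gender"])
        self.rows[key]["impressions"] += 1
        self.rows[key]["clicks"] += event["click"]
        if len(self.rows) >= MAX_ROWS:
            self.persist()

    def persist(self):
        # Convert the map into sorted, column-oriented arrays (the read-optimized
        # form) and start a fresh buffer; both remain queryable in the real system.
        keys = sorted(self.rows)
        persisted_segments.append({
            "timestamp":   [k[0] for k in keys],
            "publisher":   [k[1] for k in keys],
            "gender":      [k[2] for k in keys],
            "impressions": [self.rows[k]["impressions"] for k in keys],
            "clicks":      [self.rows[k]["clicks"] for k in keys],
        })
        self.rows = defaultdict(lambda: {"impressions": 0, "clicks": 0})

buf = RealtimeBuffer()
buf.add({"timestamp": "2011-01-01T01:01:35Z", "publisher": "bieberfever.com",
         "gender": "Male", "click": 0})

The persist step in this sketch corresponds to the conversion into the column-oriented segment format that the text describes next.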
This persist process converts data stored in the in-memory buffer to the column oriented segment storage format described in Section 3.1. Persisted segments are memory mapped and loaded to off-heap memory such that they can still be queried. This is illustrated in Figure 4. Data is continuously queryable during the persist process.

Figure 3: Real-time nodes write events to a write optimized in-memory index. Periodically, events are persisted to disk, converting the write optimized format to a read optimized one. On a periodic basis, persisted indexes are then merged together and the final segment is handed off. Queries will hit both the in-memory and persisted indexes.

Real-time ingestion in Druid is self-throttling. If a significant spike occurs in event volume from the upstream event producer, there are a few safety mechanisms built in. Recall that events are first stored in an in-memory buffer and persists can occur when a maximum configurable row limit is reached. Spikes in event volume should cause persists to occur more often and not overflow the in-memory buffer. However, the process of building a segment does require time and resources. If too many concurrent persists occur, and if events are added to the in-memory buffer faster than they can be removed through the persist process, problems can still arise. Druid sets a limit on the maximum number of persists that can occur at a time, and if this limit is reached, Druid will begin to throttle event ingestion. In this case, the onus is on the upstream consumer to be resilient in the face of increasing backlog.

Real-time nodes store recent data for a configurable period of time, typically an hour. This period is referred to as the segment granularity period. The nodes employ a sliding window to accept and reject events and use the wall-clock time as the basis of the window. Events within a range of the node's wall-clock time are accepted, and events outside this window are dropped. This period is referred to as the window period, and typical window periods are 10 minutes in length. At the end of the segment granularity period plus the window period, a real-time node will hand off the data it has collected during the segment granularity period. The use of the window period means that delayed events may be dropped. In practice, we see that these occurrences are rare, but they do occur. Druid's real-time logic does not guarantee exactly once processing and is instead best effort. The lack of exactly once processing in Druid is one of the motivations for requiring batch fixup in the RADStack.

For further clarification, consider Figure 4, which illustrates the operations of a real-time node. The node starts at 13:37 and, with a 10 minute window period, will only accept events for a window between 13:27 and 13:47. When the first events are ingested, the node announces that it is serving a segment for an interval from 13:00 to 14:00. Every 10 minutes (the persist period is configurable), the node will flush and persist its in-memory buffer to disk.
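The window-period rule just described — accept only events whose timestamps fall inside a sliding window around the node's wall-clock time, and drop anything later — can be sketched in a few lines. The ten-minute window matches the example in the text; the function and variable names are illustrative assumptions.

# Sketch of the wall-clock sliding window used to accept or reject incoming events.
from datetime import datetime, timedelta

WINDOW_PERIOD = timedelta(minutes=10)       # typical window period from the text

def accept(event_time, wall_clock):
    # Accept events within one window period of the node's wall-clock time.
    return abs(wall_clock - event_time) <= WINDOW_PERIOD

# A node whose clock reads 13:37 accepts events between 13:27 and 13:47.
now = datetime(2011, 1, 1, 13, 37)
print(accept(datetime(2011, 1, 1, 13, 30), now))   # True  - inside the window
print(accept(datetime(2011, 1, 1, 13, 10), now))   # False - too late, dropped

Events rejected here are exactly the delayed events that the batch pipeline later restores, which is why the window can stay small without sacrificing eventual correctness.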
Near the end of the hour, the node will likely see events for 14:00 to 15:00. When this occurs, the node prepares to serve data for the next hour and creates a new in-memory buffer. The node then announces that it is also serving a segment from 14:00 to 15:00. At 14:10, which is the end of the hour plus the window period, the node begins the hand off process.

Figure 4: The node starts, ingests data, persists, and periodically hands data off. This process repeats indefinitely. The time periods between different real-time node operations are configurable.

3.3 Hand off
Real-time nodes are designed to deal with a small window of recent data and need to periodically hand off the segments they've built. The hand-off process first involves a compaction step. The compaction process finds all the segments that were created for a specific interval of time (for example, all the segments that were created by intermediate persists over the period of an hour). These segments are merged together to form a final immutable segment for handoff.

Handoff occurs in a few steps. First, the finalized segment is uploaded to a permanent backup storage, typically a distributed file system such as S3 [5] or HDFS [16], which Druid refers to as "deep storage". Next, an entry is created in the metadata store (typically an RDBMS such as MySQL) to indicate that a new segment has been created. This entry in the metadata store will eventually cause other nodes in the Druid cluster to download and serve the segment. The real-time node continues to serve the segment until it notices that the segment is available on Druid historical nodes, which are nodes that are dedicated to serving historical data. At this point, the segment is dropped and unannounced from the real-time node. The entire handoff process is fluid; data remains continuously queryable throughout the entire handoff process. Segments created by real-time processing are versioned by the start of the segment granularity interval.

3.4 Batch Data Ingestion
The core component used by real-time ingestion is a hash map that can be incrementally populated and finalized to create an immutable segment. This core component is shared across both real-time and batch ingestion. Druid has built in support for creating segments by leveraging Hadoop and running MapReduce jobs to partition data for segments. Events can be read in one at a time directly from static files in a "streaming" fashion.

Similar to the real-time ingestion logic, segments created through batch ingestion are directly uploaded to deep storage. Druid's Hadoop-based batch indexer will also create an entry in the metadata storage once all segments have been created. The version of the segments created by batch ingestion is based on the time the batch processing job started at.

3.5 Unifying Views
When new entries are created in the metadata storage, they will eventually be noticed by Druid coordinator nodes. Druid coordinator nodes poll the metadata storage for what segments should be loaded on Druid historical nodes, and compare the result with what is actually loaded on those nodes.
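Because real-time segments are versioned by the start of their granularity interval while batch-built segments carry the later start time of the batch job, version precedence is what lets a batch rebuild silently replace real-time data for the same interval. The following is a simplified sketch of that resolution rule; the version strings and origins are illustrative, and this is not Druid's actual timeline data structure.

# Sketch: for each interval, only the segment with the highest version is served;
# anything it completely overshadows can be dropped. Illustrative only.
segments = [
    # (interval, version, origin) - versions here are creation timestamps
    ("2011-01-01T13/2011-01-01T14", "2011-01-01T13:00", "real-time"),
    ("2011-01-01T13/2011-01-01T14", "2011-01-03T02:00", "batch"),      # later rebuild
    ("2011-01-01T14/2011-01-01T15", "2011-01-01T14:00", "real-time"),
]

def servable(segs):
    best = {}
    for interval, version, origin in segs:
        # A newer version takes precedence over an older one for the same interval.
        if interval not in best or version > best[interval][0]:
            best[interval] = (version, origin)
    return best

for interval, (version, origin) in servable(segments).items():
    print(interval, "->", origin, "segment, version", version)
# 13:00-14:00 is served from the batch segment; the overshadowed real-time
# segment is a candidate for the coordinator to drop.

In the full system it is the coordinator, described next, that turns this comparison into concrete load and drop instructions for historical nodes.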
Coordinator nodes will tell historical nodes to load new segments, drop outdated segments, and move segments across nodes.

Druid historical nodes are very simple in operation. They know how to load, drop, and respond to queries to scan segments. Historical nodes typically store all the data that is older than an hour (recent data lives on the real-time node). The real-time handoff process requires that a historical node must first load and begin serving queries for a segment before that segment can be dropped from the real-time node. Since segments are immutable, the same copy of a segment can exist on multiple historical nodes and real-time nodes. Most nodes in typical production Druid clusters are historical nodes.

To consolidate results from historical and real-time nodes, Druid has a set of broker nodes which act as the client query endpoint. Broker nodes in part function as query routers to historical and real-time nodes. Broker nodes understand the metadata published in the distributed coordination service (Zookeeper) about what segments are queryable and where those segments are located. Broker nodes route incoming queries such that the queries hit the right historical or real-time nodes. Broker nodes also merge partial results from historical and real-time nodes before returning a final consolidated result to the caller.

Broker nodes maintain a segment timeline containing information about what segments exist in the cluster and the version of those segments. Druid uses multi-version concurrency control to manage how data is extracted from segments. Segments with higher version identifiers have precedence over segments with lower version identifiers. If two segments exactly overlap for an interval, Druid only considers the data from the segment with the higher version. This is illustrated in Figure 5.

Figure 5: Druid utilizes multi-version concurrency control and reads data from segments with the latest version for a given interval. Segments that are completely overshadowed are ignored and eventually automatically dropped from the cluster.

Segments are inserted into the timeline as they are announced. The timeline sorts the segments based on their data interval in a data structure similar to an interval tree. Lookups in the timeline will return all segments with intervals that overlap the lookup interval, along with interval ranges for which the data in a segment is valid.

Brokers extract the interval of a query and use it for lookups into the timeline. The result of the timeline is used to remap the original query into a set of specific queries for the actual historical and real-time nodes that hold the pertinent query data. The results from the historical and real-time nodes are finally merged by the broker, which returns the final result to the caller.

The coordinator node also builds a segment timeline for segments in the cluster. If a segment is completely overshadowed by one or more segments, it will be flagged in this timeline. When the coordinator notices overshadowed segments, it tells historical nodes to drop these segments from the cluster.

4. THE PROCESSING LAYER
Although Druid can ingest events that are streamed in one at a time, data must be denormalized beforehand as Druid cannot yet support join queries.
Furthermore, real world data must often be transformed before it is usable by an application.

Figure 6: Ad impressions and clicks are recorded in two separate streams. An event we want to join is located in two different Kafka partitions on two different topics.

4.1 Stream Processing
Stream processors provide infrastructure to develop processing logic for unbounded sequences of messages. We use Apache Samza as our stream processor, although other technologies are viable alternatives (we initially chose Storm, but have since switched to Samza). Samza provides an API to write jobs that run over a sequence of tuples and perform operations over those tuples in a user-defined way. The input to each job is provided by Kafka, which can also act as a sink for the output of the job. Samza jobs are executed in a resource management and task execution framework such as YARN [21]. It is beyond the scope of this paper to go into the full details of Kafka/YARN/Samza interactions, but more information is available in other literature [1]. We will instead focus on how we leverage this framework for processing data for analytic use cases.

On top of the Samza infrastructure, we introduce the idea of a "pipeline", which is a grouping for a series of related processing stages where "upstream" stages produce messages that are consumed by "downstream" stages. Some of our jobs involve operations such as renaming data, inserting default values for nulls and empty strings, and filtering data. One pipeline may write to many data sources in Druid.

To understand a real-world pipeline, let's consider an example from online advertising. In online advertising, events are generated by impressions (views) of an ad and clicks of an ad. Many advertisers are interested in knowing how many impressions of an ad converted into clicks. Impression streams and click streams are almost always generated as separate streams by ad servers. Recall that Druid does not support joins at query time, so the joined events must be generated at processing time. An example of data generated by these two event streams is shown in Figure 6. Every event has a unique impression id that identifies the ad served. We use this id as our join key.

The first stage of the Samza processing pipeline is a shuffle step. Events are written to a keyed Kafka topic based on the hash of an event's impression id. This ensures that the events that need to be joined are written to the same Kafka topic. YARN containers running Samza tasks may read from one or more Kafka topics, so it is important that the Samza task performing the join actually has both events that need to be joined. This shuffle stage is shown in Figure 7.

Figure 7: A shuffle operation ensures events to be joined are stored in the same Kafka partition.

The next stage in the data pipeline is to actually join the impression and click events. This is done by another Samza task that creates a new event in the data with a new field called "is_clicked". This field is marked as "true" if an impression event and a click event with the same impression id are both present. The original events are discarded, and the new event is sent further downstream. This join stage is shown in Figure 8.

Figure 8: The join operation adds a new field, "is_clicked".

The final stage of our data processing is to enhance the data. This stage cleans up faults in data, and performs lookups and transforms of events. Once data is cleaned, it is ready to be delivered to Druid for queries.
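The shuffle, join, and enhancement stages described above can be summarized in a compact sketch: hash both streams by impression id so that matching events land in the same partition, join within a window to produce a single flattened event carrying an is_clicked field, and clean the result before ingestion. This is an illustrative outline of the pipeline logic only; it does not use the Samza or Kafka APIs, and the field names, partition count, and default values are assumptions.

# Sketch of the streaming pipeline stages: shuffle by join key, windowed join, enhance.
NUM_PARTITIONS = 8                         # hypothetical partition count

def shuffle_partition(event):
    # Stage 1: route impressions and clicks that share an impression id to the
    # same partition, so a single task sees both sides of the join.
    return hash(event["impression_id"]) % NUM_PARTITIONS

def join_and_enhance(impressions, clicks):
    # Stage 2: mark each impression with is_clicked when a click with the same
    # impression id arrived within the join window; unmatched clicks are dropped.
    clicked_ids = {c["impression_id"] for c in clicks}
    for imp in impressions:
        joined = dict(imp)
        joined["is_clicked"] = imp["impression_id"] in clicked_ids
        # Stage 3 (enhance): fill defaults and normalize values before Druid ingestion.
        joined["gender"] = joined.get("gender") or "unknown"
        yield joined

impressions = [{"impression_id": "i1", "publisher": "bieberfever.com", "gender": "Male"},
               {"impression_id": "i2", "publisher": "ultratrimfast.com", "gender": None}]
clicks = [{"impression_id": "i1"}]
print(list(join_and_enhance(impressions, clicks)))

As in the real pipeline, an impression whose click never arrives within the window simply flows through with is_clicked set to false, which is one source of the inaccuracy that the batch layer later corrects.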
The total\nstreaming data processing pipeline is shown in Figure 9.\nThe system we have designed is not perfect. Because\nwe are doing windowed joins and because events cannot be\nbuffered indefinitely, not all joins are guaranteed to com-\nplete. If events are substantially delayed and do not arrive\nin the allocated window period, they will not be joined. In\npractice, thisgenerallyleadstoone“primary”eventcontinu-\ningthroughthepipelineandothersecondaryeventswiththe\nsame join key getting dropped. This means that our stream\nprocessing layer is not guaranteed to deliver 100% accurate\nresults. Furthermore, even without this restriction, Samza\ndoes not offer exactly-once processing semantics. Problems\nin network connectivity or node failure can lead to dupli-\ncated events. For these reasons, we run a separate batch\npipeline that generates a more accurate transformation of\nthe ingested data.\nFigure 9: The streaming processing data pipeline.\n6The final job of our processing pipeline is to deliver data\nto Druid. For high availability, processed events from Samza\nare transmitted concurrently to two real-time nodes. Both\nnodes receive the same copy of data, and effectively act as\nreplicas of each other. The Druid broker can query for either\ncopyofthedata. Whenhandoffoccurs, bothreal-timenodes\nrace to hand off the segments they’ve created. The segment\nthat is pushed into deep storage first will be the one that is\nused for historical querying, and once that segment is loaded\non the historical nodes, both real-time nodes will drop their\nversions of the same segment.\n4.2 Batch Processing\nOurbatchprocessingpipelineiscomposedofamulti-stage\nMapReduce[4] pipeline. The first set of jobs mirrors our\nstream processing pipeline in that it transforms data and\njoins relevant events in preparation for the data to be loaded\ninto Druid. The second set of jobs is responsible for directly\ncreating immutable Druid segments. The indexing code for\nboth streaming and batch ingestion in Druid is shared be-\ntween the two modes of ingestion. These segments are then\nuploaded to deep storage and registered with the metadata\nstore. Druid will proceed to load the batch generated seg-\nments.\nThebatchprocesstypicallyrunsmuchlessfrequentlythan\nthe real-time process, and may run many hours or even days\nafter raw events have been delivered. The wait is necessary\nfor severely delayed events, and to ensure that the raw data\nis indeed complete.\nSegments generated by the batch process are versioned by\nthe start time of the process. Hence, segments created by\nbatch processing will have a version identifier that is greater\nthan segments created by real-time processing. When these\nbatch created segments are loaded in the cluster, they atom-\nically replace segments created by real-time processing for\ntheir processed interval. Hence, soon after batch processing\ncompletes, Druid queries begin reflecting batch-originated\ndata rather than real-time-originated data.\nWeusethestreamingdatapipelinedescribedinSection4.1\nto deliver immediate insights on events as they are occur-\nring, and the batch data pipeline described in this section to\nprovideanaccuratecopyofthedata. Thebatchprocesstyp-\nically runs much less frequently than the real-time process,\nand may run many hours or even days after raw events have\nbeen delivered. The wait is necessary for severely delayed\nevents, and to ensure that the raw data is indeed complete.\n5. 
THE DELIVERY LAYER\nIn our stack, events are delivered over HTTP to Kafka.\nEvents are transmitted via POST requests to a receiver that\nacts as a front for a Kafka producer. Kafka is a distributed\nmessaging system with a publish and subscribe model. At\na high level, Kafka maintains events or messages in cate-\ngories called topics. A distributed Kafka cluster consists of\nnumerous brokers, which store messages in a replicated com-\nmit log. Kafka consumers subscribe to topics and process\nfeeds of published messages.\nKafka provides functionality isolation between producers\nofdataandconsumersofdata. Thepublish/subscribemodel\nworks well for our use case as multiple consumers can sub-\nscribe to the same topic and process the same set of events.\nWe have two primary Kafka consumers. The first is a Samza\njob that reads messages from Kafka for stream processing asData Source Dimensions Metrics\na 25 21\nb 30 26\nc 71 35\nd 60 19\ne 29 8\nf 30 16\ng 26 18\nh 78 14\nTable 2: Characteristics of production data sources.\ndescribed in Section 4.1. Topics in Kafka map to pipelines\nin Samza, and pipelines in Samza map to data sources in\nDruid. The second consumer reads messages from Kafka\nand stores them in a distributed file system. This file sys-\ntemisthesameastheoneusedforDruid’sdeepstorage, and\nalso acts as a backup for raw events. The purpose of storing\nraw events in deep storage is so that we can run batch pro-\ncessing jobs over them at any given time. For example, our\nstream processing layer may choose to not include certain\ncolumns when it first processes a new pipeline. If we want\nto include these columns in the future, we can reprocess the\nraw data to generate new Druid segments.\nKafka is the single point of delivery for events entering our\nsystem, and must have the highest availability. We repli-\ncate our Kafka producers across multiple datacenters. In\nthe event that Kafka brokers and consumers become unre-\nsponsive, as long as our HTTP endpoints are still available,\nwe can buffer events on the producer side while recovering\nthe system. Similarily, if our processing and serving lay-\ners completely fail, we can recover by replaying events from\nKafka.\n6. PERFORMANCE\nDruid runs in production at several organizations, and to\nbrieflydemonstrateitsperformance, wehavechosentoshare\nsome real world numbers for one of the larger production\nclusters. We also include results from synthetic workloads\non TPC-H data.\n6.1 Query Performance in Production\nDruid query performance can vary signficantly depending\non the query being issued. For example, sorting the values\nof a high cardinality dimension based on a given metric is\nmuch more expensive than a simple count over a time range.\nTo showcase the average query latencies in a production\nDruid cluster, we selected 8 frequently queried data sources,\ndescribed in Table 2.\nThequeriesvaryfromstandardaggregatesinvolvingdiffer-\nent types of metrics and filters, ordered group bys over one\nor more dimensions with aggregates, and search queries and\nmetadata retrieval queries. Queries involving a single col-\numn are very frequent, and queries involving all columns are\nvery rare.\n\u000fThere were approximately 50 total data sources in this\nparticularclusterandseveralhundredusersissuingqueries.\n\u000fTherewasapproximately10.5TBofRAMavailableinthis\ncluster and approximately 10TB of segments loaded. Col-\nlectively, there are about 50 billion Druid rows in this\ncluster. 
Results for every data source are not shown.

Figure 10: Query latencies of production data sources. (Two panels, plotted over time for data sources a-h: mean query latency, and 90%ile/95%ile/99%ile query latency in seconds.)

• This cluster uses Intel® Xeon® E5-2670 processors and consists of 1302 processing threads and 672 total cores (hyperthreaded).
• A memory-mapped storage engine was used (the machine was configured to memory map the data instead of loading it into the Java heap).

Query latencies are shown in Figure 10. Across all the various data sources, average query latency is approximately 550 milliseconds, with 90% of queries returning in less than 1 second, 95% in under 2 seconds, and 99% of queries returning in less than 10 seconds.

6.2 Query Benchmarks on TPC-H Data
We also present Druid benchmarks on TPC-H data. We selected queries more typical of Druid's workload to demonstrate query performance. In Figure 11, we present our results compared against Google BigQuery, which is based on Google Dremel [13]. The Druid results were run on a 24 vCPU, 156 GB Google Compute Engine instance (2.2 GHz Intel Xeon E5 v4 Broadwell) and the BigQuery results were run through Google's web interface. In our results, Druid performed from 2-20x faster than BigQuery.

Figure 11: Druid benchmarked against Google BigQuery – 100GB TPC-H data.

Our Druid setup used Amazon EC2 m3.2xlarge instance types (Intel® Xeon® E5-2680 v2 @ 2.80GHz) for historical nodes and c3.2xlarge instances (Intel® Xeon® E5-2670 v2 @ 2.50GHz) for broker nodes.

We benchmarked Druid's scan rate at 53,539,211 rows/second/core for a select count(*) equivalent query over a given time interval and 36,246,530 rows/second/core for a select sum(float) type query.

6.3 Data Ingestion Performance
To showcase the ingestion latency of the RADStack, we selected the top seven production datasources in terms of peak events ingested per second for early 2015. These datasources are described in Table 3. Our production ingestion setup used over 40 nodes, each with 60GB of RAM and 32 cores (12 x Intel® Xeon® E5-2670). Each pipeline for each datasource involved transforms and joins.

Data Source Dimensions Metrics Peak Events/s
d1 34 24 218123
d2 36 24 172902
d3 46 21 170264
d4 40 17 94064
d5 41 23 68104
d6 31 31 64222
d7 29 8 30048
Table 3: Characteristics of production data sources.

Ingestion latency is heavily dependent on the complexity of the data set being ingested. The data complexity is determined by the number of dimensions in each event, the number of metrics in each event, and the types of aggregations we want to perform on those metrics. With the most basic data set (one that only has a timestamp column), our setup can ingest data at a rate of 800,000 events/second/core, which is really just a measurement of how fast we can deserialize events. At peak, a single node was able to process 62259 events/second. The total peak events per second was 840500. The median events per second was 590100. The first and third quartiles were 487000 events/s and 663200 events/s respectively.

7. PRODUCTION EXPERIENCES
7.1 Experiences with Druid
7.1.1 Query Patterns
Druid is often used for exploratory analytics and reporting, which are two very distinct use cases.
Exploratory ana-\nlytic workflows provide insights by progressively narrowing\n8down a view of data until an interesting observation is made.\nUsers tend to explore short intervals of recent data. In the\nreporting use case, users query for much longer data inter-\nvals, and the volume of queries is generally much less. The\ninsights that users are looking for are often pre-determined.\n7.1.2 Multitenancy\nExpensiveconcurrentqueriescanbeproblematicinamul-\ntitenant environment. Queries for large data sources may\nend up hitting every historical node in a cluster and con-\nsume all cluster resources. Smaller, cheaper queries may be\nblocked from executing in such cases. We introduced query\nprioritization to address these issues. Each historical node\nis able to prioritize which segments it needs to scan. Proper\nquery planning is critical for production workloads. Thank-\nfully, queries for a significant amount of data tend to be for\nreporting use cases and can be de-prioritized. Users do not\nexpect the same level of interactivity in this use case as they\ndo when they are exploring data.\n7.1.3 Node Failures\nSingle node failures are common in distributed environ-\nments, but many nodes failing at once are not. If historical\nnodes completely fail and do not recover, their segments\nneed to be reassigned, which means we need excess cluster\ncapacity to load this data. The amount of additional capac-\nity to have at any time contributes to the cost of running\na cluster. From our experiences, it is extremely rare to see\nmore than 2 nodes completely fail at once and hence, we\nleave enough capacity in our cluster to completely reassign\nthe data from 2 historical nodes.\n7.2 Experiences with Ingestion\n7.2.1 Multitenancy\nBefore moving our streaming pipeline to Samza, we exper-\nimented with other stream processors. One of the biggest\npains we felt was around multi-tenancy. Multiple pipelines\nmay contend for resources, and it is often unclear how vari-\nous jobs impact one another when running in the same en-\nvironment. Given that each of our pipelines is composed\nof different tasks, Samza was able to provide per task re-\nsource isolation, which was far easier to manage than per\napplication resource isolation.\n7.3 Operational Monitoring\nProper monitoring is critical to run a large scale dis-\ntributed cluster, especially with many different technologies.\nEach Druid node is designed to periodically emit a set of op-\nerational metrics. These metrics may include system level\ndata such as CPU usage, available memory, and disk capac-\nity, JVM statistics such as garbage collection time, and heap\nusage, or node specific metrics such as segment scan time,\ncache hit rates, and data ingestion latencies. Druid also\nemits per query metrics so we can examine why a particular\nquery may be slow. We’ve also added functionality to peri-\nodically emit metrics from Samza, Kafka, and Hadoop. We\nemit metrics from our production RADStack and load them\ninto a dedicated metrics RADstack. The metrics cluster is\nused to explore the performance and stability of the pro-\nduction cluster. This dedicated metrics cluster has allowed\nus to find numerous production problems, such as gradualquery speed degradations, less than optimally tuned hard-\nware, and various other system bottlenecks.\n8. RELATED WORK\n8.1 Hybrid Batch/Streaming Workflows\nSpark[24] is a cluster computing framework optimized for\niterative workflows. 
Spark Streaming is a separate project\nthat converts sequences of tuples into immutable micro-\nbatches. Each micro-batch can be processed using the un-\nderlying Spark framework. Spark SQL is a query optimiza-\ntion layer that can sit on top of Spark and issue SQL queries,\nalong with Spark’s native API. Druid’s approach to query-\ning is quite different and Druid insteads builds immutable\nindexed data structures optimized for low latency OLAP\nqueries, and does not leverage lineage in its architecture.\nThe RADStack can theoretically be composed of Spark and\nSpark Streaming for processing, Kafka for event delivery,\nand Druid to serve queries.\n8.2 Druid and Other Data Stores\nDruid builds on many of the same principles as other dis-\ntributed columnar data stores[7], and in-memory databasesi\nsuch as SAP’s HANA[6] and VoltDB[22]. These data stores\nlackDruid’slowlatencyingestioncharacteristics. Druidalso\nhas native analytical features baked in, similar to ParAc-\ncel[15], however, Druid allows system wide rolling software\nupdates with no downtime.\nDruid is similar to C-Store[18] in that it has two subsys-\ntems, a read-optimized subsystem in the historical nodes\nand a write-optimized subsystem in real-time nodes. Real-\ntime nodes are designed to ingest a high volume of ap-\npend heavy data, and do not support data updates. Unlike\nthe two aforementioned systems, Druid is meant for OLAP\ntransactions and not OLTP transactions.\n9. CONCLUSIONS AND FUTURE WORK\nIn this paper we presented the RADStack, a collection\nof complementary technologies that can be used together\nto power interactive analytic applications. The key pieces\nof the stack are Kafka, Samza, Hadoop, and Druid. Druid\nis designed for exploratory analytics and is optimized for\nlow latency data exploration, aggregation, and ingestion,\nand is well suited for OLAP workflows. Samza and Hadoop\ncomplement Druid and add data processing functionality,\nand Kafka enables high throughput event delivery problem.\nWe believe that in later iterations of this work, batch pro-\ncessing may not be necessary. As open source technologies\nmature, the existing problems around exactly-once process-\ning will eventually be solved. The Druid, Samza and Kafka\ncommunities are working on exactly once, lossless processing\nfor their respective systems, and in the near future, the same\nguarantees that the RADStack provides right now should be\navailable using only these technologies.\n10. ACKNOWLEDGMENTS\nDruid, Samza, Kafka, and Hadoop could not have been\nbuilt the assistance of their respective communities. We\nwant to thank everyone that contributes to open source for\ntheir invaluable support.\n911. REFERENCES\n[1] Apache samza. http://samza.apache.org/ , 2014.\n[2] D. J. Abadi, S. R. Madden, and N. Hachem.\nColumn-stores vs. row-stores: How different are they\nreally? In Proceedings of the 2008 ACM SIGMOD\ninternational conference on Management of data ,\npages 967–980. ACM, 2008.\n[3] Y. Collet. Lz4: Extremely fast compression algorithm.\ncode. google. com , 2013.\n[4] J. Dean and S. Ghemawat. Mapreduce: simplified\ndata processing on large clusters. Communications of\nthe ACM , 51(1):107–113, 2008.\n[5] G. DeCandia, D. Hastorun, M. Jampani,\nG. Kakulapati, A. Lakshman, A. Pilchin,\nS. Sivasubramanian, P. Vosshall, and W. Vogels.\nDynamo: amazon’s highly available key-value store. In\nACM SIGOPS Operating Systems Review , volume 41,\npages 205–220. ACM, 2007.\n[6] F. Färber, S. K. Cha, J. Primsch, C. Bornhövd,\nS. Sigg, and W. 
Lehner. Sap hana database: data\nmanagement for modern business applications. ACM\nSigmod Record , 40(4):45–51, 2012.\n[7] B. Fink. Distributed computation on dynamo-style\ndistributed storage: riak pipe. In Proceedings of the\neleventh ACM SIGPLAN workshop on Erlang\nworkshop , pages 43–50. ACM, 2012.\n[8] A. Hall, O. Bachmann, R. Büssow, S. Gănceanu, and\nM. Nunkesser. Processing a trillion cells per mouse\nclick.Proceedings of the VLDB Endowment ,\n5(11):1436–1446, 2012.\n[9] M. Hausenblas and N. Bijnens. Lambda architecture.\nURL: http://lambda-architecture. net/. Luettu , 6:2015,\n2014.\n[10] P. Hunt, M. Konar, F. P. Junqueira, and B. Reed.\nZookeeper: Wait-free coordination for internet-scale\nsystems. In USENIX ATC , volume 10, 2010.\n[11] J. Kreps, N. Narkhede, and J. Rao. Kafka: A\ndistributed messaging system for log processing. In\nProceedings of 6th International Workshop on\nNetworking Meets Databases (NetDB), Athens, Greece ,\n2011.\n[12] N. Marz. Storm: Distributed and fault-tolerant\nrealtime computation. http://storm-project.net/ ,\nFebruary 2013.\n[13] S. Melnik, A. Gubarev, J. J. Long, G. Romer,\nS. Shivakumar, M. Tolton, and T. Vassilakis. Dremel:\ninteractive analysis of web-scale datasets. Proceedings\nof the VLDB Endowment , 3(1-2):330–339, 2010.\n[14] P. O’Neil, E. Cheng, D. Gawlick, and E. O’Neil. The\nlog-structured merge-tree (lsm-tree). Acta\nInformatica , 33(4):351–385, 1996.[15] Paraccel analytic database.\nhttp://www.paraccel.com/resources/Datasheets/\nParAccel-Core-Analytic-Database.pdf , March\n2013.\n[16] K. Shvachko, H. Kuang, S. Radia, and R. Chansler.\nThe hadoop distributed file system. In Mass Storage\nSystems and Technologies (MSST), 2010 IEEE 26th\nSymposium on , pages 1–10. IEEE, 2010.\n[17] M. Stonebraker, D. Abadi, D. J. DeWitt, S. Madden,\nE. Paulson, A. Pavlo, and A. Rasin. Mapreduce and\nparallel dbmss: friends or foes? Communications of\nthe ACM , 53(1):64–71, 2010.\n[18] M. Stonebraker, D. J. Abadi, A. Batkin, X. Chen,\nM. Cherniack, M. Ferreira, E. Lau, A. Lin,\nS. Madden, E. O’Neil, et al. C-store: a\ncolumn-oriented dbms. In Proceedings of the 31st\ninternational conference on Very large data bases ,\npages 553–564. VLDB Endowment, 2005.\n[19] M. Stonebraker, J. Becla, D. J. DeWitt, K.-T. Lim,\nD. Maier, O. Ratzesberger, and S. B. Zdonik.\nRequirements for science data bases and scidb. In\nCIDR, volume 7, pages 173–184, 2009.\n[20] E. Tschetter. Introducing druid: Real-time analytics\nat a billion rows per second. http://druid.io/blog/\n2011/04/30/introducing-druid.html , April 2011.\n[21] V. K. Vavilapalli, A. C. Murthy, C. Douglas,\nS. Agarwal, M. Konar, R. Evans, T. Graves, J. Lowe,\nH. Shah, S. Seth, et al. Apache hadoop yarn: Yet\nanother resource negotiator. In Proceedings of the 4th\nannual Symposium on Cloud Computing , page 5.\nACM, 2013.\n[22] L. VoltDB. Voltdb technical overview.\nhttps://voltdb.com/ , 2010.\n[23] F. Yang, E. Tschetter, X. Léauté, N. Ray, G. Merlino,\nand D. Ganguli. Druid: a real-time analytical data\nstore. In Proceedings of the 2014 ACM SIGMOD\ninternational conference on Management of data ,\npages 157–168. ACM, 2014.\n[24] M. Zaharia, M. Chowdhury, T. Das, A. Dave, J. Ma,\nM. McCauley, M. J. Franklin, S. Shenker, and\nI. Stoica. Resilient distributed datasets: A\nfault-tolerant abstraction for in-memory cluster\ncomputing. In Proceedings of the 9th USENIX\nconference on Networked Systems Design and\nImplementation , pages 2–2. USENIX Association,\n2012.\n[25] M. Zaharia, T. Das, H. Li, S. Shenker, and I. 
Stoica.\nDiscretized streams: an efficient and fault-tolerant\nmodel for stream processing on large clusters. In\nProceedings of the 4th USENIX conference on Hot\nTopics in Cloud Computing , pages 10–10. USENIX\nAssociation, 2012.\n10" } ]
{ "category": "App Definition and Development", "file_name": "radstack.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": " Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nSecurity-Review Report TiKV 02.2020\nCure53, Dr.-Ing. M. Heiderich, MSc. N. Krein, MSc. D. Weißer, H. Hippert, BSc. J. Hector\nIndex\nIntroduction\nScope\nTest Methodology\nPhase 1: General security posture checks\nPhase 2: Manual code auditing\nPhase 1: General security posture checks\nApplication/Service/Project Specifics\nLanguage Specifics\nExternal Libraries & Frameworks\nConfiguration Concerns\nAccess Control\nLogging/Monitoring\nUnit/Regression and Fuzz-Testing\nDocumentation\nOrganization/Team/Infrastructure Specifics\nSecurity Contact\nSecurity Fix Handling\nBug Bounty\nBug Tracking & Review Process\nEvaluating the Overall Posture\nPhase 2: Manual code auditing & pentesting\nTLS Certificates/Handling\nMiscellaneous Issues\nTIK-01-001 SCA: Security vulnerabilities in outdated library versions (Info)\nConclusions & Verdict\nCure53, Berlin · 03/04/20 1/15 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nIntroduction\n“TiKV is an open-source, distributed, and transactional key-value database. Unlike other\ntraditional NoSQL systems, TiKV not only provides classical key-value APIs, but also\ntransactional APIs with ACID compliance. Built in Rust and powered by Raft, TiKV was\noriginally created to complement TiDB, a distributed HTAP database compatible with the\nMySQL protocol.”\nFrom https://github.com/tikv/tikv\nThis report documents the findings of a security assessment of the TiKV complex. The\nproject was carried out by Cure53 in February 2020 and entailed a broad look at the\nmaturity levels of security found on the TiKV software and surrounding scope, inclusive\nof a penetration test and a code audit.\nIt should be noted that the project was commissioned and funded by CNCF as a typical\nphase of the CNCF project graduation process. This assessment took place in the\nframes of long-term and well-established cooperation between Cure53 and CNCF. Five\ntesters examined the scope in February 2020, namely in calendar weeks CW7 and\nCW8; the invested work amounted to a total of eighteen person-days.\nAfter starting the project in a timely fashion, Cure53 effectively inspected the TiKV\ncomplex in terms of security processes, response and infrastructure. To best structure\nthe work in relation to the objectives, the work was carried out in several phases. During\nPhase 1, Cure53 focused on general security posture checks, while Phase 2 was\ndedicated to manual code auditing. The latter was aimed at finding implementation-\nrelated issues that can lead to security bugs. The findings from each of the phases are\nrecounted in respective chapters of this report.\nPhase 1 notably yielded a rather high number of issues and impressions. On the\ncontrary, Phase 2 was much less fruitful as regards discoveries, meaning that fewer\nfindings stem from the manual code review parts of the audit. This is also because of the\nfact that the majority of time was invested into the posture review and a much shorter\nchunk of the budget was spent on code audits.\nOver the duration of this engagement, the Cure53 team worked closely with the TiKV\nteam, remaining connected with those in-house on a dedicated, private channel on the\nTiKV Slack workspace. 
The communications were smooth and the TiKV team was\nhelpful in answering all of the Cure53’s questions comprehensively.\nIn the following sections, the report will first shed light on the scope and key test\nparameters. Next, all findings will be discussed in dedicated chapters for each of the two\nCure53, Berlin · 03/04/20 2/15 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nphases, starting with Phase 1. After that, the report will discuss one finding from the\ncode review phase, which is essentially a general weakness of lower severity. Finally,\nthe report will close with broader conclusions about this 2020 project. Cure53 elaborates\non the general impressions and issues a verdict on the TiKV project on the basis of the\ntesting team’s observations and collected evidence. Tailored hardening\nrecommendations pertinent to the TiKV code, infrastructure and surroundings are also\nincorporated into the final section.\nScope\n•TiKV 4.0.0-alpha\n◦https://github.com/tikv/tikv/releases \n▪Commit: bd94da3107ad4d515458068b124d8b107ebac1e6\n◦A detailed scope document was shared with Cure53 by TiKV\n◦A test server setup was made available for Cure53 by TiKV\n◦Cure53 was given access to relevant documentation material by the TiKV Team\nTest Methodology\nThe following paragraphs describe the metrics and methodologies used to evaluate the\nsecurity posture of the TiKV project and codebase. In addition, it includes results for\nindividual areas of the project’s security properties that were either selected by Cure53\nor singled out by other involved parties as needing a closer inspection.\nAs with previous tests for CNCF, this assignment was also divided into two phases. The\ngeneral security posture and maturity of the audited code base, TiKV, has been\nexamined in Phase 1. The usage of external frameworks is audited, security constraints\nfor TiKV configurations were examined and the documentation had been deeply studied\nin order to get a general idea of security awareness at TiKV. This was followed with\nresearch on how security reports and vulnerabilities are handled and how much the\nentire standpoint towards a healthily secure infrastructure is taken as a serious matter.\nThe latter phase covered actual tests and audits against the TiKV’s codebase, with the\nactual code quality and its hardening being judged.\nPhase 1: General security posture checks\nAs mentioned earlier, Phase 1 enumerates general qualities of the audited project. Here,\na meta-level perspective on the general security posture is reached by providing details\nabout the language specifics, configurational pitfalls and general documentation. An\nadditional view on how TiKV handles vulnerability reports and how the disclosure\nprocess works is provided as well. A perception rooted in the maturity of TiKV is given,\nCure53, Berlin · 03/04/20 3/15 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nsolely on a meta-level. Actual impressions linked to the code quality relate to Phase 2 of\nthe audit process.\nPhase 2: Manual code auditing\nFor this component, Cure53 performed a small-scale code review and attempted to\nidentify security-relevant areas of the project’s codebase and inspect them for flaws that\nare usually present in distributed database systems. 
This is an addition to the previous\nmaturity analysis and supplies a more detailed perspective on the project’s\nimplementation when it comes to security-relevant portion of the code. Still, this Phase\nwas limited by the budget and cannot be seen as complete without a large-scale code\nreview with in-depth analysis of the multiple parts forming the project’s scope. As such,\nthe goal was not to reach an extensive coverage but to gain an impression about the\noverall quality of TiKV and determine which parts of the project's scope deserve\nthorough audits in the future.\nLater chapters in this report will also elaborate on what was being inspected, why and\nwith what implications for the TiKV software complex.\nPhase 1: General security posture checks\nThis Phase is meant to provide a more detailed overview of the TiKV project’s security\nproperties that are seen as somewhat separate from both the code and the TiKV\nsoftware. The first few subsections of the posture audit focus on more abstract\ncomponents of a specific project instead of judging the code quality itself. Later\nsubsections look at elements that are linked more strongly to the organizational and\nteam aspect of TiKV. In addition to the items presented below, the Cure53 team also\nfocused on the following tasks to be able to conduct a cross-comparative analysis of all\nobservations.\n•The documentation was examined to understand all provided functionality and\nacquire examples of how a real-world deployment of TiKV looks like.\n•The network topology and connected parts of the overall architecture were\nexamined. This also included consideration of relevant runtime- and\nenvironment-specifications that are necessary to run TiKV.\n•The main control flow of the TiKV application was followed and the main\nstructure of the codebase has been analyzed.\n•High-level code audits have been conducted. This was necessary to get a quick\nimpression of the overall style and to reach an understanding of which areas are\ninteresting for a more deep-dive approach in Phase 2 of the audit.\nCure53, Berlin · 03/04/20 4/15 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \n•Normally, past vulnerability reports in TiKV would have been checked out to spot\ninteresting areas that suffered in the past. However, TiKV never received a\nvulnerability report.\n•Concluding on the steps above, the project’s maturity was evaluated; specific\nquestions about the software were compiled from a general catalogue according\nto individual applicability.\nApplication/Service/Project Specifics\nIn this section, Cure53 will describe the areas that were inspected for having insight on\nthe application-specific aspects that lead to a good security posture. These include\nchoice of programming language, selection and oversight of external third-party libraries,\nas well as other technical aspects like logging, monitoring, test coverage and access\ncontrol.\nLanguage Specifics\nProgramming languages can provide functions that pose an inherent security risk and\ntheir use is either deprecated or discouraged. For example, strcpy() in C has led to many\nsecurity issues in the past and should be avoided altogether. Another example would be\nthe manual construction of SQL queries versus the usage of prepared statements. 
The choice of language and the enforcement of proper API usage are therefore crucial for the overall security of the project.

TiKV is written in Rust, a language with built-in memory management that can be both safe and unsafe depending on how it is used. It has proven to be a good choice for programmers who do not want to worry about dangling pointers or Use-After-Free vulnerabilities. TiKV's development originally started in Go, another programming language with a good track record of keeping applications mostly free from memory safety issues. However, due to constraints with Go's garbage collection and unsatisfactory bindings to the C language, the switch to Rust was ultimately made.

Consequently, whether Rust code is written as safe or unsafe plays a big role in how defensively the code has to be written. It is also important to mention that TiKV solely makes use of Rust Nightly versions and thus uses features that are not yet enabled for the stable branch. Generally, TiKV's code makes a solid impression. Source code is sufficiently commented. Test cases are separated from the rest of the runtime. Different components are independently packaged. Deep code nesting is avoided by early error handling. Since TiKV makes use of low-level unsafe code patterns, it is necessary to implement sufficient bounds and access checks. Here, TiKV uses assertions that are also present in the release version and thus makes sure that the program flow terminates early (a minimal sketch of this pattern is shown below). At the time of testing, Cure53 did not manage to spot an issue with the unsafe parts of TiKV.

External Libraries & Frameworks
While external libraries and frameworks can also contain vulnerabilities, it is nonetheless beneficial to rely on sophisticated libraries instead of reinventing the wheel with every project. This is especially true for cryptographic implementations, since those are known to be prone to errors.

TiKV makes heavy use of external libraries and other server components, therefore avoiding reimplementation of already existing solutions. The framework uses Rust's dependency manager called Cargo (https://blog.rust-lang.org/2016/05/05/cargo-pillars.html) to keep track of and manage all its dependencies. The TiKV project is currently not using any kind of tracking (or security tracking) for external third-party dependencies. Running the Cargo-integrated tool cargo-audit (https://blog.rust-lang.org/inside-rust/2019/10/03/Keeping-secure-with-cargo-audit-0.9.html) revealed multiple issues going back as far as September 2018. This includes dependencies which are no longer actively maintained and therefore pose a substantiated security concern, since issues will likely go unnoticed or unfixed and take a very long time to get addressed. Furthermore, multiple packages have been identified that contain known security issues which have already been patched upstream and for which updates are available, leading to the conclusion that patch management is a key area which must be improved for further development of the project. This issue is described in more detail in TIK-01-001.

Further investigation revealed that the cargo-audit plugin was once integrated into the CI process but has since been disabled because a specific package could not be updated and generated constant notifications.
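To make the assertion-based pattern described above more concrete, the following is a minimal, hypothetical Rust sketch; it is not taken from the TiKV codebase, and the function and buffer names are purely illustrative. It shows an unsafe, unchecked copy guarded by assert! calls that, unlike debug_assert!, remain active in release builds.

/// Copies `src` into `dst` starting at `offset` using a raw, unchecked copy.
/// The `assert!` calls stay enabled in release builds (unlike
/// `debug_assert!`), so an out-of-bounds request aborts early instead of
/// silently corrupting memory.
fn copy_into(dst: &mut [u8], offset: usize, src: &[u8]) {
    // Bounds checks performed up front; both remain active in release builds.
    assert!(offset <= dst.len(), "offset out of range");
    assert!(src.len() <= dst.len() - offset, "copy would overflow destination");

    unsafe {
        // Sound only because of the checks above: the destination range
        // [offset, offset + src.len()) is guaranteed to lie inside `dst`,
        // and the mutable/shared borrows guarantee the regions do not overlap.
        std::ptr::copy_nonoverlapping(
            src.as_ptr(),
            dst.as_mut_ptr().add(offset),
            src.len(),
        );
    }
}

fn main() {
    let mut buf = vec![0u8; 8];
    copy_into(&mut buf, 2, b"abc");
    assert_eq!(&buf[2..5], &b"abc"[..]);
}

Because the checks stay enabled in production builds, a violated precondition leads to an immediate, controlled panic rather than silent memory corruption, which mirrors the early-termination behavior described for TiKV's unsafe code.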
As for the disabled cargo-audit integration: after the package had finally been updated, the team forgot to re-enable the audit functionality, leaving the project without any checks or protection regarding known security issues in its dependencies.

Configuration Concerns
Complex and adaptable software systems usually expose many options which can be configured according to the needs of the actual deployment. While this is a very flexible approach, it also leaves immense room for mistakes. As such, it often creates the need for additional and detailed documentation, in particular when it comes to security.

In terms of security, TiKV provides the means to configure TLS for the connections between the individual TiKV nodes. Due to the requirement of having valid certificates, it is hard to provide this feature by default. However, the documentation on the website describing how TLS needs to be configured is fairly simple and straightforward and, as such, should be followed by everyone who runs TiKV across untrusted networks.

At the time of writing, on-disk encryption of data was not available and had a 'work in progress' status which can be tracked through GitHub Issue 3680. In one of the ticket comments, it was mentioned that there may be problems regarding log entries, which may contain some of the stored data. This should definitely be considered during the development of the feature. Once this feature is available in all releases, turning it on by default should be considered to add an additional layer of security to the stored data.

While auditing various code parts, it was discovered that the status server exposes two debug endpoints reachable through HTTP. These endpoints may leak sensitive information, and it is recommended to extend the security configuration section so as to mention this as a side-effect of enabling the status server. Overall, TiKV does not provide much room for misconfigurations that have a severe impact on security.

Access Control
Whenever an application needs to perform a privileged action, it is crucial that an access control model is in place to ensure that appropriate permissions are present. Further, if the application provides an external interface for interaction purposes, some form of separation and access control may be required.

TiKV does not implement any security model of its own: it has no AAA (Authentication, Authorization, Accounting) functionality and does not provide any method to limit access to the existing databases through user accounts, roles or client certificates.

Instead of having to secure the custom interfaces or monitoring ports, TiKV relies on the features offered by Kubernetes and the permissions defined in the local Kubernetes environment. Thus, permissions can be managed by the cluster administrators via the means provided by Kubernetes. If Kubernetes is not in use, TiKV relies on Docker Swarm's RBAC as well as resource and network separation to achieve its access control goals.

Logging/Monitoring
Having a good logging/monitoring system in place allows developers and users to identify potential issues more easily or get an idea of what is going wrong.
It can also provide security-relevant information, for example when the verification of a signature fails. Consequently, having such a system in place has a positive influence on the project.

TiKV builds its logging mechanism on top of Rust's slog crate. Slog's extensibility allows for easy implementation of a standard logging interface that can be triggered with Rust's default macros. Its functionality is centralized within a separate package called tikv_util and implements both formatting and file logging, with output written depending on the configured log level. A simple command-line switch allows operators to specify where logs end up.

Monitoring itself is handled through Prometheus and Grafana, where Prometheus stores monitoring and performance data while Grafana displays it. There are two interfaces one can use. First, there is an HTTP interface that returns monitoring data about PD components, such as information about load balancing, or internal data such as cluster details and capacity levels. Generally, this acts as an interface for keep-alive type data. The metrics interface, on the other hand, exposes performance data ranging from garbage collection to the number of failed commands. This data can be fed directly to Prometheus, which itself provides a useful feature set such as an AlertManager that can additionally forward notifications via mail or SMS. Altogether, TiKV utilizes a modern software stack for logging and monitoring that leaves no real room for complaints.

Unit/Regression and Fuzz-Testing
While tests are essential for any project, their importance grows with the scale of the endeavor. Especially for large-scale projects, testing ensures that functionality is not broken by code changes. Further, it generally helps ensure that features function the way they are supposed to. Regression tests also help guarantee that previously disclosed vulnerabilities do not get reintroduced into the codebase. Testing is therefore essential for the overall security of the project.

TiKV uses Cargo as a universal project tool, as is the standard in Rust projects. Tests are split into test modules in the respective code files and a larger section for integration tests which resides in a separate directory. This follows the best practices for unit testing under Rust (see https://doc.rust-lang.org/book/ch11-01-writing-tests.html). Test runs are integrated into the project's Makefile and run in an automated fashion in TiKV's CI environment.

TiKV integrates multiple different fuzzing libraries to test the project extensively, namely LLVM's libFuzzer, AFL and Google's Honggfuzz. However, these tests do not run in an automated pipeline and are currently executed only sporadically and manually. To strengthen the project's security posture, it is recommended to reintegrate the fuzz tests into an automated CI task, running them at least monthly (a minimal harness sketch is given below). The TiKV team plans to add fuzz testing back to their regular, planned testing schedule.

Documentation
Good documentation contributes greatly to the overall state of the project. It can ease the workflow and ensure the final quality of the code.
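As a reference for bringing fuzzing back into an automated CI task, the following is a minimal cargo-fuzz/libFuzzer-style harness sketch. The parse_request function is a hypothetical stand-in rather than actual TiKV code, and the file layout assumes the conventions of the libfuzzer-sys crate.

// fuzz/fuzz_targets/parse_request.rs -- hypothetical harness layout
#![no_main]

use libfuzzer_sys::fuzz_target;

// Hypothetical stand-in for a decoding routine; a real harness would call
// into the crate under test instead.
fn parse_request(data: &[u8]) -> Result<(u8, &[u8]), ()> {
    match data.split_first() {
        Some((tag, rest)) if *tag < 4 => Ok((*tag, rest)),
        _ => Err(()),
    }
}

fuzz_target!(|data: &[u8]| {
    // libFuzzer generates `data`; panics, aborts and sanitizer reports are
    // surfaced as crashes with the triggering input preserved for triage.
    let _ = parse_request(data);
});

Assuming such a target exists under fuzz/fuzz_targets/, a scheduled CI job could invoke it with a bounded runtime (for example via libFuzzer's -max_total_time option) so that the suite runs at least monthly, as recommended above.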
In terms of documentation, for example, a coding guideline which is strictly enforced during the patch review process ensures that the code is readable and can be easily understood by a broad spectrum of developers. Following good conventions can also reduce the risk of introducing bugs and vulnerabilities into the code.

Overall, the TiKV project leaves a good impression regarding the documentation aspect. Note that during the period of this review, the online documentation contained a small notification that a refactoring is currently taking place. Thus, the state of the documentation, compared to the outline given here, may have changed. However, TiKV does a good job of providing documentation that helps users and developers get started. A general docs section provides information about the features and the architecture of the project. Further, different aspects relevant to general users are well-documented; for example, the deployment, configuration, monitoring and scaling processes are all described in dedicated sections. The 'Deep Dive' section leaves a very positive impression, as it provides a more in-depth explanation of various components, which tremendously eases getting started for new developers and contributors.

In addition, the website provides a dedicated section which briefly explains how to become a contributor and provides references to various documents contained in the repository. There, information about formatting code comments and the deployed style guide is provided, which lays a good foundation for consistent and readable code throughout the project. The repository also contains a more detailed description of contributing, with a rough flow of the contribution process outlined.

Although the overall impression is very good, there is a minor recommendation for improvement regarding the documentation that is worth considering. Currently, the "Secure Config" section contains information on how to report security issues. This may be rather hard to find and should be more easily reachable from the main website. For example, the "Community" drop-down could include a reference to the vulnerability disclosure documentation, to ensure that security researchers can responsibly disclose potential security issues.

Organization/Team/Infrastructure Specifics
This section will describe the areas Cure53 looked at to find out about the security qualities of the TiKV project that cannot be linked to the code and software but rather encompass the handling of incidents. As such, it tackles the level of preparedness for critical bug reports within the TiKV development team. In addition, Cure53 also investigated the degree of community involvement, e.g. through the use of bug bounty programs. While a good level of code quality is paramount for a good security posture, the processes and implementations around it can also make a difference in the final assessment.

Security Contact
To ensure a secure and responsible disclosure of security vulnerabilities, it is important to have a dedicated point of contact.
This person/team should be known, meaning that\nall necessary information such as an email address and preferably also encryption keys\nof that contact should be communicated appropriately.\nThe MAINTAINERS.md4 file lists email addresses of project maintainers that can be\ncontacted to report vulnerabilities. However, the document omits important details, such\nas the respective PGP keys and an outline of the disclosure process. The guideline on\nwhere to report security issues is quite hidden as it is part of the document that also\nexplains how to set up certificates in TiKV5. This is clearly not the appropriate place to\npresent this kind of information. Instead, it is advised to put all details related to reporting\nand disclosing security issues in a dedicated SECURITY.md in the project's repository.\nSecurity Fix Handling\nWhen fixing vulnerabilities in a public repository, it should not be obvious that a particular\ncommit addresses a security issue. Moreover, the commit message should not give a\ndetailed explanation of the issue. This would allow an attacker to construct an exploit\nbased on the patch and the provided commit message prior to the public disclosure of\nthe vulnerability. This means that there is a window of opportunity for attackers between\npublic disclosure and wide-spread patching or updating of vulnerable systems.\nAdditionally, as part of the public disclosure process, a system should be in place to\nnotify users about fixed vulnerabilities.\nAt this point in time, it cannot be evaluated how security fixes are handled and how they\nare disclosed. This is because there are no public vulnerability reports, no CVEs and\nnone of the commits mentions that a security issue was fixed.\n4 https://github.com/tikv/tikv/blob/master/MAINTAINERS.md5 https://tikv.org/docs/3.0/tasks/configure/security/#reporting-vulnerabilities\nCure53, Berlin · 03/04/20 10/15 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nBug Bounty\nHaving a bug bounty program acts as a great incentive in rewarding researchers and\ngetting them interested in projects. Especially for large and complex projects that require\na lot of time to get familiar with the codebase, bug bounties work on the basis of the\npotential reward for efforts.\nThe TiKV project does not have a bug bounty program at present, however this should\nnot be strictly viewed in a negative way. This is because bug bounty programs require\nadditional resources and management, which are not always a given for all projects.\nHowever, if resources become available, establishing a bug bounty program for TiKV\nshould be considered. It is believed that such a program could provide a lot of value to\nthe project.\nBug Tracking & Review Process\nA system for tracking bug reports or issues is essential for prioritizing and delegating\nwork. Additionally, having a review process ensures that no unintentional code, possibly\nmalicious code, is introduced into the codebase. This makes good tracking and review\ninto two core characteristics of a healthy codebase.\nIn TiKV, bugs which are not security related are handled via Github's issue tracker.\nThere is a small guideline6 on what to include in bug reports and an issue template\nexists as well7. However, there is room to improve in regards to visibility of those links as\nthey are not easy to find. 
The guideline is not linked anywhere on the TiKV website and\nthe template was only found in the security documentation.\nUsers can submit their own contributions to the TiKV project via pull requests on Github.\nThe workflow for adding contributions is explained in detail in the project's\nCONTRIBUTING.md which is considered suitable for open source projects. Submissions\nare reviewed by two TiKV maintainers in order to prevent the submission of malicious or\ndysfunctional code.\n6 https://github.com/tikv/tikv/blob/master/.github/ISSUE_TEMPLATE/bug-report.md7 https://github.com/tikv/tikv/issues/new?template=bug-report.md\nCure53, Berlin · 03/04/20 11/15 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \nEvaluating the Overall Posture\nSince TiKV is still a relatively young project, it is hard to judge its security posture in all\naspects. A few parts of the posture audit were found inapplicable. For example, how\nTiKV handles vulnerability reports and disclosure processes will remain to be seen in the\nfuture. Also, things like implementation of access-control are outsourced to software like\nKubernetes or Docker where the permissions have to be defined through RBAC and\nadditional network segmentation.\nStill, the code audits gave Cure53 the impression that TiKV is on a good path and that\npotential concerns about the project’s maturity might be misplaced. The decision to use\nRust as a base language helps a lot. Usage of unsafe code parts is rather limited, as\nsuch, the number of potential memory-safety issues is drastically reduced. Although the\ndocumentation is well-written and helps a lot with getting up to speed with several\ncomponents of TiKV, concerns about correct and secure deployments arose. Cure53\nhopes that the upcoming rewrite of the documentation will help in this regard and provide\nmore insight into areas that can create pitfalls inside the configuration.\nPhase 2: Manual code auditing & pentesting\nThis section comments on the code auditing coverage within areas of special interest\nand documents the steps undertaken during the second phase of the audit against the\nTiKV software complex.\n•TiKV database encryption feature was evaluated and was found to use the\nstandard RocksDB encryption feature. However, at the moment only a bitwise\nXOR is implemented as the feature is not yet production-ready and is to be\nreplaced with an AES implementation in the future.\n•The codebase features a large number of TODO blocks, which according to the\ndevelopment team have not been properly tracked or addressed so far. Those\nissues will now be evaluated and added to the Github issue tracker of the project.\n•Handling of environment variables has been analyzed and produced no findings.\n•TiKV’s SecurityManager code has been analyzed and is responsible for the\nsetup of the TLS configuration as well as the database encryption. No issues\nhave been spotted.\n•The code was analyzed for security critical debug code in production parts\nwithout any results.\n•SST and metadata handling, as well as checksum verification, were analyzed.\nNo immediate issues have been spotted. However, the code trail spans multiple\nprojects and multiple programming languages and is rather complex, so it could\nnot be audited in depth.\nCure53, Berlin · 03/04/20 12/15 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 
14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \n•The connection between the sample TiKV Go client and the server has been\nfuzzed on gRPC level to see how stable the protocol is handled, but no\nunintended behaviors or crashes were spotted.\n•Use-cases of the unsafe statements in combination with buffer copy operations\nand length checks were audited. No issues were found in the given time. Two\ninstances in the code seemed a bit risky at first but, upon closer inspection, they\nturned out to be safe and did not pose a risk.\n•The status server component was checked in regard to the exposed HTTP\nendpoints. One of two debug endpoints ( /debug/pprof/heap) could potentially\nlead to leaking sensitive heap information. However, this information is collected\nby Prometheus and used for their profiling.\nTLS Certificates/Handling\nThe TiKV project supports the use of TLS to establish secure sessions for\ncommunication. The overall implementation used for handling TLS and certificates\nthroughout the project signifies standard components made available by the OpenSSL\nbindings into the Rust language. The OpenSSL package is used by TiKV to offer TLS\nfunctions throughout the implementation. A configuration option was found to disable\nproper hostname validation, which has been added to the configuration section of this\ndocument. However, no insecure standard values are in use.\nMiscellaneous Issues\nThis section covers those noteworthy findings that did not lead to an exploit but might aid\nan attacker in achieving their malicious goals in the future. Most of these results are\nvulnerable code snippets that did not provide an easy way to be called. Conclusively,\nwhile a vulnerability is present, an exploit might not always be possible.\nTIK-01-001 SCA: Security vulnerabilities in outdated library versions (Info)\nAnalyzing the libraries in use revealed that multiple ones do not leverage the most\nrecent versions available. Some libraries are no longer actively maintained and pose a\nthreat to the future security posture of the project and should either be exchanged for\ndifferent libraries or have to be maintained by the TiKV project group. In addition,\nmultiple libraries contain known security vulnerabilities for which patches and updates\nare available.\nSteps to Reproduce:\n1.Install the cargo-audit package\n2.cargo install cargo-audit --features=fix\n3.Change to the code directory of TiKV and run\nCure53, Berlin · 03/04/20 13/15 Dr.-Ing. Mario Heiderich, Cure53\n Bielefelder Str. 14 \n D 10709 Berlin\n cure53.de · mario@cure53.de \n4.cargo-audit audit\n5.Results will be shown on the command line\n6.Alternatively run make pre-audit && make audit\nIt is recommended to upgrade the necessary libraries and formulate a long-term plan on\nhow to handle outdated and no longer maintained external dependencies. As the project\nMakefile contains the audit option, it should be resolved why it was not used or how it\nneeds to be integrated into the CI build process.\nConclusions & Verdict\nThis assessment of the TiKV scope, curated by CNCF and executed by Cure53 in early\n2020, concludes with generally positive results. The security posture of the TiKV has\nbeen positively evaluated by the involved five members of the Cure53 team. 
Similarly, the code base and the documentation were noted to be of high quality; the state of the TiKV software stack can therefore be summarized as mature.

To give some context, this investigation belongs to a series of high-level assessments conducted by Cure53 for projects selected by CNCF. However, it stands in contrast to classic code audits and pentests due to its meta-level perspective and forward-looking focus. With such a framing of the project's premise, Cure53 comments mostly on the general security qualities (Phase 1) with minimal emphasis on individual findings (Phase 2). This was also reflected in the allocation of the budget.

Starting with some of the positive aspects and findings, Cure53 would like to underline that the TiKV project makes a sound and strong appearance at the meta-level as regards code quality, coding patterns, style coherence and general structure. This is also reinforced by the fact that static code analysis in the later parts of the audit phase did not reveal significant problems. In Cure53's expert opinion, automated testing or vulnerability scanning will likely not yield more findings. However, deep-dives into specific code areas are definitely necessary.

Next up, fuzzing the gRPC API revealed a solid foundation. The software stack seemed stable and Cure53 did not run into any unexpected behaviors or sudden crashes. The chosen language, Rust, does its job of providing a sound codebase which suffers from no obvious memory safety issues. The number of unsafe code blocks is kept at a minimum. Those marked as unsafe are implemented in a defensive manner and include thorough error checks. Finally, logging and monitoring is well-handled by supplying the necessary endpoints for Prometheus. Pluggable Grafana instances can additionally be fed with data to visualize any abnormalities.

While the above conclusions point to stellar results, Cure53 also noticed some aspects that could be improved or that reflect minor inconsistencies that should be addressed. This by no means overturns the above verdict but rather aims to present slight alterations that could be made to strengthen the perceived high level of maturity even further. First, worth highlighting is the fact that the TiKV codebase contains a fairly large number of TODOs in the sources. This is generally a sign of incomplete functionality and might mean that it is perhaps too early to judge the maturity of TiKV holistically and conclusively.

Phase 1 of the project highlighted early on that "Unit/Regression and Fuzz-Testing" is somewhat incomplete. Specifically, the implemented fuzzing tests are not properly run or evaluated at the moment. This definitely requires attention. Additionally, the integrated dependency scanner was disabled for convenience reasons because one dependency could not be updated in the past. Sadly, this was later forgotten and the setup was never brought back to the optimal state. Essentially, this also led to the finding documented as TIK-01-001.

It is nevertheless vital to highlight that the team behind TiKV was very quick to acknowledge the issues mentioned above (especially the disabled dependency scanner) and was very thankful to have them pointed out, promising to have them addressed in the near future.
Finally, while it is clear that resources are limited for managing and rewarding issues found by external community members, the project would likely benefit from a bug bounty program. In other words, dedicating financial means to such a mechanism can be advised.

In conclusion, TiKV should be seen as properly mature and delivering on its security promises. This verdict mostly stems from the positive notes above and the overall good code quality and documentation. In light of the findings from this February 2020 assessment, Cure53 can recommend TiKV for public deployment, especially when integrated into a containerized solution via Kubernetes, with Prometheus for additional monitoring.

Cure53 would like to thank Calvin Weng, Yongquan Ren, Jay Lee, Neil Shen, Nick Cameron, Wink Yao and Zhou Quiang from the TiKV team, as well as Chris Aniszczyk of The Linux Foundation, for their excellent project coordination, support and assistance, both before and during this assignment. Special gratitude also needs to be extended to The Linux Foundation for sponsoring this project.
{ "category": "App Definition and Development", "file_name": "Security-Audit.pdf", "project_name": "TiKV", "subcategory": "Database" }
[ { "data": "C l o u d E v e n t s\nS e c u r i t y A s s e s s m e n t\nOctober 26, 2022\nPrepared for:\nDoug Davis,\nCloud Native Computing Foundation\nOpen Source Technology Improvement Fund\nPrepared by:\nAlex Useche and Hamid Kashfi\nA b o u t T r a i l o f B i t s\nFounded in 2012 and headquartered in New York, Trail of Bits provides technical security \nassessment and advisory services to some of the world’s most targeted organizations. We \ncombine high- end security research with a real -world attacker mentality to reduce risk and \nfortify code. With 100+ employees around the globe, we’ve helped secure critical software \nelements that support billions of end users, including Kubernetes and the Linux kernel.\nWe maintain an exhaustive list of publications at\nhttps://github.com/trailofbits/publications\n, \nwith links to papers, presentations, public audit reports, and podcast appearances.\nIn recent years, Trail of Bits consultants have showcased cutting-edge research through \npresentations at CanSecWest, HCSS, Devcon, Empire Hacking, GrrCon, LangSec, NorthSec, \nthe O’Reilly Security Conference, PyCon, REcon, Security BSides, and SummerCon.\nWe specialize in software testing and code review projects, supporting client organizations \nin the technology, defense, and finance industries, as well as government entities. Notable \nclients include HashiCorp, Google, Microsoft, Western Digital, and Zoom.\nTrail of Bits also operates a center of excellence with regard to blockchain security. Notable \nprojects include audits of Algorand, Bitcoin SV, Chainlink, Compound, Ethereum 2.0, \nMakerDAO, Matic, Uniswap, Web3, and Zcash.\nTo keep up to date with our latest news and announcements, please follow\n@trailofbits\non \nTwitter and explore our public repositories at\nhttps://github.com/trailofbits\n.\nTo engage us \ndirectly, visit our “Contact” page at\nhttps://www.trailofbits.com/contact\n,\nor email us at \ninfo@trailofbits.com\n.\nTrail of Bits, Inc. \n228 Park Ave S #80688 \nNew York, NY 10003 \nhttps://www.trailofbits.com \ninfo@trailofbits.com\nT r a i l o f B i t s\n1\nCloudEvents Security Assessment\nP U B L I C\nN o t i c e s a n d R e m a r k s\nC o p y r i g h t a n d D i s t r i b u t i o n\n© 2022 by Trail of Bits, Inc.\nAll rights reserved. Trail of Bits hereby asserts its right to be identified as the creator of this \nreport in the United Kingdom.\nThis report is considered by Trail of Bits to be public information;\nit is licensed to the Linux \nFoundation under the terms of the project statement of work and has been made public at \nthe Linux Foundation’s request.\nMaterial within this\nreport may not be reproduced or \ndistributed in part or in whole without the express written permission of Trail of Bits.\nT e s t C o v e r a g e D i s c l a i m e r\nAll activities undertaken by Trail of Bits in association with this project were performed in \naccordance with a statement of work and agreed upon project plan.\nSecurity assessment projects are time-boxed and often reliant on information that may be \nprovided by a client, its affiliates, or its partners. As a result, the findings documented in \nthis report should not be considered a comprehensive list of security issues, flaws, or \ndefects in the target system or codebase.\nTrail of Bits uses automated testing techniques to rapidly test the controls and security \nproperties of software. 
These techniques augment our manual security review work, but \neach has its limitations: for example, a tool may not generate a random edge case that \nviolates a property or may not fully complete its analysis during the allotted time. Their use \nis also limited by the time and resource constraints of a project.\nT r a i l o f B i t s\n2\nCloudEvents Security Assessment\nP U B L I C\nT a b l e o f C o n t e n t s\nAbout Trail of Bits\n1\nNotices and Remarks\n2\nTable of Contents\n3\nExecutive Summary\n5\nProject Summary\n7\nProject Goals\n8\nProject Targets\n9\nProject Coverage\n11\nThreat Model\n13\nData Types:\n13\nComponents\n13\nTrust Zones\n14\nTrust Zone Connections\n16\nThreat Actors\n17\nData Flow\n17\nHigh-Level Overview\n17\nCloudEvents SDK\n18\nAutomated Testing\n22\nSummary of Findings\n23\nDetailed Findings\n24\n1. [Java SDK] Reliance on default encoding\n24\n2. [Java SDK] Outdated Vulnerable Dependencies\n26\nT r a i l o f B i t s\n3\nCloudEvents Security Assessment\nP U B L I C\n3. [JavaScript SDK] Potential XSS in httpTransport()\n28\n4. [Go SDK] Outdated Vulnerable Dependencies\n29\n5. [Go SDK] Downcasting of 64-bit integer\n30\n6. [Go SDK] ReadHeaderTimeout not configured\n32\n7. [CSharp SDK] Outdated Vulnerable Dependencies\n33\nSummary of Recommendations\n35\nA. Vulnerability Categories\n36\nB. Security Controls\n38\nC. Non-Security-Related Findings\n39\nD. Automated Analysis Tool Configuration\n43\nD.1. Semgrep\n43\nD.2. CodeQL\n44\nD.3. TruffleHog\n44\nD.4. snyk-cli\n44\nD.5. yarn audit\n45\nD.6. Intellij IDE Plugins\n45\nT r a i l o f B i t s\n4\nCloudEvents Security Assessment\nP U B L I C\nE x e c u t i v e S u m m a r y\nE n g a g e m e n t O v e r v i e w\nThe Linux Foundation, via strategic partner Open Source Technology Improvement Fund, \nengaged Trail of Bits to review the security of its CloudEvents specification and SDKs. From \nSeptember 19 to September 30, 2022, a team of two consultants conducted a security \nreview of the client-provided source code, with four person-weeks of effort. Details of the \nproject’s timeline, test targets, and coverage are provided in subsequent sections of this \nreport.\nP r o j e c t S c o p e\nOur testing efforts were focused on the identification of flaws that could result in a \ncompromise of confidentiality, integrity, or availability of the target system. We conducted \nthis audit with full knowledge of the system. We had access to the source code and \ndocumentation. We performed dynamic automated and manual testing of the target \nsystem, using both automated and manual processes.\nS u m m a r y o f F i n d i n g s\nThe audit did not uncover any significant flaws or defects that could impact system \nconfidentiality, integrity, or availability. A summary of the findings and details on notable \nfindings are provided below.\nSome of the findings in this report have their severity marking as\nUndetermined\n. This is \nbecause in engagements of this nature, the code and vulnerabilities in its dependencies are \nhighly dependent on the context in which they are used. As a result, the severity of issues \ncannot be determined and generalized. 
Moreover, due to time constraints, we did not \nmanually triage outdated, third-party dependencies or their vulnerabilities.\nE X P O S U R E A N A L Y S I S\nSeverity\nCount\nInformational\n1\nUndetermined\n6\nC A T E G O R Y B R E A K D O W N\nCategory\nCount\nData Validation\n1\nDenial of Service\n1\nPatching\n3\nT r a i l o f B i t s\n5\nCloudEvents Security Assessment\nP U B L I C\nUndefined Behavior\n2\nN o t a b l e F i n d i n g s\nSignificant flaws that impact system confidentiality, integrity, or availability are listed below.\n●\nT O B - C E - { 1 , 4 , 7 } \nReviews of multiple SDKs as well as consulting with the team indicates that SDKs are \nnot actively and routinely maintained for security updates, leaving some of them \nwith multiple outdated and vulnerable dependencies. Maintenance of SDK health is \na subject that is already covered in the\nSDK Governance\nspecification.\nT r a i l o f B i t s\n6\nCloudEvents Security Assessment\nP U B L I C\nP r o j e c t S u m m a r y\nC o n t a c t I n f o r m a t i o n\nThe following managers were associated with this project:\nDan Guido\n, Account Manager\nMary O’Brien\n, Project\nManager \ndan@trailofbits.com\nmary.obrien@trailofbits.com\nDerek Zimmer\n, Program Manager\nAmir Montazery\n, Program\nManager \nderek@ostif.org\namir@ostif.org\nThe following engineers were associated with this project:\nAlex Useche\n, Consultant\nHamid Kashfi\n, Consultant \nalex.useche@trailofbits.com\nhamid.kashfi@trailofbits.com\nP r o j e c t T i m e l i n e\nThe significant events and milestones of the project are listed below.\nDate\nEvent\nSeptember 7, 2022\nPre-project kickoff call\nSeptember 26, 2022\nStatus update meeting #1\nOctober 3, 2022\nDelivery of report draft\nOctober 3, 2022\nReport readout meeting\nOctober 26, 2022\nDelivery of final report\nT r a i l o f B i t s\n7\nCloudEvents Security Assessment\nP U B L I C\nP r o j e c t G o a l s\nThe engagement was scoped to provide a security assessment of the CloudEvents \nspecification and SDKs, including a lightweight threat model. 
Specifically, we sought to \nachieve the following non-exhaustive list of goals:\n●\nBuild an understanding of the CloudEvents specification and evaluate its suitability\nfor use in secure environments\n●\nThreat modeling\n●\nIndividual SDK audit and conformance to the specification\n●\nDocumentation review\n●\nCurrent testing evaluation and recommendations for improvement\nT r a i l o f B i t s\n8\nCloudEvents Security Assessment\nP U B L I C\nP r o j e c t T a r g e t s\nThe engagement involved a review and testing of the targets listed below\nC l o u d E v e n t s S p e c i fi c a t i o n\nRepository\nhttps://github.com/cloudevents/spec\nVersion\n2e09394c6297dad6d25edbc50717bbc71dba124a\nType\nSpecification documentation\nPlatform\nN/A\nC l o u d E v e n t s S D K f o r G o\nRepository\nhttps://github.com/cloudevents/sdk-go\nVersion\na7187527ab3278128c1b2a8fe9856d49ecddf25d\nType\nGo\nPlatform\nLinux\nC l o u d E v e n t s S D K f o r J a v a\nRepository\nhttps://github.com/cloudevents/sdk-java\nVersion\nb9eaa2fcaaf5569552e39ece4fce4a99064145e9\nType\nJava\nPlatform\nLinux\nC l o u d E v e n t s S D K f o r P H P\nRepository\nhttps://github.com/cloudevents/sdk-php\nVersion\n602cd26557e5522060531b3103450b34b678be1c\nType\nPHP\nPlatform\nLinux\nC l o u d E v e n t s S D K f o r P y t h o n\nRepository\nhttps://github.com/cloudevents/sdk-python\nVersion\n60f848a2043e64b37f44878f710a1c38f4d2d5f4\nType\nPython\nPlatform\nLinux\nT r a i l o f B i t s\n9\nCloudEvents Security Assessment\nP U B L I C\nC l o u d E v e n t s S D K f o r R u s t\nRepository\nhttps://github.com/cloudevents/sdk-rust\nVersion\nc380078bf45fcebe1af6299d75539cd6ba37f7d3\nType\nRust\nPlatform\nLinux\nT r a i l o f B i t s\n10\nCloudEvents Security Assessment\nP U B L I C\nP r o j e c t C o v e r a g e\nThis section provides an overview of the analysis coverage of the review, as determined by \nour high-level engagement goals. Our approaches include the following:\n●\nA review of the CloudEvents core specification\n●\nA review of the latest release (v1.0.2) version of the following protocol specifications \nfor CloudEvents:\n○\nAMQP Protocol Binding\n○\nAVRO Event Format\n○\nHTTP Protocol Binding\n○\nJSON Event Format\n○\nKafka Protocol Binding\n○\nMQTT Protocol Binding\n○\nNATS Protocol Binding\n○\nWebSockets Protocol Binding\n○\nProtobuf Event Format\n○\nXML Event Format\n○\nWeb hook\n●\nA lightweight threat modeling exercise covering potential high-level threats that \ncould arise when using CloudEvents and the SDKs\n●\nReview of testing coverage for the various SDKs\n●\nReview data validation strategies\n●\nReview of serialization, deserialization, encoding, and decoding logic\n●\nReview for potential cases where event data may be unnecessarily leaked\n●\nA manual code review of the SDKs listed in the\nProject\nTargets\nsection of this report\nT r a i l o f B i t s\n11\nCloudEvents Security Assessment\nP U B L I C\nC o v e r a g e L i m i t a t i o n s\nBecause of the time-boxed nature of testing work, it is common to encounter coverage \nlimitations. The following list outlines the coverage limitations of the engagement and \nindicates system elements that may warrant further review:\n●\nBecause we focused on the most used SDKs, our coverage for the following \nlanguages was limited:\n○\nPHP, CSharp, Python, Ruby, PowerShell\n●\nManual static-analysis reviews were limited to SDK implementations. 
Testing and \nexample codes provided along with each SDK was skipped.\n●\nVulnerable third-party libraries are highlighted and included without in-depth triage \nof their potential impact on the given SDK.\n●\nCloudEvents extensions and adapters were out of scope for this engagement.\nT r a i l o f B i t s\n12\nCloudEvents Security Assessment\nP U B L I C\nT h r e a t M o d e l\nAs part of the audit, Trail of Bits conducted a lightweight threat model, drawing from the \nMozilla Rapid Risk Assessment\nmethodology and the\nNational Institute of Standards and \nTechnology’s (NIST) guidance on data-centric threat modeling (\nNIST 800-154\n). We began our \nassessment of the design of CloudEvents by reviewing the documentation on the \nCloudEvents website and the various README files in the CloudEvents spec repository.\nD a t a T y p e s :\nAn application that produces CloudEvents logs and event data which contains various \nattributes. All CloudEvents contain the following:\n●\nID\n●\nSource URI\n●\nSpecification version\n●\nType of event (often used for observability, routing, etc.)\n●\nData specific to the event\nAdditionally, CloudEvents could optionally include the following:\n●\nData content type\n●\nURI identifying the data schema\n●\nA subject string\n●\nTime stamp\nC o m p o n e n t s\nThe following table describes each of the components identified for our analysis.\nComponent\nDescription\nEvents\nInclude context and data about an occurrence. Events are routed from an \nevent producer (the source) to interested event consumers.\nSource\nContext in which the occurrence happened. In some cases, the source \nmight consist of multiple Producers. Typically a managed service.\nProducer\nApplication or process producing the event (i.e. monitoring app).\nConsumer\nReceives the event and acts upon it. It uses the context and data to \nexecute\nT r a i l o f B i t s\n13\nCloudEvents Security Assessment\nP U B L I C\nIntermediary\nReceives a message containing an event for the purpose of forwarding it to \nthe next receiver, which might be another intermediary or a Consumer.\nAction\nTypically, custom code developed by a developer such as a lambda \nfunction or Azure function.\nMessage\nContains a body with context (metadata) and Event Data (the payload or \nactual message). Messages comprise the following:\n●\nMessage Context: Metadata about an event. Used by tools and \napplications to identify the relationship of Events to aspects of the \nsystem or to other Events.\n●\nMessage Event Data: Event payload\nT r u s t Z o n e s\nTrust zones capture logical boundaries where controls should or could be enforced by the \nsystem and allow developers to implement controls and policies between components’ \nzones.\nZone\nDescription\nIncluded Components\nInternet\nThe wider external-facing \ninternet zone typically includes \nusers and cloud services that use \nCloudEvents to send and process \nevent data.\n●\nAll\nLocal Network\nAny network inaccessible from \nthe internet (i.e., private virtual \nnetwork or on-prem intranet \nwhere an application sending or \nreceiving CloudEvents resides)\n●\nAll\nLocal System\nServer running application with \nCloudEvents SDK. This could be \nmanaged by a cloud provider or\n●\nProducer\n●\nConsumer\n●\nIntermediary\nT r a i l o f B i t s\n14\nCloudEvents Security Assessment\nP U B L I C\nan on-prem system. 
If an \nattacker gets access to a local \nsystem, they would have access \nto the processes that make up \nthe producer.\nIn the table below, we further distinguished between two general zones. Because we are \nmodeling the risk profile of an SDK, we consider that it can be used in multiple different \nways where, for instance, both the producer and consumer components could run on the \ninternet or an internal network. Distinguishing between the zones shown below helps us \nbetter describe sets of attacks based on communication between the consumer, producer, \nand location of the source.\nZone\nDescription\nIncluded Components\nProducer Zone\nThe zone where the producer \nruns and where CloudEvents are \nfirst created from events sent by \na source. The producer can be in \nan internal or externally \nreachable network.\n●\nProducer\n●\nEvents\n●\nMessage\n●\nSource (*)\nConsumer \nZone\nThe zone where the consumer \nruns. The consumer uses the \nCloudEvents SDK to read, \ndecode, and deserialize events \nsent by a producer. The \nconsumer can be in an internal \nor externally reachable network.\n●\nConsumer\n●\nMessage\n●\nAction\nIntermediary \nZone\nThe zone in which an optional \nintermediary producer may be \nlocated. An intermediary may \nmutate CloudEvents before \nsending them to the final \nconsumer.\n●\nConsumer\n●\nMessage\n●\nAction\n(*) It is possible that the source and producers are the same. For instance, an API may generate\nT r a i l o f B i t s\n15\nCloudEvents Security Assessment\nP U B L I C\nits own CloudEvents by using the CloudEvents SDK. However, this may not always be the case, as \nthe producer could be a separate application designed to receive log and event data.\nT r u s t Z o n e C o n n e c t i o n s\nWe can draw from our understanding of what data flows between trust zones and why to \nenumerate attack scenarios.\nOriginating \nZone\nDestination Zone\nConnection\nAuthentication & \nAuthorization\nProducer Zone\nConsumer Zone\nInternet \n→\n Internet\nProducer or \nconsumer \ndependent\nInternet \n→\n Local Network\nLocal Network \n→\n Local \nNetwork\nLocal Network \n→\n Internet\nProducer Zone\nIntermediary \nZone\nInternet \n→\n Internet\nInternet \n→\n Local Network\nLocal Network \n→\n Local \nNetwork\nLocal Network \n→\n Internet\nIntermediary \nZone\nConsumer Zone\nInternet \n→\n Internet\nInternet \n→\n Local Network\nLocal Network \n→\n Local \nNetwork\nLocal Network \n→\n Internet\nLocal System\nLocal System\n●\nLocalhost \ncommunication with\nT r a i l o f B i t s\n16\nCloudEvents Security Assessment\nP U B L I C\nLocal Network\nconsumer or producer \napplication\n●\nOutput to STDOUT\nT h r e a t A c t o r s\nSimilarly to establishing trust zones, defining malicious actors before conducting a threat \nmodel is useful in determining which protections, if any, are necessary to mitigate or \nremediate a vulnerability. We also define other “users” of the system who may be impacted \nby, or induced to undertake, an attack.\nActor\nDescription\nExternal Attacker\nAn attacker on the internet. 
An external attacker will seek to get access \nto internal systems running CloudEvent sources, producers, or \nconsumers.\nMalicious Internal \nUser\nMalicious internal users often have privileged access to a wide range of \nresources, such as the network transporting events, systems \nproducing or consuming events, or intermediary systems.\nInternal Attacker\nAn internal attacker is one who has transited one or more trust \nboundaries, such as an attacker with direct access to the system \nrunning CloudEvents.\nAdministrator\nA cloud, system, or network administrator with privileged access to the \ninfrastructure and services where CloudEvents are produced or \nreceived.\nApplication \ndeveloper\nAn application or service developer who uses the CloudEvents SDK.\nD a t a F l o w\nH i g h - L e v e l O v e r v i e w \nThe diagram below demonstrates CloudEvents flow of events and processing and considers \ncomponents that are important to contextualize possible threat scenarios.\nT r a i l o f B i t s\n17\nCloudEvents Security Assessment\nP U B L I C\nC l o u d E v e n t s S D K \nThe diagram below highlights the operations performed by a CloudEvents SDK:\nT r a i l o f B i t s\n18\nCloudEvents Security Assessment\nP U B L I C\nT h r e a t S c e n a r i o s\nThreat\nDescription\nActor(s)\nComponent(s)\nProducer DoS \ncondition due to \nimproper \nencoding and \nserialization\nThe producer may use a \nCloudEvents SDK with buggy \nencoding or serialization \nlogic (e.g., nil dereferences), \nleading the producer to \ncrash. An internal attacker \nthat is aware of such a bug \ncould generate malicious \nevent data in order to cause \nthe producer to crash. A lack \nof sufficient unit testing \nrequirements could increase \nthe likelihood of this threat.\n●\nInternal \nattacker\n●\nEvent\n●\nProducer\n●\nCloudEvent \nMessage\n●\nSource\nConsumer DoS \ncondition due to \nimproper \ndecoding an \ndeserialization\nThe consumer may use a \nCloudEvents SDK with buggy \ndecoding or deserialization \nlogic (e.g., nil dereferences), \nleading the consumer to \ncrash. A lack of sufficient \nunit testing requirements \ncould increase the likelihood \nof this threat.\n●\nEvent\n●\nConsumer\n●\nMessage\n●\nSource\nDropped or out \nof order events\nThe spec does not define \nany fields that can be used \nto keep track of events, as \ntimestamps are optional. A \nnaive developer may \nassume events are received \nin the order in which they \nwere produced and not \naccount for the possibility of \ndropped events due to disk \nfaults or race condition \nbugs.\n●\nApplication \ndeveloper\n●\nProducer\n●\nMessage\n●\nSource\n●\nEvent\nT r a i l o f B i t s\n19\nCloudEvents Security Assessment\nP U B L I C\nA malicious \nextension \ncompromises the \nconfidentiality or \nintegrity of event \ndata\nA developer uses a \nCloudEvents extension that \ncan introduce unexpected, \nmalicious behavior, allowing \nattackers to re-route or \nmodify CloudEvents data.\n●\nApplication \ndeveloper\n●\nExternal \nattacker\n●\nProducer\n●\nConsumer\n●\nIntermedia \nry\n●\nMessage\n●\nEvent\nCloudEvents are \nmodified by a \nmalicious \nintermediary, or \none the \ndeveloper is not \naware of.\nAn intermediary modifies \nCloudEvents before \nrerouting them to a \nconsumer. Because the spec \ndoes not define signing \nrequirements, there is no \nway for the source to know \nwhether the event was \nchanged. 
Although \nmaintaining the integrity of \nCloudEvent messages is \nstated as a\nnon-goal\nin the \nCloudEvent spec, there \nshould be a central place \nwhere developers can easily \nbecome aware of this.\n●\nMalicious \ninternal user\n●\nInternal \nattacker\n●\nExternal \nattacker\n●\nIntermedia \nry\n●\nConsumer\n●\nMessage\n●\nEvent\nA malicious or \nvulnerable SDK \ndependency \ncompromises the \nconfidentiality or \nintegrity of event \ndata\nA malicious SDK dependency \ncan introduce unexpected, \nmalicious behavior, allowing \nattackers to re-route or \nmodify CloudEvents data or \nto introduce backdoors or \nRCE vulnerabilities.\n●\nExternal \nattacker\n●\nInternal \nattacker\n●\nConsumer\n●\nProducer\n●\nIntermedia \nry\n●\nMessage\n●\nEvent\n●\nMessage\nVulnerable use of \nSDK due to lack \nof clarity on what \neach SDK \nsupports\nThe specs mention that \nSDKs SHOULD validate data. \nHowever, this is not a \nrequirement. A developer \nmight, for instance, assume \nthat every SDK is required to \nvalidate CloudEvents and as \na result use the SDK in an \ninsecure manner, skipping \nlogic such as data validation\n●\nDeveloper\n●\nConsumer\n●\nProducer\n●\nIntermedia \nry\n●\nEvent\nT r a i l o f B i t s\n20\nCloudEvents Security Assessment\nP U B L I C\ndue to assumptions based \non differences between \nwhat SDK supports.\nR e c o m m e n d a t i o n s\n●\nCentralize and condense documentation regarding security considerations that \ndevelopers should be aware of when using the CloudEvents SDKs. In particular, \ndevelopers should be able to easily reference security concerns that they should be \nresponsible for.\n●\nThe CloudEvents specification uses the words “MUST” to define logic or \nresponsibilities that every SDK should implement. On the other hand, it uses the \nword “SHOULD” to define suggested but optional features. Developers should be \nable to easily reference which SHOULD features are or are not implemented by each \nSDK. We suggest using a table format in the spec’s repo to display this information.\n●\nConsider adding fuzzing tests for every SDK, and adding the various SDKs to the \nOSS-fuzz\nproject.\n●\nDocument minimum unit testing requirements for all SDKs. Currently, there are no \nformal testing requirements for pull requests sent for the various SDKs.\nT r a i l o f B i t s\n21\nCloudEvents Security Assessment\nP U B L I C\nA u t o m a t e d T e s t i n g\nTrail of Bits uses automated techniques to extensively test the security properties of \nsoftware. 
We use both open-source static analysis and fuzzing utilities, along with tools \ndeveloped in house, to perform automated testing of source code and compiled software.\nT e s t H a r n e s s C o n fi g u r a t i o n\nWe used the following tools in the automated testing phase of this project:\nTool\nDescription\nSemgrep\nAn open-source static analysis tool for finding bugs and enforcing code \nstandards when editing or committing code and during build time\nClippy\nRust linter\nJetBrains \nInspectors\nJetBrain build in inspectors for Go and Rust codebases\nT r a i l o f B i t s\n22\nCloudEvents Security Assessment\nP U B L I C\nS u m m a r y o f F i n d i n g s\nThe table below summarizes the findings of the review, including type and severity details.\nID\nTitle\nType\nSeverity\n1\n[Java SDK] Reliance on default encoding\nUndefined \nBehavior\nUndetermined\n2\n[Java SDK] Outdated Vulnerable Dependencies\nPatching\nUndetermined\n3\n[JavaScript SDK] Potential XSS in httpTransport()\nData Validation\nUndetermined\n4\n[Go SDK] Outdated Vulnerable Dependencies\nPatching\nUndetermined\n5\n[Go SDK] Downcasting of 64-bit integer\nUndefined \nBehavior\nUndetermined\n6\n[Go SDK] ReadHeaderTimeout not configured\nDenial of Service\nInformational\n7\n[CSharp SDK] Outdated Vulnerable Dependencies\nPatching\nUndetermined\nT r a i l o f B i t s\n23\nCloudEvents Security Assessment\nP U B L I C\nD e t a i l e d F i n d i n g s\n1 . [ J a v a S D K ] R e l i a n c e o n d e f a u l t e n c o d i n g\nSeverity:\nUndetermined\nDifficulty:\nLow\nType: Undefined Behavior\nFinding ID: TOB-CE-1\nTarget: Java SDK\nD e s c r i p t i o n \nMultiple instances were identified in which the\ngetByte()\nstandard Java API is used \nwithout specifying any encoding. Doing so causes the Java SDK to rely on the system \ndefault encoding, which can differ across platforms and systems used by event actors and \ncause unexpected differences in processing of event data.\nThe specification states that appropriate and RFC-compliant encodings MUST be followed, \nbut the implementation in the Java SDK and documentation should be improved to \nhighlight the importance of matching encoding across actors.\nNot all observed instances are necessarily problematic, as they are handling binary data. 
\nHowever, this behavior should be documented and handled in the SDK implementation, \ndocumentation, and supplied examples.\n28 import io.cloudevents.CloudEvent;\n29 import io.cloudevents.core.builder.CloudEventBuilder;\n30\n31 import java.net.URI;\n32\n33 final CloudEvent event = CloudEventBuilder.v1()\n34 .withId(\"000\")\n35 .withType(\"example.demo\")\n36 .withSource(URI.create(\"http://example.com\"))\n37 .withData(\"text/plain\",\"Hello world!\".\ngetBytes()\n) \n38 .build();\n39 ```\nFigure 1.1: Java SDK documentation providing bad example for\ngetBytes() \n(\ndocs/core.md#28–40\n)\nT r a i l o f B i t s\n24\nCloudEvents Security Assessment\nP U B L I C\n93\nprivate\nbyte\n[]\ngetBinaryData\n(Message<?>\nmessage)\n{ \n94\nObject\npayload\n=\nmessage.getPayload(); \n95\nif\n(payload\ninstanceof\nbyte\n[])\n{ \n96\nreturn\n(\nbyte\n[])\npayload; \n97\n} \n98\nelse\nif\n(payload\ninstanceof\nString)\n{ \n99\nreturn\n((String)\npayload).\ngetBytes(Charset.defaultCharset()\n); \n100\n} \n101\nreturn\nnull\n; \n102 }\nFigure 1.2: Using\ngetBytes()\nand relying on default\ncharset can lead to unexpected behavior \n(\nspring/src/main/java/io/cloudevents/spring/messaging/CloudEventMessageCo\nnverter.java#93–102\n)\nE x p l o i t S c e n a r i o \nThe event producer, the intermediary (using the SDK), and the consumer use different \ndefault encodings for their systems. Without acknowledging a fixed encoding, the data is \nhandled and processed using an unintended encoding, resulting in unexpected behavior.\nR e c o m m e n d a t i o n s \nShort term, improve the SDK documentation to highlight the importance of matching \nencoding acros actors.\nLong term, review all similar instances across the SDK and improve test cases to cover \nhandling of message and data encoding.\nR e f e r e n c e s\n●\nThe Java Tutorials; Byte Encodings and Strings\n●\nPMD New rule: Reliance on default charset #2186\nT r a i l o f B i t s\n25\nCloudEvents Security Assessment\nP U B L I C\n2 . [ J a v a S D K ] O u t d a t e d V u l n e r a b l e D e p e n d e n c i e s\nSeverity:\nUndetermined\nDifficulty:\nMedium\nType: Patching\nFinding ID: TOB-CE-2\nTarget: Java SDK\nD e s c r i p t i o n \nMultiple outdated dependencies with publicly known vulnerabilities, including multiple \nhigh- and medium-risk vulnerabilities, were identified in the Java SDK. 
The open-source\nsnyk\ntool was used to automatically audit each module.\nDue to time constraints and ease \nof remediation, exploitability of these issues within the context of the SDK was not \nmanually reviewed.\nA list of Java SDK modules and their vulnerable dependencies is provided below:\nModule\nDependency\nDetails\nIo.cloudevents:cloudevents-kafka\norg.apache.kafka:kafka-clients@2.5.0\nintroduced by\norg.apache.kafka:kafka-clients@2.5.0\nTiming Attack\n[Medium \nSeverity]\nio.cloudevents:cloudevents-http-vertx\nio.netty:netty-common@4.1.74.Final\nintroduced by io.vertx:vertx-core@4.2.5 >\nio.netty:netty-common@4.1.74.Final\nInformation Exposure\n[Medium Severity]\nio.cloudevents:cloudevents-http-vertx\nio.netty:netty-handler@4.1.74.Final\nintroduced by io.vertx:vertx-core@4.2.5 >\nio.netty:netty-handler@4.1.74.Final\nImproper Certificate \nValidation\n[Medium Severity]\nio.cloudevents:cloudevents-protobuf\ncom.google.protobuf:protobuf-java@3.15.0\nintroduced by\ncom.google.protobuf:protobuf-java@3.15.0\nDenial of Service (DoS)\n[High Severity]\nio.cloudevents:cloudevents-protobuf\ncom.google.code.gson:gson@2.8.6\nintroduced by\ncom.google.protobuf:protobuf-java-util@3.1\n5.0 > com.google.code.gson:gson@2.8.6\nDenial of Service (DoS)\n[High Severity]\nT r a i l o f B i t s\n26\nCloudEvents Security Assessment\nP U B L I C\nE x p l o i t S c e n a r i o \nAttackers identified vulnerable dependencies by observing the public GitHub repository of \nthe SDK. They can then craft malicious requests (HTTP, event, etc.) that will be processed by \nSDK APIs to exploit these issues.\nR e c o m m e n d a t i o n s \nShort term, upgrade all outdated third-party dependencies used in the SDK.\nLong term, outdated and vulnerable dependencies should be automatically and \ncontinuously highlighted as part of the CI/CD pipeline. Alternatively, developers can \nconfigure GitHub actions that warns developers when new security updates are available \nfor dependencies.\nT r a i l o f B i t s\n27\nCloudEvents Security Assessment\nP U B L I C\n3 . [ J a v a S c r i p t S D K ] P o t e n t i a l X S S i n h t t p T r a n s p o r t ( )\nSeverity:\nUndetermined\nDifficulty:\nLow\nType: Data Validation\nFinding ID: TOB-CE-3\nTarget:\nsdk-javascript/src/transport/http/index.ts\nD e s c r i p t i o n \nThe\nhttpTransport()\nmethod in the JavaScript SDK writes\nraw response messages from \nthe endpoint when an error occurs. If user-controlled data is reflected in the error \nmessage, and the callee of this API includes the response in a web page without sanitizing \nthe output, the application using the SDK and rendering its results will become vulnerable \nto XSS.\nValidation and sanitization of data is not enforced by the specification, but the SDK \ndocumentation should highlight lack of sanitization of HTTP responses when this API is \nused in an emitter.\n55 req.on(\n\"error\"\n,\nreject); \n56\nreq.write(message.body);\n57 req.end();\nFigure 3.1: Directly writing HTTP response bypasses HTML escaping and can lead to XSS \n(\nsrc/transport/http/index.ts#55–57\n)\nE x p l o i t S c e n a r i o \nAn application consumes the API output and includes it in a web page without sanitizing. 
\nAttackers trigger XSS in the application by injecting events that trigger error responses \ncontaining their payload.\nR e c o m m e n d a t i o n s \nShort term, escape JavaScript/HTML when directly writing out responses.\nLong term, improve the SDK documentation to highlight the importance of sanitization of \nresponses from SDK APIs, as it’s not mandated by the specification or the SDK.\nT r a i l o f B i t s\n28\nCloudEvents Security Assessment\nP U B L I C\n4 . [ G o S D K ] O u t d a t e d V u l n e r a b l e D e p e n d e n c i e s\nSeverity:\nUndetermined\nDifficulty:\nLow\nType: Patching\nFinding ID: TOB-CE-4\nTarget: Go SDK\nD e s c r i p t i o n \nMultiple outdated dependencies with publicly known vulnerabilities were identified in the \nGo SDK. The open-source\nsnyk\ntool was used to automatically\naudit each module. Due to \ntime constraints and ease of remediation, exploitability of these issues within the context \nof the SDK was not manually reviewed.\nA list of Go SDK modules and their vulnerable dependencies is provided below:\nModule\nDependency\nDetails\nprotocol/ws/v2/go.mod\nIntroduced through:\nnhooyr.io/websocket@1.8.6\nDenial of Service (DoS)\n[High Severity]\nsamples/ws/go.mod\nIntroduced through\n/protocol/ws/v2@2.5.0\nDenial of Service (DoS)\n[High Severity]\nE x p l o i t S c e n a r i o \nAttackers identified vulnerable dependencies by observing the public GitHub repository of \nthe SDK. They can then craft malicious requests (HTTP, event, etc.) that will be processed by \nSDK APIs to exploit these issues.\nR e c o m m e n d a t i o n s \nShort term, upgrade all outdated third-party dependencies used in the SDK.\nLong term, outdated and vulnerable dependencies should be automatically and \ncontinuously highlighted as part of the CI/CD pipeline. Alternatively, developers can \nconfigure GitHub actions that warns developers when new security updates are available \nfor dependencies.\nT r a i l o f B i t s\n29\nCloudEvents Security Assessment\nP U B L I C\n5 . [ G o S D K ] D o w n c a s t i n g o f 6 4 - b i t i n t e g e r\nSeverity:\nUndetermined\nDifficulty:\nLow\nType: Undefined Behavior\nFinding ID: TOB-CE-5\nTarget:\nsql/v2/parser/expression_visitor.go\n,\nsql/v2/utils/casting.go\nD e s c r i p t i o n \nThe\nstrconv.Atoi\nfunction parses an\nint\n: a machine\ndependent integer type that will be \nint64 for 64-bit targets. There are places throughout the codebase where the result \nreturned from\nstrconv.Atoi\nis later converted to a\nsmaller type: int16 or int32. This may \noverflow with a certain input.\n279\nfunc\n(v\n*expressionVisitor)\nVisitIntegerLiteral(ctx \n*gen.IntegerLiteralContext)\ninterface\n{}\n{ \n280\nval,\nerr\n:=\nstrconv\n.Atoi\n(ctx.GetText()) \n281\nif\nerr\n!=\nnil\n{ \n282\nv.parsingErrors\n=\nappend\n(v.parsingErrors,\nerr) \n283\n} \n284\nreturn\nexpression.NewLiteralExpression(\nint32\n(val)\n) \n285 }\nFigure 5.1: Downcasting of 64-bit integer \n(\nsql/v2/parser/expression_visitor.go#279–285\n)\n34\ncase\ncesql.IntegerType: \n35\nswitch\nval.(\ntype\n)\n{ \n36\ncase\nstring\n: \n37\nv,\nerr\n:=\nstrconv\n.Atoi\n(val.(\nstring\n)) \n38\nif\nerr\n!=\nnil\n{ \n39\nerr\n=\nfmt.Errorf(\n\"cannot cast from String\nto Integer: \n%w\"\n,\nerr)\n40\n} \n41\nreturn\nint32\n(v)\n,\nerr \n42\n}\nFigure 5.2: Downcasting of 64-bit integer (\nsql/v2/utils/casting.go#34–42\n)\nE x p l o i t S c e n a r i o \nA value is parsed from a configuration file with Atoi, resulting in an integer. 
It is then \ndowncasted to a lower precision value, resulting in a potential overflow or underflow that is \nnot handled by the Golang compiler an error or panic.\nT r a i l o f B i t s\n30\nCloudEvents Security Assessment\nP U B L I C\nR e c o m m e n d a t i o n s \nShort term, when parsing strings into fixed-width integer types, use\nstrconv.ParseInt\nor\nstrconv.ParseUint\nwith an appropriate\nbitSize\nargument\ninstead of\nstrconv.Atoi\n.\nLong term, use open-source automated static-analysis tools such as Semgrep as part of the \ndevelopment process to check for common vulnerabilities in the code.\nT r a i l o f B i t s\n31\nCloudEvents Security Assessment\nP U B L I C\n6 . [ G o S D K ] R e a d H e a d e r T i m e o u t n o t c o n fi g u r e d\nSeverity:\nInformational\nDifficulty:\nLow\nType: Denial of Service\nFinding ID: TOB-CE-6\nTarget: Go SDK\nD e s c r i p t i o n \nThe\nhttp.server\nAPI in Go can be initialized with\nfour different timeouts, including\nReadHeaderTimeout\n. Without specifying a value for\nthis timeout, the listener instance will \nbecome vulnerable to the Slowloris DoS attack.\n34\n// After listener is invok\n35 listener,\nerr\n:=\np.listen() \n36\nif\nerr\n!=\nnil\n{ \n37\nreturn\nerr \n38 }\n39\n40 p.server\n=\n&http.Server{ \n41\nAddr:\nlistener.Addr().String(), \n42\nHandler:\nattachMiddleware(p.Handler,\np.middleware), \n43 }\nFigure 6.1: ReadheaderTimeout not configured for http.server \n(\nv2/protocol/http/protocol_lifecycle.go#34–43\n)\nE x p l o i t S c e n a r i o \nAttackers can exhaust server resources by opening multiple HTTP connections to the \nserver, keeping the connections open, and slowly and continuously sending new HTTP \nheader lines over the socket. This will eventually exhaust all open file handles.\nR e c o m m e n d a t i o n s \nShort term, specify appropriate timeout value for the\nReadHeaderTimeout\nparameter.\nLong term, improve the code and SDK documentation to consider\nother means\nof handling \ntimeouts and preventing DoS attacks.\nT r a i l o f B i t s\n32\nCloudEvents Security Assessment\nP U B L I C\n7 . [ C S h a r p S D K ] O u t d a t e d V u l n e r a b l e D e p e n d e n c i e s\nSeverity:\nUndetermined\nDifficulty:\nLow\nType: Patching\nFinding ID: TOB-CE-7\nTarget: CSharp SDK\nD e s c r i p t i o n \nMultiple outdated dependencies with publicly known vulnerabilities were identified in the \nCSharp SDK. The open-source\nsnyk\ntool was used to\nautomatically audit each module. 
Due \nto time constraints and ease of remediation, exploitability of these issues within the \ncontext of the SDK was not manually reviewed.\nA list of CSharp SDK modules and their vulnerable dependencies is provided below:\nModule\nDependency\nDetails\nCloudNative.CloudEvents.AspNet\nCore.csproj\nIntroduced through: Microsoft.AspNetCore.Mvc.Core\nVersion=\"2.1.16\"\nRemote Code\nExecution\n[High Severity]\nCloudNative.CloudEvents.AspNet\nCore.csproj\nIntroduced through: Microsoft.AspNetCore.Mvc.Core\n2.1.16\n>Microsoft.Extensions.DependencyModel/2.1.0\n> Newtonsoft.Json 9.0.1\nInsecure Defaults\n[High Severity]\nCloudNative.CloudEvents.Avro/o\nbj/project.assets.json\nIntroduced through: Apache.Avro/1.11.0\n> Newtonsoft.Json\": \"10.0.3\nInsecure Defaults\n[High Severity]\nCloudNative.CloudEvents.Avro/o\nbj/project.assets.json\nIntroduced through: Apache.Avro/1.11.0\n> Newtonsoft.Json\": \"10.0.3 >\nSystem.Xml.XmlDocument 4.3.0 > … >\nSystem.Text.RegularExpression 4.3.0\nDenial of Service\n[High Severity]\nE x p l o i t S c e n a r i o \nAttackers identified vulnerable dependencies by observing the public GitHub repository of \nthe SDK. They can then craft malicious requests (HTTP, event, etc.) that will be processed by \nSDK APIs to exploit these issues.\nT r a i l o f B i t s\n33\nCloudEvents Security Assessment\nP U B L I C\nR e c o m m e n d a t i o n s \nShort term, upgrade all outdated third-party dependencies used in the SDK.\nLong term, outdated and vulnerable dependencies should be automatically and \ncontinuously highlighted as part of the CI/CD pipeline. Alternatively, developers can \nconfigure GitHub actions that warn developers when new security updates are available for \ndependencies.\nT r a i l o f B i t s\n34\nCloudEvents Security Assessment\nP U B L I C\nS u m m a r y o f R e c o m m e n d a t i o n s\nThe CloudEvent specification and SDK are works in progress with multiple SDKs \nimplemented in ten different languages. Trail of Bits recommends that Linux Foundation \naddress the findings detailed in this report and take the following additional steps prior to \ndeployment:\n●\nIntroduce automated dependency auditing and vulnerability scanning into the \ndevelopment process for all SDKs and improve the SDK Governance guidelines to \nmake these steps a mandatory part of contribution and maintenance.\n●\nUse static-analysis tools such as Semgrep (or commercially available alternatives) as \nwell as linting plugins for the IDEs to highlight and mitigate common vulnerable bug \npatterns and usage of deprecated APIs as soon as they are introduced into the code. \nMany such tools can be directly integrated into the CI/CD pipeline or used as GitHub \nactions.\nT r a i l o f B i t s\n35\nCloudEvents Security Assessment\nP U B L I C\nA . 
V u l n e r a b i l i t y C a t e g o r i e s\nThe following tables describe the vulnerability categories, severity levels, and difficulty \nlevels used in this document.\nVulnerability Categories\nCategory\nDescription\nAccess Controls\nInsufficient authorization or assessment of rights\nAuditing and Logging\nInsufficient auditing of actions or logging of problems\nAuthentication\nImproper identification of users\nConfiguration\nMisconfigured servers, devices, or software components\nCryptography\nA breach of system confidentiality or integrity\nData Exposure\nExposure of sensitive information\nData Validation\nImproper reliance on the structure or values of data\nDenial of Service\nA system failure with an availability impact\nError Reporting\nInsecure or insufficient reporting of error conditions\nPatching\nUse of an outdated software package or library\nSession Management\nImproper identification of authenticated users\nTesting\nInsufficient test methodology or test coverage\nTiming\nRace conditions or other order-of-operations flaws\nUndefined Behavior\nUndefined behavior triggered within the system\nT r a i l o f B i t s\n36\nCloudEvents Security Assessment\nP U B L I C\nSeverity Levels\nSeverity\nDescription\nInformational\nThe issue does not pose an immediate risk but is relevant to security best \npractices.\nUndetermined\nThe extent of the risk was not determined during this engagement.\nLow\nThe risk is small or is not one the client has indicated is important.\nMedium\nUser information is at risk; exploitation could pose reputational, legal, or \nmoderate financial risks.\nHigh\nThe flaw could affect numerous users and have serious reputational, legal, \nor financial implications.\nDifficulty Levels\nDifficulty\nDescription\nUndetermined\nThe difficulty of exploitation was not determined during this engagement.\nLow\nThe flaw is well known; public tools for its exploitation exist or can be \nscripted.\nMedium\nAn attacker must write an exploit or will need in-depth knowledge of the \nsystem.\nHigh\nAn attacker must have privileged access to the system, may need to know \ncomplex technical details, or must discover other weaknesses to exploit this \nissue.\nT r a i l o f B i t s\n37\nCloudEvents Security Assessment\nP U B L I C\nB . 
S e c u r i t y C o n t r o l s\nThe following tables describe the security controls and rating criteria used for the threat \nmodel.\nSecurity Controls\nCategory\nDescription\nAccess Controls\nAuthorization, session management, separation of duties, etc.\nAudit and \nAccountability\nLogging, non-repudiation, monitoring, analysis, reporting, etc.\nAwareness and \nTraining\nPolicy, procedures, and related capabilities\nConfiguration \nManagement\nInventory, secure baselines, configuration management, & change control\nCryptography\nThe cryptographic controls implemented at rest, in transit, and in process\nDenial of Service\nThe controls to defend against different types of denial-of-service attacks \nimpacting availability\nIdentification and \nAuthentication\nUser and system identification and authentication controls\nMaintenance\nPreventative and predictive maintenance, and related controls\nSystem and \nCommunications \nProtection\nNetwork level controls to protect data\nSystem and \nInformation \nIntegrity\nSoftware integrity, malicious code protection, monitoring, information \nhandling, and related controls\nSystem and \nServices \nAcquisition\nDevelopment lifecycle, documentation, supply chain, etc.\nT r a i l o f B i t s\n38\nCloudEvents Security Assessment\nP U B L I C\nC . N o n - S e c u r i t y - R e l a t e d F i n d i n g s\nThe following recommendations are not associated with specific vulnerabilities. However, \nthey enhance code readability and may prevent the introduction of vulnerabilities in the \nfuture.\n●\nThe CESQL parser of the Java SDK (and likely other SDKs) uses ANTLR to generate a \nparser (Java) code based on the supplied grammar file. The\nCESQLParserParse \nclass file generated as the result contains a switch statement (at line 457) that lacks \na default value. This can lead to unexpected behavior when parsing expressions.\nMore in-depth analysis and review of the implementation of ANTLR is necessary to \ninvestigate the actual impact of this issue. Moreover, the ANTLR and expressions \nparsed by its implementation need to be audited to assess potential attack vectors \n(such as Expression Language injection) based on user-controlled data.\nFigure C.1: Switch statement missing default value \n(\ntarget/generated-sources/antlr4/io/cloudevents/sql/generated/CESQLParser\nParser.java\n#\n457-525)\n●\nThe\nCloudEventDeserializer\nclass in the Java SDK implements\na\nswitch \nstatement\nfor parsing the\nspecVersion\nvalue. 
The first\ncase ending at line 124 is \nnot ending or breaking, which makes it fall through the next case statement.\n111\nswitch\n(specVersion)\n{ \n112\ncase\nV03: \n113\nboolean\nisBase64\n=\n\"base64\"\n.equals(getOptionalStringNode(\nthis\n.node, \nthis\n.p,\n\"datacontentencoding\"\n)); \n114\nif\n(node.has(\n\"data\"\n))\n{ \n115\nif\n(isBase64)\n{ \n116\ndata\n=\nT r a i l o f B i t s\n39\nCloudEvents Security Assessment\nP U B L I C\nBytesCloudEventData.wrap(node.remove(\n\"data\"\n).binaryValue()); \n117\n}\nelse\n{ \n118\nif\n(JsonFormat.dataIsJsonContentType(contentType))\n{ \n119\n// This solution is quite\nbad, but i see no alternatives \nnow.\n120\n// Hopefully in future\nwe can improve it \n121\ndata\n=\nnew\nJsonCloudEventData(node.remove(\n\"data\"\n)); \n122\n}\nelse\n{ \n123\nJsonNode\ndataNode\n=\nnode.remove(\n\"data\"\n); \n124\nassertNodeType(dataNode,\nJsonNodeType.STRING,\n\"data\"\n, \n\"Because content type is not a json, only a string is accepted as data\"\n); \n125\ndata\n= \nBytesCloudEventData.wrap(dataNode.asText().getBytes());\nFigure C.2: Select statement fall through can lead to unexpected behavior \n(\nformats/json-jackson/src/main/java/io/cloudevents/jackson/CloudEventDese\nrializer.java#111–125\n)\n●\nJsonCloudEvcentData()\nis documented as deprecated\nin the Java SDK, but was \nfound to be used in the implementation.\n118\nif\n(JsonFormat.dataIsJsonContentType(contentType))\n{ \n119\n// This solution is quite bad,\nbut i see no alternatives now. \n120\n// Hopefully in future we can\nimprove it \n121\ndata\n=\nnew\nJsonCloudEventData(\nnode.remove(\n\"data\"\n)); \n...\n136\nif\n(JsonFormat.dataIsJsonContentType(contentType))\n{ \n137\n// This solution is quite bad,\nbut i see no alternatives now. \n138\n// Hopefully in future we can improve\nit \n139\ndata\n=\nnew\nJsonCloudEventData(\nnode.remove(\n\"data\"\n));\nFigure C.3: Using deprecated API methods \n(\nformats/json-jackson/src/main/java/io/cloudevents/jackson/CloudEventDese\nrializer.java#118–139\n)\n●\nThe Go SDK is using deprecated Golang APIs in multiple places across the codebase. \nThe staticcheck tool from the Golang toolchain was used to identify and highlight \nthe following cases:\n114 ioutil.ReadAll(reader)\nclient_protocol.go:13:2:\n\"io/ioutil\"\nhas been deprecated\nsince Go 1.16: As of Go \n1.16, the same functionality is now provided by package io or package os, and\nthose implementations should be preferred in new code. See the specific\nfunction documentation for details. (SA1019)\ninternal/connection_test.go:43:38:\ngrpc.WithInsecure\nis deprecated: use \nWithTransportCredentials and insecure.NewCredentials() instead. Will be\nsupported throughout 1.x. (SA1019)\ninternal/connection_test.go:45:38:\ngrpc.WithInsecure\nis deprecated: use \nWithTransportCredentials and insecure.NewCredentials() instead. Will be\nsupported throughout 1.x. (SA1019)\nT r a i l o f B i t s\n40\nCloudEvents Security Assessment\nP U B L I C\nprotocol_test.go:25:38:\ngrpc.WithInsecure\nis deprecated: use \nWithTransportCredentials and insecure.NewCredentials() instead. Will be\nsupported throughout 1.x. (SA1019)\nwrite_message.go:11:2: \"\nio/ioutil\"\nhas been deprecated\nsince Go 1.16: As of Go \n1.16, the same functionality is now provided by package io or package os, and\nthose implementations should be preferred in new code. See the specific\nfunction documentation for details. 
(SA1019)\nparser/expression_visitor.go:35:9: assigning the result of this type assertion to\na variable (switch tree := tree.(type)) could eliminate type assertions in\nswitch cases (S1034)\ntest/tck_test.go:9:2: \"\nio/ioutil\n\" has been deprecated\nsince Go 1.16: As of Go \n1.16, the same functionality is now provided by package io or package os, and\nthose implementations should be preferred in new code. See the specific\nfunction documentation for details. (SA1019)\nclient/client_test.go:12:2:\n\"io/ioutil\"\nhas been deprecated\nsince Go 1.16: As of \nGo 1.16, the same functionality is now provided by package io or package os,\nand those implementations should be preferred in new code. See the specific\nfunction documentation for details. (SA1019)\nclient/observability_service.go:28:2: this value of ctx is never used (SA4006)\nbinding/test/mock_binary_message.go:12:2: \"\nio/ioutil\"\nhas been deprecated since \nGo 1.16: As of Go 1.16, the same functionality is now provided by package io\nor package os, and those implementations should be preferred in new code. See\nthe specific function documentation for details. (SA1019)\nbinding/test/mock_structured_message.go:12:2:\n\"io/ioutil\"\nhas been deprecated \nsince Go 1.16: As of Go 1.16, the same functionality is now provided by\npackage io or package os, and those implementations should be preferred in\nnew code. See the specific function documentation for details. (SA1019)\nbinding/utils/structured_message_test.go:11:2:\n\"io/ioutil\"\nhas been deprecated \nsince Go 1.16: As of Go 1.16, the same functionality is now provided by\npackage io or package os, and those implementations should be preferred in\nnew code. See the specific function documentation for details. (SA1019)\nclient/client_test.go:13:2:\n\"io/ioutil\"\nhas been deprecated\nsince Go 1.16: As of \nGo 1.16, the same functionality is now provided by package io or package os,\nand those implementations should be preferred in new code. See the specific\nfunction documentation for details. (SA1019)\nprotocol/http/message_test.go:12:2:\n\"io/ioutil\"\nhas\nbeen deprecated since Go \n1.16: As of Go 1.16, the same functionality is now provided by package io or\npackage os, and those implementations should be preferred in new code. See\nthe specific function documentation for details. (SA1019)\nprotocol/http/protocol.go:303:36: should use constant http.StatusTooManyRequests\ninstead of numeric literal 429 (ST1013)\nprotocol/http/protocol_retry.go:13:2:\n\"io/ioutil\"\nhas been deprecated since Go \n1.16: As of Go 1.16, the same functionality is now provided by package io or\npackage os, and those implementations should be preferred in new code. See\nthe specific function documentation for details. (SA1019)\nprotocol/http/result_test.go:95:5: should\nuse t.Errorf(...)\ninstead of \nt.Error(fmt.Sprintf(...)) (S1038)\nprotocol/http/write_request.go:12:2:\n\"io/ioutil\"\nhas\nbeen deprecated since Go \n1.16: As of Go 1.16, the same functionality is now provided by package io or\npackage os, and those implementations should be preferred in new code. See\nthe specific function documentation for details. (SA1019)\nT r a i l o f B i t s\n41\nCloudEvents Security Assessment\nP U B L I C\nFigure C.4: Usage of Golang deprecated methods in the SDK \n(\nprotocol/ws/v2/client_protocol.go#114\n)\n●\nUnhandled errors were identified. Below is an example. 
The same pattern was \nobserved multiple times across the codebase:\n112\nfunc\nconsumeStream(reader\nio.Reader)\n{ \n113\n//TODO is there a less expensive way to consume\nthe stream? \n114\nioutil.ReadAll(reader\n) \n115 }\nFigure C.5: Unhandled error (\nprotocol/ws/v2/client_protocol.go#112–115\n)\nT r a i l o f B i t s\n42\nCloudEvents Security Assessment\nP U B L I C\nD . A u t o m a t e d A n a l y s i s T o o l C o n fi g u r a t i o n\nAs part of this assessment, we performed automated testing on the Skiff codebase using \nfive tools: Semgrep, CodeQL, snyk-cli, yarn audit, and\ncomposer outdated\ntools and \ncommands.. Details about testing are provided below.\nD . 1 . S e m g r e p \nWe performed static analysis on multiple SDK source code repositories using Semgrep to \nidentify low-complexity weaknesses. We used several rule sets (some examples are shown \nin figure D.1.1), including our own set of\npublic\nrules\n, which resulted in the identification of \nsome code quality issues and areas that may require further investigation. Note that these \nrule sets will output repeated results, which should be ignored.\nsemgrep --metrics=off --sarif --config=\n\"p/r2c\" \nsemgrep --metrics=off --sarif --config=\n\"p/r2c-ci\" \nsemgrep --metrics=off --sarif --config=\n\"p/r2c-security-audit\" \nsemgrep --metrics=off --sarif --config=\n\"p/r2c-best-practices\" \nsemgrep --metrics=off --sarif --config=\n\"p/eslint-plugin-security\" \nsemgrep --metrics=off --sarif --config=\n\"p/javascript\" \nsemgrep --metrics=off --sarif --config=\n\"p/typescript\" \nsemgrep --metrics=off --sarif --config=\n\"p/clientside-js\" \nsemgrep --metrics=off --sarif --config=\n\"p/react\" \nsemgrep --metrics=off --sarif --config=\n\"p/nodejs\" \nsemgrep --metrics=off --sarif --config=\n\"p/nodejsscan\" \nsemgrep --metrics=off --sarif --config=\n\"p/owasp-top-ten\" \nsemgrep --metrics=off --sarif --config=\n\"p/jwt\" \nsemgrep --metrics=off --sarif --config=\n\"p/xss\" \nsemgrep --metrics=off --sarif --config=\n\"p/supply-chain\" \nsemgrep --metrics=off --sarif --config=\n\"p/security-audit\" \nsemgrep --metrics=off --sarif --config=\"p/golang\"\nsemgrep --metrics=off --sarif --config=\"r/dgryski.semgrep-go\"\nFigure D.1.1: Commands used to run Semgrep\nAlternatively, Semgrep can be configured to automatically detect and use relevant rulesets \nbased on an identified programming language or filename. Note that the auto mode \nrequires submitting metrics online, which means some metadata about the package and \nrepository will be disclosed to the tool developers. This is not an issue with open-source \nprojects but should be considered if Semgrep is used against private or internal \nrepositories.\nsemgrep --config=\nauto\nFigure D.1.2: Commands used to run Semgrep in auto mode\nT r a i l o f B i t s\n43\nCloudEvents Security Assessment\nP U B L I C\nD . 2 . C o d e Q L \nWe intended to use CodeQL to analyze multiple SDK codebases. Due to time constraints \nand the requirements of developing custom queries in order to properly process SDK APIs, \nthis step was skipped in the engagement. 
However, developers can benefit from this tool in \nthe future, especially when the SDK is integrated with larger codebases.\n# Create the JavaScript database\ncodeql database create codeql.db --language=javascript\n# Run all JavaScript queries\ncodeql database analyze codeql.db --format=sarif-latest --output=codeql_res.sarif --\ntob-javascript-all.qls\n# Create the Golang database\ncodeql database create codeql.db --language=golang\n# Run all Golang queries\ncodeql database analyze codeql.db --format=sarif-latest --output=codeql_res.sarif --\ntob-golang-all.qls\nFigure D.2.1: Commands used to run CodeQL\nD . 3 . T r u \u0000 e H o g \nWe ran\ncomposer outdated\non the PHP SDK source repository\nto highlight outdated \npackages. Two outdated packages were identified, but the vulnerabilities they contain \nwould not affect the SDK.\ncomposer outdated\nFigure D.3.1: Command used to run TruffleHog\nD . 4 .\nsnyk-cli \nWe ran\nsnyk-cli\non multiple SDK source repositories\nto identify outdated and vulnerable \nthird-party packages. Snyk automatically performs recursive checks on sub-modules and \ndifferent programming languages as dependency configuration files for every used \nlanguage are found.\nsnyk-to-html\nis a third-party\ntool and should be installed as a \nseparate Node package if needed. It is worth noting that using snyk cli often depends on \nthe existence of a functional tool-chain for the language. For instance, in order for snyk to \nbe able to produce complete results for Java, the package should be buildable by\nmaven\n.\nsnyk test\nsnyk test --all-projects --json |snyk-to-html --output ../snyk.html\nFigure D.4.1: Command used to run\nsnyk-cli\nT r a i l o f B i t s\n44\nCloudEvents Security Assessment\nP U B L I C\nD . 5 .\nyarn audit \nWe ran the\nyarn audit\ntool on the JavaScript SDK source\ndirectory to identify outdated \nand vulnerable third-party packages. It is recommended to use\nyarn audit\nalongside\nsnyk-cli\nfor better coverage. The\nyarn-audit-html\nis a third-party tool and should be \ninstalled as a separate Node package if needed.\nyarn audit\nyarn audit –group dependencies\nyarn audit --high |grep -E 'high|critical' |sort|uniq\nyarn audit --json |yarn-audit-html --output ../yarn.html\nFigure D.5.1: Command used to run\nyarn audit\nagainst\nJavaScript SDK\nD . 6 .\nIntellij IDE Plugins\nWe benefited from the following Intellij IDE plugins during our manual code review process \nto quickly highlight common vulnerable code patterns.\n●\nFindBugs\n(with FindSecurityBugs plugin)\n●\nSnyk Security\n(Identify vulnerabilities in dependencies)\n●\nCheckMarx AST\n(Identify vulnerabilities in dependencies)\n●\nSonarLint\n(Identify common vulnerable code patterns)\n●\nPVS-Studio\n(Identify common vulnerable code patterns)\n●\nBuilt in inspectors\nT r a i l o f B i t s\n45\nCloudEvents Security Assessment\nP U B L I C\n" } ]
{ "category": "App Definition and Development", "file_name": "CE-SecurityAudit-2022-10.pdf", "project_name": "CloudEvents", "subcategory": "Streaming & Messaging" }
[ { "data": "Filter Iterator\nAuthor : David Abrahams, Jeremy Siek, Thomas Witt\nContact : dave@boost-consulting.com ,jsiek@osl.iu.edu ,witt@ive.uni-hannover.de\nOrganization :Boost Consulting , Indiana University Open Systems Lab , University of\nHanover Institute for Transport Railway Operation and Construction\nDate : 2004-11-01\nCopyright : Copyright David Abrahams, Jeremy Siek, and Thomas Witt 2003.\nabstract: The filter iterator adaptor creates a view of an iterator range in which some\nelements of the range are skipped. A predicate function object controls which elements\nare skipped. When the predicate is applied to an element, if it returns true then the\nelement is retained and if it returns false then the element is skipped over. When\nskipping over elements, it is necessary for the filter adaptor to know when to stop so\nas to avoid going past the end of the underlying range. A filter iterator is therefore\nconstructed with pair of iterators indicating the range of elements in the unfiltered\nsequence to be traversed.\nTable of Contents\nfilter_iterator synopsis\nfilter_iterator requirements\nfilter_iterator models\nfilter_iterator operations\nExample\nfilter_iterator synopsis\ntemplate <class Predicate, class Iterator>\nclass filter_iterator\n{\npublic:\ntypedef iterator_traits<Iterator>::value_type value_type;\ntypedef iterator_traits<Iterator>::reference reference;\ntypedef iterator_traits<Iterator>::pointer pointer;\ntypedef iterator_traits<Iterator>::difference_type difference_type;\ntypedef /* see below */ iterator_category;\nfilter_iterator();\nfilter_iterator(Predicate f, Iterator x, Iterator end = Iterator());\nfilter_iterator(Iterator x, Iterator end = Iterator());\ntemplate<class OtherIterator>\n1filter_iterator(\nfilter_iterator<Predicate, OtherIterator> const& t\n, typename enable_if_convertible<OtherIterator, Itera-\ntor>::type* = 0 // exposition\n);\nPredicate predicate() const;\nIterator end() const;\nIterator const& base() const;\nreference operator*() const;\nfilter_iterator& operator++();\nprivate:\nPredicate m_pred; // exposition only\nIterator m_iter; // exposition only\nIterator m_end; // exposition only\n};\nIfIterator models Readable Lvalue Iterator and Bidirectional Traversal Iterator then itera-\ntor_category is convertible to std::bidirectional_iterator_tag . Otherwise, if Iterator models\nReadable Lvalue Iterator and Forward Traversal Iterator then iterator_category is convertible to\nstd::forward_iterator_tag . 
Otherwise iterator_category is convertible to std::input_iterator_tag .\nfilter_iterator requirements\nThe Iterator argument shall meet the requirements of Readable Iterator and Single Pass Iterator or\nit shall meet the requirements of Input Iterator.\nThe Predicate argument must be Assignable, Copy Constructible, and the expression p(x) must be\nvalid where pis an object of type Predicate ,xis an object of type iterator_traits<Iterator>::value_type ,\nand where the type of p(x) must be convertible to bool .\nfilter_iterator models\nThe concepts that filter_iterator models are dependent on which concepts the Iterator argument\nmodels, as specified in the following tables.\nIfIterator models then filter_iterator models\nSingle Pass Iterator Single Pass Iterator\nForward Traversal Iterator Forward Traversal Iterator\nBidirectional Traversal Iterator Bidirectional Traversal Iterator\nIfIterator models then filter_iterator models\nReadable Iterator Readable Iterator\nWritable Iterator Writable Iterator\nLvalue Iterator Lvalue Iterator\nIfIterator models then filter_iterator models\nReadable Iterator, Single Pass Iterator Input Iterator\nReadable Lvalue Iterator, Forward Traversal Iterator Forward Iterator\nWritable Lvalue Iterator, Forward Traversal Iterator Mutable Forward Iterator\nWritable Lvalue Iterator, Bidirectional Iterator Mutable Bidirectional Iterator\n2filter_iterator<P1, X> is interoperable with filter_iterator<P2, Y> if and only if Xis inter-\noperable with Y.\nfilter_iterator operations\nIn addition to those operations required by the concepts that filter_iterator models, filter_iterator\nprovides the following operations.\nfilter_iterator();\nRequires: Predicate andIterator must be Default Constructible.\nEffects: Constructs a filter_iterator whose“m pred“, m_iter , and m_end members are\na default constructed.\nfilter_iterator(Predicate f, Iterator x, Iterator end = Iterator());\nEffects: Constructs a filter_iterator where m_iter is either the first position in the\nrange [x,end) such that f(*m_iter) == true or else“m iter == end“. The member\nm_pred is constructed from fandm_end from end.\nfilter_iterator(Iterator x, Iterator end = Iterator());\nRequires: Predicate must be Default Constructible and Predicate is a class type (not a\nfunction pointer).\nEffects: Constructs a filter_iterator where m_iter is either the first position in the\nrange [x,end) such that m_pred(*m_iter) == true or else“m iter == end“. 
The member m_pred is default constructed.
template <class OtherIterator>
filter_iterator(
    filter_iterator<Predicate, OtherIterator> const& t
    , typename enable_if_convertible<OtherIterator, Iterator>::type* = 0 // exposition
    );
Requires: OtherIterator is implicitly convertible to Iterator.
Effects: Constructs a filter_iterator whose members are copied from t.
Predicate predicate() const;
Returns: m_pred
Iterator end() const;
Returns: m_end
Iterator const& base() const;
Returns: m_iter
reference operator*() const;
Returns: *m_iter
filter_iterator& operator++();
Effects: Increments m_iter and then continues to increment m_iter until either m_iter
== m_end or m_pred(*m_iter) == true.
Returns: *this
template <class Predicate, class Iterator>
filter_iterator<Predicate,Iterator>
make_filter_iterator(Predicate f, Iterator x, Iterator end = Iterator());
Returns: filter_iterator<Predicate,Iterator>(f, x, end)
template <class Predicate, class Iterator>
filter_iterator<Predicate,Iterator>
make_filter_iterator(Iterator x, Iterator end = Iterator());
Returns: filter_iterator<Predicate,Iterator>(x, end)
Example
This example uses filter_iterator and then make_filter_iterator to output only the positive
integers from an array of integers. Then make_filter_iterator is used to output the integers
greater than -2.
struct is_positive_number {
bool operator()(int x) { return 0 < x; }
};
int main()
{
int numbers_[] = { 0, -1, 4, -3, 5, 8, -2 };
const int N = sizeof(numbers_)/sizeof(int);
typedef int* base_iterator;
base_iterator numbers(numbers_);
// Example using filter_iterator
typedef boost::filter_iterator<is_positive_number, base_iterator> FilterIter;
is_positive_number predicate;
FilterIter filter_iter_first(predicate, numbers, numbers + N);
FilterIter filter_iter_last(predicate, numbers + N, numbers + N);
std::copy(filter_iter_first, filter_iter_last, std::ostream_iterator<int>(std::cout, " "));
std::cout << std::endl;
// Example using make_filter_iterator()
std::copy(boost::make_filter_iterator<is_positive_number>(numbers, numbers + N),
boost::make_filter_iterator<is_positive_number>(numbers + N, numbers + N),
std::ostream_iterator<int>(std::cout, " "));
std::cout << std::endl;
// Another example using make_filter_iterator()
std::copy(
boost::make_filter_iterator(
std::bind2nd(std::greater<int>(), -2)
, numbers, numbers + N)
, boost::make_filter_iterator(
std::bind2nd(std::greater<int>(), -2)
, numbers + N, numbers + N)
, std::ostream_iterator<int>(std::cout, " ")
);
std::cout << std::endl;
return boost::exit_success;
}
The output is:
4 5 8
4 5 8
0 -1 4 5 8
The source code for this example can be found here.
" } ]
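Note: std::bind2nd, used in the last part of the example, is deprecated since C++11 and was removed in C++17, so that call will not compile under recent language settings. A minimal alternative sketch that produces the same third output line with a hand-written predicate (the functor name is illustrative, not part of the library):

#include <algorithm>
#include <iostream>
#include <iterator>
#include <boost/iterator/filter_iterator.hpp>

// Replacement for std::bind2nd(std::greater<int>(), -2): keep values greater than -2.
struct greater_than_minus_two {
    bool operator()(int x) const { return x > -2; }
};

int main()
{
    int numbers[] = { 0, -1, 4, -3, 5, 8, -2 };
    const int N = sizeof(numbers)/sizeof(int);

    std::copy(boost::make_filter_iterator(greater_than_minus_two(), numbers, numbers + N),
              boost::make_filter_iterator(greater_than_minus_two(), numbers + N, numbers + N),
              std::ostream_iterator<int>(std::cout, " "));
    std::cout << std::endl;   // prints: 0 -1 4 5 8
    return 0;
}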
{ "category": "App Definition and Development", "file_name": "filter_iterator.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "BLOCK INDIRECT\nA new parallel sorting algorithm\n Francisco Jose Tapia\nfjtapia @ gmail . com\nBRIEF\nModern processors obtain their power increasing the number of “cores” or HW threads, which permit them to\nexecute several processes simultaneously, with a shared memory structure.\nSPEED OR LOW MEMORY\nIn the parallel sorting algorithms, we can find two categories .\nSUBDIVISION ALGORITHMS \nFilter the data and generate two or more parts. Each part obtained is filtered and divided by other threads,\nuntil the size of the data to sort is smaller than a predefined size, then it is sorted by a single thread. The\nalgorithm most frequently used in the filter and sort is quick sort.\nThese algorithms are fast with a small number of threads, but inefficient with a large number of HW threads.\nExamples of this category are :\n◦Intel Threading Building Blocks (TBB)\n◦Microsoft PPL Parallel Sort.\nMERGING ALGORITHMS\nDivide the data into many parts at the beginning, and sort each part with a separate thread. When the parts\nare sorted, merge them to obtain the final result. These algorithms need additional memory for the merge,\nusually an amount equal to the size of the input data.\nWith a small number of threads, these algorithms usually have similar speed to the subdivision algorithms,\nbut with many threads they are much faster . Examples of this category are :\n◦GCC Parallel Sort (based on OpenMP)\n◦Microsoft PPL Parallel Buffered Sort\nSPEED AND LOW MEMORY\nThis new algorithm is an unstable parallel sort algorithm, created for processors connected with shared\nmemory. This provides excellent performance in machines with many HW threads, similar to the GCC\nParallel Sort, and better than TBB, with the additional advantage of lower memory consumption.\nThis algorithm uses as auxiliary memory a block_size elements buffer for each thread. The block_size is an\ninternal parameter of the algorithm, which, in order to achieve the highest speed, change according the size\nof the objects to sort according the next table. The strings use a block_size of 128.\nobject size 1 - 1516 - 3132 - 6364 - 127128 - 255256 - 511512 - \nblock_size 409620481024768512256128\nThe worst case memory usage for the algorithm is when elements are large and there are many threads.\nWith big elements (512 bytes), and 12 threads, the memory measured was:\n•GCC Parallel Sort 1565 MB\n•Threading Building Blocks (TBB) 783 MB\n•Block Indirect Sort 812 MB\nBlock Indirect Sort - A new parallel sorting algorithm page : 1INDEX\n1.- OVERVIEW OF THE PARALLEL SORTING ALGORITHMS\n2.- INTERNAL DESCRIPTION OF THE ALGORITHM\n2.1.- SUBDIVISION OF THE MERGE\n2.2.- NUMBER OF ELEMENTS NO MULTIPLE OF THE BLOCK SIZE.\n2.3.- INDIRECT SORTING\n2.4.- IN PLACE REARRANGEMENT FROM A INDEX (BLOCK SORTING)\n2.5- SEQUENCE PARALLELIZATION\n3.- BENCHMARKS\n3.1.- INTRODUCTION\n3.2.- DESCRIPTION\n3.3.- LINUX 64 GCC 5.2 Benchmarks\n3.3.1.- Single Thread Algorithms\n3.3.2.- Parallel Algorithms\n3.4.- Windows 10 VISUAL STUDIO 2015 x64 Benchmarks\n3.4.1.- Single Thread Algorithms\n3.4.2.- Parallel Algorithms\n4.- BIBLIOGRAPHY\n5.- GRATITUDE\nBlock Indirect Sort - A new parallel sorting algorithm page : 21.- OVERVIEW OF THE PARALLEL SORTING\n ALGORITHMS\nAmong the unstable parallel sorting algorithms, there are basically two types:\n1.- SUBDIVISION ALGORITHMS\nAs Parallel Quick Sort. One thread divides the problem in two parts. 
Each part obtained is divided by\nother threads, until the subdivision generates sufficient parts to keep all the threads buys. The below\nexample shows that this means with a 32 HW threads processor, with N elements to sort.\nStep\n1\n2\n3\n4\n5\n6Threads working\n1\n2\n4\n8\n16\n32Threads waiting\n31\n30\n28\n24\n16\n0Elements to process by each thread\n N\n N / 2\n N / 4\n N / 8\n N / 16\n N / 32\nVery even splitting would be unusual in reality, where most subdivisions are uneven\nThis algorithm is very fast and don't need additional memory, but the performance is not good when\nthe number of threads grows. In the table before, until the 6th division, don't have work for to have\nbusy all the HW threads, with the additional problem that the first division is the hardest, because the\nnumber of elements is very large.\n2.- MERGING ALGORITHMS,\nDivide the data into many parts at the beginning, and sort each part with a separate thread. When\nthe parts are sorted, merge them to obtain the final result. These algorithms need additional memory\nfor the merge, usually an amount equal to the size of the input data.\nThese algorithms provide the best performance with many threads, but their performance with a low\nnumber of threads is worse than the subdivision algorithms.\n2.- INTERNAL DESCRIPTION OF THE ALGORITHM\nThis new algorithm (Block Indirect), is a merging algorithm. It has similar performance to GCC Parallel Sort\nWith many threads, but using a low amount of additional memory, close to that used by subdivision\nalgorithms.\nInternally, the algorithm, manage blocks of elements. The number of elements of the block (block_size),\nchange according the size of the objects to sort according the next table. The strings use a block_size of\n128.\nobject size 1 - 1516 - 3132 - 6364 - 127128 - 255256 - 511512 - \nblock_size 409620481024768512256128\nThis new algorithm only need an auxiliary memory of one block of elements for each HW thread. The worst\ncase memory usage for the algorithm is when elements are large and there are many threads. With big\nelements (512 bytes), and 12 threads, the memory measured was:\n•GCC Parallel Sort 1565 MB\n•Threading Building Blocks (TBB) 783 MB\n•Block Indirect Sort 812 MB\nBlock Indirect Sort - A new parallel sorting algorithm page : 3These algorithms are not optimal when using just 1 thread, but are easily parallelized, and need only a small\namount of auxiliary memory.\nThe algorithm divide the number of elements in a number of parts that is the first power of two greater than\nor equal to the number of threads to use. ( For example: with 3 threads, make 4 parts, for 9 threads make 16\nparts…) Each part obtained is sorted by the parallel intro sort algorithm.\n12345678910111213141516\n1-23-45-67-89-1011-1213-1415-16\n1-2-3-4 5-6-7-8 9-10-11-12 13-14-15-16\n1-2-3-4-5-6-7-8 9-10-11-12-13-14-15-16\n1-2-3-4-5-6-7-8-9-10-11-12-13-14-15-16\nWith the sorted parts, merge pairs of parts. We consider the elements are in blocks of fixed size. We initially\nrestrict the number of elements to sort (N) to a multiple of the block size. For to explain the algorithm, I use\nseveral examples with a block size of 4. Each thread receives a number of blocks to sort, and when\ncomplete we have a succession of blocks sorted.\nFor the merge, we have two successions of block sorted. We sort the blocks of the two parts the first element\nof the block. This merge is not fully sorted, is sorted only by the first element. 
We call this First Merge\nBut if we merge the first block with the second, the second with the third and this successively, we will obtain\na new list fully sorted with a size of the sum of the blocks of the first part plus the blocks of the second part.\nThis merge algorithm, only need an auxiliary memory of the size of a block.\nPart 1Part 2First\nmergePass 1Pass 2Pass 3Pass 4Final\nMerge\n2\n5\n9\n102\n3\n4\n52\n3\n4\n5\n2\n5\n9\n103\n4\n6\n73\n4\n6\n76\n7\n9\n106\n7\n8\n96\n7\n8\n9\n12\n28\n32\n348\n11\n13\n148\n11\n13\n1410\n11\n13\n1410\n11\n12\n1310\n11\n12\n13\n35\n37\n39\n4016\n20\n27\n2912\n28\n32\n3414\n28\n32\n3414\n16\n20\n2714\n16\n20\n27\n44\n46\n50\n7136\n38\n45\n6016\n20\n27\n2928\n29\n32\n3428\n29\n32\n34\n35\n37\n39\n4035\n36\n37\n3835\n36\n37\n38\n36\n38\n45\n6039\n40\n45\n6039\n40\n44\n4539\n40\n44\n45\n44\n46\n50\n7146\n50\n60\n7146\n50\n60\n71\nBlock Indirect Sort - A new parallel sorting algorithm page : 4The idea which make interesting this algorithm, is that you can divide, in an easy way, the total number of\nblocks to merge, in several parts, obtaining several groups of blocks, independents between them, which\ncan be merged in parallel.\n2.1.- MERGE SUBDIVISION\nSuppose, we have the next first merge, with block size of 4, and want to divide in two independent parts, to\nbe executed by different threads.\nTo divide, we must looking for a border between two blocks of different color. And make the merge between\nthem. In the example, to obtain parts of similar size , we can cut in the frontier 4-5, or in the border 5 -6,\nbeing the two options.\nBlock\nNumberFirst\nMergeOption 1\nBorder 4 -5Option 2\nBorder 5-6\n010\n12\n14\n2010\n12\n14\n2010\n12\n14\n20\n121\n70\n85\n20021\n70\n85\n20021\n70\n85\n200\n222\n24\n28\n3022\n24\n28\n3022\n24\n28\n30\n331\n35\n37\n4031\n35\n37\n4031\n35\n37\n40\n441\n43\n45\n5041\n43\n45\n5041\n43\n45\n50\n5201\n470\n890\n2000201\n470\n890\n2000201\n210\n212\n216\n6210\n212\n216\n220210\n212\n216\n220220\n470\n890\n2000\n7221\n224\n227\n230221\n224\n227\n230221\n224\n227\n230\n82100\n2104\n2106\n21102100\n2104\n2106\n21102100\n2104\n2106\n2110\n92120\n2124\n2126\n21302120\n2124\n2126\n21302120\n2124\n2126\n2130\nIn the option 1, the frontier is between the blocks 4 and 5, and the last of the block 4 is less or equal than the \nfirst of the block 5, and don't need merge. We have two parts, which can be merged in a parallel way The \nfirst with the blocks 0 to 4, and the second with the 5 to 9.\nBlock Indirect Sort - A new parallel sorting algorithm page : 5In the option 2, the frontier is between the blocks 5 and 6, and the last of the block 5 is greater than the first\nof the block 6, and we must do the merge of the two blocks, and appear new values for the blocks 5 and 6.\nThen we have two parts, which can be merged in a parallel way. The first with the blocks from 0 to 5, and the\nsecond with the 6 to 9.\n2.2.- NUMBER OF ELEMENTS NOT MULTIPLE OF THE BLOCK \nSIZE\nUntil now, we assumed the number of elements is a multiple of the block size. When this is not true, we\nconfigure the blocks, beginning for the position 0, and at end we have an incomplete block called tail. The tail\nblock always is in the last part to merge. We use a special operation just for this block, described in the next\nexample :\nWe have two groups of blocks A and B, with NA and NB blocks respectively. The tail block, if exist, always is\nin the group B. Merge the tail block with the last block of the group A. 
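Both the pass-by-pass merge shown in the table above and the border merges used when subdividing the work come down to one primitive: merging two adjacent sorted blocks with a single block-sized buffer. The following is a minimal sequential sketch of that primitive, for illustration only; the names are hypothetical, and the real algorithm applies the same idea to non-contiguous blocks through an index of blocks (described later), not to a contiguous array.

#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Illustrative sketch: one "merge the first block with the second, the second
// with the third, and so on" pass over a first merge (each block sorted and the
// blocks ordered by their first element). The only auxiliary memory is one block.
template <class T, class Compare>
void merge_adjacent_blocks(std::vector<T> &data, std::size_t block_size, Compare comp)
{
    std::vector<T> buffer(block_size);                     // the one-block buffer
    const std::size_t nblocks = data.size() / block_size;  // N assumed multiple of block_size

    for (std::size_t b = 0; b + 1 < nblocks; ++b)
    {
        T *left  = data.data() + b * block_size;           // block b
        T *right = left + block_size;                      // block b + 1

        // Border check (the first option above): if the last element of the left
        // block does not exceed the first of the right block, no merge is needed.
        if (!comp(*right, *(right - 1))) continue;

        // Copy the left block into the buffer, then merge buffer + right block
        // back into place. The write position never overtakes the unread part
        // of the right block, so no further memory is required.
        std::copy(left, right, buffer.begin());
        T *b1 = buffer.data(), *e1 = b1 + block_size;
        T *b2 = right,         *e2 = right + block_size;
        T *out = left;
        while (b1 != e1 && b2 != e2)
            *out++ = comp(*b2, *b1) ? std::move(*b2++) : std::move(*b1++);
        while (b1 != e1)
            *out++ = std::move(*b1++);                     // rest of the right block is already in place
    }
}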
With the merge two cases appear\nshown in the next examples:\nCase 1: The first value of the NA block don't change, and don't do nothing with the blocks.\nGroup AGroup B\nMerge of\nthe blocks\nNA and NBGroup A Group B\n NA -134\n35\n38\n3928\n29\n36\n41 NA-134\n35\n38\n3928\n29\n36\n41\n NA40\n50\n56\n70 \nNB-146\n47\n49\n51 NA40\n50\n52\n54 \nNB -146\n47\n49\n51\n \nNB52\n54 NB56\n70\nCase 2: The first value of the block A, changes, and we delete the block of the group A and insert in the\ngroup B, immediately before the tail block. With this operation we guarantee the tail block is the last\nGroup AGroup B\nMerge of the blocks NA\nand NB.\nThe first value of NA\nchange.\nTake the NA block and\ninsert in the NB position.Group AGroup B\n \nNA-126\n27\n36\n37 \nNB-211\n12\n13\n15 \nNA -126\n27\n36\n37 NB-\n211\n12\n13\n15\n \nNA40\n50\n56\n70 \nNB-116\n17\n19\n20 \nNB-116\n17\n19\n20\nNB22\n24 \nNB22\n24\n40\n50\nNB+156\n70\nBlock Indirect Sort - A new parallel sorting algorithm page : 62.3.- INDIRECT SORT\nIn today's computers, the main bottleneck is the data bus. When the process needs to manage memory in\nmemory different locations, as in the parallel sorting algorithms, the data bus limits the speed of the\nalgorithm.\nIn the benchmarks done on a 32 HW threads, sorting N elements of 64 Bytes of size requires the same time\nas sorting N / 2 elements 128 bytes of size. The comparison is the same with the two sizes.\nIn the algorithm described, in each merge, the blocks are moved, to be merged in the next step. This slows\ndown the algorithm, due to the data bus bottleneck.\nTo avoid this, do an indirect merge. We have an index with the relative position of the blocks. This implies the\nblocks to merge are not contiguous, but that doesn't change the validity of the algorithm.\nWhen all the merges are complete, with this index we move the blocks, and have all the data sorted. This\nblock movement is done with a simple and parallel algorithm,\n2.4.- IN PLACE REARRANGEMENT FROM A INDEX (BLOCK \nSORTING)\nThe tail block, if one exists, is always in the last position, and doesn't need to be moved. We move blocks\nwith the same size using an index. The best way is to see with an example :\nWe have an unsorted data vector (D) with numbers. We also have, an index ( I ), which is a vector with the\nrelative position of the elements of D\nD01234567\n200500600900100400700800\nI01234567\n40512673\nTo move the data, we need an auxiliary variable (Aux) of the same size as the data. In this example the data\nis a number, but in the case of blocks, the variable must have the size of a block.\nThis can be a bit confusing. Here are the steps of the process :\n•Aux = D[0], and after this we must find which element must be copied in the position D[0], we find it\nin the position 0 of the index, and it is the position 4.\n•The next step is D[0] = D[4], and find the position to move to the position 4, in the position 4 of the\nindex, and this is the position 2.\n•When doing this successively, once the new position obtained is the first position used, (in this\nexample is the 0), move to this position from the Aux variable, and the cycle is closed.\nIn this example the steps are:\nAux D[0]←\nD[0] D[4] ←\nD[4] D[2] ←\nD[2] D[5] ←\nD[5] D[6] ←\nD[6] D[7] ←\nD[7] D[3] ←\nD[3] D[1] ←\nD[1] Aux ←\nBlock Indirect Sort - A new parallel sorting algorithm page : 7If we follow the arrows, we see a closed cycle. This cycle has a sequence formed by the position of the\nelements passed. 
In this example the sequence is 0, 4, 2, 5, 6, 7, 3, 1.\nWith small elements the sequence is useless, because instead of extracting the sequence, we can move the\ndata, and all is done. But with big elements, as the blocks used in this algorithm, the sequence are very\nuseful, because from the sequences, we generate the parallel work for the threads.\nThere can be several cycles In an index, with their corresponding sequences. To extract the sequences,\nbegin with the index.\n•If in a position, the content is the position, indicate this element is sorted, and don't need to be\nmoved.\n•If it's different, this imply it's the beginning of a cycle, and must extract the sequence, as described\nbefore, and determine the positions visited in the index.\nWe can see with an example of an index with several cycles.\nData vector (D)\n01234567891011121314\n242320151210211719132218141116\nIndex (I)\n01234567891011121314\n51349123147118261010\nDoing the procedure previously described, find 3 sequences\n•5, 3, 9, 8, 11, 6, 14, 0\n•4, 12, 10, 2\n•13, 1\nIn the real problems, usually appear a few long sequences, and many small sequences. This permits parallel\nexecution, but it's not very efficient, because a long sequence can keep one thread busy while the other\nthreads are waiting, because they are finished with the sort sequences. Or even worse, there can be just\none sequence.\nTo deal with this issue, the long sequences can be easily divided and done in parallel. This permit an optimal\nparallelization.\n2.5- SEQUENCE PARALLELIZATION\nThe procedure can be a bit confusing, so here’s an example:\nData vector (D)\n01234567891011121314151617\n10014070609000160805013015020170101103012040\nIndex vector (I)\n01234567891011121314151617\n51311151783274014169110612\nIf we extract the sequence, as described before, we find only one loop, and one sequence. This sequence is;\n5 8 7 2 11 14 1 13 9 4 17 12 16 6 3 15 10 0\nWe want to divide in 3 sequences of 6 elements. Each sequence obtained are independent between them\nand can be done in parallel. The procedure is :\nBlock Indirect Sort - A new parallel sorting algorithm page : 8Generate a fourth sequence with the contents on the last position of each sequence. In this example is\n14 12 0\nNow, consider the 3 sequences as independent and can be applied in parallel. We apply the sequences over\nthe vector of data (D), and when all are finished, apply the sequence obtained with the last position of the\nsub sequences. 
And all is done\nSee this example:\nThe data vector is\n0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17\n100 140 70 60 90 00 160 80 50 130 150 20 170 10 110 30 120 40\nApply this sequence\n5 8 7 2 11 14\nThe new data vector is\n0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17\n100 140 20 60 90 50 160 70 80 130 150 110 170 10 00 30 120 40\nApply this sequence\n1 13 9 4 17 12\nThe new data vector is\n0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17\n100 10 20 60 40 50 160 70 80 90 150 110 140 13000 30 120 170\nApply this sequence\n16 6 3 15 10 0\nThe new data vector is\n0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17\n12010 20 30 40 50 60 70 80 90 100 110 140 13000 150 160170\nThese 3 sub sequences can be done in parallel, because don't have any shared element.\nFinally, when the subsequences are been applied, we apply the last sequence, obtained with the last\npositions of the sub sequences\n14 12 0\nThe new data vector is fully sorted\n0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17\n00 10 20 30 40 50 60 70 80 90 100 110 120 130140 150 160170\nBlock Indirect Sort - A new parallel sorting algorithm page : 93.- BENCHMARKS\n3.1.- INTRODUC T ION\nTo benchmark this we use the implementation proposed for the Boost Sort Parallel Library. It's pending of \nthe final approval, due this can suffer some changes until the final version and definitive approval in the \nboost library. You can find in https :// github . com / fjtapia / sort _ parallel.\nIf you want run the benchmarks in your machine, you can find the code, instructions and procedures in \nhttps :// github . com / fjtapia / sort _ parallel _ benchmark\nFor the comparison, we use these parallel algorithms:\n1.GCC Parallel Sort\n2.Intel TBB Parallel Sort\n3.Block Indirect Sort\n3.2.- DESCRIPTION\nThe benchmark are running in a machine with a I7 5820 3.3 GHz 6 cores, 12 threads, quad channel\nmemory (2133 MHz) with Ubuntu and the GCC 5.2 compiler\nThe compiler used was the GCC 5.2 64 bits\nThe benchmark have 3 parts:\n1.- Sort of 100000000 uint64_t numbers randomly generated. The utility of this benchmark is to see the \nspeed with small elements with a very fast comparison.\n2.- Sort of 10000000 of strings randomly filled. The comparison is no so easy as the integers.\n3.- Sort of objects of several sizes. The objects are arrays of 64 bits numbers, randomly filled. We will check \nwith arrays of 1 , 2 , 4, 8, 16, 32 and 64 numbers. \nDefinition \nof the objectBytes Number of \nelements to sort \n uint64_t [1] 8 100 000 000 \n uint64_t [2] 16 50 000 000 \n uint64_t [4] 32 25 000 000 \n uint64_t [8] 64 12 500 000 \n uint64_t [16] 128 6 250 000 \n uint64_t [32] 256 3 125 000 \n uint64_t [64] 512 1 562 500 \nThe C++ definition of the objects is \ntemplate <uint32_t NN>\nstruct int_array\n{ uint64_t M[NN];\n};\nThe comparison between objects can be of two ways:\n•Heavy comparison : The comparison is done with the sum of all the numbers of the array. In \neach comparison, make the sum. \n•Light comparison : It's done using only the first number of the array, as a key in a register. 
3.3.- LINUX 64 GCC 5.2 Benchmarks

The benchmarks were run on an I7 5820 (3.3 GHz, 6 cores, 12 threads) with quad channel memory (2133 MHz), Ubuntu and the GCC 5.2 compiler.

3.3.1.- SINGLE THREAD ALGORITHMS

The algorithms involved in this benchmark are:

Algorithm           Stable   Memory used   Comments
GCC sort            no       N + Log N
Boost sort          no       N + Log N
GCC stable_sort     yes      N + N / 2
Boost stable_sort   yes      N + N / 2
Boost spreadsort    yes      N + Log N     Extremely fast algorithm, only for integers, floats and strings

INTEGER BENCHMARKS. Sort of 100 000 000 64 bits numbers, randomly filled.

                     Time          Memory
GCC sort             8.33 secs     784 MB
Boost sort           8.11 secs     784 MB
GCC stable sort      8.69 secs     1176 MB
Boost stable sort    8.75 secs     1175 MB
Boost spreadsort     4.33 secs     784 MB

STRINGS BENCHMARKS. Sort of 10 000 000 strings randomly filled.

                     Time          Memory
GCC sort             6.39 secs     820 MB
Boost sort           7.01 secs     820 MB
GCC stable sort      12.99 secs    1132 MB
Boost stable sort    9.17 secs     976 MB
Boost spreadsort     2.44 secs     820 MB

OBJECTS BENCHMARKS. Sorting of objects of different sizes. The objects are arrays of 64 bits numbers. This benchmark is done using two kinds of comparison.

Heavy comparison: the comparison is done with the sum of all the numbers of the array. The sum is computed in each comparison.

                    8 bytes  16 bytes  32 bytes  64 bytes  128 bytes  256 bytes  512 bytes  Memory used
GCC sort            8.75     4.49      3.03      1.97      1.71       1.37       1.17       783 MB
Boost sort          8.19     4.42      2.65      1.91      1.67       1.35       1.09       783 MB
GCC stable_sort     10.23    5.67      3.67      2.94      2.6        2.49       2.34       1174 MB
Boost stable_sort   8.85     5.11      3.18      2.41      2.01       1.86       1.60       1174 MB

Light comparison: it is done using only the first number of the array, as a key.

                    8 bytes  16 bytes  32 bytes  64 bytes  128 bytes  256 bytes  512 bytes  Memory used
GCC sort            8.69     4.31      2.35      1.50      1.23       0.86       0.79       783 MB
Boost sort          8.18     4.04      2.25      1.45      1.24       0.88       0.76       783 MB
GCC stable_sort     10.34    5.26      3.20      2.57      2.47       2.41       2.30       1174 MB
Boost stable_sort   8.92     4.59      2.51      1.94      1.68       1.68       1.50       1174 MB

3.3.2.- PARALLEL ALGORITHMS

The algorithms involved in this benchmark are:

Algorithm                    Stable   Memory used                      Comments
GCC parallel sort            No       2 N                              Based on OpenMP
TBB parallel sort            No       N + Log N
Boost parallel sort          No       N + block_size * num threads     New parallel algorithm
GCC parallel stable sort     Yes      2 N                              Based on OpenMP
Boost parallel stable sort   Yes      N / 2
Boost sample sort            Yes      N
TBB parallel stable sort     Yes      N                                Experimental code, not in the official TBB

The block_size is an internal parameter of the algorithm which, in order to achieve the highest speed, changes according to the size of the objects to sort, following the next table. The strings use a block_size of 128.

object size (bytes)   1 - 15   16 - 31   32 - 63   64 - 127   128 - 255   256 - 511   512 -
block_size            4096     2048      1024      768        512         256         128

For the benchmark I use the following additional code:

• Threading Building Blocks (TBB)
• OpenMP
• Threading Building Blocks experimental code (https://software.intel.com/sites/default/files/managed/48/9b/parallel_stable_sort.zip)
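Just as an illustration of the block_size tuning table above, a block size could be selected from the element size with a small helper like the one below. The function is hypothetical and written only for this document; in the library this parameter is handled internally.

```cpp
#include <cstddef>

// Block size (in elements) for a given element size in bytes, following the
// tuning table above. Strings use a block_size of 128.
constexpr std::size_t block_size_for(std::size_t element_bytes)
{
    return (element_bytes < 16)  ? 4096
         : (element_bytes < 32)  ? 2048
         : (element_bytes < 64)  ? 1024
         : (element_bytes < 128) ? 768
         : (element_bytes < 256) ? 512
         : (element_bytes < 512) ? 256
                                 : 128;
}
```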
The most significant part of this parallel benchmark is the comparison between the parallel sort algorithms. GCC parallel sort is extremely fast with many cores, but needs auxiliary memory of the same size as the data. On the other side, Threading Building Blocks (TBB) is not so fast with many cores, but the auxiliary memory is Log N.

The Boost parallel sort (internally named Block Indirect Sort) is a new algorithm created and implemented by the author for this library, which combines the speed of GCC parallel sort with a small memory consumption (block_size elements for each thread). The worst case for this algorithm is with very big elements and many threads. With big elements (512 bytes) and 12 threads, the memory measured was:

GCC Parallel Sort (OpenMP)        1565 MB
Threading Building Blocks (TBB)   783 MB
Block Indirect Sort               812 MB

On machines with a small number of HW threads TBB is faster than GCC, but with a great number of HW threads GCC is faster than TBB. Boost parallel sort has a speed similar to GCC parallel sort with a great number of HW threads, and similar to TBB with a small number.

INTEGER BENCHMARKS. Sort of 100 000 000 64 bits numbers, randomly filled.

                             time (secs)   memory (MB)
OMP parallel_sort            1.25          1560
TBB parallel_sort            1.64          783
Boost parallel_sort          1.08          786
OMP parallel_stable_sort     1.56          1948
TBB parallel_stable_sort     1.56          1561
Boost sample_sort            1.19          1565
Boost parallel_stable_sort   1.54          1174

STRING BENCHMARK. Sort of 10 000 000 strings randomly filled.

                             time (secs)   memory (MB)
OMP parallel_sort            1.49          2040
TBB parallel_sort            1.84          820
Boost parallel_sort          1.3           822
OMP parallel_stable_sort     2.25          2040
TBB parallel_stable_sort     2.1           1131
Boost sample_sort            1.51          1134
Boost parallel_stable_sort   2.1           977

OBJECT BENCHMARKS. Sorting of objects of different sizes. The objects are arrays of 64 bits numbers. This benchmark is done using two kinds of comparison.

Heavy comparison: the comparison is done with the sum of all the numbers of the array. The sum is computed in each comparison.
                             8 bytes  16 bytes  32 bytes  64 bytes  128 bytes  256 bytes  512 bytes  Memory used
OMP parallel_sort            1.27     0.72      0.56      0.45      0.41       0.39       0.32       1565
TBB parallel_sort            1.63     0.8       0.56      0.5       0.44       0.39       0.32       783
Boost parallel_sort          1.13     0.67      0.53      0.47      0.43       0.41       0.34       812
OMP parallel_stable_sort     1.62     1.38      1.23      1.19      1.09       1.07       0.97       1954
TBB parallel_stable_sort     1.58     1.02      0.81      0.76      0.73       0.73       0.71       1566
Boost sample_sort            1.15     0.79      0.63      0.62      0.62       0.61       0.6        1566
Boost parallel_stable_sort   1.58     1.02      0.8       0.76      0.73       0.73       0.71       1175

Light comparison: it is done using only the first number of the array, as a key.

                             8 bytes  16 bytes  32 bytes  64 bytes  128 bytes  256 bytes  512 bytes  Memory used
OMP parallel_sort            1.24     0.71      0.48      0.41      0.38       0.35       0.32       1565
TBB parallel_sort            1.66     0.8       0.52      0.43      0.4        0.35       0.32       783
Boost parallel_sort          1.11     0.65      0.49      0.43      0.41       0.37       0.34       812
OMP parallel_stable_sort     1.55     1.36      1.23      1.18      1.09       1.07       0.97       1954
TBB parallel_stable_sort     1.58     0.91      0.75      0.72      0.71       0.72       0.71       1566
Boost parallel_stable_sort   1.16     0.74      0.63      0.62      0.61       0.61       0.6        1566
Boost sample_sort            1.56     0.91      0.75      0.72      0.72       0.72       0.71       1175

3.4.- WINDOWS 10 VISUAL STUDIO 2015 x64 Benchmarks

The benchmarks were run in a virtual machine with Windows 10 and 10 threads, over an I7 5820 3.3 GHz, with the Visual Studio 2015 C++ compiler.

3.4.1.- SINGLE THREAD ALGORITHMS

The algorithms involved in this benchmark are:

Algorithm           Stable   Memory used   Comments
std::sort           no       N + Log N
Boost sort          no       N + Log N
std::stable_sort    yes      N + N / 2
Boost stable_sort   yes      N + N / 2
Boost spreadsort    yes      N + Log N     Extremely fast algorithm, only for integers, floats and strings

INTEGER BENCHMARKS. Sort of 100 000 000 64 bits numbers, randomly filled.

                    Time (secs)   Memory (MB)
std::sort           13            763
Boost sort          10.74         763
std::stable_sort    14.94         1144
Boost stable_sort   13.37         1144
Boost spreadsort    9.58          763

STRING BENCHMARKS. Sort of 10 000 000 strings randomly filled.

                    Time (secs)   Memory (MB)
std::sort           13.3          862
Boost sort          13.6          862
std::stable_sort    26.99         1015
Boost stable_sort   20.64         1015
Boost spreadsort    5.7           862

OBJECTS BENCHMARK. Sorting of objects of different sizes. The objects are arrays of 64 bits numbers. This benchmark is done using two kinds of comparison.

Heavy comparison: the comparison is done with the sum of all the numbers of the array. The sum is computed in each comparison.
                    8 bytes  16 bytes  32 bytes  64 bytes  128 bytes  256 bytes  512 bytes  Memory used
std::sort           13.36    6.98      4.2       2.58      2.87       2.37       2.29       763
Boost sort          10.54    5.61      3.26      2.72      2.45       1.76       1.73       763
std::stable_sort    15.49    8.47      5.47      3.97      3.85       3.55       2.99       1144
Boost stable_sort   13.11    8.86      5.06      4.16      3.9        3.06       3.32       1144

Light comparison: it is done using only the first number of the array, as a key.

                    8 bytes  16 bytes  32 bytes  64 bytes  128 bytes  256 bytes  512 bytes  Memory used
std::sort           14.15    7.26      4.33      2.69      1.92       1.98       1.73       763
Boost sort          10.33    5         2.99      1.85      1.53       1.46       1.4        763
std::stable_sort    14.68    7.64      4.29      3.33      3.22       2.86       3.08       1144
Boost stable_sort   13.59    8.36      4.45      3.73      3.16       2.81       2.6        1144

3.4.2.- PARALLEL ALGORITHMS

The algorithms involved in this benchmark are:

Algorithm                    Stable   Memory used                      Comments
PPL parallel sort            No       N
PPL parallel buffered sort   No       2 N
Boost parallel sort          No       N + block_size * num threads     New parallel algorithm
Boost parallel stable sort   Yes      N + N / 2
Boost sample sort            Yes      2 N

The block_size is an internal parameter of the algorithm which, in order to achieve the highest speed, changes according to the size of the objects to sort, following the next table. The strings use a block_size of 128.

object size (bytes)   1 - 15   16 - 31   32 - 63   64 - 127   128 - 255   256 - 511   512 -
block_size            4096     2048      1024      768        512         256         128

INTEGER BENCHMARKS. Sort of 100 000 000 64 bits numbers, randomly filled.

                             Time (secs)   Memory (MB)
PPL parallel sort            3.11          764
PPL parallel buffered sort   1.74          1527
Boost parallel sort          2.1           764
Boost sample sort            2.78          1511
Boost parallel stable sort   3.3           1145

STRINGS BENCHMARK. Sort of 10 000 000 strings randomly filled.

                             Time (secs)   Memory (MB)
PPL parallel sort            3.76          864
PPL parallel buffered sort   3.77          1169
Boost parallel sort          3.41          866
Boost sample sort            3.74          1168
Boost parallel stable sort   5.7           1015

OBJECTS BENCHMARKS. Sorting of objects of different sizes. The objects are arrays of 64 bits numbers. This benchmark is done using two kinds of comparison.

Heavy comparison: the comparison is done with the sum of all the numbers of the array. The sum is computed in each comparison.

                             8 bytes  16 bytes  32 bytes  64 bytes  128 bytes  256 bytes  512 bytes  Memory used
PPL parallel sort            2.84     1.71      1.01      0.84      0.89       0.77       0.65       764
PPL parallel buffered sort   2.2      1.29      2         0.88      0.98       1.32       0.82       1527
Boost parallel sort          1.93     0.82      0.9       0.72      0.77       0.68       0.69       764
Boost sample sort            3.02     2.03      2.15      1.41      1.55       1.82       1.39       1526
Boost parallel stable sort   3.36     2.67      1.62      1.45      1.38       1.19       1.37       1145

Light comparison: it is done using only the first number of the array, as a key.

                             8 bytes  16 bytes  32 bytes  64 bytes  128 bytes  256 bytes  512 bytes  Memory used
PPL parallel sort            3.1      1.37      0.97      0.7       0.61       0.58       0.57       764
PPL parallel buffered sort   2.31     1.39      0.9       0.88      1.1        0.89       1.44       1527
Boost parallel sort          2.15     1.21      0.7       0.72      0.41       0.51       0.54       764
Boost sample sort            3.4      1.94      1.56      1.41      2          1.41       1.96       1526
Boost parallel stable sort   3.56     2.37      1.79      1.45      1.72       1.34       1.44       1145

4.- BIBLIOGRAPHY

• Introduction to Algorithms, 3rd Edition (Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein)
• Structured Parallel Programming: Patterns for Efficient Computation (Michael McCool, James Reinders, Arch Robison)
• Algorithms + Data Structures = Programs (Niklaus Wirth)

5.- GRATITUDE

To CESVIMA (http://www.cesvima.upm.es/), Centro de Cálculo de la Universidad Politécnica de Madrid.
When I needed machines to tune this algorithm, I contacted the research departments of many universities in Madrid. Only they helped me.

To Hartmut Kaiser, Adjunct Professor of Computer Science at Louisiana State University, for his faith in my work.

To Steven Ross, for his infinite patience during the long development of this algorithm, and for his wise advice.
Administrator's Guide

Abstract

This book explains how to create and manage VoltDB databases and the clusters that run them.

V11.3

Copyright © 2014-2022 Volt Active Data, Inc.

The text and illustrations in this document are licensed under the terms of the GNU Affero General Public License Version 3 as published by the Free Software Foundation. See the GNU Affero General Public License (http://www.gnu.org/licenses/) for more details.

Many of the core VoltDB database features described herein are part of the VoltDB Community Edition, which is licensed under the GNU Affero Public License 3 as published by the Free Software Foundation. Other features are specific to the VoltDB Enterprise Edition and VoltDB Pro, which are distributed by Volt Active Data, Inc. under a commercial license.

The VoltDB client libraries, for accessing VoltDB databases programmatically, are licensed separately under the MIT license.

Your rights to access and use VoltDB features described herein are defined by the license you received when you acquired the software.

VoltDB is a trademark of Volt Active Data, Inc.

VoltDB software is protected by U.S. Patent Nos. 9,600,514, 9,639,571, 10,067,999, 10,176,240, and 10,268,707. Other patents pending.

This document was generated on March 07, 2022.

Table of Contents

Preface (vii)
  1. Structure of This Book (vii)
  2. Related Documents (vii)
1. Managing VoltDB Databases (1)
  1.1. Getting Started (1)
  1.2. Understanding the VoltDB Utilities (2)
  1.3. Management Tasks (3)
2. Preparing the Servers (4)
  2.1. Server Checklist (4)
  2.2. Install Required Software (4)
  2.3. Configure Memory Management (5)
    2.3.1. Disable Swapping (5)
    2.3.2. Disable Transparent Huge Pages (5)
    2.3.3. Enable Virtual Memory Mapping and Overcommit (6)
  2.4. Turn off TCP Segmentation (6)
  2.5. Configure Time Services (7)
  2.6. Increase Resource Limits (7)
  2.7. Configure the Network (8)
  2.8. Assign Network Ports (8)
  2.9. Eliminating Server Process Latency (8)
3. Starting and Stopping the Database (10)
  3.1. Configuring the Cluster and Database (10)
  3.2. Initializing the Database Root Directory (11)
  3.3. Starting the Database (12)
  3.4. Loading the Database Definition (13)
    3.4.1. Preloading the Schema and Classes When You Initialize the Database (13)
    3.4.2. Loading the Schema and Classes After the Database Starts (13)
  3.5. Stopping the Database (14)
  3.6. Restarting the Database (14)
  3.7. Starting and Stopping Individual Servers (14)
4. Maintenance and Upgrades (16)
  4.1. Backing Up the Database (16)
  4.2. Updating the Database Schema (17)
    4.2.1. Performing Live Schema Updates (17)
    4.2.2. Performing Updates Using Save and Restore (17)
  4.3. Upgrading the Cluster (18)
    4.3.1. Performing Server Upgrades (19)
    4.3.2. Performing Rolling Hardware Upgrades on K-Safe Clusters (19)
    4.3.3. Adding Servers to a Running Cluster with Elastic Scaling (20)
    4.3.4. Removing Servers from a Running Cluster with Elastic Scaling (20)
    4.3.5. Reconfiguring the Cluster During a Maintenance Window (21)
  4.4. Upgrading VoltDB Software (22)
    4.4.1. Upgrading VoltDB Using Save and Restore (22)
    4.4.2. Upgrading Older Versions of VoltDB Manually (22)
    4.4.3. Upgrading VoltDB With Reduced Downtime Using a DR Replica (23)
    4.4.4. Performing an Online Upgrade Using Multiple XDCR Clusters (26)
    4.4.5. Performing an Online Upgrade With Limited Hardware (27)
    4.4.6. Downgrading, or Falling Back to a Previous VoltDB Version (30)
  4.5. Updating the VoltDB Software License (31)
5. Monitoring VoltDB Databases (32)
  5.1. Monitoring Overall Database Activity (32)
    5.1.1. VoltDB Management Center (32)
    5.1.2. System Procedures (32)
    5.1.3. SNMP Alerts (34)
  5.2. Setting the Database to Read-Only Mode When System Resources Run Low (36)
    5.2.1. Monitoring Memory Usage (37)
    5.2.2. Monitoring Disk Usage (37)
  5.3. Integrating VoltDB with Other Monitoring Systems (38)
    5.3.1. Integrating with Prometheus (39)
    5.3.2. Integrating with Nagios (39)
    5.3.3. Integrating with New Relic (40)
6. Logging and Analyzing Activity in a VoltDB Database (41)
  6.1. Introduction to Logging (41)
  6.2. Creating the Logging Configuration File (41)
  6.3. Changing the Timezone of Log Messages (43)
  6.4. Managing VoltDB Log Files (44)
  6.5. Enabling Your Custom Log Configuration When Starting VoltDB (44)
  6.6. Changing the Configuration on the Fly (44)
7. What to Do When Problems Arise (46)
  7.1. Where to Look for Answers (46)
  7.2. Handling Errors When Restoring a Database (46)
    7.2.1. Logging Constraint Violations (47)
    7.2.2. Safe Mode Recovery (47)
  7.3. Collecting the Log Files (48)
A. Server Configuration Options (50)
  A.1. Server Configuration Options (50)
    A.1.1. Network Configuration (DNS) (50)
    A.1.2. Time Configuration (51)
  A.2. Process Configuration Options (51)
    A.2.1. Maximum Heap Size (51)
    A.2.2. Other Java Runtime Options (VOLTDB_OPTS) (51)
  A.3. Database Configuration Options (52)
    A.3.1. Sites per Host (52)
    A.3.2. K-Safety (52)
    A.3.3. Network Partition Detection (53)
    A.3.4. Automated Snapshots (53)
    A.3.5. Import and Export (53)
    A.3.6. Command Logging (53)
    A.3.7. Heartbeat (54)
    A.3.8. Temp Table Size (54)
    A.3.9. Query Timeout (54)
    A.3.10. Flush Interval (55)
    A.3.11. Long-Running Process Warning (56)
    A.3.12. Copying Array Parameters (56)
    A.3.13. Transaction Prioritization (56)
  A.4. Path Configuration Options (57)
    A.4.1. VoltDB Root (57)
    A.4.2. Snapshots Path (57)
    A.4.3. Export Overflow Path (58)
    A.4.4. Command Log Path (58)
    A.4.5. Command Log Snapshots Path (58)
  A.5. Network Ports (58)
    A.5.1. Client Port (59)
    A.5.2. Admin Port (59)
    A.5.3. Web Interface Port (http) (59)
    A.5.4. Internal Server Port (60)
    A.5.5. Replication Port (60)
    A.5.6. Zookeeper Port (61)
    A.5.7. TLS/SSL Encryption (Including HTTPS) (61)
B. Snapshot Utilities (63)
  snapshotconvert (64)
  snapshotverifier (65)

List of Tables

1.1. Database Management Tasks (3)
3.1. Selecting Database Features in the Configuration File (10)
4.1. Overview of the Online Upgrade Process (28)
5.1. SNMP Configuration Attributes (34)
5.2. SNMP Events (35)
5.3. Nagios Plugins (39)
6.1. VoltDB Components for Logging (43)
A.1. VoltDB Port Usage (58)

Preface

This book explains how to manage VoltDB databases and the clusters that host them. It is intended for database administrators and operators, responsible for the ongoing management and maintenance of database infrastructure.

1. Structure of This Book

This book is divided into 7 chapters and 2 appendices:

• Chapter 1, Managing VoltDB Databases
• Chapter 2, Preparing the Servers
• Chapter 3, Starting and Stopping the Database
• Chapter 4, Maintenance and Upgrades
• Chapter 5, Monitoring VoltDB Databases
• Chapter 6, Logging and Analyzing Activity in a VoltDB Database
• Chapter 7, What to Do When Problems Arise
• Appendix A, Server Configuration Options
• Appendix B, Snapshot Utilities

2. Related Documents

This book does not describe how to design or develop VoltDB databases. For a complete description of the development process for VoltDB and all of its features, please see the accompanying manual Using VoltDB. For new users, see the VoltDB Tutorial. These and other books describing VoltDB are available on the web from http://docs.voltdb.com/.

Chapter 1. Managing VoltDB Databases

VoltDB is a distributed, in-memory database designed from the ground up to maximize throughput performance on commodity servers. The VoltDB architecture provides many advantages over traditional database products while avoiding the pitfalls of NoSQL solutions:

• By partitioning the data and stored procedures, VoltDB can process multiple queries in parallel without sacrificing the consistency or durability of an ACID-compliant database.
• By managing all data in memory with a single thread for each partition, VoltDB avoids overhead such as record locking, latching, and device contention inherent in traditional disk-based databases.
• VoltDB databases can scale up to meet new capacity or performance requirements simply by adding more nodes to the cluster.
• Partitioning is automated, based on the schema, so there is no need to manually shard or repartition the data when scaling up, as with many NoSQL solutions.
• Finally, VoltDB Enterprise Edition provides features to ensure durability and high availability through command logging, locally replicating partitions (K-safety), and wide-area database replication.

Each of these features is described, in detail, in the Using VoltDB manual.
This book explains how to use these and other features to manage and maintain a VoltDB database cluster from a database administrator's perspective.

1.1. Getting Started

Before you set up VoltDB for use in a production environment, you need to make four decisions:

• What database features to use — Which features you want to use are defined in the configuration file and set with the voltdb init command.
• Physical structure of the cluster — The number and addresses of the nodes in the cluster, which you specify when you start the cluster with the voltdb start command.
• Logical structure of the database — The logical structure of the database tables and views, otherwise known as the schema, is defined in standard SQL statements and can be applied to the database using the sqlcmd command line utility.
• Stored procedures — The schema declares stored procedures. The procedures themselves execute transactions against the data and are written as Java classes. You load the stored procedures as JAR files using the sqlcmd command line utility.

To initialize a VoltDB database cluster, you need a configuration file. The configuration file lets you enable and configure various database options including availability, durability, and security. The configuration file also defines certain attributes of the database on the current server, in particular the paths for disk-based files created by the database such as command logs and snapshots. All nodes in the cluster must specify the same cluster configuration file when they initialize the database root directory with the voltdb init command.

When you actually start the database cluster, using the voltdb start command, you declare the size of the cluster by specifying the number of nodes in the cluster and one or more of the nodes as potential hosts. VoltDB selects one of the specified nodes as the "leader" to coordinate startup.

When using the VoltDB Enterprise Edition, you will also need a license file, often called license.xml. VoltDB automatically looks for the license file in the user's current working directory, the home directory, or the voltdb/ subfolder where VoltDB is installed. If you keep the license file in a different directory or under a different name, you can use the --license argument on the voltdb init command to specify the license file location.

Finally, to prepare the database for a specific application, you will need the database schema, including the DDL statements that describe the database's logical structure, and a JAR file containing the stored procedure class files. In general, the database schema and stored procedures are produced as part of the database development process, which is described in the Using VoltDB manual.

This book assumes the schema and stored procedures have already been created. The configuration file, on the other hand, defines the run-time configuration of the cluster. Establishing the correct settings for the configuration file and physically managing the database cluster is the duty of the administrators who are responsible for maintaining database operations. This book is written for those individuals and covers the standard procedures associated with database administration.

1.2. Understanding the VoltDB Utilities

VoltDB provides several command line utilities, each with a different function. Familiarizing yourself with these utilities and their uses can make managing VoltDB databases easier.
The three primary command\nline tools for creating, managing, and testing VoltDB databases are:\nvoltdb Starts the VoltDB database process. The voltdb command can also collect log files\nfor analyzing possible system errors (see Section 7.3, “Collecting the Log Files” for\ndetails).\nThe voltdb command runs locally and does not require a running database.\nvoltadmin Issues administrative commands to a running VoltDB database. You can use voltad-\nmin to save and restore snapshots, pause and resume admin mode, and to shutdown\nthe database, among other tasks.\nThe voltadmin command can be run remotely, performs cluster-wide operations\nand requires a running database to connect to.\nsqlcmd Lets you issue SQL queries and invoke stored procedures interactively. The sqlcmd\ncommand is handy for testing database access without having to write a client ap-\nplication.\nThe sqlcmd command can be run remotely and requires a running database to con-\nnect to.\nIn addition to the preceding general-purpose tools, VoltDB provides several other tools for specific tasks:\ncsvloader, jd-\nbcloader, and\nkafkaloaderThese utilities load data from external sources into an existing VoltDB database.\nThey let you load data from CSV or text-based data files, JDBC data sources, or\nApache Kafka streams. These commands can be run remotely and require a running\ndatabase to connect to.\nsnapshotconvert Converts native snapshot files to csv or tabbed text files. The snapshotconvert com-\nmand is useful when exporting a snapshot in native format to text files for import\ninto another data utility. (This utility is provided for legacy purposes. It is now pos-\nsible to write snapshots directly to CSV format without post-processing, which is\nthe recommended approach.)\n2Managing VoltDB Databases\nThe snapshotconvert command runs locally and does not require a running data-\nbase.\nsnapshotverify Verifies that a set of native snapshot files are complete and valid.\nThe snapshotverify command runs locally and does not require a running database.\nFinally, VoltDB includes a browser-based management console — VoltDB Management Center — for\nmonitoring databases in real time. See Section 5.1.1, “VoltDB Management Center” for more information\nabout using the Management Center.\n1.3. Management Tasks\nDatabase administration responsibilities fall into five main categories, as described in Table 1.1, “Database\nManagement Tasks” . The following chapters are organized by category and explain how to perform each\ntask for a VoltDB database.\nTable 1.1. Database Management Tasks\nPreparing the Servers Before starting the database, you must make sure that the server hardware and\nsoftware is properly configured. This chapter provides a checklist of tasks to\nperform before starting VoltDB.\nBasic Database Opera-\ntionsThe basic operations of initializing, starting, and stopping the database. This\nchapter describes the procedures needed to handle these fundamental tasks.\nMaintenance and Up-\ngradesOver time, both the cluster and the database may require maintenance — either\nplanned or emergency. This chapter explains the procedures for performing\nhardware and software maintenance, as well as standard maintenance, such\nas backing up the database and upgrading the hardware, the software, and the\ndatabase schema.\nPerformance Monitoring Another important role for many database administrators is monitoring data-\nbase performance. 
Monitoring is important for several reasons:\n•Performance Analysis\n•Load Balancing\n•Fault Detection\nThis chapter describes the tools available for monitoring VoltDB databases.\nProblem Reporting &\nAnalysisIf an error does occur and part or all of the database cluster fails, it is not only\nimportant to get the database up and running again, but to diagnose the cause\nof the problem and take corrective actions. VoltDB produces a number of log\nfiles that can help with problem resolution. This chapter describes the different\nlogs that are available and how to use them to diagnose database issues.\n3Chapter 2. Preparing the Servers\nVoltDB is designed to run on commodity servers, greatly reducing the investment required to operate\na high performance database. However, out of the box, these machines are not necessarily configured\nfor optimal performance of a dedicated, clustered application like VoltDB. This is especially true when\nusing cloud-based services. This chapter provides best practices for configuring servers to maximize the\nperformance and stability of your VoltDB installation.\n2.1. Server Checklist\nThe very first step in configuring the servers is making sure you have sufficient memory, computing power,\nand system resources such as disk space to handle the expected workload. The VoltDB Planning Guide\nprovides detailed information on how to size your server requirements.\nThe next step is to configure the servers and assign appropriate resources for VoltDB tasks. Specific server\nfeatures that must be configured for VoltDB to perform optimally are:\n•Install required software\n•Configure memory management\n•Turn off TCP Segmentation\n•Configure the time synchronization services\n•Increase resource limits\n•Define network addresses for all nodes in the cluster\n•Assign network ports\n2.2. Install Required Software\nTo start, VoltDB requires a recent release of the Linux operating system. The supported operating systems\nfor running production VoltDB databases are:\n•CentOS 7.0 and later, or version 8.0 and later\n•Red Hat (RHEL) 7.0 and later, or version 8.0 and later\n•Ubuntu 18.04 and 20.04\nIt may be possible to run VoltDB on other versions of Linux and Macintosh OS X 10.9 and later is sup-\nported for development purposes. However, the preceding operating system versions are the only fully\ntested and supported base platforms for running VoltDB in production.\nIn addition to the base operating system, VoltDB requires the following software at a minimum:\n•Java 8, 11, or 17\n•Time synchronization services, such as NTP or chrony\n•Python 3.6 or later\n4Preparing the Servers\nOracle Java SDK 8, 11, or 17 is recommended, but OpenJDK 8, 11, and 17 are also supported.\nVoltDB works best when the system clocks on all cluster nodes are synchronized to within 100 millisec-\nonds or less. However, the clocks are allowed to differ by up to 200 milliseconds before VoltDB refuses to\nstart. NTP, the Network Time Protocol, or chrony are recommended for achieving the necessary synchro-\nnization. NTP is installed and enabled by default on many operating systems. However, the configuration\nmay need adjusting (see Section 2.5, “Configure Time Services” for details) and in cloud instances where\nhosted servers are run in a virtual environment, a time service may not be installed or enabled by default.\nTherefore you need to do this manually.\nFinally, VoltDB implements its command line interface through Python. Python 3.6 or later is required\nto use the VoltDB shell commands.\n2.3. 
Configure Memory Management\nBecause VoltDB is an in-memory database, proper memory management is vital to the effective operation\nof VoltDB databases. Three important aspects of memory management are:\n•Swapping\n•Memory Mapping (Transparent Huge Pages)\n•Virtual memory\nThe following sections explain how best to configure these features for optimal performance of VoltDB.\n2.3.1. Disable Swapping\nSwapping is an operating system feature that optimizes memory usage when running multiple processes\nby swapping processes in and out of memory. However, any contention for memory, including swapping,\nwill have a very negative impact on VoltDB performance and functionality. You should disable swapping\nwhen using VoltDB.\nTo disable swapping on Linux systems, use the swapoff command. Alternately, you can set the kernel\nparameter vm.swappiness to zero.\n2.3.2. Disable Transparent Huge Pages\nTransparent Huge Pages (THP) are another operating system feature that optimizes memory usage for\nsystems with large amounts of memory. THP changes the memory mapping to use larger physical pages.\nThis can be helpful for general-purpose computing running multiple processes. However, for memory-in-\ntensive applications such as VoltDB, THP can actually negatively impact performance.\nTherefore, it is important to disable Transparent Huge Pages on servers running VoltDB. The following\ncommands, run as root or from another privileged account, disable THP:\n$ echo never >/sys/kernel/mm/transparent_hugepage/enabled \n$ echo never >/sys/kernel/mm/transparent_hugepage/defrag\nOr:\n$ echo madvise >/sys/kernel/mm/transparent_hugepage/enabled \n$ echo madvise >/sys/kernel/mm/transparent_hugepage/defrag\n5Preparing the Servers\nFor RHEL systems (including CentOS), replace \"transparent_hugepage\" with \"redhat_transparen-\nt_hugepage\".\nNote, however, that these commands disable THP only while the server is running. Once the server reboots,\nthe default setting will return. Therefore, we recommend you disable THP permanently as part of the\nstartup process. For example, you can add the following commands to a server startup script (such as /\netc/rc.local):\n#!/bin/bash\nfor f in /sys/kernel/mm/*transparent_hugepage/enabled; do\n if test -f $f; then echo never > $f; fi\ndone\nfor f in /sys/kernel/mm/*transparent_hugepage/defrag; do\n if test -f $f; then echo never > $f; fi\ndone \nTHP are enabled by default in Ubuntu 14.04 and later as well as RHEL 6.x and 7.x. To see if they are\nenabled on your current system, use either of the following pair of commands:\n$ cat /sys/kernel/mm/transparent_hugepage/enabled\n$ cat /sys/kernel/mm/transparent_hugepage/defrag\n$ cat /sys/kernel/mm/redhat_transparent_hugepage/enabled\n$ cat /sys/kernel/mm/redhat_transparent_hugepage/defrag\nIf THP is disabled, the output from the preceding commands should be either “always madvise [never]”\nor “always [madvise] never”.\n2.3.3. Enable Virtual Memory Mapping and Overcommit\nAlthough swapping is bad for memory-intensive applications like VoltDB, the server does make use of\nvirtual memory (VM) and there are settings that can help VoltDB make effective use of that memory.\nFirst, it is a good idea to enable VM overcommit. This avoids VoltDB encountering unnecessary limits\nwhen managing virtual memory. This is done on Linux by setting the system parameter vm.overcom-\nmit_memory to a value of \"1\".\n$ sysctl -w vm.overcommit_memory=1\nSecond, for large memory systems, it is also a good idea to increase the VM memory mapping limit. 
So\nfor servers with 64 Gigabytes or more of memory, the recommendation is to increase VM memory map\ncount to 1048576. You do this on Linux with the system parameter max_map_count . For example:\n$ sysctl -w vm.max_map_count=1048576\nRemember that for both overcommit and the memory map count, the parameters are only active while the\nsystem is running and will be reset to the default on reboot. So be sure to add your new settings to the file\n/etc/sysctl.conf to ensure they are in effect when the system is restarted.\n2.4. Turn off TCP Segmentation\nUnder certain conditions, the use of TCP segmentation offload (TSO) and generic receive offload (GRO)\ncan cause nodes to randomly drop out of a cluster. These settings let the system to batch network packets,\nproducing unnecessary latency and interfering with the necessary communication between VoltDB cluster\nnodes. The symptoms of this problem are that nodes timeout — that is, the rest of the cluster thinks they\n6Preparing the Servers\nhave failed — although the node is still running and no other network issues (such as a network partition)\nare the cause.\nDisabling TSO and GRO is recommended for any VoltDB clusters that experience such instability. The\ncommands to disable offloading are the following, where N is replaced by the number of the ethernet card:\nethtool -K ethN tso off\nethtool -K ethN gro off\nNote that these commands disable offloading temporarily. You must issue these commands every time the\nnode reboots or, preferably, put them in a startup configuration file.\nIt is also a good idea to check that TCP_RETRIES2 has not been altered. Setting TCP_RETRIES2 too low\n(below 8) can cause similar unpredictable timeouts. See the description of the VoltDB heartbeat timeout\nsetting in Section A.3.7, “Heartbeat” for details.\n2.5. Configure Time Services\nTo orchestrate activities between the cluster nodes, VoltDB relies on the system clocks being synchro-\nnized. Many functions within VoltDB — such as cluster start up, nodes rejoining, and schema updates\namong others — are sensitive to variations in the time values between nodes in the cluster. Therefore, it\nis important to keep the clocks synchronized within the cluster. Specifically:\n•The server clocks in the cluster must be synchronized to within 200 milliseconds of each other when\nthe cluster starts. (Ideally, skew between nodes should be kept under 10 milliseconds.)\n•Time must not move backwards\nThe easiest way to achieve these goals is to install and configure a time service such as NTP (Network Time\nProtocol) or chrony to use a common time host server for synchronizing the servers. NTP is often installed\nby default but may require additional configuration to achieve acceptable synchronization. Specifically,\nlisting only one time server (and the same one for all nodes in the cluster) ensures minimal skew between\nservers. You can even establish your own time server to facilitate this. All nodes in the cluster should\nalso list each other as peers. 
For example, the following NTP configuration file uses a local time server\n(myntpsvr) and establishes all nodes in the cluster as peers:\nserver myntpsvr burst iburst minpoll 4 maxpoll 4\npeer voltsvr1 burst iburst minpoll 4 maxpoll 4\npeer voltsvr2 burst iburst minpoll 4 maxpoll 4\npeer voltsvr3 burst iburst minpoll 4 maxpoll 4\nserver 127.127.0.1\nSee the chapter on Configuring NTP in the Guide to Performance and Customization for an example of\nconfiguring a time service for optimal performance when running VoltDB.\n2.6. Increase Resource Limits\nThere are several resource limits managed by the operating system where per-user default values are opti-\nmized for time-sharing systems but can be too restrictive for dedicated applications like VoltDB. In partic-\nular, although VoltDB is an in-memory database, process threads require large numbers of file descriptors,\nto the point where the file descriptor limit can interfere with VoltDB operations.\n7Preparing the Servers\nIt is recommended that you increase the process and file descriptor limits for the process starting the\nVoltDB server. You can do this with the ulimit shell command prior to starting VoltDB. The recommended\nminimum limits for processes and file descriptors are 8192 and 16384, respectively. Note that these are top\nlimits, so on dedicated servers there are no drawbacks to setting these values even higher. For example,\nthe following commands set the limits to 10,000 and 40,000 before starting the server:\n$ ulimit -u 10000\n$ ulimit -n 40000\n$ voltdb start\nTo set the limits permanently, you can set the limits as part of the system initialization. See your operation\nsystem documentation on ulimit and init.d for more information.\n2.7. Configure the Network\nIt is also important to ensure that the network is configured correctly so all of the nodes in the VoltDB\ncluster recognize each other. If the DNS server does not contain entries for all of the servers in the cluster,\nan alternative is to add entries in the /etc/hosts file locally for each server in the cluster. For example:\n12.24.48.101 voltsvr1\n12.24.48.102 voltsvr2\n12.24.48.103 voltsvr3\n12.24.48.104 voltsvr4\n12.24.48.105 voltsvr5\n2.8. Assign Network Ports\nVoltDB uses a number of network ports for functions such as internal communications, client connections,\nrejoin, database replication, and so on. For these features to perform properly, the ports must be open and\navailable. Review the following list of ports to ensure they are open and available (that is, not currently\nin use).\nFunction Default Port\nNumber\nClient Port 21212\nAdmin Port 21211\nWeb Interface Port (httpd) 8080\nInternal Server Port 3021\nReplication Port 5555\nZookeeper port 7181\nAlternately, you can reassign the port numbers that VoltDB uses. See Section A.5, “Network Ports” for\na description of the ports and how to reassign them.\n2.9. Eliminating Server Process Latency\nThe preceding sections explain how to configure your servers and network to maximize the performance of\nVoltDB. The goal is to avoid server functions, such as swapping or Java garbage collection, from disrupting\nthe proper operation of the VoltDB process.\n8Preparing the Servers\nAny latency in the scheduling of VoltDB threads can impact the performance of your database. These\ndelays result in corresponding latency in the database transactions themselves. 
But equally important,\nprolonged latency can interrupt intra-cluster communication as well, to the point where the cluster may\nincorrectly assume a node has failed and drop it as a member. If server latency causes a node not to respond\nto network messages beyond the heartbeat timeout setting, the rest of the cluster will drop the node as a\n\"dead host\".\nTherefore, in addition to the configuration settings described earlier in this chapter, the following are some\nknown causes of latency you should watch out for:\n•Other applications — Clearly, running other applications on the same servers as VoltDB can result\nin unpredictable resource conflicts for memory, CPU, and disk access. Running VoltDB on dedicated\nservers is always recommended for production environments.\n•Frequent snapshots — Initiating snapshots consumes resources. Especially on a database under heavy\nload, this can result in latency spikes. Although it is possible to run both automated snapshots and\ncommand logging (which performs its own snapshots), they are redundant and can cause unnecessary\ndelays. Also, when using command logging on a busy database, consider increasing the size of the\ncommand log segments if snapshots are occurring too frequently.\n•I/O contention — Contention for disk resources can interfere with the effective processing of VoltDB\ndurability features. This can be avoided by allocating separate devices for individual disk-based activity.\nFor example, wherever possible locate command logs and snapshots on separate devices.\n•JVM statistics collection — Enabling Java Virtual Machine (JVM) statistics can produce erratic latency\nissues for memory-intensive applications like VoltDB. Disabling JVM stats is strongly recommended\nwhen running VoltDB servers. You can disable JVM stats by issuing the following command before\nstarting the VoltDB process:\nexport VOLTDB_OPTS='-XX:+PerfDisableSharedMem'\nAlternately, you can write the JVM stats to an in-memory virtual disk, such as /tmpfs.\n•Hardware power saving options — Beware of hardware options that attempt to conserve energy by\nputting \"idle\" processes or resources into a reduced or sleep state. Resuming quiesced resources takes\ntime and the requesting process is blocked for that period. Make sure power saving options are disabled\nfor the resources you need (such as CPUs and disks).\nAlthough not specific to server resources, the following are some additional causes of latency due to\nimproper database and application design. When combined with the previous server issues, they can result\nin erratic and troublesome performance and even node failures.\n•Sequential scans of large tables — Perhaps the most common cause of latency is queries that require\na sequential scan of extremely large tables of data. Any query that must read through every record in\na table will perform badly in proportion to the size of the table. Be sure to review the execution plans\nfor key transactions to ensure indexes are used as expected and add indexes to avoid sequential scans\nwherever possible.\n•Large deletes — VoltDB retains and reuses memory whenever you delete tuples. If the amount of\ndeleted space reaches a certain percentage of overall memory usage, VoltDB compresses the memory\nused. Transactions wait while this function is performed. To avoid latency caused by compaction, you\ncan perform deletes in smaller, ongoing transactions. 
The USING TTL feature of the CREATE TABLE\nstatement can assist in automating the incremental purging of old records.\n9Chapter 3. Starng and Stopping the\nDatabase\nThe fundamental operations for database administration are starting and stopping the database. But before\nyou start the database, you need to decide what database features you want to enable and how they should\nwork. These features include attributes such as the amount of replication you want to use to increase\navailability in case of server failure and what level of durability is required for those cases where the\ndatabase itself stops. These and other settings are defined in the configuration file, which you specify on\nthe command line when you initialize the root directory for the database on each server.\nThis chapter explains how to configure the cluster's physical structure and features in the configuration\nfile and how to initialize the root directory and start and stop the database.\n3.1. Configuring the Cluster and Database\nYou specify the cluster configuration and what features to use in the configuration file, which is an XML\nfile that you can create and edit manually. In the simplest case, the configuration file specifies how many\npartitions to create on each server, and what level of availability (K-safety) to use. For example:\n<?xml version=\"1.0\"?>\n<deployment>\n <cluster sitesperhost=\"12\"\n kfactor=\"1\"\n />\n</deployment>\n•The sitesperhost attribute specifies the number of partitions (or \"sites\") to create on each server.\nSet to eight by default, it is possible to optimize the number of sites per host in relation to the number\nof processors per machine. The optimal number is best determined by performance testing against the\nexpected workload. See the chapter on \" Benchmarking \" in the VoltDB Planning Guide for details.\n•The kfactor attribute specifies the K-safety value to use. The higher the K-safety value, the more\nnode failures the cluster can withstand without affecting database availability. However, increasing the\nK-safety value increases the number of copies of each unique partition. High availability is a trade-\noff between replication to protect against node failure and the number of unique partitions, therefore\nthroughput performance. See the chapter on availability in the Using VoltDB manual for more informa-\ntion on determining an optimal K-safety value.\nIn addition to the sites per host and K-safety, you can use the configuration file to enable and configure\nspecific database features such as export, command logging, and so on. The following table summarizes\nsome of the key features that are settable in the configuration file.\nTable 3.1. Selecting Database Features in the Configuration File\nFeature Example\nCommand Logging — Command logging\nprovides durability by logging transactions to\ndisk so they can be replayed during a recov-\nery. You can configure the type of command\nlogging (synchronous or asynchronous), the<commandlog enabled=\"true\"\n synchronous=\"false\">\n <frequency time=\"300\" \n transactions=\"1000\"/>\n</commandlog>\n10Starting and Stopping the Database\nFeature Example\nlog file size, and the frequency of the logs (in\nterms of milliseconds or number of transac-\ntions).\nSnapshots — Automatic snapshot provide an-\nother form of durability by creating snapshots\nof the database contents, that can be restored\nlater. 
You can configure the frequency of the\nsnapshots, the unique file prefix, and how\nmany snapshots are kept at any given time.<snapshot enabled=\"true\"\n frequency=\"30m\"\n prefix=\"mydb\"\n retain=\"3\" />\nExport — Export allows you to write select-\ned records from the database to one or more\nexternal targets, which can be files, another\ndatabase, or another service. VoltDB provides\ndifferent export connectors for each protocol.\nYou can configure the type of export for each\nstream as well as other properties, which are\nspecific to the connector type. For example,\nthe file connector requires a specific type (or\nformat) for the files and a unique identifier\ncalled a \"nonce\".<export>\n <configuration target=\"dblog\" type=\"file\">\n <property name=\"type\">csv</property>\n <property name=\"nonce\">dblog</property>\n </configuration>\n</export>\nSecurity & Accounts — Security lets you\nprotect your database against unwanted ac-\ncess by requiring all connections authenticate\nagainst known usernames and passwords. In\nthe deployment file you can define the user ac-\ncounts and passwords and what role or roles\neach user fulfills. Roles define what permis-\nsions the account has. Roles are defined in the\ndatabase schema.<security enabled=\"true\"/>\n<users>\n <user name=\"admin\" \n password=\"superman\" \n roles=\"administrator\"/>\n <user name=\"mitty\" \n password=\"thurber\" \n roles=\"user,writer\"/>\n</users>\nFile Paths — Paths define where VoltDB\nwrites any files or other disc-based content.\nYou can configure specific paths for each type\nof service, such as snapshots, command logs,\nexport overflow, etc.<paths>\n <exportoverflow path=\"/tmp/overflow\" />\n <snapshots path=\"/opt/archive\" />\n</paths>\n3.2. Initializing the Database Root Directory\nOnce you create the configuration file, you are ready to initialize the database root directory, using the\nvoltdb init command. You issue this command on each node of the cluster, specifying the location for\nthe root directory, the configuration file, license, and schema and stored procedure class files. There are\ndefaults for each argument. But if you do specify the configuration, license, schema or classes you must\nspecify the same values on every node of the cluster. For example:\n$ voltdb init --dir=~/database \\ \n --config=deployment.xml \\ \n --license=~/license.xml \\ \n --schema=myschema.sql \\ \n --classes=myprocs.jar \nOn the command line, you can specify up to five arguments:\n11Starting and Stopping the Database\nThe location where the root directory will be created\nThe configuration file, which enables and sets attributes for specific VoltDB features\nThe license file (when using the VoltDB Enterprise Edition)\nOne or more SQL DDL files\nOne or more JAR files containing stored procedure classes\nWhen you initialize the root directory, VoltDB:\n1.Creates the root directory (voltdbroot) as a subfolder of the specified parent directory\n2.Saves the configuration and license, plus any schema and class files to preload, in the new root directory\nNote that you only need to initialize the root directory once. Once the root directory is initialized, you can\nstart and stop the database as needed. VoltDB uses the root directory to manage the current configuration\noptions and backups of the data — if those features are selected — in command logs and snapshots. 
If you\ndo not specify a license on the command line, VoltDB looks for a license in the current working directory,\nyour home directory, or in the directory where the VoltDB software is installed and copies it into the root\ndirectory if it finds one.\nIf the root directory already exists or has been initialized before, you cannot re-initialize the directory\nunless you include the --force argument. This is to protect you against accidentally deleting data from a\nprevious database session.\n3.3. Starting the Database\nOnce you initialize the root directory, you are ready to start the database using the voltdb start command.\nYou issue this command, specifying the location of the root directory, the number of servers required, and\none or more server addresses to use as \"host\" to manage the initial formation of the cluster. You issue the\nsame command on every node in the cluster. For example:\n$ voltdb start --dir=~/database \\ \n --count=5 \\ \n --host=svr1,svr2 \nOn the command line, you specify four arguments:\nThe location of the root directory\nThe number of servers in the cluster\nOne or more nodes from the cluster to use as the \"host\", to coordinate the initial startup of the cluster\nYou must specify the same number of servers and hosts (listed in exactly the same order) on all nodes of\nthe cluster. You can, optionally, specify all nodes of the cluster in the --host argument. In which case, you\ncan leave off the --count argument and VoltDB assumes the number of hosts is the total number of servers.\nWhen you start the database, all nodes select one of the servers from the host list as the \"host\". The host\nthen:\n1.Waits until the necessary number of servers (as specified by the count) are connected\n2.Creates the network mesh between the servers\n3.Verifies that the configuration options match for all nodes\nAt this point, the cluster is fully initialized and the \"host\" ends its special role and becomes a peer to\nall the other nodes. If the database was run before and command logs or automated snapshots exist, the\n12Starting and Stopping the Database\ncluster now recovers the data from the previous session. All nodes in the cluster then write an informational\nmessage to the console verifying that the database is ready:\nServer completed initialization.\n3.4. Loading the Database Definition\nStored procedures are compiled into classes and then packaged into a JAR file, as described in the section\non installing stored procedures in the Using VoltDB manual. To fully load the database definition you\nwill need one or more JAR files of stored procedure classes and a text file containing the data definition\nlanguage (DDL) statements that declare the database schema.\nResponsibility for loading the database schema and stored procedures varies from company to company.\nIn some cases, operators and administrators are only responsible for initiating the database; developers\nmay load and modify the schema themselves. In other cases, the administrators are responsible for both\nstarting the cluster and loading the correct database schema as well.\nIf the schema and stored procedures are predefined, you can include them when you initialize the database\nroot directory and VoltDB will preload them when the database starts for the first time. Otherwise, you\ncan load the schema and class files using the sqlcmd utlity after the database starts. The following sections\ndescribe each approach.\n3.4.1. 
Preloading the Schema and Classes When You Initial-\nize the Database\nIf the database schema is predefined, you can include it when you initialize the database root directory,\nusing the --schema and --classes arguments to the voltdb init command. The --schema flag lets\nyou specify one or more text files containing SQL DDL statements and the --classes flag lets you\nspecify one or more JAR files containing the classes associated with any stored procedures you want to\ndeclare.\nNote that DDL statements and Java classes can be order-dependent. For example, a stored procedure\ndefinition can depend on the existence of a table definition to define its partitioning column. VoltDB loads\nany classes before loading the schema file. However, you should be sure to specify the individual schema\nfiles or JAR files in the order you want them loaded.\nAlso, you must specify the same files, in the same order, when initializing all nodes of the cluster. For\nexample:\n$ voltdb init --dir=~/db \\ \n --schema=tables.sql,streams.sql,procs.sql \\ \n --classes=globalprocs.jar,myprocs.jar \n3.4.2. Loading the Schema and Classes After the Database\nStarts\nIf you are responsible for defining the correct schema once the database is running, or modifying an existing\nschema, you can do this using the sqlcmd utility. The following example assumes the schema is contained\nin two files: storedprocs.jar and dbschema.sql . Once the database cluster has started, you can\nstart the sqlcmd utility and load the files at the sqlcmd prompt using the sqlcmd load classes and file\ndirectives:\n$ sqlcmd\n13Starting and Stopping the Database\n1> load classes storedprocs.jar;\n2> file dbschema.sql;\nNote that when loading the schema, you should always load the stored procedures first, so the class files\nare available for any CREATE PROCEDURE statements within the schema.\n3.5. Stopping the Database\nHow you choose to stop a VoltDB depends on what features you have enabled. If you are using command\nlogging (which is enabled by default in the VoltDB Enterprise Edition), it is a good idea to perform an\norderly shutdown when stopping the database to ensure that all active client queries have a chance to\ncomplete and return their results (and no new queries start) before the shutdown occurs.\nTo perform an orderly shutdown you can use the voltadmin shutdown command:\n$ voltadmin shutdown\nAs with all voltadmin commands, you can use them remotely by specifying one of the cluster servers on\nthe command line:\n$ voltadmin shutdown --host=voltsvr2\nIf security is enabled, you will also need to specify a username and password for a user with admin per-\nmissions:\n$ voltadmin shutdown --host=voltsvr2 -u root -p Suda51\nIf you are not using command logging, you want to make sure you perform a snapshot before shutting\ndown. You can do this manually using the voltadmin save command. Or you can simply add the --save\nargument to the voltadmin shutdown command:\n$ voltadmin shutdown --save\nThe most recent snapshot saved to the database snapshots directory (by the voltadmin save command to\nthe default location, automated snapshots, or voltadmin shutdown --save ) will automatically be restored\nby the next voltdb start command.\n3.6. Restarting the Database\nRestarting a VoltDB database is done the same way as starting the database for the first time, except there\nis no need to initialize the root directory. You simply issue the same voltdb start command you did when\nyou started it for the first time. 
For example:\n$ voltdb start --dir=~/database \\ \n --count=5 \\ \n --host=svr1,svr2 \nIf you are using command logging, or you created a snapshot in the default snapshots directory, VoltDB\nautomatically reinstates the data once the cluster is established. After the schema is loaded and all data is\nrestored, the database enables client access.\n3.7. Starting and Stopping Individual Servers\nWhen using K-safety, it is possible for one or more nodes in a cluster to stop without stopping the database\nitself. (See the chapter on availability in the Using VoltDB manual for a complete description of K-safety.)\n14Starting and Stopping the Database\nIf a server stops — either intentionally or accidentally — you can start the server and have it rejoin the\ncluster using the same voltdb start command used to start the cluster. For example:\n$ voltdb start --dir=~/database \\ \n --count=5 \\ \n --host=svr1,svr2 \nThe start command will check to see if the cluster is still running, based on the list of servers in the --\nhost argument. If so, the server will rejoin the cluster.\nNote that if there are multiple servers listed in the --host argument, the server can rejoin even if it is\none of the listed hosts. If you only list one host and that is the server that stopped, you will need to list\na different server in the --host argument — any server that is still an active member of the running\ncluster. (This is why listing multiple nodes in the --host argument is beneficial: you can use exactly the\nsame start command in multiple situations.)\nIf you want to stop a single node in a K-safe cluster — for example, to perform maintenance on the\nhardware — you can do this using the voltadmin stop command. The voltadmin stop command stops a\nsingle node, as long as the cluster has enough K-safety to remain viable after the nodes stops. (If not, the\nstop command is rejected.) For example to stop svr2, you can issue the following command:\n$ voltadmin stop --host=svr1 svr2 \nNote that the stop command does not have to issued on the server that is being stopped. You can issue\nthe command on any active server in the cluster. See Chapter 4, Maintenance and Upgrades for more\ninformation about performing maintenance tasks.\n15Chapter 4. Maintenance and Upgrades\nOnce the database is running, it is the administrator's role to keep it running. This chapter explains how\nto perform common maintenance and upgrade tasks, including:\n•Database backups\n•Schema and stored procedure updates\n•System and hardware upgrades\n•VoltDB software upgrades\n•License updates\n4.1. Backing Up the Database\nIt is a common safety precaution to backup all data associated with computer systems and store copies off-\nsite in case of system failure or other unexpected events. Backups are usually done on a scheduled basis\n(every day, every week, or whatever period is deemed sufficient).\nVoltDB provides several options for backing up the database contents. The easiest option is to save a\nnative snapshot then backup the resulting snapshot files to removable media for archiving. The advantage\nof this approach is that native snapshots contain both a complete copy of the data and the schema. So\nin case of failure the snapshot can be restored to the current or another cluster using a single voltadmin\nrestore command.\nThe key thing to remember when using native snapshots for backup is that each server saves its portion\nof the database locally. 
So you must fetch the snapshot files for all of the servers to ensure you have a\ncomplete set of files. The following example performs a manual snapshot on a five node cluster then uses\nscp to remotely copy the files from each server to a single location for archiving.\n$ voltadmin save --blocking --host=voltsvr3 \\\n /tmp/voltdb backup\n$ scp -l 100 'voltsvr1:/tmp/voltdb/backup*' /tmp/archive/\n$ scp -l 100 'voltsvr2:/tmp/voltdb/backup*' /tmp/archive/\n$ scp -l 100 'voltsvr3:/tmp/voltdb/backup*' /tmp/archive/\n$ scp -l 100 'voltsvr4:/tmp/voltdb/backup*' /tmp/archive/\n$ scp -l 100 'voltsvr5:/tmp/voltdb/backup*' /tmp/archive/\nNote that if you are using automated snapshots or command logging (which also creates snapshots), you\ncan use the automated snapshots as the source of the backup. However, the automated snapshots use a\nprogrammatically generated file prefix, so your backup script will need some additional intelligence to\nidentify the most recent snapshot and its prefix.\nThe preceding example also uses the scp limit flag ( -l 100) to constrain the bandwidth used by the copy\ncommand to 100kbits/second. Use of the -l flag is recommended to avoid the copy operation blocking the\nVoltDB server process and impacting database performance.\nFinally, if you wish to backup the data in a non-proprietary format, you can use the voltadmin save --for-\nmat=csv command to create a snapshot of the data as comma-separated value (CSV) formatted text files.\nThe advantage is that the resulting files are usable by more systems than just VoltDB. The disadvantage is\nthat the CSV files only contain the data, not the schema. These files cannot be read directly into VoltDB,\n16Maintenance and Upgrades\nlike a native snapshot can. Instead, you will need to initialize and start a new database, load the schema,\nthen use the csvloader utility to load individual files into each table to restore the database completely.\n4.2. Updating the Database Schema\nAs an application evolves, the database schema often needs changing. This is particularly true during\nthe early stages of development and testing but also happens periodically with established applications,\nas the database is tuned for performance or adjusted to meet new requirements. In the case of VoltDB,\nthese updates may involve changes to the table definitions, to the indexes, or to the stored procedures. The\nfollowing sections explain how to:\n•Perform live schema updates\n•Change unique indexes and partitioning using save and restore\n4.2.1. Performing Live Schema Updates\nThere are two ways to update the database schema for a VoltDB database: live updates and save/restore\nupdates. For most updates, you can update the schema while the database is running. To perform this\ntype of live update, you use the DDL CREATE, ALTER, and DROP statements to modify the schema\ninteractively as described in the section on modifying the schema in the Using VoltDB manual.\nYou can make any changes you want to the schema as long as the tables you are modifying do not contain\nany data. The only limitations on performing live schema changes are that you cannot:\n•Add or broaden unique constraints (such as indexes or primary keys) on tables with existing data\n•Reduce the datatype size of columns on tables with existing data (for example, changing the datatype\nfrom INTEGER to TINYINT)\nThese limitations are in place to guarantee that the schema change will succeed without any pre-existing\ndata violating the constraint. 
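For changes that fall within these limits, a live update is simply interactive DDL issued while the cluster is running. The following sketch adds a column through the sqlcmd utility; the table and column names are purely illustrative:
$ # Hypothetical table and column; adding a nullable column is allowed as a live change
$ sqlcmd
1> ALTER TABLE customer ADD COLUMN loyalty_tier TINYINT;
2> exit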
If you know that the data in the database does not violate the new constraints\nyou can make these changes using the save and restore commands, as described in the following section.\n4.2.2. Performing Updates Using Save and Restore\nIf you need to add unique indexes or reduce columns to database tables with existing data, you must use\nthe voltadmin save and restore commands to perform the schema update. This requires shutting down\nand restarting the database to allow VoltDB to validate the existing data against the new constraints.\nTo perform a schema update using save and restore, use the following steps:\n1.Create a new schema file containing the updated DDL statements.\n2.Pause the database ( voltadmin pause ).\n3.Save a snapshot of the database contents to an specific location ( voltadmin save --blocking {path}\n{file-prefix} ).\n4.Shutdown the database ( voltadmin shutdown ).\n5.Re-initialize and restart the database starting in admin mode ( voltdb init --force and voltdb start --\npause).\n6.Load the stored procedures and new schema (using the sqlcmd LOAD CLASSES and FILE directives)\n17Maintenance and Upgrades\n7.Restore the snapshot created in Step #3 ( voltadmin restore {path} {file-prefix} ).\n8.Return the database to normal operations ( voltadmin resume ).\nFor example:\n$ # Issue once\n$ voltadmin pause\n$ voltadmin save --blocking /opt/archive/ mydb\n$ voltadmin shutdown\n$ # Issue next two commands on all servers\n$ voltdb init --dir=~/mydb --config=deployment.xml --force\n$ voltdb start --dir=~/mydb --host=svr1,svr2 --count=5 \n$ # Issue only once\n$ sqlcmd\n1> load classes storedprocs.jar;\n2> file newschema.sql;\n3> exit \n$ voltadmin restore /opt/archive mydb\n$ voltadmin resume\nThe key point to remember when adding new constraints is that there is the possibility that the restore\noperation will fail if existing records violate the new constraint. This is why it is important to make sure\nyour database contents are compatible with the new schema before performing the update.\n4.3. Upgrading the Cluster\nSometimes you need to update or reconfigure the server infrastructure on which the VoltDB database is\nrunning. Server upgrades are one example. A server upgrade is when you need to fix or replace hardware,\nupdate the operating system, or otherwise modify the underlying system.\nServer upgrades usually require stopping the VoltDB database process on the specific server being ser-\nviced. However, if your database cluster uses K-safety for enhanced availability, it is possible to complete\nserver upgrades without any database downtime by performing a rolling hardware upgrade , where each\nserver is upgraded in turn using the voltadmin stop and start commands.\nAnother type of upgrade is when you want to reconfigure the cluster as a whole. Reasons for reconfiguring\nthe cluster are because you want to add or remove servers from the cluster or you need to modify the\nnumber of partitions per server that VoltDB uses.\nAdding and removing servers from the cluster can happen without stopping the database. This is called\nelastic scaling . 
Changing the K-Safety factor or number of sites per host requires restarting the cluster\nduring a maintenance window .\nThe following sections describe five methods of cluster upgrade:\n•Performing server upgrades\n•Performing rolling upgrades on K-safe clusters\n•Adding servers to a running cluster through elastic scaling\n•Removing servers from a running cluster through elastic scaling\n18Maintenance and Upgrades\n•Reconfiguring the cluster with a maintenance window\n4.3.1. Performing Server Upgrades\nIf you need to upgrade or replace the hardware or software (such as the operating system) of the individual\nservers, this can be done without taking down the database as a whole. As long as the server is running\nwith a K-safety value of one or more, it is possible to take a server out of the cluster without stopping the\ndatabase. You can then fix the server hardware, upgrade software (other than VoltDB), even replace the\nserver entirely with a new server, then bring the server back into the cluster.\nTo perform a server upgrade:\n1.Stop the VoltDB server process on the server using the voltadmin stop command. As long as the cluster\nis K-safe, the rest of the cluster will continue running.\n2.Perform the necessary upgrades.\n3.Have the server rejoin the cluster using the voltdb start command.\nThe start command starts the database process on the server, contacts the database cluster, then copies the\nnecessary partition content from other cluster nodes so the server can then participate as a full member of\nthe cluster, While the server is rejoining, the other database servers remain accessible and actively process\nqueries from client applications.\nWhen rejoining a cluster you can use the same start command used when starting the cluster as a whole. If,\nhowever, you need to replace the server (say, for example, in the case of a disk failure), you will also need\nto initialize a root directory for the database process on the new machine. You do this using the current\nconfiguration file for the cluster. For example:\n$ voltdb init --dir=~/database --config=deployment.xml\n$ voltdb start --dir=~/database --host=svr1,svr2\nIf no changes have been made, you can use the same configuration file used to initialize the other servers.\nIf you have used voltadmin update to change the configuration or changed settings using the VoltDB\nManagement Center (VMC), you can download a copy of the latest configuration from VMC.\nIf the cluster is not K-safe — that is, the K-safety value is 0 — then you must follow the instructions in\nSection 4.3.5, “Reconfiguring the Cluster During a Maintenance Window” to upgrade the servers.\n4.3.2. Performing Rolling Hardware Upgrades on K-Safe\nClusters\nIf you need to upgrade all of the servers in a K-safe cluster (for example, if you are upgrading the operating\nsystem), you can perform a rolling hardware upgrade by stopping, upgrading, then rejoining each server\none at a time. Using this process the entire cluster can be upgraded without suffering any downtime of\nthe database. Just be sure to wait until the rejoining server has become a full member of the cluster before\nremoving and upgrading the next server in the rotation. Specifically, wait until the following message\nappears in the log or on the console for the rejoining server:\nNode rejoin completed. \nAlternately, you can attempt to connect to the server remotely — for example, using the sqlcmd command\nline utility. If your connection is rejected, the rejoin has not finished. 
If you successfully connect to the\nclient port of the rejoining node, you know the rejoin is complete:\n19Maintenance and Upgrades\n$ sqlcmd --servers=myserver\nSQL Command :: myserver:21212\n1>\nNote\nYou cannot update the VoltDB software itself using the rolling hardware upgrade process, on-\nly the operating system, hardware, or other software. See Section 4.4, “Upgrading VoltDB Soft-\nware” for information about minimizing downtime during a VoltDB software upgrade.\n4.3.3. Adding Servers to a Running Cluster with Elastic Scal-\ning\nIf you want to add servers to a VoltDB cluster — usually to increase performance and/or capacity — you\ncan do this without having to restart the database. You add servers to the cluster using the voltdb start\ncommand with the --add flag. Note, as always, you must initialize a root directory before issuing the\nstart command. For example:\n$ voltdb init --dir=~/database --config=deployment.xml\n$ voltdb start --dir=~/database --host=svr1,svr2 --add\nThe --add flag specifies that if the cluster full — that is, all of the specified number of servers are currently\nactive in the cluster — the joining node can be added to elastically expand the cluster. You must elastically\nadd a full complement of servers to match the K-safety value (K+1) before the servers can participate as\nactive members of the cluster. For example, if the K-safety value is 2, you must add 3 servers before they\nactually become part of the cluster and the cluster rebalances its partitions.\nWhen you add servers to a VoltDB database, the cluster performs the following actions:\n1.The new servers are added to the cluster configuration and sent copies of the schema, stored procedures,\nand deployment file.\n2.Once sufficient servers are added, copies of all replicated tables and their share of the partitioned tables\nare sent to the new servers.\n3.As the data is rebalanced, the new servers begin processing transactions for the partition content they\nhave received.\n4.Once rebalancing is complete, the new servers are full members of the cluster.\nIf the cluster is not at its full complement of servers when you issue a voltdb start --add command, the\nadded server will join the cluster as a replacement for a missing node rather than extending the cluster.\nOnce the cluster is back to its full complement of nodes, the next voltdb start --add command will extend\nthe cluster.\n4.3.4. Removing Servers from a Running Cluster with Elastic\nScaling\nJust as you can add nodes to a running cluster to add capacity, you can remove nodes from a running cluster\nto reduce capacity. Obviously, you want to make sure that the smaller cluster has sufficient resources, such\nas memory, for your data and workload. If you are using K-safety, you also need to be sure the current\ncluster is large enough to remove nodes and still meet the requirements for your specific K-safety setting.\n20Maintenance and Upgrades\nTo remove nodes from a running cluster, you use the voltadmin resize command. The first step is to verify\nthat the cluster has enough nodes to reduce in size. You do this with the voltadmin resize --test command:\n$ voltadmin resize --test\nThe voltadmin resize --test command checks the cluster to make sure there are enough nodes to still be\noperational after the reduction and it reports which nodes will be removed as a result of the operation.\nThe number of nodes that will be removed is calculated as the smallest number that allows the cluster to\nmaintain K-safety. 
Without K-Safety, that is one node. With K-Safety, that is at least K+1, but possibly\nmore depending on the cluster configuration. The remaining node count and configuration must satisfy\nthe requirement that the number of nodes and the total number of partitions are both divisible by K+1.\nOnce you are ready to start reducing the cluster size, issue the voltadmin resize command without any\narguments:\n$ voltadmin resize\nThis command verifies that the cluster can be resized, reports which nodes will be removed, asks you to\nconfirm that you want to begin, and then starts the resize operation. Because resizing the cluster involves\nreorganizing and rebalancing the partitions, it can take a significant amount of time, depending on the\nsize of the database and the ongoing workload. You can track the progress of the resize operation using\nthe voltadmin status command. You can also adjust the priority between rebalancing the partitions and\nongoing client transactions by setting the duration and throughput of the rebalance operation. See the\nsection on \" Configuring How VoltDB Rebalances Nodes During Elastic Scaling \" in the Using VoltDB\nmanual for details.\nNote that once resizing starts, you cannot cancel the operation. So be certain you want to reduce the size\nof the cluster before beginning. If for any reason the resize operation fails unexpectedly, you can use the\nvoltadmin resize --retry command to restart the cluster reduction.\n4.3.5. Reconfiguring the Cluster During a Maintenance Win-\ndow\nIf you want to modify the cluster configuration, such as the number of sites per host or K-Safety factor,\nyou need to restart the database cluster as a whole. You can also choose to add or remove nodes from the\ncluster during this operation. Stopping the database temporarily to reconfigure the cluster is known as a\nmaintenance window .\nThe steps for reconfiguring the cluster with a maintenance window are:\n1.Place the database in admin mode ( voltadmin pause ).\n2.Perform a manual snapshot of the database ( voltadmin save --blocking ).\n3.Shutdown the database ( voltadmin shutdown ).\n4.Make the necessary changes to the configuration file.\n5.Reinitialize the database root directory on all nodes specifying the edited configuration file ( voltdb\ninit --force ).\n6.Start the new database in admin mode ( voltdb start --pause)\n7.Restore the snapshot created in Step #2 ( voltadmin restore ).\n8.Return the database to normal operations ( voltadmin resume ).\n21Maintenance and Upgrades\n4.4. Upgrading VoltDB Software\nAs new versions of VoltDB become available, you will want to upgrade the VoltDB software on your\ndatabase cluster. The simplest approach for upgrading recent versions of VoltDB — V6.8 or later — is\nto perform an orderly shutdown saving a final snapshot, upgrade the software on all servers, then re-start\nthe database. (If you are upgrading from earlier versions of the software, you can still upgrade using a\nsnapshot. But you will need to perform the save and restore operations manually.)\nHowever, upgrading using snapshots involves downtime while the software is being updated. An alterna-\ntive is to use database replication (DR) — either passive DR or cross data center replication (XDCR) —\nto upgrade with minimal or no downtime.\nUsing passive DR you can copy the active database contents to a new cluster, then switch the application\nclients to point to the new server. 
The advantage of this process is that the only downtime the business\napplication sees is the time needed to promote the new cluster and redirect the clients.\nUsing cross data center replication (XDCR), it is possible to perform an online upgrade , where there is\nno downtime and the database is accessible throughout the upgrade operation. If two or more clusters are\nalready active participants in an XDCR environment, you can shutdown and upgrade the clusters, one at\na time, to perform the upgrade leaving at least one cluster available at all times.\nYou can also use XDCR to upgrade a cluster, with limited extra hardware, by operationally splitting the\ncluster into two. Although this approach does not require downtime, it does reduce the K-safety for the\nduration of the upgrade.\nThe following sections describe the five approaches to upgrading VoltDB software:\n•Upgrading VoltDB Using Save and Restore\n•Upgrading Older Versions of VoltDB Manually\n•Upgrading VoltDB With Reduced Downtime Using a DR Replica\n•Performing an Online Upgrade Using Multiple XDCR Clusters\n•Performing an Online Upgrade With Limited Hardware\n4.4.1. Upgrading VoltDB Using Save and Restore\nUpgrading the VoltDB software on a single database cluster is easy. All you need to do is perform an\norderly shutdown saving a final snapshot, upgrade the VoltDB software on all servers in the cluster, then\nrestart the database. The steps to perform this procedure are:\n1.Shutdown the database and save a final snapshot ( voltadmin shutdown --save ).\n2.Upgrade VoltDB on all cluster nodes.\n3.Restart the database ( voltdb start ).\nThis process works for any recent (V6.8 or later) release of VoltDB.\n4.4.2. Upgrading Older Versions of VoltDB Manually\nTo upgrade older versions of VoltDB software (prior to V6.8), you must perform the save and restore\noperations manually. The steps when upgrading from older versions of VoltDB are:\n22Maintenance and Upgrades\n1.Place the database in admin mode ( voltadmin pause ).\n2.Perform a manual snapshot of the database ( voltadmin save --blocking ).\n3.Shutdown the database ( voltadmin shutdown ).\n4.Upgrade VoltDB on all cluster nodes.\n5.Re-initialize the root directory on all nodes ( voltdb init --force ).\n6.Start a new database in admin mode ( voltdb start --pause ).\n7.Restore the snapshot created in Step #2 ( voltadmin restore ).\n8.Return the database to normal operations ( voltadmin resume ).\n4.4.3. Upgrading VoltDB With Reduced Downtime Using a\nDR Replica\nWhen upgrading the VoltDB software in a production environment, it is possible to minimize the disruption\nto client applications by upgrading across two clusters using passive database replication (DR). To use this\nprocess you need a second database cluster to act as the DR replica and you must have a unique cluster\nID assigned to the current database.\nThe basic process for upgrading the VoltDB software using DR is to:\n1.Install the new VoltDB software on the secondary cluster\n2.Use passive DR to synchronize the database contents from the current cluster to the new cluster\n3.Pause the current database and promote the new cluster, switching the application clients to the new\nupgraded database\nThe following sections describe in detail the prerequisites for using this process, the steps to follow, and\n— in case there are any issues with the updated database — the process for falling back to the previous\nsoftware version.\n4.4.3.1. 
Prerequisites for Upgrading with Passive DR\nThe prerequisites for using DR to upgrade VoltDB are:\n•A second cluster with the same configuration (that is, the same number of servers and sites per host)\nas the current database cluster.\n•The current database cluster must have a unique cluster ID assigned in its deployment file.\nThe cluster ID is assigned in the <dr> section of the deployment file and must be set when the cluster\nstarts. It cannot be added or altered while the database is running. So if you are considering using this\nprocess for upgrading your production systems, be sure to add a <dr> tag to the deployment and assign a\nunique cluster ID when starting the database, even if you do not plan on using DR for normal operations.\nFor example, you would add the following element to the deployment file when starting your primary\ndatabase cluster to assign it the unique ID of 3.\n<dr id=\"3\">\n23Maintenance and Upgrades\nImportant\nAn important constraint to be aware of when using this process is that you must not make any\nschema changes during the upgrade process . This includes the period after the upgrade while you\nverify the application's proper operation on the new software version. If any changes are made to\nthe schema, you may not be able to readily fall back to the previous version.\n4.4.3.2. The Passive DR Upgrade Process\nThe procedure for upgrading the VoltDB software on a running database using DR is the following. In the\nexamples, we assume the existing database is running on a cluster with the nodes oldsvr1 and oldsvr2\nand the new cluster includes servers newsvr1 and newsvr2 . We will assign the clusters unique IDs 3\nand 4, respectively.\n1.Install the new VoltDB software on the secondary cluster.\nFollow the steps in the section \" Installing VoltDB \" in the Using VoltDB manual to install the latest\nVoltDB software.\n2.Start the second cluster as a replica of the current database cluster.\nOnce the new software is installed, create a new database on the secondary server using the voltdb init\n--force and voltdb start commands and including the necessary DR configuration to create a replica\nof the current database. For example, the configuration file on the new cluster might look like this:\n<dr id=\"4\" role=\"replica\">\n <connection source=\"oldsvr1,oldsvr2\"/>\n</dr>\nOnce the second cluster starts, apply the schema from the current database to the second cluster. Once\nthe schema match on the two databases, replication will begin.\n3.Wait for replication to stabilize.\nDuring replication, the original database will send a snapshot of the current content to the new replica,\nthen send binary logs of all subsequent transactions. You want to wait until the snapshot is finished and\nthe ongoing DR is processing normally before proceeding.\n•First monitor the DR statistics on the new cluster. The DR consumer state changes to \"RECEIVE\"\nonce the snapshot is complete. You can check this in the Monitor tab of the VoltDB Management\nCenter or from the command line by using sqlcmd to call the @Statistics system procedure, like so:\n$ sqlcmd --servers=newsvr1\n1> exec @Statistics drconsumer 0;\n•Once the new cluster reports the consumer state as \"RECEIVE\", you can monitor the rate of replica-\ntion on the existing database cluster using the DR producer statistics. 
Again, you can view these sta-\ntistics in the Monitor tab of the VoltDB Management Center or by calling @Statistics using sqlcmd:\n$ sqlcmd --servers=oldsvr1\n1> exec @Statistics drproducer 0;\nWhat you are looking for on the producer side is that the DR latency is low; ideally under a second.\nBecause the DR latency helps determine how long you will wait for the cluster to quiesce when you\npause it and, subsequently, how long the client applications will be stalled waiting for the new cluster\nto be promoted. You determine the latency by looking at the difference between the statistics for the\n24Maintenance and Upgrades\nlast queued timestamp and the last ACKed timestamp. The difference between these values gives you\nthe latency in microseconds. When the latency reaches a stable, low value you are ready to proceed.\n4.Pause the current database.\nThe next step is to pause the current database. You do this using the voltadmin pause --wait command:\n$ voltadmin pause --host=oldsvr1 --wait\nThe --wait flag tells voltadmin to wait until all DR and export queues are flushed to their downstream\ntargets before returning control to the shell prompt. This guarantees that all transactions have reached\nthe new replica cluster.\nIf DR or export are blocked for any reason — such as a network outage or the target server unavail-\nable — the voltadmin pause --wait command will continue to wait and periodically report on what\nqueues are still busy. If the queues do not progress, you will want to fix the underlying problem before\nproceeding to ensure you do not lose any data.\n5.Promote the new database.\nOnce the current database is fully paused, you can promote the new database, using the voltadmin\npromote command:\n$ voltadmin promote --host=newsvr1\nAt this point, your database is up and running on the new VoltDB software version.\n6.Redirect client applications to the new database.\nTo restore connectivity to your client applications, redirect them from the old cluster to the new cluster\nby creating connections to the new cluster servers newsvr1, newsvr2, and so on.\n7.Shutdown the original cluster.\nAt this point you can shutdown the old database cluster.\n8.Verify proper operation of the database and client applications.\nThe last step is to verify that your applications are operating properly against the new VoltDB software.\nUse the VoltDB Management Center to monitor database transactions and performance and verify\ntransactions are completing at the expected rate and volume.\nYour upgrade is now complete. If, at any point, you decide there is an issue with your application or your\ndatabase, it is possible to fall back to the previous version of VoltDB as long as you have not made any\nchanges to the underlying database schema. The next section explains how to fall back when necessary.\n4.4.3.3. Falling Back to a Previous Version\nIn extreme cases, you may find there is an issue with your application and the latest version of VoltDB. Of\ncourse, you normally would discover this during testing prior to a production upgrade. However, if that\nis not the case and an incompatibility or other conflict is discovered after the upgrade is completed, it is\npossible to fall back to a previous version of VoltDB. 
The basic process for falling back is to the following:\n•If any problems arise before Step #6 (redirecting the clients) is completed, simply shutdown the new\nreplica and resume the old database using the voltadmin resume command:\n$ voltadmin shutdown --host=newsvr1\n25Maintenance and Upgrades\n$ voltadmin resume --host=oldsvr1\n•If issues are found after Step #6, the fall back procedure is basically to repeat the upgrade procedure\ndescribed in Section 4.4.3.2, “The Passive DR Upgrade Process” except reversing the roles of the clus-\nters and replicating the data from the new cluster to the old cluster. That is:\n1.Update the configuration file on the new cluster to enable DR as a master, removing the <connection>\nelement:\n<dr id=\"4\" role=\"master\"/>\n2.Shutdown the original database and edit the configuration file to enable DR as a replica of the new\ncluster:\n<dr id=\"3\" role=\"replica\">\n <connection source=\"newsvr1,newsvr2\"/>\n</dr>\n3.Re-initialize and start the old cluster using the voltdb init --force and voltdb start commands.\n4.Follow steps 3 through 8 in Section 4.4.3.2, “The Passive DR Upgrade Process” reversing the roles\nof the new and old clusters.\n4.4.4. Performing an Online Upgrade Using Multiple XDCR\nClusters\nIt is also possible to upgrade the VoltDB software using cross data center replication (XDCR), by simply\nshutting down, upgrading, and then re-initalizing each cluster, one at a time. This process requires no\ndowntime, assuming your client applications are already designed to switch between the active clusters.\nUse of XDCR for upgrading the VoltDB software is easiest if you are already using XDCR because it\ndoes not require any additional hardware or reconfiguration. The following instructions assume that is the\ncase. Of course, you could also create a new cluster and establish XDCR replication between the old and\nnew clusters just for the purpose of upgrading VoltDB. The steps for the upgrade outlined in the following\nsections are the same. But first you must establish the cross data center replication between the two (or\nmore) clusters. See the chapter on Database Replication in the Using VoltDB manual for instructions on\ncompleting this initial step.\nOnce you have two clusters actively replicating data with XCDCR (let's call them clusters A and B), the\nsteps for upgrading the VoltDB software on the clusters is as follows:\n1.Pause and shutdown cluster A ( voltadmin pause --wait and shutdown ).\n2.Clear the DR state on cluster B ( voltadmin dr reset ).\n3.Update the VoltDB software on cluster A.\n4.Start a new database instance on A, making sure to use the old deployment file so the XDCR connections\nare configured properly ( voltdb init --force and voltdb start ).\n5.Load the schema on Cluster A so replication starts.\n6.Once the two clusters are synchronized, repeat steps 1 through 4 for cluster B.\nNote that since you are upgrading the software, you must create a new instance after the upgrade (step\n#3). When upgrading the software, you cannot recover the database using just the voltdb start command;\n26Maintenance and Upgrades\nyou must use voltdb init --force first to create a new instance and then reload the existing data from the\nrunning cluster B.\nAlso, be sure all data has been copied to the upgraded cluster A after step #4 and before proceeding to\nupgrade the second cluster. You can do this by checking the @Statistics system procedure selector DR-\nCONSUMER on cluster A. 
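For example, you can poll the consumer statistics from one of cluster A's nodes with sqlcmd; the server name below is illustrative:
$ sqlcmd --servers=clusterA-node1
1> exec @Statistics drconsumer 0;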
Once the DRCONSUMER statistics State column changes to \"RECEIVE\",\nyou know the two clusters are properly synchronized and you can proceed to step #5.\n4.4.4.1. Falling Back to a Previous Version\nIn extreme cases, you may decide after performing the upgrade that you do not want to use the latest\nversion of VoltDB. If this happens, it is possible to fall back to the previous version of VoltDB.\nTo \"downgrade\" from a new version back to the previous version, follow the steps outlined in Section 4.4.4,\n“Performing an Online Upgrade Using Multiple XDCR Clusters” except rather than upgrading to the new\nversion in Step #2, reinstall the older version of VoltDB. This process is valid as long as you have not\nmodified the schema or deployment to use any new or changed features introduced in the new version .\n4.4.5. Performing an Online Upgrade With Limited Hardware\nIt is possible to use XDCR and K-safety to perform an online upgrade, where the database remains acces-\nsible throughout the upgrade process, without requiring an additional cluster. As opposed to upgrading\ntwo separate XDCR clusters, this process enables an online upgrade with little or no extra hardware re-\nquirements.\nOn the other hand, this upgrade process is quite complex and during the upgrade, the K-safety of the\ndatabase is reduced. In other words, this process trades off the need for extra hardware against a more\ncomplicated upgrade process and increased risk to availability if any nodes crash unexpectedly during the\nupgrade.\nEssentially, the process for upgrading a cluster with limited additional hardware is to split the existing\ncluster hardware into two separate clusters, upgrading the software along the way. To make this possible,\nthere are a several prerequisites:\n•The cluster must be configured as a XDCR cluster .\nThe cluster configuration must contain a <dr> element that identifies the cluster ID and specifies the\nrole as \"xdcr\". For example:\n<dr id=\"1\" role=\"xdcr\"/>\nNote that the replication itself can be disabled (that is, listen=\"false\" ), but the cluster must have\nthe XDCR configuration in place before the online upgrade begins.\n•All of the tables in the database must be identified as DR tables.\nThis means that the schema must specify DR TABLE {table-name} for all of the tables, or else\ndata for the non-DR tables will be lost.\n•K-Safety must be enabled.\nThe K-safety value (set using the kfactor attribute of the <cluster> element in the configuration\nfile) must be set to one or higher.\nAdditionally, if the K-safety value is 1 and the cluster has an odd number of nodes, you will need one\nadditional server to complete the upgrade. (The additional server is no longer needed after the upgrade\nis completed.)\n27Maintenance and Upgrades\nTable 4.1, “Overview of the Online Upgrade Process” summarizes the overall process for online upgrade.\nBecause the process is complicated and requires shutting down and restarting database servers in a specific\norder, it is important to create an upgrade plan before you begin. The voltadmin plan_upgrade command\ngenerates such a plan, listing the detailed steps for each phase based on your current cluster configuration.\nBefore you attempt the online upgrade process, use voltadmin plan_upgrade to generate the plan and\nthoroughly review it to make sure you understand what is required.\nTable 4.1. 
Overview of the Online Upgrade Process\nPhase #1 Ensure the cluster meets all the necessary prerequisites (XDCR and K-safety configured,\nall tables DR-enabled).\nInstall the new VoltDB software version in the same location on all cluster nodes.\nIf not already set, be sure to enable DR for the cluster by setting the listen attribute to true\nin the cluster configuration.\nFor the purposes of demonstration, the following illustrations assume a four-node database\nconfigured with a cluster ID set to 1.\nPhase #2 Stop half the cluster nodes. Because of K-safety, it is possible for half the servers to be\nremoved from the cluster without stopping the database. The plan generated by voltadmin\nplan_upgrade will tell you exactly which nodes to stop.\nIf the cluster has an odd number of nodes and the K-safety is set to one, you can stop the\nsmaller \"half\" of the cluster and then add another server to create two equal halves.\nPhase #3 Create a new cluster on the stopped nodes, using the new VoltDB software version. Be\nsure to configure the cluster with a new cluster ID, XDCR enabled, and listing the original\ncluster as the DR connection source. You can also start the new cluster using the --missing\nargument so although it starts with only half the nodes, it expects the full complement of\nnodes for full k-safety. In the example, the start command would specify --count=2 and\n--missing=2.\nAfter loading the schema and procedures from the original cluster, XDCR will synchronize\nthe two clusters.\n28Maintenance and Upgrades\nPhase #4 Once the clusters are synchronized, redirect all client applications to the new cluster.\nStop the original cluster and reset DR on the new cluster using the voltadmin dr reset\ncommand.\nPhase #5 Reinitialize and rejoin the remaining nodes to the new cluster, using the new software\nversion. Because the new cluster was started with the --missing option, the remaining\nnodes can join the cluster and bring it back to full K-safety.\nIf you started with an odd number of servers and K=1 (and therefore added a server to\ncreate the second cluster) you can either rejoin all but one of the remaining nodes, or if you\nwant to remove the specific server you added, remove it before rejoining the last node.\nComplete At this point the cluster is fully upgraded.\n4.4.5.1. Falling Back to the Original Software Version\nIn extreme cases, you may decide during or after performing the upgrade that you do not want to use the\nlatest version of VoltDB. If this happens, it is possible to fall back to the previous version of VoltDB.\n29Maintenance and Upgrades\nDuring the phases identified as #1 and #2 in Table 4.1, “Overview of the Online Upgrade Process” , you\ncan always rejoin the stopped nodes to the original cluster to return to your original configuration.\nDuring or after you complete phase #3, you can return to your original configuration by:\n1.Stopping the new cluster.\n2.Resetting DR on the original cluster ( voltadmin dr reset ).\n3.Reinitializing and rejoining the stopped nodes to the original cluster using the original software version.\nOnce the original cluster is stopped as part of phase #4, the way to revert to the original software version is\nto repeat the entire procedure starting at phase #1 and reversing the direction — \"downgrading\" from the\nnew software version to the old software version. This process is valid as long as you have not modified\nthe schema or deployment to use any new or changed features introduced in the new version .\n4.4.6. 
Downgrading, or Falling Back to a Previous VoltDB\nVersion\nThe sections describing the upgrade process for passive DR , active XDCR , and XDCR with limited hard-\nware all explain how to fall back to the previous version of VoltDB in case of emergency. This section\nexplains how to fall back, or downgrade, when using the standard save and restore process described in\nSection 4.4.1, “Upgrading VoltDB Using Save and Restore” .\nThe following process works if you are reverting between two recent versions of VoltDB and you do\nnot use any new features between the upgrade and the downgrade. There are no guarantees an attempt\nto downgrade will succeed if the two software versions are more than one major version apart or if you\nutilize a new feature from the higher version software prior to downgrading.\nWith those caveats, the most reliable way to fall back to a previous VoltDB version is:\n1.Extract the database schema and stored procedure classes\n2.Pause the database, save a snapshot, and shutdown\n3.Re-install the previous version of VoltDB\n4.Initialize a new database root directory, using the extracted schema and classes\n5.Start the new database instance (in pause mode) using the older version of VoltDB\n6.Manually restore the data from the snapshot created in Step #2\n7.Resume normal operations\nThis process ensures that only the schema, stored procedures, and data are returned to the older version of\nthe software, and new software features will not impact your restore process. For example:\n$ voltdb get schema -D ~/db/new --output=/tmp/mydb.sql\n$ voltdb get classes -D ~/db/new --output=/tmp/mydb.jar\n$ voltadmin pause\n$ voltadmin save /tmp mydata\n$ voltadmin shutdown\n[ downgrade VoltDB software . . . ]\n30Maintenance and Upgrades\n$ voltdb init -f -D ~/db/old --schema=/tmp/mydb.sql --classes=/top/mydb.jar\n$ voltdb start -D ~/db/old --pause &\n$ voltadmin restore /tmp mydata\n$ voltadmin resume\n4.5. Updating the VoltDB Software License\nThe VoltDB Enterprise Edition is licensed software. Once the license expires, you will not be able to restart\nyour database cluster without a new license. So it is a good idea to update the license before it expires to\navoid any interruption to your service.\nYou can use the voltadmin show license command to see information about your current license, including\nthe expiration date. You can then use the voltadmin license command to replace the current license with\na new license file.\n$ voltadmin license\nINFO: The license is updated successfully.\n . . .\nWhen you issue the show license command, VoltDB verifies that the license file is valid and the terms of\nthe license are sufficient to support the current database configuration. Once verified, the license is applied\nto all nodes of the cluster and information about the new license is displayed.\nIf a node fails to get updated (for example, if a node fails during the license update), you will need to\nupdate that node independently when bringing it back into the cluster. You can do this by including the\nnew license file on the command line when you restart the node. For example:\n$ voltdb start -D ~/mydb --license license.xml\nInitializing VoltDB...\n31Chapter 5. Monitoring VoltDB Databases\nMonitoring is an important aspect of systems administration. This is true of both databases and the infra-\nstructure they run on. 
The goals for database monitoring include ensuring the database meets its expected\nperformance target as well as identifying and resolving any unexpected changes or infrastructure events\n(such as server failure or network outage) that can impact the database. This chapter explains:\n•How to monitor overall database health and performance using VoltDB\n•How to automatically pause the database when resource limits are exceeded\n•How to integrate VoltDB monitoring with other enterprise monitoring infrastructures\n5.1. Monitoring Overall Database Activity\nVoltDB provides several tools for monitoring overall database activity. The following sections describe\nthe three primary monitoring tools within VoltDB:\n•VoltDB Management Center\n•System Procedures\n•SNMP Alerts\n5.1.1. VoltDB Management Center\nhttp://voltserver:8080/\nThe VoltDB Management Center provides a graphical display of key aspects of database performance,\nincluding throughput, memory usage, query latency, and partition usage. To use the Management Center,\nconnect to one of the cluster nodes using a web browser, specifying the HTTP port (8080 by default)\nas shown in the example URL above. The Management Center shows graphs for cluster throughput and\nlatency as well as CPU and memory usage for the current server. You can also use the Management Center\nto examine the database schema and to issue ad hoc SQL queries.\n5.1.2. System Procedures\nVoltDB provides callable system procedures that return detailed information about the usage and perfor-\nmance of the database. In particular, the @Statistics system procedure provides a wide variety of informa-\ntion depending on the selector keyword you give it. Some selectors that are particularly useful for moni-\ntoring include the following:\n•MEMORY — Provides statistics about memory usage for each node in the cluster. Information includes\nthe resident set size (RSS) for the server process, the Java heap size, heap usage, available heap memory,\nand more. This selector provides the type of information displayed by the Process Memory Report,\nexcept that it returns information for all nodes of the cluster in a single call.\n•PROCEDUREPROFILE — Summarizes the performance of individual stored procedures. Informa-\ntion includes the minimum, maximum, and average execution time as well as the number of invocations,\nfailures, and so on. The information is summarized from across the cluster as whole. This selector re-\nturns information similar to the latency graph in VoltDB Management Center.\n•TABLE — Provides information about the size, in number of tuples and amount of memory consumed,\nfor each table in the database. The information is segmented by server and partition, so you can use\n32Monitoring VoltDB Databases\nit to report the total size of the database contents or to evaluate the relative distribution of data across\nthe servers in the cluster.\nWhen using the @Statistics system procedure with the PROCEDUREPROFILE selector for monitoring,\nit is a good idea to set the second parameter of the call to \"1\" so each call returns information since the\nlast call. In other words, statistics for the interval since the last call. 
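For example, an interval-based call from sqlcmd looks like this:
$ # A second parameter of 1 returns statistics since the previous call
$ sqlcmd
1> exec @Statistics procedureprofile 1;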
Otherwise, if the second parameter is\n\"0\", the procedure returns information since the database started and the aggregate results for minimum,\nmaximum, and average execution time will have little meaning.\nWhen calling @Statistics with the MEMORY or TABLE selectors, you can set the second parameter to\n\"0\" since the results are always a snapshot of the memory usage and table volume at the time of the call. For\nexample, the following Python script uses @Statistics with the MEMORY and PROCEDUREPROFILE\nselectors to check for memory usage and latency exceeding certain limits. Note that the call to @Statistics\nuses a second parameter of 1 for the PROCEDUREPROFILE call and a parameter value of 0 for the\nMEMORY call.\nimport sys\nfrom voltdbclient import *\nnano = 1000000000.0\nmemorytrigger = 4 * (1024*1024) # 4gbytes\navglatencytrigger = .01 * nano # 10 milliseconds\nmaxlatencytrigger = 2 * nano # 2 seconds\nserver = \"localhost\"\nif (len(sys.argv) > 1): server = sys.argv[1]\nclient = FastSerializer(server, 21212)\nstats = VoltProcedure( client, \"@Statistics\", \n [ FastSerializer.VOLTTYPE_STRING, \n FastSerializer.VOLTTYPE_INTEGER ] )\n# Check memory\nresponse = stats.call([ \"memory\", 0 ])\nfor t in response.tables:\n for row in t.tuples:\n print 'RSS for node ' + row[2] + \"=\" + str(row[3])\n if (row[3] > memorytrigger):\n print \"WARNING: memory usage exceeds limit.\"\n# Check latency\nresponse = stats.call([ \"procedureprofile\", 1 ])\navglatency = 0\nmaxlatency = 0\nfor t in response.tables:\n for row in t.tuples:\n if (avglatency < row[4]): avglatency = row[4]\n if (maxlatency < row[6]): maxlatency = row[6]\nprint 'Average latency= ' + str(avglatency) \nprint 'Maximum latency= ' + str(maxlatency)\nif (avglatency > avglatencytrigger):\n print \"WARNING: Average latency exceeds limit.\"\nif (maxlatency > maxlatencytrigger):\n print \"WARNING: Maximum latency exceeds limit.\"\n33Monitoring VoltDB Databases\nclient.close()\nThe @Statistics system procedure is the the source for many of the monitoring options discussed in this\nchapter. Two other system procedures, @SystemCatalog and @SystemInformation, provide general in-\nformation about the database schema and cluster configuration respectively and can be used in monitoring\nas well.\nThe system procedures are useful for monitoring because they let you customize your reporting to whatever\nlevel of detail you wish. The other advantage is that you can automate the monitoring through scripts\nor client applications that call the system procedures. The downside, of course, is that you must design\nand create such scripts yourself. As an alternative for custom monitoring, you can consider integrating\nVoltDB with existing third party monitoring applications, as described in Section 5.3, “Integrating VoltDB\nwith Other Monitoring Systems” . You can also set the database to automatically pause if certain system\nresources run low, as described in the next section.\n5.1.3. SNMP Alerts\nIn addition to monitoring database activity on a \"as needed\" basis, you can enable VoltDB to proactively\nsend Simple Network Management Protocol (SNMP) alerts whenever important events occur within the\ncluster. SNMP is a standard for how SNMP agents send messages (known as \"traps\") to management\nservers or \"management stations\".\nSNMP is a lightweight protocol. SNMP traps are sent as UDP broadcast messages in a standard format that\nis readable by SNMP management stations. 
Since they are broadcast messages, the sending agent does not\nwait for a confirmation or response. And it does not matter, to the sender, whether there is a management\nserver listening to receive the message or not. You can use any SNMP-compliant management server to\nreceive and take action based on the traps.\nWhen you enable SNMP in the deployment file, VoltDB operates as an SNMP agent sending traps when-\never management changes occur in the cluster. You enable SNMP with the <snmp> element in the de-\nployment file. You configure how and where VoltDB sends SNMP traps using one or more of the attributes\nlisted in Table 5.1, “SNMP Configuration Attributes” .\nTable 5.1. SNMP Configuration Attributes\nAttribute Default Value Description\ntarget (none) Specifies the IP address or host name of the SNMP manage-\nment station where traps will be sent in the form {IP-or-host-\nname}[:port-number] . If you do not specify a port number, the\ndefault is 162. The target attribute is required.\ncommunity public Specifies the name of the \"community\" the VoltDB agent be-\nlongs to.\nusername (none) Specifies the username for SNMP V3 authentication. If you do\nnot specify a username, VoltDB sends traps in SNMP V2c for-\nmat. If you specify a username, VoltDB uses SNMP V3 and the\nfollowing attributes let you configure the authentication mech-\nanisms used.\nauthprotocol SHA\n(SNMP V3 only)Specifies the authentication protocol for SNMP V3. Allowable\noptions are:\n•SHA\n•MD5\n•NoAuth\n34Monitoring VoltDB Databases\nAttribute Default Value Description\nauthkey voltdbauthkey\n(SNMP V3 only)Specifies the authentication key for SNMP V3 when the pro-\ntocol is other than NoAuth.\nprivacyprotocol AES\n(SNMP V3 only)Specifies the privacy protocol for SNMP V3. Allowable op-\ntions are:\n•AES\n•DES\n•NoPriv\n•3DES*\n•AES192*\n•AES256*\nprivacykey voltdbprivacykey\n(SNMP V3 only)Specifies the privacy key for SNMP V3 when the privacy pro-\ntocol is other than NoPriv.\n*Use of 3DES, AES192, or AES256 privacy requires the Java Cryptography Extension (JCE) be installed on the system. The JCE\nis specific to the version of Java you are running. See the the Java web site for details.\nSNMP is enabled by default when you include the <snmp> element in the deployment file. Alternately,\nyou can explicitly enable and disable SNMP using the enabled={true|false} attribute to the ele-\nment. For example, the following deployment file entry enables SNMP alerts, sending traps to mgtsvr.my-\ncompany.com using SNMP V3 with the username \"voltdb\":\n<snmp enabled=\"true\"\n target=\"mgtsvr.mycompany.com\"\n username=\"voltdb\"\n/>\nOnce SNMP is enabled, VoltDB sends alerts for the events listed in Table 5.2, “SNMP Events” .\nTable 5.2. SNMP Events\nName Severity Description\ncrash FATAL When a server or cluster crashes.\nclusterPaused INFO When the cluster pauses and enters admin mode.\nclusterResume INFO When the cluster exits admin mode and resumes normal oper-\nation.\nhostDown ERROR When a server shuts down or is recognized as having left the\ncluster.\nhostUp INFO When a server joins the cluster.\nstreamBlocked WARN When an export stream is blocked due to data missing from the\nexport queue and all cluster nodes are running.\nstatisticsTrigger WARN When certain operational states are compromised. Specifical-\nly:\n•When a K-safe cluster loses one or more nodes\n•When using database replication, the connection to the re-\nmote cluster is broken\nresourceTrigger WARN When certain resource limits are exceeded. 
Specifically\n•Memory usage\n•Disk usage\n35Monitoring VoltDB Databases\nName Severity Description\nSee Section 5.2, “Setting the Database to Read-Only Mode\nWhen System Resources Run Low” for more information\nabout configuring SNMP alerts for resources.\nresourceClear INFO When resource limits return to levels below the trigger value.\nFor the latest details about each event trap, see the VoltDB SNMP Management Information Base (MIB),\nwhich is installed with the VoltDB server software in the file /tools/snmp/VOLTDB-MIB in the in-\nstallation directory.\n5.2. Setting the Database to Read-Only Mode\nWhen System Resources Run Low\nVoltDB, like all software, uses system resources to perform its tasks. First and foremost, as an in-memory\ndatabase, VoltDB relies on having sufficient memory available for storing the data and processing queries.\nHowever, it also makes use of disk resources for snapshots and caching data for other features, such as\nexport and database replication.\nIf system resources run low, one or more nodes may fail impacting availability, or worse, causing a service\ninterruption. The best solution for this situation is to plan ahead and provision sufficient resources for your\nneeds. The goal of the VoltDB Planning Guide is to help you do this.\nHowever, even with the best planning, unexpected conditions can result in resource shortages or overuse.\nIn these situations, you want the database to protect itself against all-out failure.\nYou can do this by setting resource limits in the VoltDB deployment file. System resource limits are set\nwithin the <systemsettings> and <resourcemonitor> elements. For example:\n<systemsettings>\n <resourcemonitor frequency=\"30\">\n <memorylimit size=\"70%\" alert=\"60%\"/>\n <disklimit>\n <feature name=\"snapshots\" size=\"75%\" alert=\"60%\"/>\n <feature name=\"droverflow\" size=\"60%\"/>\n </disklimit>\n </resourcemonitor>\n</systemsettings>\nThe deployment file lets you set limits on two types of system resources:\n•Memory Usage\n•Disk Usage\nFor each resource type you can set the maximum size and, optionally, the level at which an alert is sent if\nSNMP is enabled. In all cases, the allowable amount of the resource to be used can be specified as either a\nvalue representing a number of gigabytes or a percentage of the total available. If the limit set by the alert\nattribute is exceeded and SNMP is enabled, an SNMP alert is sent. If the limit set by the size attribute is\nexceeded, the database will be \"paused\", putting it into read-only mode to avoid using any further resources\nor possibly failing when the resource becomes exhausted. When the database pauses, an error message is\nwritten to the log file (and the console) reporting the event. This allows you as the system administrator\nto correct the situation by reducing memory usage or deleting unnecessary files. Once sufficient resources\nare freed up, you can return the database to normal operation using the voltadmin resume command.\n36Monitoring VoltDB Databases\nThe resource limits are checked every 60 seconds by default. However, you can adjust how frequently\nthey are checked — to accommodate the relative stability or volatility of your resource usage — using\nthe frequency attribute of the <resourcemonitor> tag. 
In the preceding example, the frequency has been reduced to 30 seconds.

Of course, the ideal is to catch excessive resource use before the database is forced into read-only mode. Using SNMP and system monitors such as Nagios and New Relic to generate alerts at limits lower than those set for the VoltDB resource monitor is strongly recommended. And you can integrate other VoltDB monitoring with these monitoring utilities as described in Section 5.3, "Integrating VoltDB with Other Monitoring Systems". But the resource monitor size limit is provided as a last resort to ensure the database does not completely exhaust resources and crash before the issue can be addressed.

The following sections describe how to set limits for the individual resource types.

5.2.1. Monitoring Memory Usage

You specify a memory limit in the deployment file using the <memorylimit> element and specifying the maximum allowable resident set size (RSS) for the VoltDB process in the size attribute. You can express the limit as a fixed number of gigabytes or as a percentage of total available memory. Use a percent sign to specify a percentage. For example, the following setting will cause the VoltDB database to go into read-only mode if the RSS size exceeds 10 gigabytes on any of the cluster nodes.

<systemsettings>
   <resourcemonitor>
      <memorylimit size="10"/>
   </resourcemonitor>
</systemsettings>

Whereas the following example sets the limit at 70% of total available memory.

<systemsettings>
   <resourcemonitor>
      <memorylimit size="70%"/>
   </resourcemonitor>
</systemsettings>

You can also set a trigger value for SNMP alerts — assuming SNMP is enabled — using the alert attribute. For instance, the following example sets the SNMP trigger value to 60%.

<systemsettings>
   <resourcemonitor>
      <memorylimit size="70%" alert="60%" />
   </resourcemonitor>
</systemsettings>

If you do not specify a limit in the deployment file, VoltDB automatically sets a maximum size limit of 80% and an SNMP alert level of 70% by default.

5.2.2. Monitoring Disk Usage

You specify disk usage limits in the deployment file using the <disklimit> element. Within the <disklimit> element, you use the <feature> element to identify the limit for a device based on the VoltDB feature that utilizes it. For example, to set a limit on the amount of space used on the device where automatic snapshots are stored, you identify the feature as "snapshots" and specify the limit as a number of gigabytes or as a percentage of total space on the disk. The following deployment file entry sets the disk limit for snapshots at 200 gigabytes and the limit for command logs at 70% of the total available space:

<systemsettings>
   <resourcemonitor>
      <disklimit>
         <feature name="snapshots" size="200"/>
         <feature name="commandlog" size="70%"/>
      </disklimit>
   </resourcemonitor>
</systemsettings>

You can also set a trigger value for SNMP alerts — assuming SNMP is enabled — using the alert attribute. For instance, the following example sets the SNMP trigger value to 150 gigabytes for the snapshots disk and 60% for the commandlog disk.

<systemsettings>
   <resourcemonitor>
      <disklimit>
         <feature name="snapshots" size="200" alert="150" />
         <feature name="commandlog" size="70%" alert="60%" />
      </disklimit>
   </resourcemonitor>
</systemsettings>

Note that you specify the device based on the feature that uses it.
However, the limit applies to all data on that device, not just the space used by that feature. If you specify limits for two features that use the same device, the lower of the two limits will be applied. So, in the previous example, if snapshots and command logs both use a device with 250 gigabytes of total space, the database will be set to read-only mode if the total amount of used space exceeds the command logs limit of 70%, or 175 gigabytes.

It is also important to note that there are no default resource limits or alerts for disks. If you do not explicitly specify a disk limit, there is no protection against running out of disk space. Similarly, unless you explicitly set an SNMP alert level, no alerts will be sent for the associated device.

You can identify disk limits and alerts for any of the following VoltDB features, using the specified keywords:

•Automated snapshots (snapshots)
•Command logs (commandlog)
•Command log snapshots (commandlogsnapshot)
•Database replication overflow (droverflow)
•Export overflow (exportoverflow)

5.3. Integrating VoltDB with Other Monitoring Systems

In addition to the tools and system procedures that VoltDB provides for monitoring the health of your database, you can also integrate this data into third-party monitoring solutions so they become part of your overall enterprise monitoring architecture. VoltDB supports integrating VoltDB statistics and status with the following monitoring systems:

•Prometheus
•Nagios
•New Relic

5.3.1. Integrating with Prometheus

If you use Prometheus to monitor your systems and services, you can include VoltDB in your monitoring infrastructure. VoltDB Enterprise Edition provides a Prometheus agent that runs as a separate process collecting statistics from the cluster, which are then available to the Prometheus server through port 1234 by default. To use the agent:

1. Change to the folder /tools/monitoring/prometheus within the directory where you installed VoltDB. For example:

   $ cd opt/voltdb/tools/monitoring/prometheus

2. Execute the script voltdb-prometheus:

   $ bash voltdb-prometheus

See the comment at the beginning of the script for command arguments you can use to modify the agent, including the ability to specify the IP address, port, username, and password for the VoltDB server to monitor.

5.3.2. Integrating with Nagios

If you use Nagios to monitor your systems and services, you can include VoltDB in your monitoring infrastructure. VoltDB Enterprise Edition provides Nagios plugins that let you monitor four key aspects of VoltDB. The plugins are included in a subfolder of the tools directory where VoltDB is installed. Table 5.3, "Nagios Plugins" lists each plugin and what it monitors.

Table 5.3. Nagios Plugins

Plugin Monitors Scope Description
check_voltdb_ports Availability Server Reports whether the specified server is reachable or not.
check_voltdb_memory Memory usage Server Reports on the amount of memory in use by VoltDB for an individual node. You can specify the severity criteria as a percentage of total memory.
check_voltdb_cluster K-safety Cluster-wide Reports on whether a K-safe cluster is complete or not. That is, whether the cluster has the full complement of nodes or if any have failed and not rejoined yet.
check_voltdb_replication Database replication Cluster-wide Reports the status of database replication.
Connect the plugin to one or more nodes on the master database.

Note that the httpd and JSON options must be enabled in the deployment file for the VoltDB database for the Nagios plugins to query the database status.

5.3.3. Integrating with New Relic

If you use New Relic as your monitoring tool, there is a VoltDB plugin that adds monitoring of VoltDB databases to your New Relic dashboard. To use the New Relic plugin, you must:

•Define the appropriate configuration for your server.
•Start the voltdb-newrelic process that gathers and sends data to New Relic.

You define the configuration by editing and renaming the template files that can be found in the /tools/monitoring/newrelic/config folder where VoltDB is installed. The configuration files let you specify your New Relic license and which databases are monitored. A README file in the /newrelic folder provides details on what changes to make to the configuration files.

You start the monitoring process by running the script voltdb-newrelic that also can be found in the /newrelic folder. The script must be running for New Relic to monitor your databases.

Chapter 6. Logging and Analyzing Activity in a VoltDB Database

VoltDB uses Log4J, an open source logging service available from the Apache Software Foundation, to provide access to information about database events. By default, when using the VoltDB shell commands, the console display is limited to warnings, errors, and messages concerning the status of the current process. A more complete listing of messages (of severity INFO and above) is written to log files in the subfolder /log, relative to the database root directory.

The advantages of using Log4J are:

•Logging is compiled into the code and can be enabled and configured at run-time.
•Log4J provides flexibility in configuring what events are logged, where, and the format of the output.
•By using an open source logging service with standardized output, there are a number of different applications, such as Chainsaw, available for filtering and presenting the results.

Logging is important because it can help you understand the performance characteristics of your application, check for abnormal events, and ensure that the application is working as expected.

Of course, any additional processing and I/O will have an incremental impact on the overall database performance. To counteract any negative impact, Log4J gives you the ability to customize the logging to support only those events and servers you are interested in. In addition, when logging is not enabled, there is no impact to VoltDB performance. With VoltDB, you can even change the logging profile on the fly without having to shut down or restart the database.

The following sections describe how to enable and customize logging of VoltDB using Log4J. This chapter is not intended as a tutorial or complete documentation of the Log4J logging service. For general information about Log4J, see the Log4J web site at http://wiki.apache.org/logging-log4j/.

6.1. Introduction to Logging

Logging is the process of writing information about application events to a log file, console, or other destination. Log4J uses XML files to define the configuration of logging, including three key attributes:

•Where events are logged. The destinations are referred to as appenders in Log4J (because events are appended to the destinations in sequential order).
•What events are logged.
VoltDB defines named classes of events (referred to as loggers) that can be enabled as well as the severity of the events to report.
•How the logging messages are formatted (known as the layout).

6.2. Creating the Logging Configuration File

VoltDB ships with a default Log4J configuration file, voltdb/log4j.xml, in the installation directory. The sample applications and the VoltDB shell commands use this file to configure logging and it is recommended for new application development. This default Log4J file lists all of the VoltDB-specific logging categories and can be used as a template for any modifications you wish to make. Or you can create a new file from scratch.

The following is an example of a Log4J configuration file:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
<appender name="Async" class="org.apache.log4j.AsyncAppender">
   <param name="Blocking" value="true" />
   <appender-ref ref="Console" />
   <appender-ref ref="File" />
</appender>
<appender name="Console" class="org.apache.log4j.ConsoleAppender">
   <param name="Target" value="System.out" />
   <layout class="org.apache.log4j.TTCCLayout" />
</appender>
<appender name="File" class="org.apache.log4j.FileAppender">
   <param name="File" value="/tmp/voltdb.log" />
   <param name="Append" value="true" />
   <layout class="org.apache.log4j.TTCCLayout" />
</appender>
<logger name="AUTH">
<!-- Print all VoltDB authentication messages -->
   <level value="trace" />
</logger>
<root>
   <priority value="debug" />
   <appender-ref ref="Async" />
</root>
</log4j:configuration>

The preceding configuration file defines three destinations, or appenders, called Async, Console, and File. The appenders define the type of output (whether to the console, to a file, or somewhere else), the location (such as the file name), as well as the layout of the messages sent to the appender. See the Log4J documentation for more information about layout.

Note that the appender Async is a superset of Console and File. So any messages sent to Async are routed to both Console and File. This is important because, for logging of VoltDB, you should always use an asynchronous appender as the primary target so that processing of the logging messages does not block other execution threads.

More importantly, you should not use any appenders that are susceptible to extended delays, blockages, or slow throughput. This is particularly true for network-based appenders such as SocketAppender and third-party log infrastructures including logstash and JMS. If there is any prolonged delay in writing to the appenders, messages can end up being held in memory, causing performance degradation and, ultimately, generating out of memory errors or forcing the database into read-only mode.

The configuration file also defines a root class. The root class is the default logger and all loggers inherit the root definition. So, in this case, any messages of severity "debug" or higher are sent to the Async appender.

Note

This example is for demonstration purposes only. Normally, do not set the severity to either "debug" or "trace" for production systems unless instructed to by VoltDB Support.
Trace and debug logging generate a significant number of messages that can negatively impact performance. They contain internal information for debugging purposes and provide no additional value otherwise.

Finally, the configuration file defines a logger specifically for VoltDB authentication messages. The logger identifies the class of messages to log (in this case "AUTH"), as well as the severity ("trace"). VoltDB defines several different classes of messages you can log. Table 6.1, "VoltDB Components for Logging" lists the loggers you can invoke.

Table 6.1. VoltDB Components for Logging

Logger Description
ADHOC Execution of ad hoc queries
AUTH Authentication and authorization of clients
COMPILER Interpretation of SQL in ad hoc queries
CONSOLE Informational messages intended for display on the console
DR Database replication sending data
DRAGENT Database replication receiving data
EXPORT Exporting data
GC Java garbage collection
HOST Host specific events
IMPORT Importing data
ELASTIC Elastic addition of nodes to the cluster
LOADER Bulk loading of data (including as part of import)
NETWORK Network events related to the database cluster
REJOIN Node recovery and rejoin
SNAPSHOT Snapshot activity
SQL Execution of SQL statements
TM Transaction management
TOPICS Streaming data in topics

6.3. Changing the Timezone of Log Messages

By default, all VoltDB logging is reported in GMT (Greenwich Mean Time). If you want the logging to be reported using a different timezone, you can use extensions to the Log4J service to achieve this.

To change the timezone of log messages:

1. Download the extras kit from the Apache Extras for Apache Log4J website, http://logging.apache.org/log4j/extras/.
2. Unpack the kit and place the included JAR file in the /lib/extension folder of the VoltDB installation directory.
3. Update your Log4J configuration file to enable the Log4J extras and specify the desired timezone for logging for each appender.

You enable the Log4J extras by specifying EnhancedPatternLayout as the layout class for the appenders you wish to change. You then identify the desired timezone as part of the layout pattern. For example, the following XML fragment changes the timezone of messages written to the file appender to GMT minus four hours:

<appender name="file" class="org.apache.log4j.DailyMaxRollingFileAppender">
   <param name="file" value="log/volt.log"/>
   <param name="DatePattern" value="'.'yyyy-MM-dd" />
   <layout class="org.apache.log4j.EnhancedPatternLayout">
      <param name="ConversionPattern"
             value="%d{ISO8601}{GMT-4} %-5p [%t] %c: %m%n"/>
   </layout>
</appender>

You can use any valid ISO-8601 timezone specification, including named timezones, such as EST.

6.4. Managing VoltDB Log Files

VoltDB uses a rolling log appender that "rolls" the files, periodically saving the old log files and creating a new file for subsequent messages. By default, the log files are rolled daily.

VoltDB also automatically "prunes" older log files to help conserve disk space on the server. The appender specifies the maximum number of files to keep, keeping 30 by default.

You can customize your log configuration to specify a different rolling period and/or a different number of files to keep. For example, the following Log4J configuration rolls the log files twice a day and keeps 14 files, or a week's worth of logs:

<!-- file appender captures all loggers messages.
-->
<appender name="file" class="org.apache.log4j.DailyMaxRollingFileAppender">
   <param name="file" value="log/volt.log"/>
   <param name="MaxBackupIndex" value="14"/>
   <param name="DatePattern" value="'.'yyyy-MM-dd-a" />
   <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d %-5p [%t] %c: %m%n"/>
   </layout>
</appender>

6.5. Enabling Your Custom Log Configuration When Starting VoltDB

Once you create your Log4J configuration file, you specify which configuration file to use by defining the environment variable LOG4J_CONFIG_PATH before starting the VoltDB database. For example:

$ export LOG4J_CONFIG_PATH="$HOME/MyLog4jConfig.xml"
$ voltdb start -H svr1,svr2

6.6. Changing the Configuration on the Fly

Once the database has started, you can still start or reconfigure the logging without having to stop and restart the database. By calling the system procedure @UpdateLogging you can pass the configuration XML to the servers as a text string. For any appenders defined in the new updated configuration, the existing appender is removed and the new configuration applied. Other existing appenders (those not mentioned in the updated configuration XML) remain unchanged.

Chapter 7. What to Do When Problems Arise

As with any high performance application, events related to the database process, the operating system, and the network environment can impact how well or poorly VoltDB performs. When faced with performance issues, or outright failures, the most important task is identifying and resolving the root cause. VoltDB and the server produce a number of log files and other artifacts that can help you in the diagnosis. This chapter explains:

•Where to look for log files and other information about the VoltDB server process
•What to do when recovery fails
•How to collect the log files and other system information when reporting a problem to VoltDB

7.1. Where to Look for Answers

The first place to look when an unrecognized problem occurs with your VoltDB database is the console where the database process was started. VoltDB echoes key messages and errors to the console. For example, if a server becomes unreachable, the other servers in the cluster will report an error indicating which node has failed. Assuming the cluster is K-safe, the remaining nodes will then re-establish a quorum and continue, logging this event to the console as well.

However, not all messages are echoed on the console. (You can change which messages are echoed to the console and which are logged by modifying the Log4J configuration file. See the chapter on logging in the Using VoltDB manual for details.) A more complete record of errors, warnings, and informational messages is written to a log file, log/volt.log, inside the voltdbroot directory. So, for example, if you start the database using the command voltdb start --dir=~/db, the log file is ~/db/voltdbroot/log/volt.log. The volt.log file can be extremely helpful for identifying unexpected but non-fatal events that occurred earlier and may identify the cause of the current issue.

If VoltDB encounters a fatal error and exits, shutting down the database process, it also attempts to write out a crash file in the current working directory. The crash file name has the prefix "voltdb_crash" followed by a timestamp identifying when the file is created.
Again, this file can be useful in diagnosing exactly what caused the crash, since it includes the last error message, a brief profile of the server, and a dump of the Java threads running in the server process before it crashed.

To summarize, when looking for information to help analyze system problems, three places to look are:

1. The console where the server process was started.
2. The log file in log/volt.log.
3. The crash file named voltdb_crash{timestamp}.txt in the server process's working directory.

7.2. Handling Errors When Restoring a Database

After determining what caused the problem, the next step is often to get the database up and running again as soon as possible. When using snapshots or command logs, this is done using the voltdb start command described in Section 3.6, "Restarting the Database". However, in unusual cases, the restart itself may fail.

There are several situations where an attempt to recover a database — either from a snapshot or command logs — may fail. For example, restoring data from a snapshot to a schema where a unique index has been added can result in a constraint violation. In this case, the restore operation continues but any records that caused a constraint violation are saved to a CSV file.

Or when recovering command logs, the log may contain a transaction that originally succeeded but fails and raises an exception during playback. In this situation, VoltDB issues a fatal error and stops the database to avoid corrupting the contents.

Although protecting you from an incomplete recovery is the appropriate default behavior, there may be cases where you want to recover as much data as possible, with full knowledge that the resulting data set does not match the original. VoltDB provides two processes for performing partial recoveries in case of failure:

•Logging constraint violations during snapshot restore
•Performing command log recovery in safe mode

The following sections describe these procedures.

Warning

It is critically important to recognize that the techniques described in this section do not produce a complete copy of the original database or resolve the underlying problem that caused the initial recovery to fail. These techniques should never be attempted without careful consideration and full knowledge and acceptance of the risks associated with partial data recovery.

7.2.1. Logging Constraint Violations

There are several situations that can cause a snapshot restore to fail because of constraint violations. Rather than have the operation fail as a whole, VoltDB continues with the restore process and logs the constraint violations to a file instead. This way you can review the tuples that were excluded and decide whether to ignore or replace their content manually after the restore completes.

By default, the constraint violations are logged to one or more files (one per table) in the same directory as the snapshot files. In a cluster, each node logs the violations that occur on that node. If you know there are going to be constraint violations and want to save the logged constraints to a different location, you can use a special JSON form of the @SnapshotRestore system procedure. You specify the path of the log files in a JSON attribute, duplicatesPath.
For example, the following commands perform a restore\nof snapshot files in the directory /var/voltdb/snapshots/ with the unique identifier myDB. The\nrestore operation logs constraint violations to the directory /var/voltdb/logs .\n$ sqlcmd\n1> exec @SnapshotRestore '{ \"path\":\"/var/voltdb/snapshots/\", \n \"nonce\":\"myDB\", \n \"duplicatesPath\":\"/var/voltdb/logs/\" }';\n2> exit\nConstraint violations are logged as needed, one file per table, to CSV files with the name {table} -\nduplicates-{timestamp} .csv.\n7.2.2. Safe Mode Recovery\nOn rare occasions, recovering a database from command logs may fail. This can happen, for example, if\na stored procedure introduces non-deterministic content. If a recovery fails, the specific error is known.\n47What to Do When Problems Arise\nHowever, there is no way for VoltDB to know the root cause or how to continue. Therefore, the recovery\nfails and the database stops.\nWhen this happens, VoltDB logs the last successful transaction before the recovery failed. You can then\nask VoltDB to restart up to but not including the failing transaction by performing a recovery in safe mode .\nYou request safe mode by adding the --safemode switch to the voltdb start command, like so:\n$ voltdb start --safemode --dir=~/mydb\nWhen VoltDB recovers from command logs in safe mode it enables two distinct behaviors:\n•Snapshots are restored, logging any constraint violations\n•Command logs are replayed up to the last valid transaction\nThis means that if you are recovering using an automated snapshot (rather than command logs), you can\nrecover some data even if there are constraint violations during the snapshot restore. Also, when recovering\nfrom command logs, VoltDB will ignore constraint violations in the command log snapshot and replay all\ntransactions that succeeded in the previous attempt.\nIt is important to note that to successfully use safe mode with command logs, you must perform a regular\nrecovery operation first — and have it fail — so that VoltDB can determine the last valid transaction. Also,\nif the snapshot and the command logs contain both constraint violations and failed transactions, you may\nneed to run recovery in safe mode twice to recover as much data as possible. Once to complete restoration\nof the snapshot, then a second time to recover the command logs up to a point before the failed transaction.\n7.3. Collecting the Log Files\nVoltDB includes a utility that collects all of the pertinent logs for a given server. The log collector retrieves\nthe necessary system and process files from the server, creates a compressed archive file and, optionally,\nuploads it via SFTP to a support site. For customers requesting support from VoltDB, your support contact\nwill often provide instructions on how and when to use the log collector and where to submit the files.\nNote that the database does not need to be running to use the log collector. It can find and collect the log\nfiles based solely on the location of the VoltDB root directory where the database was run.\nTo collect the log files, use the voltdb collect command with the same directory specification you would\nuse to initialize or start the database:\n$ voltdb collect --prefix=mylogs -D /home/db\nWhen you run the command you must specify the location of the root directory for the database with the\n--dir or -D flag. Otherwise, the default is the current working directory. 
The archive file that the collect command generates is also created in your current working directory unless you use the --output flag to specify an alternate location and filename.

The collect command has optional arguments that let you control what data is collected, the name of the resulting archive file, as well as whether to upload the file to an FTP server. In the preceding example the --prefix flag specifies the prefix for the archive file name. If you are submitting the log files to an FTP server via SFTP, you can use the --upload, --username, and --password flags to identify the target server and account. For example:

$ voltdb collect --dir=/home/db \
     --prefix=mylogs \
     --upload=ftp.mycompany.com \
     --username=babbage \
     --password=charles

Note that the voltdb collect command collects log files for the current system only. To collect logs for all servers in a cluster, you will need to issue the voltdb collect command locally on each server separately.

See the voltdb collect documentation in the Using VoltDB manual for details.

Appendix A. Server Configuration Options

There are a number of system, process, and application options that can impact the performance or behavior of your VoltDB database. You control these options when initializing and/or starting VoltDB. The configuration options fall into five main categories:

•Server configuration
•Process configuration
•Database configuration
•Path configuration
•Network ports used by the database cluster

This appendix describes each of the configuration options, how to set them, and their impact on the resulting VoltDB database and application environment.

A.1. Server Configuration Options

VoltDB provides mechanisms for setting a number of options. However, it also relies on the base operating system and network infrastructure for many of its core functions. There are operating system configuration options that you can adjust to maximize your performance and reliability, including:

•Network configuration
•Time configuration

A.1.1. Network Configuration (DNS)

VoltDB creates a network mesh among the database cluster nodes. To do that, all nodes must be able to resolve the IP address and hostnames of the other server nodes. Make sure all nodes of the cluster have valid DNS entries or entries in the local hosts files.

For servers that have two or more network interfaces — and consequently two or more IP addresses — it is possible to assign different functions to each interface. VoltDB defines two sets of ports:

•External ports, including the client and admin ports. These are the ports used by external applications to connect to and communicate with the database.
•Internal ports, including all other ports. These are the ports used by the database nodes to communicate among themselves. These include the internal port, the zookeeper port, and so on. (See Section A.5, "Network Ports" for a complete listing of ports.)

You can specify which network interface the server expects to use for each set of ports by specifying the internal and external interface when starting the database.
For example:\n$ voltdb start --dir=~/mydb \\\n --externalinterface=10.11.169.10 \\\n --internalinterface=10.12.171.14\n50Server Configuration Options\nNote that the default setting for the internal and external interface can be overridden for a specific port by\nincluding the interface and a colon before the port number when specifying a port on the command line.\nSee Section A.5, “Network Ports” for details on setting specific ports.\nA.1.2. Time Configuration\nKeeping VoltDB cluster nodes in close synchronization is important for the ongoing performance of your\ndatabase. At a minimum, use of a time service such as NTP or chrony to synchronize time across the cluster\nis recommended. If the time difference between nodes is too large (greater than 200 milliseconds) VoltDB\nrefuses to start. It is also important to avoid having nodes adjust time backwards, or VoltDB will pause\nwhile it waits for time to \"catch up\" to its previous setting.\nA.2. Process Configuration Options\nIn addition to system settings, there are configuration options pertaining to the VoltDB server process\nitself that can impact performance. Runtime configuration options are set as command line options when\nstarting the VoltDB server process.\nThe key process configuration for VoltDB is the Java maximum heap size. It is also possible to pass other\narguments to the Java Virtual Machine directly.\nA.2.1. Maximum Heap Size\nThe heap size is a parameter associated with the Java runtime environment. Certain portions of the VoltDB\nserver software use the Java heap. In particular, the part of the server that receives and responds to stored\nprocedure requests uses the Java heap.\nDepending upon how many transactions your application executes a second, you may need additional heap\nspace. The higher the throughput, the larger the maximum heap needed to avoid running out of memory.\nIn general, a maximum heap size of two gigabytes (2048) is recommended. For production use, a more\naccurate measurement of the needed heap size can be calculated from the size of the schema (number of\ntables), number of sites per host, and what durability and availability features are in use. See the VoltDB\nPlanning Guide for details.\nIt is important to remember that the heap size is not directly related to data storage capacity. Increasing the\nmaximum heap size does not provide additional data storage space. In fact, quite the opposite. Needlessly\nincreasing the maximum heap size reduces the amount of memory available for storage.\nTo set the maximum heap size when starting VoltDB, define the environment variable VOLTDB_HEAP-\nMAX as an integer value (in megabytes) before issuing the voltdb start command. For example, the fol-\nlowing commands start VoltDB with a 3 gigabyte heap size (the default is 2 gigabytes):\n$ export VOLTDB_HEAPMAX=\"3072\"\n$ voltdb start --dir=~/mydb -H serverA\nA.2.2. Other Java Runtime Options (VOLTDB_OPTS)\nVoltDB sets the Java options — such as heap size and classpath — that directly impact VoltDB. There are\na number of other configuration options available in the Java Virtual machine (JVM).\nVoltDB provides a mechanism for passing arbitrary options directly to the JVM. If the environment vari-\nable VOLTDB_OPTS is defined, its value is passed as arguments to the Java command line. Note that\n51Server Configuration Options\nthe contents of VOLTDB_OPTS are added to the Java command line on the current server only. 
In other\nwords, you must define VOLTDB_OPTS on each server to have it take effect for all servers.\nWarning\nVoltDB does not validate the correctness of the arguments you specify using VOLTDB_OPTS\nor their appropriateness for use with VoltDB. This feature is intended for experienced users only\nand should be used with extreme caution.\nA.3. Database Configuration Options\nRuntime configuration options are set either as part of the configuration file or as command line options\nwhen starting the VoltDB server process. These database configuration options are only summarized here.\nSee the Using VoltDB manual for a more detailed explanation. The configuration options include:\n•Sites per host\n•K-Safety\n•Network partition detection\n•Automated snapshots\n•Import and export\n•Command logging\n•Heartbeat\n•Temp table size\n•Query timeout\n•Flush Interval\n•Long-running process warning\n•Copying array parameters\n•Transaction Prioritization\nA.3.1. Sites per Host\nSites per host specifies the number of unique VoltDB \"sites\" that are created on each physical database\nserver. The section on \"Determining How Many Sites per Host\" in the Using VoltDB manual explains how\nto choose a value for sites per host.\nYou set the value of sites per host using the sitesperhost attribute of the <cluster> tag in the\nconfiguration file.\nA.3.2. K-Safety\nK-safety defines the level of availability or durability that the database can sustain, by replicating individual\npartitions to multiple servers. K-safety is described in detail in the \"Availability\" chapter of the Using\nVoltDB manual.\n52Server Configuration Options\nYou specify the level of K-safety that you want in the configuration file using the kfactor attribute of\nthe <cluster> tag.\nA.3.3. Network Partition Detection\nNetwork partition detection protects a VoltDB cluster in environments where the network is susceptible\nto partial or intermittent failure among the server nodes. Partition detection is described in detail in the\n\"Availability\" chapter of the Using VoltDB manual.\nUse of network partition detection is strongly recommended for production systems and therefore is en-\nabled by default. You can enable or disable network partition detection in the configuration file using the\n<partition-detection> tag.\nA.3.4. Automated Snapshots\nAutomated snapshots provide ongoing protection against possible database failure (due to hardware or\nsoftware issues) by taking periodic snapshots of the database's contents. Automated snapshots are de-\nscribed in detail in the section on \" Scheduling Automated Snapshots \" in the Using VoltDB manual.\nYou enable and configure automated snapshots with the <snapshot> tag in the configuration file.\nSnapshot activity involves both processing and disk I/O and so may have a noticeable impact on perfor-\nmance (in terms of throughput and/or latency) on a very busy database. You can control the priority of\nsnapshots activity using the <snapshot> tag within the <systemsettings> element of the deploy-\nment file. The snapshot priority is an integer value between 0 and 10, with 0 being the highest priority\nand 10 being the lowest. The closer to 10, the longer snapshots take to complete, but the less they can\naffect ongoing database work.\nWarning\nSetting snapshot priority directly as described is deprecated. If transaction prioritization is not\nenabled, this method continues to work for backwards compatibility. 
However, the recommended method for setting snapshot priority is to enable transaction prioritization and set the snapshot priority as a child of <priorities>, described in Section A.3.13, "Transaction Prioritization".

Note that snapshot priority affects all snapshot activity, including automated snapshots, manual snapshots, and command logging snapshots.

A.3.5. Import and Export

The import and export functions let you automatically import and/or export selected data between your VoltDB database and another database or distributed service at runtime. These features are described in detail in the chapter on "Importing and Exporting Live Data" in the Using VoltDB manual.

You enable and disable import and export using the <import> and <export> tags in the configuration file.

A.3.6. Command Logging

The command logging function saves a record of each transaction as it is initiated. These logs can then be "replayed" to recreate the database's last known state in case of intentional or accidental shutdown. This feature is described in detail in the chapter on "Command Logging and Recovery" in the Using VoltDB manual.

To enable and disable command logging, use the <commandlog> tag in the configuration file.

A.3.7. Heartbeat

The database servers use a "heartbeat" to verify the presence of other nodes in the cluster. If a heartbeat is not received within a specified time limit, that server is assumed to be down and the cluster reconfigures itself with the remaining nodes (assuming it is running with K-safety). This time limit is called the "heartbeat timeout" and is specified as an integer number of seconds.

For most situations, the default value for the timeout (90 seconds) is appropriate. However, if your cluster is operating in an environment that is susceptible to network fluctuations or unpredictable latency, you may want to increase the heartbeat timeout period.

You can set an alternate heartbeat timeout using the <heartbeat> tag in the configuration file.

Note

Be aware that certain Linux system settings can override the VoltDB heartbeat messages. In particular, lowering the setting for TCP_RETRIES2 may result in the system network timeout interrupting VoltDB's heartbeat mechanism and causing timeouts sooner than expected. Values lower than 8 for TCP_RETRIES2 are not recommended.

A.3.8. Temp Table Size

VoltDB uses temporary tables to store intermediate table data while processing transactions. The default temp table size is 100 megabytes. This setting is appropriate for most applications. However, extremely complex queries or many updates to large records could cause the temporary space to exceed the maximum size, resulting in the transaction failing with an error.

In these unusual cases, you may need to increase the temp table size. You can specify a different size for the temp tables using the <systemsettings> and <temptables> tags in the configuration file and specifying the maxsize attribute as a whole number of megabytes. For example:

<systemsettings>
   <temptables maxsize="200"/>
</systemsettings>

Note: since the temp tables are allocated as needed, increasing the maximum size can result in a Java out-of-memory error at runtime if the system is memory-constrained. Modifying the temp table size should be done with caution.

A.3.9. Query Timeout

In general, SQL queries execute extremely quickly.
But it is possible, usually by accident, to construct a query that takes an unexpectedly long time to execute. This usually happens when the query is overly complex or accesses extremely large tables without the benefit of an appropriate filter or index.

You have the option to set a query timeout limit cluster-wide, for an interactive session, or per transaction. The query limit sets a limit on the length of time any read-only query (or batch of queries in the case of the voltExecuteSQL() method in a stored procedure) is allowed to run. You specify the timeout limit in milliseconds.

To set a cluster-wide query limit you use the <systemsettings> and <query timeout="{limit}"> tags in the configuration file. To set a limit for an interactive session in the sqlcmd utility, you use the --query-timeout flag when invoking sqlcmd. To specify a limit when invoking a specific stored procedure, you use the callProcedureWithTimeout method in place of the callProcedure method.

The cluster-wide limit is set when you initialize the database root directory. By default, the system-wide limit is 10 seconds. You can set a different timeout in the configuration file. Or it can be adjusted using the voltadmin update command to modify the configuration settings while the database is running. If security is enabled, any user can set a lower query limit on a per session or per transaction basis. However, the user must have the ADMIN privilege to set a query limit longer than the cluster-wide setting.

The following example configuration file sets a cluster-wide query timeout value of three seconds:

<systemsettings>
   <query timeout="3000"/>
</systemsettings>

If any query or batch of queries exceeds the query timeout, the query is interrupted and an error is returned to the calling application. Note that the limit is applied to read-only ad hoc queries or queries in read-only stored procedures only. In a K-safe cluster, queries on different copies of a partition may execute at different rates. Consequently the same query may time out in one copy of the partition but not in another. To avoid possible non-deterministic changes, VoltDB does not apply the timeout limit to any queries or procedures that may modify the database contents.

A.3.10. Flush Interval

VoltDB features that interact with external systems, including database replication (DR) and export, limit their activity to balance I/O latency against potentially competing with ongoing database work. These features trigger I/O based on two factors: batch size and a flush interval. In other words, data is written when enough records are received to match the batch size or, if input is sporadic, data is written when the flush interval is reached to avoid small amounts of data being held indefinitely.

There are two different settings that control how frequently data is flushed from the queues. There is a feature-specific flush setting and a system-wide minimum value. You can set different flush intervals with individual features. For example, you might set the DR flush interval to 500 milliseconds to reduce the latency of database replication, while setting the export flush interval to 4 seconds if export latency is not critical.

The system-wide minimum defines how often flush intervals are checked. So no buffers can be written more frequently than the system-wide minimum.
And since the minimum check event and the feature-specific intervals may not line up exactly, actual writes occur at some incremental time after the defined interval. For example, if you set both the minimum and the DR interval at 500 milliseconds, the actual buffer writes might occur anywhere between 500 and 1000ms apart.

You set both the system-wide minimum and feature-specific intervals in the configuration file using the <systemsettings> and <flushinterval> tags. You set the system-wide minimum in the minimum attribute of the <flushinterval> tag and you set the feature-specific intervals using the <dr> and <export> sub-elements. All values are specified in milliseconds. For example:

<systemsettings>
   <flushinterval minimum="500">
      <export interval="4000" />
      <dr interval="500" />
   </flushinterval>
</systemsettings>

The default system-wide minimum is one second (1000). The default flush intervals for DR and export are one second (1000) and four seconds (4000), respectively.

A.3.11. Long-Running Process Warning

You can avoid runaway read-only queries using the query timeout setting. But you cannot stop read-write procedures or other computational tasks, such as automated snapshots. These processes must run to completion. However, you may want to be notified when a process is blocking an execution queue for an extended period of time.

By default, VoltDB writes an informational message into the log file whenever a task runs for more than ten seconds in any of the execution sites. These tasks may be stored procedures, procedure fragments (in the case of multi-partitioned procedures), or operational tasks such as snapshot creation. You can adjust the limit when these messages are written by specifying a value, in milliseconds, in the loginfo attribute of the <procedure> tag in the configuration file. For example, the following configuration file entry changes the threshold after which a message is written to the log to three seconds:

<systemsettings>
   <procedure loginfo="3000"/>
</systemsettings>

Note that in a cluster, the informational message is written only to the log of the server that is hosting the affected queue, not to all server logs.

A.3.12. Copying Array Parameters

You can send mutable datatypes, most notably arrays, as arguments to a VoltDB stored procedure. By default, when this happens on a cluster with K=1 or more, VoltDB makes a copy of the array before using it in a transactional statement, to ensure that the execution of the statement is deterministic. However, copying the contents of the array consumes additional memory, which can add up if procedures are called frequently with large arrays.

The alternative, if the procedures do not modify the contents of the array, is to tell VoltDB not to copy array parameters on K-safe clusters by setting the copyparameters attribute of the <procedure> element to "false":

<systemsettings>
   <procedure copyparameters="false"/>
</systemsettings>

Warning

Only disable copying of parameters if you are sure the stored procedures do not modify any array parameters. If a stored procedure does modify an array when arrays are not being copied, the transaction can result in non-deterministic behavior, including possible data corruption and/or crashing the database.

A.3.13. Transaction Prioritization

By default, all transactions are treated equally and executed on a first in, first out basis.
However, you can enable transaction priorities where individual transactions (or groups of transactions) are given higher or lower priority.

To use transaction priorities, you must enable them in the configuration file by adding <priorities> as a child of the <systemsettings> element. If the <priorities> element is present, priorities are enabled. Or you can explicitly enable or disable them. For example:

<systemsettings>
   <priorities enabled="true"/>
</systemsettings>

You can also set a priority for database replication and/or snapshot transactions using corresponding subelements and specifying a priority between 1 and 8 (1 being the highest priority, 8 being the lowest):

<systemsettings>
   <priorities enabled="true">
      <dr priority="3"/>
      <snapshot priority="6"/>
   </priorities>
</systemsettings>

You can adjust the effects of prioritization by setting the maxwait attribute on the <priorities> element. The maxwait attribute specifies the maximum number of milliseconds a task remains in a priority queue before it gets scheduled for execution regardless of its prioritization. This helps avoid high priority transactions essentially blocking lower priority tasks from getting scheduled. The default wait time is 1000 milliseconds. Setting maxwait to zero (0) means that prioritization is always in effect. The following example reduces the maximum wait time to half a second:

<systemsettings>
   <priorities enabled="true" maxwait="500" />
</systemsettings>

A.4. Path Configuration Options

The running database uses a number of disk locations to store information associated with runtime features, such as export, network partition detection, and snapshots. You can control which paths are used for these disk-based activities. The path configuration options include:

•VoltDB root
•Snapshots path
•Export overflow path
•Command log path
•Command log snapshots path

A.4.1. VoltDB Root

VoltDB defines a root directory for any disk-based activity which is required at runtime. This directory also serves as a root for all other path definitions that take the default or use a relative path specification.

If you do not specify a location for the root directory on the command line, VoltDB uses the current working directory as a default. Normally, you specify the location of the root directory using the --dir flag on the voltdb init and voltdb start commands. The root directory is then the subdirectory voltdbroot within the specified location. (If the subfolder does not exist, VoltDB creates it.) See the section on "Configuring Paths for Runtime Features" in the Using VoltDB manual for details.

A.4.2. Snapshots Path

The snapshots path specifies where automated and network partition snapshots are stored. The default snapshots path is the "snapshots" subfolder of the VoltDB root directory. You can specify an alternate path for snapshots using the <snapshots> child element of the <paths> tag in the configuration file.

A.4.3. Export Overflow Path

The export overflow path specifies where overflow data is stored for the export streams. The default export overflow path is the "export_overflow" subfolder of the VoltDB root directory. You can specify an alternate path using the <exportoverflow> child element of the <paths> tag in the configuration file.

See the chapter on "Exporting Live Data" in the Using VoltDB manual for more information on export overflow.

A.4.4. Command Log Path
Command Log Path\nThe command log path specifies where the command logs are stored when command logging is enabled.\nThe default command log path is the \" command_log \" subfolder of the VoltDB root directory. However,\nfor production use, it is strongly recommended that the command logs be written to a dedicated device,\nnot the same device used for snapshotting or export overflow. You can specify an alternate path using the\n<commandlog> child element of the <paths> tag in the configuration file.\nSee the chapter on \"Command Logging and Recovery\" in the Using VoltDB manual for more information\non command logging.\nA.4.5. Command Log Snapshots Path\nThe command log snapshots path specifies where the snapshots created by command logging are stored.\nThe default path is the \" command_log_snapshot \" subfolder of the VoltDB root directory. (Note that\ncommand log snapshots are stored separately from automated snapshots.) You can specify an alternate\npath using the <commandlogsnapshot> child element of the <paths> tag in the configuration file.\nSee the chapter on \"Command Logging and Recovery\" in the Using VoltDB manual for more information\non command logging.\nA.5. Network Ports\nA VoltDB cluster opens network ports to manage its own operation and to provide services to client ap-\nplications. The network ports are configurable as part of the command that starts the VoltDB database\nprocess. You can specify just a port number or the network interface and the port number, separated by\na colon.\nTable A.1, “VoltDB Port Usage” summarizes the ports that VoltDB uses and their default value. The\nfollowing sections describe each port in more detail and how to set them. Section A.5.7, “TLS/SSL En-\ncryption (Including HTTPS)” explains how to enable TLS encryption for the web and the programming\ninterface ports, client and admin.\nTable A.1. VoltDB Port Usage\nPort Default Value\nClient Port 21212\nAdmin Port 21211\nWeb Interface Port (httpd) 8080\nWeb Interface Port (with TSL/SSL enabled) 8443\nInternal Server Port 3021\n58Server Configuration Options\nPort Default Value\nReplication Port 5555\nZookeeper port 7181\nA.5.1. Client Port\nThe client port is the port VoltDB client applications use to communicate with the database cluster nodes.\nBy default, VoltDB uses port 21212 as the client port. You can change the client port. However, all client\napplications must then use the specified port when creating connections to the cluster nodes.\nTo specify a different client port on the command line, use the --client flag when starting the VoltDB\ndatabase. For example, the following command starts the database using port 12345 as the client port:\n$ voltdb start --dir=~/mydb --client=12345\nIf you change the default client port, all client applications must also connect to the new port. The client\ninterfaces for Java and C++ accept an additional, optional argument to the createConnection method for\nthis purpose. The following examples demonstrate how to connect to an alternate port using the Java and\nC++ client interfaces.\nJava\norg.voltdb.client.Client voltclient;\nvoltclient = ClientFactory.createClient();\nvoltclient.createConnection(\"myserver\",12345); \nC++\nboost::shared_ptr<voltdb::Client> client = voltdb::Client::create();\nclient->createConnection(\"myserver\", 12345);\nA.5.2. Admin Port\nThe admin port is similar to the client port, it accepts and processes requests from applications. 
However,\nthe admin port has the special feature that it continues to accept write requests when the database enters\nadmin, or read-only, mode.\nBy default, VoltDB uses port 21211 on the default external network interface as the admin port. You can\nchange the port assignment on the command line using the --admin flag. For example, the following\ncommand sets the admin port to 2222:\n$ voltdb start --dir=~/mydb --admin=2222\nA.5.3. Web Interface Port (hp)\nThe web interface port is the port that VoltDB listens to for web-based connections. This port is used for\nboth the JSON programming interface and access to the VoltDB Management Center.\nBy default, VoltDB uses port 8080 on the default external network interface as the web port. You can\nchange the port assignment on the command line using the --http flag. For example, the following\ncommand sets the port to 8888:\n$ voltdb start --dir=~/mydb --http=8888\n59Server Configuration Options\nIf you change the port number, be sure to use the new port number when connecting to the cluster using\neither the VoltDB Management Center or the JSON interface. For example, the following URL connects\nto the JSON interface using the reassigned port 8888:\nhttp://athena.mycompany.com:8888/api/2.0/?Procedure=@SystemInformation\nIf you do not want to use the http port of the features it supports (the JSON API and VoltDB Management\nCenter) you can disable the port in the configuration file. For example, for following configuration option\ndisables the default http port:\n<httpd enabled=\"false\"/>\nIf the port is not enabled, neither the JSON interface nor the Management Center are available from the\ncluster. By default, the web interface is enabled.\nAnother aspect of the http port, when it is enabled, is whether the port transmits using http or https. You\ncan enable TLS (Transport Layer Security) encryption on the web interface so that all interaction uses the\nHTTPS protocol. When TLS is enabled, the default port changes to 8443. See Section A.5.7, “TLS/SSL\nEncryption (Including HTTPS)” for information on enabling encryption in the configuration file.\nA.5.4. Internal Server Port\nA VoltDB cluster uses ports to communicate among the cluster nodes. This port is internal to VoltDB and\nshould not be used by other applications.\nBy default, the internal server port is port 3021 for all nodes in t1he cluster1. You can specify an alternate\nport using the --internal flag when starting the VoltDB process. For example, the following command\nstarts the VoltDB process using an internal port of 4000:\n$ voltdb start --dir=~/mydb --internal=4000\nA.5.5. Replication Port\nDuring database replication, producer databases (that is, the master database in passive DR and all clusters\nin XDCR) use a dedicated port to share data to their consumers. By default, the replication port is port\n5555. You can use a different port by specifying a different port number on the voltdb command line using\nthe --replication flag. For example, the following command changes the replication port:\n$ voltdb start --dir=~/mydb --replication=6666\nNote that if you set the replication port on the producer to something other than the default, you must notify\nthe consumers of this change. The replica or other XDCR clusters must specify the port along with the\nnetwork address or hostname in the src attribute of the <connection> element when configuring the\nDR relationship. 
For example, if the server nyc2 has changed its replication port to 3333, another cluster\nin the XDCR relationship might have the following configuration:\n<dr id=\"1\" role=\"xdcr\" >\n <connection source=\"nyc1, nyc2:3333 \" />\n</dr>\nFinally, in some cloud environments, such as Kubernetes, remote clusters may not be able to access the\nproducer cluster by its internal network interface. Consumers can specify the location of the producer in\n1In the special circumstance where multiple VoltDB processes are started for one database, all on the same server, the internal server port is\nincremented from the initial value for each process.\n60Server Configuration Options\nthe DR configuration using a remapped IP address. But once they initialize contact with the producer,\nthe producer sends a list of IP addresses to use for ongoing replication. By default, these are the internal\naddresses the producer cluster knows about.\nYou can tell the producer to advertise a different interface (and port) for this second phase by specifying the\nalternate interface using the --drpublic argument in the voltdb start command. If you do not specify\na port on the --drpublic argument, the internal replication port is used. For example:\n$ voltdb start --drpublic=some.external.addr\nA.5.6. Zookeeper Port\nVoltDB uses a version of Apache Zookeeper to communicate among supplementary functions that require\ncoordination but are not directly tied to database transactions. Zookeeper provides reliable synchroniza-\ntion for functions such as command logging without interfering with the database's own internal commu-\nnications.\nVoltDB uses a network port bound to the local interface (127.0.0.1) to interact with Zookeeper. By default,\n7181 is assigned as the Zookeeper port for VoltDB. You can specify a different port number using the\n--zookeeper flag when starting the VoltDB process. It is also possible to specify a different network\ninterface, like with other ports. However, accepting the default for the zookeeper network interface is\nrecommended where possible. For example:\n$ voltdb start --dir=~/mydb --zookeeper=2288\nA.5.7. TLS/SSL Encryption (Including HTTPS)\nVoltDB lets you enable Transport Layer Security (TLS) — the recommended upgrade from Secure Socket\nLayer (SSL) encryption — for all of its externally-facing interfaces: the web port, client port, admin port,\nand replication (DR) port. When you enable TLS, you automatically enable encryption for the web port.\nYou can then optionally enable encryption for the external ports (client and admin) and/or the replication\nport.\nTo enable TLS encryption you need an appropriate certificate. How you configure TLS depends on whether\nyou create a local certificate or receive one from an authorized certificate provider, such as VeriSign,\nGeoTrust and others. If you use a commercial certificate, you only need to identify the certificate as the key\nstore. If you create your own, you must specify both the key store and the trust store. (See the section on\nusing TLS/SSL for security in the Using VoltDB manual for an example of creating your own certificate.)\nYou enable TLS encryption in the deployment file using the <ssl> element. 
Within <ssl> you specify the\nlocation and password for the key store and, for locally generated certificates, the trust store in separate\nelements like so:\n<ssl>\n <keystore path=\"/etc/mydb/keystore\" password=\"twiddledee\"/>\n <truststore path=\"/etc/mydb/truststore\" password=\"twiddledum\"/>\n</ssl>\nWhen you enable the <ssl> element in the configuration file, TLS encryption is enabled for the web port\nand all access to the httpd port and JSON interface must use the HTTPS protocol. When you enable TLS,\nthe default web port changes from 8080 to 8443.\nYou can explicitly enable or disable TLS encryption by including the enable attribute. (For example, if\nyou want to include the key store and trust store in the configuration but not turn on TLS during testing,\nyou can include enabled=\"false\" .) You can specify that the client and admin API ports are also\n61Server Configuration Options\nTLS encrypted by adding the external attribute and setting it to true. Similarly, you can enable TLS\nencryption for the DR port by adding the dr attribute. For example, the following configuration sample,\nexplicitly enables TLS for all externally-facing ports:\n<ssl enabled=\"true\" external=\"true\" dr=\"true\">\n <keystore path=\"/etc/mydb/keystore\" password=\"twiddledee\"/>\n <truststore path=\"/etc/mydb/truststore\" password=\"twiddledum\"/>\n</ssl>\nNote that you cannot disable TLS encryption for the web port separately. TLS is always enabled for the\nweb port if you enable encryption for any ports.\n62Appendix B. Snapshot Ulies\nVoltDB provides two utilities for managing snapshot files. These utilities verify that a native snapshot\nis complete and usable and convert the snapshot contents to a text representation that can be useful for\nuploading or reloading data in case of severe errors.\nIt is possible, as the result of a design flaw or failed program logic, for a database application to become\nunusable. However, the data is still of value. In such emergency cases, it is desirable to extract the data\nfrom the database and possibly reload it. This is the function that save and restore perform within VoltDB.\nBut there may be cases when you want to use the data created by a VoltDB snapshot elsewhere. The goal\nof the utilities is to assist in that process. The snapshot utilities are:\n•snapshotconvert converts a snapshot (or part of a snapshot) into text files, creating one file for each\ntable in the snapshot.\n•snapshotverifier verifies that a VoltDB snapshot is complete and usable.\nTo use the snapshotconvert and snapshotverifier commands, be sure that the voltdb /bin\ndirectory is in your PATH, as described in the section on \"Setting Up Your Environment\" in the Using\nVoltDB manual. The following sections describe how to use these two commands.\n63Snapshot Utilities\nsnapshotconvert\nsnapshotconvert — Converts the tables in a VoltDB snapshot into text files.\nSyntax\nsnapshotconvert {snapshot-id} --type {csv|tsv} \\\n--table {table} [...] [--dir {directory}]... \\\n[--outdir {directory}]\nsnapshotconvert --help\nDescription\nSnapshotConverter converts one or more tables in a valid snapshot into either comma-separated (csv) or\ntab-separated (tsv) text files, creating one file per table.\nWhere:\n{snapshot-id} is the unique identifier specified when the snapshot was created. (It is also the name\nof the .digest file that is part of the snapshot.) 
You must specify a snapshot ID.\n{csv|tsv} is either \"csv\" or \"tsv\" and specifies whether the output file is comma-separated or\ntab-separated. This argument is also used as the filetype of the output files.\n{table} is the name of the database table that you want to export to text file. You can spec-\nify the --table argument multiple times to convert multiple tables with a single\ncommand.\n{directory} is the directory to search for the snapshot ( --dir) or where to create the resulting\noutput files ( --outdir ). You can specify the --dir argument multiple times to\nsearch multiple directories for the snapshot files. Both --dir and --outdir are\noptional; they default to the current directory path.\nExample\nThe following command exports two tables from a snapshot of the flight reservation example used in the\nUsing VoltDB manual. The utility searches for the snapshot files in the current directory (the default) and\ncreates one file per table in the user's home directory:\n$ snapshotconvert flightsnap --table CUSTOMER --table RESERVATION \\ \n --type csv -- outdir ~/\n64Snapshot Utilities\nsnapshotverifier\nsnapshotverifier — Verifies that the contents of one or more snapshot files are complete and usable.\nSyntax\nsnapshotverifier [snapshot-id] [--dir {directory}] ...\nsnapshotverifier --help\nDescription\nSnapshotVerifier verifies one or more snapshots in the specified directories.\nWhere:\n[snapshot-id] is the unique identifier specified when the snapshot was created. (It is also the name\nof the .digest file that is part of the snapshot.) If you specify a snapshot ID, only\nsnapshots matching that ID are verified. If you do not specify an ID, all snapshots\nfound will be verified.\n{directory} is the directory to search for the snapshot. You can specify the --dir argument\nmultiple times to search multiple directories for snapshot files. If you do not specify\na directory, the default is to search the current directory.\nExamples\nThe following command verifies all of the snapshots in the current directory:\n$ snapshotverifier \nThis example verifies a snapshot with the unique identifier \"flight\" in either the directory /etc/volt-\ndb/save or ~/mysaves :\n$ snapshotverifier flight --dir /etc/voltdb/save/ --dir ~/mysaves\n65" } ]
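The two utilities can also be chained. The following is a hypothetical sketch (the snapshot ID "nightly", the table name ORDERS, and the paths are placeholders, and it assumes snapshotverifier returns a non-zero exit status when verification fails) that converts a snapshot to CSV only after it has been verified:
$ snapshotverifier nightly --dir /var/voltdb/snapshots \
    && snapshotconvert nightly --type csv --table ORDERS \
       --dir /var/voltdb/snapshots --outdir /tmp/snapshot-csv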
{ "category": "App Definition and Development", "file_name": "AdminGuide.pdf", "project_name": "VoltDB", "subcategory": "Database" }
[ { "data": "Query for data \nfrom 2013-01-01 \nto 2013-01-08results for segment 2013-01-01/2013-01-02\nresults for segment 2013-01-02/2013-01-03\nresults for segment 2013-01-07/2013-01-08Cache (on broker nodes)\nsegment for data 2013-01-03/2013-01-04\nsegment for data 2013-01-04/2013-01-05\nsegment for data 2013-01-05/2013-01-06\nsegment for data 2013-01-06/2013-01-07Historical and real-time nodes\nQuery for data \nnot in cache" } ]
{ "category": "App Definition and Development", "file_name": "caching.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "WiredTiger EnginePython API\nmessaging serverC ClientC APImessagingnative callsoptionallystartsJava APInative callsoptionallystartsoptionallyembeds\nDatabase FilesTransactionsIn-memory treeLoggingRowstorageSchema\nConcurrency ControlColumn storage" } ]
{ "category": "App Definition and Development", "file_name": "architecture.pdf", "project_name": "MongoDB", "subcategory": "Database" }
[ { "data": "Meta State Machine (MSM)\nChristophe HenryMeta State Machine (MSM)\nChristophe Henry\nCopyright © 2008-2010 Distributed under the Boost Software License, Version 1.0. (See accompanying file\nLICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt )iiiTable of Contents\nPreface ....................................................................................................................... vi\nI. User' guide ............................................................................................................... 1\n1. Founding idea ................................................................................................... 4\n2. UML Short Guide ............................................................................................. 5\nWhat are state machines? ............................................................................... 5\nConcepts ..................................................................................................... 5\nState machine, state, transition, event ........................................................ 5\nSubmachines, orthogonal regions, pseudostates ........................................... 6\nHistory ............................................................................................... 9\nCompletion transitions / anonymous transitions ......................................... 10\nInternal transitions .............................................................................. 12\nConflicting transitions ......................................................................... 12\nAdded concepts ........................................................................................... 13\nState machine glossary ................................................................................. 13\n3. Tutorial .......................................................................................................... 15\nDesign ....................................................................................................... 15\nBasic front-end ........................................................................................... 15\nA simple example ............................................................................... 15\nTransition table ................................................................................... 15\nDefining states with entry/exit actions ..................................................... 17\nWhat do you actually do inside actions / guards? ....................................... 17\nDefining a simple state machine ............................................................. 19\nDefining a submachine ......................................................................... 19\nOrthogonal regions, terminate state, event deferring ................................... 21\nHistory .............................................................................................. 24\nCompletion (anonymous) transitions ....................................................... 26\nInternal transitions ............................................................................... 26\nmore row types ................................................................................... 28\nExplicit entry / entry and exit pseudo-state / fork ....................................... 28\nFlags ................................................................................................. 
32\nEvent Hierarchy .................................................................................. 33\nCustomizing a state machine / Getting more speed ..................................... 34\nChoosing the initial event ..................................................................... 35\nContaining state machine (deprecated) ................................................... 35\nFunctor front-end ........................................................................................ 35\nTransition table .................................................................................. 35\nDefining states with entry/exit actions ..................................................... 37\nWhat do you actually do inside actions / guards (Part 2)? ............................ 37\nDefining a simple state machine ............................................................. 38\nAnonymous transitions ......................................................................... 38\nInternal transitions ............................................................................... 39\nKleene (any) event .............................................................................. 39\neUML ....................................................................................................... 40\nTransition table ................................................................................... 40\nA simple example: rewriting only our transition table ................................. 41\nDefining events, actions and states with entry/exit actions ........................... 42\nWrapping up a simple state machine and first complete examples ................. 44\nDefining a submachine ......................................................................... 45\nAttributes / Function call ..................................................................... 45\nOrthogonal regions, flags, event deferring ................................................ 47\nCustomizing a state machine / Getting more speed .................................... 48\nCompletion / Anonymous transitions ....................................................... 48\nInternal transitions ............................................................................... 49\nKleene(any) event) .............................................................................. 49\nOther state types ................................................................................. 49Meta State Machine (MSM)\nivHelper functions .................................................................................. 50\nPhoenix-like STL support ..................................................................... 51\nWriting actions with Boost.Phoenix (in development) ................................. 52\nBack-end ................................................................................................... 53\nCreation ............................................................................................. 53\nStarting and stopping a state machine ...................................................... 53\nEvent dispatching ................................................................................ 54\nActive state(s) ..................................................................................... 54\nSerialization ....................................................................................... 54\nBase state type .................................................................................... 
55\nVisitor ............................................................................................... 56\nFlags ................................................................................................. 57\nGetting a state .................................................................................... 57\nState machine constructor with arguments ............................................... 57\nTrading run-time speed for better compile-time / multi-TU compilation .......... 58\nCompile-time state machine analysis ....................................................... 59\nEnqueueing events for later processing ................................................... 60\nCustomizing the message queues ........................................................... 60\nPolicy definition with Boost.Parameter .................................................... 60\nChoosing when to switch active states ..................................................... 60\n4. Performance / Compilers ................................................................................... 62\nSpeed ........................................................................................................ 62\nExecutable size ........................................................................................... 62\nSupported compilers .................................................................................... 62\nLimitations ................................................................................................ 63\nCompilers corner ........................................................................................ 63\n5. Questions & Answers, tips ................................................................................ 65\n6. Internals ......................................................................................................... 67\nBackend: Run To Completion ........................................................................ 67\nFrontend / Backend interface ......................................................................... 68\nGenerated state ids ..................................................................................... 69\nMetaprogramming tools ................................................................................ 70\n7. Acknowledgements .......................................................................................... 72\nMSM v2 .................................................................................................... 72\nMSM v1 ................................................................................................... 72\n8. Version history ................................................................................................ 73\nFrom V2.27 to V2.28 (Boost 1.57) ................................................................. 73\nFrom V2.26 to V2.27 (Boost 1.56) ................................................................. 73\nFrom V2.25 to V2.26 (Boost 1.55) ................................................................. 73\nFrom V2.24 to V2.25 (Boost 1.54) ................................................................. 73\nFrom V2.23 to V2.24 (Boost 1.51) ................................................................. 73\nFrom V2.22 to V2.23 (Boost 1.50) ................................................................. 73\nFrom V2.21 to V2.22 (Boost 1.48) ................................................................. 
74\nFrom V2.20 to V2.21 (Boost 1.47) ................................................................. 74\nFrom V2.12 to V2.20 (Boost 1.46) ................................................................. 74\nFrom V2.10 to V2.12 (Boost 1.45) ................................................................. 75\nFrom V2.0 to V2.12 (Boost 1.44) ................................................................... 75\nII. Reference .............................................................................................................. 76\n9. External references to MSM .............................................................................. 78\n10. eUML operators and basic helpers .................................................................... 79\n11. Functional programming .................................................................................. 82\nCommon headers ................................................................................................ 90\nBack-end ........................................................................................................... 91\nFront-end ........................................................................................................... 97vList of Tables\n10.1. Operators and state machine helpers ....................................................................... 79\n11.1. STL algorithms ................................................................................................... 82\n11.2. STL algorithms ................................................................................................... 82\n11.3. STL algorithms ................................................................................................... 82\n11.4. STL container methods ......................................................................................... 84\n11.5. STL list methods ................................................................................................. 84\n11.6. STL associative container methods ......................................................................... 85\n11.7. STL pair ............................................................................................................ 85\n11.8. STL string .......................................................................................................... 85viPreface\nMSM is a library allowing you to easily and quickly define state machines of very high performance.\nFrom this point, two main questions usually quickly arise, so please allow me to try answering them\nupfront.\n•When do I need a state machine?\nMore often that you think. Very often, one defined a state machine informally without even noticing\nit. For example, one declares inside a class some boolean attribute, say to remember that a task has\nbeen completed. Later the boolean actually needs a third value, so it becomes an int. A few weeks,\na second attribute is needed. Then a third. Soon, you find yourself writing:\nvoid incoming_data(data)\n{\nif (data == packet_3 && flag1 == work_done && flag2 > step3)...\n}\nThis starts to look like event processing (contained inside data) if some stage of the object life has\nbeen achieved (but is ugly).\nThis could be a protocol definition and it is a common use case for state machines. Another\ncommon one is a user interface. The stage of the user's interaction defines if some button is active,\na functionality is available, etc.\nBut there are many more use cases if you start looking. 
Actually, a whole model-driven development\nmethod, Executable UML (http://en.wikipedia.org/wiki/Executable_UML) specifies its complete\ndynamic behavior using state machines. Class diagram, state machine diagrams, and an action\nlanguage are all you absolutely need in the Executable UML world.\n•Another state machine library? What for?\nTrue, there are many state machine libraries. This should already be an indication that if you're not\nusing any of them, you might be missing something. Why should you use this one? Unfortunately,\nwhen looking for a good state machine library, you usually pretty fast hit one or several of the\nfollowing snags:\n•speed: \"state machines are slow\" is usually the first criticism you might hear. While it is often\nan excuse not to use any and instead resort to dirty, hand-written implementations (I mean, no,\nyours are not dirty of course, I'm talking about other developers). MSM removes this often feeble\nexcuse because it is blazingly fast. Most hand-written implementations will be beaten by MSM.\n•ease of use: good argument. If you used another library, you are probably right. Many state\nmachine definitions will look similar to:\nstate s1 = new State; // a state\nstate s2 = new State; // another state\nevent e = new Event; // event\ns1->addTransition(e,s2); // transition s1 -> s2\nThe more transitions you have, the less readable it is. A long time ago, there was not so much Java\nyet, and many electronic systems were built with a state machine defined by a simple transition\ntable. You could easily see the whole structure and immediately see if you forgot some transitions.Preface\nviiThanks to our new OO techniques, this ease of use was gone. MSM gives you back the transition\ntable and reduces the noise to the minimum.\n•expressiveness: MSM offers several front-ends and constantly tries to improve state machine\ndefinition techniques. For example, you can define a transition with eUML (one of MSM's front-\nends) as:\nstate1 == state2 + event [condition] / action\nThis is not simply syntactic sugar. Such a formalized, readable structure allows easy\ncommunication with domain experts of a software to be constructed. Having domain experts\nunderstand your code will greatly reduce the number of bugs.\n•model-driven-development: a common difficulty of a model-driven development is the\ncomplexity of making a round-trip (generating code from model and then model from code). This\nis due to the fact that if a state machine structure is hard for you to read, chances are that your\nparsing tool will also have a hard time. MSM's syntax will hopefully help tool writers.\n•features: most developers use only 20% of the richly defined UML standard. Unfortunately, these\nare never the same 20% for all. And so, very likely, one will need something from the standard\nwhich is not implemented. MSM offers a very large part of the standard, with more on the way.\nLet us not wait any longer, I hope you will enjoy MSM and have fun with it!Part I. User' guide2Table of Contents\n1. Founding idea ........................................................................................................... 4\n2. UML Short Guide ..................................................................................................... 5\nWhat are state machines? ....................................................................................... 5\nConcepts ............................................................................................................. 
5\nState machine, state, transition, event ............................................................... 5\nSubmachines, orthogonal regions, pseudostates ................................................... 6\nHistory ....................................................................................................... 9\nCompletion transitions / anonymous transitions ................................................. 10\nInternal transitions ...................................................................................... 12\nConflicting transitions ................................................................................. 12\nAdded concepts ................................................................................................... 13\nState machine glossary ......................................................................................... 13\n3. Tutorial .................................................................................................................. 15\nDesign ............................................................................................................... 15\nBasic front-end ................................................................................................... 15\nA simple example ....................................................................................... 15\nTransition table ........................................................................................... 15\nDefining states with entry/exit actions ............................................................. 17\nWhat do you actually do inside actions / guards? ............................................... 17\nDefining a simple state machine .................................................................... 19\nDefining a submachine ................................................................................. 19\nOrthogonal regions, terminate state, event deferring ........................................... 21\nHistory ...................................................................................................... 24\nCompletion (anonymous) transitions ............................................................... 26\nInternal transitions ....................................................................................... 26\nmore row types ........................................................................................... 28\nExplicit entry / entry and exit pseudo-state / fork ............................................... 28\nFlags ......................................................................................................... 32\nEvent Hierarchy .......................................................................................... 33\nCustomizing a state machine / Getting more speed ............................................. 34\nChoosing the initial event ............................................................................. 35\nContaining state machine (deprecated) ........................................................... 35\nFunctor front-end ................................................................................................ 35\nTransition table .......................................................................................... 35\nDefining states with entry/exit actions ............................................................. 37\nWhat do you actually do inside actions / guards (Part 2)? .................................... 
37\nDefining a simple state machine .................................................................... 38\nAnonymous transitions ................................................................................. 38\nInternal transitions ....................................................................................... 39\nKleene (any) event ...................................................................................... 39\neUML ............................................................................................................... 40\nTransition table ........................................................................................... 40\nA simple example: rewriting only our transition table ......................................... 41\nDefining events, actions and states with entry/exit actions ................................... 42\nWrapping up a simple state machine and first complete examples ......................... 44\nDefining a submachine ................................................................................. 45\nAttributes / Function call ............................................................................. 45\nOrthogonal regions, flags, event deferring ........................................................ 47\nCustomizing a state machine / Getting more speed ............................................ 48\nCompletion / Anonymous transitions ............................................................... 48\nInternal transitions ....................................................................................... 49\nKleene(any) event) ...................................................................................... 49\nOther state types ......................................................................................... 49\nHelper functions .......................................................................................... 50\nPhoenix-like STL support ............................................................................. 51User' guide\n3Writing actions with Boost.Phoenix (in development) ........................................ 52\nBack-end ........................................................................................................... 53\nCreation ..................................................................................................... 53\nStarting and stopping a state machine ............................................................. 53\nEvent dispatching ........................................................................................ 54\nActive state(s) ............................................................................................. 54\nSerialization ............................................................................................... 54\nBase state type ............................................................................................ 55\nVisitor ....................................................................................................... 56\nFlags ......................................................................................................... 57\nGetting a state ............................................................................................ 57\nState machine constructor with arguments ....................................................... 57\nTrading run-time speed for better compile-time / multi-TU compilation ................. 
58\nCompile-time state machine analysis ............................................................... 59\nEnqueueing events for later processing ........................................................... 60\nCustomizing the message queues .................................................................. 60\nPolicy definition with Boost.Parameter ............................................................ 60\nChoosing when to switch active states ............................................................. 60\n4. Performance / Compilers ........................................................................................... 62\nSpeed ................................................................................................................ 62\nExecutable size ................................................................................................... 62\nSupported compilers ............................................................................................ 62\nLimitations ........................................................................................................ 63\nCompilers corner ................................................................................................ 63\n5. Questions & Answers, tips ........................................................................................ 65\n6. Internals ................................................................................................................. 67\nBackend: Run To Completion ............................................................................... 67\nFrontend / Backend interface ................................................................................. 68\nGenerated state ids ............................................................................................. 69\nMetaprogramming tools ........................................................................................ 70\n7. Acknowledgements .................................................................................................. 72\nMSM v2 ............................................................................................................ 72\nMSM v1 ........................................................................................................... 72\n8. Version history ........................................................................................................ 73\nFrom V2.27 to V2.28 (Boost 1.57) ......................................................................... 73\nFrom V2.26 to V2.27 (Boost 1.56) ......................................................................... 73\nFrom V2.25 to V2.26 (Boost 1.55) ......................................................................... 73\nFrom V2.24 to V2.25 (Boost 1.54) ......................................................................... 73\nFrom V2.23 to V2.24 (Boost 1.51) ......................................................................... 73\nFrom V2.22 to V2.23 (Boost 1.50) ......................................................................... 73\nFrom V2.21 to V2.22 (Boost 1.48) ......................................................................... 74\nFrom V2.20 to V2.21 (Boost 1.47) ......................................................................... 74\nFrom V2.12 to V2.20 (Boost 1.46) ......................................................................... 74\nFrom V2.10 to V2.12 (Boost 1.45) ......................................................................... 
75\nFrom V2.0 to V2.12 (Boost 1.44) .......................................................................... 754Chapter 1. Founding idea\nLet's start with an example taken from the C++ Template Metaprogramming book:\nclass player : public state_machine<player>\n{ \n // The list of FSM states enum states { Empty, Open, Stopped, Playing, Paused , initial_state = Empty }; \n // transition actions \n void start_playback(play const&) { std::cout << \"player::start_playback\\n\"; } \n void open_drawer(open_close const&) { std::cout << \"player::open_drawer\\n\"; } \n // more transition actions\n ...\n typedef player p; // makes transition table cleaner \n struct transition_table : mpl::vector11< \n // Start Event Target Action \n // +---------+------------+-----------+---------------------------+ \n row< Stopped , play , Playing , &p::start_playback >,\n row< Stopped , open_close , Open , &::open_drawer >,\n // +---------+------------+-----------+---------------------------+ \n row< Open , open_close , Empty , &p::close_drawer >,\n // +---------+------------+-----------+---------------------------+ \n row< Empty , open_close , Open , &p::open_drawer >,\n row< Empty , cd_detected, Stopped , &p::store_cd_info >,\n // +---------+------------+-----------+---------------------------+ \n row< Playing , stop , Stopped , &p::stop_playback >,\n row< Playing , pause , Paused , &p::pause_playback >,\n row< Playing , open_close , Open , &p::stop_and_open >,\n // +---------+------------+-----------+---------------------------+ \n row< Paused , play , Playing , &p::resume_playback >,\n row< Paused , stop , Stopped , &p::stop_playback >,\n row< Paused , open_close , Open , &p::stop_and_open >\n // +---------+------------+-----------+---------------------------+ \n > {};\n // Replaces the default no-transition response. \n template <class Event> \n int no_transition(int state, Event const& e)\n { \n std::cout << \"no transition from state \" << state << \" on event \" << typeid(e).name() << std::endl; \n return state; \n }\n}; \nThis example is the foundation for the idea driving MSM: a descriptive and expressive language based\non a transition table with as little syntactic noise as possible, all this while offering as many features\nfrom the UML 2.0 standard as possible. MSM also offers several expressive state machine definition\nsyntaxes with different trade-offs.5Chapter 2. UML Short Guide\nWhat are state machines?\nState machines are the description of a thing's lifeline. They describe the different stages of the lifeline,\nthe events influencing it, and what it does when a particular event is detected at a particular stage.\nThey offer the complete specification of the dynamic behavior of the thing.\nConcepts\nThinking in terms of state machines is a bit surprising at first, so let us have a quick glance at the\nconcepts.\nState machine, state, transition, event\nA state machine is a concrete model describing the behavior of a system. It is composed of a finite\nnumber of states and transitions.\nA simple state has no sub states. It can have data, entry and exit behaviors and deferred events. One can\nprovide entry and exit behaviors (also called actions) to states (or state machines), which are executed\nwhenever a state is entered or left, no matter how. A state can also have internal transitions which\ncause no entry or exit behavior to be called. A state can mark events as deferred. This means the event\ncannot be processed if this state is active, but it must be retained. 
Next time a state not deferring this\nevent is active, the event will be processed, as if it had just been fired.\nA transition is the switching between active states, triggered by an event. Actions and guard conditions\ncan be attached to the transition. The action executes when the transition fires, the guard is a Boolean\noperation executed first and which can prevent the transition from firing by returning false.UML Short Guide\n6\nAn initial state marks the first active state of a state machine. It has no real existence and neither has\nthe transition originating from it.\nSubmachines, orthogonal regions, pseudostates\nA composite state is a state containing a region or decomposed in two or more regions. A composite\nstate contains its own set of states and regions.\nA submachine is a state machine inserted as a state in another state machine. The same submachine\ncan be inserted more than once.\nOrthogonal regions are parts of a composite state or submachine, each having its own set of mutually\nexclusive set of states and transitions.\nUML also defines a number of pseudo states, which are considered important concepts to model, but\nnot enough to make them first-class citizens. The terminate pseudo state terminates the execution of\na state machine (MSM handles this slightly differently. The state machine is not destroyed but no\nfurther event processing occurs.).UML Short Guide\n7\nAn exit point pseudo state exits a composite state or a submachine and forces termination of execution\nin all contained regions.\nAn entry point pseudo state allows a kind of controlled entry inside a composite. Precisely, it connects\na transition outside the composite to a transition inside the composite. An important point is that this\nmechanism only allows a single region to be entered. In the above diagram, in region1, the initial state\nwould become active.UML Short Guide\n8\nThere are also two more ways to enter a submachine (apart the obvious and more common case of a\ntransition terminating on the submachine as shown in the region case). An explicit entry means that\nan inside state is the target of a transition. Unlike with direct entry, no tentative encapsulation is made,\nand only one transition is executed. An explicit exit is a transition from an inner state to a state outside\nthe submachine (not supported by MSM). I would not recommend using explicit entry or exit.UML Short Guide\n9\nThe last entry possibility is using fork. A fork is an explicit entry into one or more regions. Other\nregions are again activated using their initial state.\nHistory\nUML defines two kinds of history, shallow history and deep history. Shallow history is a pseudo state\nrepresenting the most recent substate of a submachine. A submachine can have at most one shallowUML Short Guide\n10history. A transition with a history pseudo state as target is equivalent to a transition with the most\nrecent substate as target. And very importantly, only one transition may originate from the history.\nDeep history is a shallow history recursively reactivating the substates of the most recent substate. It\nis represented like the shallow history with a star (H* inside a circle).\nHistory is not a completely satisfying concept. First of all, there can be just one history pseudo state\nand only one transition may originate from it. So they do not mix well with orthogonal regions as only\none region can be “remembered”. 
Deep history is even worse and looks like a last-minute addition.\nHistory has to be activated by a transition and only one transition originates from it, so how to model\nthe transition originating from the deep history pseudo state and pointing to the most recent substate\nof the substate? As a bonus, it is also inflexible and does not accept new types of histories. Let's face\nit, history sounds great and is useful in theory, but the UML version is not quite making the cut. And\ntherefore, MSM provides a different version of this useful concept.\nCompletion transitions / anonymous transitions\nCompletion events (or transitions), also called anonymous transitions, are defined as transitions having\nno defined event triggering them. This means that such transitions will immediately fire when a state\nbeing the source of an anonymous transition becomes active, provided that a guard allows it. They are\nuseful in modeling algorithms as an activity diagram would normally do. In the real-time world, they\nhave the advantage of making it easier to estimate how long a periodically executed action will last.\nFor example, consider the following diagram.UML Short Guide\n11\nUML Short Guide\n12The designer now knows at any time that he will need a maximum of 4 transitions. Being able to\nestimate how long a transition takes, he can estimate how much of a time frame he will need to require\n(real-time tasks are often executed at regular intervals). If he can also estimate the duration of actions,\nhe can even use graph algorithms to better estimate his timing requirements.\nInternal transitions\nInternal transitions are transitions executing in the scope of the active state, being a simple state or a\nsubmachine. One can see them as a self-transition of this state, without an entry or exit action called.\nConflicting transitions\nIf, for a given event, several transitions are enabled, they are said to be in conflict. There are two\nkinds of conflicts:\n•For a given source state, several transitions are defined, triggered by the same event. Normally, the\nguard condition in each transition defines which one is fired.\n•The source state is a submachine or simple state and the conflict is between a transition internal to\nthis state and a transition triggered by the same event and having as target another state.\nThe first one is simple; one only needs to define two or more rows in the transition table, with the\nsame source and trigger, with a different guard condition. Beware, however, that the UML standard\nwants these conditions to be not overlapping. If they do, the standard says nothing except that this\nis incorrect, so the implementer is free to implement it the way he sees fit. 
In the case of MSM, the\ntransition appearing last in the transition table gets selected first, if it returns false (meaning disabled),\nthe library tries with the previous one, and so on.\nIn the second case, UML defines that the most inner transition gets selected first, which makes sense,\notherwise no exit point pseudo state would be possible (the inner transition brings us to the exit point,\nfrom where the containing state machine can take over).UML Short Guide\n13\nMSM handles both cases itself, so the designer needs only concentrate on its state machine and the\nUML subtleties (not overlapping conditions), not on implementing this behavior himself.\nAdded concepts\n•Interrupt states: a terminate state which can be exited if a defined event is triggered.\n•Kleene (any) event: a transition with a kleene event will accept any event as trigger. Unlike a\ncompletion transition, an event must be triggered and the original event is kept accessible in the\nkleene event.\nState machine glossary\n•state machine: the life cycle of a thing. It is made of states, regions, transitions and processes\nincoming events.\n•state: a stage in the life cycle of a state machine. A state (like a submachine) can have an entry\nand exit behaviors.\n•event: an incident provoking (or not) a reaction of the state machine\n•transition: a specification of how a state machine reacts to an event. It specifies a source state,\nthe event triggering the transition, the target state (which will become the newly active state if the\ntransition is triggered), guard and actions.\n•action: an operation executed during the triggering of the transition.\n•guard: a boolean operation being able to prevent the triggering of a transition which would otherwise\nfire.\n•transition table: representation of a state machine. A state machine diagram is a graphical, but\nincomplete representation of the same model. A transition table, on the other hand, is a complete\nrepresentation.\n•initial state: The state in which the state machine starts. Having several orthogonal regions means\nhaving as many initial states.\n•submachine: A submachine is a state machine inserted as a state in another state machine and can\nbe found several times in a same state machine.\n•orthogonal regions: (logical) parallel flow of execution of a state machine. Every region of a state\nmachine gets a chance to process an incoming event.\n•terminate pseudo-state: when this state becomes active, it terminates the execution of the whole\nstate machine. MSM does not destroy the state machine as required by the UML standard, however,\nwhich lets you keep all the state machine's data.UML Short Guide\n14•entry/exit pseudo state: defined for submachines and are defined as a connection between a transition\noutside of the submachine and a transition inside the submachine. It is a way to enter or leave a\nsubmachine through a predefined point.\n•fork: a fork allows explicit entry into several orthogonal regions of a submachine.\n•history: a history is a way to remember the active state of a submachine so that the submachine can\nproceed in its last active state next time it becomes active.\n•completion events (also called completion/anonymous transitions): when a transition has no named\nevent triggering it, it automatically fires when the source state is active, unless a guard forbids it.\n•transition conflict: a conflict is present if for a given source state and incoming event, several\ntransitions are possible. 
UML specifies that guard conditions have to solve the conflict.\n•internal transitions: transition from a state to itself without having exit and entry actions being called.15Chapter 3. Tutorial\nDesign\nMSM is divided between front–ends and back-ends. At the moment, there is just one back-end. On\nthe front-end side, you will find three of them which are as many state machine description languages,\nwith many more possible. For potential language writers, this document contains a description of the\ninterface between front-end and back-end .\nThe first front-end is an adaptation of the example provided in the MPL book [http://boostpro.com/\nmplbook] with actions defined as pointers to state or state machine methods. The second one is based\non functors. The third, eUML (embedded UML) is an experimental language based on Boost.Proto and\nBoost.Typeof and hiding most of the metaprogramming to increase readability. Both eUML and the\nfunctor front-end also offer a functional library (a bit like Boost.Phoenix) for use as action language\n(UML defining none).\nBasic front-end\nThis is the historical front-end, inherited from the MPL book. It provides a transition table made of\nrows of different names and functionality. Actions and guards are defined as methods and referenced\nthrough a pointer in the transition. This front-end provides a simple interface making easy state\nmachines easy to define, but more complex state machines a bit harder.\nA simple example\nLet us have a look at a state machine diagram of the founding example:\nWe are now going to build it with MSM's basic front-end. An implementation [examples/\nSimpleTutorial.cpp ] is also provided.\nTransition table\nAs previously stated, MSM is based on the transition table, so let us define one:Tutorial\n16 \nstruct transition_table : mpl::vector<\n// Start Event Target Action Guard \n// +---------+------------+-----------+---------------------------+----------------------------+ \na_row< Stopped , play , Playing , &player_::start_playback >,\na_row< Stopped , open_close , Open , &player_::open_drawer >,\n _row< Stopped , stop , Stopped >,\n// +---------+------------+-----------+---------------------------+----------------------------+ \na_row< Open , open_close , Empty , &player_::close_drawer >,\n// +---------+------------+-----------+---------------------------+----------------------------+ \na_row< Empty , open_close , Open , &player_::open_drawer >,\n row< Empty , cd_detected, Stopped , &player_::store_cd_info , &player_::good_disk_format >,\n row< Empty , cd_detected, Playing , &player_::store_cd_info , &player_::auto_start >,\n// +---------+------------+-----------+---------------------------+----------------------------+ \na_row< Playing , stop , Stopped , &player_::stop_playback >,\na_row< Playing , pause , Paused , &player_::pause_playback >,\na_row< Playing , open_close , Open , &player_::stop_and_open >,\n// +---------+------------+-----------+---------------------------+----------------------------+ \na_row< Paused , end_pause , Playing , &player_::resume_playback >,\na_row< Paused , stop , Stopped , &player_::stop_playback >,\na_row< Paused , open_close , Open , &player_::stop_and_open >\n// +---------+------------+-----------+---------------------------+----------------------------+ \n> {};\n \nYou will notice that this is almost exactly our founding example. The only change in the transition\ntable is the different types of transitions (rows). 
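Before looking at the row types in detail, note that the events referenced in this table are ordinary C++ structs. As a minimal, illustrative sketch (the tutorial's real cd_detected also carries information about the detected disc, which is omitted here):
// events are plain structs and may carry data if actions or guards need it
struct play {};
struct stop {};
struct pause {};
struct end_pause {};
struct open_close {};
struct cd_detected {}; // the full example additionally stores the disc name and type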
The founding example forces one to define an action method and offers no guards. You have 4 basic row types:
•row takes 5 arguments: start state, event, target state, action and guard.
•a_row (“a” for action) allows defining only the action and omitting the guard condition.
•g_row (“g” for guard) allows omitting the action behavior and defining only the guard.
•_row allows omitting both action and guard.
The signature of an action method is void method_name(event const&), for example:
void stop_playback(stop const&)
Action methods return nothing and take the argument as const reference. Of course nothing forbids you from using the same action for several events:
template <class Event> void stop_playback(Event const&)
The only difference for guards is the return value, which is a boolean:
bool good_disk_format(cd_detected const& evt)
The transition table is actually an MPL vector (or list), which brings the limitation that the default maximum size of the table is 20. If you need more transitions, you need to override this default by adding, before any header:
#define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
#define BOOST_MPL_LIMIT_VECTOR_SIZE 30 //or whatever you need 
#define BOOST_MPL_LIMIT_MAP_SIZE 30 //or whatever you need 
The other limitation is that the MPL types are defined only up to 50 entries. For the moment, the only solution to go beyond that is to add headers to the MPL (luckily, this is not very complicated).
Defining states with entry/exit actions
While states were enums in the MPL book, they now are classes, which allows them to hold data, provide entry and exit behaviors and be reusable (as they do not know anything about the containing state machine). To define a state, inherit from the desired state type. You will mainly use simple states:
struct Empty : public msm::front::state<> {};
They can optionally provide entry and exit behaviors:
struct Empty : public msm::front::state<> 
{
 template <class Event, class Fsm> 
 void on_entry(Event const&, Fsm& ) 
 {std::cout <<"entering: Empty" << std::endl;} 
 template <class Event, class Fsm> 
 void on_exit(Event const&, Fsm& ) 
 {std::cout <<"leaving: Empty" << std::endl;} 
};
Notice how the entry and exit behaviors are templatized on the event and state machine. Being generic facilitates reuse. There are more state types (terminate, interrupt, pseudo states, etc.) corresponding to the UML standard state types. These will be described in detail in the next sections.
What do you actually do inside actions / guards?
State machines define a structure and important parts of the complete behavior, but not all. For example, if you need to send a rocket to Alpha Centauri, you can have a transition to a state "SendRocketToAlphaCentauri" but no code actually sending the rocket. This is where you need actions. So a simple action could be:
template <class Fire> void send_rocket(Fire const&)
{
 fire_rocket();
}
Ok, this was simple. Now, we might want to give a direction. Let us suppose this information is externally provided when needed; it then makes sense to use the event for this:
// Event
struct Fire {Direction direction;};
template <class Fire> void send_rocket(Fire const& evt)
{
 fire_rocket(evt.direction);
}
We might want to calculate the direction based not only on external data but also on data accumulated during previous work. In this case, you might want to have this data in the state machine itself.
As transition actions are members of the front-end, you can directly access the data:
// Event
struct Fire {Direction direction;};
// front-end definition, see below
struct launcher_ : public msm::front::state_machine_def<launcher_>{
Data current_calculation;
template <class Fire> void send_rocket(Fire const& evt)
{
 fire_rocket(evt.direction, current_calculation);
}
...
};
Entry and exit actions represent a behavior common to a state, no matter through which transition it is entered or left. States being reusable, it might make sense to locate your data there instead of in the state machine, to maximize reuse and make code more readable. Entry and exit actions have access to the state data (being state members) but also to the event and state machine, like transition actions. This happens through the Event and Fsm template parameters:
struct Launching : public msm::front::state<> 
{
 template <class Event, class Fsm> 
 void on_entry(Event const& evt, Fsm& fsm) 
 {
 fire_rocket(evt.direction, fsm.current_calculation);
 } 
};
Exit actions are also ideal for cleanup when the state becomes inactive.
Another possible use of the entry action is to pass data to substates / submachines. Launching is a substate containing a data attribute:
struct launcher_ : public msm::front::state_machine_def<launcher_>{
Data current_calculation;
// state machines also have entry/exit actions 
template <class Event, class Fsm> 
void on_entry(Event const& evt, Fsm& fsm) 
{
 launcher_::Launching& s = fsm.get_state<launcher_::Launching&>();
 s.data = fsm.current_calculation;
} 
...
};
The set_states back-end method allows you to replace a complete state.
The functor front-end and eUML offer more capabilities.
However, this basic front-end also has special capabilities using the row2 / irow2 transitions. _row2, a_row2, row2, g_row2, a_irow2, irow2, g_irow2 let you call an action located in any state of the current fsm or in the front-end itself, thus letting you place useful data anywhere you see fit.
It is sometimes desirable to generate new events for the state machine inside actions. Since the process_event method belongs to the back-end, you first need to gain a reference to it. The back-end derives from the front-end, so one way of doing this is to use a cast:
struct launcher_ : public msm::front::state_machine_def<launcher_>{
template <class Fire> void send_rocket(Fire const& evt)
{
 fire_rocket();
 msm::back::state_machine<launcher_> &fsm = static_cast<msm::back::state_machine<launcher_> &>(*this);
 fsm.process_event(rocket_launched());
}
...
};
The same can be implemented inside entry/exit actions. Admittedly, this is a bit awkward. A more natural mechanism is available using the functor front-end.
Defining a simple state machine
Declaring a state machine is straightforward and is done with a high signal / noise ratio. In our player example, we declare the state machine as:
struct player_ : public msm::front::state_machine_def<player_>{
 /* see below */}
This declares a state machine using the basic front-end. We now declare the initial state inside the state machine structure:
typedef Empty initial_state;
And that is about all that is absolutely needed.
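Putting these pieces together, a minimal front-end could look like the following sketch (names follow the tutorial; actions, guards and most states are trimmed for brevity):
#include <boost/msm/front/state_machine_def.hpp>
#include <boost/msm/front/states.hpp>
#include <boost/mpl/vector.hpp>
namespace msm = boost::msm;
namespace mpl = boost::mpl;
struct open_close {}; // event
struct player_ : public msm::front::state_machine_def<player_>
{
 // states, here declared inside the front-end
 struct Empty : public msm::front::state<> {};
 struct Open : public msm::front::state<> {};
 // the state in which the machine starts
 typedef Empty initial_state;
 // a transition action
 void open_drawer(open_close const&) { /* ... */ }
 // transition table, reduced to a single row for this sketch
 struct transition_table : mpl::vector<
 a_row< Empty , open_close , Open , &player_::open_drawer >
 > {};
};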
In the example, the states are declared inside the\nstate machine for readability but this is not a requirements, states can be declared wherever you like.\nAll what is left to do is to pick a back-end (which is quite simple as there is only one at the moment):\ntypedef msm::back::state_machine<player_> player;\nYou now have a ready-to-use state machine with entry/exit actions, guards, transition actions, a\nmessage queue so that processing an event can generate another event. The state machine also adapted\nitself to your need and removed almost all features we didn't use in this simple example. Note that\nthis is not per default the fastest possible state machine. See the section \"getting more speed\" to know\nhow to get the maximum speed. In a nutshell, MSM cannot know about your usage of some features\nso you will have to explicitly tell it.\nState objects are built automatically with the state machine. They will exist until state machine\ndestruction. MSM is using Boost.Fusion behind the hood. This unfortunately means that if you define\nmore than 10 states, you will need to extend the default:\n#define FUSION_MAX_VECTOR_SIZE 20 // or whatever you need\n \nWhen an unexpected event is fired, the no_transition(event, state machine, state\nid) method of the state machine is called . By default, this method simply asserts when called. It is\npossible to overwrite the no_transition method to define a different handling:\ntemplate <class Fsm,class Event> \nvoid no_transition(Event const& e, Fsm& ,int state){...}\nNote: you might have noticed that the tutorial calls start() on the state machine just after creation.\nThe start method will initiate the state machine, meaning it will activate the initial state, which means in\nturn that the initial state's entry behavior will be called. The reason why we need this will be explained\nin the back-end part . After a call to start, the state machine is ready to process events. The same way,\ncalling stop() will cause the last exit actions to be called.\nDefining a submachine\nWe now want to extend our last state machine by making the Playing state a state machine itself (a\nsubmachine).Tutorial\n20\nAgain, an example [examples/CompositeTutorial.cpp ] is also provided.\nA submachine really is a state machine itself, so we declare Playing as such, choosing a front-end\nand a back-end:\nstruct Playing_ : public msm::front::state_machine_def<Playing_>{...} \ntypedef msm::back::state_machine<Playing_> Playing;\nLike for any state machine, one also needs a transition table and an initial state:\n \nstruct transition_table : mpl::vector<\n// Start Event Target Action Guard \n// +--------+---------+--------+---------------------------+------+ \na_row< Song1 , NextSong, Song2 , &Playing_::start_next_song >,\na_row< Song2 , NextSong, Song1 , &Playing_::start_prev_song >,\na_row< Song2 , NextSong, Song3 , &Playing_::start_next_song >,\na_row< Song3 , NextSong, Song2 , &Playing_::start_prev_song >\n// +--------+---------+--------+---------------------------+------+ \n> {};\n \ntypedef Song1 initial_state; \nThis is about all you need to do. MSM will now automatically recognize Playing as a submachine and\nall events handled by Playing (NextSong and PreviousSong) will now be automatically forwarded to\nPlaying whenever this state is active. 
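As a short usage sketch (names taken from the example above; the comments state the assumed current state):
player p; // msm::back::state_machine<player_>
p.start(); // activates the initial state and calls its entry behavior
// ... events bringing the machine into the Playing state ...
p.process_event(NextSong()); // not in the outer table: forwarded to Playing (Song1 -> Song2)
p.process_event(NextSong()); // Song2 -> Song3
p.process_event(stop()); // handled by the containing machine: Playing -> Stopped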
All other state machine features described later are also available.\nYou can even decide to use a state machine sometimes as submachine or sometimes as an independent\nstate machine.\nThere is, however, a limitation for submachines. If a submachine's substate has an entry action\nwhich requires a special event property (like a given method), the compiler will require all eventsTutorial\n21entering this submachine to support this property. As this is not practicable, we will need to use\nboost::enable_if / boost::disable_if to help, for example consider:\n// define a property for use with enable_if \nBOOST_MPL_HAS_XXX_TRAIT_DEF(some_event_property)\n// this event supports some_event_property and a corresponding required method\nstruct event1\n{\n // the property\n typedef int some_event_property;\n // the method required by this property\n void some_property(){...}\n};\n// this event does not supports some_event_property\nstruct event2\n{\n};\nstruct some_state : public msm::front::state<>\n{\n template <class Event,class Fsm>\n // enable this version for events supporting some_event_property\n typename boost::enable_if<typename has_some_event_property<Event>::type,void>::type\n on_entry(Event const& evt,Fsm& fsm)\n {\n evt.some_property();\n }\n // for events not supporting some_event_property\n template <class Event,class Fsm>\n typename boost::disable_if<typename has_some_event_property<Event>::type,void>::type\n on_entry(Event const& ,Fsm& )\n { }\n}; \nNow this state can be used in your submachine.\nOrthogonal regions, terminate state, event deferring\nIt is a very common problem in many state machines to have to handle errors. It usually involves\ndefining a transition from all the states to a special error state. Translation: not fun. It is also not\npractical to find from which state the error originated. The following diagram shows an example of\nwhat clearly becomes not very readable:Tutorial\n22\nThis is neither very readable nor beautiful. And we do not even have any action on the transitions yet\nto make it even less readable.\nLuckily, UML provides a helpful concept, orthogonal regions. See them as lightweight state machines\nrunning at the same time inside a common state machine and having the capability to influence one\nanother. The effect is that you have several active states at any time. We can therefore keep our state\nmachine from the previous example and just define a new region made of two states, AllOk and\nErrorMode. AllOk is most of the time active. But the error_found error event makes the second region\nmove to the new active state ErrorMode. This event does not interest the main region so it will simply\nbe ignored. \" no_transition \" will be called only if no region at all handles the event. Also, as\nUML mandates, every region gets a chance of handling the event, in the order as declared by the\ninitial_state type.\nAdding an orthogonal region is easy, one only needs to declare more states in the initial_state\ntypedef. So, adding a new region with AllOk as the region's initial state is:\ntypedef mpl::vector<Empty,AllOk> initial_state;Tutorial\n23\nFurthermore, when you detect an error, you usually do not want events to be further processed. To\nachieve this, we use another UML feature, terminate states. When any region moves to a terminate\nstate, the state machine “terminates” (the state machine and all its states stay alive) and all events are\nignored. 
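Declaring such a terminate state only requires inheriting from the corresponding front-end type. A minimal sketch (the state name is illustrative):
#include <boost/msm/front/states.hpp>
#include <iostream>
namespace msm = boost::msm;
// once this state becomes active, the state machine ignores all further events
struct ErrorTerminate : public msm::front::terminate_state<>
{
 template <class Event, class Fsm>
 void on_entry(Event const&, Fsm&) {std::cout << "entering: ErrorTerminate" << std::endl;}
};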
This is of course not mandatory, one can use orthogonal regions without terminate states.\nMSM also provides a small extension to UML, interrupt states. If you declare ErrorMode (or a\nBoost.MPL sequence of events, like boost::mpl::vector<ErrorMode, AnotherEvent>) as interrupt state\ninstead of terminate state, the state machine will not handle any event other than the one which ends\nthe interrupt. So it's like a terminate state, with the difference that you are allowed to resume the state\nmachine when a condition (like handling of the original error) is met.\nLast but not least, this example also shows here the handling of event deferring. Let's say someone\nputs a disc and immediately presses play. The event cannot be handled, yet you'd want it to be handled\nat a later point and not force the user to press play again. The solution is to define it as deferred in\nthe Empty and Open states and get it handled in the first state where the event is not to be deferred.\nIt can then be handled or rejected. In this example, when Stopped becomes active, the event will be\nhandled because only Empty and Open defer the event.\nUML defines event deferring as a state property. To accommodate this, MSM lets you specify this in\nstates by providing a deferred_events type:\nstruct Empty : public msm::front::state<> \n{\n // if the play event is fired while in this state, defer it until a state\n // handles or rejects it\n typedef mpl::vector<play> deferred_events;\n...\n}; \nPlease have a look at the complete example [examples/Orthogonal-deferred.cpp ].\nWhile this is wanted by UML and is simple, it is not always practical because one could wish to\ndefer only in certain conditions. One could also want to make this be part of a transition action with\nthe added bonus of a guard for more sophisticated behaviors. It would also be conform to the MSM\nphilosophy to get as much as possible in the transition table, where you have the whole state machine\nstructure. This is also possible but not practical with this front-end so we will need to pick a differentTutorial\n24row from the functor front-end. For a complete description of the Row type, please have a look at the\nfunctor front-end.\nFirst, as there is no state where MSM can automatically find out the usage of this feature, we need to\nrequire deferred events capability explicitly, by adding a type in the state machine definition:\nstruct player_ : public msm::front::state_machine_def<player_>\n{ \n typedef int activate_deferred_events;\n...\n}; \nWe can now defer an event in any transition of the transition table by using as action the predefined\nmsm::front::Defer functor, for example:\nRow < Empty , play , none , Defer , none >\nThis is an internal transition row(see internal transitions ) but you can ignore this for the moment. It\njust means that we are not leaving the Empty state. What matters is that we use Defer as action. This\nis roughly equivalent to the previous syntax but has the advantage of giving you all the information\nin the transition table with the added power of transition behavior.\nThe second difference is that as we now have a transition defined, this transition can play in the\nresolution of transition conflicts . 
For example, we could model an "if (condition2) move to Playing else if (condition1) defer play event":
Row < Empty , play , none , Defer , condition1 >,
g_row < Empty , play , Playing , &player_::condition2 >
Please have a look at this possible implementation [examples/Orthogonal-deferred2.cpp ].
History
UML defines two types of history, Shallow History and Deep History. In the previous examples, if the player was playing the second song and the user pressed pause, leaving Playing, at the next press on the play button the Playing state would become active and the first song would play again. The first client complaints would soon follow. They would of course demand that if the player was paused, it should remember which song was playing; but if the player was stopped, it should restart from the first song. How can it be done? Of course, you could add a bit of programming logic and generate extra events to make the second song start if coming from Pause. Something like:
if (Event == end_pause) 
{ 
 for (int i=0;i< song number;++i) {player.process_event(NextSong()); } 
} 
Not much to like in this example, is there? To solve this problem, you define what is called a shallow or a deep history. A shallow history reactivates the last active substate of a submachine when this submachine becomes active again. The deep history does the same recursively, so if this last active substate of the submachine was itself a submachine, its last active substate would become active and this will continue recursively until an active state is a normal state. For example, let us have a look at the following UML diagram:
Notice that the main difference compared to previous diagrams is that the initial state is gone and replaced by a History symbol (the H inside a circle).
As explained in the small UML tutorial, History is a good concept with a not completely satisfying specification. MSM kept the concept but not the specification and goes another way by making this a policy, and you can add your own history types (the reference explains what needs to be done). Furthermore, History is a back-end policy. This allows you to reuse the same state machine definition with different history policies in different contexts.
Concretely, your front-end stays unchanged:
struct Playing_ : public msm::front::state_machine_def<Playing_>
You then add the policy to the back-end as second parameter:
typedef msm::back::state_machine<Playing_,
 msm::back::ShallowHistory<mpl::vector<end_pause> > > Playing;
This states that a shallow history must be activated if the Playing state machine gets activated by the end_pause event and only this one (or any other event added to the mpl::vector). If the state machine was in the Stopped state and the event play was generated, the history would not be activated and the normal initial state would become active. By default, history is disabled. For your convenience the library provides, in addition to ShallowHistory, a non-UML-standard AlwaysHistory policy (likely to be your main choice) which always activates history, whatever event triggers the submachine activation. Deep history is not available as a policy (but could be added). The reason is that it would conflict with policies which submachines could define. Of course, if for example Song1 were a state machine itself, it could use the ShallowHistory policy itself, thus creating Deep History for itself.
An example\n[examples/History.cpp ] is also provided.Tutorial\n26Completion (anonymous) transitions\nThe following diagram shows an example making use of this feature:\nAnonymous transitions are transitions without a named event. This means that the transition\nautomatically fires when the predecessor state is entered (to be exact, after the entry action). Otherwise\nit is a normal transition with actions and guards. Why would you need something like that? A possible\ncase would be if a part of your state machine implements some algorithm, where states are steps of the\nalgorithm implementation. Then, using several anonymous transitions with different guard conditions,\nyou are actually implementing some if/else statement. Another possible use would be a real-time\nsystem called at regular intervals and always doing the same thing, meaning implementing the same\nalgorithm. The advantage is that once you know how long a transition takes to execute on the system,\nby calculating the longest path (the number of transitions from start to end), you can pretty much know\nhow long your algorithm will take in the worst case, which in turns tells you how much of a time\nframe you are to request from a scheduler.\nIf you are using Executable UML (a good book describing it is \"Executable UML, a foundation for\nModel-Driven Architecture\"), you will notice that it is common for a state machine to generate an\nevent to itself only to force leaving a state. Anonymous transitions free you from this constraint.\nIf you do not use this feature in a concrete state machine, MSM will deactivate it and you will not pay\nfor it. If you use it, there is however a small performance penalty as MSM will try to fire a compound\nevent (the other UML name for anonymous transitions) after every taken transition. This will therefore\ndouble the event processing cost, which is not as bad as it sounds as MSM’s execution speed is very\nhigh anyway.\nTo define such a transition, use “none” as event in the transition table, for example:\nrow < State3 , none , State4 , &p::State3ToState4 , &p::always_true >\nAn implementation [examples/AnonymousTutorial.cpp ] of the state machine diagram is also\nprovided.\nInternal transitions\nInternal transitions are transitions executing in the scope of the active state, a simple state or a\nsubmachine. One can see them as a self-transition of this state, without an entry or exit action called.\nThis is useful when all you want is to execute some code for a given event in a given state.Tutorial\n27Internal transitions are specified as having a higher priority than normal transitions. While it makes\nsense for a submachine with exit points, it is surprising for a simple state. MSM lets you define the\ntransition priority by setting the transition’s position inside the transition table (see internals ). The\ndifference between \"normal\" and internal transitions is that internal transitions have no target state,\ntherefore we need new row types. We had a_row, g_row, _row and row, we now add a_irow, g_irow,\n_irow and irow which are like normal transitions but define no target state. For, example an internal\ntransition with a guard condition could be:\ng_irow < Empty /*state*/,cd_detected/*event*/,&p::internal_guard/* guard */>\nThese new row types can be placed anywhere in the transition table so that you can still have your state\nmachine structure grouped together. The only difference of behavior with the UML standard is the\nmissing notion of higher priority for internal transitions. 
Please have a look at the example [examples/\nSimpleTutorialInternal.cpp ].\nIt is also possible to do it the UML-conform way by declaring a transition table called internal\ntransition_table inside the state itself and using internal row types. For example:\nstruct Empty : public msm::front::state<> \n{\n struct internal_transition_table : mpl::vector<\n a_internal < cd_detected , Empty, &Empty::internal_action >\n > {};\n};\nThis declares an internal transition table called internal_transition_table and reacting on the event\ncd_detected by calling internal_action on Empty. Let us note a few points:\n•internal tables are NOT called transition_table but internal_transition_table\n•they use different but similar row types: a_internal, g_internal, _internal and internal.\n•These types take as first template argument the triggering event and then the action and guard\nmethod. Note that the only real difference to classical rows is the extra argument before the function\npointer. This is the type on which the function will be called.\n•This also allows you, if you wish, to use actions and guards from another state of the state machine\nor in the state machine itself.\n•submachines can have an internal transition table and a classical transition table.\nThe following example [examples/TestInternal.cpp ] makes use of an a_internal. It also uses functor-\nbased internal transitions which will be explained in the functor front-end , please ignore them for the\nmoment. Also note that the state-defined internal transitions, having the highest priority (as mandated\nby the UML standard), are tried before those defined inside the state machine transition table.\nWhich method should you use? It depends on what you need:\n•the first version (using irow) is simpler and likely to compile faster. It also lets you choose the\npriority of your internal transition.\n•the second version is more logical from a UML perspective and lets you make states more useful\nand reusable. It also allows you to call actions and guards on any state of the state machine.\nNote: There is an added possibility coming from this feature. The\ninternal_transition_table transitions being added directly inside the main state machine's\ntransition table, it is possible, if it is more to your state, to distribute your state machine definition a bit\nlike Boost.Statechart, leaving to the state machine itself the only task of declaring the states it wants\nto use using the explicit_creation type definition. While this is not the author's favorite way,\nit is still possible. A simplified example using only two states will show this possibility:Tutorial\n28•state machine definition [examples/distributed_table/DistributedTable.cpp ]\n•Empty header [examples/distributed_table/Empty.hpp ] and cpp [examples/distributed_table/\nEmpty.cpp ]\n•Open header [examples/distributed_table/Open.hpp ] and cpp [examples/distributed_table/\nOpen.cpp ]\n•events definition [examples/distributed_table/Events.hpp ]\nThere is an added bonus offered for submachines, which can have both the standard transition_table\nand an internal_transition_table (which has a higher priority). This makes it easier if you decide to\nmake a full submachine from a state. It is also slightly faster than the standard alternative, adding\northogonal regions, because event dispatching will, if accepted by the internal table, not continue to\nthe subregions. This gives you a O(1) dispatch instead of O(number of regions). 
While the example\nis with eUML, the same is also possible with any front-end.\nmore row types\nIt is also possible to write transitions using actions and guards not just from the state machine but also\nfrom its contained states. In this case, one must specify not just a method pointer but also the object\non which to call it. This transition row is called, not very originally, row2. They come, like normal\ntransitions in four flavors: a_row2, g_row2, _row2 and row2 . For example, a transition\ncalling an action from the state Empty could be:\na_row2<Stopped,open_close,Open,Empty\n /*action source*/,&Empty::open_drawer/*action*/>\nThe same capabilities are also available for internal transitions so that we have:\na_irow2, g_irow2, _irow2 and row2 . For transitions defined as part of the\ninternal_transition_table , you can use the a_internal, g_internal, _internal, internal\nrow types from the previous sections.\nThese row types allow us to distribute the state machine code among states, making them reusable\nand more useful. Using transition tables inside states also contributes to this possibility. An example\n[examples/SimpleTutorial2.cpp ] of these new rows is also provided.\nExplicit entry / entry and exit pseudo-state / fork\nMSM (almost) fully supports these features, described in the small UML tutorial . Almost because\nthere are currently two limitations:\n•it is only possible to explicitly enter a sub- state of the target but not a sub-sub state.\n•it is not possible to explicitly exit. Exit points must be used.\nLet us see a concrete example:Tutorial\n29\nWe find in this diagram:\n•A “normal” activation of SubFsm2, triggered by event1. In each region, the initial state is activated,\ni.e. SubState1 and SubState1b.\n•An explicit entry into SubFsm2::SubState2 for region “1” with event2 as trigger, meaning that in\nregion “2” the initial state, SubState1b, activated.\n•A fork into regions “1” and “2” to the explicit entries SubState2 and SubState2b, triggered by event3.\nBoth states become active so no region is default activated (if we had a third one, it would be).\n•A connection of two transitions through an entry pseudo state, SubFsm2::PseudoEntry1, triggered\nby event4 and triggering also the second transition on the same event (both transitions must be\ntriggered by the same event). Region “2” is default-activated and SubState1b becomes active.\n•An exit from SubFsm2 using an exit pseudo-state, PseudoExit1, triggered by event5 and connecting\ntwo transitions using the same event. Again, the event is forwarded to the second transition and\nboth regions are exited, as SubFsm2 becomes inactive. 
Note that if no transition is defined from\nPseudoExit1, an error (as defined in the UML standard) will be detected and no_transition called.\nThe example is also fully implemented [examples/DirectEntryTutorial.cpp ].\nThis sounds complicated but the syntax is simple.\nExplicit entry\nFirst, to define that a state is an explicit entry, you have to make it a state and mark it as explicit,\ngiving as template parameters the region id (the region id starts with 0 and corresponds to the first\ninitial state of the initial_state type sequence).\nstruct SubFsm2_ : public msm::front::state_machine_def<SubFsm2_> \n{\n struct SubState2 : public msm::front::state<> , Tutorial\n30 public msm::front::explicit_entry<0> \n {...};\n...\n};\nAnd define the submachine as:\ntypedef msm::back::state_machine<SubFsm2_> SubFsm2;\nYou can then use it as target in a transition with State1 as source:\n_row < State1, Event2, SubFsm2::direct< SubFsm2_::SubState2> > //SubFsm2_::SubState2: complete name of SubState2 (defined within SubFsm2_)\nThe syntax deserves some explanation. SubFsm2_ is a front end. SubState2 is a nested state, therefore\nthe SubFsm2_::SubState2 syntax. The containing machine (containing State1 and SubFsm2) refers to\nthe backend instance (SubFsm2). SubFsm2::direct states that an explicit entry is desired.\nThanks to the mpl_graph library you can also omit to provide the region index and let MSM find out\nfor you. The are however two points to note:\n•MSM can only find out the region index if the explicit entry state is somehow connected to an initial\nstate through a transition, no matter the direction.\n•There is a compile-time cost for this feature.\nNote (also valid for forks) : in order to make compile time more bearable for the more standard cases,\nand unlike initial states, explicit entry states which are also not found in the transition table of the\nentered submachine (a rare case) do NOT get automatically created. To explicitly create such states,\nyou need to add in the state machine containing the explicit states a simple typedef giving a sequence\nof states to be explicitly created like:\ntypedef mpl::vector<SubState2,SubState2b> explicit_creation;\nNote (also valid for forks) : At the moment, it is not possible to use a submachine as the target of an\nexplicit entry. Please use entry pseudo states for an almost identical effect.\nFork\nNeed a fork instead of an explicit entry? 
As a fork is an explicit entry into states of different regions,\nwe do not change the state definition compared to the explicit entry and specify as target a list of\nexplicit entry states:\n_row < State1, Event3, \n mpl::vector<SubFsm2::direct<SubFsm2_::SubState2>, \n SubFsm2::direct <SubFsm2_::SubState2b>\n >\nWith SubState2 defined as before and SubState2b defined as being in the second region (Caution:\nMSM does not check that the region is correct):\nstruct SubState2b : public msm::front::state<> , \n public msm::front::explicit_entry<1>\nEntry pseudo states\nTo define an entry pseudo state, you need derive from the corresponding class and give the region id:\nstruct PseudoEntry1 : public msm::front::entry_pseudo_state<0>\nAnd add the corresponding transition in the top-level state machine's transition table:Tutorial\n31_row < State1, Event4, SubFsm2::entry_pt<SubFsm2_::PseudoEntry1> >\nAnd another in the SubFsm2_ submachine definition (remember that UML defines an entry point as\na connection between two transitions), for example this time with an action method:\n_row < PseudoEntry1, Event4, SubState3,&SubFsm2_::entry_action >\nExit pseudo states\nAnd finally, exit pseudo states are to be used almost the same way, but defined differently: it takes as\ntemplate argument the event to be forwarded (no region id is necessary):\nstruct PseudoExit1 : public exit_pseudo_state<event6>\nAnd you need, like for entry pseudo states, two transitions, one in the submachine:\n_row < SubState3, Event5, PseudoExit1 >\nAnd one in the containing state machine:\n_row < SubFsm2::exit_pt<SubFsm2_::PseudoExit1>, Event6,State2 >\nImportant note 1: UML defines transiting to an entry pseudo state and having either no second\ntransition or one with a guard as an error but defines no error handling. MSM will tolerate this behavior;\nthe entry pseudo state will simply be the newly active state.\nImportant note 2 : UML defines transiting to an exit pseudo state and having no second transition as\nan error, and also defines no error handling. Therefore, it was decided to implement exit pseudo state\nas terminate states and the containing composite not properly exited will stay terminated as it was\ntechnically “exited”.\nImportant note 3: UML states that for the exit point, the same event must be used in both transitions.\nMSM relaxes this rule and only wants the event on the inside transition to be convertible to the one of\nthe outside transition. In our case, event6 is convertible from event5. Notice that the forwarded event\nmust be named in the exit point definition. For example, we could define event6 as simply as:\nstruct event \n{ \n event(){} \n template <class Event> \n event(Event const&){} \n}; //convertible from any event\nNote: There is a current limitation if you need not only convert but also get some data from the original\nevent. 
Consider:
struct event1 
{ 
 event1(int val_):val(val_) {}
 int val;
}; // forwarded from exit point
struct event2 
{ 
 template <class Event> 
 event2(Event const& e):val(e.val){} // compiler will complain about another event not having any val
 int val;
}; // what the higher-level fsm wants to get
The solution is to provide two constructors:
struct event2 
{ 
 template <class Event> 
 event2(Event const& ):val(0){} // will not be used
 event2(event1 const& e):val(e.val){} // the conversion constructor
 int val;
}; // what the higher-level fsm wants to get
Flags
This tutorial [examples/Flags.cpp ] is devoted to a concept not defined in UML: flags. It has been added into MSM after proving itself useful on many occasions. Please do not be frightened as we are not talking about ugly shortcuts made of an improbable collusion of Booleans.
If you look into the Boost.Statechart documentation you'll find this code:
if ( ( state_downcast< const NumLockOff * >() != 0 ) &&
 ( state_downcast< const CapsLockOff * >() != 0 ) &&
 ( state_downcast< const ScrollLockOff * >() != 0 ) )
While correct and found in many UML books, this can be error-prone and a potential time-bomb when your state machine grows and you add new states or orthogonal regions.
And most of all, it hides the real question, which would be “does my state machine's current state define a special property”? In this special case, “are my keys in a lock state”? So let's apply the Fundamental Theorem of Software Engineering and move one level of abstraction higher.
In our player example, let's say we need to know if the player has a loaded CD. We could do the same:
if ( ( state_downcast< const Stopped * >() != 0 ) &&
 ( state_downcast< const Open * >() != 0 ) &&
 ( state_downcast< const Paused * >() != 0 ) &&
 ( state_downcast< const Playing * >() != 0 )) 
Or flag these 4 states as CDLoaded-able. You add a flag_list type into each flagged state:
typedef mpl::vector1<CDLoaded> flag_list;
You can even define a list of flags, for example in Playing:
typedef mpl::vector2<PlayingPaused,CDLoaded> flag_list;
This means that Playing supports both properties. To check if your player has a loaded CD, check if your flag is active in the current state:
player p; if (p.is_flag_active<CDLoaded>()) ... 
And what if you have orthogonal regions? How to decide if a state machine is in a flagged state? By default, you keep the same code and the current states will be OR'ed, meaning if one of the active states has the flag, then is_flag_active returns true. Of course, in some cases, you might want all of the active states to be flagged for the flag to be considered active. You can also AND the active states:
if (p.is_flag_active<CDLoaded,player::Flag_AND>()) ...
Note: due to arcane C++ rules, when called inside an action, the correct call is:
if (p.template is_flag_active<CDLoaded>()) ...
The following diagram displays the flag situation in the tutorial.
Event Hierarchy
There are cases where one needs transitions based on categories of events. An example is text parsing. Let's say you want to parse a string and use a state machine to manage your parsing state. You want to parse 4 digits and decide to use a state for every matched digit. Your state machine could look like:
But how to detect the digit event?
We would like to avoid defining 10 transitions on char_0, char_1...\nbetween two states as it would force us to write 4 x 10 transitions and the compile-time would suffer.Tutorial\n34To solve this problem, MSM supports the triggering of a transition on a subclass event. For example,\nif we define digits as:\nstruct digit {};\nstruct char_0 : public digit {}; \nAnd to the same for other digits, we can now fire char_0, char_1 events and this will cause a transition\nwith \"digit\" as trigger to be taken.\nAn example [examples/ParsingDigits.cpp ] with performance measurement, taken from the\ndocumentation of Boost.Xpressive illustrates this example. You might notice that the performance is\nactually very good (in this case even better).\nCustomizing a state machine / Getting more speed\nMSM is offering many UML features at a high-speed, but sometimes, you just need more speed and\nare ready to give up some features in exchange. A process_event is handling several tasks:\n•checking for terminate/interrupt states\n•handling the message queue (for entry/exit/transition actions generating themselves events)\n•handling deferred events\n•catching exceptions (or not)\n•handling the state switching and action calls\nOf these tasks, only the last one is absolutely necessary to a state machine (its core job), the other ones\nare nice-to-haves which cost CPU time. In many cases, it is not so important, but in embedded systems,\nthis can lead to ad-hoc state machine implementations. MSM detects by itself if a concrete state\nmachine makes use of terminate/interrupt states and deferred events and deactivates them if not used.\nFor the other two, if you do not need them, you need to help by indicating it in your implementation.\nThis is done with two simple typedefs:\n•no_exception_thrown indicates that behaviors will never throw and MSM does not need to\ncatch anything\n•no_message_queue indicates that no action will itself generate a new event and MSM can save\nus the message queue.\nThe third configuration possibility, explained here, is to manually activate deferred events,\nusing activate_deferred_events . For example, the following state machine sets all three\nconfiguration types:\nstruct player_ : public msm::front::state_machine_def<player_>\n{\n // no need for exception handling or message queue\n typedef int no_exception_thrown;\n typedef int no_message_queue;\n // also manually enable deferred events\n typedef int activate_deferred_events\n ...// rest of implementation\n };\nImportant note : As exit pseudo states are using the message queue to forward events out of a\nsubmachine, the no_message_queue option cannot be used with state machines containing an exit\npseudo state.Tutorial\n35Choosing the initial event\nA state machine is started using the start method. This causes the initial state's entry behavior\nto be executed. Like every entry behavior, it becomes as parameter the event causing the state to\nbe entered. But when the machine starts, there was no event triggered. In this case, MSM sends\nmsm::back::state_machine<...>::InitEvent , which might not be the default you'd\nwant. For this special case, MSM provides a configuration mechanism in the form of a typedef. If the\nstate machine's front-end definition provides an initial_event typedef set to another event, this event\nwill be used. 
For example:\nstruct my_initial_event{};\nstruct player_ : public msm::front::state_machine_def<player_>{\n...\ntypedef my_initial_event initial_event; \n};\nContaining state machine (deprecated)\nThis feature is still supported in MSM for backward compatibility but made obsolete by the fact that\nevery guard/action/entry action/exit action get the state machine passed as argument and might be\nremoved at a later time.\nAll of the states defined in the state machine are created upon state machine construction. This has\nthe huge advantage of a reduced syntactic noise. The cost is a small loss of control for the user on the\nstate creation and access. But sometimes you needed a way for a state to get access to its containing\nstate machine. Basically, a state needs to change its declaration to:\nstruct Stopped : public msm::front::state<sm_ptr>\nAnd to provide a set_sm_ptr function: void set_sm_ptr(player* pl)\nto get a pointer to the containing state machine. The same applies to terminate_state / interrupt_state\nand entry_pseudo_state / exit_pseudo_state.\nFunctor front-end\nThe functor front-end is the preferred front-end at the moment. It is more powerful than the standard\nfront-end and has a more readable transition table. It also makes it easier to reuse parts of state\nmachines. Like eUML, it also comes with a good deal of predefined actions. Actually, eUML generates\na functor front-end through Boost.Typeof and Boost.Proto so both offer the same functionality.\nThe rows which MSM offered in the previous front-end come in different flavors. We saw the a_row,\ng_row, _row, row, not counting internal rows. This is already much to know, so why define new rows?\nThese types have some disadvantages:\n•They are more typing and information than we would wish. 
This means syntactic noise and more\nto learn.\n•Function pointers are weird in C++.\n•The action/guard signature is limited and does not allow for more variations of parameters (source\nstate, target state, current state machine, etc.)\n•It is not easy to reuse action code from a state machine to another.\nTransition table\nWe can change the definition of the simple tutorial's transition table to:Tutorial\n36 \nstruct transition_table : mpl::vector<\n// Start Event Target Action Guard \n// +---------+------------+-----------+---------------------------+----------------------------+ \nRow < Stopped , play , Playing , start_playback , none >,\nRow < Stopped , open_close , Open , open_drawer , none >,\nRow < Stopped , stop , Stopped , none , none >,\n// +---------+------------+-----------+---------------------------+----------------------------+ \nRow < Open , open_close , Empty , close_drawer , none >,\n// +---------+------------+-----------+---------------------------+----------------------------+ \nRow < Empty , open_close , Open , open_drawer , none >,\nRow < Empty , cd_detected, Stopped , store_cd_info , good_disk_format >,\ng_row< Empty , cd_detected, Playing , &player_::store_cd_info , &player_::auto_start >,\n// +---------+------------+-----------+---------------------------+----------------------------+ \nRow < Playing , stop , Stopped , stop_playback , none >,\nRow < Playing , pause , Paused , pause_playback , none >,\nRow < Playing , open_close , Open , stop_and_open , none >,\n// +---------+------------+-----------+---------------------------+----------------------------+ \nRow < Paused , end_pause , Playing , resume_playback , none >,\nRow < Paused , stop , Stopped , stop_playback , none >,\nRow < Paused , open_close , Open , stop_and_open , none >\n// +---------+------------+-----------+---------------------------+----------------------------+ \n> {};\n \nTransitions are now of type \"Row\" with exactly 5 template arguments: source state, event, target\nstate, action and guard. Wherever there is nothing (for example actions and guards), write \"none\".\nActions and guards are no more methods but functors getting as arguments the detected event, the\nstate machine, source and target state:\nstruct store_cd_info \n{ \n template <class Fsm,class Evt,class SourceState,class TargetState> \n void operator()(Evt const&, Fsm& fsm, SourceState&,TargetState& ) \n {\n cout << \"player::store_cd_info\" << endl;\n fsm.process_event(play());\n } \n}; \nThe advantage of functors compared to functions are that functors are generic and reusable. They\nalso allow passing more parameters than just events. The guard functors are the same but have an\noperator() returning a bool.\nIt is also possible to mix rows from different front-ends. To show this, a g_row has been left in the\ntransition table. Note: in case the action functor is used in the transition table of a state machine\ncontained inside a top-level state machine, the “fsm” parameter refers to the lowest-level state machine\n(referencing this action), not the top-level one.\nTo illustrate the reusable point, MSM comes with a whole set of predefined functors. Please refer to\neUML for the full list. For example, we are now going to replace the first action by an action sequence\nand the guard by a more complex functor.\nWe decide we now want to execute two actions in the first transition (Stopped -> Playing). 
We only\nneed to change the action start_playback to\nActionSequence_< mpl::vector<some_action, start_playback> >\nand now will execute some_action and start_playback every time the transition is taken.\nActionSequence_ is a functor calling each action of the mpl::vector in sequence.Tutorial\n37We also want to replace good_disk_format by a condition of the type: “good_disk_format &&\n(some_condition || some_other_condition)”. We can achieve this using And_ and Or_ functors:\nAnd_<good_disk_format,Or_< some_condition , some_other_condition> >\nIt even starts looking like functional programming. MSM ships with functors for operators, state\nmachine usage, STL algorithms or container methods.\nDefining states with entry/exit actions\nYou probably noticed that we just showed a different transition table and that we even mixed\nrows from different front-ends. This means that you can do this and leave the definitions for states\nunchanged. Most examples are doing this as it is the simplest solution. You still enjoy the simplicity\nof the first front-end with the extended power of the new transition types. This tutorial [examples/\nSimpleWithFunctors.cpp ], adapted from the earlier example does just this.\nOf course, it is also possible to define states where entry and exit actions are also provided as functors\nas these are generated by eUML and both front-ends are equivalent. For example, we can define a\nstate as:\nstruct Empty_Entry \n{ \n template <class Event,class Fsm,class State> \n void operator()(Event const&,Fsm&,State&) \n {\n ... \n } \n}; // same for Empty_Exit\nstruct Empty_tag {};\nstruct Empty : public msm::front::euml::func_state<Empty_tag,Empty_Entry,Empty_Exit>{};\nThis also means that you can, like in the transition table, write entry / exit actions made of\nmore complicated action combinations. The previous example can therefore be rewritten [examples/\nSimpleWithFunctors2.cpp ].\nUsually, however, one will probably use the standard state definition as it provides the same\ncapabilities as this front-end state definition, unless one needs some of the shipped predefined functors\nor is a fan of functional programming.\nWhat do you actually do inside actions / guards (Part\n2)?\nUsing the basic front-end, we saw how to pass data to actions through the event, that data common\nto all states could be stored in the state machine, state relevant data could be stored in the state and\naccess as template parameter in the entry / exit actions. What was however missing was the capability\nto access relevant state data in the transition action. This is possible with this front-end. A transition's\nsource and target state are also given as arguments. If the current calculation's state was to be found\nin the transition's source state (whatever it is), we could access it:\nstruct send_rocket \n{ \n template <class Fsm,class Evt,class SourceState,class TargetState> \n void operator()(Evt const&, Fsm& fsm, SourceState& src,TargetState& ) \n {\n fire_rocket(evt.direction, src.current_calculation);\n } \n}; Tutorial\n38It was a little awkward to generate new events inside actions with the basic front-end. 
With the functor\nfront-end it is much cleaner:\nstruct send_rocket \n{ \n template <class Fsm,class Evt,class SourceState,class TargetState> \n void operator()(Evt const& evt, Fsm& fsm, SourceState& src,TargetState&) \n {\n fire_rocket(evt.direction, src.current_calculation);\n fsm.process_event(rocket_launched());\n } \n}; \nDefining a simple state machine\nLike states, state machines can be defined using the previous front-end, as the previous example\nshowed, or with the functor front-end, which allows you to define a state machine entry and exit\nfunctions as functors, as in this example [examples/SimpleWithFunctors2.cpp ].\nAnonymous transitions\nAnonymous (completion) transitions are transitions without a named event. We saw how this front-\nend uses none when no action or guard is required. We can also use none instead of an event to\nmark an anonymous transition. For example, the following transition makes an immediate transition\nfrom State1 to State2:\nRow < State1 , none , State2 >\nThe following transition does the same but calling an action in the process:\nRow < State1 , none , State2 , State1ToState2, none >\nThe following diagram shows an example and its implementation [examples/\nAnonymousTutorialWithFunctors.cpp ]:\nTutorial\n39Internal transitions\nThe following example [examples/SimpleTutorialInternalFunctors.cpp ] uses internal transitions with\nthe functor front-end. As for the simple standard front-end, both methods of defining internal\ntransitions are supported:\n•providing a Row in the state machine's transition table with none as target state defines an internal\ntransition.\n•providing an internal_transition_table made of Internal rows inside a state or\nsubmachine defines UML-conform internal transitions with higher priority.\n•transitions defined inside internal_transition_table require no source or target state as\nthe source state is known ( Internal really are Row without a source or target state) .\nLike for the standard front-end internal transitions , internal transition tables are added into the main\nstate machine's table, thus allowing you to distribute the transition table definition and reuse states.\nThere is an added bonus offered for submachines, which can have both the standard transition_table\nand an internal_transition_table (which has higher priority). This makes it easier if you decide to make\na full submachine from a state later. It is also slightly faster than the standard alternative, adding\northogonal regions, because event dispatching will, if accepted by the internal table, not continue to\nthe subregions. This gives you a O(1) dispatch instead of O(number of regions). While the example\nis with eUML, the same is also possible with this front-end.\nKleene (any) event\nNormally, MSM requires an event to fire a transition. But there are cases, where any event, no matter\nwhich one would do:\n•If you want to reduce the number of transitions: any event would do, possibly will guards decide\nwhat happens\n•Pseudo entry states do not necessarily want to know the event which caused their activation, or they\nmight want to know only a property of it.\nMSM supports a boost::any as an acceptable event. This event will match any event, meaning that if a\ntransition with boost::any as event originates from the current state, this transition would fire (provided\nno guards or transition with a higher priority fires first). 
This event is named Kleene, as a reference to the Kleene star used in regular expressions.
For example, this transition on a state machine instance named fsm:
Row < State1, boost::any, State2>
will fire if State1 is active and an event is processed:
fsm.process_event(whatever_event());
At this point, you can use this any event in transition actions to get back to the original event by calling
for example boost::any::type() .
It is also possible to support your own Kleene events by specializing boost::msm::is_kleene_event for
a given event, for example:
namespace boost { namespace msm{
    template<> 
    struct is_kleene_event< my_event >
    { 
        typedef boost::mpl::true_ type;
    };
}}
The only requirement is that this event must have a copy constructor from the event originally
processed on the state machine.
eUML
Important note : eUML requires a compiler supporting Boost.Typeof. Full eUML has experimental
status (but not if only the transition table is written using eUML) because some compilers will start
crashing when a state machine becomes too big (usually when you write huge actions).
The previous front-ends are simple to write but still impose a certain amount of noise, mostly MPL
types, so it would be nice to write code looking like C++ (with a C++ action language) directly inside
the transition table, like UML designers like to do on their state machine diagrams. If it were functional
programming, it would be even better. This is what eUML is for.
eUML is a Boost.Proto and Boost.Typeof-based compile-time domain specific embedded language.
It provides grammars which allow the definition of actions/guards directly inside the transition table
or of entry/exit actions in the state definition. There are grammars for actions, guards, flags, attributes,
deferred events and initial states.
It also relies on Boost.Typeof as a wrapper around the new decltype C++0x feature to provide a
compile-time evaluation of all the grammars. Unfortunately, not all the underlying Boost libraries are
Typeof-enabled, so for the moment, you will need a compiler where Typeof is supported (like
VC9-10, g++ >= 4.3).
Examples will be provided in the next paragraphs.
You need to include eUML basic features:\n#include <msm/front/euml/euml.hpp>\nTo add STL support (at possible cost of longer compilation times), include:\n#include <msm/front/euml/stl.hpp>\neUML is defined in the namespace msm::front::euml .\nTransition table\nA transition can be defined using eUML as:\nsource + event [guard] / action == target\nor as\ntarget == source + event [guard] / action\nThe first version looks like a drawn transition in a diagram, the second one seems natural to a C++\ndeveloper.\nThe simple transition table written with the functor front-end can now be written as:\nBOOST_MSM_EUML_TRANSITION_TABLE(( \nStopped + play [some_guard] / (some_action , start_playback) == Playing ,\nStopped + open_close/ open_drawer == Open ,\nStopped + stop == Stopped ,\nOpen + open_close / close_drawer == Empty ,Tutorial\n41Empty + open_close / open_drawer == Open ,\nEmpty + cd_detected [good_disk_format] / store_cd_info == Stopped\n),transition_table) \nOr, using the alternative notation, it can be:\nBOOST_MSM_EUML_TRANSITION_TABLE(( \nPlaying == Stopped + play [some_guard] / (some_action , start_playback) ,\nOpen == Stopped + open_close/ open_drawer ,\nStopped == Stopped + stop ,\nEmpty == Open + open_close / close_drawer ,\nOpen == Empty + open_close / open_drawer ,\nStopped == Empty + cd_detected [good_disk_format] / store_cd_info\n),transition_table) \nThe transition table now looks like a list of (readable) rules with little noise.\nUML defines guards between “[ ]” and actions after a “/”, so the chosen syntax is already more readable\nfor UML designers. UML also allows designers to define several actions sequentially (our previous\nActionSequence_) separated by a comma. The first transition does just this: two actions separated by\na comma and enclosed inside parenthesis to respect C++ operator precedence.\nIf this seems to you like it will cost you run-time performance, don't worry, eUML is based on typeof\n(or decltype) which only evaluates the parameters to BOOST_MSM_EUML_TRANSITION_TABLE\nand no run-time cost occurs. Actually, eUML is only a metaprogramming layer on top of \"standard\"\nMSM metaprogramming and this first layer generates the previously-introduced functor front-end .\nUML also allows designers to define more complicated guards, like [good_disk_format &&\n(some_condition || some_other_condition)]. This was possible with our previously defined functors,\nbut using a complicated template syntax. This syntax is now possible exactly as written, which means\nwithout any syntactic noise at all.\nA simple example: rewriting only our transition table\nAs an introduction to eUML, we will rewrite our tutorial's transition table using eUML. This will\nrequire two or three changes, depending on the compiler:\n•events must inherit from msm::front::euml::euml_event< event_name >\n•states must inherit from msm::front::euml::euml_state< state_name >\n•with VC, states must be declared before the front-end\nWe now can write the transition table like just shown, using\nBOOST_MSM_EUML_DECLARE_TRANSITION_TABLE instead of\nBOOST_MSM_EUML_TRANSITION_TABLE. The implementation [examples/\nSimpleTutorialWithEumlTable.cpp ] is pretty straightforward. 
The only required addition is the need to declare a variable for each state or add parentheses (a
default-constructor call) in the transition table.
The composite [examples/CompositeTutorialWithEumlTable.cpp ] implementation is also natural:
// front-end like always
struct sub_front_end : public boost::msm::front::state_machine_def<sub_front_end>
{
...
};
// back-end like always
typedef boost::msm::back::state_machine<sub_front_end> sub_back_end;
sub_back_end const sub; // sub can be used in a transition table.
Unfortunately, there is a bug with VC, which appears from time to time and causes a stack overflow.
If you get a warning that the program is recursive on all paths, revert to either standard eUML or
another front-end, as Microsoft doesn't seem to intend to fix it.
We now have a new, more readable transition table with few changes to our example. eUML can do
much more so please follow the guide.
Defining events, actions and states with entry/exit actions
Events
Events must be proto-enabled. To achieve this, they must inherit from a proto terminal
(euml_event<event-name>). eUML also provides a macro to make this easier:
BOOST_MSM_EUML_EVENT(play)
This declares an event type and an instance of this type called play, which is now ready to use in
state or transition behaviors.
There is a second macro, BOOST_MSM_EUML_EVENT_WITH_ATTRIBUTES, which takes as
second parameter the attributes an event will contain, using the attribute syntax .
Note: as we now have events defined as instances instead of just types, can we still process an
event by creating one on the fly, like: fsm.process_event(play()); or do we have to write:
fsm.process_event(play);
The answer is that you can do both. The second one is easier to write; the first also works because,
unlike with the other front-ends, the event instance defines an operator(), which creates an event on
the fly.
Actions
Actions (returning void) and guards (returning a bool) are defined like the previous functors, with the
difference that they also must be proto-enabled. This can be done by inheriting from euml_action<
functor-name >. eUML also provides a macro:
BOOST_MSM_EUML_ACTION(some_condition)
{
    template <class Fsm,class Evt,class SourceState,class TargetState>
    bool operator()(Evt const& ,Fsm& ,SourceState&,TargetState& ) 
    { return true; }
}; 
Like for events, this macro declares a functor type and an instance for use in transition or state
behaviors.
It is possible to use the same action grammar from the transition table to define state entry and exit
behaviors. So (action1,action2) is a valid entry or exit behavior executing both actions in turn.
The state functors have a slightly different signature as there is no source and target state but only a
current state (entry/exit actions are transition-independent), for example:
BOOST_MSM_EUML_ACTION(Empty_Entry)
{
    template <class Evt,class Fsm,class State>
    void operator()(Evt const& ,Fsm& ,State& ) { ... } 
}; 
It is also possible to reuse the functors from the functor front-end. The syntax is however slightly less
comfortable as we need to pretend to create one on the fly for typeof. For example:
struct start_playback 
{
    template <class Fsm,class Evt,class SourceState,class TargetState>
    void operator()(Evt const& ,Fsm&,SourceState& ,TargetState& )
    {
        ... 
    }
};
BOOST_MSM_EUML_TRANSITION_TABLE((
Playing == Stopped + play / start_playback() ,
...
),transition_table)
States
There is also a macro for states.
This macro has 2 arguments, first the expression defining the state,\nthen the state (instance) name:\nBOOST_MSM_EUML_STATE((),Paused)\nThis defines a simple state without entry or exit action. You can provide in the expression parameter\nthe state behaviors (entry and exit) using the action grammar, like in the transition table:\nBOOST_MSM_EUML_STATE(((Empty_Entry,Dummy_Entry)/*2 entryactions*/,\n Empty_Exit/*1 exit action*/ ),\n Empty)\nThis means that Empty is defined as a state with an entry action made of two sub-actions, Empty_Entry\nand Dummy_Entry (enclosed inside parenthesis), and an exit action, Empty_Exit.\nThere are several possibilitites for the expression syntax:\n•(): state without entry or exit action.\n•(Expr1): state with entry but no exit action.\n•(Expr1,Expr2): state with entry and exit action.\n•(Expr1,Expr2,Attributes): state with entry and exit action, defining some attributes (read further on).\n•(Expr1,Expr2,Attributes,Configure): state with entry and exit action, defining some attributes (read\nfurther on) and flags (standard MSM flags) or deferred events (standard MSM deferred events).\n•(Expr1,Expr2,Attributes,Configure,Base): state with entry and exit action, defining some attributes\n(read further on), flags and deferred events (plain msm deferred events) and a non-default base state\n(as defined in standard MSM).\nno_action is also defined, which does, well, nothing except being a placeholder (needed for example\nas entry action if we have no entry but an exit). Expr1 and Expr2 are a sequence of actions, obeying\nthe same action grammar as in the transition table (following the “/” symbol).\nThe BOOST_MSM_EUML_STATE macro will allow you to define most common states, but\nsometimes you will need more, for example provide in your states some special behavior. In this case,\nyou will have to do the macro's job by hand, which is not very complicated. The state will need to\ninherit from msm::front::state<> , like any state, and from euml_state<state-name> to\nbe proto-enabled. You will then need to declare an instance for use in the transition table. For example:Tutorial\n44struct Empty_impl : public msm::front::state<> , public euml_state<Empty_impl> \n{\n void activate_empty() {std::cout << \"switching to Empty \" << std::endl;}\n template <class Event,class Fsm>\n void on_entry(Event const& evt,Fsm&fsm){...}\n template <class Event,class Fsm>\n void on_exit(Event const& evt,Fsm&fsm){...}\n};\n//instance for use in the transition table\nEmpty_impl const Empty;\nNotice also that we defined a method named activate_empty. We would like to call it inside a behavior.\nThis can be done using the BOOST_MSM_EUML_METHOD macro.\nBOOST_MSM_EUML_METHOD(ActivateEmpty_,activate_empty,activate_empty_,void,void)\nThe first parameter is the name of the underlying functor, which you could use with the functor front-\nend, the second is the state method name, the third is the eUML-generated function, the fourth and\nfifth the return value when used inside a transition or a state behavior. You can now use this inside\na transition:\nEmpty == Open + open_close / (close_drawer,activate_empty_(target_))\nWrapping up a simple state machine and first\ncomplete examples\nYou can reuse the state machine definition method from the standard front-end and simply replace the\ntransition table by this new one. You can also use eUML to define a state machine \"on the fly\" (if, for\nexample, you need to provide an on_entry/on_exit for this state machine as a functor). 
For this, there\nis also a macro, BOOST_MSM_EUML_DECLARE_STATE_MACHINE, which has 2 arguments,\nan expression describing the state machine and the state machine name. The expression has up to 8\narguments:\n•(Stt, Init): simplest state machine where only the transition table and initial state(s) are defined.\n•(Stt, Init, Expr1): state machine where the transition table, initial state and entry action are defined.\n•(Stt, Init, Expr1, Expr2): state machine where the transition table, initial state, entry and exit actions\nare defined.\n•(Stt, Init, Expr1, Expr2, Attributes): state machine where the transition table, initial state, entry and\nexit actions are defined. Furthermore, some attributes are added (read further on).\n•(Stt, Init, Expr1, Expr2, Attributes, Configure): state machine where the transition table, initial state,\nentry and exit actions are defined. Furthermore, some attributes (read further on), flags, deferred\nevents and configuration capabilities (no message queue / no exception catching) are added.\n•(Stt, Init, Expr1, Expr2, Attributes, Flags, Deferred , Base): state machine where the transition table,\ninitial state, entry and exit actions are defined. Furthermore, attributes (read further on), flags ,\ndeferred events and configuration capabilities (no message queue / no exception catching) are added\nand a non-default base state (see the back-end description ) is defined.\nFor example, a minimum state machine could be defined as:\nBOOST_MSM_EUML_TRANSITION_TABLE(( \n),transition_table) \nBOOST_MSM_EUML_DECLARE_STATE_MACHINE((transition_table,init_ << Empty ),\n player_)Tutorial\n45Please have a look at the player tutorial written using eUML's first syntax\n[examples/SimpleTutorialEuml2.cpp ] and second syntax [examples/SimpleTutorialEuml.cpp ]. The\nBOOST_MSM_EUML_DECLARE_ATTRIBUTE macro, to which we will get back shortly, declares\nattributes given to an eUML type (state or event) using the attribute syntax .\nDefining a submachine\nDefining a submachine (see tutorial [examples/CompositeTutorialEuml.cpp ]) with other front-ends\nsimply means using a state which is a state machine in the transition table of another state machine.\nThis is the same with eUML. One only needs define a second state machine and reference it in the\ntransition table of the containing state machine.\nUnlike the state or event definition macros, BOOST_MSM_EUML_DECLARE_STATE_MACHINE\ndefines a type, not an instance because a type is what the back-end requires. This means that you\nwill need to declare yourself an instance to reference your submachine into another state machine,\nfor example:\nBOOST_MSM_EUML_DECLARE_STATE_MACHINE(...,Playing_)\ntypedef msm::back::state_machine<Playing_> Playing_type;\nPlaying_type const Playing;\nWe can now use this instance inside the transition table of the containing state machine:\nPaused == Playing + pause / pause_playback\nAttributes / Function call\nWe now want to make our grammar more useful. Very often, one needs only very simple action\nmethods, for example ++Counter or Counter > 5 where Counter is usually defined as some attribute\nof the class containing the state machine. It seems like a waste to write a functor for such a simple\naction. 
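To see why, here is roughly what the functor front-end would require for these two one-liners; the
Counter attribute on the front-end is an assumption for illustration:
// all this boilerplate just for ++Counter: the action has to reach into the
// state machine to increment an attribute of the front-end
struct increment_counter
{
    template <class Evt,class Fsm,class SourceState,class TargetState>
    void operator()(Evt const&, Fsm& fsm, SourceState&, TargetState&)
    {
        ++fsm.Counter;             // Counter is assumed to be a front-end attribute
    }
};
// and the Counter > 5 guard would be yet another functor
struct counter_bigger_than_5
{
    template <class Evt,class Fsm,class SourceState,class TargetState>
    bool operator()(Evt const&, Fsm& fsm, SourceState&, TargetState&)
    {
        return fsm.Counter > 5;
    }
};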
Furthermore, states within MSM are also classes so they can have attributes, and we would\nalso like to provide them with attributes.\nIf you look back at our examples using the first [examples/SimpleTutorialEuml2.cpp ]\nand second [examples/SimpleTutorialEuml.cpp ] syntaxes, you will find a\nBOOST_MSM_EUML_DECLARE_ATTRIBUTE and a BOOST_MSM_EUML_ATTRIBUTES\nmacro. The first one declares possible attributes:\nBOOST_MSM_EUML_DECLARE_ATTRIBUTE(std::string,cd_name)\nBOOST_MSM_EUML_DECLARE_ATTRIBUTE(DiskTypeEnum,cd_type)\nThis declares two attributes: cd_name of type std::string and cd_type of type DiskTypeEnum. These\nattributes are not part of any event or state in particular, we just declared a name and a type. Now, we\ncan add attributes to our cd_detected event using the second one:\nBOOST_MSM_EUML_ATTRIBUTES((attributes_ << cd_name << cd_type ), \n cd_detected_attributes)\nThis declares an attribute list which is not linked to anything in particular yet. It can be attached to\na state or an event. For example, if we want the event cd_detected to have these defined attributes\nwe write:\nBOOST_MSM_EUML_EVENT_WITH_ATTRIBUTES(cd_detected,cd_detected_attributes)\nFor states, we use the BOOST_MSM_EUML_STATE macro, which has an expression form where\none can provide attributes. For example:\nBOOST_MSM_EUML_STATE((no_action /*entry*/,no_action/*exit*/,\n attributes_ << cd_detected_attributes),Tutorial\n46 some_state)\nOK, great, we now have a way to add attributes to a class, which we could have done more easily, so\nwhat is the point? The point is that we can now reference these attributes directly, at compile-time, in\nthe transition table. For example, in the example, you will find this transition:\nStopped==Empty+cd_detected[good_disk_format&&(event_(cd_type)==Int_<DISK_CD>())] \nRead event_(cd_type) as event_->cd_type with event_ a type generic for events, whatever the concrete\nevent is (in this particular case, it happens to be a cd_detected as the transition shows).\nThe main advantage of this feature is that you do not need to define a new functor and you do not need\nto look inside the functor to know what it does, you have all at hand.\nMSM provides more generic objects for state machine types:\n•event_ : used inside any action, the event triggering the transition\n•state_: used inside entry and exit actions, the entered / exited state\n•source_: used inside a transition action, the source state\n•target_: used inside a transition action, the target state\n•fsm_: used inside any action, the (lowest-level) state machine processing the transition\n•Int_<int value>: a functor representing an int\n•Char_<value>: a functor representing a char\n•Size_t_<value>: a functor representing a size_t\n•String_<mpl::string> (boost >= 1.40): a functor representing a string.\nThese helpers can be used in two different ways:\n•helper(attribute_name) returns the attribute with name attribute_name\n•helper returns the state / event type itself.\nThe second form is helpful if you want to provide your states with their own methods, which you\nalso want to use inside the transition table. In the above tutorial [examples/SimpleTutorialEuml.cpp ],\nwe provide Empty with an activate_empty method. We would like to create a eUML functor\nand call it from inside the transition table. This is done using the MSM_EUML_METHOD /\nMSM_EUML_FUNCTION macros. The first creates a functor to a method, the second to a free\nfunction. 
In the tutorial, we write:\nMSM_EUML_METHOD(ActivateEmpty_,activate_empty,activate_empty_,void,void)\nThe first parameter is the functor name, for use with the functor front-end. The second is the name\nof the method to call. The third is the function name for use with eUML, the fourth is the return type\nof the function if used in the context of a transition action, the fifth is the result type if used in the\ncontext of a state entry / exit action (usually fourth and fifth are the same). We now have a new eUML\nfunction calling a method of \"something\", and this \"something\" is one of the five previously shown\ngeneric helpers. We can now use this in a transition, for example:\nEmpty == Open + open_close / (close_drawer,activate_empty_(target_))\nThe action is now defined as a sequence of two actions: close_drawer and activate_empty, which\nis called on the target itself. The target being Empty (the state defined left), this really will call\nEmpty::activate_empty(). This method could also have an (or several) argument(s), for example the\nevent, we could then call activate_empty_(target_ , event_).Tutorial\n47More examples can be found in the terrible compiler stress test [examples/\nCompilerStressTestEuml.cpp ], the timer example [examples/SimpleTimer.cpp ] or in the iPodSearch\nwith eUML [examples/iPodSearchEuml.cpp ] (for String_ and more).\nOrthogonal regions, flags, event deferring\nDefining orthogonal regions really means providing more initial states. To add more initial states,\n“shift left” some, for example, if we had another initial state named AllOk :\nBOOST_MSM_EUML_DECLARE_STATE_MACHINE((transition_table,\n init_ << Empty << AllOk ),\n player_)\nYou remember from the BOOST_MSM_EUML_STATE and\nBOOST_MSM_EUML_DECLARE_STATE_MACHINE signatures that just after attributes, we\ncan define flags, like in the basic MSM front-end. To do this, we have another \"shift-left\" grammar,\nfor example:\nBOOST_MSM_EUML_STATE((no_action,no_action, attributes_ <<no_attributes_, \n /* flags */ configure_<< PlayingPaused << CDLoaded), \n Paused)\nWe now defined that Paused will get two flags, PlayingPaused and CDLoaded, defined, with another\nmacro:\nBOOST_MSM_EUML_FLAG(CDLoaded)\nThis corresponds to the following basic front-end definition of Paused:\nstruct Paused : public msm::front::state<>\n{ \n typedef mpl::vector2<PlayingPaused,CDLoaded> flag_list; \n};\nUnder the hood, what you get really is a mpl::vector2.\nNote: As we use the version of BOOST_MSM_EUML_STATE's expression with 4 arguments,\nwe need to tell eUML that we need no attributes. Similarly to a cout << endl , we need a\nattributes_ << no_attributes_ syntax.\nYou can use the flag with the is_flag_active method of a state machine. You can also use the provided\nhelper function is_flag_ (returning a bool) for state and transition behaviors. For example, in the iPod\nimplementation with eUML [examples/iPodEuml.cpp ], you find the following transition:\nForwardPressed == NoForward + EastPressed[!is_flag_(NoFastFwd)]\nThe function also has an optional second parameter which is the state machine on which the function\nis called. By default, fsm_ is used (the current state machine) but you could provide a functor returning\na reference to another state machine.\neUML also supports defining deferred events in the state (state machine) definition. To this aim, we\ncan reuse the flag grammar. 
For example:\nBOOST_MSM_EUML_STATE((Empty_Entry,Empty_Exit, attributes_ << no_attributes_,\n /* deferred */ configure_<< play ),Empty) \nThe configure_ left shift is also responsible for deferring events. Shift inside configure_ a flag and\nthe state will get a flag, shift an event and it will get a deferred event. This replaces the basic front-\nend definition:\ntypedef mpl::vector<play> deferred_events;Tutorial\n48In this tutorial [examples/OrthogonalDeferredEuml.cpp ], player is defining a second orthogonal\nregion with AllOk as initial state. The Empty and Open states also defer the event play. Open,\nStopped and Pause also support the flag CDLoaded using the same left shift into configure_ .\nIn the functor front-end, we also had the possibility to defer an event inside a transition, which\nmakes possible conditional deferring. This is also possible with eUML through the use of the defer_\norder, as shown in this tutorial [examples/OrthogonalDeferredEuml.cpp ]. You will find the following\ntransition:\nOpen + play / defer_\nThis is an internal transition . Ignore it for the moment. Interesting is, that when the event play is\nfired and Open is active, the event will be deferred. Now add a guard and you can conditionally defer\nthe event, for example:\nOpen + play [ some_condition ] / defer_\nThis is similar to what we did with the functor front-end. This means that we have the same constraints.\nUsing defer_ instead of a state declaration, we need to tell MSM that we have deferred events in this\nstate machine. We do this (again) using a configure_ declaration in the state machine definition in\nwhich we shift the deferred_events configuration flag:\nBOOST_MSM_EUML_DECLARE_STATE_MACHINE((transition_table,\n init_ << Empty << AllOk,\n Entry_Action, \n Exit_Action, \n attributes_ << no_attributes_,\n configure_<< deferred_events ),\n player_)\nA tutorial [examples/OrthogonalDeferredEuml2.cpp ] illustrates this possibility.\nCustomizing a state machine / Getting more speed\nWe just saw how to use configure_ to define deferred events or flags. We can also use it to configure\nour state machine like we did with the other front-ends:\n•configure_ << no_exception : disables exception handling\n•configure_ << no_msg_queue deactivates the message queue\n•configure_ << deferred_events manually enables event deferring\nDeactivating the first two features and not activating the third if not needed greatly improves the event\ndispatching speed of your state machine. Our speed testing [examples/EumlSimple.cpp ] example with\neUML does this for the best performance.\nImportant note : As exit pseudo states are using the message queue to forward events out of a\nsubmachine, the no_message_queue option cannot be used with state machines containing an exit\npseudo state.\nCompletion / Anonymous transitions\nAnonymous transitions (See UML tutorial ) are transitions without a named event, which are therefore\ntriggered immediately when the source state becomes active, provided a guard allows it. 
As there is no\nevent, to define such a transition, simply omit the “+” part of the transition (the event), for example:\nState3 == State4 [always_true] / State3ToState4\nState4 [always_true] / State3ToState4 == State3Tutorial\n49Please have a look at this example [examples/AnonymousTutorialEuml.cpp ], which implements the\npreviously defined state machine with eUML.\nInternal transitions\nLike both other front-ends, eUML supports two ways of defining internal transitions:\n•in the state machine's transition table. In this case, you need to specify a source state, event, actions\nand guards but no target state, which eUML will interpret as an internal transition, for example this\ndefines a transition internal to Open, on the event open_close:\nOpen + open_close [internal_guard1] / internal_action1\nA full example [examples/EumlInternal.cpp ] is also provided.\n•in a state's internal_transition_table . For example:\nBOOST_MSM_EUML_DECLARE_STATE((Open_Entry,Open_Exit),Open_def)\nstruct Open_impl : public Open_def\n{\n BOOST_MSM_EUML_DECLARE_INTERNAL_TRANSITION_TABLE((\n open_close [internal_guard1] / internal_action1\n ))\n};\nNotice how we do not need to repeat that the transition originates from Open as we already are in\nOpen's context.\nThe implementation [examples/EumlInternalDistributed.cpp ] also shows the added bonus offered\nfor submachines, which can have both the standard transition_table and an internal_transition_table\n(which has higher priority). This makes it easier if you decide to make a full submachine from a\nstate. It is also slightly faster than the standard alternative, adding orthogonal regions, because event\ndispatching will, if accepted by the internal table, not continue to the subregions. This gives you a\nO(1) dispatch instead of O(number of regions).\nKleene(any) event)\nAs for the functor front-end, eUML supports the concept of an any event, but boost::any is not an\nacceptable eUML terminal. If you need an any event, use msm::front::euml::kleene, which inherits\nboost::any. The same transition as with boost:any would be:\nState1 + kleene == State2\nOther state types\nWe saw the build_state function, which creates a simple state. Likewise, eUML provides other state-\nbuilding macros for other types of states:\n•BOOST_MSM_EUML_TERMINATE_STATE takes the same arguments as\nBOOST_MSM_EUML_STATE and defines, well, a terminate state.\n•BOOST_MSM_EUML_INTERRUPT_STATE takes the same arguments as\nBOOST_MSM_EUML_STATE and defines an interrupt state. However, the expression\nargument must contain as first element the event ending the interruption, for example:\nBOOST_MSM_EUML_INTERRUPT_STATE(( end_error /*end interrupt\nevent*/,ErrorMode_Entry,ErrorMode_Exit ),ErrorMode)\n•BOOST_MSM_EUML_EXIT_STATE takes the same arguments as\nBOOST_MSM_EUML_STATE and defines an exit pseudo state. However, theTutorial\n50expression argument must contain as first element the event propagated from\nthe exit point: BOOST_MSM_EUML_EXIT_STATE(( event6 /*propagated\nevent*/,PseudoExit1_Entry,PseudoExit1_Exit ),PseudoExit1)\n•BOOST_MSM_EUML_EXPLICIT_ENTRY_STATE defines an entry pseudo state. It takes\n3 parameters: the region index to be entered is defined as an int argument, followed\nby the configuration expression like BOOST_MSM_EUML_STATE and the state name,\nso that BOOST_MSM_EUML_EXPLICIT_ENTRY_STATE(0 /*region index*/,\n( SubState2_Entry,SubState2_Exit ),SubState2) defines an entry state into the\nfirst region of a submachine.\n•BOOST_MSM_EUML_ENTRY_STATE defines an entry pseudo state. 
It takes\n3 parameters: the region index to be entered is defined as an int\nargument, followed by the configuration expression like BOOST_MSM_EUML_STATE\nand the state name, so that BOOST_MSM_EUML_ENTRY_STATE(0,\n( PseudoEntry1_Entry,PseudoEntry1_Exit ),PseudoEntry1) defines a pseudo\nentry state into the first region of a submachine.\nTo use these states in the transition table, eUML offers the functions explicit_ , exit_pt_ and\nentry_pt_ . For example, a direct entry into the substate SubState2 from SubFsm2 could be:\nexplicit_(SubFsm2,SubState2) == State1 + event2\nForks being a list on direct entries, eUML supports a logical syntax (state1, state2, ...), for example:\n(explicit_(SubFsm2,SubState2), \n explicit_(SubFsm2,SubState2b),\n explicit_(SubFsm2,SubState2c)) == State1 + event3 \nAn entry point is entered using the same syntax as explicit entries:\nentry_pt_(SubFsm2,PseudoEntry1) == State1 + event4\nFor exit points, it is again the same syntax except that exit points are used as source of the transition:\nState2 == exit_pt_(SubFsm2,PseudoExit1) + event6 \nThe entry tutorial [examples/DirectEntryEuml.cpp ] is also available with eUML.\nHelper functions\nWe saw a few helpers but there are more, so let us have a more complete description:\n•event_ : used inside any action, the event triggering the transition\n•state_: used inside entry and exit actions, the entered / exited state\n•source_: used inside a transition action, the source state\n•target_: used inside a transition action, the target state\n•fsm_: used inside any action, the (deepest-level) state machine processing the transition\n•These objects can also be used as a function and return an attribute, for example event_(cd_name)\n•Int_<int value>: a functor representing an int\n•Char_<value>: a functor representing a char\n•Size_t_<value>: a functor representing a size_t\n•True_ and False_ functors returning true and false respectivelyTutorial\n51•String_<mpl::string> (boost >= 1.40): a functor representing a string.\n•if_then_else_(guard, action, action) where action can be an action sequence\n•if_then_(guard, action) where action can be an action sequence\n•while_(guard, action) where action can be an action sequence\n•do_while_(guard, action) where action can be an action sequence\n•for_(action, guard, action, action) where action can be an action sequence\n•process_(some_event [, some state machine] [, some state machine] [, some state machine] [, some\nstate machine]) will call process_event (some_event) on the current state machine or on the one(s)\npassed as 2nd , 3rd, 4th, 5th argument. This allow sending events to several external machines\n•process_(event_): reprocesses the event which triggered the transition\n•reprocess_(): same as above but shorter to write\n•process2_(some_event,Value [, some state machine] [, some state machine] [, some state machine])\nwill call process_event (some_event(Value)) on the current state machine or on the one(s) passed\nas 3rd, 4th, 5th argument\n•is_ flag_(some_flag[, some state machine]) will call is_flag_active on the current state machine or\non the one passed as 2nd argument\n•Predicate_<some predicate>: Used in STL algorithms. Wraps unary/binary functions to make them\neUML-compatible so that they can be used in STL algorithms\nThis can be quite fun. 
For example,\n/( if_then_else_(--fsm_(m_SongIndex) > Int_<0>(),/*if clause*/\n show_playing_song, /*then clause*/\n (fsm_(m_SongIndex)=Int_<1>(),process_(EndPlay))/*else clause*/\n ) \n )\nmeans: if (fsm.SongIndex > 0, call show_playing_song else {fsm.SongIndex=1; process EndPlay on\nfsm;}\nA few examples are using these features:\n•the iPod example introduced at the BoostCon09 has been rewritten [examples/iPodEuml.cpp ] with\neUML (weak compilers please move on...)\n•the iPodSearch example also introduced at the BoostCon09 has been rewritten [examples/\niPodSearchEuml.cpp ] with eUML. In this example, you will also find some examples of STL\nfunctor usage.\n•A simpler timer [examples/SimpleTimer.cpp ] example is a good starting point.\nThere is unfortunately a small catch. Defining a functor using MSM_EUML_METHOD or\nMSM_EUML_FUNCTION will create a correct functor. Your own eUML functors written as\ndescribed at the beginning of this section will also work well, except, for the moment, with the while_,\nif_then_, if_then_else_ functions.\nPhoenix-like STL support\neUML supports most C++ operators (except address-of). For example it is possible to write\nevent_(some_attribute)++ or [source_(some_bool) && fsm_(some_other_bool)]. But a programmer\nneeds more than operators in his daily programming. The STL is clearly a must have. Therefore, eUMLTutorial\n52comes in with a lot of functors to further reduce the need for your own functors for the transition\ntable. For almost every algorithm or container method of the STL, a corresponding eUML function is\ndefined. Like Boost.Phoenix, “.” And “->” of call on objects are replaced by a functional programming\nparadigm, for example:\n•begin_(container), end_(container): return iterators of a container.\n•empty_(container): returns container.empty()\n•clear_(container): container.clear()\n•transform_ : std::transform\nIn a nutshell, almost every STL method or algorithm is matched by a corresponding functor, which\ncan then be used in the transition table or state actions. The reference lists all eUML functions and the\nunderlying functor (so that this possibility is not reserved to eUML but also to the functor-based front-\nend). The file structure of this Phoenix-like library matches the one of Boost.Phoenix. All functors for\nSTL algorithms are to be found in:\n#include <msm/front/euml/algorithm.hpp>\nThe algorithms are also divided into sub-headers, matching the phoenix structure for simplicity:\n#include < msm/front/euml/iteration.hpp> \n#include < msm/front/euml/transformation.hpp>\n#include < msm/front/euml/querying.hpp> \nContainer methods can be found in:\n#include < msm/front/euml/container.hpp>\nOr one can simply include the whole STL support (you will also need to include euml.hpp):\n#include < msm/front/euml/stl.hpp>\nA few examples (to be found in this tutorial [examples/iPodSearchEuml.cpp ]):\n•push_back_(fsm_(m_tgt_container),event_(m_song)) : the state machine has an\nattribute m_tgt_container of type std::vector<OneSong> and the event has an attribute m_song of\ntype OneSong. The line therefore pushes m_song at the end of m_tgt_container\n•if_then_( state_(m_src_it) != end_(fsm_(m_src_container)),\nprocess2_(OneSong(),*(state_(m_src_it)++)) ) : the current state has an attribute\nm_src_it (an iterator). 
If this iterator != fsm.m_src_container.end(), process OneSong on fsm, copy-\nconstructed from state.m_src_it which we post-increment\nWriting actions with Boost.Phoenix (in development)\nIt is also possible to write actions, guards, state entry and exit actions using a reduced set of\nBoost.Phoenix capabilities. This feature is still in development stage, so you might get here and there\nsome surprise. Simple cases, however, should work well. What will not work will be mixing of eUML\nand Phoenix functors. Writing guards in one language and actions in another is ok though.\nPhoenix also supports a larger syntax than what will ever be possible with eUML, so you can only use\na reduced set of phoenix's grammar. This is due to the nature of eUML. The run-time transition table\ndefinition is translated to a type using Boost.Typeof. The result is a \"normal\" MSM transition table\nmade of functor types. As C++ does not allow mixing run-time and compile-time constructs, there\nwill be some limit (trying to instantiate a template class MyTemplateClass<i> where i is an int will\ngive you an idea). This means following valid Phoenix constructs will not work:\n•literalsTutorial\n53•function pointers\n•bind\n•->*\nMSM also provides placeholders which make more sense in its context than arg1.. argn:\n•_event: the event triggering the transition\n•_fsm: the state machine processing the event\n•_source: the source state of the transition\n•_target: the target state of the transition\n•_state: for state entry/exit actions, the entry/exit state\nFuture versions of MSM will support Phoenix better. You can contribute by finding out cases which\ndo not work but should, so that they can be added.\nPhoenix support is not activated by default. To activate it, add before any MSM header: #define\nBOOST_MSM_EUML_PHOENIX_SUPPORT.\nA simple example [examples/SimplePhoenix.cpp ] shows some basic capabilities.\nBack-end\nThere is, at the moment, one back-end. This back-end contains the library engine and defines the\nperformance and functionality trade-offs. The currently available back-end implements most of\nthe functionality defined by the UML 2.0 standard at very high runtime speed, in exchange for\nlonger compile-time. The runtime speed is due to a constant-time double-dispatch and self-adapting\ncapabilities allowing the framework to adapt itself to the features used by a given concrete state\nmachine. All unneeded features either disable themselves or can be manually disabled. See section\n5.1 for a complete description of the run-to-completion algorithm.\nCreation\nMSM being divided between front and back-end, one needs to first define a front-end. Then, to create\na real state machine, the back-end must be declared:\ntypedef msm::back::state_machine<my_front_end> my_fsm;\nWe now have a fully functional state machine type. The next sections will describe what can be done\nwith it.\nStarting and stopping a state machine\nThe start() method starts the state machine, meaning it will activate the initial state, which means\nin turn that the initial state's entry behavior will be called. We need the start method because you do not\nalways want the entry behavior of the initial state to be called immediately but only when your state\nmachine is ready to process events. A good example of this is when you use a state machine to write\nan algorithm and each loop back to the initial state is an algorithm call. Each call to start will make\nthe algorithm run once. 
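A minimal usage sketch (my_front_end and the events are placeholders for whatever front-end you
defined, not names from the library):
// one algorithm run per start/stop cycle
typedef msm::back::state_machine<my_front_end> my_fsm;

void run_once()
{
    my_fsm fsm;
    fsm.start();                        // triggers the initial state's entry behavior
    fsm.process_event(first_input());   // hypothetical events driving one run
    fsm.process_event(last_input());
    fsm.stop();                         // triggers the exit behavior of the active state(s)
}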
The iPodSearch [examples/iPodSearch.cpp ] example uses this possibility.\nThe stop() method works the same way. It will cause the exit actions of the currently active states(s)\nto be called.\nBoth methods are actually not an absolute need. Not calling them will simply cause your first entry\nor your last exit action not to be called.Tutorial\n54Event dispatching\nThe main reason to exist for a state machine is to dispatch events. For MSM, events are objects of a\ngiven event type. The object itself can contain data, but the event type is what decides of the transition\nto be taken. For MSM, if some_event is a given type (a simple struct for example) and e1 and e2\nconcrete instances of some_event, e1 and e2 are equivalent, from a transition perspective. Of course,\ne1 and e2 can have different values and you can use them inside actions. Events are dispatched as\nconst reference, so actions cannot modify events for obvious side-effect reasons. To dispatch an event\nof type some_event, you can simply create one on the fly or instantiate if before processing:\nmy_fsm fsm; fsm.process_event(some_event());\nsome_event e1; fsm.process_event(e1)\nCreating an event on the fly will be optimized by the compiler so the performance will not degrade.\nActive state(s)\nThe backend also offers a way to know which state is active, though you will normally only need this\nfor debugging purposes. If what you need simply is doing something with the active state, internal\ntransitions or visitors are a better alternative. If you need to know what state is active, const int*\ncurrent_state() will return an array of state ids. Please refer to the internals section to know how state\nids are generated.\nSerialization\nA common need is the ability to save a state machine and restore it at a different time. MSM supports\nthis feature for the basic and functor front-ends, and in a more limited manner for eUML. MSM\nsupports boost::serialization out of the box (by offering a serialize function). Actually, for basic\nserialization, you need not do much, a MSM state machine is serializable almost like any other type.\nWithout any special work, you can make a state machine remember its state, for example:\nMyFsm fsm;\n// write to archive\nstd::ofstream ofs(\"fsm.txt\");\n// save fsm to archive\n{\n boost::archive::text_oarchive oa(ofs);\n // write class instance to archive\n oa << fsm;\n} \nLoading back is very similar:\nMyFsm fsm;\n{\n // create and open an archive for input\n std::ifstream ifs(\"fsm.txt\");\n boost::archive::text_iarchive ia(ifs);\n // read class state from archive\n ia >> fsm;\n} \nThis will (de)serialize the state machine itself but not the concrete states' data. This can be done on\na per-state basis to reduce the amount of typing necessary. To allow serialization of a concrete state,\nprovide a do_serialize typedef and implement the serialize function:\nstruct Empty : public msm::front::state<> Tutorial\n55{\n // we want Empty to be serialized. First provide the typedef\n typedef int do_serialize;\n // then implement serialize\n template<class Archive>\n void serialize(Archive & ar, const unsigned int /* version */)\n {\n ar & some_dummy_data;\n }\n Empty():some_dummy_data(0){} \n int some_dummy_data;\n}; \nYou can also serialize data contained in the front-end class. 
Again, you need to provide the typedef\nand implement serialize:\nstruct player_ : public msm::front::state_machine_def<player_>\n{\n //we might want to serialize some data contained by the front-end\n int front_end_data;\n player_():front_end_data(0){}\n // to achieve this, provide the typedef\n typedef int do_serialize;\n // and implement serialize\n template<class Archive>\n void serialize(Archive & ar, const unsigned int )\n {\n ar & front_end_data;\n } \n...\n}; \nThe saving of the back-end data (the current state(s)) is valid for all front-ends, so a front-end\nwritten using eUML can be serialized. However, to serialize a concrete state, the macros like\nBOOST_MSM_EUML_STATE cannot be used, so the state will have to be implemented by directly\ninheriting from front::euml::euml_state .\nThe only limitiation is that the event queues cannot be serialized so serializing must be done in a stable\nstate, when no event is being processed. You can serialize during event processing only if using no\nqueue (deferred or event queue).\nThis example [examples/Serialize.cpp ] shows a state machine which we serialize after processing an\nevent. The Empty state also has some data to serialize.\nBase state type\nSometimes, one needs to customize states to avoid repetition and provide a common functionality, for\nexample in the form of a virtual method. You might also want to make your states polymorphic so\nthat you can call typeid on them for logging or debugging. It is also useful if you need a visitor, like\nthe next section will show. You will notice that all front-ends offer the possibility of adding a base\ntype. Note that all states and state machines must have the same base state, so this could reduce reuse.\nFor example, using the basic front end, you need to:\n•Add the non-default base state in your msm::front::state<> definition, as first template argument\n(except for interrupt_states for which it is the second argument, the first one being the event ending\nthe interrupt), for example, my_base_state being your new base state for all states in a given state\nmachine:\nstruct Empty : public msm::front::state<my_base_state>Tutorial\n56Now, my_base_state is your new base state. If it has a virtual function, your\nstates become polymorphic. MSM also provides a default polymorphic base type,\nmsm::front::polymorphic_state\n•Add the user-defined base state in the state machine frontend definition, as a second template\nargument, for example:\nstruct player_ : public msm::front::state_machine<player_,my_base_state> \nYou can also ask for a state with a given id (which you might have gotten from current_state()) using\nconst base_state* get_state_by_id(int id) const where base_state is the one\nyou just defined. You can now do something polymorphically.\nVisitor\nIn some cases, having a pointer-to-base of the currently active states is not enough. You might want\nto call non-virtually a method of the currently active states. It will not be said that MSM forces the\nvirtual keyword down your throat!\nTo achieve this goal, MSM provides its own variation of a visitor pattern using the previously described\nuser-defined state technique. 
If you add to your user-defined base state an accept_sig typedef giving the return value (unused for
the moment) and parameters and provide an accept method with this signature, calling
visit_current_states will cause accept to be called on the currently active states. Typically, you will
also want to provide an empty default accept in your base state in order not to force all your states
to implement accept. For example your base state could be:
struct my_visitable_state
{
    // signature of the accept function
    typedef args<void> accept_sig;
    // we also want polymorphic states
    virtual ~my_visitable_state() {}
    // default implementation for states which do not need to be visited
    void accept() const {}
};
This makes your states polymorphic and visitable. In this case, accept is made const and takes no
argument. It could also be:
struct SomeVisitor {…};
struct my_visitable_state
{
    // signature of the accept function
    typedef args<void,SomeVisitor&> accept_sig;
    // we also want polymorphic states
    virtual ~my_visitable_state() {}
    // default implementation for states which do not need to be visited
    void accept(SomeVisitor&) const {}
};
And now, accept will take one argument (it could also be non-const). By default, accept takes
up to 2 arguments. To get more, set #define BOOST_MSM_VISITOR_ARG_SIZE to another value
before including state_machine.hpp. For example:
#define BOOST_MSM_VISITOR_ARG_SIZE 3
#include <boost/msm/back/state_machine.hpp>
Note that accept will be called on ALL active states and also automatically on sub-states of a
submachine.
Important warning : The method visit_current_states takes its parameter by value, so if the signature of
the accept function is to contain a parameter passed by reference, pass this parameter with a
boost::ref/cref to avoid undesired copies or slicing. So, for example, in the above case, call:
SomeVisitor vis; sm.visit_current_states(boost::ref(vis));
Flags
Flags are an MSM-only concept, supported by all front-ends, and are queried through the functions:
template <class Flag> bool is_flag_active()
template <class Flag,class BinaryOp> bool is_flag_active()
These functions return true if the currently active state(s) support the Flag property. The first variant
ORs the result if there are several orthogonal regions, the second one expects OR or AND, for example:
my_fsm.is_flag_active<MyFlag>()
my_fsm.is_flag_active<MyFlag,my_fsm_type::Flag_OR>()
Please refer to the front-end sections for usage examples.
Getting a state
It is sometimes necessary to have the client code get access to the states' data. After all, the states
are created once for good and hang around as long as the state machine does, so why not use them?
You sometimes simply need to get information about any state, even an inactive one. An example is
a coverage tool that needs to know how many times a state was visited. To get a state, use the
get_state method giving the state name, for example:
player::Stopped* tempstate = p.get_state<player::Stopped*>();
or
player::Stopped& tempstate2 = p.get_state<player::Stopped&>();
depending on your personal taste.
State machine constructor with arguments
You might want to define a state machine with a non-default constructor.
For example, you might\nwant to write:\nstruct player_ : public msm::front::state_machine_def<player_> \n{ \n player_(int some_value){…} \n}; \nThis is possible, using the back-end as forwarding object:\ntypedef msm::back::state_machine<player_ > player; player p(3);\nThe back-end will call the corresponding front-end constructor upon creation.\nYou can pass arguments up to the value of the BOOST_MSM_CONSTRUCTOR_ARG_SIZE macro\n(currently 5) arguments. Change this value before including any header if you need to overwrite the\ndefault.\nYou can also pass arguments by reference (or const-reference) using boost::ref (or boost::cref):Tutorial\n58struct player_ : public msm::front::state_machine_def<player_> \n{\n player_(SomeType& t, int some_value){…} \n}; \ntypedef msm::back::state_machine<player_ > player; \nSomeType data;\nplayer p(boost::ref(data),3);\n \nNormally, MSM default-constructs all its states or submachines. There are however cases where you\nmight not want this. An example is when you use a state machine as submachine, and this submachine\nused the above defined constructors. You can add as first argument of the state machine constructor\nan expression where existing states are passed and copied:\nplayer p( back::states_ << state_1 << ... << state_n , boost::ref(data),3);\nWhere state_1..n are instances of some or all of the states of the state machine. Submachines being\nstate machines, this can recurse, for example, if Playing is a submachine containing a state Song1\nhaving itself a constructor where some data is passed:\nplayer p( back::states_ << Playing(back::states_ << Song1(some_Song1_data)) , \n boost::ref(data),3);\nIt is also possible to replace a given state by a new instance at any time using set_states() and\nthe same syntax, for example:\np.set_states( back::states_ << state_1 << ... << state_n );\nAn example [examples/Constructor.cpp ] making intensive use of this capability is provided.\nTrading run-time speed for better compile-time / multi-\nTU compilation\nMSM is optimized for run-time speed at the cost of longer compile-time. This can become a problem\nwith older compilers and big state machines, especially if you don't really care about run-time speed\nthat much and would be satisfied by a performance roughly the same as most state machine libraries.\nMSM offers a back-end policy to help there. But before you try it, if you are using a VC compiler,\ndeactivate the /Gm compiler option (default for debug builds). This option can cause builds to be 3\ntimes longer... If the compile-time still is a problem, read further. MSM offers a policy which will\nspeed up compiling in two main cases:\n•many transition conflicts\n•submachines\nThe back-end msm::back::state_machine has a policy argument (first is the front-\nend, then the history policy) defaulting to favor_runtime_speed . 
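In code, the policy is just one more template argument of the back-end; a minimal sketch, assuming
the player_ front-end from the earlier tutorials:
// default behaviour: optimize for run-time speed
typedef msm::back::state_machine<player_> player_fast;

// trade run-time speed for shorter compile times (the steps are described below)
typedef msm::back::state_machine<player_,
                                 msm::back::favor_compile_time> player_quick_compile;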
To switch to favor_compile_time , which is declared in <msm/back/favor_compile_time.hpp> ,
you need to:
•switch the policy to favor_compile_time for the main state machine (and possibly
submachines)
•move the submachine declarations into their own header which includes <msm/back/
favor_compile_time.hpp>
•add for each submachine a cpp file including your header and calling a macro, which generates
helper code, for example:
#include "mysubmachine.hpp"
BOOST_MSM_BACK_GENERATE_PROCESS_EVENT(mysubmachine)
•configure your compiler for multi-core compilation
You will now compile your state machine on as many cores as you have submachines, which will
greatly speed up the compilation if you factor your state machine into smaller submachines.
Independently, transition conflict resolution will also be much faster.
This policy uses boost::any under the hood, which means that we will lose a feature which MSM
offers with the default policy, event hierarchy . The following example takes our iPod example and
speeds up compile-time by using this technique. We have:
•our main state machine and main function [examples/iPod_distributed/iPod.cpp ]
•PlayingMode moved to a separate header [examples/iPod_distributed/PlayingMode.hpp ]
•a cpp for PlayingMode [examples/iPod_distributed/PlayingMode.cpp ]
•MenuMode moved to a separate header [examples/iPod_distributed/MenuMode.hpp ]
•a cpp for MenuMode [examples/iPod_distributed/MenuMode.cpp ]
•the events moved to a separate header as all machines use them [examples/iPod_distributed/Events.hpp ]
Compile-time state machine analysis
An MSM state machine being a metaprogram, it is only logical that checking the validity of a
concrete state machine happens at compile time. To this aim, using the compile-time graph library
mpl_graph [http://www.dynagraph.org/mpl_graph/ ] (delivered at the moment with MSM) from
Gordon Woodhull, MSM provides several compile-time checks:
•Check that orthogonal regions are truly orthogonal.
•Check that all states are either reachable from the initial states or are explicit entries / pseudo-entry
states.
To make use of this feature, the back-end provides a policy (default is no analysis),
msm::back::mpl_graph_fsm_check . For example:
 typedef msm::back::state_machine< player_,msm::back::mpl_graph_fsm_check> player; 
As MSM is now using Boost.Parameter to declare policies, the policy choice can be made at any
position after the front-end type (in this case player_ ).
In case an error is detected, a compile-time assertion is provoked.
This feature is not enabled by default because it has a non-negligible compile-time cost. The
algorithm is linear if no explicit or pseudo entry states are found in the state machine, unfortunately
still O(number of states * number of entry states) otherwise. This will be improved in future versions
of MSM.
The same algorithm is also used in case you want to omit providing the region index in the explicit
entry / pseudo entry state declaration.
The author's advice is to enable the checks after any state machine structure change and disable them
again after successful analysis.
The following example [examples/TestErrorOrthogonality.cpp ] provokes an assertion if one of the
first two lines of the transition table is used.
Enqueueing events for later processing
Calling process_event(Event const&) will immediately process the event with run-to-completion
semantics.
You can also enqueue the events and delay their processing by calling\nenqueue_event(Event const&) instead. Calling execute_queued_events() will then\nprocess all enqueued events (in FIFO order). Calling execute_single_queued_event() will\nexecute the oldest enqueued event.\nYou can query the queue size by calling get_message_queue_size() .\nCustomizing the message queues\nMSM uses by default a std::deque for its queues (one message queue for events generated\nduring run-to-completion or with enqueue_event , one for deferred events). Unfortunately, on\nsome STL implementations, it is a very expensive container in size and copying time. Should\nthis be a problem, MSM offers an alternative based on boost::circular_buffer. The policy is\nmsm::back::queue_container_circular. To use it, you need to provide it to the back-end definition:\n typedef msm::back::state_machine< player_,msm::back::queue_container_circular> player; \nYou can access the queues with get_message_queue and get_deferred_queue, both returning a\nreference or a const reference to the queues themselves. Boost::circular_buffer is outside of the scope\nof this documentation. What you will however need to define is the queue capacity (initially is 0) to\nwhat you think your queue will at most grow, for example (size 1 is common):\n fsm.get_message_queue().set_capacity(1); \nPolicy definition with Boost.Parameter\nMSM uses Boost.Parameter to allow easier definition of back::state_machine<> policy arguments (all\nexcept the front-end). This allows you to define policy arguments (history, compile-time / run-time,\nstate machine analysis, container for the queues) at any position, in any number. For example:\n typedef msm::back::state_machine< player_,msm::back::mpl_graph_fsm_check> player; \n typedef msm::back::state_machine< player_,msm::back::AlwaysHistory> player; \n typedef msm::back::state_machine< player_,msm::back::mpl_graph_fsm_check,msm::back::AlwaysHistory> player; \n typedef msm::back::state_machine< player_,msm::back::AlwaysHistory,msm::back::mpl_graph_fsm_check> player; \nChoosing when to switch active states\nThe UML Standard is silent about a very important question: when a transition fires, at which exact\npoint is the target state the new active state of a state machine? At the end of the transition? After\nthe source state has been left? What if an exception is thrown? The Standard considers that run-to-\ncompletion means a transition completes in almost no time. But even this can be in some conditions a\nvery very long time. Consider the following example. We have a state machine representing a network\nconnection. We can be Connected and Disconnected . When we move from one state to another,\nwe send a (Boost) Signal to another entity. By default, MSM makes the target state as the new state\nafter the transition is completed. We want to send a signal based on a flag is_connected which is true\nwhen in state Connected.\nWe are in state Disconnected and receive an event connect . The transition action will ask the\nstate machine is_flag_active<is_connected> and will get... false because we are still in\nDisconnected . Hmm, what to do? We could queue the action and execute it later, but it means an\nextra queue, more work and higher run-time.\nMSM provides the possibility (in form of a policy) for a front-end to decide when the target state\nbecomes active. 
It can be:Tutorial\n61•before the transition fires, if the guard will allow the transition to fire:\nactive_state_switch_before_transition\n•after calling the exit action of the source state: active_state_switch_after_exit\n•after the transition action is executed:\nactive_state_switch_after_transition_action\n•after the entry action of the target state is executed (default):\nactive_state_switch_after_entry\nThe problem and the solution is shown for the functor-\nfront-end [examples/ActiveStateSetBeforeTransition.cpp ] and eUML [examples/\nActivateStateBeforeTransitionEuml.cpp ]. Removing\nactive_state_switch_before_transition will show the default state.62Chapter 4. Performance / Compilers\nTests were made on different PCs running Windows XP and Vista and compiled with VC9 SP1\nor Ubuntu and compiled with g++ 4.2 and 4.3. For these tests, the same player state machine was\nwritten using Boost.Statechart, as a state machine with only simple states [examples/SCSimple.cpp ]\nand as a state machine with a composite state [examples/SCComposite.cpp ]. The same simple and\ncomposite state machines are implemented with MSM with a standard frontend (simple) [examples/\nMsmSimple.cpp ](composite) [examples/MsmComposite.cpp ], the simple one also with functors\n[examples/MsmSimpleFunctors.cpp ] and with eUML [examples/EumlSimple.cpp ]. As these simple\nmachines need no terminate/interrupt states, no message queue and have no-throw guarantee on their\nactions, the MSM state machines are defined with minimum functionality. Test machine is a Q6600\n2.4GHz, Vista 64.\nSpeed\nVC9:\n•The simple test completes 90 times faster with MSM than with Boost.Statechart\n•The composite test completes 25 times faster with MSM\ngcc 4.2.3 (Ubuntu 8.04 in VMWare, same PC):\n•The simple test completes 46 times faster with MSM\n•The composite test completes 19 times faster with Msm\nExecutable size\nThere are some worries that MSM generates huge code. Is it true? The 2 compilers I tested disagree\nwith this claim. On VC9, the test state machines used in the performance section produce executables\nof 14kB (for simple and eUML) and 21kB (for the composite). This includes the test code and\niostreams. By comparison, an empty executable with iostreams generated by VC9 has a size of 7kB.\nBoost.Statechart generates executables of 43kB and 54kB. As a bonus, eUML comes for “free” in\nterms of executable size. You even get a speed gain. With g++ 4.3, it strongly depends on the compiler\noptions (much more than VC). A good size state machine with –O3 can generate an executable of\n600kB, and with eUML you can get to 1.5MB. Trying with –Os –s I come down to 18kB and 30kB for\nthe test state machines, while eUML will go down to 1MB (which is still big), so in this case eUML\ndoes not come for free.\nSupported compilers\nFor a current status, have a look at the regression tests [http://www.boost.org/development/tests/trunk/\ndeveloper/msm.html ].\nMSM was successfully tested with:\n•VC8 (partly), VC9, VC10\n•g++ 4.0.1 and higher\n•Intel 10.1 and higher\n•Clang 2.9\n•Green Hills Software MULTI for ARM v5.0.5 patch 4416 (Simple and Composite tutorials)Performance / Compilers\n63•Partial support for IBM compiler\nVC8 and to some lesser extent VC9 suffer from a bug. Enabling the option \"Enable Minimal\nRebuild\" (/Gm) will cause much higher compile-time (up to three times with VC8!). 
This option being\nactivated per default in Debug mode, this can be a big problem.\nLimitations\n•Compilation times of state machines with > 80 transitions that are going to make you storm the\nCFO's office and make sure you get a shiny octocore with 12GB RAM by next week, unless he's\ninterested in paying you watch the compiler agonize for hours... (Make sure you ask for dual 24\"\nas well, it doesn't hurt).\n•eUML allows very long constructs but will also quickly increase your compile time on some\ncompilers (VC9, VC10) with buggy decltype support (I suspect some at least quadratic algorithms\nthere). Even g++ 4.4 shows some regression compared to 4.3 and will crash if the constructs become\ntoo big.\n•Need to overwrite the mpl::vector/list default-size-limit of 20 and fusion default vector size of 10\nif more than 10 states found in a state machine\n•Limitation for submachines and entry actions requiring an event property.\nCompilers corner\nCompilers are sometimes full of surprises and such strange errors happened in the course of the\ndevelopment that I wanted to list the most fun for readers’ entertainment.\nVC8:\ntemplate <class StateType>\ntypename ::boost::enable_if<\n typename ::boost::mpl::and_<\n typename ::boost::mpl::not_<\n typename has_exit_pseudo_states<StateType>::type\n >::type,\n typename ::boost::mpl::not_<\n typename is_pseudo_exit<StateType>::type\n >::type \n >::type,\n BaseState*>::type \nI get the following error:\nerror C2770: invalid explicit template argument(s) for '`global namespace'::boost::enable_if<...>::...'\nIf I now remove the first “::” in ::boost::mpl , the compiler shuts up. So in this case, it is not possible\nto follow Boost’s guidelines.\nVC9:\n•This one is my all times’ favorite. Do you know why the exit pseudo states are referenced in the\ntransition table with a “submachine::exit_pt” ? Because “exit” will crash the compiler. “Exit” is not\npossible either because it will crash the compiler on one machine, but not on another (the compiler\nwas installed from the same disk).\n•Sometimes, removing a policy crashes the compiler, so some versions are defining a dummy policy\ncalled WorkaroundVC9.Performance / Compilers\n64•Typeof: While g++ and VC9 compile “standard” state machines in comparable times, Typeof (while\nin both ways natively supported) seems to behave in a quadratic complexity with VC9 and VC10.\n•eUML: in case of a compiler crash, changing the order of state definitions (first states without entry\nor exit) sometimes solves the problem.\ng++ 4.x: Boring compiler, almost all is working almost as expected. Being not a language lawyer I\nam unsure about the following “Typeof problem”. VC9 and g++ disagree on the question if you can\nderive from the BOOST_TYPEOF generated type without first defining a typedef. I will be thankful\nfor an answer on this. I only found two ways to break the compiler:\n•Add more eUML constructs until something explodes (especially with g++-4.4)\n•The build_terminate function uses 2 mpl::push_back instead of mpl::insert_range because g++\nwould not accept insert_range.\nYou can test your compiler’s decltype implementation with the following stress test [examples/\nCompilerStressTestEuml.cpp ] and reactivate the commented-out code until the compiler crashes.65Chapter 5. Questions & Answers, tips\nWhere should I define a state machine? : The tutorials are implemented in a simple cpp source file\nfor simplicity. 
I want to model dynamic behavior of a class as a state machine, how should I define\nthe state machine?\nAnswer: Usually you'll want to implement the state machine as an attribute of the class. Unfortunately,\na concrete state machine is a typedef, which cannot be forward-declared. This leaves you with two\npossibilities:\n•Provide the state machine definition inside the header class and contain an instance as attribute.\nSimple, but with several drawbacks: using namespace directives are not advised, and compile-time\ncost for all modules including the header.\n•Keep the state machine as (shared) pointer to void inside the class definition [examples/\nFsmAsPtr.hpp ], and implement the state machine in the cpp file [examples/FsmAsPtr.cpp ].\nMinimum compile-time, using directives are okay, but the state machine is now located inside the\nheap.\nQuestion: on_entry gets as argument, the sent event. What event do I get when the state becomes\ndefault-activated (because it is an initial state)?\nAnswer: To allow you to know that the state was default-activated, MSM generates a\nboost::msm::InitEvent default event.\nQuestion: Why do I see no call to no_transition in my submachine?\nAnswer: Because of the priority rule defined by UML. It says that in case of transition conflict, the\nmost inner state has a higher priority. So after asking the inner state, the containing composite has to\nbe also asked to handle the transition and could find a possible transition.\nQuestion: Why do I get a compile error saying the compiler cannot convert to a\nfunction ...Fsm::*(some_event)?\nAnswer: You probably defined a transition triggered by the event some_event, but used a guard/action\nmethod taking another event.\nQuestion: Why do I get a compile error saying something like “too few” or “too many” template\narguments?\nAnswer: You probably defined a transition in form of a a_row or g_row where you wanted just a _row\nor the other way around. With Row, it could mean that you forgot a \"none\".\nQuestion: Why do I get a very long compile error when I define more than 20 rows in the transition\ntable?\nAnswer: MSM uses Boost.MPL under the hood and this is the default maximum size. Please define\nthe following 3 macros before including any MSM headers:\n#define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS\n#define BOOST_MPL_LIMIT_VECTOR_SIZE 30 // or whatever you need \n#define BOOST_MPL_LIMIT_MAP_SIZE 30 // or whatever you need \nQuestion: Why do I get this error: ”error C2977: 'boost::mpl::vector' : too many template arguments”?\nAnswer: The first possibility is that you defined a transition table as, say, vector17 and have 18 entries.\nThe second is that you have 17 entries and have a composite state. Under the hood, MSM adds a row\nfor every event in the composite transition table. The third one is that you used a mpl::vector without\nthe number of entries but are close to the MPL default of 50 and have a composite, thus pushing you\nabove 50. Then you need mpl/vector60/70….hpp and a mpl/map60/70….hppQuestions & Answers, tips\n66Question: Why do I get a very long compile error when I define more than 10 states in a state machine?\nAnswer: MSM uses Boost.Fusion under the hood and this is the default maximum size. Please define\nthe following macro before including any MSM headers:\n#define FUSION_MAX_VECTOR_SIZE 20 // or whatever you need 67Chapter 6. Internals\nThis chapter describes the internal machinery of the back-end, which can be useful for UML experts\nbut can be safely ignored for most users. 
For implementers, the interface between front- and back-\nend is also described in detail.\nBackend: Run To Completion\nThe back-end implements the following run-to completion algorithm:\n•Check if one region of the concrete state machine is in a terminate or interrupt state. If yes, event\nprocessing is disabled while the condition lasts (forever for a terminate pseudo-state, while active\nfor an interrupt pseudo-state).\n•If the message queue feature is enabled and if the state machine is already processing an event, push\nthe currently processed event into the queue and end processing. Otherwise, remember that the state\nmachine is now processing an event and continue.\n•If the state machine detected that no deferred event is used, skip this step. Otherwise, mark the first\ndeferred event from the deferred queue as active.\n•Now start the core of event dispatching. If exception handling is activated, this will happen inside\na try/catch block and the front-end exception_caught is called if an exception occurs.\n•The event is now dispatched in turn to every region, in the order defined by the initial state front-\nend definition. This will, for every region, call the corresponding front-end transition definition (the\n\"row\" or \"Row\" of the transition table).\n•Without transition conflict, if for a given region a transition is possible, the guard condition is\nchecked. If it returns true, the transition processing continues and the current state's exit action is\ncalled, followed by the transition action behavior and the new active state's entry behavior.\n•With transition conflicts (several possible transitions, disambiguated by mutually exclusive guard\nconditions), the guard conditions are tried in reverse order of their transition definition in the\ntransition table. The first one returning true selects its transition. Note that this is not defined by the\nUML standard, which simply specifies that if the guard conditions are not mutually exclusive, the\nstate machine is ill-formed and the behaviour undefined. Relying on this implementation-specific\nbehaviour will make it harder for the developer to support another state machine framework.\n•If at least one region processes the event, this event is seen as having been accepted. If not, the\nlibrary calls no_transition on the state machine for every contained region.\n•If the currently active state is a submachine, the behaviour is slightly different. The UML standard\nspecifies that internal transitions have to be tried first, so the event is first dispatched to the\nsubmachine. Only if the submachine does not accept the event are other (non internal) transitions\ntried.\n•This back-end supports simple states' and submachines' internal transitions. These are provided in\nthe state's internal_transition_table type. Transitions defined in this table are added at\nthe end of the main state machine's transition table, but with a lesser priority than the submachine's\ntransitions (defined in transition_table ). This means, for simple states, that these transitions\nhave higher priority than non-internal transitions, conform to the UML standard which gives higher\npriority to deeper-level transitions. For submachines, this is a non-standard addition which can help\nmake event processing faster by giving a chance to bypass subregion processing. 
With standard UML, one would need to add a subregion only to process these internal transitions, which would be slower.
•After the dispatching itself, the deferred event marked in step 3 (if any) now gets a chance of processing.
•Then, events queued in the message queue also get a dispatching chance.
•Finally, completion / anonymous transitions, if found in the transition table, also get their dispatching chance.
This algorithm illustrates how the back-end configures itself at compile-time as much as possible. Every feature not found in a given state machine definition is deactivated and has therefore no runtime cost. Completion events, deferred events, terminate states, dispatching to several regions and internal transitions are all deactivated if not used. User configuration is necessary only for exception handling and the message queue.
Frontend / Backend interface
The design of MSM tries to make front-ends and back-ends (later) as interchangeable as possible. Of course, no back-end will ever implement every feature defined by any possible front-end and vice versa, but the goal is to make it as easy as possible to extend the current state of the library.
To achieve this, MSM divides the functionality between both sides: the front-end is a sort of user interface and is descriptive, while the back-end implements the state machine engine.
MSM being based on a transition table, a concrete state machine (or a given front-end) must provide a transition_table. This transition table must be made of rows. And each row must tell what kind of transition it is and implement the calls to the actions and guards. A state machine must also define its regions (marked by initial states). And those are about the only constraints for front-ends.
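As an illustration, a minimal front-end could look roughly like this (a sketch only: the state, event and action names are assumed, the usual msm and mpl namespace aliases are assumed, and the functor front-end's Row is used to describe the rows):
 // sketch of a front-end definition (assumed names; headers such as
 // msm/front/state_machine_def.hpp and msm/front/functor_row.hpp are needed)
 struct play {}; // an event
 struct player_ : public msm::front::state_machine_def<player_>
 {
     struct Stopped : public msm::front::state<> {}; // a state
     struct Playing : public msm::front::state<> {}; // another state
     struct start_playback // a transition action in functor form
     {
         template <class Evt,class Fsm,class SourceState,class TargetState>
         void operator()(Evt const&, Fsm&, SourceState&, TargetState&) {}
     };
     typedef Stopped initial_state; // one region, marked by its initial state
     // the transition table: a sequence of rows
     struct transition_table : mpl::vector<
         msm::front::Row<Stopped, play, Playing, start_playback, msm::front::none>
     > {};
 };
 typedef msm::back::state_machine<player_> player; // the back-end engine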
How the rows\nare described is implementer's choice.\nEvery row must provide:\n•A Source typedef indicating, well, the type of the source state.\n•A Target typedef indicating, well, the type of the target state.\n•A Evt typedef indicating the type of the event triggering the transition.\n•A row_type_tag typedef indicating the type of the transition.\n•Rows having a type requiring transition actions must provide a static function action_call\nwith the following signature: template <class Fsm,class SourceState,class\nTargetState,class AllStates>\nstatic void action_call (Fsm& fsm, Event const& evt, SourceState&,\nTargetState&, AllStates&)\nThe function gets as parameters the (back-end) state machine, the event, source and target states\nand a container (in the current back-end, a fusion::set) of all the states defined in the state machine.\nFor example, as the back-end has the front-end as basic class, action_call is simply defined\nas (fsm.*action)(evt) .\n•Rows having a type requiring a guard must provide a static function guard_call with the\nfollowing signature:\ntemplate <class Fsm,class SourceState,class TargetState,class\nAllStates>\nstatic bool guard_call (Fsm&, Event const&, SourceState&,\nTargetState&, AllStates&)\n•The possible transition (row) types are:\n•a_row_tag: a transition with actions and no guardInternals\n69•g_row_type: a transition with a guard and no actions\n•_row_tag: a transition without actions or guard\n•row_tag: a transition with guard and actions\n•a_irow_tag: an internal transition (defined inside the transition_table ) with actions\n•g_irow_tag: an internal transition (defined inside the transition_table ) with guard\n•irow_tag: an internal transition (defined inside the transition_table ) with actions and\nguards\n•_irow_tag: an internal transition (defined inside the transition_table ) without action or\nguard. Due to higher priority for internal transitions, this is equivalent to a \"ignore event\"\n•sm_a_i_row_tag: an internal transition (defined inside the internal_transition_table )\nwith actions\n•sm_g_i_row_tag: an internal transition (defined inside the internal_transition_table )\nwith guard\n•sm_i_row_tag: an internal transition (defined inside the internal_transition_table )\nwith actions and guards\n•sm__i_row_tag: an internal transition (defined inside the internal_transition_table )\nwithout action or guard. Due to higher priority for internal transitions, this is quivalent to a \"ignore\nevent\"\nFurthermore, a front-end must provide the definition of states and state machines. State machine\ndefinitions must provide (the implementer is free to provide it or let it be done by every concrete state\nmachine. 
Furthermore, a front-end must provide the definition of states and state machines. State machine definitions must provide (the implementer is free to provide it or to let it be done by every concrete state machine. Different MSM front-ends took one or the other approach):
•initial_state: This typedef can be a single state or an mpl container and provides the initial states defining one or several orthogonal regions.
•transition_table: This typedef is an MPL sequence of transition rows.
•configuration: this typedef is an MPL sequence of known types triggering special behavior in the back-end, for example if a concrete fsm requires a message queue or exception catching.
States and state machines must both provide a (possibly empty) definition of:
•flag_list: the flags being active when this state or state machine becomes the current state of the fsm.
•deferred_events: events being automatically deferred when the state is the current state of the fsm.
•internal_transition_table: the internal transitions of this state.
•on_entry and on_exit methods.
Generated state ids
Normally, one does not need to know how the ids are generated for all the states of a state machine, except for debugging purposes, like the pstate function does in the tutorials in order to display the name of the current state. This section will show how to automatically display typeid-generated names, but these are not very readable on all platforms, so it can help to know how the ids are generated. The ids are generated using the transition table, from the “Start” column top to bottom, then from the “Next” column, top to bottom, as shown in the next image:
Stopped will get id 0, Open id 1, ErrorMode id 6 and SleepMode (seen only in the “Next” column) id 7. If you have some implicitly created states, like transition-less initial states or states created using the explicit_creation typedef, these will be added as a source at the end of the transition table. If you have submachine states, a row will be added for them at the end of the table, after the automatically or explicitly created states, which can change their id. The next help you will need for debugging is to call the current_state method of the state_machine class, and then the display_type helper to generate a readable name from the id. If you do not want to go through the transition table to fill an array of names, the library provides another helper, fill_state_names, which, given an array of sufficient size (please see the next section to know how many states are defined in the state machine), will fill it with typeid-generated names.
Metaprogramming tools
We can find more uses for the transition table than what we have seen so far. Let's suppose you need to write a coverage tool. A state machine would be perfect for such a job, if only it could provide some information about its structure. Thanks to the transition table and Boost.MPL, it does.
What is needed for a coverage tool? You need to know how many states are defined in the state machine, and how many events can be fired. This way you can log the fired events and the states visited in the life of a concrete machine and be able to perform some coverage analysis, like “fired 65% of all possible events and visited 80% of the states defined in the state machine”.
To achieve this,\nMSM provides a few useful tools:\n•generate_state_set<transition table>: returns a mpl::set of all the states defined in the table.\n•generate_event_set<transition table>: returns a mpl::set of all the events defined in the table.\n•using mpl::size<>::value you can get the number of elements in the set.\n•display_type defines an operator() sending typeid(Type).name() to cout.\n•fill_state_names fills an array of char const* with names of all states (found by typeid)\n•using mpl::for_each on the result of generate_state_set and generate_event_set passing display_type\nas argument will display all the states of the state machine.Internals\n71•let's suppose you need to recursively find the states and events defined in the composite states and\nthus also having a transition table. Calling recursive_get_transition_table<Composite> will return\nyou the transition table of the composite state, recursively adding the transition tables of all sub-state\nmachines and sub-sub...-sub-state machines. Then call generate_state_set or generate_event_set on\nthe result to get the full list of states and events.\nAn example [examples/BoostCon09Full.cpp ] shows the tools in action.72Chapter 7. Acknowledgements\nI am in debt to the following people who helped MSM along the way.\nMSM v2\n•Thanks to Dave Abrahams for managing the review\n•Thanks to Eric Niebler for his patience correcting my grammar errors\n•Special thanks to Joel de Guzman who gave me very good ideas at the BoostCon09. These ideas\nwere the starting point of the redesign. Any time again, Joel #\n•Thanks to Richard O’Hara for making Green Hills bring a patch in less than 1 week, thus adding\none more compiler to the supported list.\n•Big thanks to those who took the time to write a review: Franz Alt, David Bergman, Michael Caisse,\nBarend Gehrels, Darryl Greene, Juraj Ivancic, Erik Nelson, Kenny Riddile.\n•Thanks to Matt Calabrese, Juraj Ivancic, Adam Merz and Joseph Wu for reporting bugs.\n•Thanks to Thomas Mistretta for providing an addition to the section \"What do you actually do inside\nactions / guards\".\nMSM v1\n•The original version of this framework is based on the brilliant work of David Abrahams and\nAleksey Gurtovoy who laid down the base and the principles of the framework in their excellent\nbook, “C++ template Metaprogramming”. The implementation also makes heavy use of the\nboost::mpl.\n•Thanks to Jeff Flinn for his idea of the user-defined base state and his review which allowed MSM\nto be presented at the BoostCon09.\n•Thanks to my MSM v1 beta testers, Christoph Woskowski and Franz Alt for using the framework\nwith little documentation and to my private reviewer, Edouard Alligand73Chapter 8. 
Version history\nFrom V2.27 to V2.28 (Boost 1.57)\n•Fixed BOOST_MSM_EUML_EVENT_WITH_ATTRIBUTES (broken in 1.56).\n•Fixed execute_queued_events, added execute_single_queued_event\n•Fixed warnings for unused variables\nFrom V2.26 to V2.27 (Boost 1.56)\n•Bugfix: no_transition in case of an exception.\n•Bugfix: Trac 9280\n•Bugfix: incomplete namespace names in eUML\nFrom V2.25 to V2.26 (Boost 1.55)\n•New feature: interrupt states now support a sequence of events to end the interruption\n•Bugfix: Trac 8686.\nFrom V2.24 to V2.25 (Boost 1.54)\n•Bugfix: Exit points broken for the favor_compile_time policy.\n•Bugfix: copy breaks exit points of subsubmachines.\n•Bugfix: Trac 8046.\nFrom V2.23 to V2.24 (Boost 1.51)\n•Support for boost::any or kleene as an acceptable event.\n•Bugfix: compiler error with fsm internal table and none(compound) event.\n•Bugfix: euml::defer_ leading to stack overflow.\nFrom V2.22 to V2.23 (Boost 1.50)\n•eUML : better syntax for front-ends defined with eUML as transititon table only. Caution: Breaking\nChange!\n•Bugfix: graph building was only working if initial_state defined as a sequence\n•Bugfix: flags defined for a Terminate or Interrupt state do not break the blocking function of these\nstates any more.\n•Bugfix: multiple deferred events from several regions were not working in every case.\n•Bugfix: visitor was passed by value to submachines.\n•Bugfix: no_transition was not called for submachines who send an event to themselves.Version history\n74•Fixed warnings with gcc\nFrom V2.21 to V2.22 (Boost 1.48)\n•eUML: added easier event reprocessing: process(event_) and reprocess()\n•Rewrite of internal transition tables. There were a few bugs (failing recursivity in internal transition\ntables of sub-sub machines) and a missing feature (unused internal transition table of the main state\nmachine).\n•Bugfixes\n•Reverted favor_compile_time policy to Boost 1.46 state\n•none event now is convertible from any other event\n•eUML and pseudo exit states\n•Fixed not working Flag_AND\n•Fixed rare bugs causing multiple processing of the same event in a submachine whose transition\ntable contains this event and a base event of it.\n•gcc warnings about unused variables\n•Breaking change: the new internal transition table feature causes a minor breaking change. In a\nsubmachine, the \"Fsm\" template parameter for guards / actions of an internal table declared using\ninternal_transition_table now is the submachine, not the higher-level state machine.\nInternal transitions declared using internal rows in the higher-level state machine keep their behavior\n(the \"Fsm\" parameter is the higher-level state machine). 
To sum up, the internal transition \"Fsm\"\nparameter is the closest state machine containing this transition.\nFrom V2.20 to V2.21 (Boost 1.47)\n•Added a stop() method in the back-end.\n•Added partial support for Boost.Phoenix functors in eUML\n•Added the possibility to choose when state switching occurs.\n•Bugfixes\n•Trac 5117, 5253, 5533, 5573\n•gcc warnings about unused variables\n•better implemenation of favor_compile_time back-end policy\n•bug with eUML and state construction\n•incorrect eUML event and state macros\n•incorrect event type passed to a direct entry state's on_entry action\n•more examples\nFrom V2.12 to V2.20 (Boost 1.46)\n•Compile-time state machine analysis using mpl_graph:Version history\n75•checking of region orthogonality .\n•search for unreachable states .\n•automatic region index search for pseudo entry or explicit entry states .\n•Boost.Parameter interface definition for msm::back::state_machine<> template arguments.\n•Possibility to provide a container for the event and deferred event queues. A policy\nimplementation based on a more efficient Boost.CircularBuffer is provided.\n•msm::back::state_machine<>::is_flag_active method made const.\n•added possibility to enqueue events for delayed processing.\n•Bugfixes\n•Trac 4926\n•stack overflow using the Defer functor\n•anonymous transition of a submachine not called for the initial state\nFrom V2.10 to V2.12 (Boost 1.45)\n•Support for serialization\n•Possibility to use normal functors (from functor front-end) in eUML.\n•New constructors where substates / submachines can be taken as arguments. This allows passing\narguments to the constructor of a submachine.\n•Bugfixes\nFrom V2.0 to V2.12 (Boost 1.44)\n•New documentation\n•Internal transitions. Either as part of the transition table or using a state's internal transition table\n•increased dispatch and copy speed\n•new row types for the basic front-end\n•new eUML syntax, better attribute support, macros to ease developer's life. Even VC8 seems to\nlike it better.\n•New policy for reduced compile-time at the cost of dispatch speed\n•Support for base events\n•possibility to choose the initial eventPart II. Reference77Table of Contents\n9. External references to MSM ...................................................................................... 78\n10. eUML operators and basic helpers ............................................................................ 79\n11. Functional programming .......................................................................................... 8278Chapter 9. External references to\nMSM\nAn interesting mapping UML <-> MSM from Takatoshi Kondo can be found at Redboltz [http://\nredboltz.wikidot.com/boost-msm-guide ].79Chapter 10. eUML operators and\nbasic helpers\nThe following table lists the supported operators:\nTable 10.1. Operators and state machine helpers\neUML function / operator Description Functor\n&& Calls lazily Action1&& Action2 And_\n|| Calls lazily Action1|| Action2 Or_\n! 
Calls lazily !Action1 Not_\n!= Calls lazily Action1 != Action2 NotEqualTo_\n== Calls lazily Action1 == Action2 EqualTo_\n> Calls lazily Action1 > Action2 Greater_\n>= Calls lazily Action1 >= Action2 Greater_Equal_\n< Calls lazily Action1 < Action2 Less_\n<= Calls lazily Action1 <= Action2 Less_Equal_\n& Calls lazily Action1 & Action2 Bitwise_And_\n| Calls lazily Action1 | Action2 Bitwise_Or_\n^ Calls lazily Action1 ^ Action2 Bitwise_Xor_\n-- Calls lazily --Action1 /\nAction1--Pre_Dec_ / Post_Dec_\n++ Calls lazily ++Action1 /\nAction1++Pre_Inc_ / Post_Inc_\n/ Calls lazily Action1 / Action2 Divides_\n/= Calls lazily Action1 /= Action2 Divides_Assign_\n* Calls lazily Action1 * Action2 Multiplies_\n*= Calls lazily Action1 *= Action2 Multiplies_Assign_\n+ (binary) Calls lazily Action1 + Action2 Plus_\n+ (unary) Calls lazily +Action1 Unary_Plus_\n+= Calls lazily Action1 += Action2 Plus_Assign_\n- (binary) Calls lazily Action1 - Action2 Minus_\n- (unary) Calls lazily -Action1 Unary_Minus_\n-= Calls lazily Action1 -= Action2 Minus_Assign_\n% Calls lazily Action1 % Action2 Modulus_\n%= Calls lazily Action1 %= Action2 Modulus_Assign_\n>> Calls lazily Action1 >> Action2 ShiftRight_\n>>= Calls lazily Action1 >>=\nAction2ShiftRight_Assign_\n<< Calls lazily Action1 << Action2 ShiftLeft_\n<<= Calls lazily Action1 <<=\nAction2ShiftLeft_Assign_\n[] (works on vector, map, arrays) Calls lazily Action1 [Action2] Subscript_eUML operators and basic helpers\n80eUML function / operator Description Functor\nif_then_else_(Condition,Action1,Action2) Returns either the result of\ncalling Action1 or the result of\ncalling Action2If_Else_\nif_then_(Condition,Action) Returns the result of calling\nAction if ConditionIf_Then_\nwhile_(Condition, Body) While Condition(), calls Body().\nReturns nothingWhile_Do_\ndo_while_(Condition, Body) Calls Body() while Condition().\nReturns nothingDo_While_\nfor_(Begin,Stop,EndLoop,Body) Calls for(Begin;Stop;EndLoop)\n{Body;}For_Loop_\nprocess_(Event [,fsm1] [,fsm2]\n[,fsm3] [,fsm4])Processes Event on the current\nstate machine (if no fsm\nspecified) or on up to 4\nstate machines returned by an\nappropriate functor.Process_\nprocess2_(Event, Data [,fsm1]\n[,fsm2] [,fsm3])Processes Event on the current\nstate machine (if no fsm\nspecified) or on up to 2\nstate machines returned by an\nappropriate functor. The event\nis copy-constructed from what\nData() returns.Process2_\nis_flag_(Flag [,fsm]) Calls is_flag_active() on the\ncurrent state machine or the one\nreturned by calling fsm.Get_Flag_\nevent_ [(attribute name)] Returns the current event (as\nconst reference)GetEvent_\nsource_ [(attribute name)] Returns the source state of the\ncurrently triggered transition (as\nreference). If an attribute name is\nprovided, returns the attribute by\nreference.GetSource_\ntarget_ [(attribute name)] Returns the target state of the\ncurrently triggered transition (as\nreference). If an attribute name is\nprovided, returns the attribute by\nreference.GetTarget_\nstate_ [(attribute name)] Returns the source state of\nthe currently active state (as\nreference). Valid inside a state\nentry/exit action. If an attribute\nname is provided, returns the\nattribute by reference.GetState_\nfsm_ [(attribute name)] Returns the current state\nmachine (as reference). Valid\ninside a state entry/exit action or\na transition. 
If an attribute name\nis provided, returns the attribute\nby reference.GetFsm_eUML operators and basic helpers\n81eUML function / operator Description Functor\nsubstate_(state_name [,fsm]) Returns (as reference) the state\nstate_name referenced in the\ncurrent state machine or the one\ngiven as argument.SubState_\nTo use these functions, you need to include:\n#include <msm/front/euml/euml.hpp>82Chapter 11. Functional programming\nTo use these functions, you need to include:\n#include <msm/front/euml/stl.hpp>\nor the specified header in the following tables.\nThe following tables list the supported STL algorithms:\nTable 11.1. STL algorithms\nSTL algorithms in querying.hpp Functor\nfind_(first, last, value) Find_\nfind_if_(first, last, value) FindIf_\nlower_bound_(first, last, value [,op#]) LowerBound_\nupper_bound_(first, last, value [,op#]) UpperBound_\nequal_range_(first, last, value [,op#]) EqualRange_\nbinary_search_(first, last, value [,op#]) BinarySearch_\nmin_element_(first, last[,op#]) MinElement_\nmax_element_(first, last[,op#]) MaxElement_\nadjacent_find_(first, last[,op#]) AdjacentFind_\nfind_end_( first1, last1, first2, last2 [,op #]) FindEnd_\nfind_first_of_( first1, last1, first2, last2 [,op #]) FindFirstOf_\nequal_( first1, last1, first2 [,op #]) Equal_\nsearch_( first1, last1, first2, last2 [,op #]) Search_\nincludes_( first1, last1, first2, last2 [,op #]) Includes_\nlexicographical_compare_ ( first1, last1, first2,\nlast2 [,op #])LexicographicalCompare_\ncount_(first, last, value [,size]) Count_\ncount_if_(first, last, op # [,size]) CountIf_\ndistance_(first, last) Distance_\nmismatch _( first1, last1, first2 [,op #]) Mismatch_\nTable 11.2. STL algorithms\nSTL algorithms in iteration.hpp Functor\nfor_each_(first,last, unary op#) ForEach_\naccumulate_first, last, init [,op#]) Accumulate_\nTable 11.3. 
STL algorithms\nSTL algorithms in transformation.hpp Functor\ncopy_(first, last, result) Copy_\ncopy_backward_(first, last, result) CopyBackward_\nreverse_(first, last) Reverse_\nreverse_copy_(first, last , result) ReverseCopy_\nremove_(first, last, value) Remove_Functional programming\n83STL algorithms in transformation.hpp Functor\nremove_if_(first, last , op#) RemoveIf_\nremove_copy_(first, last , output, value) RemoveCopy_\nremove_copy_if_(first, last, output, op#) RemoveCopyIf_\nfill_(first, last, value) Fill_\nfill_n_(first, size, value)# FillN_\ngenerate_(first, last, generator#) Generate_\ngenerate_(first, size, generator#)# GenerateN_\nunique_(first, last [,op#]) Unique_\nunique_copy_(first, last, output [,op#]) UniqueCopy_\nrandom_shuffle_(first, last [,op#]) RandomShuffle_\nrotate_copy_(first, middle, last, output) RotateCopy_\npartition_ (first, last [,op#]) Partition_\nstable_partition_ (first, last [,op#]) StablePartition_\nstable_sort_(first, last [,op#]) StableSort_\nsort_(first, last [,op#]) Sort_\npartial_sort_(first, middle, last [,op#]) PartialSort_\npartial_sort_copy_ (first, last, res_first, res_last\n[,op#])PartialSortCopy_\nnth_element_(first, nth, last [,op#]) NthElement_\nmerge_( first1, last1, first2, last2, output [,op #]) Merge_\ninplace_merge_(first, middle, last [,op#]) InplaceMerge_\nset_union_(first1, last1, first2, last2, output [,op\n#])SetUnion_\npush_heap_(first, last [,op #]) PushHeap_\npop_heap_(first, last [,op #]) PopHeap_\nmake_heap_(first, last [,op #]) MakeHeap_\nsort_heap_(first, last [,op #]) SortHeap_\nnext_permutation_(first, last [,op #]) NextPermutation_\nprev_permutation_(first, last [,op #]) PrevPermutation_\ninner_product_(first1, last1, first2, init [,op1#]\n[,op2#])InnerProduct_\npartial_sum_(first, last, output [,op#]) PartialSum_\nadjacent_difference_(first, last, output [,op#]) AdjacentDifference_\nreplace_(first, last, old_value, new_value) Replace_\nreplace_if_(first, last, op#, new_value) ReplaceIf_\nreplace_copy_(first, last, result, old_value,\nnew_value)ReplaceCopy_\nreplace_copy_if_(first, last, result, op#,\nnew_value)ReplaceCopyIf_\nrotate_(first, middle, last)# Rotate_Functional programming\n84Table 11.4. STL container methods\nSTL container methods(common) in\ncontainer.hppFunctor\ncontainer::reference front_(container) Front_\ncontainer::reference back_(container) Back_\ncontainer::iterator begin_(container) Begin_\ncontainer::iterator end_(container) End_\ncontainer::reverse_iterator rbegin_(container) RBegin_\ncontainer::reverse_iterator rend_(container) REnd_\nvoid push_back_(container, value) Push_Back_\nvoid pop_back_(container, value) Pop_Back_\nvoid push_front_(container, value) Push_Front_\nvoid pop_front_(container, value) Pop_Front_\nvoid clear_(container) Clear_\nsize_type capacity_(container) Capacity_\nsize_type size_(container) Size_\nsize_type max_size_(container) Max_Size_\nvoid reserve_(container, value) Reserve _\nvoid resize_(container, value) Resize _\niterator insert_(container, pos, value) Insert_\nvoid insert_( container , pos, first, last) Insert_\nvoid insert_( container , pos, number, value) Insert_\nvoid swap_( container , other_container) Swap_\nvoid erase_( container , pos) Erase_\nvoid erase_( container , first, last) Erase_\nbool empty_( container) Empty_\nTable 11.5. 
STL list methods\nstd::list methods in container.hpp Functor\nvoid list_remove_(container, value) ListRemove_\nvoid list_remove_if_(container, op#) ListRemove_If_\nvoid list_merge_(container, other_list) ListMerge_\nvoid list_merge_(container, other_list, op#) ListMerge_\nvoid splice_(container, iterator, other_list) Splice_\nvoid splice_(container, iterator, other_list,\niterator)Splice_\nvoid splice_(container, iterator, other_list, first,\nlast)Splice_\nvoid list_reverse_(container) ListReverse_\nvoid list_unique_(container) ListUnique_\nvoid list_unique_(container, op#) ListUnique_\nvoid list_sort_(container) ListSort_\nvoid list_sort_(container, op#) ListSort_Functional programming\n85Table 11.6. STL associative container methods\nAssociative container methods in\ncontainer.hppFunctor\niterator insert_(container, pos, value) Insert_\nvoid insert_( container , first, last) Insert_\npair<iterator, bool> insert_( container , value) Insert_\nvoid associative_erase_( container , pos) Associative_Erase_\nvoid associative_erase_( container , first, last) Associative_Erase_\nsize_type associative_erase_( container , key) Associative_Erase_\niterator associative_find_( container , key) Associative_Find_\nsize_type associative_count_( container , key) AssociativeCount_\niterator associative_lower_bound_( container ,\nkey)Associative_Lower_Bound_\niterator associative_upper_bound_( container ,\nkey)Associative_Upper_Bound_\npair<iterator, iterator>\nassociative_equal_range_( container , key)Associative_Equal_Range_\nTable 11.7. STL pair\nstd::pair in container.hpp Functor\nfirst_type first_(pair<T1, T2>) First_\nsecond_type second_(pair<T1, T2>) Second_\nTable 11.8. STL string\nSTL string method std::string method in\ncontainer.hppFunctor\nsubstr (size_type pos, size_type\nsize)string substr_(container, pos,\nlength)Substr_\nint compare(string) int string_compare_(container,\nanother_string)StringCompare_\nint compare(char*) int string_compare_(container,\nanother_string)StringCompare_\nint compare(size_type pos,\nsize_type size, string)int string_compare_(container,\npos, size, another_string)StringCompare_\nint compare (size_type pos,\nsize_type size, string, size_type\nlength)int string_compare_(container,\npos, size, another_string, length)StringCompare_\nstring& append(const string&) string& append_(container,\nanother_string)Append_\nstring& append (charT*) string& append_(container,\nanother_string)Append_\nstring& append (string ,\nsize_type pos, size_type size)string& append_(container,\nother_string, pos, size)Append_\nstring& append (charT*,\nsize_type size)string& append_(container,\nanother_string, length)Append_\nstring& append (size_type size,\ncharT)string& append_(container, size,\nchar)Append_Functional programming\n86STL string method std::string method in\ncontainer.hppFunctor\nstring& append (iterator begin,\niterator end)string& append_(container,\nbegin, end)Append_\nstring& insert (size_type pos,\ncharT*)string&\nstring_insert_(container, pos,\nother_string)StringInsert_\nstring& insert(size_type pos,\ncharT*,size_type n)string&\nstring_insert_(container, pos,\nother_string, n)StringInsert_\nstring& insert(size_type\npos,size_type n, charT c)string&\nstring_insert_(container, pos, n,\nc)StringInsert_\nstring& insert (size_type pos,\nconst string&)string&\nstring_insert_(container, pos,\nother_string)StringInsert_\nstring& insert (size_type pos,\nconst string&, size_type pos1,\nsize_type n)string&\nstring_insert_(container, pos,\nother_string, pos1, 
n)StringInsert_\nstring& erase(size_type pos=0,\nsize_type n=npos)string& string_erase_(container,\npos, n)StringErase_\nstring& assign(const string&) string&\nstring_assign_(container,\nanother_string)StringAssign_\nstring& assign(const charT*) string&\nstring_assign_(container,\nanother_string)StringAssign_\nstring& assign(const string&,\nsize_type pos, size_type n)string&\nstring_assign_(container,\nanother_string, pos, n)StringAssign_\nstring& assign(const charT*,\nsize_type n)string&\nstring_assign_(container,\nanother_string, n)StringAssign_\nstring& assign(size_type n,\ncharT c)string&\nstring_assign_(container, n, c)StringAssign_\nstring& assign(iterator first,\niterator last)string&\nstring_assign_(container, first,\nlast)StringAssign_\nstring& replace(size_type pos,\nsize_type n, const string&)string&\nstring_replace_(container, pos,\nn, another_string)StringReplace_\nstring& replace(size_type pos,\nsize_type n, const charT*,\nsize_type n1)string&\nstring_replace_(container, pos,\nn, another_string, n1)StringReplace_\nstring& replace(size_type pos,\nsize_type n, const charT*)string&\nstring_replace_(container, pos,\nn, another_string)StringReplace_\nstring& replace(size_type pos,\nsize_type n, size_type n1, charT\nc)string&\nstring_replace_(container, pos,\nn, n1, c)StringReplace_Functional programming\n87STL string method std::string method in\ncontainer.hppFunctor\nstring& replace(iterator first,\niterator last, const string&)string&\nstring_replace_(container, first,\nlast, another_string)StringReplace_\nstring& replace(iterator first,\niterator last, const charT*,\nsize_type n)string&\nstring_replace_(container, first,\nlast, another_string, n)StringReplace_\nstring& replace(iterator first,\niterator last, const charT*)string&\nstring_replace_(container, first,\nlast, another_string)StringReplace_\nstring& replace(iterator first,\niterator last, size_type n, charT\nc)string&\nstring_replace_(container, first,\nlast, n, c)StringReplace_\nstring& replace(iterator first,\niterator last, iterator f, iterator l)string&\nstring_replace_(container, first,\nlast, f, l)StringReplace_\nconst charT* c_str() const charT* c_str_(container) CStr_\nconst charT* data() const charT*\nstring_data_(container)StringData_\nsize_type copy(charT* buf,\nsize_type n, size_type pos = 0)size_type\nstring_copy_(container, buf, n,\npos); size_type\nstring_copy_(container, buf, n)StringCopy_\nsize_type find(charT* s,\nsize_type pos, size_type n)size_type\nstring_find_(container, s, pos, n)StringFind_\nsize_type find(charT* s,\nsize_type pos=0)size_type\nstring_find_(container, s, pos);\nsize_type\nstring_find_(container, s)StringFind_\nsize_type find(const string& s,\nsize_type pos=0)size_type\nstring_find_(container, s, pos)\nsize_type\nstring_find_(container, s)StringFind_\nsize_type find(charT c,\nsize_type pos=0)size_type\nstring_find_(container, c, pos)\nsize_type\nstring_find_(container, c)StringFind_\nsize_type rfind(charT* s,\nsize_type pos, size_type n)size_type\nstring_rfind_(container, s, pos,\nn)StringRFind_\nsize_type rfind(charT* s,\nsize_type pos=npos)size_type\nstring_rfind_(container, s, pos);\nsize_type\nstring_rfind_(container, s)StringRFind_\nsize_type rfind(const string& s,\nsize_type pos=npos)size_type\nstring_rfind_(container, s, pos);\nsize_type\nstring_rfind_(container, s)StringRFind_\nsize_type rfind(charT c,\nsize_type pos=npos)size_type\nstring_rfind_(container, c, pos)StringRFind_Functional programming\n88STL string method std::string method 
in\ncontainer.hppFunctor\nsize_type\nstring_rfind_(container, c)\nsize_type find_first_of(charT* s,\nsize_type pos, size_type n)size_type\nfind_first_of_(container, s, pos,\nn)StringFindFirstOf_\nsize_type find_first_of (charT*\ns, size_type pos=0)size_type\nfind_first_of_(container, s,\npos); size_type\nfind_first_of_(container, s)StringFindFirstOf_\nsize_type find_first_of (const\nstring& s, size_type pos=0)size_type\nfind_first_of_(container, s,\npos); size_type\nfind_first_of_(container, s)StringFindFirstOf_\nsize_type find_first_of (charT c,\nsize_type pos=0)size_type\nfind_first_of_(container, c, pos)\nsize_type\nfind_first_of_(container, c)StringFindFirstOf_\nsize_type\nfind_first_not_of(charT* s,\nsize_type pos, size_type n)size_type\nfind_first_not_of_(container, s,\npos, n)StringFindFirstNotOf_\nsize_type find_first_not_of\n(charT* s, size_type pos=0)size_type\nfind_first_not_of_(container, s,\npos); size_type\nfind_first_not_of_(container, s)StringFindFirstNotOf_\nsize_type find_first_not_of\n(const string& s, size_type\npos=0)size_type\nfind_first_not_of_(container, s,\npos); size_type\nfind_first_not_of_(container, s)StringFindFirstNotOf_\nsize_type find_first_not_of\n(charT c, size_type pos=0)size_type\nfind_first_not_of_(container, c,\npos); size_type\nfind_first_not_of_(container, c)StringFindFirstNotOf_\nsize_type find_last_of(charT* s,\nsize_type pos, size_type n)size_type\nfind_last_of_(container, s, pos,\nn)StringFindLastOf_\nsize_type find_last_of (charT* s,\nsize_type pos=npos)size_type\nfind_last_of_(container, s, pos);\nsize_type\nfind_last_of_(container, s)StringFindLastOf_\nsize_type find_last_of (const\nstring& s, size_type pos=npos)size_type\nfind_last_of_(container, s, pos);\nsize_type\nfind_last_of_(container, s)StringFindLastOf_\nsize_type find_last_of (charT c,\nsize_type pos=npos)size_type\nfind_last_of_(container, c, pos);\nsize_type\nfind_last_of_(container, c)StringFindLastOf_\nsize_type\nfind_last_not_of(charT* s,\nsize_type pos, size_type n)size_type\nfind_last_not_of_(container, s,\npos, n)StringFindLastNotOf_Functional programming\n89STL string method std::string method in\ncontainer.hppFunctor\nsize_type find_last_not_of\n(charT* s, size_type pos=npos)size_type\nfind_last_not_of_(container, s,\npos); size_type\nfind_last_of_(container, s)StringFindLastNotOf_\nsize_type find_last_not_of\n(const string& s, size_type\npos=npos)size_type\nfind_last_not_of_(container, s,\npos); size_type\nfind_last_not_of_(container, s)StringFindLastNotOf_\nsize_type find_last_not_of\n(charT c, size_type pos=npos)size_type\nfind_last_not_of_(container, c,\npos); size_type\nfind_last_not_of_(container, c)StringFindLastNotOf_\nNotes:\n•#: algorithms requiring a predicate need to make them eUML compatible by wrapping them inside\na Predicate_ functor. For example, std::less<int> => Predicate_<std::less<int> >()\n•#: If using the SGI STL implementation, these functors use the SGI return value90Name\nCommon headers — The common types used by front- and back-ends\nmsm/common.hpp\nThis header provides one type, wrap, which is an empty type whose only reason to exist is to be\ncheap to construct, so that it can be used with mpl::for_each, as shown in the Metaprogramming book,\nchapter 9.\n template <class Dummy> wrap{}; {\n}\nmsm/row_tags.hpp\nThis header contains the row type tags which front-ends can support partially or totally. 
Please see the\nInternals section for a description of the different types.91Name\nBack-end — The back-end headers\nmsm/back/state_machine.hpp\nThis header provides one type, state_machine, MSM's state machine engine implementation.\n template <class Derived,class HistoryPolicy=NoHistory,class\n CompilePolicy=favor_runtime_speed> state_machine {\n}\nTemplate arguments\nDerived\nThe name of the front-end state machine definition. All three front-ends are possible.\nHistoryPolicy\nThe desired history. This can be: AlwaysHistory, NoHistory, ShallowHistory. Default is NoHistory.\nCompilePolicy\nThe trade-off performance / compile-time. There are two predefined policies, favor_runtime_speed\nand favor_compile_time. Default is favor_runtime_speed, best performance, longer compile-time. See\nthe backend .\nmethods\nstart\nThe start methods must be called before any call to process_event. It activates the entry action of the\ninitial state(s). This allows you to choose when a state machine can start. See backend.\n void start();\nprocess_event\nThe event processing method implements the double-dispatch. Each call to this function with a new\nevent type instantiates a new dispatch algorithm and increases compile-time.\n template <class Event> HandledEnum\n process_event(Event const&);\ncurrent_state\nReturns the ids of currently active states. You will typically need it only for debugging or logging\npurposes.\n const int* current_state const();\nget_state_by_id\nReturns the state whose id is given. As all states of a concrete state machine share a common base\nstate, the return value is a base state. If the id corresponds to no state, a null pointer is returned.\n const BaseState* get_state_by_id const(int id);\nis_contained\nHelper returning true if the state machine is contained as a submachine of another state machine.Back-end\n92 bool is_contained const();\nget_state\nReturns the required state of the state machine as a pointer. A compile error will occur if the state is\nnot to be found in the state machine.\n template <class State> State* get_state();\nget_state\nReturns the required state of the state machine as a reference. A compile error will occur if the state\nis not to be found in the state machine.\n template <class State> State& get_state();\nis_flag_active\nReturns true if the given flag is currently active. A flag is active if the active state of one region is\ntagged with this flag (using OR as BinaryOp) or active states of all regions (using AND as BinaryOp)\n template <class Flag,class BinaryOp> bool\n is_flag_active();\nis_flag_active\nReturns true if the given flag is currently active. A flag is active if the active state of one region is\ntagged with this flag.\n template <class Flag> bool is_flag_active();\nvisit_current_states\nVisits all active states and their substates. A state is visited using the accept method without\nargument. The base class of all states must provide an accept_sig type.\n void visit_current_states();\nvisit_current_states\nVisits all active states and their substates. A state is visited using the accept method with arguments.\nThe base class of all states must provide an accept_sig type defining the signature and thus the\nnumber and type of the parameters.\n void visit_current_states(any-type param1, any-type param2,...);\ndefer_event\nDefers the provided event. 
This method can be called only if at least one state defers an event\nor if the state machine provides the activate_deferred_events (see example [examples/\nOrthogonal-deferred2.cpp ]) type either directly or using the deferred_events configuration of eUML\n(configure_ << deferred_events )\n template <class Event> void defer_event(Event const&);\nTypes\nnr_regions\nThe number of orthogonal regions contained in the state machineBack-end\n93entry_pt\nThis nested type provides the necessary typedef for entry point pseudostates.\nstate_machine<...>::entry_pt<state_name> is a transition's valid target inside the\ncontaining state machine's transition table.\n entry_pt {\n}\nexit_pt\nThis nested type provides the necessary typedef for exit point pseudostates.\nstate_machine<...>::exit_pt<state_name> is a transition's valid source inside the\ncontaining state machine's transition table.\n exit_pt {\n}\ndirect\nThis nested type provides the necessary typedef for an explicit entry inside a submachine.\nstate_machine<...>::direct<state_name> is a transition's valid target inside the\ncontaining state machine's transition table.\n direct {\n}\nstt\nCalling state_machine<frontend>::stt returns a mpl::vector containing the transition table of the state\nmachine. This type can then be used with generate_state_set or generate_event_set.\nargs.hpp\nThis header provides one type, args. which provides the necessary types for a visitor implementation.\nmsm/back/history_policies.hpp\nThis header provides the out-of-the-box history policies supported by MSM. There are 3 such policies.\nEvery history policy must implement the following methods:\nset_initial_states\nThis method is called by msm::back::state_machine when constructed. It gives the policy a chance to\nsave the ids of all initial states (passed as array).\nvoid set_initial_states ();\n(int* const) ;\nhistory_exit\nThis method is called by msm::back::state_machine when the submachine is exited. It gives the policy\na chance to remember the ids of the last active substates of this submachine (passed as array).\nvoid history_exit ();\n(int* const) ;Back-end\n94history_entry\nThis method is called by msm::back::state_machine when the submachine is entered. It gives the policy\na chance to set the active states according to the policy's aim. The policy gets as parameter the event\nwhich activated the submachine and returns an array of active states ids.\ntemplate <class Event> int* const history_exit ();\n(Event const&) ;\nOut-of-the-box policies:\nNoHistory\nThis policy is the default used by state_machine. No active state of a submachine is remembered and\nat every new activation of the submachine, the initial state(s) are activated.\nAlwaysHistory\nThis policy is a non-UML-standard extension. The active state(s) of a submachine is (are) always\nremembered at every new activation of the submachine.\nShallowHistory\nThis policy activates the active state(s) of a submachine if the event is found in the policy's event list.\nmsm/back/default_compile_policy.hpp\nThis header contains the definition of favor_runtime_speed. This policy has two settings:\n•Submachines dispatch faster because their transitions are added into their containing machine's\ntransition table instead of simply forwarding events.\n•It solves transition conflicts at compile-time\nmsm/back/favor_compile_time.hpp\nThis header contains the definition of favor_compile_time. 
This policy has two settings:\n•Submachines dispatch is slower because all events, even those with no dispatch chance, are\nforwarded to submachines. In exchange, no row is added into the containing machine's transition\ntable, which reduces compile-time.\n•It solves transition conflicts at run-time.\nmsm/back/metafunctions.hpp\nThis header contains metafunctions for use by the library. Three metafunctions can be useful for the\nuser:\n•generate_state_set< stt > : generates the list of all states referenced by the transition\ntable stt. If stt is a recursive table (generated by recursive_get_transition_table ), the\nmetafunction finds recursively all states of the submachines. A non-recursive table can be obtained\nwith some_backend_fsm::stt.\n•generate_event_set< stt> : generates the list of all events referenced by the transition\ntable stt. If stt is a recursive table (generated by recursive_get_transition_table ), theBack-end\n95metafunction finds recursively all events of the submachines. A non-recursive table can be obtained\nwith some_backend_fsm::stt.\n•recursive_get_transition_table<fsm> : recursively extends the transition table of the\nstate machine fsm with tables from the submachines.\nmsm/back/tools.hpp\nThis header contains a few metaprogramming tools to get some information out of a state machine.\nfill_state_names\nattributes\nfill_state_names has for attribute:\n•char const** m_names : an already allocated array of const char* where the typeid-generated\nnames of a state machine states will be witten.\nconstructor\n char const** names_to_fill(char const** names_to_fill);\nusage\nfill_state_names is made for use in a mpl::for_each iterating on a state list and writing inside a pre-\nallocated array the state names. Example:\ntypedef some_fsm::stt Stt;\ntypedef msm::back::generate_state_set<Stt>::type all_states; //states\nstatic char const* state_names[mpl::size<all_states>::value];\n// array to fill with names\n// fill the names of the states defined in the state machine\nmpl::for_each<all_states,boost::msm::wrap<mpl::placeholders::_1> > \n (msm::back::fill_state_names<Stt>(state_names));\n// display all active states\nfor (unsigned int i=0;i<some_fsm::nr_regions::value;++i)\n{\n std::cout << \" -> \" \n << state_names[my_fsm_instance.current_state()[i]] \n << std::endl;\n}\nget_state_name\nattributes\nget_state_name has for attributes:\n•std::string& m_name: the return value of the iteration\n•int m_state_id: the searched state's id\nconstructor\nThe constructor takes as argument a reference to the string to fill with the state name and the id which\nmust be searched.\n string& name_to_fill,int state_id(string& name_to_fill,int state_id);Back-end\n96usage\nThis type is made for the same search as in the previous example, using a mpl::for_each to iterate on\nstates. 
After the iteration, the state name reference has been set.\n// we need a fsm's table\ntypedef player::stt Stt;\ntypedef msm::back::generate_state_set<Stt>::type all_states; //all states\nstd::string name_of_open; // id of Open is 1\n// fill name_of_open for state of id 1\nboost::mpl::for_each<all_states,boost::msm::wrap<mpl::placeholders::_1> > \n (msm::back::get_state_name<Stt>(name_of_open,1));\nstd::cout << \"typeid-generated name Open is: \" << name_of_open << std::endl;\ndisplay_type\nattributes\nnone\nusage\nReusing the state list from the previous example, we can output all state names:\nmpl::for_each<all_states,boost::msm::wrap<mpl::placeholders::_1>\n>(msm::back::display_type ());97Name\nFront-end — The front-end headers\nmsm/front/common_states.hpp\nThis header contains the predefined types to serve as base for states or state machines:\n•default_base_state: non-polymorphic empty type.\n•polymorphic_state: type with a virtual destructor, which makes all states polymorphic.\nmsm/front/completion_event.hpp\nThis header contains one type, none. This type has several meanings inside a transition table:\n•as action or guard: that there is no action or guard\n•as target state: that the transition is an internal transition\n•as event: the transition is an anonymous (completion) transition\nmsm/front/functor_row.hpp\nThis header implements the functor front-end's transitions and helpers.\nRow\ndefinition\n template <class Source,class Event,class Target,class\n Action,class Guard> Row {\n}\ntags\nrow_type_tag is defined differently for every specialization:\n•all 5 template parameters means a normal transition with action and guard: typedef row_tag\nrow_type_tag;\n•Row<Source,Event,Target,none,none> a normal transition without action or guard: typedef\n_row_tag row_type_tag;\n•Row<Source,Event,Target,Action,none> a normal transition without guard: typedef\na_row_tag row_type_tag;\n•Row<Source,Event,Target,none,Guard> a normal transition without action: typedef\ng_row_tag row_type_tag;\n•Row<Source,Event,none,Action,none> an internal transition without guard: typedef\na_irow_tag row_type_tag;\n•Row<Source,Event,none,none,Guard> an internal transition without action: typedef\ng_irow_tag row_type_tag;\n•Row<Source,Event,none,none,Guard> an internal transition with action and guard: typedef\nirow_tag row_type_tag;\n•Row<Source,Event,none,none,none> an internal transition without action or guard: typedef\n_irow_tag row_type_tag;Front-end\n98methods\nLike any other front-end, Row implements the two necessary static functions for action and guard call.\nEach function receives as parameter the (deepest-level) state machine processsing the event, the event\nitself, the source and target states and all the states contained in a state machine.\ntemplate <class Fsm,class SourceState,class TargetState, class\nAllStates> static void action_call ();\n(Fsm& fsm,Event const& evt,SourceState&,TargetState,AllStates&) ;\ntemplate <class Fsm,class SourceState,class TargetState, class\nAllStates> static bool guard_call ();\n(Fsm& fsm,Event const& evt,SourceState&,TargetState,AllStates&) ;\nInternal\ndefinition\n template <class Event,class Action,class Guard>\n Internal {\n}\ntags\nrow_type_tag is defined differently for every specialization:\n•all 3 template parameters means an internal transition with action and guard: typedef\nsm_i_row_tag row_type_tag;\n•Internal<Event,none,none> an internal transition without action or guard: typedef\nsm__i_row_tag row_type_tag;\n•Internal<Event,Action,none> an internal 
transition without guard: typedef sm_a_i_row_tag\nrow_type_tag;\n•Internal<Event,none,Guard> an internal transition without action: typedef sm_g_i_row_tag\nrow_type_tag;\nmethods\nLike any other front-end, Internal implements the two necessary static functions for action and guard\ncall. Each function receives as parameter the (deepest-level) state machine processsing the event, the\nevent itself, the source and target states and all the states contained in a state machine.\ntemplate <class Fsm,class SourceState,class TargetState, class\nAllStates> static void action_call ();\n(Fsm& fsm,Event const& evt,SourceState&,TargetState,AllStates&) ;\ntemplate <class Fsm,class SourceState,class TargetState, class\nAllStates> static bool guard_call ();\n(Fsm& fsm,Event const& evt,SourceState&,TargetState,AllStates&) ;\nActionSequence_\nThis functor calls every element of the template Sequence (which are also callable functors) in turn.\nIt is also the underlying implementation of the eUML sequence grammar (action1,action2,...).Front-end\n99definition\n template <class Sequence> ActionSequence_ {\n}\nmethods\nThis helper functor is made for use in a transition table and in a state behavior and therefore implements\nan operator() with 3 and with 4 arguments:\ntemplate <class Evt,class Fsm,class SourceState,class TargetState>\noperator() ();\nEvt const& ,Fsm& ,SourceState& ,TargetState& ;\ntemplate <class Evt,class Fsm,class State> operator() ();\nEvt const&, Fsm&, State&;\nDefer\ndefinition\n Defer {\n}\nmethods\nThis helper functor is made for use in a transition table and therefore implements an operator() with\n4 arguments:\ntemplate <class Evt,class Fsm,class SourceState,class TargetState>\noperator() ();\nEvt const&, Fsm& , SourceState&, TargetState&;\nmsm/front/internal_row.hpp\nThis header implements the internal transition rows for use inside an internal_transition_table. All\nthese row types have no source or target state, as the backend will recognize internal transitions from\nthis internal_transition_table.\nmethods\nLike any other front-end, the following transition row types implements the two necessary static\nfunctions for action and guard call. Each function receives as parameter the (deepest-level) state\nmachine processsing the event, the event itself, the source and target states and all the states contained\nin a state machine.\ntemplate <class Fsm,class SourceState,class TargetState, class\nAllStates> static void action_call ();\n(Fsm& fsm,Event const& evt,SourceState&,TargetState,AllStates&) ;\ntemplate <class Fsm,class SourceState,class TargetState, class\nAllStates> static bool guard_call ();\n(Fsm& fsm,Event const& evt,SourceState&,TargetState,AllStates&) ;Front-end\n100a_internal\ndefinition\nThis is an internal transition with an action called during the transition.\n template< class Event, class CalledForAction, void\n (CalledForAction::*action)(Event const&)>\n a_internal {\n}\ntemplate parameters\n•Event: the event triggering the internal transition.\n•CalledForAction: the type on which the action method will be called. 
It can be either a state of the\ncontaining state machine or the state machine itself.\n•action: a pointer to the method which CalledForAction provides.\ng_internal\nThis is an internal transition with a guard called before the transition and allowing the transition if\nreturning true.\ndefinition\n template< class Event, class CalledForGuard, bool\n (CalledForGuard::*guard)(Event const&)>\n g_internal {\n}\ntemplate parameters\n•Event: the event triggering the internal transition.\n•CalledForGuard: the type on which the guard method will be called. It can be either a state of the\ncontaining state machine or the state machine itself.\n•guard: a pointer to the method which CalledForGuard provides.\ninternal\nThis is an internal transition with a guard called before the transition and allowing the transition if\nreturning true. It also calls an action called during the transition.\ndefinition\n template< class Event, class CalledForAction, void\n (CalledForAction::*action)(Event const&), class\n CalledForGuard, bool (CalledForGuard::*guard)(Event const&)>\n internal {\n}\ntemplate parameters\n•Event: the event triggering the internal transition\n•CalledForAction: the type on which the action method will be called. It can be either a state of the\ncontaining state machine or the state machine itself.Front-end\n101•action: a pointer to the method which CalledForAction provides.\n•CalledForGuard: the type on which the guard method will be called. It can be either a state of the\ncontaining state machine or the state machine itself.\n•guard: a pointer to the method which CalledForGuard provides.\n_internal\nThis is an internal transition without action or guard. This is equivalent to an explicit \"ignore event\".\ndefinition\n template< class Event > _internal {\n}\ntemplate parameters\n•Event: the event triggering the internal transition.\nmsm/front/row2.hpp\nThis header contains the variants of row2, which are an extension of the standard row transitions for\nuse in the transition table. They offer the possibility to define action and guard not only in the state\nmachine, but in any state of the state machine. They can also be used in internal transition tables\nthrough their irow2 variants.\nmethods\nLike any other front-end, the following transition row types implements the two necessary static\nfunctions for action and guard call. Each function receives as parameter the (deepest-level) state\nmachine processsing the event, the event itself, the source and target states and all the states contained\nin a state machine.\ntemplate <class Fsm,class SourceState,class TargetState, class\nAllStates> static void action_call ();\n(Fsm& fsm,Event const& evt,SourceState&,TargetState,AllStates&) ;\ntemplate <class Fsm,class SourceState,class TargetState, class\nAllStates> static bool guard_call ();\n(Fsm& fsm,Event const& evt,SourceState&,TargetState,AllStates&) ;\n_row2\nThis is a transition without action or guard. 
The state machine only changes active state.\ndefinition\n template< class Source, class Event, class Target >\n _row2 {\n}\ntemplate parameters\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•Target: the target state of the transition.Front-end\n102a_row2\nThis is a transition with action and without guard.\ndefinition\n template< class Source, class Event, class Target,\n {\n}\n class CalledForAction, void\n (CalledForAction::*action)(Event const&) > _row2 {\n}\ntemplate parameters\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•Target: the target state of the transition.\n•CalledForAction: the type on which the action method will be called. It can be either a state of the\ncontaining state machine or the state machine itself.\n•action: a pointer to the method which CalledForAction provides.\ng_row2\nThis is a transition with guard and without action.\ndefinition\n template< class Source, class Event, class Target,\n {\n}\n class CalledForGuard, bool (CalledForGuard::*guard)(Event\n const&) > _row2 {\n}\ntemplate parameters\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•Target: the target state of the transition.\n•CalledForGuard: the type on which the guard method will be called. It can be either a state of the\ncontaining state machine or the state machine itself.\n•guard: a pointer to the method which CalledForGuard provides.\nrow2\nThis is a transition with guard and action.\ndefinition\n template< class Source, class Event, class Target,Front-end\n103 {\n}\n class CalledForAction, void\n (CalledForAction::*action)(Event const&), {\n}\n class CalledForGuard, bool (CalledForGuard::*guard)(Event\n const&) > _row2 {\n}\ntemplate parameters\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•Target: the target state of the transition.\n•CalledForAction: the type on which the action method will be called. It can be either a state of the\ncontaining state machine or the state machine itself.\n•action: a pointer to the method which CalledForAction provides.\n•CalledForGuard: the type on which the guard method will be called. It can be either a state of the\ncontaining state machine or the state machine itself.\n•guard: a pointer to the method which CalledForGuard provides.\na_irow2\nThis is an internal transition for use inside a transition table, with action and without guard.\ndefinition\n template< class Source, class Event, {\n}\n class CalledForAction, void\n (CalledForAction::*action)(Event const&) > _row2 {\n}\ntemplate parameters\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•CalledForAction: the type on which the action method will be called. It can be either a state of the\ncontaining state machine or the state machine itself.\n•action: a pointer to the method which CalledForAction provides.\ng_irow2\nThis is an internal transition for use inside a transition table, with guard and without action.\ndefinition\n template< class Source, class Event, {Front-end\n104}\n class CalledForGuard, bool (CalledForGuard::*guard)(Event\n const&) > _row2 {\n}\ntemplate parameters\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•CalledForGuard: the type on which the guard method will be called. 
It can be either a state of the\ncontaining state machine or the state machine itself.\n•guard: a pointer to the method which CalledForGuard provides.\nirow2\nThis is an internal transition for use inside a transition table, with guard and action.\ndefinition\n template< class Source, class Event, {\n}\n class CalledForAction, void\n (CalledForAction::*action)(Event const&), {\n}\n class CalledForGuard, bool (CalledForGuard::*guard)(Event\n const&) > _row2 {\n}\ntemplate parameters\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•CalledForAction: the type on which the action method will be called. It can be either a state of the\ncontaining state machine or the state machine itself.\n•action: a pointer to the method which CalledForAction provides.\n•CalledForGuard: the type on which the guard method will be called. It can be either a state of the\ncontaining state machine or the state machine itself.\n•guard: a pointer to the method which CalledForGuard provides.\nmsm/front/state_machine_def.hpp\nThis header provides the implementation of the basic front-end . It contains one type,\nstate_machine_def\nstate_machine_def definition\nThis type is the basic class for a basic (or possibly any other) front-end. It provides the standard row\ntypes (which includes internal transitions) and a default implementation of the required methods and\ntypedefs.Front-end\n105 template <class Derived,class BaseState =\n default_base_state> state_machine_def {\n}\ntypedefs\n•flag_list: by default, no flag is set in the state machine\n•deferred_events: by default, no event is deferred.\n•configuration: by default, no configuration customization is done.\nrow methods\nLike any other front-end, the following transition row types implements the two necessary static\nfunctions for action and guard call. 
Each function receives as parameter the (deepest-level) state\nmachine processsing the event, the event itself, the source and target states and all the states contained\nin a state machine (ignored).\ntemplate <class Fsm,class SourceState,class TargetState, class\nAllStates> static void action_call ();\n(Fsm& fsm,Event const& evt,SourceState&,TargetState,AllStates&) ;\ntemplate <class Fsm,class SourceState,class TargetState, class\nAllStates> static bool guard_call ();\n(Fsm& fsm,Event const& evt,SourceState&,TargetState,AllStates&) ;\na_row\nThis is a transition with action and without guard.\ntemplate< class Source, class Event, class Target, void\n(Derived::*action)(Event const&) > a_row\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•Target: the target state of the transition.\n•action: a pointer to the method provided by the concrete front-end (represented by Derived ).\ng_row\nThis is a transition with guard and without action.\ntemplate< class Source, class Event, class Target, bool\n(Derived::*guard)(Event const&) > g_row\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•Target: the target state of the transition.\n•guard: a pointer to the method provided by the concrete front-end (represented by Derived ).\nrow\nThis is a transition with guard and action.Front-end\n106template< class Source, class Event, class Target, void\n(Derived::*action)(Event const&), bool (Derived::*guard)(Event\nconst&) > row\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•Target: the target state of the transition.\n•action: a pointer to the method provided by the concrete front-end (represented by Derived ).\n•guard: a pointer to the method provided by the concrete front-end (represented by Derived ).\n_row\nThis is a transition without action or guard. The state machine only changes active state.\ntemplate< class Source, class Event, class Target > _row\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•Target: the target state of the transition.\na_irow\nThis is an internal transition for use inside a transition table, with action and without guard.\ntemplate< class Source, class Event, void (Derived::*action)(Event\nconst&) > a_irow\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•action: a pointer to the method provided by the concrete front-end (represented by Derived ).\ng_irow\nThis is an internal transition for use inside a transition table, with guard and without action.\ntemplate< class Source, class Event, bool (Derived::*guard)(Event\nconst&) > g_irow\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\n•guard: a pointer to the method provided by the concrete front-end (represented by Derived ).\nirow\nThis is an internal transition for use inside a transition table, with guard and action.\ntemplate< class Source, class Event, void (Derived::*action)(Event\nconst&), bool (Derived::*guard)(Event const&) > irow\n•Event: the event triggering the transition.\n•Source: the source state of the transition.Front-end\n107•action: a pointer to the method provided by the concrete front-end (represented by Derived ).\n•guard: a pointer to the method provided by the concrete front-end (represented by Derived ).\n_irow\nThis is an internal transition without action or guard. 
As it does nothing, it means \"ignore event\".\ntemplate< class Source, class Event > _irow\n•Event: the event triggering the transition.\n•Source: the source state of the transition.\nmethods\nstate_machine_def provides a default implementation in case of an event which cannot be\nprocessed by a state machine (no transition found). The implementation is using a BOOST_ASSERT\nso that the error will only be noticed in debug mode. Overwrite this method in your implementation\nto change the behavior.\ntemplate <class Fsm,class Event> static void no_transition ();\n(Event const& ,Fsm&, int state) ;\nstate_machine_def provides a default implementation in case an exception is thrown by a state\n(entry/exit) or transition (action/guard) behavior. The implementation is using a BOOST_ASSERT so\nthat the error will only be noticed in debug mode. Overwrite this method in your implementation to\nchange the behavior. This method will be called only if exception handling is not deactivated (default)\nby defining has_no_message_queue .\ntemplate <class Fsm,class Event> static void exception_caught ();\n(Event const& ,Fsm&, std::exception&) ;\nmsm/front/states.hpp\nThis header provides the different states (except state machines) for the basic front-end (or mixed with\nother front-ends).\ntypes\nThis header provides the following types:\nno_sm_ptr\ndeprecated: default policy for states. It means that states do not need to save a pointer to their containing\nstate machine.\nsm_ptr\ndeprecated: state policy. It means that states need to save a pointer to their containing state machine.\nWhen seeing this flag, the back-end will call set_sm_ptr(fsm*) and give itself as argument.\nstate\nBasic type for simple states. Inherit from this type to define a simple state. The first argument is needed\nif you want your state (and all others used in a concrete state machine) to inherit a basic type for\nlogging or providing a common behavior.\n template<class Base = default_base_state,classFront-end\n108 SMPtrPolicy = no_sm_ptr> state {\n}\nterminate_state\nBasic type for terminate states. Inherit from this type to define a terminate state. The first argument is\nneeded if you want your state (and all others used in a concrete state machine) to inherit a basic type\nfor logging or providing a common behavior.\n template<class Base = default_base_state,class\n SMPtrPolicy = no_sm_ptr> terminate_state {\n}\ninterrupt_state\nBasic type for interrupt states. Interrupt states prevent any further event handling until\nEndInterruptEvent is sent. Inherit from this type to define a terminate state. The first argument is the\nname of the event ending the interrupt. The second argument is needed if you want your state (and\nall others used in a concrete state machine) to inherit a basic type for logging or providing a common\nbehavior.\nThe EndInterruptEvent can also be a sequence of events:\nmpl::vector<EndInterruptEvent,EndInterruptEvent2>.\n template<class EndInterruptEvent,class Base =\n default_base_state, {\n}\n class SMPtrPolicy = no_sm_ptr>\n interrupt_state {\n}\nexplicit_entry\nInherit from this type in addition to the desired state type to enable this state for direct entering.\nThe template parameter gives the region id of the state (regions are numbered in the order of the\ninitial_state typedef).\n template <int ZoneIndex=-1> explicit_entry {\n}\nentry_pseudo_state\nBasic type for entry pseudo states. Entry pseudo states are an predefined entry into a submachine\nand connect two transitions. 
The first argument is the id of the region entered by this state (regions\nare numbered in the order of the initial_state typedef). The second argument is needed if you\nwant your state (and all others used in a concrete state machine) to inherit a basic type for logging\nor providing a common behavior.\n template<int RegionIndex=-1,class Base =\n default_base_state, {\n}\n class SMPtrPolicy = no_sm_ptr>\n entry_pseudo_state {\n}\nexit_pseudo_state\nBasic type for exit pseudo states. Exit pseudo states are an predefined exit from a submachine and\nconnect two transitions. The first argument is the name of the event which will be \"thrown\" out of theFront-end\n109exit point. This event does not need to be the same as the one sent by the inner region but must be\nconvertible from it. The second argument is needed if you want your state (and all others used in a\nconcrete state machine) to inherit a basic type for logging or providing a common behavior.\n template<class Event,class Base =\n default_base_state, {\n}\n class SMPtrPolicy = no_sm_ptr>\n exit_pseudo_state {\n}\nmsm/front/euml/euml.hpp\nThis header includes all of eUML except the STL functors.\nmsm/front/euml/stl.hpp\nThis header includes all the functors for STL support in eUML. These tables show a full description.\nmsm/front/euml/algorithm.hpp\nThis header includes all the functors for STL algorithms support in eUML. These tables show a full\ndescription.\nmsm/front/euml/iteration.hpp\nThis header includes iteration functors for STL support in eUML. This tables shows a full description.\nmsm/front/euml/querying.hpp\nThis header includes querying functors for STL support in eUML. This tables shows a full description.\nmsm/front/euml/transformation.hpp\nThis header includes transformation functors for STL support in eUML. This tables shows a full\ndescription.\nmsm/front/euml/container.hpp\nThis header includes container functors for STL support in eUML (functors calling container\nmethods). This tables shows a full description. It also provides npos for strings.\nNpos_<container type>\nFunctor returning npos for transition or state behaviors. Like all constants, only the functor form exists,\nso parenthesis are necessary. Example:\nstring_find_(event_(m_song),Char_<'S'>(),Size_t_<0>()) !=\nNpos_<string>() // compare result of string::find with npos\nmsm/front/euml/stt_grammar.hpp\nThis header provides the transition table grammars. This includes internal transition tables.Front-end\n110functions\nbuild_stt\nThe function build_stt evaluates the grammar-conform expression as parameter. It returns a transition\ntable, which is a mpl::vector of transitions (rows) or, if the expression is ill-formed (does not match\nthe grammar), the type invalid_type , which will lead to a compile-time static assertion when this\ntransition table is passed to a state machine.\ntemplate<class Expr> [mpl::vector<...> /\nmsm::front::euml::invalid_type] build_stt ();\nExpr const& expr;\nbuild_internal_stt\nThe function build_internal_stt evaluates the grammar-conform expression as parameter. 
It returns a\ntransition table, which is a mpl::vector of transitions (rows) or, if the expression is ill-formed (does\nnot match the grammar), the type invalid_type , which will lead to a compile-time static assertion\nwhen this transition table is passed to a state machine.\ntemplate<class Expr> [mpl::vector<...> /\nmsm::front::euml::invalid_type] build_internal_stt ();\nExpr const& expr;\ngrammars\ntransition table\nThe transition table accepts the following grammar:\nStt := Row | (Stt ',' Stt)\nRow := (Target '==' (SourcePlusEvent)) /* first syntax*/\n | ( (SourcePlusEvent) '==' Target ) /* second syntax*/\n | (SourcePlusEvent) /* internal transitions */\nSourcePlusEvent := (BuildSource '+' BuildEvent)/* standard transition*/ \n | (BuildSource) /* anonymous transition */\nBuildSource := state_tag | (state_tag '/' Action) | (state_tag '[' Guard ']') \n | (state_tag '[' Guard ']' '/' Action)\nBuildEvent := event_tag | (event_tag '/' Action) | (event_tag '[' Guard ']') \n | (event_tag '[' Guard ']' '/' Action)\nThe grammars Action and Guard are defined in state_grammar.hpp and guard_grammar.hpp\nrespectively. state_tag and event_tag are inherited from euml_state (or other state variants) and\neuml_event respectively. For example, following declarations are possible:\ntarget == source + event [guard] / action,\nsource + event [guard] / action == target,\nsource + event [guard] / (action1,action2) == target,\ntarget == source + event [guard] / (action1,action2),\ntarget == source + event,\nsource + event == target,\ntarget == source + event [guard],\nsource + event [guard] == target,\ntarget == source + event / action,\nsource + event /action == target,\nsource / action == target, /*anonymous transition*/\ntarget == source / action, /*anonymous transition*/\nsource + event /action, /* internal transition*/Front-end\n111internal transition table\nThe internal transition table accepts the following grammar:\nIStt := BuildEvent | (IStt ',' IStt)\nBuildEvent being defined for both internal and standard transition tables.\nmsm/front/euml/guard_grammar.hpp\nThis header contains the Guard grammar used in the previous section. This grammar is long but\npretty simple:\nGuard := action_tag | (Guard '&&' Guard) \n | (Guard '||' Guard) | ... /* operators*/\n | (if_then_else_(Guard,Guard,Guard)) | (function (Action,...Action))\nMost C++ operators are supported (address-of is not). With function is meant any eUML\npredefined function or any self-made (using MSM_EUML_METHOD or MSM_EUML_FUNCTION ).\nAction is a grammar defined in state_grammar.hpp.\nmsm/front/euml/state_grammar.hpp\nThis header provides the grammar for actions and the different grammars and functions to build states\nusing eUML.\naction grammar\nLike the guard grammar, this grammar supports relevant C++ operators and eUML functions:\nAction := action_tag | (Action '+' Action) \n | ('--' Action) | ... /* operators*/\n | if_then_else_(Guard,Action,Action) | if_then_(Action) \n | while_(Guard,Action) \n | do_while_(Guard,Action) | for_(Action,Guard,Action,Action) \n | (function(Action,...Action))\nActionSequence := Action | (Action ',' Action)\nRelevant operators are: ++ (post/pre), -- (post/pre), dereferencing, + (unary/binary), - (unary/binary),\n*, /, %, &(bitwise), | (bitwise), ^(bitwise), +=, -=, *=, /=, %=, <<=, >>=, <<, >>, =, [].\nattributes\nThis grammar is used to add attributes to states (or state machines) or events: It evaluates to a\nfusion::map. 
You can use two forms:\n•attributes_ << no_attributes_\n•attributes_ << attribute_1 << ... << attribute_n\nAttributes can be of any default-constructible type (fusion requirement).\nconfigure\nThis grammar also has two forms:\n•configure_ << no_configure_\n•configure_ << type_1 << ... << type_nFront-end\n112This grammar is used to create inside one syntax:\n•flags: configure_ << some_flag where some_flag inherits from\neuml_flag<some_flag> or is defined using BOOST_MSM_EUML_FLAG.\n•deferred events: configure_ << some_event where some_event inherits from\neuml_event<some_event> or is defined using BOOST_MSM_EUML_EVENT or\nBOOST_MSM_EUML_EVENT_WITH_ATTRIBUTES.\n•configuration (message queue, manual deferring, exception handling): configure_ <<\nsome_config where some_config inherits from euml_config<some_config> . At the\nmoment, three predefined objects exist (in msm//front/euml/common.hpp):\n•no_exception: disable catching exceptions\n•no_msg_queue: disable message queue\n•deferred_events: manually enable handling of deferred events\ninitial states\nThe grammar to define initial states for a state machine is: init_\n<< state_1 << ... << state_n where state_1...state_n\ninherit from euml_state or is defined using BOOST_MSM_EUML_STATE,\nBOOST_MSM_EUML_INTERRUPT_STATE, BOOST_MSM_EUML_TERMINATE_STATE,\nBOOST_MSM_EUML_EXPLICIT_ENTRY_STATE, BOOST_MSM_EUML_ENTRY_STATE or\nBOOST_MSM_EUML_EXIT_STATE.\nfunctions\nbuild_sm\nThis function has several overloads. The return type is not relevant to you as only decltype (return\ntype) is what one needs.\nDefines a state machine without entry or exit:\ntemplate <class StateNameTag,class Stt,class Init>\nfunc_state_machine<...> build_sm ();\nStt ,Init;\nDefines a state machine with entry behavior:\ntemplate <class StateNameTag,class Stt,class Init,class Expr1>\nfunc_state_machine<...> build_sm ();\nStt ,Init,Expr1 const&;\nDefines a state machine with entry and exit behaviors:\ntemplate <class StateNameTag,class Stt,class Init,class Expr1, class\nExpr2> func_state_machine<...> build_sm ();\nStt ,Init,Expr1 const&,Expr2 const&;\nDefines a state machine with entry, exit behaviors and attributes:\ntemplate <class StateNameTag,class Stt,class Init,class Expr1, class\nExpr2, class Attributes> func_state_machine<...> build_sm ();\nStt ,Init,Expr1 const&, Expr2 const&, Attributes const&;Front-end\n113Defines a state machine with entry, exit behaviors, attributes and configuration (deferred events, flags):\ntemplate <class StateNameTag,class Stt,class Init,class Expr1, class\nExpr2, class Attributes, class Configure> func_state_machine<...>\nbuild_sm ();\nStt ,Init,Expr1 const&, Expr2 const&, Attributes const&, Configure\nconst&;\nDefines a state machine with entry, exit behaviors, attributes, configuration (deferred events, flags)\nand a base state:\ntemplate <class StateNameTag,class Stt,class Init,class Expr1,\nclass Expr2, class Attributes, class Configure, class Base>\nfunc_state_machine<...> build_sm ();\nStt ,Init,Expr1 const&, Expr2 const&, Attributes const&, Configure\nconst&, Base;\nNotice that this function requires the extra parameter class StateNameTag to disambiguate state\nmachines having the same parameters but still being different.\nbuild_state\nThis function has several overloads. 
The return type is not relevant to you as only decltype (return\ntype) is what one needs.\nDefines a simple state without entry or exit:\nfunc_state<class StateNameTag,...> build_state ();\n;\nDefines a simple state with entry behavior:\ntemplate <class StateNameTag,class Expr1> func_state<...>\nbuild_state ();\nExpr1 const&;\nDefines a simple state with entry and exit behaviors:\ntemplate <class StateNameTag,class Expr1, class Expr2>\nfunc_state<...> build_state ();\nExpr1 const&,Expr2 const&;\nDefines a simple state with entry, exit behaviors and attributes:\ntemplate <class StateNameTag,class Expr1, class Expr2, class\nAttributes> func_state<...> build_state ();\nExpr1 const&, Expr2 const&, Attributes const&;\nDefines a simple state with entry, exit behaviors, attributes and configuration (deferred events, flags):\ntemplate <class StateNameTag,class Expr1, class Expr2, class\nAttributes, class Configure> func_state<...> build_state ();\nExpr1 const&, Expr2 const&, Attributes const&, Configure const&;\nDefines a simple state with entry, exit behaviors, attributes, configuration (deferred events, flags) and\na base state:Front-end\n114template <class StateNameTag,class Expr1, class Expr2, class\nAttributes, class Configure, class Base> func_state<...>\nbuild_state ();\nExpr1 const&, Expr2 const&, Attributes const&, Configure const&,\nBase;\nNotice that this function requires the extra parameter class StateNameTag to disambiguate states\nhaving the same parameters but still being different.\nbuild_terminate_state\nThis function has the same overloads as build_state.\nbuild_interrupt_state\nThis function has several overloads. The return type is not relevant to you as only decltype (return\ntype) is what one needs.\nDefines an interrupt state without entry or exit:\ntemplate <class StateNameTag,class EndInterruptEvent>\nfunc_state<...> build_interrupt_state ();\nEndInterruptEvent const&;\nDefines an interrupt state with entry behavior:\ntemplate <class StateNameTag,class EndInterruptEvent,class Expr1>\nfunc_state<...> build_interrupt_state ();\nEndInterruptEvent const&,Expr1 const&;\nDefines an interrupt state with entry and exit behaviors:\ntemplate <class StateNameTag,class EndInterruptEvent,class Expr1,\nclass Expr2> func_state<...> build_interrupt_state ();\nEndInterruptEvent const&,Expr1 const&,Expr2 const&;\nDefines an interrupt state with entry, exit behaviors and attributes:\ntemplate <class StateNameTag,class EndInterruptEvent,class\nExpr1, class Expr2, class Attributes> func_state<...>\nbuild_interrupt_state ();\nEndInterruptEvent const&,Expr1 const&, Expr2 const&, Attributes\nconst&;\nDefines an interrupt state with entry, exit behaviors, attributes and configuration (deferred events,\nflags):\ntemplate <class StateNameTag,class EndInterruptEvent,class Expr1,\nclass Expr2, class Attributes, class Configure> func_state<...>\nbuild_interrupt_state ();\nEndInterruptEvent const&,Expr1 const&, Expr2 const&, Attributes\nconst&, Configure const&;\nDefines an interrupt state with entry, exit behaviors, attributes, configuration (deferred events, flags)\nand a base state:Front-end\n115template <class StateNameTag,class EndInterruptEvent,class Expr1,\nclass Expr2, class Attributes, class Configure, class Base>\nfunc_state<...> build_interrupt_state ();\nEndInterruptEvent const&,Expr1 const&, Expr2 const&, Attributes\nconst&, Configure const&, Base;\nNotice that this function requires the extra parameter class StateNameTag to disambiguate states\nhaving the same parameters but still being 
different.\nbuild_entry_state\nThis function has several overloads. The return type is not relevant to you as only decltype (return\ntype) is what one needs.\nDefines an entry pseudo state without entry or exit:\ntemplate <class StateNameTag,int RegionIndex> entry_func_state<...>\nbuild_entry_state ();\n;\nDefines an entry pseudo state with entry behavior:\ntemplate <class StateNameTag,int RegionIndex,class Expr1>\nentry_func_state<...> build_entry_state ();\nExpr1 const&;\nDefines an entry pseudo state with entry and exit behaviors:\ntemplate <class StateNameTag,int RegionIndex,class Expr1, class\nExpr2> entry_func_state<...> build_entry_state ();\nExpr1 const&,Expr2 const&;\nDefines an entry pseudo state with entry, exit behaviors and attributes:\ntemplate <class StateNameTag,int RegionIndex,class Expr1, class\nExpr2, class Attributes> entry_func_state<...> build_entry_state ();\nExpr1 const&, Expr2 const&, Attributes const&;\nDefines an entry pseudo state with entry, exit behaviors, attributes and configuration (deferred events,\nflags):\ntemplate <class StateNameTag,int RegionIndex,class Expr1, class\nExpr2, class Attributes, class Configure> entry_func_state<...>\nbuild_entry_state ();\nExpr1 const&, Expr2 const&, Attributes const&, Configure const&;\nDefines an entry pseudo state with entry, exit behaviors, attributes, configuration (deferred events,\nflags) and a base state:\ntemplate <class StateNameTag,int RegionIndex,class Expr1, class\nExpr2, class Attributes, class Configure, class Base>\nentry_func_state<...> build_entry_state ();\nExpr1 const&, Expr2 const&, Attributes const&, Configure const&,\nBase;Front-end\n116Notice that this function requires the extra parameter class StateNameTag to disambiguate states\nhaving the same parameters but still being different.\nbuild_exit_state\nThis function has several overloads. 
The return type is not relevant to you as only decltype (return\ntype) is what one needs.\nDefines an exit pseudo state without entry or exit:\ntemplate <class StateNameTag,class Event> exit_func_state<...>\nbuild_exit_state ();\nEvent const&;\nDefines an exit pseudo state with entry behavior:\ntemplate <class StateNameTag,class Event,class Expr1>\nexit_func_state<...> build_exit_state ();\nEvent const&,Expr1 const&;\nDefines an exit pseudo state with entry and exit behaviors:\ntemplate <class StateNameTag,class Event,class Expr1, class Expr2>\nexit_func_state<...> build_exit_state ();\nEvent const&,Expr1 const&,Expr2 const&;\nDefines an exit pseudo state with entry, exit behaviors and attributes:\ntemplate <class StateNameTag,class Event,class Expr1, class Expr2,\nclass Attributes> exit_func_state<...> build_exit_state ();\nEvent const&,Expr1 const&, Expr2 const&, Attributes const&;\nDefines an exit pseudo state with entry, exit behaviors, attributes and configuration (deferred events,\nflags):\ntemplate <class StateNameTag,class Event,class Expr1, class\nExpr2, class Attributes, class Configure> exit_func_state<...>\nbuild_exit_state ();\nEvent const&,Expr1 const&, Expr2 const&, Attributes const&, Configure\nconst&;\nDefines an exit pseudo state with entry, exit behaviors, attributes, configuration (deferred events, flags)\nand a base state:\ntemplate <class StateNameTag,class Event,class Expr1, class Expr2,\nclass Attributes, class Configure, class Base> exit_func_state<...>\nbuild_exit_state ();\nEvent const&,Expr1 const&, Expr2 const&, Attributes const&, Configure\nconst&, Base;\nNotice that this function requires the extra parameter class StateNameTag to disambiguate states\nhaving the same parameters but still being different.\nbuild_explicit_entry_state\nThis function has the same overloads as build_entry_state and explicit_entry_func_state as return type.Front-end\n117msm/front/euml/common.hpp\ntypes\neuml_event\nThe basic type for events with eUML.\n template <class EventName> euml_event; {\n}\nstruct play : euml_event<play>{};\neuml_state\nThe basic type for states with eUML. 
You will usually not use\nthis type directly as it is easier to use BOOST_MSM_EUML_STATE,\nBOOST_MSM_EUML_INTERRUPT_STATE, BOOST_MSM_EUML_TERMINATE_STATE,\nBOOST_MSM_EUML_EXPLICIT_ENTRY_STATE, BOOST_MSM_EUML_ENTRY_STATE or\nBOOST_MSM_EUML_EXIT_STATE.\n template <class StateName> euml_state; {\n}\nYou can however use this type directly if you want to provide your state with extra functions or provide\nentry or exit behaviors without functors, for example:\nstruct Empty : public msm::front::state<> , public euml_state<Empty> \n{\n void foo() {...}\n template <class Event,class Fsm>\n void on_entry(Event const& evt,Fsm& fsm){...}\n};\neuml_flag\nThe basic type for flags with eUML.\n template <class FlagName> euml_flag; {\n}\nstruct PlayingPaused: euml_flag<PlayingPaused>{};\neuml_action\nThe basic type for state or transition behaviors and guards with eUML.\n template <class AcionName> euml_action; {\n}\nstruct close_drawer : euml_action<close_drawer>\n{\n template <class Fsm,class Evt,class SourceState,class TargetState>\n void operator()(Evt const& , Fsm&, SourceState& ,TargetState& ) {...}\n};\nOr, as state entry or exit behavior:\nstruct Playing_Entry : euml_action<Playing_Entry>Front-end\n118{\n template <class Event,class Fsm,class State>\n void operator()(Event const&,Fsm& fsm,State& ){...}\n};\neuml_config\nThe basic type for configuration possibilities with eUML.\n template <class ConfigName> euml_config; {\n}\nYou normally do not use this type directly but instead the instances of predefined configuration:\n•no_exception: disable catching exceptions\n•no_msg_queue: disable message queue. The message queue allows you to send an event for\nprocesing while in an event processing.\n•deferred_events: manually enable handling of deferred events\ninvalid_type\nType returned by grammar parsers if the grammar is invalid. Seeing this type will result in a static\nassertion.\nno_action\nPlaceholder type for use in entry/exit or transition behaviors, which does absolutely nothing.\nsource_\nGeneric object or function for the source state of a given transition:\n•as object: returns by reference the source state of a transition, usually to be used by another function\n(usually one created by MSM_EUML_METHOD or MSM_EUML_FUNCTION).\nExample:\nsome_user_function_(source_)\n•as function: returns by reference the attribute passed as parameter.\nExample:\nsource_(m_counter)++\ntarget_\nGeneric object or function for the target state of a given transition:\n•as object: returns by reference the target state of a transition, usually to be used by another function\n(usually one created by MSM_EUML_METHOD or MSM_EUML_FUNCTION).\nExample:\nsome_user_function_(target_)\n•as function: returns by reference the attribute passed as parameter.\nExample:Front-end\n119target_(m_counter)++\nstate_\nGeneric object or function for the state of a given entry / exit behavior. 
state_ means source_ while in\nthe context of an exit behavior and target_ in the context of an entry behavior:\n•as object: returns by reference the current state, usually to be used by another function (usually one\ncreated by MSM_EUML_METHOD or MSM_EUML_FUNCTION).\nExample:\nsome_user_function_(state_) // calls some_user_function on the current state\n•as function: returns by reference the attribute passed as parameter.\nExample:\nstate_(m_counter)++\nevent_\nGeneric object or function for the event triggering a given transition (valid in a transition behavior,\nas well as in state entry/exit behaviors):\n•as object: returns by reference the event of a transition, usually to be used by another function\n(usually one created by MSM_EUML_METHOD or MSM_EUML_FUNCTION).\nExample:\nsome_user_function_(event_)\n•as function: returns by reference the attribute passed as parameter.\nExample:\nevent_(m_counter)++\nfsm_\nGeneric object or function for the state machine containing a given transition:\n•as object: returns by reference the event of a transition, usually to be used by another function\n(usually one created by MSM_EUML_METHOD or MSM_EUML_FUNCTION).\nExample:\nsome_user_function_(fsm_)\n•as function: returns by reference the attribute passed as parameter.\nExample:\nfsm_(m_counter)++\nsubstate_\nGeneric object or function returning a state of a given state machine:\n•with 1 parameter: returns by reference the state passed as parameter, usually to be used by another\nfunction (usually one created by MSM_EUML_METHOD or MSM_EUML_FUNCTION).Front-end\n120Example:\nsome_user_function_(substate_(my_state))\n•with 2 parameters: returns by reference the state passed as first parameter from the state\nmachine passed as second parameter, usually to be used by another function (usually one created\nby MSM_EUML_METHOD or MSM_EUML_FUNCTION). This makes sense when used in\ncombination with attribute_.\nExample (equivalent to the previous example):\nsome_user_function_(substate_(my_state,fsm_))\nattribute_\nGeneric object or function returning the attribute passed (by name) as second parameter of the thing\npassed as first (a state, event or state machine). Example:\nattribute_(substate_(my_state),cd_name_attribute)++\nTrue_\nFunctor returning true for transition or state behaviors. Like all constants, only the functor form exists,\nso parenthesis are necessary. Example:\nif_then_(True_(),/* some action always called*/)\nFalse_\nFunctor returning false for transition or state behaviors. Like all constants, only the functor form exists,\nso parenthesis are necessary. Example:\nif_then_(False_(),/* some action never called */)\nInt_<int value>\nFunctor returning an integer value for transition or state behaviors. Like all constants, only the functor\nform exists, so parenthesis are necessary. Example:\ntarget_(m_ringing_cpt) = Int_<RINGING_TIME>() // RINGING_TIME is a constant\nChar_<char value>\nFunctor returning a char value for transition or state behaviors. Like all constants, only the functor\nform exists, so parenthesis are necessary. Example:\n// look for 'S' in event.m_song\n[string_find_(event_(m_song),Char_<'S'>(),Size_t_<0>()) != Npos_<string>()]\nSize_t_<size_t value>\nFunctor returning a size_t value for transition or state behaviors. Like all constants, only the functor\nform exists, so parenthesis are necessary. Example:\nsubstr_(event_(m_song),Size_t_<1>()) // returns a substring of event.m_song\nString_ < mpl::string >\nFunctor returning a string for transition or state behaviors. 
Like all constants, only the functor form\nexists, so parenthesis are necessary. Requires boost >= 1.40 for mpl::string.Front-end\n121Example:\n// adds \"Let it be\" to fsm.m_src_container\npush_back_(fsm_(m_src_container), String_<mpl::string<'Let','it ','be'> >())\nPredicate_ < some_stl_compatible_functor >\nThis functor eUML-enables a STL functor (for use in an algorithm). This is necessary because all\nwhat is in the transition table must be a eUML terminal.\nExample:\n//equivalent to: \n//std::accumulate(fsm.m_vec.begin(),fsm.m_vec.end(),1,std::plus<int>())== 1\naccumulate_(begin_(fsm_(m_vec)),end_(fsm_(m_vec)),Int_<1>(),\n Predicate_<std::plus<int> >()) == Int_<1>())\nprocess_\nThis function sends an event to up to 4 state machines by calling process_event on them:\n•process_(some_event) : processes an event in the current (containing) state machine.\n•process_(some_event [,fsm1...fsm4] ) : processes the same event in the 1-4 state\nmachines passed as argument.\nprocess2_\nThis function sends an event to up to 3 state machines by calling process_event on them and\ncopy-constructing the event from the data passed as second parameter:\n•process2_(some_event, some_data) : processes an event in the current (containing)\nstate machine.\n•process2_(some_event, some_data [,fsm1...fsm3] ) : processes the same event\nin the 1-3 state machines passed as argument.\nExample:\n// processes NotFound on current state machine, \n// copy-constructed with event.m_song\nprocess2_(NotFound,event_(m_song))\nWith the following definitions:\nBOOST_MSM_EUML_DECLARE_ATTRIBUTE(std::string,m_song)//declaration of m_song\nNotFound (const string& data) // copy-constructor of NotFound\nis_flag_\nThis function tells if a flag is active by calling is_flag_active on the current state machine or\none passed as parameter:\n•is_flag_(some_flag) : calls is_flag_active on the current (containing) state machine.\n•is_flag_(some_flag, some_fsm) :calls is_flag_active on the state machine.passed\nas argument.\ndefer_\nThis object defers the current event by calling defer_event on the current state machine. Example:Front-end\n122Empty() + play() / defer_\nexplicit_(submachine-name,state-name)\nUsed as transition's target, causes an explicit entry into the given state from the given submachine.\nSeveral explicit_ as targets, separated by commas, means a fork. The state must have been declared\nas such using BOOST_MSM_EUML_EXPLICIT_ENTRY_STATE.\nentry_pt_(submachine-name,state-name)\nUsed as transition's target from a containing state machine, causes submachine-name to be entered\nusing the given entry pseudo-state. This state must have been declared as pseudo entry using\nBOOST_MSM_EUML_ENTRY_STATE.\nexit_pt_(submachine-name,state-name)\nUsed as transition's source from a containing state machine, causes submachine-name to be left\nusing the given exit pseudo-state. 
This state must have been declared as pseudo exit using\nBOOST_MSM_EUML_EXIT_STATE.\nMSM_EUML_FUNCTION\nThis macro creates a eUML function and a functor for use with the functor front-end, based on a free\nfunction:\n•first parameter: the name of the functor\n•second parameter: the underlying function\n•third parameter: the eUML function name\n•fourth parameter: the return type if used in a transition behavior\n•fifth parameter: the return type if used in a state behavior (entry/exit)\nNote that the function itself can take up to 5 arguments.\nExample:\nMSM_EUML_FUNCTION(BinarySearch_,std::binary_search,binary_search_,bool,bool)\nCan be used like:\nbinary_search_(begin_(fsm_(m_var)),end_(fsm_(m_var)),Int_<9>())\nMSM_EUML_METHOD\nThis macro creates a eUML function and a functor for use with the functor front-end, based on a\nmethod:\n•first parameter: the name of the functor\n•second parameter: the underlying function\n•third parameter: the eUML function name\n•fourth parameter: the return type if used in a transition behavior\n•fifth parameter: the return type if used in a state behavior (entry/exit)\nNote that the method itself can take up to 4 arguments (5 like for a free function - 1 for the object\non which the method is called).Front-end\n123Example:\nstruct Empty : public msm::front::state<> , public euml_state<Empty> \n{\n void activate_empty() {std::cout << \"switching to Empty \" << std::endl;}\n... \n};\nMSM_EUML_METHOD(ActivateEmpty_,activate_empty,activate_empty_,void,void)\nCan be used like:\nEmpty == Open + open_close / (close_drawer , activate_empty_(target_))\nBOOST_MSM_EUML_ACTION(action-instance-name)\nThis macro declares a behavior type and a const instance for use in state or transition behaviors. The\naction implementation itself follows the macro declaration, for example:\nBOOST_MSM_EUML_ACTION(good_disk_format)\n{\n template <class Fsm,class Evt,class SourceState,class TargetState>\n void/bool operator()(Evt const& evt,Fsm&,SourceState& ,TargetState& ){...}\n};\nBOOST_MSM_EUML_FLAG(flag-instance-name)\nThis macro declares a flag type and a const instance for use in behaviors.\nBOOST_MSM_EUML_FLAG_NAME(flag-instance-name)\nThis macro returns the name of the flag type generated by BOOST_MSM_EUML_FLAG. You need\nthis where the type is required (usually with the back-end method is_flag_active). For example:\nfsm.is_flag_active<BOOST_MSM_EUML_FLAG_NAME(CDLoaded)>()\nBOOST_MSM_EUML_DECLARE_ATTRIBUTE(event-type,event-name)\nThis macro declares an attribute called event-name of type event-type. This attribute can then be made\npart of an attribute list using BOOST_MSM_EUML_ATTRIBUTES.\nBOOST_MSM_EUML_ATTRIBUTES(attributes-expression,attributes-name)\nThis macro declares an attribute list called attributes-name based on the expression\nas first argument. 
These attributes can then be made part of an event\nusing BOOST_MSM_EUML_EVENT_WITH_ATTRIBUTES, of a state as 3rd parameter\nof BOOST_MSM_EUML_STATE or of a state machine as 5th parameter of\nBOOST_MSM_EUML_DECLARE_STATE_MACHINE.\nAttributes are added using left-shift, for example:\n// m_song is of type std::string\nBOOST_MSM_EUML_DECLARE_ATTRIBUTE(std::string,m_song)\n// contains one attribute, m_song\nBOOST_MSM_EUML_ATTRIBUTES((attributes_ << m_song ), FoundDef)\nBOOST_MSM_EUML_EVENT(event-instance name)\nThis macro defines an event type (event-instance-name_helper) and declares a const instance of this\nevent type called event-instance-name for use in a transition table or state behaviors.Front-end\n124BOOST_MSM_EUML_EVENT_WITH_ATTRIBUTES(event-instance-\nname,attributes)\nThis macro defines an event type (event-instance-name_helper) and declares a const instance of this\nevent type called event-instance-name for use in a transition table or state behaviors. The event will\nhave as attributes the ones passed by the second argument:\nBOOST_MSM_EUML_EVENT_WITH_ATTRIBUTES(Found,FoundDef)\nThe created event instance supports operator()(attributes) so that\nmy_back_end.process_event(Found(some_string))\nis possible.\nBOOST_MSM_EUML_EVENT_NAME(event-instance-name)\nThis macro returns the name of the event type generated by BOOST_MSM_EUML_EVENT or\nBOOST_MSM_EUML_EVENT_WITH_ATTRIBUTES. You need this where the type is required\n(usually inside a back-end definition). For example:\ntypedef msm::back::state_machine<Playing_,\nmsm::back::ShallowHistory<mpl::vector<BOOST_MSM_EUML_EVENT_NAME(end_pause)\n> > > Playing_type;\nBOOST_MSM_EUML_STATE(build-expression,state-instance-name)\nThis macro defines a state type (state-instance-name_helper) and declares a const instance of this state\ntype called state-instance-name for use in a transition table or state behaviors.\nThere are several possibilitites for the expression syntax:\n•(): state without entry or exit action.\n•(Expr1): state with entry but no exit action.\n•(Expr1,Expr2): state with entry and exit action.\n•(Expr1,Expr2,Attributes): state with entry and exit action, defining some attributes.\n•(Expr1,Expr2,Attributes,Configure): state with entry and exit action, defining some attributes and\nflags (standard MSM flags) or deferred events (standard MSM deferred events).\n•(Expr1,Expr2,Attributes,Configure,Base): state with entry and exit action, defining some attributes,\nflags and deferred events (plain msm deferred events) and a non-default base state (as defined in\nstandard MSM).\nBOOST_MSM_EUML_INTERRUPT_STATE(build-expression,state-instance-\nname)\nThis macro defines an interrupt state type (state-instance-name_helper) and declares a const instance\nof this state type called state-instance-name for use in a transition table or state behaviors.\nThere are several possibilitites for the expression syntax. 
In all of them, the first argument is the name\nof the event (generated by one of the previous macros) ending the interrupt:\n•(end_interrupt_event): interrupt state without entry or exit action.\n•(end_interrupt_event,Expr1): interrupt state with entry but no exit action.Front-end\n125•(end_interrupt_event,Expr1,Expr2): interrupt state with entry and exit action.\n•(end_interrupt_event,Expr1,Expr2,Attributes): interrupt state with entry and exit action, defining\nsome attributes.\n•(end_interrupt_event,Expr1,Expr2,Attributes,Configure): interrupt state with entry and exit action,\ndefining some attributes and flags (standard MSM flags) or deferred events (standard MSM deferred\nevents).\n•(end_interrupt_event,Expr1,Expr2,Attributes,Configure,Base): interrupt state with entry and exit\naction, defining some attributes, flags and deferred events (plain msm deferred events) and a non-\ndefault base state (as defined in standard MSM).\nBOOST_MSM_EUML_TERMINATE_STATE(build-expression,state-instance-\nname)\nThis macro defines a terminate pseudo-state type (state-instance-name_helper) and declares a const\ninstance of this state type called state-instance-name for use in a transition table or state behaviors.\nThere are several possibilitites for the expression syntax:\n•(): terminate pseudo-state without entry or exit action.\n•(Expr1): terminate pseudo-state with entry but no exit action.\n•(Expr1,Expr2): terminate pseudo-state with entry and exit action.\n•(Expr1,Expr2,Attributes): terminate pseudo-state with entry and exit action, defining some\nattributes.\n•(Expr1,Expr2,Attributes,Configure): terminate pseudo-state with entry and exit action, defining\nsome attributes and flags (standard MSM flags) or deferred events (standard MSM deferred events).\n•(Expr1,Expr2,Attributes,Configure,Base): terminate pseudo-state with entry and exit action,\ndefining some attributes, flags and deferred events (plain msm deferred events) and a non-default\nbase state (as defined in standard MSM).\nBOOST_MSM_EUML_EXIT_STATE(build-expression,state-instance-name)\nThis macro defines an exit pseudo-state type (state-instance-name_helper) and declares a const\ninstance of this state type called state-instance-name for use in a transition table or state behaviors.\nThere are several possibilitites for the expression syntax:\n•(forwarded_event):exit pseudo-state without entry or exit action.\n•(forwarded_event,Expr1): exit pseudo-state with entry but no exit action.\n•(forwarded_event,Expr1,Expr2): exit pseudo-state with entry and exit action.\n•(forwarded_event,Expr1,Expr2,Attributes): exit pseudo-state with entry and exit action, defining\nsome attributes.\n•(forwarded_event,Expr1,Expr2,Attributes,Configure): exit pseudo-state with entry and exit action,\ndefining some attributes and flags (standard MSM flags) or deferred events (standard MSM deferred\nevents).\n•(forwarded_event,Expr1,Expr2,Attributes,Configure,Base): exit pseudo-state with entry and exit\naction, defining some attributes, flags and deferred events (plain msm deferred events) and a non-\ndefault base state (as defined in standard MSM).Front-end\n126Note that the forwarded_event must be constructible from the event sent by the submachine containing\nthe exit point.\nBOOST_MSM_EUML_ENTRY_STATE(int region-index,build-expression,state-\ninstance-name)\nThis macro defines an entry pseudo-state type (state-instance-name_helper) and declares a const\ninstance of this state type called state-instance-name for use in a transition table or state 
behaviors.\nThere are several possibilitites for the expression syntax:\n•(): entry pseudo-state without entry or exit action.\n•(Expr1): entry pseudo-state with entry but no exit action.\n•(Expr1,Expr2): entry pseudo-state with entry and exit action.\n•(Expr1,Expr2,Attributes): entry pseudo-state with entry and exit action, defining some attributes.\n•(Expr1,Expr2,Attributes,Configure): entry pseudo-state with entry and exit action, defining some\nattributes and flags (standard MSM flags) or deferred events (standard MSM deferred events).\n•(Expr1,Expr2,Attributes,Configure,Base): entry pseudo-state with entry and exit action, defining\nsome attributes, flags and deferred events (plain msm deferred events) and a non-default base state\n(as defined in standard MSM).\nBOOST_MSM_EUML_EXPLICIT_ENTRY_STATE(int region-index,build-\nexpression,state-instance-name)\nThis macro defines a submachine's substate type (state-instance-name_helper), which can be explicitly\nentered and also declares a const instance of this state type called state-instance-name for use in a\ntransition table or state behaviors.\nThere are several possibilitites for the expression syntax:\n•(): state without entry or exit action.\n•(Expr1): state with entry but no exit action.\n•(Expr1,Expr2): state with entry and exit action.\n•(Expr1,Expr2,Attributes): state with entry and exit action, defining some attributes.\n•(Expr1,Expr2,Attributes,Configure): state with entry and exit action, defining some attributes and\nflags (standard MSM flags) or deferred events (standard MSM deferred events).\n•(Expr1,Expr2,Attributes,Configure,Base): state with entry and exit action, defining some attributes,\nflags and deferred events (plain msm deferred events) and a non-default base state (as defined in\nstandard MSM).\nBOOST_MSM_EUML_STATE_NAME(state-instance-name)\nThis macro returns the name of the state type generated by BOOST_MSM_EUML_STATE or other\nstate macros. You need this where the type is required (usually using a backend function). 
For example:\nfsm.get_state<BOOST_MSM_EUML_STATE_NAME(StringFind)&>().some_state_function();\nBOOST_MSM_EUML_DECLARE_STATE(build-expression,state-instance-name)\nLike BOOST_MSM_EUML_STATE but does not provide an instance, simply a type declaration.\nBOOST_MSM_EUML_DECLARE_INTERRUPT_STATE(build-expression,state-instance-name)\nLike BOOST_MSM_EUML_INTERRUPT_STATE but does not provide an instance, simply a type\ndeclaration.\nBOOST_MSM_EUML_DECLARE_TERMINATE_STATE(build-expression,state-instance-name)\nLike BOOST_MSM_EUML_TERMINATE_STATE but does not provide an instance, simply a type\ndeclaration.\nBOOST_MSM_EUML_DECLARE_EXIT_STATE(build-expression,state-instance-name)\nLike BOOST_MSM_EUML_EXIT_STATE but does not provide an instance, simply a type\ndeclaration.\nBOOST_MSM_EUML_DECLARE_ENTRY_STATE(int region-index,build-expression,state-instance-name)\nLike BOOST_MSM_EUML_ENTRY_STATE but does not provide an instance, simply a type\ndeclaration.\nBOOST_MSM_EUML_DECLARE_EXPLICIT_ENTRY_STATE(int region-index,build-expression,state-instance-name)\nLike BOOST_MSM_EUML_EXPLICIT_ENTRY_STATE but does not provide an instance, simply\na type declaration.\nBOOST_MSM_EUML_TRANSITION_TABLE(expression, table-instance-name)\nThis macro declares a transition table type and also declares a const instance\nof the table which can then be used in a state machine declaration (see\nBOOST_MSM_EUML_DECLARE_STATE_MACHINE). The expression must follow the\ntransition table grammar.\nBOOST_MSM_EUML_DECLARE_TRANSITION_TABLE(expression,table-instance-name)\nLike BOOST_MSM_EUML_TRANSITION_TABLE but does not provide an instance, simply a type\ndeclaration.\nBOOST_MSM_EUML_INTERNAL_TRANSITION_TABLE(expression, table-instance-name)\nThis macro declares a transition table type and also declares a const instance of the table. The\nexpression must follow the transition table grammar. For the moment, this macro is not used.\nBOOST_MSM_EUML_DECLARE_INTERNAL_TRANSITION_TABLE(expression,table-instance-name)\nLike BOOST_MSM_EUML_INTERNAL_TRANSITION_TABLE but does not provide an instance, simply a type\ndeclaration. This is currently the only way to declare an internal transition table with eUML. For\nexample:\nBOOST_MSM_EUML_DECLARE_STATE((Open_Entry,Open_Exit),Open_def)\nstruct Open_impl : public Open_def\n{\n    BOOST_MSM_EUML_DECLARE_INTERNAL_TRANSITION_TABLE((\n        open_close [internal_guard1] / internal_action1 ,\n        open_close [internal_guard2] / internal_action2\n    ))\n}; " } ]
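A minimal sketch of how the macros above typically fit together, added here for illustration only: the state, event and machine names (Searching, Displaying, Player_) are made up, and the attribute/event lines simply mirror the FoundDef example quoted earlier. It assumes the usual eUML headers and is not code from the reference itself.

#include <string>
#include <boost/msm/back/state_machine.hpp>
#include <boost/msm/front/euml/euml.hpp>
#include <boost/msm/front/euml/state_grammar.hpp>

namespace msm = boost::msm;
using namespace boost::msm::front::euml;

// an attribute and an event carrying it (cf. the Found/FoundDef example above)
BOOST_MSM_EUML_DECLARE_ATTRIBUTE(std::string, m_song)
BOOST_MSM_EUML_ATTRIBUTES((attributes_ << m_song), FoundDef)
BOOST_MSM_EUML_EVENT_WITH_ATTRIBUTES(Found, FoundDef)
BOOST_MSM_EUML_EVENT(reset)

// two plain states, without entry or exit actions
BOOST_MSM_EUML_STATE((), Searching)
BOOST_MSM_EUML_STATE((), Displaying)

// transition table rows read: Target == Source + Event
BOOST_MSM_EUML_TRANSITION_TABLE((
    Displaying == Searching  + Found,
    Searching  == Displaying + reset
), stt)

// front-end definition (table + initial state) and back-end
BOOST_MSM_EUML_DECLARE_STATE_MACHINE((stt, init_ << Searching), Player_)
typedef msm::back::state_machine<Player_> Player;

int main()
{
    Player p;
    p.start();
    p.process_event(Found(std::string("some song"))); // attribute-carrying event
    p.process_event(reset);
}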
{ "category": "App Definition and Development", "file_name": "msm.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "Transform Iterator\nAuthor : David Abrahams, Jeremy Siek, Thomas Witt\nContact : dave@boost-consulting.com ,jsiek@osl.iu.edu ,witt@ive.uni-hannover.de\nOrganization :Boost Consulting , Indiana University Open Systems Lab , University of\nHanover Institute for Transport Railway Operation and Construction\nDate : 2004-11-01\nCopyright : Copyright David Abrahams, Jeremy Siek, and Thomas Witt 2003.\nabstract: The transform iterator adapts an iterator by modifying the operator* to apply\na function object to the result of dereferencing the iterator and returning the result.\nTable of Contents\ntransform_iterator synopsis\ntransform_iterator requirements\ntransform_iterator models\ntransform_iterator operations\nExample\ntransform_iterator synopsis\ntemplate <class UnaryFunction,\nclass Iterator,\nclass Reference = use_default,\nclass Value = use_default>\nclass transform_iterator\n{\npublic:\ntypedef /* see below */ value_type;\ntypedef /* see below */ reference;\ntypedef /* see below */ pointer;\ntypedef iterator_traits<Iterator>::difference_type difference_type;\ntypedef /* see below */ iterator_category;\ntransform_iterator();\ntransform_iterator(Iterator const& x, UnaryFunction f);\ntemplate<class F2, class I2, class R2, class V2>\ntransform_iterator(\ntransform_iterator<F2, I2, R2, V2> const& t\n1, typename enable_if_convertible<I2, Iterator>::type* = 0 // ex-\nposition only\n, typename enable_if_convertible<F2, UnaryFunction>::type* = 0 // ex-\nposition only\n);\nUnaryFunction functor() const;\nIterator const& base() const;\nreference operator*() const;\ntransform_iterator& operator++();\ntransform_iterator& operator--();\nprivate:\nIterator m_iterator; // exposition only\nUnaryFunction m_f; // exposition only\n};\nIfReference isuse_default then the reference member of transform_iterator isresult_of<UnaryFunction(iterator_traits<Iterator>::reference)>::type .\nOtherwise, reference isReference .\nIfValue isuse_default then the value_type member is remove_cv<remove_reference<reference>\n>::type . Otherwise, value_type isValue .\nIfIterator models Readable Lvalue Iterator and if Iterator models Random Access Traver-\nsal Iterator, then iterator_category is convertible to random_access_iterator_tag . Otherwise, if\nIterator models Bidirectional Traversal Iterator, then iterator_category is convertible to bidi-\nrectional_iterator_tag . 
Otherwise iterator_category is convertible to forward_iterator_tag .\nIfIterator does not model Readable Lvalue Iterator then iterator_category is convertible to in-\nput_iterator_tag .\ntransform_iterator requirements\nThe type UnaryFunction must be Assignable, Copy Constructible, and the expression f(*i) must be\nvalid where fis an object of type UnaryFunction ,iis an object of type Iterator , and where the type\noff(*i) must be result_of<UnaryFunction(iterator_traits<Iterator>::reference)>::type .\nThe argument Iterator shall model Readable Iterator.\ntransform_iterator models\nThe resulting transform_iterator models the most refined of the following that is also modeled by\nIterator .\n•Writable Lvalue Iterator if transform_iterator::reference is a non-const reference.\n•Readable Lvalue Iterator if transform_iterator::reference is a const reference.\n•Readable Iterator otherwise.\nThe transform_iterator models the most refined standard traversal concept that is modeled by\ntheIterator argument.\nIftransform_iterator is a model of Readable Lvalue Iterator then it models the following original\niterator concepts depending on what the Iterator argument models.\nIfIterator models then transform_iterator models\nSingle Pass Iterator Input Iterator\nForward Traversal Iterator Forward Iterator\nBidirectional Traversal Iterator Bidirectional Iterator\nRandom Access Traversal Iterator Random Access Iterator\n2Iftransform_iterator models Writable Lvalue Iterator then it is a mutable iterator (as defined in\nthe old iterator requirements).\ntransform_iterator<F1, X, R1, V1> is interoperable with transform_iterator<F2, Y, R2, V2>\nif and only if Xis interoperable with Y.\ntransform_iterator operations\nIn addition to the operations required by the concepts modeled by transform_iterator ,trans-\nform_iterator provides the following operations.\ntransform_iterator();\nReturns: An instance of transform_iterator with m_f and m_iterator default con-\nstructed.\ntransform_iterator(Iterator const& x, UnaryFunction f);\nReturns: An instance of transform_iterator with m_finitialized to fand m_iterator\ninitialized to x.\ntemplate<class F2, class I2, class R2, class V2>\ntransform_iterator(\ntransform_iterator<F2, I2, R2, V2> const& t\n, typename enable_if_convertible<I2, Iterator>::type* = 0 // expo-\nsition only\n, typename enable_if_convertible<F2, UnaryFunction>::type* = 0 // expo-\nsition only\n);\nReturns: An instance of transform_iterator with m_finitialized to t.functor() and\nm_iterator initialized to t.base() .\nRequires: OtherIterator is implicitly convertible to Iterator .\nUnaryFunction functor() const;\nReturns: m_f\nIterator const& base() const;\nReturns: m_iterator\nreference operator*() const;\nReturns: m_f(*m_iterator)\ntransform_iterator& operator++();\nEffects: ++m_iterator\nReturns: *this\ntransform_iterator& operator--();\nEffects: --m_iterator\nReturns: *this\ntemplate <class UnaryFunction, class Iterator>\ntransform_iterator<UnaryFunction, Iterator>\nmake_transform_iterator(Iterator it, UnaryFunction fun);\n3Returns: An instance of transform_iterator<UnaryFunction, Iterator> with m_fini-\ntialized to fandm_iterator initialized to x.\ntemplate <class UnaryFunction, class Iterator>\ntransform_iterator<UnaryFunction, Iterator>\nmake_transform_iterator(Iterator it);\nReturns: An instance of transform_iterator<UnaryFunction, Iterator> with m_fde-\nfault constructed and m_iterator initialized to x.\nExample\nThis is a simple example of using the transform iterators class to generate 
iterators that multiply (or\nadd to) the value returned by dereferencing the iterator. It would be cooler to use lambda library in\nthis example.\nint x[] = { 1, 2, 3, 4, 5, 6, 7, 8 };\nconst int N = sizeof(x)/sizeof(int);\ntypedef boost::binder1st< std::multiplies<int> > Function;\ntypedef boost::transform_iterator<Function, int*> doubling_iterator;\ndoubling_iterator i(x, boost::bind1st(std::multiplies<int>(), 2)),\ni_end(x + N, boost::bind1st(std::multiplies<int>(), 2));\nstd::cout << \"multiplying the array by 2:\" << std::endl;\nwhile (i != i_end)\nstd::cout << *i++ << \" \";\nstd::cout << std::endl;\nstd::cout << \"adding 4 to each element in the array:\" << std::endl;\nstd::copy(boost::make_transform_iterator(x, boost::bind1st(std::plus<int>(), 4)),\nboost::make_transform_iterator(x + N, boost::bind1st(std::plus<int>(), 4)),\nstd::ostream_iterator<int>(std::cout, \" \"));\nstd::cout << std::endl;\nThe output is:\nmultiplying the array by 2:\n2 4 6 8 10 12 14 16\nadding 4 to each element in the array:\n5 6 7 8 9 10 11 12\nThe source code for this example can be found here.\n4" } ]
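As a supplementary note (not part of the original page): std::bind1st was deprecated in C++11 and removed in C++17, so on current compilers the second half of the example is more easily written with a small hand-written functor, which also keeps the UnaryFunction Assignable and Copy Constructible as required above. A minimal sketch:

#include <algorithm>
#include <iostream>
#include <iterator>
#include <boost/iterator/transform_iterator.hpp>

// plain functor: default constructible, copyable and assignable
struct plus_n
{
    int n;
    plus_n(int n = 0) : n(n) {}
    int operator()(int x) const { return x + n; }
};

int main()
{
    int x[] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    const int N = sizeof(x) / sizeof(int);

    std::cout << "adding 4 to each element in the array:" << std::endl;
    std::copy(boost::make_transform_iterator(x, plus_n(4)),
              boost::make_transform_iterator(x + N, plus_n(4)),
              std::ostream_iterator<int>(std::cout, " "));
    std::cout << std::endl;
}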
{ "category": "App Definition and Development", "file_name": "transform_iterator.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "Introduction \nWhy yet another state machine framework \nState -local storage \nDynamic configurability \nError handling \nAsynchronous state machines \nUser actions: Member functions vs. function objects \nLimitations \nIntroduction \nMost of the design decisions made during the development of this library are the result of the \nfollowing requirements. \nBoost.Statechart should ... \n1. be fully type-safe. Whenever possible, type mismatche s should be flagged with an error at \ncompile-time \n2. not require the use of a code generator. A lot of th e existing FSM solutions force the developer \nto design the state machine either graphically or in a specialized language. All or part of the \ncode is then generated \n3. allow for easy transformation of a UML statechart (d efined in http://www.omg.org/cgi -\nbin/doc?formal/03 -03 -01 ) into a working state machine. Vice versa, an existin g C++ \nimplementation of a state machine should be fairly trivi al to transform into a UML statechart. \nSpecifically, the following state machine features shou ld be supported: \n/ring2Hierarchical (composite, nested) states \n/ring2Orthogonal (concurrent) states \n/ring2Entry-, exit- and transition-actions \n/ring2Guards \n/ring2Shallow/deep history \n4. produce a customizable reaction when a C++ exceptio n is propagated from user code \n5. support synchronous and asynchronous state machines and leave it to the user which thread an \nasynchronous state machine will run in. Users should also b e able to use the threading library \nof their choice \n6. support the development of arbitrarily large and complex state machines. Multiple developers \nshould be able to work on the same state machine simulta neously \n7. allow the user to customize all resource management so that the library could be used for \napplications with hard real-time requirements \n8. enforce as much as possible at compile time. Specifica lly, invalid state machines should not \ncompile \n9. offer reasonable performance for a wide range of ap plications \nWhy yet another state machine framework? \nThe Boost Statechart \nLibrary \nRationale Page 1 of 10 The Boost Statechart Library - Rationale \n2006/12/03Before I started to develop this library I had a look at the following frameworks: \n/circle6The framework accompanying the book \"Practical Statec harts in C/C++\" by Miro Samek, \nCMP Books, ISBN: 1-57820-110-1 \nhttp://www.quantum -leaps.com \nFails to satisfy at least the requirements 1, 3, 4, 6, 8. \n/circle6The framework accompanying \"Rhapsody in C++\" by ILogix (a code generator solution) \nhttp://www.ilogix.com/sublevel.aspx?id=53 \nThis might look like comparing apples with oranges. Howe ver, there is no inherent reason why \na code generator couldn't produce code that can easi ly be understood and modified by humans. \nFails to satisfy at least the requirements 2, 4, 5, 6, 8 (there is quite a bit of error checking \nbefore code generation, though). \n/circle6The framework accompanying the article \"State Machine Design in C++\" \nhttp://www.ddj.com/184401236?pgno=1 \nFails to satisfy at least the requirements 1, 3, 4, 5 (th ere is no direct threading support), 6, 8. \nI believe Boost.Statechart satisfies all requirements. \nState-local storage \nThis not yet widely known state machine feature is enab led by the fact that every state is represented \nby a class. Upon state-entry, an object of the class is constructed and the object is later destructed \nwhen the state machine exits the state. 
Any data that is useful only as long as the machine resides in \nthe state can (and should) thus be a member of the stat e. This feature paired with the ability to spread \na state machine over several translation units makes possibl e virtually unlimited scalability. \nIn most existing FSM frameworks the whole state machine ru ns in one environment (context). That \nis, all resource handles and variables local to the stat e machine are stored in one place (normally as \nmembers of the class that also derives from some state machi ne base class). For large state machines \nthis often leads to the class having a huge number of da ta members most of which are needed only \nbriefly in a tiny part of the machine. The state mach ine class therefore often becomes a change \nhotspot what leads to frequent recompilations of the wh ole state machine. \nThe FAQ item \" What's so cool about state -local storage? \" further explains this by comparing the \ntutorial StopWatch to a behaviorally equivalent ver sion that does not use state-local storage. \nDynamic configurability \nTwo types of state machine frameworks \n/circle6A state machine framework supports dynamic configurabili ty if the whole layout of a state \nmachine can be defined at runtime (\"layout\" refers t o states and transitions, actions are still \nspecified with normal C++ code). That is, data only available at runtime can be used to build \narbitrarily large machines. See \"A Multiple Substring Search Algorithm\" by Moishe Halibard \nand Moshe Rubin in June 2002 issue of CUJ for a good exa mple (unfortunately not available \nonline). \n/circle6On the other side are state machine frameworks which re quire the layout to be specified at \ncompile time \nState machines that are built at runtime almost always g et away with a simple state model (no \nhierarchical states, no orthogonal states, no entry and exit actions, no history) because the layout is \nvery often computed by an algorithm . On the other hand, machine layouts that are fixed a t compile \ntime are almost always designed by humans, who frequentl y need/want a sophisticated state model Page 2 of 10 The Boost Statechart Library - Rationale \n2006/12/03in order to keep the complexity at acceptable level s. Dynamically configurable FSM frameworks are \ntherefore often optimized for simple flat machines wh ile incarnations of the static variant tend to \noffer more features for abstraction. \nHowever, fully-featured dynamic FSM libraries do exi st. So, the question is: \nWhy not use a dynamically configurable FSM library for all state \nmachines? \nOne might argue that a dynamically configurable FSM framework is all one ever needs because any \nstate machine can be implemented with it. However, d ue to its nature such a framework has a \nnumber of disadvantages when used to implement static mac hines: \n/circle6No compile-time optimizations and validations can be mad e. For example, Boost.Statechart \ndetermines the innermost common context of the transition-source and destination state at \ncompile time. Moreover, compile time checks ensure that the state machine is valid (e.g. that \nthere are no transitions between orthogonal states). \n/circle6Double dispatch must inevitably be implemented with some k ind of a table. As argued under \nDouble dispatch , this scales badly. \n/circle6To warrant fast table lookup, states and events must be r epresented with an integer. To keep \nthe table as small as possible, the numbering should be continuous, e.g. 
if there are ten states, \nit's best to use the ids 0-9. To ensure continuity of id s, all states are best defined in the same \nheader file. The same applies to events. Again, this doe s not scale. \n/circle6Because events carrying parameters are not represented by a type, some sort of a generic event \nwith a property map must be used and type- safety is enforced at runtime rather than at compile \ntime. \nIt is for these reasons, that Boost.Statechart was buil t from ground up to not support dynamic \nconfigurability. However, this does not mean that it' s impossible to dynamically shape a machine \nimplemented with this library. For example, guards can b e used to make different transitions \ndepending on input only available at runtime. Howeve r, such layout changes will always be limited \nto what can be foreseen before compilation. A somewhat related library, the boost::spirit parser \nframework, allows for roughly the same runtime configura bility. \nError handling \nThere is not a single word about error handling in th e UML state machine semantics specifications. \nMoreover, most existing FSM solutions also seem to ignore the issue. \nWhy an FSM library should support error handling \nConsider the following state configuration: \n \nPage 3 of 10 The Boost Statechart Library - Rationale \n2006/12/03Both states define entry actions (x() and y()). Whenev er state A becomes active, a call to x() will \nimmediately be followed by a call to y(). y() could d epend on the side-effects of x(). Therefore, \nexecuting y() does not make sense if x() fails. This is no t an esoteric corner case but happens in \nevery-day state machines all the time. For example, x() could acquire memory the contents of which \nis later modified by y(). There is a different but in terms of error handling equally critical situation in \nthe Tutorial under Getting state information out of the machine when Running::~Running() \naccesses its outer state Active . Had the entry action of Active failed and had Running been \nentered anyway then Running 's exit action would have invoked undefined behavio r. The error \nhandling situation with outer and inner states resemble s the one with base and derived classes: If a \nbase class constructor fails (by throwing an exception) the construction is aborted, the derived class \nconstructor is not called and the object never comes to life. \nIn most traditional FSM frameworks such an error situati on is relatively easy to tackle as long as the \nerror can be propagated to the state machine client . In this case a failed action simply propagates \na C++ exception into the framework. The framework usua lly does not catch the exception so that the \nstate machine client can handle it. Note that, after doing so, the client can no longer use the state \nmachine object because it is either in an unknown stat e or the framework has already reset the state \nbecause of the exception (e.g. with a scope guard). T hat is, by their nature, state machines typically \nonly offer basic exception safety. \nHowever, error handling with traditional FSM framew orks becomes surprisingly cumbersome as \nsoon as a lot of actions can fail and the state machin e itself needs to gracefully handle these errors. \nUsually, a failing action (e.g. x()) then posts an ap propriate error event and sets a global error \nvariable to true. Every following action (e.g. y()) first has to check the error variable before doing \nanything. 
After all actions have completed (by doing nothing!), the previously posted error event has \nto be processed what leads to the execution of the reme dy action. Please note that it is not sufficient \nto simply queue the error event as other events could st ill be pending. Instead, the error event has \nabsolute priority and has to be dealt with immediately. There are slightly less cumbersome \napproaches to FSM error handling but these usually nec essitate a change of the statechart layout and \nthus obscure the normal behavior. No matter what approa ch is used, programmers are normally \nforced to write a lot of code that deals with errors and most of that code is not devoted to error \nhandling but to error propagation. \nError handling support in Boost.Statechart \nC++ exceptions may be propagated from any action to sig nal a failure. Depending on how the state \nmachine is configured, such an exception is either immed iately propagated to the state machine \nclient or caught and converted into a special event that is dispatched immediately. For more \ninformation see the Exception handling chapter in the Tutorial. \nTwo stage exit \nAn exit action can be implemented by adding a destruct or to a state. Due to the nature of destructors, \nthere are two disadvantages to this approach: \n/circle6Since C++ destructors should virtually never throw, on e cannot simply propagate an exception \nfrom an exit action as one does when any of the other actions fails \n/circle6When a state_machine<> object is destructed then all currently active states are \ninevitably also destructed. That is, state machine termin ation is tied to the destruction of the \nstate machine object \nIn my experience, neither of the above points is usual ly problem in practice since ... \n/circle6exit actions cannot often fail. If they can, such a f ailure is usually either \n/ring2not of interest to the outside world, i.e. the failur e can simply be ignored Page 4 of 10 The Boost Statechart Library - Rationale \n2006/12/03/ring2so severe, that the application needs to be terminated anyway. In such a situation stack \nunwind is almost never desirable and the failure is bet ter signaled through other \nmechanisms (e.g. abort()) \n/circle6to clean up properly, often exit actions must be executed when a state machine object is \ndestructed, even if it is destructed as a result of a st ack unwind \nHowever, several people have put forward theoretical arguments and real-world scenarios, which \nshow that the exit action to destructor mapping can be a problem and that workarounds are overly \ncumbersome. That's why two stage exit is now supported. \nAsynchronous state machines \nRequirements \nFor asynchronous state machines different applications h ave rather varied requirements: \n1. In some applications each state machine needs to run i n its own thread, other applications are \nsingle-threaded and run all machines in the same thre ad \n2. For some applications a FIFO scheduler is perfect, oth ers need priority- or EDF-schedulers \n3. For some applications the boost::thread library is just fine, others might want to use another \nthreading library, yet other applications run on OS- less platforms where ISRs are the only \nmode of (apparently) concurrent execution \nOut of the box behavior \nBy default, asynchronous_state_machine<> subtype objects are serviced by a \nfifo_scheduler<> object. 
fifo_scheduler<> does not lock or wait in single-threaded \napplications and uses boost::thread primitives to do so in multi-threaded programs. Moreover, a \nfifo_scheduler<> object can service an arbitrary number of \nasynchronous_state_machine<> subtype objects. Under the hood, fifo_scheduler<> \nis just a thin wrapper around an object of its FifoWorker template parameter (which manages the \nqueue and ensures thread safety) and a processor_container<> (which manages the lifetime \nof the state machines). \nThe UML standard mandates that an event not triggering a reaction in a state machine should be \nsilently discarded. Since a fifo_scheduler<> object is itself also a state machine, events \ndestined to no longer existing asynchronous_state_machine<> subtype objects are also \nsilently discarded. This is enabled by the fact that asynchronous_state_machine<> subtype \nobjects cannot be constructed or destructed directly. Instead, this must be done through \nfifo_scheduler<>::create_processor<>() and \nfifo_scheduler<>::destroy_processor() ( processor refers to the fact that \nfifo_scheduler<> can only host event_processor<> subtype objects; \nasynchronous_state_machine<> is just one way to implement such a processor). Moreov er, \ncreate_processor<>() only returns a processor_handle object. This must henceforth be \nused to initiate, queue events for, terminate and dest roy the state machine through the scheduler. \nCustomization \nIf a user needs to customize the scheduler behavior she can do so by instantiating \nfifo_scheduler<> with her own class modeling the FifoWorker concept. I considered a \nmuch more generic design where locking and waiting is implemented in a policy but I have so far \nfailed to come up with a clean and simple interface fo r it. Especially the waiting is a bit difficult to Page 5 of 10 The Boost Statechart Library - Rationale \n2006/12/03model as some platforms have condition variables, others have events and yet others don't have any \nnotion of waiting whatsoever (they instead loop until a new event arrives, presumably via an ISR). \nGiven the relatively few lines of code required to i mplement a custom FifoWorker type and the \nfact that almost all applications will implement at most one such class, it does not seem to be \nworthwhile anyway. Applications requiring a less or mo re sophisticated event processor lifetime \nmanagement can customize the behavior at a more coarse level, by using a custom Scheduler \ntype. This is currently also true for applications req uiring non-FIFO queuing schemes. However, \nBoost.Statechart will probably provide a priority_scheduler in the future so that custom \nschedulers need to be implemented only in rare cases. \nUser actions: Member functions vs. function objects \nAll user-supplied functions ( react member functions, entry-, exit- and transition-acti ons) must be \nclass members. The reasons for this are as follows: \n/circle6The concept of state-local storage mandates that state- entry and state-exit actions are \nimplemented as members \n/circle6react member functions and transition actions often access st ate-local data. So, it is most \nnatural to implement these functions as members of the class the data of which the functions \nwill operate on anyway \nLimitations \nJunction points \nUML junction points are not supported because arbitrar ily complex guard expressions can easily be \nimplemented with custom_reaction<> s. 
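To make the junction-point remark concrete, here is a minimal sketch (not taken from the library documentation; the names EvInput, Waiting, Low, High and the threshold are purely illustrative) of a guard decision expressed with custom_reaction<>:

#include <boost/statechart/state_machine.hpp>
#include <boost/statechart/simple_state.hpp>
#include <boost/statechart/custom_reaction.hpp>
#include <boost/statechart/event.hpp>

namespace sc = boost::statechart;

struct EvInput : sc::event< EvInput >
{
    EvInput( int v ) : value( v ) {}
    int value;
};

struct Waiting;
struct Machine : sc::state_machine< Machine, Waiting > {};

struct Low  : sc::simple_state< Low,  Machine > {};
struct High : sc::simple_state< High, Machine > {};

struct Waiting : sc::simple_state< Waiting, Machine >
{
    typedef sc::custom_reaction< EvInput > reactions;

    sc::result react( const EvInput & ev )
    {
        // the guard expressions of a junction become ordinary C++ branches
        if ( ev.value < 10 ) { return transit< Low >(); }
        return transit< High >();
    }
};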
\nDynamic choice points \nCurrently there is no direct support for this UML ele ment because its behavior can often be \nimplemented with custom_reaction<> s. In rare cases this is not possible, namely when a \nchoice point happens to be the initial state. Then, t he behavior can easily be implemented as follows: \nstruct make_choice : sc::event< make_choice > {}; \n \n// universal choice point base class template \ntemplate< class MostDerived, class Context > \nstruct choice_point : sc::state< MostDerived, Conte xt, \n sc::custom_reaction< make_choice > > \n{ \n typedef sc::state< MostDerived, Context, \n sc::custom_reaction< make_choice > > base_type; \n typedef typename base_type::my_context my_context ; \n typedef choice_point my_base; \n \n choice_point( my_context ctx ) : base_type( ctx ) \n { \n this->post_event( boost::intrusive_ptr< make_ch oice >( \n new make_choice() ) ); \n } \n}; Page 6 of 10 The Boost Statechart Library - Rationale \n2006/12/03 \n// ... \n \nstruct MyChoicePoint; \nstruct Machine : sc::state_machine< Machine, MyChoi cePoint > {}; \n \nstruct Dest1 : sc::simple_state< Dest1, Machine > { }; \nstruct Dest2 : sc::simple_state< Dest2, Machine > { }; \nstruct Dest3 : sc::simple_state< Dest3, Machine > { }; \n \nstruct MyChoicePoint : choice_point< MyChoicePoint, Machine > \n{ \n MyChoicePoint( my_context ctx ) : my_base( ctx ) {} \n \n sc::result react( const make_choice & ) \n { \n if ( /* ... */ ) \n { \n return transit< Dest1 >(); \n } \n else if ( /* ... */ ) \n { \n return transit< Dest2 >(); \n } \n else \n { \n return transit< Dest3 >(); \n } \n } \n}; \nchoice_point<> is not currently part of Boost.Statechart, mainly bec ause I fear that beginners \ncould use it in places where they would be better of f with custom_reaction<> . If the demand is \nhigh enough I will add it to the library. \nDeep history of orthogonal regions \nDeep history of states with orthogonal regions is curre ntly not supported: Page 7 of 10 The Boost Statechart Library - Rationale \n2006/12/03 \nAttempts to implement this statechart will lead to a comp ile-time error because B has orthogonal \nregions and its direct or indirect outer state contain s a deep history pseudo state. In other words, a \nstate containing a deep history pseudo state must not ha ve any direct or indirect inner states which \nthemselves have orthogonal regions. This limitation stem s from the fact that full deep history support \nwould be more complicated to implement and would consume more resources than the currently \nimplemented limited deep history support. Moreover, full deep history behavior can easily be \nimplemented with shallow history: \n \nOf course, this only works if C, D, E or any of their direct or indirect inner states do not have \northogonal regions. If not so then this pattern has to be applied recursively. \nSynchronization (join and fork) bars \nPage 8 of 10 The Boost Statechart Library - Rationale \n2006/12/03 \nSynchronization bars are not supported, that is, a tra nsition always originates at exactly one state and \nalways ends at exactly one state. Join bars are sometimes u seful but their behavior can easily be \nemulated with guards. The support of fork bars would mak e the implementation much more \ncomplex and they are only needed rarely. \nEvent dispatch to orthogonal regions \nThe Boost.Statechart event dispatch algorithm is diffe rent to the one specified in David Harel's \noriginal paper and in the UML standard . 
Both mandate that each event is dispatched to all orthogonal regions of a state machine. Example:\n[Figure: two orthogonal regions, expected to move from (B,D) to (C,E) on EvX]\nHere the Harel/UML dispatch algorithm specifies that the machine must transition from (B,D) to (C,E) when an EvX event is processed. Because of the subtleties that Harel describes in chapter 7 of his paper, an implementation of this algorithm is not only quite complex but also much slower than the simplified version employed by Boost.Statechart, which stops searching for reactions as soon as it has found one suitable for the current event. That is, had the example been implemented with this library, the machine would have transitioned non-deterministically from (B,D) to either (C,D) or (B,E). This version was chosen because, in my experience, in real-world machines different orthogonal regions often do not specify transitions for the same events. For the rare cases when they do, the UML behavior can easily be emulated as follows:\n[Figure: emulation of the UML dispatch behavior]\nTransitions across orthogonal regions\n[Figure: a transition crossing orthogonal regions]\nTransitions across orthogonal regions are currently flagged with an error at compile time (the UML specifications explicitly allow them while Harel does not mention them at all). I decided to not support them because I have erroneously tried to implement such a transition several times but have never come across a situation where it would make any sense. If you need to make such transitions, please do let me know!\nRevised 03 December, 2006\nCopyright © 2003-2006 Andreas Huber Dönni\nDistributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt )" } ]
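As an added illustration of the state-local storage idea discussed earlier in this rationale (a sketch with made-up names, not code from the library): the data held by Connected exists exactly as long as the machine resides in that state, its constructor acting as the entry action and its destructor as the exit action.

#include <vector>
#include <boost/statechart/state_machine.hpp>
#include <boost/statechart/simple_state.hpp>
#include <boost/statechart/transition.hpp>
#include <boost/statechart/event.hpp>

namespace sc = boost::statechart;

struct EvConnect    : sc::event< EvConnect > {};
struct EvDisconnect : sc::event< EvDisconnect > {};

struct Idle;
struct Machine : sc::state_machine< Machine, Idle > {};

struct Connected;
struct Idle : sc::simple_state< Idle, Machine >
{
    typedef sc::transition< EvConnect, Connected > reactions;
};

struct Connected : sc::simple_state< Connected, Machine >
{
    typedef sc::transition< EvDisconnect, Idle > reactions;

    Connected() : buffer_( 1024 ) {}   // entry action: acquire what the state needs
    ~Connected() {}                    // exit action: release it again

    std::vector< char > buffer_;       // only exists while Connected is active
};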
{ "category": "App Definition and Development", "file_name": "rationale.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "The Quaternionic Exponential\n(and beyond)\nERRATA & ADDENDA\nHubert HOLIN\n23/03/2001\nHubert.Holin@Bigfoot.com\nhttp://www.bigfoot.com/~Hubert.Holin\nErrata\n• Page 2:\nThe multiplication is defined in such a way that e=()1000,,, is its neural\nelement.\n• Page 3:\nWe ask for a norm of a Banach algebra to verify pq p q∗≤ and not just\nthat there exists some positive k such that pq k p q∗≤ . As noted the case for\nreal numbers, complex numbers, quaternions and octonions is even better yet,\nas we have pq p q∗=.\n• Page 3:\nReqq q()=+()1\n2 and Urqq q()=−()1\n2.\n• Page 6:\nThe formula is not in error, but might be more readable in the form\n ra a uau u a uaurr r r r r r r r r()=()−⋅()[] +()∧()+⋅() cos sinθθ .\n• Page 7:\n1Instead of qx\ny\nz=±\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ncos\nsinsinsin\nθ\nθ\nθ\nθ2\n222, read qx\ny\nz=±\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ncos\nsin\nsin\nsin\nθ\nθ\nθ\nθ2\n2\n2\n2.• Page 19, footnote:\nInstead of \n sinc : ,sin\na xx\na\nx\naRR→\n\n\n\n\n\n\n\naπ\n, read \n sinc : ,sin\na xx\na\nx\naRR→\n\n\n\n\n\n\n\naπ\nπ.\n• Page 19, footnote:\nInstead of \n sinhc : ,sinh\na xx\na\nx\naRR→\n\n\n\n\n\n\n\naπ\n, read \n sinhc : ,sinh\na xx\na\nx\naRR→\n\n\n\n\n\n\n\naπ\nπ.\nAddenda\n• More structure\nIt is interesting to note that C, H and O are (left) vector spaces over C.\nThe basis of H as a left C-vector space is 1,j(), and the basis of O as a\nleft C-vector space is 1, , ,je j′′(). However, if qi j k=+ + + ∈αβ γ δ H, then\nqii j=+()++() αβ γδ , but if oi j k e i j k=+ + + + ′+′+′+′∈ αβ γ δ εζ η θ O then\no i ij ie ij=+()++() ++()′+−()′ αβ γδ εζ η θ (note the minus sign in the last factor).\nIf we write qj=+Γ∆, with ∆∈C and Γ∈C, then qj=−Γ∆, and if we also\nhave pj=+ΑΒ, with Α∈C and Β∈C, then pq j=−() ++() ΑΓ Β∆ Α∆ ΒΓ . In particular,\nif z∈C then jz zj=.\nThings break down when we want to consider O as a structure over H,\nhowever. Indeed, there is no widely-accepted generalization of vector fieldwhere the role of the scalars is taken by a non-commutative structure, as isthe case with \nH as most interesting properties of vector spaces fail to remain\ntrue in that case, in general (though by requiring the scalars to be merely acommutative ring instead of a full blown field, quite a few properties remaintrue; this structure is known as a module).\nHowever, if \noi j k e i j k=+ + + + ′+′+′+′∈ αβ γ δ εζ η θ O, then it is also true\nthat oi j ki j k e=+++() +++ +() ′ αβ γ δ εζ η θ .\n• More Geometry\nAnother interesting way to see H is as RR×3. In this case, if qt V11 13=()∈×,RR\nand qt V22 23=()∈×,RR, then the quaternionic product can be expressed as\nqq tt V V tV tV V V12 1 2 1 2 12 21 1 2=− ⋅ ++ ∧() , , with “⋅” the scalar product on R3 and “∧” the\nvector product in R3.\n• Finding the quaternions for a given rotation of R3\nIf we are given the rotation in term of vector and angle in 0;+[]π, then this\nhas been solved in the main text (if the angle is in −[]π;0, we just take the\nopposite of both the angle and vector; the identity and its opposite are trivial\n2to solve). The opposite quaternion is also a solution, of course.If we are simply given a rotation matrix, then, essentially, we first find its\nelements (vector and angle), and use the procedure above. To find an invariantvector, we simply solve the linear system which defines them. For the angle,we first find its cosine using the trace. 
Then we build a vector orthogonal tothe invariant vector we found (always possible starting from one of the canonicalbasis vector and using some classical orthonormalization procedure) to checkthe sign of the angle.\nIf we are given a succession of rotation, it may be advantageous in applications\nto chose among the successions of pairs of opposite solutions that for whichthe distance between successive quaternions is the smallest.\n• More rotations\nWe have considered HH→[],pp qa and dp q pq=→[]HH,a in the first\nchapter and seen, thru some amount of computation, that they gave rise to\nrotations on R4, when q=1.\nWe present here another take on the same subject, aimed at giving effective\nmethods of parameterizing SO R4,()\nIt is also interesting to consider gp p qq=→[]HH,a, as both gq and dq are\nC-linear operators on H, which trivially verify gg gqq qq ′′=o and dd dqq qq ′′=o. We\ncan also very simply verify that gp gp q p p dp dpqq qq()′()() = ′()=()′()()2. Obviously g1\nand d1 are both the identity on H. Considering them now as R-linear operators\non H, we see that their determinant in the canonical basis must stay of the\nsame sign on S3, hence must stay positive (since 13∈S), therefore must be\nalways equal to 1 on S3.\nHence SS O R34→()[] ,,qgqa and SS O R34→()[] ,,qdqa are both group\nhomeomorphism, sending 1 to the identity.\nUsing the same kind of topological argument as in the case of SO R3,(), we\nget the parameterization of SO R4,() that we announced ([ M. Berger (1990)],\n[C. Godbillon (1971)]) simply by considering pq d gpq,() ao, save for the determination\nof the kernel. We will, however, aim here for a more constructive approach to\nsurjectivity.\nGiven r a rotation of R4, we seek p∈S3 and q∈S3 such that for any\nquaternion s we have psq r s=(). Hence, by applying that to s=1, we find that\nnecessarily prq=()1 (since qq=−1 as q=1). Therefore we are led to solve\nqsq s=()ρ for all s, with ρsrr s()=()()1 (as rt t()= for all quaternion t, since r is a\nrotation, and hence r11()=). But then ρµ µ µ µ()=()()=() ()= rr rr11 1 for all µ∈R,\nwhich means R is invariant, and we know that ρ is a rotation R4, as the\ncomposition of r, which is one by hypothesis, and the multiplication on the left\nby a unit quaternion, which is one also as we have seen in the main text. Thismeans we are simply back to solving on \nS3 the equation ρρq=()MC C,, (with\n3 qqaρ as presented in the first chapter).Finally, given two unit quaternions pi j k=+ + +αβ γ δ and qi j k=+ + +εζηθ ,\nthe rotation matrix on R4 is given explicitly by:\nαεβζ γη δθ αζ βεγθ δη αη βθ γεδζ αθ βη γζ δε\nαζ βεγθ δη αεβζ γη δθ αθ βη γζ δε αη βθ γεδζ\nαη βθ γεδζ αθ βη γζ δ+++ +−−+ ++− − +−+−\n−+−+ +−− −++− ++++−++− ++++\nεεαεβζ γη δθ αζ βε γθ δη\nαθ βη γζ δεαη βθ γε δζ αζ βε γθ δη αε βζ γη δθ−+− −−++\n−−++ −+−+ ++++ −−+\n\n\n\n\nAdditional Bibliography\n4C. Godbillon (1971): Éléments de Topologie Alg ébrique; Collection M éthodes, Hermann, 1971." } ]
{ "category": "App Definition and Development", "file_name": "TQE_EA.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "NoYesNo healthy alternativesHas alternativeSuccessExceptionBroken PromiseHas alternatives?Pick up an alternativeBackoffRequestReplyErrorNeverStartLoad Balancing in FoundationDB In FoundationDB, often multiple interfaces are available for the same type of requests. A load balancer can be used to distribute the requests to those interfaces, while awaring the possible failures.Two load balancer are provided: basicLoadBalance and loadBalance, both defined in LoadBalance.actor.h. The basicLoadBalance is a simple basicLoadBalance basicLoadBalance implements a simple load balancing algorithm. It applies toCommit proxy interfaceGetReadVersion proxy interfaceConfigFollower interfaceThe interface is assumed to be always fresh, i.e. the list of the servers is fixed.NoYesHas alternativeNo alternativeSuccessFailureAt least one alternativeStartHas alternatives?Choose initial candidatesNeverPick up an alternativeSend requestWait for available alternativeResponseAll alternatives failedAlternative pick up algorithm In basicLoadBalance, a best alternative is picked up and used at the beginning. At this stage, this alternative is randomly picked up among all alternatives. If the best alternative does not work, it will iteratively try other interfaces, see here.loadBalance loadBalance provides a more sophisticated implementation of load balancing. In addition of the basic load balancing, it also provides a variety of features, such likeSupport for Test Storage Server (TSS)Distance-based candidate electionAble to handle timeouts and exceptions with retriesetc.Currently it is used forStorage Server interfaceBlobWorker interface \nNote:Response could be an exception, e.g. process_behind or request_maybe_delivered, and will be delivered as Error to the caller.Choose initial candidates Two initial candidates will be picked up before the requests start. They will be selected as the first two alternatives for the load balancer. If both of them failed, other alternatives are used in a round-robin way.No QueueModel If no QueueModel is provided, the initial candidates are picked up randomly. The first candidate, or the best alternative, will always be one of local workers.With QueueModel QueueModel holds information about each candidate related to future version, latency and penalty.If the storage server is returning a future version error, it is marked as not available until some certain time.Penalty is reported by storage server in each response (see storageserver.actor.cpp:StorageServer::getPenalty). It is determined by the write queue length and the version lagging.If QueueModel exists, the candidates will be picked base on the penalty. Workers with high penalties will be avoided when picking up the first two candidates. Pick up an alternative As mentioned above, the alternatives are chosen in the round-robin way when the first two candidates failed.If all alternatives failed, a flag is set, so if the next request fails with process_behind, the caller will receive the process_behind error.Send requests to workers Here it is assumed that there are at least one alternative available.YesSuccessTimeoutNoFirst request succeedSecond request succeedAdditional request failedstartIs first requestSend first requestResponsePick up next alternativeSend additional request\nIf the first request failed, it is reset and the next request will be considered as the first request. Certain types of errors can also be returned as response, e.g. 
request_may_be_delivered or process_behind, which may not trigger a load-balancer retry.Wait for available alternative When there is no alternatives available, the load balancer may wait until at least one interface is up.YesNoTimeoutSuccessSuccessFailedstartIs first request in-flightWait for the first requestResponseRetryWait for alternativesall_alternatives_failedNote that \"Wait for alternatives\" will only timeout if the alternatives are not always fresh, i.e. this only happens when accessing storage servers.Requests Original requests in loadBalancer are wrapped by LoadBalance.actor.h:RequestData. It provides the following additional operations besides the original flow request:TSS support if QueueModel is availableTranslate some errors into maybe_delivered, process_behind or retriesUpdate the QueueModel information including latency, penalty, etc.Appendix Picking up an alternative in load balancing algorithm The following script simulates the alternative picking up algorithm. The chosen alternatives will be printed out one-by-one.#! /usr/bin/env python3import randomimport timeclass Alternatives:    def __init__(self, num_alternatives):        self._size = num_alternatives        def size(self):        return self._size    def get_best(self):        return random.randint(0, self._size - 1)# EntryNUM_ALTERNATIVES = 10alts = Alternatives(NUM_ALTERNATIVES)best_alt = alts.get_best()next_alt = random.randint(0, alts.size() - 2)if next_alt >= best_alt:    next_alt += 1start_alt = next_altstart_distance = (best_alt + alts.size() - start_alt) % alts.size()use_alt = Noneprint(\"best_alt = {}\".format(best_alt))print(\"start_alt = {}\".format(start_alt))print(\"start_distance = {}\".format(start_distance))while True:    for alt_num in range(0, alts.size()):        use_alt = next_alt        if next_alt == start_alt:            print(\" Going back to the start_alt\")            use_alt = best_alt        elif (next_alt + alts.size() - start_alt) % alts.size() <= start_distance:            print(\" Entering start_distance\")            use_alt = (next_alt + alts.size() - 1) % alts.size()                print(\"Attempting alt: {}\".format(use_alt))         # Next loop        next_alt = (next_alt + 1) % alts.size()        time.sleep(.2)" } ]
{ "category": "App Definition and Development", "file_name": "LoadBalancing.pdf", "project_name": "FoundationDB", "subcategory": "Database" }
[ { "data": "A One-Stop Large-Scale Graph \nComputing System from Alibaba\nWhite Paper\nGraphScope: A One-Stop Large-Scale \nGraph Computing System from Alibaba\n1.Background\n1. 1.What is graph computing\nGraph models a set of objects (vertices) and their relationships (edges). As a sophisticated model, \ngraphs can naturally express a large number of real-life datasets, such as social networks, Web \ngraphs, transaction networks and knowledge graphs. Figure 1 shows an e-commerce graph in \nAlibaba, where there are various types of vertices (consumers, sellers, items and devices) and edges \n(purchase, view, comment and so on), and each vertex is associated with rich attribute information. \nCurrent graph data in real industrial scenarios usually contains billions of vertices and trillions of \nedges. In addition, continuous updates arrive at a tremendous speed. Given the ever-growing \namount of graph data available, graph computing, which tries to explore underlying insights hidden \nin graph data, has attracted increasing attention in recent years.\nFigure 1: E-commerce graph at Alibaba\nGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 01\nFigure 2: An example graph model for fraud detection using interactive query.\nAccording to different aims and characteristics of graph-related tasks, current graph computing can \nbe roughly divided into 3 categories, namely interactive queries on graphs, analytics on graphs and \ndeep learning on graphs.\n \nInteractive queries on graphs: Modern business often requires analyzing large-scale graphs in an \nexploratory manner in order to locate specific or in-depth information in time, as illustrated using the \nfollowing example in Figure 2.\nThe graph depicted in Figure 2 is a simplified version of a real query employed at Alibaba for credit \ncard fraud detection. By using a fake identifier, the “criminal” may obtain a short-term credit from \na bank (vertex 4). He/she tries to illegally cash out money by forging a purchase (edge 2 --> 3) at \ntime t 1 with the help of a merchant (vertex 3). Once receiving payment (edge 4 --> 3) from the bank \n(vertex 4), the merchant tries to send the money back (edges 3 --> 1 and 1 --> 2) to the “criminal” \nvia multiple accounts of a middle man (vertex 1) at time t 3 and t 4, respectively. This pattern eventually \nforms a cycle (2 --> 3 --> 1 … --> 2). Such fraudulent activities have become one of the major issues \nfor online payments, where the graph could contain billions of vertices (e.g., users) and hundreds \nof billions to trillions of edges (e.g., payments). In reality, the entire fraudulent process can involve a \ncomplex chain of transactions, through many entities, with various constraints, which thus requires \ncomplex interactive analysis to identify.\nAnalytics on graphs: Analytics on graphs has been studied for decades, and tons of graph analytics \nalgorithms have been proposed for different purposes. Typical graph analytics algorithms include \ngeneral analytics algorithms (e.g., PageRank (see Figure 3), shortest path, and maximum flow), \ncommunity detection algorithms (e.g., maximum clique/bi-clique, connected components, Louvain \nand label propagation), graph mining algorithms (e.g., frequent structure mining and graph pattern \ndiscovery). Due to the high diversity of graph analytics algorithms, programming models for graph \nanalytics are needed. 
Current programming models can basically fall into the following categories: \nGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 02\n“think like a vertex” , matrix algebra, “think like a graph/subgraph” and datalog. With these \nprogramming models in place, a lot of graph analytics systems have been developed, such as \nNetworkX, Pregel, PowerGraph, Apache Giraph and GRAPE.\nFigure 4: Graph neural network\nMachine learning on graphs: Classic graph embedding techniques, such as Node2Vec and LINE, have \nbeen widely adopted in various machine learning scenarios. Recently Graph Neural Networks (GNNs) \nare proposed, which combine the structural and attribute information in the graph with the power \nof deep learning technologies. GNNs can learn a low-dimensional representation for any graph \nstructure (e.g., a vertex, an edge, or an entire graph) in a graph, and the generated representations \ncan be leveraged by many downstream graph-related machine learning tasks, such as vertex \nclassification, link prediction, graph clustering. Graph learning technologies have demonstrated \nconvincing performance on many graph-related tasks. Different from traditional machine learning \ntasks, graph learning tasks involve both graph-related and neural network operations (see Figure \n4). Specifically, each vertex in the graph selects its neighbors using graph-related operations and \naggregates its neighbors’ features with neural network operations. \nGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 03\nFigure 3: PageRank, an algorithm to assign an \"importance\" score for each vertex. In this figure, \nthe diameter of each vertex indicates its PageRank score.GraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 04\nThe study of graph computing algorithms and systems has been developing rapidly in recent years \nand become a hot topic in both academia and industry. In particular, the performance of graph \ncomputing systems has improved by 10 to 100 times over the past decade and the systems are still \nbecoming increasingly efficient, making it possible to accelerate the AI and big data tasks via graph \ncomputing. In fact, graphs are able to naturally express data of various sophisticated types and \ncan provide abstractions for common machine learning models. Compared to dense tensors, the \ngraph representations also offer a much richer semantics and a more comprehensive capability for \noptimization. Moreover, graphs are a natural encoding of sparse high-dimensional data and the \ngrowing research literature in GCN (graph convolutional network) and GNN (graph neural network) \nhas proven that graph computing is an effective complement to machine learning. \n Figure 5 : Applications of graph computing.1.2.Graph computing: a foundation for the next generation of artificial intelligence\nPutting these together, it is reasonable to expect that graph computing would play a big role \nin various applications within the next generation of artificial intelligence, including anti-fraud, \nintelligent logistics, city brain, bioinformatics, public security, public health, urban planning, anti-\nmoney laundering, infrastructures, recommender systems, financial technology and supply chains \n(see Figure 5). \nAlthough graph computing has been considered as promising solutions for various applications, \nthere is a huge gap between initial ideas and real productions. 
We summarize that the current large-\nscale graph computing faces the following challenges.1.3.Challenges of graph computingGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 05\n(1) Real-life graph applications are complex and diverse. In real-life scenarios, a graph-related task \nis typically very complex and involves multiple types of graph computing. Existing graph computing \nsystems are mostly designed for a specific type of graph computation. Therefore, users have to \ndisassemble a complex task into multiple jobs involving many systems. To bridge different systems, \nthere could be significant overheads such as, extra costs on integration, I/O, format transformation, \nnetwork and storage. \n \n(2) It is difficult to develop applications for large graphs. To develop a graph computing application, \nusers normally begin with small graphs on a single machine using easy- and ready-to-use tools (such \nas NetworkX in Python and TinkerPop). However, it is extremely difficult for average users to extend \ntheir single machine solution to handle large graphs in parallel. Existing distributed parallel systems \nfor large graphs usually follow different programming models, and lack the rich ready-to-use libraries \nfound in the single machine libraries (e.g., NetworkX). This makes distributed graph computing a \nprivilege for experienced users only.\n(3) The scale and efficiency of processing large graphs is still limited. Although current systems have \nlargely benefited from years of work in optimizations for each individual system, they still suffer \nfrom efficiency and/or scale problems. For example, existing interactive graph query systems cannot \nparallelize the executions for Gremlin queries because of the high complexity of traversal patterns. \nFor graph analytical systems, traditional vertex-centric model makes graph-level optimization \nunavailable. In addition, many existing systems lack optimizations at the compiler level.GraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 06\nTo tackle the above three challenges, we propose GraphScope, a one-stop large-scale graph \ncomputing system. GraphScope aims to provide a single system that is able to support all three \ntypes of computation tasks, i.e., graph interactive query, graph analytics and graph deep learning. \nWe carefully design GraphScope with user-friendly interface and extensible programming APIs, \nso that users can easily construct customized end-to-end graph processing pipelines involving \ndifferent types of graph computation tasks. In specific, GraphScope fully embraces the Python and \nGremlin ecosystem, and thus comes with a shallow learning curve for both data scientists and \ndevelopers. Under the hood, GraphScope comprises core engines specifically optimized for each \ngraph computation paradigm, and can smoothly orchestrate multiple engines to cooperate efficiently, \navoiding the complexity of manually stitching multiple independent systems together. GraphScope \ncan scale to ultra-large graphs, and run in industrial speed and robustness.\nGraphScope has been battle-tested in production in Alibaba and over 30 external customers. \nGraphScope has supported graph processing tasks on extremely-large and complex graphs, \nwhich are of size more than 50TB, and consist of billions of vertices, hundreds of billions of edges, \nover one hundred labels, and more than one thousand attributes. 
GraphScope has demonstrated \nsuperior performance compared with state-of-the-art graph systems: on the industrial standard \nLDBC benchmark, GraphScope achieves up-to 5.22B EVPS on 4 Aliyun nodes for XL-size graphs on \nthe analytical tasks, and nearly linear speed-ups on the SNB interactive queries tasks; GraphScope \nalso speeds up the training time of graph learning models by 50%. In addition, GraphScope provides \na rich set of built-in algorithm libraries, covering over 50 graph analysis and graph deep learning \nalgorithms. These libraries, along with the user-friendly interface and APIs, significantly reduce the \ndevelopment cycle of end-to-end graph applications from weeks to days. \nOur endeavor on GraphScope has also been recognized in both academia and industry. We have \npublished tens of research papers on top tier conferences and journals. These research works have \nwon SIGMOD 2017 Best Paper Award, VLDB 2017 Best Demo Award and VLDB 2020 Best Paper Runner-\nup Award. Based on GraphScope, we developed a cognitive intelligence computing platform, which \nwon the 'SAIL' prize of the 2019 World Conference on Artificial Intelligence.2.GraphScope: a one-stop large-scale graph computing\n system\n2. 1.IntroductionGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 07\nGraphScope is a full-fledged, in-production system for one-stop analysis on big graph data. \nAchieving this goal requires a wide variety of components to interact, including cluster management \n(deployment) software, graph store (as input, output and to hold intermediate results), distributed \nexecution engines, language constructs, and development tools. Due to the space limit, we highlight \nthe three major layers in GraphScope, namely algorithm, execution, and storage as shown in Figure 6, \nand give an overview to each of them below.\n Figure 6: Architecture of GraphScope\nAlgorithm. Graph processing often requires specific algorithms for each particular task. While \nthose algorithms can be directly written using the GraphScope's primitives, we have built libraries \nof common algorithms for various application domains (such as graph neural networks, clustering, \nand pattern matching) to ease the development of new graph applications. GraphScope gives \nthe programmer the illusion of writing for a single machine using a general purpose high-level \nprogramming model (Python) and to have the system deal with the complexities that arise from \ndistributed execution. This has tremendously simplified the defining and maintaining of such libraries. \nIn addition, this approach allows GraphScope to seamlessly combine multiple graph processing \nengines in one unified platform as described below.\nExecution. GraphScope execution runtime consists of three engines, namely GraphScope Interactive \nEngine (GIE), GraphScope Analytics Engine (GAE), GraphScope Learning Engine (GLE), and provides \nthe functionality of interactive, analytical, and graph-based machine learning, respectively. A \ncommon feature all those execution engines provide is the automatic support for efficient distributed \nexecution of queries and algorithms in their target domains.\n2.2.Architecture overviewGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 08\nEach query/algorithm is automatically and transparently compiled by GraphScope into a distributed \nexecution plan that is partitioned across multiple compute nodes for parallel execution. 
Each partition \nruns on a separate compute node, managed by a local executor, that schedules and executes \ncomputation on a multi-core server.\nStorage. Because the graph datasets are large, it can take a long time to load input into and save \noutput from multiple processing stages (between different engines) within a complex pipeline. To \nmitigate such cost, GraphScope provides an in-memory storage layer called Vineyard that maintains \nan (intermediate) data object partitioned across a cluster. The storage is tightly coupled with the \nexecution engines for efficiency, so that each local executor (of any engine) can access graph \ndata completely avoiding unnecessary data copy. Furthermore, the storage provides high-level \nabstractions or data structures such as (sub)graphs, matrices and tensors as fundamental interfaces \nto its clients so as to minimize serialization and deserialization cost as well.\n2.3. Components\n 2.3. 1. GIE: a parallel interactive engine for graph traversal\n | Challenges of parallelizing the interactive graph query\nDifferent from an analytic query that may run minutes to hours without much human involvement, an \ninteractive query allows human to interact with graph data in real time typically using a high-level, \ndeclarative query language. Because of such features, interactive query enables human, often non-\ntechnical users, to directly explore, examine and present data in order to locate specific or in-depth \ninformation at low latency, and is commonly recognized as an essential part of any data analytics \nproject. \n \nGIE exploits the Gremlin graph traversal language developed by Apache TinkerPop to provide \na high-level language for interactive graph queries and provides automatic parallel execution. \nGremlin is widely supported by popular graph system vendors such as Neo4j, OrientDB, JanusGraph, \nAzure Cosmos DB, and Amazon Neptune, which offers a flexible and expressive programming \nmodel to enable non-technical users to succinctly express complex traversal patterns in real-world \napplications. For example, one can write the above fraud-detection query (Figure 2) in just a couple \nof lines using Gremlin, as shown in Figure 7 .\ng.V(’account’).has(’id’,’2’).as(’s’)\n .repeat(out(’transfer’).simplePath())\n .times(k-1)\n .where(out(’transfer’).eq(’s’))\n .path().limit(1)\nFigure 7: An example Gremlin queryThe flexibility of Gremlin mainly stems from nested traversal with dynamic control flow such as \nloops, which introduces fine-grained data dependencies at runtime that are complex and can \nincur significant overheads in distributed execution. Therefore, existing Gremlin-enabled, large-\nscale systems either adopt centralized query processing (such as JanusGraph and Neptune), or \noffer a subset of the language constructs that is often too limited for real-world applications (in \nproduction at Alibaba), or come as a huge performance sacrifice (e.g. Hadoop-Gremlin). In addition, \nsuch a system must cope with runtime dynamics related to variations in memory consumption in \nan interactive context. While several techniques exist for alleviating memory scarcity in distributed \nexecution, such as backpressure and memory swapping, they cannot be directly applied due to \npotential deadlocks or big latency penalty. 
\nGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 09\nWe tackle the challenges to scale the Gremlin queries by a novel distributed-system infrastructure \ndesigned specifically to make it easy for a variety of users to interactively analyze big graph data \non large clusters at low latency. i) GIE compiles a Gremlin query into a dataflow graph that can be \nmapped to physical machines for distributed execution, ii) operators in Gremlin are precompiled and \ninstalled on each compute node to allow query plans to be dispatched at low latency, iii) each local \nexecutor employs dynamic scheduling and works together to optimize execution dynamically (to \ncope with runtime dynamics related to variations in memory usage and ensure bounded-memory \nexecution), iv) finally, the same runtime can be dynamically reconfigured by user-defined graph \ntraversal strategies (such as depth-first or breadth-first search) to achieve low latency by avoiding \nwasted computation. All of the above mechanisms are made possible by a powerful new abstraction \nwe developed in GIE that caters to the specific needs in this new computation model to scale graph \nqueries with complex dependencies and runtime dynamics, while at the same time maintaining the \nsimple and concise programming model.\nThe interactive engine has been deployed in production clusters at Alibaba to support a variety of \nbusiness-critical scenarios. Extensive evaluations using both benchmarks and real-world applications \nhave validated the high-performance and scalability of GIE. Compared to the Gremlin-enabled graph \ndatabase JanusGraph, GIE outperforms JanusGraph by over one order of magnitude on average \nusing the industry-standard LDBC benchmark. Additionally, GIE can scale to much larger graphs. \nIn the benchmark, we have adopted the largest graph that LDBC benchmark can generate, which \ncontains over 2 billion vertices, 17 billion edges and occupies 2TB aggregated memory in the cluster; \nin production, GIE has been deployed in Alibaba cluster to process gigantic graphs with hundreds of \nbillions of edges. | Efficient graph interactive engineTo reduce the programming burden from users while achieving high performance at the same \ntime, we develop a large-scale parallel graph analytics engine, referred to as GAE. GAE originated \nfrom GRAPE, a system that implemented the fixpoint model of \" Wenfei Fan, Wenyuan Yu, Jingbo \nXu, Jingren Zhou, Xiaojian Luo, Qiang Yin, Ping Lu, Yang Cao, Ruiqi Xu: Parallelizing Sequential \nGraph Computations. ACM Trans. Database Syst. 43(4): 2018 \" and won the SIGMOD2017 Best Paper \nAward, the VLDB2017 Best Demo Award and the SIGMOD 2018 Research Highlights Award. GAE also \ndemonstrates superior performance compared with state-of-the-art graph systems: on the industrial \nstandard LDBC benchmark, it achieves up-to 5.22B EVPS on 4 Aliyun nodes for XL-size graphs. \nGAE supports a simple paradigm such that to implement a graph analytics algorithm, users only need \nto provide three functions, (1) PEval, a function for given a query, computes the answer on a local \ngraph; (2) IncEval, an incremental function, computes changes to the old output by treating incoming \nmessages as updates; and (3) Assemble, which collects partial answers, and combines them into \na complete answer. 
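To make the three functions concrete, the following is a minimal sketch of what they might look like for a toy connected-components computation that propagates the minimum vertex id as the component label. This is plain, illustrative C++ rather than the actual GAE/GRAPE SDK: the Fragment and Messages types, the function signatures, and the min-label scheme are assumptions made only for this example.

// Illustrative sketch only -- not GraphScope's GAE/GRAPE interface.
#include <map>
#include <vector>

struct Edge { int src, dst; };

struct Fragment {                        // one worker's partition of the graph
  std::vector<int> vertices;             // global ids of the local vertices
  std::vector<Edge> edges;               // edges whose source vertex is local
  std::map<int, int> label;              // vertex id -> current component label
};
using Messages = std::map<int, int>;     // vertex id -> candidate label

// PEval: compute a first answer using only the local fragment, emitting
// candidate labels for the targets of local edges.
void PEval(Fragment& f, Messages& out) {
  for (int v : f.vertices) f.label[v] = v;
  for (const Edge& e : f.edges) {
    int cand = f.label[e.src];
    if (!out.count(e.dst) || cand < out[e.dst]) out[e.dst] = cand;
  }
}

// IncEval: treat incoming messages as updates to the local answer and emit
// only the changes; the return value tells the engine whether anything changed.
bool IncEval(Fragment& f, const Messages& in, Messages& out) {
  bool changed = false;
  for (const auto& [v, cand] : in) {
    if (f.label.count(v) && cand < f.label[v]) {
      f.label[v] = cand;
      changed = true;
      for (const Edge& e : f.edges)
        if (e.src == v && (!out.count(e.dst) || cand < out[e.dst]))
          out[e.dst] = cand;
    }
  }
  return changed;
}

// Assemble: merge the per-fragment labels into one global answer.
std::map<int, int> Assemble(const std::vector<Fragment>& frags) {
  std::map<int, int> result;
  for (const Fragment& f : frags) result.insert(f.label.begin(), f.label.end());
  return result;
}

The engine exchanges the emitted messages between workers and re-runs IncEval until no fragment reports a change, then invokes Assemble; that is the fixpoint computation discussed next.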
In this model, users do not need to know the details of the distributed setting \nwhile processing big graphs in a cluster, and GAE auto-parallelizes the graph analytics tasks across \na cluster of workers, based on a fixpoint computation. Under a monotonic condition, it guarantees to \nconverge with correct answers as long as the three sequential algorithms provided are correct. That \nis, GAE parallelizes sequential algorithms as a whole. This makes parallel computations accessible to \nusers who know conventional graph algorithms covered in college textbooks, and there is no need to \nrecast existing graph algorithms into a new model.\nGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 10\n 2.3.2. GAE: a high-performance graph analytics engine\n | Challenges\n | Auto parallelization\nFigure 8: Execution model in GAEIn response to the need of analyzing big graphs, several parallel graph analytics engines have been \ndeveloped. However, writing parallel graph algorithms remains challenging for average users. \nThe most popular programming model for parallel graph algorithms is the vertex-centric model, \npioneered by Pregel and GraphLab. Although graph analytics computing has been studied for \ndecades and a large number of sequential graph algorithms are already in place, to use the vertex-\ncentric model, one has to recast the existing sequential algorithms into vertex-centric programs. \nThe recasting is nontrivial for users who are not very familiar with the parallel models. Moreover, \nnone of the systems provides guarantee on the correctness or even termination of parallel programs \ndeveloped in their models. These make the existing systems a privilege for experienced users only.Existing parallel programming models can be easily adapted and executed on GAE. GAE works \non a graph G fragmented via a partition strategy picked by the user and each worker maintains a \nfragment of G. Given a query, GAE posts the same query to all the workers. As shown in the Figure 8, \nit computes Q(G) in three phases following BSP (Bulk Synchronous Parallel). More specifically, each \nworker first executes PEval against its local fragment, to compute partial answers in parallel. This \nfacilities data-partitioned parallelism via partial evaluation. Then each worker may exchange partial \nresults with other processors via synchronous message passing. Upon receiving messages, each \nworker incrementally computes IncEval. The incremental step iterates until no further messages can \nbe generated. At this point, Assemble pulls partial answers and assembles the final result. \nMulti-language SDKs are provided by GAE. Users choose to write their own algorithms in either C++ or \nPython. With Python, users can still expect a high performance. GAE integrated a compiler built with \nCython. It can generate efficient native code from Python algorithms behind the scenes, and dispatch \nthe code to the GraphScope cluster for execution. The SDKs further lower the total cost of ownership \nof graph analytics.\nGAE achieves high performance through a highly optimized analytical runtime based on libgrape-lite. \nMany optimization techniques, such as pull/push dynamic switching, cache-efficient memory layout, \nand pipelining were employed in the runtime.It performs well in LDBC Graph Analytics Benchmark, \nand outperforms other state-of-the-art graph systems. 
GAE is designed to be highly efficient and \nflexible, to cope with the scale, variety and complexity from real-life graph analytics applications.\nGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 11\nGraph deep learning algorithms, including graph embedding (GE) and graph neural networks (GNN), \nhave increasingly attracted a lot of interests in both academia and industry since the last decade. \nRecently, the rising of big data and complex systems brings a quick proliferation of graph data and \nreveals new insights. We have observed four properties in the vast majority of real-world graph data, \nnamely large-scale, heterogeneous, attributed and dynamic. For example, nowadays e-commerce \ngraphs often contain billions of vertices and tens of billions of edges, with various types and rich \nattributes, and quickly evolve over time. These properties pose great challenges for embedding and \nrepresenting graph data:\n• Graph data, very different from other forms of data, usually exhibits structural irregularity in \nEuclidean space. It is challenging to scale graph learning algorithms well on real-world graphs with \nextremely large sizes. Thus, it is a top priority for graph deep learning engines to ensure the time and \nspace efficiencies on large-scale graphs. 2.3.3.GLE: an end-to-end graph learning framework\n | Challenges | Flexible programming models\n | Multi-language SDKs\n | High-performance runtimeGraph sampling is an essential step in large-scale graph learning. To ease the development of \ngraph sampling, we propose a high-level language named as GSL (Graph Sampling Language). \nGSL features a Gremlin-like syntax, which is a widely adopted graph query language. With GSL, \na sampling query can be implemented as a traversal following a user-defined pattern, allowing \nusers to apply customized sampling logics at corresponding vertices and edges. Figure 9 shows an \nexample GSL query, which samples a batch of 512 vertices of type \"vertex_type\" , and in turn for each \nvertex samples 10 neighbors following the \"edge_type\" edges.\ngraph.V(\"vertex_type\").shuffle().batch(512)\n .outV(\"edge_type\").sample(10).by(\"random\");\nGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 12\n• Attributed heterogeneous graphs usually contain various types of vertex and edges with different \nattributes. This rich information is critical for leveraging both inductive and transductive settings and \nenhancing the representation power of a graph deep learning algorithm. However, it is non-trivial to \nintegrate both the topological structure information and the unstructured attribute information in a \nunified embedding space.\nGLE is designed for industrial scenarios at the very beginning, and thus is able to efficiently handle \nlarge-scale, heterogeneous graph data. We carefully design GLE to be light-weight, portable, \nextensible, and easily customizable for various kinds of tasks: GLE provides a set of user-friendly \nprogramming APIs essential for developing an end-to-end graph learning application, and can \nsmoothly co-work with popular deep learning engines such as TensorFlow and PyTorch to implement \ntask oriented neural network layers. \nFigure 9: An example graph sampling query | An industrial-scale graph deep learning engine\n | User-friendly interface\n | Modularized and extensible design\nAs shown in Figure 10, we design GLE in a modularized approach, where each module can be \nextended independently. 
This design enables GLE to keep up with the pace of the vibrant research \nand industrial advances in this field. Specifically, GLE abstracts out four layers: graph data layer, \nsampler layer, tensor operator layer, and algorithm layer. The graph data layer provides basic graph \nquery functionalities, allowing users to query the attribute/label data, vertices and edges of a graph. \nUsers can extend this layer to adapt to their customized graph stores. The sampler layer provides \nthree types of sampling operators (traverse sampler, neighbor sampler and negative sampler) and a \nrich set of built-in sampling operators implementations. The tensor operator layer comprises tensor \noperations used in graph neural networks. Users can easily plugin their own operators into these two GraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 13\n layers when developing new algorithms. The algorithm layer is built on top of the above three layers. \nWe have also incorporated in GLE a library of popular graph learning algorithms, including both \ngraph neural network algorithms and node embedding algorithms.\nFigure 10: Modularized design of GLE \nTo further optimize the sampling performance, GLE caches the remote neighbors for vertices that \nare visited frequently. In addition, the attribute indexes are also cached to speed up attribute \nlookups for vertices in each partition. GLE adopts an asynchronous execution engine with the \nsupport for heterogeneous hardware, which enables GLE to efficiently overlap a huge number of \nconcurrent operations such as I/Os, sampling and tensor computation. GLE abstracts heterogeneous \ncomputation hardware as resource pools, e.g., CPU thread pool and GPU stream pool, and \ncooperatively schedules fine-grained concurrent tasks.\nIn a common graph computing practice, several different computing systems are involved to tackle \ndifferent kinds of workloads. Existing solutions usually adopt distributed databases or file systems \nas the intermedia storage to share distributed data between heterogeneous computing systems that \nare involved in the task. This brings two significant overheads: 1) the structural data are transformed \nfrom/to the external data storage format (e.g., tables in relational databases, files in HDFS) back and \nforth in the beginning/end of each computation step. Meanwhile, the structure and operations of the \ndata are dismissed. 2) saving/loading the data to/from the external storage requires lots of memory-\ncopies and disk-IO costs, which becomes the bottleneck of the entire process in more and more cases \nas the efficiency of the computing systems are growing rapidly. In addition, the lack of managing the \ndata uniformly through the big data task obstructs the application of modern techniques such as data \nmonitoring, data-aware optimization, and fault-tolerance, thus, further decreases the productive \nefficiency. | Effective distributed runtime\n 2.3.4.Vineyard: an in-memory immutable data manager\n | ChallengesGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 14\nTo bridge different graph computation systems, a distributed in-memory data manager Vineyard \nis developed. It provides 1) in-memory distributed immutable data sharing in a zero-copy fashion \nto avoid introducing extra I/O costs, 2) some built-in out-of-box high-level data abstractions with \nefficient underlying memory layout to share the distributed data with complex structures (e.g., \ndistributed graphs). 
3) extensible mechanism to allow to transplant user-defined data structures or \nfunctionalities (e.g., graph partitioning algorithms) into Vineyard. 4) metadata management. With \nVineyard in place, users can handle large-scale distributed data across different graph computing \nsystems as simple and efficient as local variables, and finish a graph-related task in an end-to-end \nway.\nVineyard supports various flexible and efficient memory layout for partitioned immutable graph \ndata. Both simple graphs and property graphs are supported. And it utilized a columnar layout of \nproperties of graphs, which can speed-up the computing tasks and make it easier to interact with \nother data tasks with Apache Arrow efficiently.\nVineyard employs the extensible design concept of registry mechanism to facilitate users \ntransplanting their defined data structures into Vineyard. In particular, the extensible design involves \nthree components builders, resolvers and drivers, and allows users to build, resolve and share their \ndata structures easily among different systems and programming languages respectively. In general, \nthe registry mechanism decouples the functionality methods from the definition of Vineyard data \nstructures. For builders and resolvers, users can flexibly register different implementations in different \nprogramming languages to build and resolve the same Vineyard data structures, which makes the \ndata structures available to share among different systems and programming languages, and makes \nit possible to exploit native language optimizations. For drivers, the registry mechanism allows users \nto flexibly plug-in functionality methods in different programming languages for Vineyard data \nstructures, which assigns required capability to the data structures along with the data analytics \nprocess. For graph computing, graph partitioning is critical to both analytics or interactive processing. \nWith the registry mechanism, users can easily extend Vineyard by plugin their own graph partitioning \ndrivers, which can be implemented and optimized in accordance with specific graph computation \ntasks for further efficiency augmentation.\n In addition, Vineyard provides management for the metadata of the data stored in Vineyard. It keeps \nthe structures, layouts and properties of the data to construct high-level abstractions (e.g., graphs, \ntensors, dataframes). The metadata managers in a Vineyard cluster communicate with each other \nthrough the backend key-value store, e.g., etcd server, to keep the consistency of the distributed data \nstored in Vineyard. | Data sharing with zero-cost | A distributed in-memory data manager\n | Effective graph partitions with extensible design\n | Metadata managementGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 15\nWith the rapid growth of Taobao users, there has been an increasing volume of fraudulent user \nbehaviors on the Taobao platform. These fraudulent behaviors, such as spam transactions, \nfake comments, risky traffic, bring various threats to our ecosystem. Generally, all the users and \ntransactions can be organized as a large-scale heterogeneous graph which contains both different \ntypes of vertices such as buyers, sellers and items, and different types of edges with edge attributes \nextracted from various retailing scenarios. 
Facing the heterogeneous graph, GraphScope integrates \nthree types of tools, i.e., graph analytics, graph interactive query, and graph deep learning tools, \nto enable automatically to recognize spam transactions, fake comments, and risky traffic from \nhuge volumes of Taobao transactions. Moreover, because fraudulent users grow rapidly towards \nhighly organized groups, GraphScope can also identify fraudulent communities by using the \ngraph interactive query tools. To fulfill the need of discovering fraudulent behaviors in real-time, \nGraphScope implements graph deep learning models especially graph neural networks to recognize \nand process fake behaviors within a short period of time. \nOne-stop graph computing for fraudulent community detection. GraphScope can identify fraudulent \ncommunities by combining the components of graph analytics, graph interactive query and graph \ndeep learning. Figure 11 displays how GraphScope helps fraudulent community detection in Alibaba. \nFirst of all, graph analytics can extract discriminative patterns of each single fraudulent user. We first \nbuild a bipartite graph with vertices and edges representing buyers and their purchases of goods. \nThen, a Biclique subgraph pattern matching algorithm in graph analytics can be used to detect \npurchase fraud behavior. \nThen, after processing by graph analytics engine, GraphScope calls the component of graph learning \nto further discover deep complicated patterns of fraudulent users. Graph learning builds a semi-\nsupervised learning model on the large-scale user purchase graph. Specifically, graph neural \nnetworks (GNNs) are used to integrate the attribute information of vertices and edges with local \nnetwork structure information to learn deep network representations. \nAt the last step, GraphScope uses the component of graph interactive query to discover fraudulent \ncommunities. Specifically, label propagation algorithms (LPA) in the component of graph traversal \ncan discover fraudulent communities by spreading the class labels of fraudulent users discovered \nby graph analytics and graph learning. LPA runs in a large-scale decentralized graph, and discovers \noverlapping fraud communities. 3.Case study: anti-fraud and risk controlGraphScope: A One-Stop Large-Scale Graph Computing System from Alibaba 16\nAccording to our plan, the GraphScope is to be open-sourced by December 2020* under Apache \nLicense 2.0. Here at Alibaba, we are fully committed to the long-term active development and \nmaintenance of the project. We aim for a major release every 6 months with new features and \nimprovements. Following the initial release, the following new capabilities are planned for the next \nmajor release around June 2021* , please stay tuned :\n1. NetworkX-API support\n2. A persistent dynamic graph storage\n3. Performance improvement over runtime and compiler modules\n4. Java SDK for developing analytics algorithms\nWe also pledge to foster an open and welcoming graph computing community around the project. \nWe encourage everyone to try and contribute to GraphScope to improve this project. \n* These dates are subject to changeBecause of its rapid evolution and rich expressivity, graph computing demonstrates a huge potential \nto unlock many of the next generation AI applications. In this white paper, we have introduced \nGraphScope, a one-stop large-scale graph computing system to be open-sourced by Alibaba. 
\nBy providing a unified system for analytics, interactive queries and deep learning over graphs, \nGraphScope addresses many of the challenges encountered in existing systems. It is designed as a \none-stop, user-friendly and highly performant system for industrial-scale graphs and applications. \nWe believe in the future of graph computing, and we are fully committed to the future development \nof GraphScope. Welcome to join the force with us!\nFigure 11: One-stop graph computing for fraudulent community detection\n4.The future of GraphScope\n5.Conclusion ___\nA One-Stop Large-Scale Graph \nComputing System from Alibaba\n" } ]
{ "category": "App Definition and Development", "file_name": "GraphScope_whitepaper.pdf", "project_name": "GraphScope", "subcategory": "Database" }
[ { "data": "0.00.51.0\nFeb 03 Feb 10 Feb 17 Feb 24\ntimequery time (s)datasource\na\nb\nc\nd\ne\nf\ng\nhMean query latency" } ]
{ "category": "App Definition and Development", "file_name": "avg_query_latency.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "Boost.Spirit 2.5.4 Reference Card\nPrimitive Parsers\nattr(a) Attribute A\neoi End of Input unused\neol End of Line unused\neps Epsilon unused\neps(p) unused\nsymbols<Ch,T,Lookup> Symbol Table T\nUnary Parsers\n&a And Predicate unused\n!a Not Predicate unused\n*a Zero or More vector<A>\n*u unused\n-a Optional optional<A>\n-u unused\n+a One or More vector<A>\n+u unused\nattr_cast<T>(p) Attribute Cast T\nBinary Parsers\na-b Difference A\nu-b unused\na%b Separated List vector<A>\nu%b unused\nN-ary Parsers\na|b Alternative variant<A, B>\na | u optional<A>\na|b|u optional<variant<A,B>>\nu | b optional<B>u|u unused\na|a A\na>b Expect tuple<A,B>\na>u A\nu>b B\nu>u unused\na > vA vector<A>\nvA > a vector<A>\nvA > vA vector<A>a^b Permute tuple<optional<A>,optional<B>>\na ^ u optional<A>\nu ^ b optional<B>\nu^u unused\na> >b Sequence tuple<A,B>\na> >u A\nu> >b Bu> >u unused\na >> a vector<A>\na >> vA vector<A>vA >> a vector<A>\nvA >> vA vector<A>\na| |b Sequence Or tuple<optional<A>,optional<B>>\na || u optional<A>\nu || a optional<A>\nu| |u unused\na || a vector<optional<A>>Nonterminal Parsers\nrule<It,RT(A1,...,An),Skip,Loc> Rule RT\nrule<It> unused\nrule<It,Skip> unused\nrule<It,Loc> unused\nrule<It,Skip,Loc> unused\ngrammar<It,RT(A1,...,An),Skip,Loc> Grammar RT\ngrammar<It> unused\ngrammar<It,Skip> unused\ngrammar<It,Loc> unused\ngrammar<It,Skip,Loc> unused\nParser Directives\nas<T>()[a] Atomic Assignment T\nexpect[a] Expectation A\nexpect[u] unused\nhold[a] Hold Attribute A\nhold[u] unused\nlexeme[a] Lexeme A\nlexeme[u] unused\nmatches[a] Matches bool\nno_case[a] Case Insensitive A\nno_case[u] unused\nno_skip[a] No Skipping A\nno_skip[u] unused\nomit[a] Omit Attribute unused\nraw[a] Raw Iterators iterator_range<It>\nraw[u] unused\nrepeat[a] Repeat vector<A>\nrepeat[u] unused\nrepeat(n)[a] vector<A>repeat(n)[u] unused\nrepeat(min,max)[a] vector<A>\nrepeat(min,max)[u] unused\nrepeat(min,inf)[a] vector<A>repeat(min,inf)[u] unused\nskip[a] Skip Whitespace A\nskip[u] unused\nskip(p)[a] A\nskip(p)[u] unused\nSemantic Actions\np[fa] Apply Semantic Action A\np[phoenix lambda ]A\ntemplate<typename Attrib>\nvoid fa(Attrib& attr);\ntemplate<typename Attrib, typename Context>\nvoid fa(Attrib& attr, Context& context);\ntemplate<typename Attrib, typename Context>\nvoid fa(Attrib& attr, Context& context, bool& pass);Phoenix Placeholders\n_1,_2,...,_N Nth Attribute of p\n_val Enclosing rule’s synthesized attribute\n_r1,_r2,...,_rNEnclosing rule’s Nth inherited attribute.\n_a,_b,...,_j Enclosing rule’s local variables.\n_pass Assign falseto_passto force failure.\nIterator Parser API\nbool parse<It, Exp>(\nIt& first, It last, Exp const& expr);\nbool parse<It, Exp, A1, ..., An>(\nIt& first, It last, Exp const& expr,A1& a1, ..., An& an);\nbool phrase_parse<It, Exp, Skipper>(\nIt& first, It last, Exp const& expr,Skipper const& skipper,\nskip_flag post_skip = postskip);\nbool phrase_parse<It, Exp, Skipper, A1, ..., An>(\nIt& first, It last, Exp const& expr,Skipper const& skipper,\nA1& a1, ..., An& an);\nbool phrase_parse<It, Exp, Skipper, A1, ..., An>(\nIt& first, It last, Exp const& expr,\nSkipper const& skipper,\nskip_flag post_skip,A1& a1, ..., An& an);\nStream Parser API\nunspecified match<Exp>(Exp const& expr);\nunspecified match<Exp, A1, ..., An>(\nExp const& expr,\nA1& a1, ..., An& an);\nunspecified phrase_match<Exp, Skipper>(\nExp const& expr,\nSkipper const& skipper,\nskip_flag post_skip = postskip);\nunspecified phrase_match<Exp, Skipper, A1, ..., An>(\nExp const& expr,\nSkipper const& 
skipper,skip_flag post_skip,\nA1& a1, ..., An& an);\nc/circlecopyrt2016 Richard Thomson, Permissions on back. v1.0\nSend comments and corrections to Richard Thomson, /angbracketleftlegalize@xmission.com /angbracketrightBinary Value Parsers\nbyte_ Native Byte uint_least8_t\nbyte_(b) unused\nword Native Word uint_least16_t\nword(w) unused\ndword Native Double Word uint_least32_t\ndword(dw) unused\nqword Native Quad Word uint_least64_t\nqword(qw) unused\nbin_float Native Float float\nbin_float(f) unused\nbin_double Native Double double\nbin_double(d) unused\nlittle_ item Little Endian item as above\nlittle_ item (w) unused\nbig_ item Big Endian item as above\nbig_ item (w) unused\nCharacter Encodings\nascii 7-bit ASCII\niso8859_1 ISO 8859-1\nstandard Using <cctype>\nstandard_wide Using <cwctype>\nCharacter Parsers\nc Character Literal unused\nlit(c) unused\nns::char_ Any Character ns::char_type\nns::char_(c) Character Value ns::char_type\nns::char_(f,l) Character Range ns::char_type\nns::char_(str) Any Character in String ns::char_type\n~cp Characters not in cp Attribute of cp\nCharacter Class Parsers\nns::alnum Letters or Digits ns::char_type\nns::alpha Alphabetic ns::char_type\nns::blank Spaces or Tabs ns::char_type\nns::cntrl Control Characters ns::char_type\nns::digit Numeric Digits ns::char_type\nns::graph Non-space Printing Characters ns::char_type\nns::lower Lower Case Letters ns::char_type\nns::print Printing Characters ns::char_type\nns::punct Punctuation ns::char_type\nns::space White Space ns::char_type\nns::upper Upper Case Letters ns::char_type\nns::xdigit Hexadecimal Digits ns::char_type\nString Parsers\nstr String Literal unused\nlit(str) unused\nns::string(\"str\") String basic_string<char>\nns::string(L\"str\") basic_string<wchar_t>Unsigned Integer Parsers\nlit(num) Integer Literal unused\nushort_ Short unsigned short\nushort_(num) Short Value unsigned short\nuint_ Integer unsigned int\nuint_(num) Integer Value unsigned int\nulong_ Long unsigned long\nulong_(num) Long Value unsigned long\nulong_long Long Long unsigned long long\nulong_long(num) Long Long Value unsigned long long\nbin Binary Integer unsigned int\nbin(num) Binary Integer Value unsigned int\noct Octal Integer unsigned int\noct(num) Octal Integer Value unsigned int\nhex Hexadecimal Integer unsigned int\nhex(num) Hex Integer Value unsigned int\nGeneralized Unsigned Integer Parser\nuint_parser<T,Radix,MinDigits,MaxDigits> T\nuint_parser<T,Radix,MinDigits,MaxDigits>(num) T\nSigned Integer Parsers\nlit(num) Integer Literal unused\nshort_ Short short\nshort_(num) Short Value short\nint_ Integer int\nint_(num) Integer Value int\nlong_ Long long\nlong_(num) Long Value long\nlong_long Long Long long long\nlong_long(num) Long Long Value long long\nGeneralized Signed Integer Parser\nint_parser<T,Radix,MinDigits,MaxDigits> T\nint_parser<T,Radix,MinDigits,MaxDigits>(num) T\nReal Number Parsers\nlit(num) Real Number Literal unused\nfloat_ Float float\nfloat_(num) Float Value float\ndouble_ Double double\ndouble_(num) Double Value double\nlong_double Long Double long double\nlong_double(num) Long Double Value long double\nGeneralized Real Number Parser\nreal_parser<T,RealPolicies> T\nreal_parser<T,RealPolicies>(num) T\nBoolean Parsers\nlit(boolean) Boolean Literal unused\nfalse_ Match “ false” bool\ntrue_ Match “ true” bool\nbool_ Boolean bool\nbool_(boolean) Boolean Value bool\nGeneralized Boolean Parser\nbool_parser<T,BoolPolicies> T\nbool_parser<T,BoolPolicies>(boolean) T\nCopyright c/circlecopyrt2016 Richard Thomson, July 
2016 v1.0\nPermission is granted to make and distribute copies of this card provided the copyright notice and this permission notice are preserved on all copies." } ]
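As a quick orientation for how the entries on the card combine in practice, here is a small, hedged usage example: it parses a comma-separated list of real numbers using double_ (Real Number Parsers), the % separated-list operator, and phrase_parse from the Iterator Parser API with a whitespace skipper. The surrounding program (main and the sample input string) is an illustrative assumption, not part of the card.

// Minimal usage sketch combining entries from the card above.
#include <boost/spirit/include/qi.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
  namespace qi = boost::spirit::qi;
  std::string input = "1.5, 2.25, 3.75";
  std::vector<double> values;               // attribute of double_ % ','
  std::string::iterator first = input.begin(), last = input.end();
  bool ok = qi::phrase_parse(first, last,
                             qi::double_ % ',',            // the grammar
                             boost::spirit::ascii::space,  // the skipper
                             values);
  std::cout << (ok && first == last ? "parsed " : "failed after ")
            << values.size() << " values\n";
}

On success, first reaches last and values holds 1.5, 2.25 and 3.75, matching the vector<double> attribute listed for the separated-list parser.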
{ "category": "App Definition and Development", "file_name": "spirit-reference.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "Iterator Adaptor\nAuthor : David Abrahams, Jeremy Siek, Thomas Witt\nContact : dave@boost-consulting.com ,jsiek@osl.iu.edu ,witt@ive.uni-hannover.de\nOrganization :Boost Consulting , Indiana University Open Systems Lab , University of\nHanover Institute for Transport Railway Operation and Construction\nDate : 2004-11-01\nCopyright : Copyright David Abrahams, Jeremy Siek, and Thomas Witt 2003.\nabstract:\nEach specialization of the iterator_adaptor class template is derived from a specialization of\niterator_facade . The core interface functions expected by iterator_facade are implemented in\nterms of the iterator_adaptor ’sBase template parameter. A class derived from iterator_adaptor\ntypically redefines some of the core interface functions to adapt the behavior of the Base type. Whether\nthe derived class models any of the standard iterator concepts depends on the operations supported\nby the Base type and which core interface functions of iterator_facade are redefined in the Derived\nclass.\nTable of Contents\nOverview\nReference\niterator_adaptor requirements\niterator_adaptor base class parameters\niterator_adaptor public operations\niterator_adaptor protected member functions\niterator_adaptor private member functions\nTutorial Example\nOverview\nTheiterator_adaptor class template adapts some Base1type to create a new iterator. Instantiations of\niterator_adaptor are derived from a corresponding instantiation of iterator_facade and implement\nthe core behaviors in terms of the Base type. In essence, iterator_adaptor merely forwards all\noperations to an instance of the Base type, which it stores as a member.\n1The term “Base” here does not refer to a base class and is not meant to imply the use of derivation. We\nhave followed the lead of the standard library, which provides a base() function to access the underlying\niterator object of a reverse_iterator adaptor.\n1The user of iterator_adaptor creates a class derived from an instantiation of iterator_adaptor\nand then selectively redefines some of the core member functions described in the iterator_facade\ncore requirements table. The Base type need not meet the full requirements for an iterator; it need\nonly support the operations used by the core interface functions of iterator_adaptor that have not\nbeen redefined in the user’s derived class.\nSeveral of the template parameters of iterator_adaptor default to use_default . This allows\nthe user to make use of a default parameter even when she wants to specify a parameter later in the\nparameter list. Also, the defaults for the corresponding associated types are somewhat complicated,\nso metaprogramming is required to compute them, and use_default can help to simplify the imple-\nmentation. 
Finally, the identity of the use_default type is not left unspecified because specification\nhelps to highlight that the Reference template parameter may not always be identical to the iterator’s\nreference type, and will keep users from making mistakes based on that assumption.\nReference\ntemplate <\nclass Derived\n, class Base\n, class Value = use_default\n, class CategoryOrTraversal = use_default\n, class Reference = use_default\n, class Difference = use_default\n>\nclass iterator_adaptor\n: public iterator_facade<Derived, V’,C’,R’,D’> // see details\n{\nfriend class iterator_core_access;\npublic:\niterator_adaptor();\nexplicit iterator_adaptor(Base const& iter);\ntypedef Base base_type;\nBase const& base() const;\nprotected:\ntypedef iterator_adaptor iterator_adaptor_;\nBase const& base_reference() const;\nBase& base_reference();\nprivate: // Core iterator interface for iterator_facade.\ntypename iterator_adaptor::reference dereference() const;\ntemplate <\nclass OtherDerived, class OtherItera-\ntor, class V, class C, class R, class D\n>\nbool equal(iterator_adaptor<OtherDerived, OtherItera-\ntor, V, C, R, D> const& x) const;\nvoid advance(typename iterator_adaptor::difference_type n);\nvoid increment();\nvoid decrement();\ntemplate <\nclass OtherDerived, class OtherItera-\n2tor, class V, class C, class R, class D\n>\ntypename iterator_adaptor::difference_type distance_to(\niterator_adaptor<OtherDerived, OtherItera-\ntor, V, C, R, D> const& y) const;\nprivate:\nBase m_iterator; // exposition only\n};\niterator_adaptor requirements\nstatic_cast<Derived*>(iterator_adaptor*) shall be well-formed. The Base argument shall be\nAssignable and Copy Constructible.\niterator_adaptor base class parameters\nTheV’,C’,R’, and D’parameters of the iterator_facade used as a base class in the summary of\niterator_adaptor above are defined as follows:\nV’= if (Value is use_default)\nreturn iterator_traits<Base>::value_type\nelse\nreturn Value\nC’= if (CategoryOrTraversal is use_default)\nreturn iterator_traversal<Base>::type\nelse\nreturn CategoryOrTraversal\nR’= if (Reference is use_default)\nif (Value is use_default)\nreturn iterator_traits<Base>::reference\nelse\nreturn Value&\nelse\nreturn Reference\nD’= if (Difference is use_default)\nreturn iterator_traits<Base>::difference_type\nelse\nreturn Difference\niterator_adaptor public operations\niterator_adaptor();\nRequires: The Base type must be Default Constructible.\nReturns: An instance of iterator_adaptor with m_iterator default constructed.\nexplicit iterator_adaptor(Base const& iter);\nReturns: An instance of iterator_adaptor with m_iterator copy constructed from iter .\n3Base const& base() const;\nReturns: m_iterator\niterator_adaptor protected member functions\nBase const& base_reference() const;\nReturns: A const reference to m_iterator .\nBase& base_reference();\nReturns: A non-const reference to m_iterator .\niterator_adaptor private member functions\ntypename iterator_adaptor::reference dereference() const;\nReturns: *m_iterator\ntemplate <\nclass OtherDerived, class OtherIterator, class V, class C, class R, class D\n>\nbool equal(iterator_adaptor<OtherDerived, OtherIterator, V, C, R, D> const& x) const;\nReturns: m_iterator == x.base()\nvoid advance(typename iterator_adaptor::difference_type n);\nEffects: m_iterator += n;\nvoid increment();\nEffects: ++m_iterator;\nvoid decrement();\nEffects: --m_iterator;\ntemplate <\nclass OtherDerived, class OtherItera-\ntor, class V, class C, class R, class D\n>\ntypename 
iterator_adaptor::difference_type distance_to(\niterator_adaptor<OtherDerived, OtherIterator, V, C, R, D> const& y) const;\nReturns: y.base() - m_iterator\nTutorial Example\nIn this section we’ll further refine the node_iter class template we developed in the iterator_facade\ntutorial . If you haven’t already read that material, you should go back now and check it out because\nwe’re going to pick up right where it left off.\n4node_base* really is an iterator\nIt’s not really a very interesting iterator, since node_base is an abstract class: a pointer to a\nnode_base just points at some base subobject of an instance of some other class, and incrementing\nanode_base* moves it past this base subobject to who-knows-where? The most we can do with\nthat incremented position is to compare another node_base* to it. In other words, the original\niterator traverses a one-element array.\nYou probably didn’t think of it this way, but the node_base* object that underlies node_iterator is\nitself an iterator, just like all other pointers. If we examine that pointer closely from an iterator perspec-\ntive, we can see that it has much in common with the node_iterator we’re building. First, they share\nmost of the same associated types ( value_type ,reference ,pointer , and difference_type ). Second,\neven some of the core functionality is the same: operator* and operator== on the node_iterator\nreturn the result of invoking the same operations on the underlying pointer, via the node_iterator ’s\ndereference andequal member functions ). The only real behavioral difference between node_base*\nand node_iterator can be observed when they are incremented: node_iterator follows the m_next\npointer, while node_base* just applies an address offset.\nIt turns out that the pattern of building an iterator on another iterator-like type (the Base1type)\nwhile modifying just a few aspects of the underlying type’s behavior is an extremely common one, and\nit’s the pattern addressed by iterator_adaptor . Using iterator_adaptor is very much like using\niterator_facade , but because iterator adaptor tries to mimic as much of the Base type’s behavior as\npossible, we neither have to supply a Value argument, nor implement any core behaviors other than\nincrement . The implementation of node_iter is thus reduced to:\ntemplate <class Value>\nclass node_iter\n: public boost::iterator_adaptor<\nnode_iter<Value> // Derived\n, Value* // Base\n, boost::use_default // Value\n, boost::forward_traversal_tag // CategoryOrTraversal\n>\n{\nprivate:\nstruct enabler {}; // a private type avoids misuse\npublic:\nnode_iter()\n: node_iter::iterator_adaptor_(0) {}\nexplicit node_iter(Value* p)\n: node_iter::iterator_adaptor_(p) {}\ntemplate <class OtherValue>\nnode_iter(\nnode_iter<OtherValue> const& other\n, typename boost::enable_if<\nboost::is_convertible<OtherValue*,Value*>\n, enabler\n>::type = enabler()\n)\n: node_iter::iterator_adaptor_(other.base()) {}\nprivate:\n5friend class boost::iterator_core_access;\nvoid increment() { this->base_reference() = this->base()->next(); }\n};\nNote the use of node_iter::iterator_adaptor_ here: because iterator_adaptor defines a nested\niterator_adaptor_ type that refers to itself, that gives us a convenient way to refer to the complicated\nbase class type of node_iter<Value> . 
[Note: this technique is known not to work with Borland C++\n5.6.4 and Metrowerks CodeWarrior versions prior to 9.0]\nYou can see an example program that exercises this version of the node iterators here.\nIn the case of node_iter , it’s not very compelling to pass boost::use_default asiterator_adaptor ’s\nValue argument; we could have just passed node_iter ’sValue along to iterator_adaptor , and that’d\neven be shorter! Most iterator class templates built with iterator_adaptor are parameterized on\nanother iterator type, rather than on its value_type . For example, boost::reverse_iterator takes\nan iterator type argument and reverses its direction of traversal, since the original iterator and the\nreversed one have all the same associated types, iterator_adaptor ’s delegation of default types to its\nBase saves the implementor of boost::reverse_iterator from writing:\nstd::iterator_traits<Iterator>:: some-associated-type\nat least four times.\nWe urge you to review the documentation and implementations of reverse_iterator and the\nother Boost specialized iterator adaptors to get an idea of the sorts of things you can do with itera-\ntor_adaptor . In particular, have a look at transform_iterator , which is perhaps the most straight-\nforward adaptor, and also counting_iterator , which demonstrates that iterator_adaptor ’sBase\ntype needn’t be an iterator.\n6" } ]
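Since the document closes by pointing at transform_iterator, a short usage sketch may help orient readers; it is not part of the original tutorial. It uses boost::make_transform_iterator to view a vector through a doubling function object; the doubler functor and the sample data are assumptions made for the example.

// Usage sketch for the transform_iterator adaptor mentioned above.
#include <boost/iterator/transform_iterator.hpp>
#include <iostream>
#include <vector>

struct doubler {
  typedef int result_type;                 // result type of the function object
  int operator()(int x) const { return 2 * x; }
};

int main() {
  std::vector<int> v;
  v.push_back(1); v.push_back(2); v.push_back(3);
  // Adapt the vector's iterators so that dereferencing applies doubler.
  boost::transform_iterator<doubler, std::vector<int>::iterator>
      first = boost::make_transform_iterator(v.begin(), doubler()),
      last  = boost::make_transform_iterator(v.end(), doubler());
  for (; first != last; ++first)
    std::cout << *first << ' ';            // prints: 2 4 6
  std::cout << '\n';
}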
{ "category": "App Definition and Development", "file_name": "iterator_adaptor.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "Identify MIME Type \n \nTest " } ]
{ "category": "App Definition and Development", "file_name": "1.pdf", "project_name": "Apache NiFi", "subcategory": "Streaming & Messaging" }
[ { "data": "05000100001500020000\nFeb 03 Feb 10 Feb 17 Feb 24\ntimequery time (s)datasource\na\nb\nc\nd\ne\nf\ng\nh99th percentile query latency" } ]
{ "category": "App Definition and Development", "file_name": "99th_percentile.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "Generic GraphAlgorithmsforSparse MatrixOrdering\n/?\nLie-QuanLee JeremyG.Siek AndrewLumsdaine\nLaboratory for ScientificComputing, Department of Computer Science and Engineering,\nUniversity of NotreDame, Notre Dame, IN 46556,fllee1,jsiek,lums g@lsc.nd.edu ,\nWWWhome page: http://lsc.nd.edu/ /#18lsc\nAbstract. Fill-reducing sparse matrix orderings have been a topic of active re-\nsearch for many years. Although most such algorithms are developed and ana-\nlyzed within a graph-theoretical framework, for reasons of performance, the cor-\nresponding implementations are typically realized with programming languagesdevoid of language features necessary to explicitly represent graph abstractions.\nRecently,genericprogramminghasemergedasaprogrammingparadigmcapable\nofproviding high levelsofperformance inthepresence ofprogramming abstrac-tions. In this paper we present an implementation of the Minimum Degree or-\ndering algorithm using the newly-developed Generic Graph Component Library.\nExperimental comparisons show that, despite our heavy use of abstractions, ourimplementation has performance indistinguishable from that of the Fortran im-plementation.\n1 Introduction\nComputationswith symmetric positive definite sparse matrices are a commonand im-\nportanttaskinscientificcomputing.Forefficientmatrixfactorizationandlinearsystem\nsolution,theorderingoftheequationsplaysanimportantrole.BecauseGaussianelim-\nination (without numerical pivoting) of symmetric positive definite systems is stable,\nsuch systems can be ordered before factorization takes place based only on the struc-\ntureofthesparsematrix.Unfortunately,determiningtheoptimalordering(inthesenseof minimizing fill-in) is an NP-complete problem [1], so greedy heuristic algorithms\naretypicallyusedinstead.\nThe development of algorithms for sparse matrix ordering has been an active re-\nsearchtopicformanyyears.Thealgorithmsaretypicallydevelopedingraph-theoretical\nterms,whilethemostwidelyusedimplementationsarecodedinFortran77.SinceFor-\ntran 77 supports no abstract data types other than arrays, the graph abstractions used\nto develop and describe the ordering algorithms must be discarded for the actual im-\nplementation.Althoughgraphalgorithmsarewell-developedand widely-implementedin higher-level languages such as C or C++, performance concerns (which are often\nparamountinscientificcomputing)havecontinuedtorestrictimplementationsofsparse\nmatrixorderingalgorithmstoFortran.\nEfforts to develop sparse matrix orderings with modern programming techniques\ninclude [2] and [3]. These were based on an object-oriented, rather than generic, pro-\ngramming paradigm and although they were well programmed, the reported perfor-\nmancewas still afactorof4-5slowerthanFortran77implementations./?This work was supported by NSFgrants ASC94-22380 and CCR95-02710.2\nTherecentlyintroducedprogrammingparadigmknownas genericprogramming [4,\n5] has demonstratedthat abstraction and performanceare not necessarily mutually ex-clusive. One example of a graph library that incorporates the generic programming\nparadigmisthe recentlydevelopedGenericGraphComponentLibrary(GGCL)[6].In\nthis paper we present an implementation of the minimum degree algorithm for sparse\nmatrix ordering using the GGCL. 
Although the implementation uses powerful graph\nabstractions, its performance is indistinguishable from that of one of the most widelyusedFortran77codes.\nTherestofthispaperisorganizedasfollows.Weprovideabriefoverviewofgeneric\nprogramming and the Generic Graph Component Library in Sections 2 and 3. Algo-rithms for sparse matrix orderingare reviewedin Section 4 and our implementationof\ntheMinimumDegreealgorithmisgiveninSection5alongwithperformanceresultsin\nSection6.\n2 Generic Programming\nRecently, genericprogramminghas emergedas a powerfulnewparadigmfor software\ndevelopment,particularlyforthedevelopmentof(anduseof)componentlibraries.The\nmost visible (and perhaps most important) popular example of generic programmingis the celebrated Standard Template Library (STL) [7]. The fundamental principle of\ngeneric programming is to separate algorithms from the concrete data structures on\nwhichtheyoperatebasedontheunderlyingabstractproblemdomainconcepts,allowingthe algorithms and data structures to freely interoperate. That is, in a generic library,\nalgorithms do not manipulate concrete data structures directly, but instead operate on\nabstract interfaces defined for entire equivalence classes of data structures. A single\ngeneric algorithm can thus be applied to any particular data structure that conformsto\ntherequirementsofits equivalenceclass.\nInSTL the data structuresare containers such asvectorsandlinked lists and itera-\ntorsformtheabstractinterfacebetween algorithms andcontainers.EachSTLalgorithm\nis written in terms of the iterator interface and as a result each algorithm can operatewith any of the STL containers. In addition, many of the STL algorithms are parame-\nterizednotonlyonthe typeofiteratorbeingaccessed,butonthe typeofoperationthat\nisappliedduringthetraversalofacontaineraswell. Forexample,the transform()\nalgorithmhasaparameterfora UnaryOperator functionobject (aka“functor”).Fi-\nnally,STLcontainsclassesknownas adaptors thatareusedtomodifyunderlyingclass\ninterfaces.\nThe generic programming approach to software development can provide tremen-\ndous benefits to such aspects of software quality as functionality, reliability, usability,maintainability, portability, and efficiency. The last point, efficiency, is of particular\n(and sometimes paramount)concernin scientific applications. Performanceis often of\nsuchimportanceto scientificapplicationsthat otheraspectsofsoftwarequalitymaybe\ndeliberately sacrificed if nice programming abstractions and high performance cannot\nbe simultaneously achieved. Until quite recently, the common wisdom has been thathighlevelsof abstractionandhighlevelsof performancewere, per se, mutuallyexclu-\nsive. However, beginningwith STL for general-purposeprogramming,and continuing3\nwiththeMatrixTemplateLibrary(MTL)[5]forbasiclinearalgebra,ithasbeenclearly\ndemonstratedthatabstractiondoesnotnecessarilycomeattheexpenseofperformance.Infact,MTLprovidesperformanceequivalenttothatofhighly-optimizedvendor-tuned\nmathlibraries.\n3 The Generic GraphComponent Library\nThe Generic Graph Component Library (GGCL) is a collection of high-performancegraph algorithms and data structures, written in C++ using the generic programming\nstyle. 
Although the domain of graphs and graph algorithms is a natural one for the\napplicationofgenericprogramming,thereareimportant(andfundamental)differences\nbetweenthe typesofalgorithmsanddatastructuresin STLandthetypesofalgorithmsanddatastructuresin agenericgraphlibrary.\n3.1 GraphConcepts\nThe graph interface used by GGCL can be derived directly from the formal definition\nofagraph[8].Agraph Gisapair(V,E),whereVisafinitesetand Eisabinaryrelation\nonV.Viscalleda vertex set whoseelementsarecalled vertices.Eiscalledan edgeset\nwhose elements are called edges. An edge is an ordered or unorderedpair (u,v)where\nu,v/2V.I f(u,v)is and edge in graph G, then vertex visadjacent to vertex u. Edge\n(u,v)isanout-edgeofvertex uandanin-edgeofvertex v.Inadirectedgraphedgesare\norderedpairswhileina undirected graphedgesareunorderedpairs.Ina directedgraph\nanedge(u,v)leavesfromthe sourcevertexutothetargetvertexv.\nTodescribethegraphinterfaceofGGCLweusegenericprogrammingterminology\nfrom the SGI STL [4]. In the parlance of the SGI STL, the set of requirements on a\ntemplate parameter for a generic algorithm or data structure is called a concept.T h e\nvarious classes that fulfill the requirements of a concept are said to be modelsof the\nconcept. Concepts can extend other concepts, which is referred to as refinement .W e\nuseabold sansserif fontforallconceptidentifiers.\nThe three main concepts necessary to define our graph are Graph,Vertex,a n d\nEdge. Theabstractiteratorinterfaceusedby STLis notsufficientlyrich to encompass\nthe numerous ways that graph algorithms may compute with a graph. Instead, we for-\nmulate an abstract interface, based on VisitorandDecorator concepts, that serves the\nsame purpose for graphs that iterators do for basic containers. These two concepts are\nsimilar in spirit to the “Gang of Four” [9] patterns of the same name, however the im-plementationtechniquesusedarebasedonstaticpolymorphismandmixins[10]instead\nofdynamicpolymorphism.Fig.1depictstheanalogybetweentheSTLandtheGGCL.\nGraph:TheGraphconcept merely contains a set of vertices and a set of edges and\na tag to specify whether it is a directed graph or an undirected graph. The only\nrequirement is the vertex set be a model of Container and its value\ntypea\nmodelofVertex.Theedgesetmustbeamodelof Container anditsvalue type\namodelof Edge.\nVertex:TheVertexconceptprovidesaccess to the adjacent vertices, the out-edgesof\nthevertexandoptionallythein-edges.4\nSTL AlgorithmsSTL Containers\n(a) (b)Graph AlgorithmsGraph\nData Structures\nVertex, Edge,\nVisitor, DecoratorIterator\nFunctor\nFig.1.Theanalogy between theSTLandtheGGCL.\nEdge:AnEdgeisapairofvertices,oneisthe sourcevertexandtheotheristhe target\nvertex. In the unorderedcase it is just assumed that the position of the sourceand\ntargetverticesareinterchangeable.\nDecorator: As mentioned in the introduction, we would like to have a generic way\nto access vertex and edge properties, such as color and weight, from within an\nalgorithm. The generic access method is necessary because there are many ways\nin which the properties can be stored, and ways in which access to that storage isimplemented. We give the name Decorator to the concept for this generic access\nmethod. The implementation of graph Decorator s is similar in spirit to the GoF\ndecoratorpattern [9]. 
A Decorator is very similar to a functor,or functionobject.\nWe use the operator[] instead of operator() since it is a better match for\nthecommonlyusedgraphalgorithmnotations.\nVisitor:In the same way that functionobjects are used to make STL algorithmsmore\nflexible,wecanusefunctor-likeobjectstomakethegraphalgorithmsmoreflexible.\nWe use the name Visitorfor this concept, since we are basically just using a tem-\nplate version of the well known visitor pattern [9]. Our Visitoris somewhat more\ncomplexthan a functionobject, since there are severalwell defined entrypointsat\nwhichtheusermaywanttointroduceacall-back.\nTheDecorator andVisitorconceptsare used in the GGCL graph algorithminterfaces\ntoallowformaximumflexibility.BelowistheprototypefortheGGCLdepthfirstsearchalgorithm,whichincludesparametersforbotha Decorator andaVisitorobject.There\naretwooverloadedversionsoftheinterface,oneinwhichthereisadefault ColorDec-\norator.Thedefaultdecoratoraccessesthecolorpropertydirectlyfromthegraphvertex.\nThis is analogous to the STL algorithms. For example, there are two overloaded ver-\nsions of the lower\nbound() algorithm. One uses operator< by default and the\nothertakesa BinaryOperator functorargument.\ntemplate <class Graph, class Visitor>\nvoid dfs(Graph& G, Visitor visit);\ntemplate <class Graph, class Visitor, class ColorDecorator>\nvoid dfs(Graph& G, Visitor visit, ColorDecorator color);5\n3.2 GenericGraphAlgorithms\nWiththeabstractgraphinterfacedefined,genericgraphalgorithmscanbewrittensolely\nin termsof thegraphinterface.Thealgorithmsdonotmake anyassumptionsabouttheactualunderlyinggraphdatastructure.\nTheBreadthFirstSearch(BFS)algorithm,asanexample,isshowninFig.2.Inthis\nalgorithm we use the expression u.out\nedges() to access the Container of edges\nleaving vertex u. We can then use the iterators of this Container to access each of the\nedges.In thisalgorithm,the Visitoris usedto abstract the kindofoperationperformed\noneachedgeasitisdiscovered.Thealgorithmalsoinsertseachdiscoveredvertexonto\nQ.Thevertexisaccessedthrough e.target vertex() .\ntemplate <class Graph, class QType, class Visitor>\nvoid generalized_BFS(Graph& G, Graph::vertex_type s,\nQType& Q, Visitor visitor)\n{\ntypename Vertex::edgelist_type::iterator ei;\nvisitor.start(s);Q.push(s);\nwhile (! Q.empty()) {\nVertex u = Q.front();Q.pop();\nvisitor.discover(u);\nfor (ei = u.out_edges().begin();\nei != u.out_edges().end(); ++ei) {\nEdge e = *ei;\nif (visitor.visit(e))\nQ.push(e.target_vertex());\n}\nvisitor.finish(u);\n}\n}\nFig.2.The generalized Breadth FirstSearch algorithm.\nTheconciseimplementationofalgorithmsisenabledbythegenericityoftheGGCL\nalgorithms,allowingustoexploitthereusethatisinherentinthesegraphalgorithmsin\naconcretefashion.\n4 Sparse Matrix Ordering\nTheprocessforsolvingasparsesymmetricpositivedefinitelinearsystem, Ax /= b,can\nbedividedintofourstagesasfollows:\nOrdering: Finda permutation Pofmatrix A,6\nSymbolicfactorization: Setupa datastructureforCholeskyfactor Lof PA P\nT,\nNumericalfactorization: Decompose PA P\nTinto LL\nT,\nTriangularsystemsolution: Solve LL\nTPx /= Pbfor x.\nBecause the choice of permutation Pwill directly determine the number of fill-in ele-\nments (elements present in the non-zero structure of Lthat are not present in the non-\nzero structure of A), the ordering has a significant impact on the memory and compu-\ntationalrequirementsforthelatter stages. 
However,findingtheoptimalorderingfor A\n(in the sense of minimizing fill-in) has been proven to be NP-complete [1] requiring\nthatheuristicsbeusedforall butsimple(orspeciallystructured)cases.\nDevelopingalgorithmsforhigh-qualityorderingshas beenan activeresearch topic\nfor many years. Most orderingalgorithmsin wide use are based on a greedy approachsuch that the orderingis chosen to minimize some quantityat each step of a simulatedn-stepsymmetricGaussianeliminationprocess.Thealgorithmsusingsuchanapproach\naretypicallydistinguishedbytheirgreedyminimizationcriteria[11].\n4.1 GraphModels\nIn1961,Parter introducedthe graphmodelofsymmetricGaussianelimination[12].A\nsequenceofeliminationgraphsrepresentasequenceofGaussianeliminationsteps.The\ninitial Elimination graph is the original graph for matrix A. The elimination graph of\nk'sstepisobtainedbyaddingedgesbetweenadjacentverticesofthecurrenteliminated\nvertextoforma clique,removingtheeliminatedvertexanditsedges.\nIn graph terms, the basic ordering process used by most greedy algorithms is as\nfollows:\n1.Start:Constructundirectedgraph G\n/0correspondingtomatrix A\n2.Iterate:For k /=/1 /; /2 /;/:/:/: /;until G\nk/= /;do:\n–Chooseavertex v\nkfrom G\nkaccordingto somecriterion\n–Eliminate v\nkfrom G\nktoform G\nk /+/1\nThe resulting ordering is the sequence of vertices f v\n/0/;v\n/1/;/:/:/: gselected by the algo-\nrithm.\nOne of the most important examples of such an algorithm is the Minimum Degree\nalgorithm. At each step the minimum degree algorithm chooses the vertex with mini-\nmumdegreein thecorrespondinggraphas v\nk. Anumberofenhancementsto thebasic\nminimum degree algorithm have been developed, such as the use of a quotient graphrepresentation, mass elimination, incomplete degree update, multiple elimination, and\nexternal degree. See [13] for a historical survey of the minimum degree algorithm.\nMany of these enhancements, although initially proposed for the minimum degree al-gorithm, can be applied to other greedy approaches as well. Other greedy approaches\ndiffer from minimum degree by the choice of minimization criteria for choosing new\nvertices.Forexample,to accelerateoneoftheprimarybottlenecksoftheorderingpro-\ncess, theApproximate Minimum Degree (AMD) algorithm uses an estimate of the de-\ngree(orexternaldegree)ofa vertex[14].The MinimumDeficiency classof algorithms\ninsteadchoosethevertexthatwouldcreatethe minimumnumberof fill-inelements.A\nnicecomparisonofmanyofthesedifferentapproachescanbefoundin[11].7\n5 Implementation\nOur GGCL-based implementation of MMD closely follows the algorithmic descrip-\ntions of MMD given, e.g., in [15,13]. The implementation presently includes the en-\nhancementsfor mass elimination,incomplete degreeupdate, multiple elimination,and\nexternal degree. In addition, we use a quotient-graph representation. Some particulardetailsofourimplementationaregivenbelow.\nPrototype Theprototypeforouralgorithmis\ntemplate<class Graph, class RandomAccessContainer,\nclass Decorator>\nvoid mmd(Graph& G, RandomAccessContainer& Permutation,\nRandomAccessContainer& InversePermutation,\nDecorator SuperNodeSize, int delta = 0)\nTheparametersareusedinthefollowingway.\nG(input/output) is the graph representing the matrixAto be ordered on input. 
On\noutput, Gcontainstheresultsoftheorderedeliminationprocess.Maybeusedsub-\nsequentlybysymbolicfactorization.\nPermutation ,InversePermutation (output) respectively contain the permu-\ntationandinversepermutationproducedbythealgorithm.\nSuperNodeSize (output) contains the size of supernodes or supernode representa-\ntive node produced by the the algorithm. May be used subsequently by symbolicfactorization.\ndelta(input)controlsmultipleelimination.\nAbstract Graph Representation Our minimum degree algorithm is expressed only in\ntermsoftheGGCL abstractgraphinterface.Thus,anyunderlyingconcreterepresenta-\ntionthatmodelstheGGCL Graphconceptcanbeused.Notallconcreterepresentations\nwill provide the same levels of performance,however. A particular representationthat\noffershighperformanceinourapplicationis describedbelow.\nConcrete Graph Representation We use an adjacency list representation within the\nGGCL framework. In particular the graph is based on a templated “vector of vectors.”The vectorcontainer used is an adaptorclass built on top the STL vector class. Par-\nticularcharacteristicsofthisadaptorclassincludethefollowing:\n–Erasing elements does not shrink the associated memory. Adding new elements\naftererasingwillnotneedtoallocateadditionalmemory.\n–Additional memory is allocated efficiently on demand when new elements are\nadded(doublingthe capacityeverytime itisincreased).Thispropertycomesfrom\nSTL vector.\nWe note that this representation is similar to that used in Liu's implementation, with\nsomeimportantdifferencesduetodynamicmemoryallocation.Withthedynamicmem-ory allocation we do not need to over-write portions of the graph that have been elim-\ninated, allowing for a more efficient graph traversal. More importantly, information8\nabout the elimination graph is preserved allowing for trivial symbolic factorization.\nSince symbolic factorization can be an expensive part of the entire solution process,improvingits performancecanresultin significantcomputationalsavings.\nThe overhead of dynamic memory allocation could conceivably compromise per-\nformance in some cases. However, in practice, memory allocation overhead does not\ncontributesignificantlytorun-timeforourMMDimplementation.Finally,withourap-\nproach,somewhatmoretotalmemorymayberequiredforgraphrepresentation.Inthecontextoftheentiresparsematrixsolutionprocessthisisnotanimportantissuebecause\nthememoryusedforthegraphduringorderingcanbereturnedtothesystemforusein\nsubsequentstages(whichwouldusemorememorythaneventhedynamically-allocatedgraphatanyrate).\n6 Experimental Results\n6.1 Test Matrices\nWe tested the performance of our implementation using selected matrices from the\nHarwell-Boeingcollection[16],theUniversityofFlorida'ssparsematrixcollection[17],aswell aslocally-generatedmatricesrepresentingdiscretizedLaplacians.\nForour tests, we comparethe executiontime of ourimplementationagainstthat of\ntheequivalentSPARSPAKalgorithm(GENMMD).ThetestswererunonaSunSPARC\nStation U-30 having a 300MHz UltraSPARC-II processor, 256MB RAM, and Solaris\n2.6. The GENMMD code was compiled with Solaris F77 4.2 with optimizing flags-fast -xdepend -xtarget=ultra2 -xarch=v8plus -xO4 -stackvar\n-xsafe=mem . 
The C++ code was compiled with Kuck and Associates KCC version\n3.3e using aggressive optimization for the C++ front-end.The back-end compiler wasSolarisccversion4.2,usingoptimizationsbasicallyequivalenttothosegivenabovefor\ntheFortrancompiler.\nTable1givestheperformanceresults.Foreachcase,ourimplementationandGEN-\nMMD produced identical orderings. Note that the performanceof our implementation\nisessentiallyequaltothatoftheFortranimplementationandevensurpassestheFortranimplementationina fewcases.\n7 Future Work\nThe work reported here only scratches the surface of what is possible using GGCL\nfor sparse matrix orderings (or more generally, using generic programming for sparse\nmatrixcomputations).Thehighlymodularnatureofgenericprogramsmakestheimple-mentations of entire classes of algorithms possible. For instance, a generalized greedy\nordering algorithm (currently being developed) will enable the immediate implemen-\ntation of most (if not all) of the greedy algorithms related to MMD (e.g., minimum\ndeficiency).Wearealsoworkingtodevelopsuper-nodebasedsparsematricesaspartof\ntheMatrixTemplateLibraryandinfacttodevelopallofthenecessaryinfrastructurefora complete generic high-performancesparse matrix package. Future work will extend\ntheseapproachesfromthesymmetricpositivedefinitecase tothe generalcase.9\nMatrix nnnz GENMMD GGCL\nBCSPWR09 17232394 0.00728841 0.007807\nBCSPWR10 53008271 0.0306503 0.033222\nBCSSTK15 394856934 0.13866 0.142741\nBCSSTK18 1194868571 0.251257 0.258589\nBCSSTK21 360011500 0.0339959 0.039638\nBCSSTK23 313421022 0.150273 0.146198\nBCSSTK24 356278174 0.0305037 0.031361\nBCSSTK26 192214207 0.0262676 0.026178\nBCSSTK27 122427451 0.00987525 0.010078\nBCSSTK28 4410107307 0.0435296 0.044423\nBCSSTK29 13992302748 0.344164 0.352947\nBCSSTK31 35588572914 0.842505 0.884734\nBCSSTK35 30237709963 0.532725 0.580499\nBCSSTK36 23052560044 0.302156 0.333226\nBCSSTK37 25503557737 0.347472 0.369738\nCRYSTK02 13965477309 0.239564 0.250633\nCRYSTK03 24696863241 0.455818 0.480006\nCRYSTM03 24696279537 0.293619 0.366581\nCT20STIF 523291323067 1.59866 1.59809\nLA2D32 10241984 0.00489657 0.006476\nLA2D64 40968064 0.022337 0.028669\nLA2D128 1638432512 0.0916937 0.119037\nLA3D16 409611520 0.0765908 0.077862\nLA3D32 3276895232 0.87223 0.882814\nPWT 36519144794 0.312136 0.383882\nSHUTTLE EDDY1042946585 0.0546211 0.066164\nNASASRB 548701311227 1.34424 1.30256\nTable 1.Test matrices and ordering time in seconds, for GENMMD (Fortran) and GGCL (C++)\nimplementations ofminimumdegree ordering.Alsoshown arethematrixorder(n)andthenum-\nber of off-diagonal non-zero elements (nnz).10\nReferences\n1. MYannanakis. Computing theminimum fill-inisNP-complete. SIAM Journal ofAlgebraic\nand Discrete Methods , 1981.\n2. Kaixiang Zhong. A sparse matrix package using the standard template library. Master's\nthesis, University of NotreDame, 1996.\n3. Gary Kumfert and Alex Pothen. An object-oriented collection of minimum degree algo-\nrithms. In Computing inObject-OrientedParallel Environments , pages 95–106, 1998.\n4. Matthew H. Austern. Generic Programming and the STL . Addison Wesley Longman, Inc,\nOctober 1998.\n5. Jeremy G. Siek and Andrew Lumsdaine. The matrix template library: A generic program-\nming approach to high performance numerical linear algebra. In Denis Carmel, Rodney R.\nOldhhoeft,andMarydellTholburn,editors, ComputinginObject-OrientedParallelEnviron-\nments, pages 59–70, 1998.\n6. Lie-Quan Lee, Jeremy G. Siek, and Andrew Lumsdaine. The generic graph component\nlibrary. In OOPSLA'99 ,1999. 
Accepted.\n7. Meng Lee and Alexander Stepanov. The standard template library. Technical report, HP\nLaboratories, February 1995.\n8. ThomasH.Cormen,CharlesE.Leiserson,andRonaldL.Rivest. IntroductiontoAlgorithms .\nThe MITPress, 1990.\n9. ErichGamma,RichardHelm,RalphJohnson,andJohnVlissides. DesignPatterns:Elements\nofReusableObject-OrientedSoftware . AddiaonWesleyPublishingCompany,October1994.\n10. Yannis Samaragdakis and Don Batory. Implementing layered designs with mixin layers. In\nThe Europe Conference on Object-Oriented Programming , 1998.\n11. Esmond G.Ng amd Padma Raghavan. Performance of greedy ordering heuristics for sparse\nCholesky factorization. SIAM Journal on Matrix Analysis and Applications , Toappear.\n12. S.Parter. Theuse of planar graph in Gaussian elimination. SIAMReview , 3:364–369, 1961.\n13. Alan George and Joseph W. H. Liu. The evolution of the minimum degree ordering algo-\nrithm.SIAMReview , 31(1):1–19, March 1989.\n14. Patrick Amestoy, Timothy A. Davis, and Iain S. Duff. An approximation minimum degree\nordering algorithm. SIAMJ.MatrixAnalysisand Applications , 17(4):886–905, 1996.\n15. Joseph W. H. Liu. Modification of the minimum-degree algorithm by multiple elimination.\nACMTransaction on Mathematical Software , 11(2):141–153, 1985.\n16. Roger G. Grimes, John G. Lewis, and Iain S. Duff. User's guide for the harwell-boeing\nsparsematrixcollection. User'sManual Release1,BoeingComputer Services,Seattle,WA,\nOctober 1992.\n17. UniversityofFloridasparsematrixcollection. http://www-pub.cise.ufl.edu/ davis/sparse/ ." } ]
{ "category": "App Definition and Development", "file_name": "iscope99.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "The Quaternionic Exponential\n(and beyond)\nHubert HOLIN\nHubert.Holin@Bigfoot.com\nhttp://www.bigfoot.com/~Hubert.Holin\n08/12/1999\nMotivation ............................................................................................................................... .......2\nChapter 1 Quaternions redux .....................................................................................................2\n1- What to find here .....................................................................................................................2\n2- The nature of the Beast ...........................................................................................................2\n3- Quaternions’ kin............................................................................................................ ..........4\n4- Quaternions and rotations................................................................................................... ...5\n5- Miscellany ............................................................................................................................... ..8\nChapter 2 Building the Quaternions .......................................................................................1 1\n1- What to find here ...................................................................................................................1 1\n2- Cayley algebra, alternative algebra...................................................................................1 13- The Cayley doubling procedure..........................................................................................1 24- \nR, C, H, O, X.......................................................................................................................1 3\n5- The full Cayley ladder all at once ........................................................................................1 4\nChapter 3 The Exponential .......................................................................................................1 8\n1- What to find here ...................................................................................................................1 8\n2- Definition ............................................................................................................................... .1 8\n3- Links with differentiation.................................................................................................. .1 8\n4- The closed formula for the exponential in CR().............................................................1 9\n5- Some properties of the exponential and further consequences..................................2 06- Conclusion ............................................................................................................................... 20\nBibliography ............................................................................................................................... 21\nSoftware index .............................................................................................................................2 1\n1Interesting URLs .............................................................................................................. .........2 1Motivation\nI felt the need to take a closer look at quaternions when, some time back, I was\nlooking for new applications to Harthong-Reeb circles (on which I was working at thetime), and came across [\nD. Pletincks (1989) ]. 
That paper, on one hand, did indicate one\npotential application for that method, but, on the other hand, alluded to some oddconstructions involving quaternions, the validity of which was propitiously left in theshadows. The present text is therefore a compilation of many well-known but apparentlyscattered results about quaternions (and related entities), as well as some newdevelopments, notably the explicit formula for the quaternionic exponential (and friends).Incidentally, these results enables one to solve the problem found in [\nD. Pletincks\n(1989)], but without the unsalvageable constructions.\nChapter 1 Quaternions redux\n1- What to find here\nThis chapter only contains a quick-and-dirty (but sufficient for most uses)\npresentation of the quaternions, along with their most classical properties, inspiredvery largely by [\nD. Leborgne (1982)], [J. Lelong-Ferrand, J.M. Arnaudiès (1978)] and [M. Berger\n(1990)]. This approach, however, obscures the deep relationship which links the\nquaternions, the complex and real numbers and more exotic things known as octonions;this relationship will be the thrust of the next chapter.\nIt should be said that other important uses of quaternions exist ([\nK. Gürlebeck,\nW. Spössig (1989)],...), but that they will not be touched upon here. As well, quaternionic\nanalysis ([ A. Sudbery (1979)]) and geometry ([ S. Salamon (1982)]), though perhaps not as\nvibrant as their complex counterparts, do keep evolving; though these usually involvefairly sophisticated mathematical machinery, very nice results can also be had withvery elementary ones ([\nP. de Casteljau (1987)],...). All are beyond the scope of this article,\nhowever.\n2- The nature of the Beast\nLet H=R4 with the usual four-dimensional vector space structure over R. We\ndefine e=()1000,,,, i=()0100,,,, j=()0010,,, and k=()0001,,,.\nThe first important thing we need is a multiplication, denoted *, which we\ndefine to be a (non-commutative) R-bilinear operation on H such that ii jj kk e*=*=*= - ,\nij ji k*= -*()=, jk kj i*= -*()= and ki ik j*= -*()=.\nThe second important thing we need is the conjugation on H (and we will\nusually denote by q the conjugate of q) which we define by abgd a b g d,,, , , ,() ---()a .\nImportant properties are that qq q q*¢=¢*, that ee=, that qq qq*=*˛× Re and that\nqq+˛×Re. Actually qq*=0 if and only if q=0, as is easily seen.\nA straightforward verification then shows that H,,,+*×() is an effectively non-\ncommutative, but associative, R-algebra with unit e, and that RHfi () [], ,,,xxa000 and\n CHfi () ()() [],R e , I m , ,zz za 00 are algebra homomorphisms, bijective from their sources\n2onto their images. The image of the conjugate of a complex number is also seen to bethe conjugate (in H) of the image of that complex, by the above function. We will\ntherefore assimilate H to a superset of (both) R and C, and identify e with 1 and i\nwith its complex counterpart. We see at once that the operations we have defined on\nH extend their counterparts on C and R. The multiplication can then be memorized\nthru the well-known formula:\nii jj kk i k*=*=*=**= - j1\nIt is important to notice that given any quaternion q and any real number x, we\nalways have qx xqx q*=*=× .\nWe will usually write a quaternion under the form q=+ + +ab g dijk with a, b,\ng and d reals, omitting the “ ×” when multiplying a quaternion by a real number (as per\nthe vector space structure). 
We will also omit the “ *” when multiplying a quaternion\nby a real number, from the left as well as from the right. When no confusion mayarise, we will do away entirely with the “\n*”.\nWith the above notations, the conjugate of q=+ + +ab g dijk will then simply be\nq=- - -ab g dijk .\nLooking at H as a 4-dimensional R-vector space, it is easy to see the usual\neuclidian scalar product is equal to the following:\np q pqpq p pq q\npq qp\npq qp()=+() +()--\n=+()\n=+()1\n2\n1\n2\nAll the same, the usual euclidian norm on R4, coincides with qq q qa=*[] ,\nand of course qq q()== + + +22222abgd . Note that, if q„0 then qq q q q q q- --=*() =*()1 11.\nFor the quaternions, we will also use a notation compatible with real and complex\nnumbers and define q as q (of course, if q is actually complex, q has exactly the\nvalue of the modulus of q).\nIt is important to remember that H,,, ,+*×() is a Banach R-algebra. The norm is\nbetter than what we might expect, though, as we have pq p q*= instead of just\npq p q*£.\nWe will call the real and unreal parts of quaternion, respectively, Reqq q()=+()1\n2\nand Urqq q()=-()1\n2. We will say that a quaternion is pure if its real part is zero. For a\ncomplex number, the quaternionic real part is what is already known as the complex\nreal part, and the unreal part is just the imaginary part multiplied by i.\n33- Quaternions’ kin\nAs we have just seen, quaternions are related to both real numbers and complex\nnumbers. As we shall see in some details in the next chapter, quaternions areactually part of an infinite family of sets\n1 which we will call the Cayley ladder, some\nof which we will introduce here as we will have some need of them for our purposes.\nFirst relative in that family, beyond the quaternions, are the octonions. We\ndenote by O the set R8, with its usual vector space structure on R, we identify\n1 10000000=(),,,,,,, , i=()01000000,,,,,,, , j=()00100000,,,,,,, and k=()00010000,,,,,,, and we define\n¢=()e00001000,,,,,,, , ¢=()i00000100,,,,,,, , ¢=()j00000010,,,,,,, and ¢=()k00000001,,,,,,, . We now\nconsider O to be a super-set of H. We can now define a multiplication on O by the\nfollowing table (the value at line n and column m is the product of the element in the\nleft column by the element in the top row; for instance ii e*¢=-¢):\n1\n11\n1\n1\n1\n1\n1ijk eijk\nijk eijk\nii k j iek j\njjk ijkei\nkkj i k ji e\nee ijk ijk\niiek j i k j\njj¢¢¢¢\n¢¢¢¢\n-- ¢-¢-¢¢\n-- ¢¢ -¢-¢\n-- ¢-¢¢-¢\n¢¢ -¢-¢-¢-\n¢¢¢ -¢¢ ---\n¢¢¢ kke i j k i\nkk ji e k ji¢-¢-- -\n¢¢ -¢¢ ¢ -- -1\n1\nOther presentations, perhaps more useful, exist ([ G. Dixon]). This multiplication still\nhas a unit ( 1), but is no longer associative (for instance ¢*¢*()=+ „- = ¢*¢()* iej k ki ej ).\nReal numbers still commute with every octonion. We define a conjugation by\nab g d e z h q ab g d e z h q++++ ¢+¢+¢+¢=- - - - ¢-¢-¢-¢ ijkei jk ijkei jk , a scalar product and\na norm which, as with the quaternions turn out to be exactly the euclidian scalarproduct and euclidian norm on \nR8. Again, we have just extended the quaternionic\noperations. As with complex numbers and quaternions, we have oo o o*¢=¢ for any two\noctonions o and ¢o, and an octonion o is invertible if and only if it is non-zero, and\nthen ooo-=1\n21.\nBeyond even the octonions, we find R16, which appears not to have any agreed-upon\nname. We shall here call them hexadecimalions, and denote the set by X (after the\nC/C++ notation...). 
We have the usual vector space structure on R, we identify 1, ,K¢k\nwith 1000000000000000 0000000100000000,,,,,,,,,,,,,,, , , ,,,,,,,,,,,,,,,() () K respectively, and define\n¢¢ ¢¢ ¢¢ ¢¢ ¢¢¢ ¢¢¢ ¢¢¢ ¢¢¢eijkei j k,,,, ,, , as 0000000010000000 0000000000000001,,,,,,,,,,,,,,, , , ,,,,,,,,,,,,,,,() () K\nrespectively. We define a multiplication on X as explicited in the next chapter, for\n41Actually, several families, but we will focus on just one here; for others, see [G. Dixon (1994)].which 1 is still a unit and for which reals commute with every hexadecimalion. Wedefine as well a conjugation, a scalar product and a norm (for details, see next\nchapter), which once again coincide with the euclidian scalar product and euclidiannorm on \nR16. These all extend the octonionic case. However, the product has even\nfewer properties than in the octonionic case (the algebra is no longer even alternative2,\nas for instance ie ie j j k j ie ie j+¢¢¢()*+¢¢¢()*() =- + „- = + ¢¢¢()*+¢¢¢()() * 22 2 ), and the norm is\nnot even an algebra norm any longer, as for instance\nij ek ij ek+¢¢()*¢+¢¢¢() =>=+ ¢¢ ¢+¢¢¢2 2284 .\n4- Quaternions and rotations\nIt is pleasant to think that perhaps the relationship between quaternions and\nrotations has been stumbled upon while running a check-list of classical constructson the then-newly discovered quaternions. At any rate, the easiest way to explain thatlink is thru interior automorphisms.\nMore precisely, given a non-zero quaternion \nq=++ +abg dijk , we can consider\nthe interior automorphism:\n lq\npq p q:HHfi\n()( )-a1\nThese objects have several fundamental properties: ll lqq qq ¢¢=o and lqqq()=, lq\nleaves R invariant (since reals commute with all quaternions), and lq respects the\nnorm on H.\nIt is interesting to see lq as an R-linear function on H. As it preserves the\nnorm, it preserves the scalar product, and hence lq˛()O4 ,H. Then, as it leaves R\nglobally invariant, it must leave its orthogonal ( i.e. the unreals) globally invariant.\nConsider now the matrix of lq ; expressed in the canonical basis C=()1, , ,ijk that\nmatrix is:\n MC Clabgd\na b g d ad bg ag bd\nad bg a b g d ab gd\nag bd ab gd a b g dqq,,() =+++\n+-- - + + +\n++ - + - -+\n-+ ++ - - +Ø\nºŒ\nŒŒŒ1000\n02 2 2 202 2 2 202 2 2 2\n22222\n2222\n2222\n2222øø\nßœ\nœœœ\nIt is quite obvious3 that QM C C:, , ; , ,R404 4-{}fi() () [] MRqq al is continuous, and a\ngroup homomorphism. As we have seen, QRR404-{}() Ì()O,, and as Q14()=I, the identity\n52We will define this in the next chapter.\n3We will note MUnm,,() the set of matrices, n rows by m columns, with elements in U.\n4More generally, we will denote by In the identity matrix on Rn.matrix4 on HR=4, QR40-{}() must actually be included in the connected component ofI4 in O4 ,R(), and that is SO 4,R(), i.e., lq is a rotation on R4, and hence on R, where it is\nthe identity I1, and thus must also be a rotation on 03{}·R, i.e. the unreals. We can find\na far simpler (if somewhat tedious) proof of that by simply computing the determinant\nof MC Clq,,(), which of course turns out to be 1 (also see next section)...\nWe can therefore extract a rotation matrix on R3 from MC Clq,,():\nrabgda b g d ad bg ag bd\nad bg a b g d ab gd\nag bd ab gd a b g dq=++++-- - + + +\n++ - + - -+\n-+ ++ - - +Ø\nºŒ\nŒŒø\nßœ\nœœ122 22\n22 22\n22 22\n22222222\n2222\n2222\nLet us introduce R:, , ;R403 3-{}fi() [] MRqqar. It is trivial to see that Q and R\nare both C¥ (because they are rational). 
It is important to note that they are both\nR-homogeneous of degree 0, which means that given any non-zero real number x, lq\nand lxq are identical, and therefore yield identical rotations ( i.e. rrqx q=).\nA fundamental result is that R is surjective. There are at least two well-known\nways to prove this.\nThe easiest way also has the advantage of being completely constructive: we\njust compute the elements of the rotation rq.\nThis is possible because we always know one invariant vector. Indeed (as an\nimmediate consequence of lqqq()=):\nrb\ng\ndb\ng\ndqØ\nºŒ\nŒŒø\nßœ\nœœ=Ø\nºŒ\nŒŒø\nßœ\nœœ\nFurthermore, the angle, qp˛[]0;, is given by considering the trace of rq:\n1232222\n2222+()=---\n+++cosqabgd\nabgd\nWe now exploit the homogeneity of R, which implies that RRHS-{}() =()03, and\ntherefore that we can restrict our search to unit quaternions. For unit quaternions,\nthe trace relation simplifies to 122+()=cosqa.\nTherefore, the identity rotation I3 is associated with q=–1 (which we already\nknew), and these unit quaternions only.\nLet rrr\nijk,,() be the canonical basis of R3. Consider now a rotation r„I3 (hence\nqp˛]]0;), it possesses a unique rotation axis, and a unique unit vector rrr r\nux iy jz k=++\ndirecting that axis such that ra uau u a arr r r r r r()=-()() ×()+()Ù()+() 1 cos sin cosqq q for all ra˛R3. It\n6follows that r is associated with the two unit quaternionsqx\ny\nz=–æ\nŁö\nł\næ\nŁö\nł\næ\nŁö\nł\næ\nŁö\nłØ\nºŒ\nŒŒŒŒŒŒŒŒŒŒŒø\nßœ\nœœœœœœœœœœœcos\nsinsinsin\nq\nq\nq\nq2\n222\nand these two unit quaternions only.\nThe second method is non-constructive, but has the advantage of highlighting\nthe regularity of the connection between rotations and quaternions, which is harder toread using the first method.\nWe once again exploit the homogeneity of \n R and use unit quaternions. Given\nthat we know that in fact RSR3()Ì()SO 3,, we can consider RSR\n3SO 3,() which is C¥ (because\nit is rational). It is slightly tedious, but possible, to prove that in fact RSR\n3SO 3,() is a\nlocal diffeomorphism at 1. It is also a group homomorphism (stemming from the fact\nthat ll lqq qq ¢¢=o). Since in a connected topological group, every neighborhood of the\nneutral element is a generator of the whole group ([ G. Pichon (1973), p 31]), RSR\n3SO 3,() is\nsurjective upon the connected component of I31=()R in SO 3,R(), i.e. upon SO 3,R(), and\nof course is everywhere a local diffeomorphism (though it is of course not a global\ndiffeomorphism).\nCombining these two approaches, one finds a global C¥-diffeomorphism between\nSO 3,R() and RP3 (which is nothing more than S3 where every couple of opposite points\nhave been identified).\nAnother thing worth noting is that RSR\n3SO 3,() is more than just a locally diffeomorphic\nbijection. If we call sS3 the positive Borel measure on S3 induced by HR=4 and sSO 3,R()\nthat induced on SO 3,R() by MR,,33() (by assimilation of the rotations with their matrix\nin the canonical basis of R3), seen as R9, then we can compute5 that R*ssSO 3,R S ()=16 23.\nFurthermore, RSR\n3SO 3,() actually has no critical point.\n75 A fact that is supposed to be found, but is not, in [C.W. Misner, K.S. Thone, J.A. Wheeler (1973)].5- Miscellany\nAs we have seen, the main power of the quaternions is their ability to pleasantly\nparameter SO 3,R(). 
It should be said that what is, perhaps their greatest strengths in\nthis regard, with respect to other parameterization of SO 3,R() such as Euler angles, is\nthat RSR\n3SO 3,() has no critical points (no “Gimbal Lock”), and that the composition of\nrotations is extremely simple to compute in terms of the parameter. Also and they\ncan be shown to allow interpolations of orientations under constraints (such hashaving one axis stay “horizontal”).\nQuaternions also allow a nice parameterization of \nSO 4,R() ([M. Berger (1990)] the\napplication SS R334 ·fi () ( ) [] SO , , , sr q s q raa is a continuous group homomorphism,\nsurjective, with kernel 11 1 1,, ,()--(){} ).\nQuaternions have other uses, though. For instance, they can be also be used to\nparameter SU 2,C(). More precisely, an isomorphism exists between 03{}·S and SU 2,C()\n(consider, the application\n Y:, , HC fi()\n=+++-Ø\nºŒø\nßœM2 2\nqi j kuv\nvuabgd a\nwith ui=+ad and vi=+gb is a ring isomorphism from H on a sub-ring of MC,,22(),\nwhich induces an isomorphism). There are also applications of quaternions to the\nRiemann sphere ([ J. Lelong-Ferrand, J.M. Arnaudiès (1978)]).\nIt should be mentioned that research exists to find more efficient algorithm\nfor the product of quaternions ([ T. Howell, J.C. Lafon (1975)]), but has so far not reached a\nconclusion, one way or the other.\nGiven the power of the quaternions, the question naturally arises as to whether\nsomething similar can be done for rotations on spaces of higher dimensions (themultiplication being commutative on the reals and complex numbers, interiorsautomorphisms are just the identity). The answer to that question is partly positive,but it should be now stated that the right tool, in general, for that problem turns outto be Clifford algebras rather than Cayley algebras.\nWhen we turn to the octonions, the multiplication is not only not associative, it\nis no longer even associative. Fortunately, the sub-algebra engendered by any twoelements (and the unity) is still associative, and therefore interior automorphism donot depend on the order in which the products are carried out. The interesting fact isthat, as with the quaternions, the interior automorphisms leave \nR invariant, and\ninduce a rotation, on R7 this time. The catch is that SO 7,R() is a 21-dimentional\nmanifold, whereas the interior automorphisms we just described only have 7 degreesof freedom. In short, we do not get all the rotations on \nR7 by this method. It is still\nuseful, though, for theoretical purposes.\n8Beyond the even the octonions, the hexadecimalions have two different flavors\nof interior automorphism, pq p qa()()()-1 and pq p qa()()()-1, neither of which is, in\ngeneral, a rotation (on either R16 or R15). The average of the two isn’t a rotation\neither, by the way...\nInterior automorphisms having apparently reached the limits of their usefulness,\nwe turn now to something else, with the same objects. It turns out that we can findrotations with even simpler constructions!\nLet \nx=˛aR, then Mxyx y yy x=[]() =[]() =[] MM aa,, ,,11 11 a, hence tMM Ixxx=1, and\ndet Mxx()=. Therefore if x=1, we find that MOx˛()1,R, and we of course get all two\nelements of O1 ,R() that way... but MS Ox˛()1,R only if x=1! Obviously, given x˛R and\n¢˛xR, MM M M Mxx xxx x ¢¢ ¢== .\nLet ci=+ ˛abC, then \n Mczc zii zz cii=[]() ()() =[]() ()() =-\n+Ø\nºŒø\nßœMM aa,,,, ,,,,11 11ab\nba, hence\ntMM Iccc=2, and det Mc()=+ab22. Therefore if c=1, MS Oc˛()2,R, and we get all rotations\non R2 that way, as is well-known. 
And given c˛C and ¢˛cC we still have\nMM M M Mcc ccc c ¢¢ ¢== .\nLet now q=++ + ˛abg dijk H, then:\n MqGpq pi j ki j k=[]() ()() =---\n+- +\n++ -\n+-+Ø\nºŒ\nŒŒŒø\nßœ\nœœœM a,, ,,,, ,,11\nab gd\nba d g\ngd ab\ndgb a\nand\n MqDpp qi j ki j k=[]() ()() =---\n++ -\n+- +\n++-Ø\nºŒ\nŒŒŒø\nßœ\nœœœM a,, ,,,, ,,11\nab gd\nba d g\ngd ab\ndgb a\n, hence ttMM MM IqG\nqG\nqD\nqDq ==4, and det detMMqG\nqD()=()=+ + +()abgd22222. Therefore if q=1,\nMS OqG˛()4,R and MS OqD˛()4,R, but we only get a tiny fraction of SO 4,R() that way.\nThis, of course can be used as an alternate proof that the interior automorphisms\non the quaternions actually induce rotations on R4.\nIt is interesting to note that given q˛H and ¢˛qH, we still have MM MqqG\nqG\nqG\n¢¢=\n9and MM MqqD\nqD\nqD\n¢¢=, though we now sometimes have MM MMqG\nqG\nqG\nqG\n¢¢„ and MM MM¢¢„qD\nqD\nqD\nqD.Turning to the octonions, let oi j k e i j k=+ + + + ¢+¢+¢+¢˛ ab g d e z h q O, then:\n MoGoo oi j k e i j ki j k e i j k=¢¢[] ¢¢¢¢() ¢¢¢¢() () =-------\n+ -+-++-\n++ ---++\n+-+ -+-M a,, ,,,,,, ,, ,,,,,,11ab gde zh q\nba d g z e q h\ngd abhqez\ndgb aqhz ze\nezhq a bgd\nzeqhb adg\nhqezgda b\nqhzedgb a+\n++++ ---\n+-+-+ +-\n+--++- +\n++--++-Ø\nºŒ\nŒŒŒŒŒŒŒŒŒ\nŒø\nßœ\nœœœœœœœœœ\nœ\nand\n MoDoo oi j k e i j ki j k e i j k=¢¢[] ¢¢¢¢() ¢¢¢¢() () =-------\n+ +-+--+\n+- +++--\n++- +-+M a,, ,,,,,, ,, ,,,,,,11ab gde zh q\nba d g z e q h\ngd abhqez\ndgb aqhz ze\nezhq a bgd\nzeqhb adg\nhqezgda b\nqhzedgb a-\n+--- +++\n++-+- -+\n+++--+ -\n+-++--+Ø\nºŒ\nŒŒŒŒŒŒŒŒŒ\nŒø\nßœ\nœœœœœœœœœ\nœ\n, hence ttMM MM IoG\noG\noD\noDo ==8, and det detMMoG\noD()=()=+ + + + + + +()abgdezhq222222224.\nTherefore, if o=1, MS OoG˛()8,R and MS OoD˛()8,R. Again, we only get a very tiny fraction\nof SO 8,R() that way.\nAlso, and contrary to the case for the real numbers, the complex numbers and\nthe quaternions, in general MM MooG\noG\noG\n¢¢„ and MM MooD\noD\noD\n¢¢„, due to the non-associativity\nof the product on O. For instance, ¢¢=-ie i, but MM M¢¢ -„iG\neG\niG .\nIf we try to do the same thing with hexadecimalions, we find that neither\n Mlh li j k e i j k e i j k e i j k i j k e i j k e i j k e i ja[] ¢ ¢ ¢ ¢ ¢¢ ¢¢ ¢¢ ¢¢ ¢¢¢ ¢¢¢ ¢¢¢ ¢¢¢() ¢ ¢ ¢ ¢ ¢¢ ¢¢ ¢¢ ¢¢ ¢¢¢ ¢¢¢ ¢¢¢ ¢ ,, ,,,,,,, ,, , , , , , ,, ,,,,,,, ,, , , , , ,11 ¢¢¢ () ( )k nor its\nright-hand version are rotation in general, even if l=1. That trail ends here as well!\n10Chapter 2 Building the Quaternions\n1- What to find here\nThis chapter, except for Section 5, only consists of well-known classical\nresults ([ N. Bourbaki (A) ], [S. Lang (1991) ],...). Some have been slightly restated (usually\nwith simplifications) from their original sources, but hardly anything new is presentedhere. In case the sources disagree on definitions, [\nN. Bourbaki (A) ] will take precedence.\n2- Cayley algebra, alternative algebra\nSome of the structures we will be considering will not even be associative. To\nsave what may be, a weaker structure, which is interesting in its own right ispresented first. An algebra \nE is said to be alternative if the following trilinear\napplication, known as the associator of E, is alternating (which means its value is\nzero if two of its arguments are identical):\n a:\n,,EEE E·· fi\n() **()-*()* xyz x y z x y z a\nThis notion is interesting as, though an alternative algebra is not as wieldy as\nan associative algebra, it is such that every sub-algebra engendered by any twoelements \nis associative. 
It also implies that an alternative algebra is a division\nalgebra (which means that for any x˛E, x„0, the applications EEfi*;yx ya and\n EEfi*;yy xa are bijective, or that elements are “simplifiable”). In particular the\ninverse of a non-zero element (if it exists) is unique in such an algebra.\nThe meat of this chapter is the following structure.Let \nA be a commutative ring, and E an algebra over A, not necessarily commutative\nor associative, but having a unit element e (remember that since E is an A-algebra,\nthen \"˛() \"˛() ×= ×()*=* ×() ll l lAExx x x ee ).\nA conjugation over E is any (there may be none) bijective, A-linear, function\ns:EEfi such that:\n1)see()=.\n2)\"()˛() *()=()*() xy x y y x,E2ss s (beware the inversion of x and y!).\n3)\"˛() +()() ˛× xx x eEAs and \"˛() *()() ˛× xx x eEAs .\nThese properties imply6 \"˛() *()=()* xx x x xEss , and7 \"˛() ()= xx xEsso.\n116xx e xx x xx x x xx x x xx x+()() ˛×Þ* ()=* + ()() -*= + ()() *-*= ()* ss s ssA .\n7Given x˛E, there exists a˛A such that xx e+()=×sa ; the A-linearity of s then implies\n ss ss s a sxx x xe()+()=+()() =×() o , and finally, see()=. We will also write x for sx().If E is such an algebra, and if s is a conjugation over E, the structure\nE,,, ,+*×()s is said to be a cayley algebra over A. On such a structure, it is convenient to\nconsider the cayley trace and cayley norm (an unfortunate misnomer as it is actuallyquadratic...), defined respectively by \n TExx x()=+()s and NExx x()=*()s.\nNote that if E,+,*() has no zero divisors, for instance if it is a field, then\n NEx()=0 if and only if x=0.\nWe have the important relations:\n TTEEsxx()()=()\n NNEEsxx()()=()\n TTxy yx*()=*()\n TTT T T N N NEEE E E E E Exy yx x y x y x y x y*()() =* ()() =()*()-*()=+()-()-() ss\nIt is interesting to note that TTxy yx*()=*() regardless of whether or not E is\nassociative or commutative. For the cayley norm, no such broad result seem to hold8;\nhowever if E is alternative, then we also have NN NEE Exy x y*()=()().\nFinally, the following lemma will be useful for our purposes:\nLemma (Complexoïd): # Given x˛E, Vect ,Aex(), the A-module spanned by x and e, is\nstable for *; it is a sub-cayley algebra of E which is both associative and commutative.\nIf xeˇ×A, let yex=× +×ab , MyGuy ue xe x=[] () () ()M a*,,,, and MyDuu ye xe x=[] () () ()M a*,,,, ;\nthen (with T×()ex=TE and N×()ex=NE)\nMMMN\nTyyG\nyD===-\n+Ø\nºŒø\nßœab\nbb a\n. Given ze x˛()Vect ,, we have MM M M M M**yz y z z y zy=== . $\n# This is a simple consequence of the fact that xx x e*=×-×TN , with T×()ex=TE and\n N×()ex=NE! $\nThis lemma allows us, in particular, to define unambiguously the n-th power,\nwith n˛N, of any x˛E by the usual recursion rules, we will write the result, as usual\nxn. It also trivially induces the following scholie:\nScholie (Powers): # Given x˛E, and n˛N, xe xn˛()Vect ,A and NNEExxn n()=()(). $\n3- The Cayley doubling procedure\nIt should be noted that this is simply the plain vanilla version of the doubling\nprocess9; it will suffice here, however.\n128Indeed, we have seen that such an equality does not hold for hexadecimalions!\n9The general procedure involves abitrary coefficients which parameterize the operations.Let A be a commutative ring, and E,,, ,+*×()s a cayley algebra over A, not\nnecessarily commutative or associative, with unit element e. 
Let FEE=· and\neeF F=()˛,0; furthermore, let:\n +· fi\n()¢¢()() +¢+¢ ()FFF F:\n,, , ,xy x y x x y y a\n *· fi\n()¢¢()() *¢-¢**¢+¢* ()FFF F:\n,, , ,xy x y x x y yy x y x a\n ×· fi\n()() ××()FAF F:\n,, ,ll lxy x y a\n s\nsFFF:\n,,fi\n() ()-() xy x y a\nProposition (Structure): # FFF F,,,+*×() is an A-algebra, with unit eF, and sF is a conjugation\nover F; F is associative if and only if E is both associative and commutative; F is\nalternative if and only E is associative. Furthermore, TTFExy x,()()=() and\n NN NFE Exy x y,()()=()+(). $\nKeep in mind that since F is also an A-algebra then\n\"˛() \"()˛() ×()=×()*()=()*×() ll l lAFFF F F F F F xy xy xy xy,, , , ee . It is interesting to note that, if\nE is associative, we still have NN NFF F Fxy x y xy x y,, , ,()*¢¢()() =()() ¢¢()(), even if F is not\nassociative.\nGiven the proposition, we can (and will) identify E with EE·{}0. Alternatively,\nwe can identify F with a superset of E. It is also possible to identify A with a subset\nof E (and hence of F as well), in that case we have noted that all elements of A\ncommute with all elements of E, for the multiplication in E, as well as with all\nelements of F, for the multiplication in F, even though E or F might not be commutative.\nWith this identification, T and N have value in A.\n4- R, C, H, O, X...\nWe now consider AR= and ER=, with sxx()= and e=1, then NRxx()=2 is\nalways positive (and zero if and only x=0, as R is a field). When we build F as above,\nwe get exactly C, and sF is the usual conjugation on C. We define i=()01;, and as\nstated earlier, we identify R with R·{}0. As is well known, C is a commutative\nfield, in particular, real numbers commute with complex numbers. Due to ouridentifications, \n TC and NC have values in R, and actually, if zxy=+i then\n NNNC RRzxy x y z()=()+()=+=‡22 20, and NCz()=0 if and only if z=0. We lose some of\nthe original properties of R as we build C, for instance we lose the existence of an\norder compatible with the multiplication; we do get new and interesting properties at\n13the same time, of course.Let’s do the doubling again, this time with AR= and EC=, with the usual\nconjugation, and this time we get exactly H, the conjugation being the same as\ndefined earlier, given the definition of j=()01; and ki=()0;, and the identification of\nC with C·{}0. Once again, we note that, as predicted, for quaternion multiplication,\nreal numbers commute with quaternions, though some quaternions do not commute(for instance \nij ji*„*). As already stated H is a (non-commutative) field. Once again,\ndue to our new identifications, TH and NH have values in R, and actually, NH is\nalways positive and NHq()=0 if and only if q=0. We keep loosing original properties,\nmost notably the commutativity, when we go from C to H, but the new properties we\ngain, notably the link with rotations in R3, which we saw earlier, still makes it\nworthwhile. We also see that THqq()=()2R e and NHqqq()==22, as defined earlier.\nThere being not such thing as too much of a good thing, let’s do the doubling\nonce again, this time with AR= and EH=, and the conjugation just built on H. What\nthe process yields this time is known as the set of (Cayley) octonions, whose symbolis \nO. We, as is now usual, identify H with H·{}0. Yet again, we note that, for\noctonion multiplication, real numbers commute with octonions, though some octonionsdo not commute (as some quaternions already do not commute). 
Yet again, due to ournew identifications, \n TO and NO have values in R, and actually, NO is always positive\nand NOo()=0 if and only if o=0. The situation keeps deteriorating, though, as this\ntime the algebra is not associative anymore (but it is still associative). Octonions do\nhave uses, apart from being an example of a non-associative algebra. They can be usedto find a basis of non-vanishing vector field on \nS7 (the euclidian unit sphere in R8), in\nthe same way quaternions can be used to find one on S3, and complexes are used to\nfind one on S1. They also see use in theoretical physics ([ G. Dixon (1994) ]). Octonions\nstill are a division algebra, and non-zero octonions O have NOOOO()[] ()-1s for inverse.\nDespite the non-associativity of the multiplication, we still have NN NOO Ooo o o*¢()=()¢(),\nsince the multiplication is associative on H.\nWe can keep doubling ad nauseam, but things really get unwieldy. At the stage\nafter octonions, the hexadecimalions, X, the algebra is not even alternating. This\nauthor does not know of any use the ulterior echelons may have been put to, if any.\n5- The full Cayley ladder all at once\nOne might wonder if the whole doubling procedure might be “carried out to\ninfinity”. As it turns out, it can, after a fashion. We will present here a specialversion\n10 of the global object, for simplicity.\nLet A be a commutative ring, whose unit element will be called e.\nLet us call A0=A and s0 the identity over A. It is quite obvious that A,,,,+··() s\nis a cayley algebra over A. Using the doubling procedure, we build A1=·AA and s1,\nand by induction we build An and sn for all n˛N.\n1410That is, the object built by our “plain vanilla” version of the Cayley doubling procedure.Consider AX[] the set of polynomials (in one indeterminate X) with coefficients\nin A. We already have an A-algebra structure, which we will denote by AX[]+×·(),, , and\nis the usual commutative algebra. We readily identify A0=A with constant polynomials,\nthru an homomorphism of A-modules J0. It is trivial to see that An identifies with\npolynomials of degree less or equal to 21n-, thru the trivial A-modules isomorphism\n Jn. Let us call IA Ann n xx :, ,fi ()+1 0 a the canonical identification. Then \"˛() =+nnn nNJI J1o,\nwhich means our identifications are all coherent.\nSo every element of every rung of the Cayley “ladder”, build by successively\ndoubling the preceding rung and begun by A, a finite number of times , can be identified\nuniquely with some polynomial with coefficients in A, and conversely every element\nof AX[] can be seen a some unique element of the Cayley ladder. As the multiplication\nwe will define differs, in general, from the polynomial multiplication, we will choosea new symbol for our construction.\nLet \n CA() be some set equipotent to AX[], the set of polynomials in one\nindeterminate X over A, thru a bijection IC:AA()[]aX. This bijection induces an\nA-module on CA(), from AX[]+×(),,, which we will denote by CA()+×(),,, and we identify\n An with I-1JAnn()().\nWe will now define a multiplication on CA(), which we will denote by “ *”. Let\n pC˛()A and qC˛()A; let P=()Ip and Q=()Iq, then there exists (at least) one n˛N such\nthat Pnn˛()JA and Qnn˛()JA. We chose the smallest such n. We now find the only pnn˛A\nsuch that Ppnn=()J and the only qnn˛A such that Qqnn=()J. 
Finally Ip q*()=*()JA nn npq\nn.\nWe note that for all ¢>nn, we do have Pnn˛()¢¢JA and Qnn˛()¢¢JA and there are unique\n pnn¢¢˛A such that Ppnn=()¢¢J and qnn¢¢˛A such that Qqnn=()¢¢J, but thanks to the coherence\nof the identifications we also have JJAA nn n nn n pq p q\nn n*() =*()¢¢ ¢¢.\nIt is easy to verify that CA()+×*(),, , is an A-algebra. However, in general\n Ip q Ip Iq*()„()·(). For instance if AR= then Xi1=()I, Xj2=()I and Xk3=()I, and thus\n II IiXXXX j k()=„=·= ()·()1523. So I is not, in general, an algebra isomorphism between\n CA()+×*(),, , and AX[]+×·(),, ,, as stated earlier.\nWe likewise define the conjugation s, and the cayley trace and “norm”, over\n CA() thru the identifications Jn, with values in A0. It is now easy to check that\n CA()+×*(),, ,,s is a cayley algebra over A (usually not commutative or associative),\nwhich, thru the identifications, contains all the rungs of the cayley doubling procedure\nstarting with A. Elements of A commute with all elements of CA(), for *.\nWe will shortly use the fact that if Ip()=+ + + = ()--aa a01 2121XX pnn\nnn L J, then\n Ip p*()=*() =-+ +()() ++ +---JA nn npp X X\nnnnnaa a a a a a02\n12\n212\n10 2102122 LL ; this is simply proved\nby recurrence.\nAs a first example of Cayley ladders, let us consider CZZ2()+·*(),,,, I d. It is a\ncommutative and associative cayley algebra, the conjugation being the identity on\n CZZ2(); however it has zero divisors, as for instance II--+()*+()=1111 0XX , but if\n15 pC˛()ZZ2 and Ip()has an odd number of 1 then Ip p*()=1.The second, perhaps more interesting example, is CR(), which we have actually\nused already. In that case we can see that if aC˛()R, then NCaR()() is always positive,\nand NCaR()()=0 if and only if a=0; furthermore,\n aa a a a a aCC „Þ ()() ()Ø\nºŒø\nßœ*=* ()() ()Ø\nºŒø\nßœ=()-\n()-0111NNRRss . It is also possible in this case to compute\nsquare roots! Indeed, let xn˛A, with JnxAA X AXnn()=+ ++--\n01 2121L ; we seek yn˛A with\n JnyX Xnn()=+ + +--aa a01 2121L such that yy x*=. This amounts to solving, in R2n\n the\nsystem:\n aa a\naa\naa02\n12\n212\n0\n10 1\n210212\n2-+() =\n=\n=ì\níï\nï\nîï\nï-\n--L\nMn\nnnA\nA\nA\nThis system is easily solved by considering first the case x=0, for which there is a\nunique solution y=0, second the subcase x˛+R* for which there are exactly two\nsolutions given by yx=–, third the subcase xˇR (if n‡1, of course) for which there\nare also exactly two solution given by \n JnyAXAXn n()=+ + +- -aaa01\n021\n021\n22L with\na02\n2=– ()+Rexx, and finally the case x˛-R*, for which the solutions are all the\n yn˛A such that Ry()=0 and yx=.\nThis means that the solutions to yx2=, where xˇ-R are the same in every rung\nof the real Cayley ladder (that is, there are exactly two, opposite solutions, belongingto the same rung), and the solution to \ny20= is always y=0, in whatever rung of the\nCayley ladder. However, solutions to yx2= for x˛-R* differ depending upon the precise\nrung: in R there is no solution, in C there are exactly two, opposite, solutions, in H\nand above there is an innumerable number of solutions (full spheres)!\nNote that in any case a y such that yx2= commutes with x, but that two such\nsolutions need not commute with each other!\nAt least two topologies are interesting to consider on CR(): the norm topology\ninduced by the square root of N, the Cayley “norm” on CR() (we will write cc=()N),\nwhich we will call T, and the strict inductive limit topology ([ V.-K. 
Khoan (1972)])\ndefined by the rungs Cnn n=J A() of CR() on which we consider the norms qnn=×-()202sup ,,\nwhich we will call T¥.\nThe problem with × is that it is not an algebra norm, as evidenced by the\nhexadecimalions. Furthermore, CR()(),T is not complete, its completion being l2R()\nwith its usual topology.\nOn the other hand, CR()()¥,T is complete, and the product is (trivially) separately\n16continuous ([ N. Bourbaki (EVT)]), but it is not known if it is continuous.For both topologies, any finite-dimensional vector space is closed and the\nrestriction to that vector space is just the usual (euclidian) topology.\nAt any rate, given x˛()CR, the Powers Scholie proves that Vect , ,R1x()() is a\ncommutative R-Banach algebra (of dimension 1 if and only if x˛R).\nAs a final thought, since CA() is an A-Cayley algebra, we can perform the\nCayley doubling procedure on it! And again, and so on and so forth... We can actuallyperform an infinity of doubling as above, and embed all these doublings in what,essentially, is \nAXY,[]. And then we can start all over again... As we can readily see,\nthere is no “ultimate” step... What seems to be going on is that we can build an objectfor any \nfinite ordinal ([ J.-M. Exbrayat, P. Mazet (1971)]), and we have built an object,\nwhich we have called CA(), for the first infinite ordinal w. We have then seen that the\ndoubling of CA() yields the object corresponding to w* (the successor of w). The next\ninfinite ordinal with no predecessor ( 2w) corresponds to AXY,[]. Further on\n(corresponding to w2), we find the set of polynomials in an indeterminate number of\nindeterminates ( i.e.11 ANN[][]). It is not clear, however, in which way we can extend the\nconstruction to any set of ordinals ( i.e. there is no clear transfinite “recurrence\nformula”).\n1711Recal that if X is a monoïd ( [N. Bourbaki (A)] ) and Y is a set, XY[] is the set of functions from Y to X\nwhich take values different from the neutral ement of X only for a finite numbers of elements of Y.Chapter 3 The Exponential\n1- What to find here\nThis chapter is mostly designed to prove the explicit formula for the exponential\nin CR(), and give several related results. As far as I known, these results are new.\nThere are many notions of the exponential, and many ways to see several of\nthem. These, of course, agree when various different definitions can be put forwardfor the same object to be exponentiated. We will be concerned here mainly with theanalyst’s point of view, and define the exponential of quaternions thru the use of theusual power series ([\nA.F. Beardon (1979)],...). It is known that the approach detailed in\n[F. Pham (1996)] could also be used, at least for quaternions, though I believe it would\nthen be necessary to derive the power series representation (or the intermediarydifferential representation we will also use) to achieve our present goal. It remainsto be seen if it can also be carried over to the whole of \n CR().\n2- Definition\nGiven x˛()CR, we will call exponential of x, and we will write Expx() the\nelement of CR() given by x\nnn\nn!=+¥\nå\n0. The unambiguity and existence of Expx() is given by the\nfact that Vect , ,R1x()() is a commutative R-Banach algebra, as we have said earlier.\nThis, of course agrees with the definition on R and C. We must bear in mind that\nExp Vect ,xx()˛()R1.\nWe see at once that \"˛()() ()=() xx xCRExp Exp . 
The exponential is continuous when\nrestricted to each rung of CR(), and has its values into the same rung (we will give a\nmore precise result later on).\n3- Links with differentiation\nDifferentiating a function of one or several quaternions (or higher in the Cayley\nladder) is quite problematic. Of course, since Vect ,R1x() is commutative, there is no\nambiguity in defining ff /yx y x()-()() -() if yx˛()Vect ,R1, and we can therefore differentiate\nExpVect ,R1x() with respect to some yx˛()Vect ,R1 and find that it is once again ExpVect ,R1x().\nIt is more fruitful, however, to differentiate a function of a real variable, with\nvalues in some topological R-vector space.\nLet us therefore consider, for some x˛()CR, the function et t xx: , ExpRRfi() ()[] C a .\nIt is clear that ex takes its values in Vect ,R1x(), is differentiable and et x e t e t xxx x¢()=()=(),\nand of course ex01()=. This, of course proves that ex is the unique solution to ¢=fx f,\nf01()= in C11 RR,Vect , x()() , the set of one-time continuously differentiable functions\nfrom R to Vect ,R1x(). Given any rung E of CR() such that x˛E, ex is still the unique\nsolution to ¢=fx f, f01()= in C1R,E().\nThe perhaps surprising phenomenon is when we consider the equation ¢=fx f,\nf0()=g in C1R,E() for some rung E of CR() such that x˛E, and g˛E. If E=R or E=C,\n18then of course the solution is etx()g, and it turns out this is still true if E=H,because of the associativity of the quaternionic product (this, actually, is how one\ncan navigate the unit sphere of the quaternions, which is useful for interpolatingbetween orientations, and was the problem under examination in [\nD. Pletincks (1989) ]). It\nis interesting to note that this is still true if E=O because of the alternative nature\nof that algebra. This stops to be true with hexadecimalions, however. Indeed, consider\nxie=+¢¢¢ and g=j, and let gte tx()=()g. We will shortly see that ei eie+¢¢¢æ\nŁçö\nł÷=+ ¢¢¢()p2\n42\n2,\nfrom which we can deduce gp2\n42\n2æ\nŁçö\nł÷=+ ¢¢¢()iej and ¢æ\nŁçö\nł÷=+¢¢¢() +¢¢¢()æ\nŁçö\nł÷=- gp2\n42\n22 ie ie j j\nwhereas ie ie iej jk+¢¢¢()æ\nŁö\nł=+¢¢¢() +¢¢¢()æ\nŁçö\nł÷=- + ¢¢¢() gp\n22\n22 , and therefore ¢æ\nŁö\nł„+¢¢¢()æ\nŁö\nłggpp\n22ie .\nNumerical integration procedures will yield the solution to the differential equation,\nand therefore not the exponential function, unless care has been taken to chose thestarting point correctly.\n4- The closed formula for the exponential in CR()\nWe now give the main result of this work. Note that it is closed only in that we\nassume the exponential and classical trigonometric functions on R to be givens12.\nTheorem (Exponential):# If x˛()CR then Exp e cos Ur sinc Ur UrRexx x xx()= ()()+ ()() () []()\np . $\n# Let y˛()CR such that Rey()=0, y=1; then yy y yy21 =()-[] =-()=- TN . Therefore, in\nVect ,R1y() computations are carried out exactly as in C, with y taking the place of i.\nMore precisely, C, +Rfi() + []Vect ,1y a ib a by a is a Banach isomorphism.\nLet now x˛()CR. If x˛R, we see the result is trivially true. Assume, then that xˇR,\nand let ÃUr\nUrxx\nx=()\n(). Then ReÃx()=0, Ãx=1, xx x x=()+() Re Ur Ã, and of course\nVect , Vect , ÃRR11xx()=(). The previous identification then allows us to find\nExp e cos Ur sin Ur ÃRexx x xx()= ()()+()() [](). 
$\nAs an example, we have, as announced earlier, Expp2\n42\n2ie ie+¢¢¢()æ\nŁçö\nł÷=+ ¢¢¢().\n1912A family of special functions will be of interest here, that of the “Sinus Cardinal” functions, defined for\nsome parameter a˛R+* by \n sinc : ,sin\na xx\na\nx\naRRfiæ\nŁö\nłØ\nºŒ\nŒŒ\nŒø\nßœ\nœœ\nœap\n. We will, by similitude, define the\n“Hyperbolic Sinus Cardinal” family of functions defined for some parameter a˛R+* by\n sinhc : ,sinh\na xx\na\nx\naRRfiæ\nŁö\nłØ\nºŒ\nŒŒ\nŒø\nßœ\nœœ\nœap\n. These functions are entire functions on all of R.5- Some properties of the exponential and further consequences\nWe compute at once Exp eRe xx()=().\nAs should be expected when we lose the benefit of commutativity, the exponential\nof a sum is in general different from the product of the exponentials; for instance wehave \nExp Exp cos cos sin cos sin cos sin sinij i j k()()=() ()+() ()+() ()+() () 11 11 11 11 whereas\nExp cosij ij+()=()++()22\n2. We also see immediately that the exponential is not injective\non any rung E of CR() containing C, as it is already not injective on C! We, however\nalso lose the periodicity when E contains H, as the periods would make an additive\nsubgroup of E but the solutions of Exp x()=1 on H are exactly the set 202p..NS{}· (with\nS2 the unit sphere of R3); the rest is number theory (and trying to fit square pegs into\nround holes). We have more details on the surjectivity of the exponential:\nCorollary (sujectivity): # If E is a rung of CR() containing C, then the exponential is a\nsurjection from E onto E-{}0. $\n# We first note that given any x˛()CR, Exp eRe xx()=() proves that the exponential never\ntake the value 0 on CR().\nLet now y˛E, y„0. If y˛R we know we can solve our problem (in R if y>0, in C if\ny<0). Assume therefore that yˇR. We can find r˛R such that er=y. Let Äyy\ny=;\nÄy=1 and ÄyˇR, so let ÃUrÄ\nUrÄyy\ny=()\n(), so that ÄReÄUrÄà yy y y=()+(), ReÄy()„0 and ReÄUrÄ yy()+()=2 21.\nLet qp˛][0; the unique number such that cos Re Äq()=()y and sin Ur Ä q()=()y. We see that\nExp Ãrq+()=yy. $\nWe can likewise find closed formulæ for other interesting entire functions\n(defining cos!xx\nnnn\nn()=-()\n()=+¥\nå1\n22\n0, sin!xx\nnnn\nn()=-()\n+()+\n=+¥\nå1\n2121\n0, cosh!xx\nnn\nn()=()=+¥\nå2\n02, sinh!xx\nnn\nn()=+()+\n=+¥\nå21\n021), to wit:\ncos cos Re cosh Ur sin Re sinhc Ur x Ur xxxx x()=()() ()()-()() ()() ()p\nsin sin Re cosh Ur x cos Re sinhc Ur x Ur xxx x()=()() ()()+()() ()() ()p\ncosh cosh Re cos Ur sinh Re sinc Urxx x x x()= ()() ()()+()() ()()p\nsinh sinh Re cos Ur cosh Re sinc Ur Urxx x xx x()=()() ()()+ ()() ()() ()p\nand of course many other such.\n6- Conclusion\nWe have found a closed formula for the exponential, for quaternions, octonions,\nand beyond.\nAn interesting application of this formula is navigation on the unit sphere of\nthe quaternions, leading to an algorithm for the interpolation of orientations, butwhich, in general, does not preserve the horizontal. This can also be achieved, however,and has been implemented by the author and a colleague ([\nHorizontal-preserving quaternions ]).\n20Bibliography\nA.F. Beardon (1979): Complex Analysis, The Argument Principle in Analysis and Topology; John Wiley &\nSons, A Wiley-Interscience Publication, 1979.\nM. Berger (1990): Géom etrie 1; Nathan, 1990.\nN. Bourbaki (A): Algèbre.\nN. Bourbaki (EVT): Espaces Vectoriels Topologiques.\nP. de Casteljau (1987): Les quaternions; Hermes, Traité des nouvelles technologies, série mathématiques\nappliquées, 1987.\nG. 
Dixon (1994): Division Algebras: Octonions, Quaternions, Complex Numbers and the Algebraic Design
of Physics; Kluwer Academic Publishers, Mathematics and Its Applications, 1994.
J.-M. Exbrayat, P. Mazet (1971): Algèbre 1, Notions Fondamentales de la Théorie des Ensembles; Hatier-Université,
Notions Modernes de Mathématiques, 1971.
K. Gürlebeck, W. Sprössig (1989): Quaternion Analysis and Elliptical Boundary Problems; Birkhäuser,
International Series of Numerical Mathematics, vol. 89, 1989.
T. Howell, J.C. Lafon (1975): The complexity of the quaternion product; Cornell Computer Science TR
75-245, June 1975.
V.-K. Khoan (1972): Distributions, Analyse de Fourier, Opérateurs aux Dérivées Partielles, tome 1, Cours
et exercices résolus, maîtrise de mathématiques, certificat C2; Vuibert, 1972.
S. Lang (1971): Algebra; Addison-Wesley Series in Mathematics, Addison-Wesley Publishing
Company, revised printing, January 1971.
D. Leborgne (1982): Calcul différentiel et géométrie; P.U.F., Mathématiques, 1982.
J. Lelong-Ferrand, J.M. Arnaudiès (1978): Cours de mathématiques, Tome 1, Algèbre (3ème édition);
Dunod Université, 1978.
C.W. Misner, K.S. Thorne, J.A. Wheeler (1973): Gravitation; W.H. Freeman and Company, New York,
Chapter 41 - Spinors, 1973.
F. Pham (1996): Une définition non standard de l'exponentielle (variation sur un thème de Pierre
Cartier), 1996 (early version, unpublished, presented at the Journées Non Standard, Paris, 14/12/1996, and perhaps elsewhere).
G. Pichon (1973): Groupes de Lie, représentations linéaires et applications; Hermann, Collection
Méthodes, Paris, 1973.
D. Pletincks (1989): Quaternion calculus as a basic tool in computer graphics; in "The Visual Computer,
an International Journal of Computer Graphics", vol. 5, n° 1/2, pp. 2-13, 1989.
S. Salamon (1982): Quaternionic Kähler Manifolds; in Inventiones Mathematicae, vol. 67, n° 1,
pp. 143-171.
A. Sudbery (1979): Quaternionic Analysis; in Proceedings of the Cambridge Philosophical Society,
vol. 85, 1979.
Software index
Horizontal-preserving quaternions: available for licensing from the author, © Hubert Holin & Didier Vidal.
Maple: a commercial computer-aided mathematics software package, currently in version V, release
5.1; edited by Waterloo Maple Inc., 450 Phillip St., Waterloo, ON N2L 5J2, Canada;
http://www.maplesoft.com.
Interesting URLs
G. Dixon: http://www.7stones.com/Homepage/sevenhome2.html.
E. Weisstein: http://www.treasure-troves.com/math/CayleyNumber.html.
{ "category": "App Definition and Development", "file_name": "TQE.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "FLAT_STABLE_SORT\nNew stable sort algorithm\nCopyright (c) 2017 Francisco José Tapia (fjtapia@gmail.com )\n1.- INTRODUCTION\n2.- DESCRIPTION OF THE ALGORITHM\n2.1.- BASIC CONCEPTS\n2.1.1.- Block merge\n2.1.2.- Sequence merge\n2.1.3.- Example\n2.1.4.- Internal details\n \n2.2.- NUMBER OF ELEMENTS NO MULTIPLE OF THE BLOCK SIZE\n \n2.3.- SPECIAL CASES\n2.3.1.- Number of elements not sorted less or equal than the double of\nthe block size\n2.3.2.- Number of elements not sorted greater than the double of the \nblock size\n2.3.3.- Backward search\n \n3.- BENCHMARKS\n1.- INTRODUCTION\nflat_stable_sort is a new stable sort algorithm, created and implemented by the author, which use a very\nlow additional memory ( around 1% of the data size). The best case is O(N), and the average and worst case\nare O(NlogN).\nThe size of the additional memory is : size of the data / 256 + 8K\nData size Additional memory Percent\n1M 12 K 1.2 %\n1G 4M 0.4 %\nThe algorithm is fast sorting unsorted elements, but had been designed for to be extremely efficient when \nthe data are near sorted. By example :\n•Sorted elements with unsorted elements added at end or at he beginning .\n•Sorted elements with unsorted elements inserted in internal positions, or elements modified which\nalter the ordered sequence.\n•Reverse sorted elements\n•Combination of the three previous points.The results obtained in the sorting of 100 000 000 numbers plus a percent of unsorted elements inserted at \nend or in the middle , was\nrandom |10.78 |\n | |\nsorted | 0.07 |\nsorted + 0.1% end | 0.36 |\nsorted + 1% end | 0.49 |\nsorted + 10% end | 1.39 |\n | |\nsorted + 0.1% middle | 2.47 |\nsorted + 1% middle | 3.06 |\nsorted + 10% middle | 5.46 |\n | |\nreverse sorted | 0.14 |\nreverse sorted + 0.1% end | 0.41 |\nreverse sorted + 1% end | 0.55 |\nreverse sorted + 10% end | 1.46 |\n | |\nreverse sorted + 0.1% middle| 2.46 |\nreverse sorted + 1% middle| 3.16 |\nreverse sorted + 10% middle| 5.46 |\nThe results obtained with strings and objects of several sizes with different comparisons, was equally \nsatisfactory and according with the expected.\n2.- DESCRIPTION OF THE ALGORITHM\n2.1.- INTRODUCTION\nThe problem of the merge algorithms is where put the merged elements?. This is the justification of the\nadditional memory of the stable sort algorithms ( usually a half of the memory used by the data)\nThis algorithm have a different strategy. The data are grouped in blocks of fixed size. All elements inside a\nblock are ordered. Suppose, initially, than the number of elements is a multiple of the block size. Further we\nsee when this is not true.\nOther important concept is the sequence. We can have several blocks ordered, but the blocks are not\nphysically contiguous. We have a vector of positions, indicating the physically position of the blocks logically\nordered by the vector of positions. The sequence is defined by an iterator to the first and other to the after\nthe last element of the vector position.\nBy example\n5678\n101286\nThis sequence [ 5, 9 ), imply that the blocks of the sequence are in the physically positions 10, 12, 8 and 6.\nThe data inside this sequence of non contiguous blocks are ordered\nThe idea of the algorithms is similar to others merge algorithms. Initially sort N blocks and have N\nsequences of 1 block. Merge the sequence 0 and 1, 2 and 3, 3 and 4 and , at the end have N/2 sequences\nof two blocks. 
In the new sequences obtained , merge the 0 and 1, 2 and 3 … and obtain N/ 4 sequences.\nAt end of this process, we have only 1 sequence. The logical order is in a vector of positions, using this\nvector and with a simple algorithms, move the elements from their physically position to the logic position,\nand the process is finished. This process is done one time and is the last operation of the algorithm.2.1.1.- Block merge\nIn order to merge the blocks, use a “mixer”. This is a circular buffer, very simple and very fast, with an\ninternal size of the double of the block size. The internal memory of the mixer, is use in other moments by the\nalgorithm\nThe idea is merge two blocks, until one of them is empty. The merged elements are inside the mixer\n2.1.2.- Sequence merge\nWe have two sequences, defined as two ranges of positions in a vector . The ranges can be of the same or \ndistinct vector. By example [ 0, 4 ) and [ 4, 8 ).\nSequence 10123\n1203\nSequence 24567\n6547\nBegin to merge the block1 1 and 6. If the first empty block is the 1, fill the block 1 with the data from the front\nof the mixer, and add 1 to output list. \nNow, continue with the remaining elements of the block 6 and the next block of the first sequence, in this\nexample the 2. When any of the two blocks is empty, is filled with the front data of the mixer, and their\nposition is inserted in the output list.\nFor to explain, when a sequence is empty, we do with an example.\nSuppose the sequence 1 is empty\nSequence 10123\nSequence 24567\n47\nIn the mixer, we have merged elements, and the block 4 is partially filled. The elements needed for to fill the \nblock 4 are in the mixer. We move from the mixer to the block 4 and in the output list we add 4 and 7.\nWe have a new sequence, which can be like this \nOutput \nsequence01234567\n16205347\nThe elements in the blocks of a sequence are ordered.\n2.1.3.- Example\nWe have 4 blocks of 4 elements each. Initially, sorted the blocks, and now all the elements of a blocks are\nsorted.\nThe vector of positions of the blocks is called index. For to merge two sequences, copy the sequences from\nthe index to two vectors, and the output sequence is generated over the index.Index0123\n0123\nData0123\n5, 7, 12, 234, 6, 11, 151, 9, 13, 348, 14, 17,19\nWe have 4 sequences of 1 block. Merge the sequences 0 with 1 and 2 with 3. After this we will have two \nsequences of two blocks. When merge the two sequences of two blocks , we obtain a sequence of 4 blocks,\nwhich is the final sequence.\nMerge of blocks 0 and 1\nMixer Block 0 Block 1\n4,5,6,7,11,12,15 23\nThe block 1 is empty. We fill with the element from the front of the mixer. And add 1 to output sequence. The\nfirst sequence is empty. The remaining elements of the mixer are moved to the front of the block 0, and add\nthe 0 to the output sequence. The output sequence is [ 0, 2 ). We see in the index\nIndex0123\n1023\nData0123\n11, 12, 15, 234,5,6, 71, 9, 13, 348, 14, 17,19\nDoing the same with the blocks 2 and 3, we obtain\nIndex0123\n1032\nData0123\n11, 12, 15, 234, 5, 6, 714, 17, 19, 341, 8, 9, 13\nWe have now, two sequences of two blocks each. The sequences are defined in the index, [ 0, 2) and [ 2,\n4 ). The first sequence are the positions 0 and 1 of the index, and the second are the positions 2 and 3.\nFor to merge, take the firs block of the first sequence ( 1 ), with the first block of the second sequence (3),\nand begin the merge.\nThe first empty block is the 1. 
Now we fill the block 1 with the front data of the mixer, and add 1 to the output\nsequence. For to substitute the block 1 , take the next of the first sequence, the block 0, and continue with\nthe merge.\nNow the first empty block is the 3. We fill from the front of the mixer, and insert the number in the output\nsequence. For to substitute the block 3, take the next block of the sequence, the 2, and continue with the\nmerge.\nThe next empty block is the 0. We fill from the front of the mixer , and add their number to the output\nsequence. The first sequence is empty. Now we have only the block 2 partially filled, we fill from the mixer\nand add their position to the output list\nAt end, we can see\nIndex0123\n1302Data0123\n12, 13, 14, 151, 4, 5, 617, 19, 23, 347, 8, 9, 11\nNow, we must move the blocks, with a very simple algorithm, and pass to logical sorted with a index to a\nphysically order. This is done only one time and is the last operation of the algorithm.\nIndex0123\n0123\nData0123\n1, 4, 5, 67, 8, 9, 1112, 13, 14, 1517, 19, 23, 34\n2.1.4.- Internal details\nThe additional memory needed by the algorithm are the two blocks of the mixer, and the list with the\npositions of the blocks. When merge two sequences, copy each sequence in a vector , and the result is over\nthe index. In the last merge, the sum of the size of these two vectors is the same than the index. Due this,\nneed the size of the index, multiply by two.\nIn the merge process, participate two blocks, and the two blocks of the mixer. The size of the blocks is\ndesigned for to allocate the 4 blocks in the L1 cache of the processor. Normally they have 32K for core, and\neach core have two threads, then , we have 16K for each thread.\nDue this the block size must be 4K, and then: Size of one object x Block size = 4K\nSize of \nthe objectBlock \nSize\n41024\n8512\n16256\n32128\n6464\n2.2.- NUMBER OF ELEMENTS NOT MULTIPLE OF THE BLOCK \nSIZE\nWhen the number of elements to sort is not a multiple of the block size, we have an incomplete final block,\ncalled tail, and always is the last. When merge two sequences, if we have tail, always is in the second\nsequence. \nWhen exist tail block in the second sequence, the merge is as showed before. If the first sequence is empty,\nthe procedure is as described before.\nBut if the sequence empty is the second, and have tail, must do a different procedure. We see with an\nexample\nIndex012345\n120435Data012345\n38, 42, 44, 461, 5, 13, 1923, 29, 34, 3622, 25, 27, 284, 14, 17, 2031, 32\nWe want to merge the sequence [ 0, 3 ), y [ 3, 6 ). The block 5 is incomplete or tail block.\nBegin the merge, and the second sequence is empty. In this instant the state of the data is \nSequence 1012\n20\n \nThe block 2 is partially empty.\nSequence 2012\nOutput \nsequence012345\n143\nData012345\n38, 42, 44, 461, 4, 5, 13 34, 3622, 23, 25, 2714, 17, 19, 20\nMixer28, 29, 31, 32\nNow, we add the tail block to the end of the first sequence.\nData pending of the \nsequence 1205\n34, 3638, 42, 44, 46\nThe tail block have a size of two in this example. 
We shift to the right the data pending, the size of the tail\nblock ( 2 positions), and insert the data of the mixer by the left side, and we have\nData pending of the \nsequence 1205\n28, 29, 31, 32 34, 36, 38, 42 44, 46\nNow, we have the blocks filled, and now we must insert their numbers in the output sequence\nOutput \nsequence012345\n143205\nData012345\n34, 36, 38, 42 1, 4, 5, 1328, 29, 31, 3222, 23, 25, 2714, 17, 19, 2044, 46\nNow, only must move the blocks from the physical position to their logical position, and all the data are \nsorted.2.3.- SPECIAL CASES\nThe algorithm had been designed for to be extremely efficient with the data near sorted. We can classify in \nfour cases:\n•Sorted elements, and add unsorted elements to the beginning or the end.\n•Sorted elements and unsorted elements are inserted in internal positions, or elements modified, \nwhich alter the order of the elements.\n•Reverse sorted elements\n•Combination of the first 3 cases\nBegin from the first position, looking for sorted or reverse sorted elements. If the number of sorted or reverse\nsorted elements is greater than a value (usually ¼ of the number of elements), begin with an special\nprocess. If not, apply the general process described previously.\nFor to explain the special process, we do with an example\nIf the elements are reverse sorted, move between them for to be sorted.\nBy example, if have an array of 16 elements, with a block size of 4\n0123456789101112131415\n345781013141721612911216\nRange 1, 10 sorted elements Range 2, 6 unsorted elements\nSort the range 2, and obtain \n0123456789101112131415\n345781013141721269111216\nRange 1, 10 sorted elements Range 2, 6 sorted elements\nThe idea is to merge the two ranges, and can appear two cases:\n1.When the range 2 is lower or equal than the double of the block size\n2.When the range 2 is greater than the double of the block size\n2.3.1.- Number of elements lower or equal than the double of the block \nsize.\n0123456789101112131415\n345781013141721612911216\nSort the range 2, and after this, the idea is to use as auxiliary memory the internal memory of the mixed ( 2\nblocks). And do an insertion of sorted elements of the range 2 in the sorted elements of the range 1\n0123456789101112131415\n345781013141721269111216\nMove the range 2 in the auxiliary memory and obtain \n012345\n269111216Find the position of insertion of each element, and calculate the positions to shift the elements\n0123456789101112131415\n345781013141721\n1 position 2 positions3 pois-tions 5 positions6 positions\nShift the elements and insert the elements in the auxiliary memory and obtain\n0123456789101112131415\n234567891011121314161721\nAnd now, we have all the elements sorted\n2.3.2.- Number of elements greater than the double of the block size.\nIn the description of the algorithm, we have a merge of two sequences, but have a limitation. In the first\nsequence, all the blocks must be completed, and the number of elements multiple of the block size. The\nsecond sequence, don’t have this limitation, because can have a tail block..\n0123456789101112131415\n345781013141721612911216\nRange 1, 10 elements sorted Range 2, 6 elements unsorted\nIf we have a block size of 4, and the first range have 10 elements, we cut the number of elements of this\nrange to the nearest multiple of the block size, lower than the actual size. 
In this case 8, and after this the\nfirst range have 8 elements, and the second range 8 elements to sort.\n0123456789101112131415\n345781013141721612911216\nNew Range 1 8 sorted elements New Range 2 8 unsorted elements\nSort the range 2 and obtain\n0123456789101112131415\n345781013142691112161721\nNew Range 1 of 8 sorted elements New Range 2 of 8 sorted elements\nNow, the range 1 have a number of elements multiple of the block size, and we can do the merge as\ndescribed in the general case in the point 2.1.\n2.3.3.- Look for sorted elements from the end to the beginning\nIn the same way that we begin to look for sorted or reverse sorted elements from the beginning to the end,\nwe will do from the end to the beginning. The ideas and concepts are identical than in the look for forward\nIf the number of elements sorted or reverse sorted is greater or equal than ¼ of the number of elements,\napply an special process. If the number of elements pending to sort is lower or equal than the double of the\nblock size, we do an insertion similar to the previously described, and if it is greater, cut the number of\nelements of the range 2 , for to obtain a range 1 with a number of elements multiple of the block size, the\nsort, and after do the merge with the general process. 3.- BENCHMARKS\nThe measured memory in the sorting of 100 000 000 numbers of 64 bits was:\nstd::stable_sort 1176 MB\nBoost spin_sort 1175 MB\nBoost flat_stable_sort 787 MB\nIn the time benchmark, the random, sorted and reverse sorted elements are 100000000 \nnumbers of 64 bits. To these numbers, add 0.1%, 1% and 10% of unsorted elements inserted \nat the end and in the middle, uniformly spaced. \n[ 1 ] std::stable_sort [ 2 ] Boost spin_sort [ 3 ] Boost flat_stable_sort\n | | | |\n | [ 1 ]| [ 2 ]| [ 3 ]| \n------------------------------+------+------+------+\nrandom | 8.51 | 9.45 |10.78 |\n | | | |\nsorted | 4.86 | 0.06 | 0.07 | \nsorted + 0.1% end | 4.89 | 0.41 | 0.36 |\nsorted + 1% end | 4.96 | 0.55 | 0.49 |\nsorted + 10% end | 5.71 | 1.31 | 1.39 |\n | | | |\nsorted + 0.1% middle | 6.51 | 1.85 | 2.47 |\nsorted + 1% middle | 7.03 | 2.07 | 3.06 |\nsorted + 10% middle | 9.42 | 3.92 | 5.46 |\n | | | |\nreverse sorted | 5.10 | 0.13 | 0.14 |\nreverse sorted + 0.1% end | 5.21 | 0.52 | 0.41 |\nreverse sorted + 1% end | 5.27 | 0.65 | 0.55 |\nreverse sorted + 10% end | 6.01 | 1.43 | 1.46 |\n | | | |\nreverse sorted + 0.1% middle | 6.51 | 1.85 | 2.46 |\nreverse sorted + 1% middle | 7.03 | 2.07 | 3.16 |\nreverse sorted + 10% middle | 9.42 | 3.92 | 5.46 |\n------------------------------+------+------+------+" } ]
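The algorithm described in this document ships with the Boost.Sort library. Assuming a Boost release that exposes it as boost::sort::flat_stable_sort, taking an iterator range and an optional comparison object, a minimal usage sketch on a nearly sorted input (sizes chosen arbitrarily) looks like this:
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>
#include <boost/sort/sort.hpp>   // umbrella header of Boost.Sort

int main()
{
    // Nearly sorted input: sorted data with about 1% unsorted elements
    // appended at the end - the favourable case measured in the tables above.
    std::vector<std::uint64_t> v;
    for (std::uint64_t i = 0; i < 1000000; ++i) v.push_back(i);
    for (std::uint64_t i = 0; i < 10000; ++i)  v.push_back(i * 97 % 1000000);

    boost::sort::flat_stable_sort(v.begin(), v.end());
    // a comparison object may be passed as an optional third argument

    std::cout << std::boolalpha << std::is_sorted(v.begin(), v.end()) << '\n';
}
Because the input is almost sorted, the special cases of section 2.3 apply and the call should run in a small fraction of the random-data time shown in the benchmark table.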
{ "category": "App Definition and Development", "file_name": "flat_stable_sort_eng.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "THE BOOST C++ METAPROGRAMMING LIBRARY
Aleksey Gurtovoy
MetaCommunications
agurtovoy@meta-comm.com
David Abrahams
Boost Consulting
david.abrahams@rcn.com
Abstract
This paper describes the Boost C++ template metaprogramming library (MPL), an extensible compile-time framework of algorithms, sequences and metafunction classes. The library brings together important abstractions from the generic and functional programming worlds to build a powerful and easy-to-use toolset which makes template metaprogramming practical enough for the real-world environments. The MPL is heavily influenced by its run-time equivalent — the Standard Template Library (STL), a part of the C++ standard library [STL94], [ISO98]. Like the STL, it defines an open conceptual and implementation framework which can serve as a foundation for future contributions in the domain. The library's fundamental concepts and idioms enable the user to focus on solutions without navigating the universe of possible ad-hoc approaches to a given metaprogramming problem, even if no actual MPL code is used. The library also provides a compile-time lambda expression facility enabling arbitrary currying and composition of class templates, a feature whose runtime counterpart is often cited as missing from the STL. This paper explains the motivation, usage, design, and implementation of the MPL with examples of its real-life applications, and offers some lessons learned about C++ template metaprogramming.
Table of Contents
1. Introduction
1.1. Native language metaprogramming
1.2. Metaprogramming in C++
1.2.1. Numeric computations
1.2.2. Type computations
1.2.3. Type sequences
1.3. Why metaprogramming?
1.4. Why a metaprogramming library?
2. Basic usage
2.1. Conditional type selection
2.1.1. Delayed evaluation
2.2. Metafunctions
2.2.1. The simple form
2.2.2. Higher-order metafunctions
2.2.3. Metafunction classes
2.2.4. One size fits all?
2.2.5. From metafunction to metafunction class
2.3. Sequences, algorithms, and iterators
2.3.1. Introduction
2.3.2. Algorithms and sequences
2.3.3. Sequence concepts
2.3.4. Ad hoc example revisited
2.3.5. iter_fold as the main iteration algorithm
2.3.6. Sequences of numbers
2.3.7. A variety of sequences
2.3.8. Loop/recursion unrolling
3. Lambda facility
4. Code generation facilities
5. Example: a compile-time FSM generator
5.1. Implementation
5.2. Related work
6. Acknowledgements
7. References

1.
Introduction\nMetaprogramming is usually defined as the creation of programs which generate other programs. Parser generators\nsuch as YACC [Joh79] are examples of one kind of program-generating program. The input language to YACC is a\ncontext-free grammar in Extended Backus-Naur Form [EBNF], and its output is a program which parses that gram-\nmar. Note that in this case the metaprogram (YACC) is written in a language ( C) which does not directly support the\ndescription of generated programs. These specifications, which we'll call metadata, are not written in C, but in a\nmeta-language . Because the the rest of the user's program typically requires a general-purpose programming system\nand must interact with the generated parser, the metadata is translated into C, which is then compiled and linked to-\ngether with the rest of the system. The metadata thus undergoes two translation steps, and the user is always very\nconscious of the boundary between her metadata and the rest of her program.\n1.1. Native language metaprogramming\nA more interesting form of metaprogramming is available in languages such as Scheme [SS75], where the generated\nprogram specification is given in the same language as the metaprogram itself. The metaprogrammer defines her\nmeta-language as a subset of the expressible forms of the underlying language, and program generation can take\nplace in the same translation step used to process the rest of the user's program. This allows users to switch transpar-\nently between ordinary programming, generated program specification, and metaprogramming, often without being\naware of the transition.\n1.2. Metaprogramming in C++\nIn C++, it was discovered almost by accident [Unr], [Vel95a] that the template mechanism provides a rich facility\nfor computation at compile-time. In this section, we'll explore the basic mechanisms and some common idioms used\nfor metaprogramming in C++.\n1.2.1. Numeric computations\nThe availability of non-type template parameters makes it possible to perform integer computations at compile-time.\nFor example, the following template computes the factorial of its argument:\ntemplate< unsigned n >\nstruct factorial\n{\nstatic const unsigned value = n * factorial<n-1>::value;\n};\ntemplate<>\nstruct factorial<0>\n{\nstatic const unsigned value = 1;\n};\nThe program fragment above is called a metafunction , and it is easy to see its relationship to a function designed to\nbe evaluated at runtime: the \"metafunction argument\" is passed as a template parameter, and its \"return value\" is de-\nfined as a nested static constant. Because of the hard line between the expression of compile-time and runtime com-\nputation in C++, metaprograms look different from their runtime counterparts. Thus, although as in Scheme the C++\nmetaprogrammer writes her code in the same language as the ordinary program, only a subset of the full C++ lan-\nguage is available to her: those expressions which can be evaluated at compile-time. Compare the above with a\nstraightforward runtime definition of the factorial function:\nunsigned factorial(unsigned N)\n{THE BOOST C++ METAPROGRAMMING LIBRARY\n3return N == 0 ? 1 : N * factorial(N - 1);\n}\nWhile it is easy to see the analogy between the two recursive definitions, recursion is in general more important to\nC++ metaprograms than it is to runtime C++. In contrast to languages such as Lisp where recursion is idiomatic,\nC++ programmers will typically avoid recursion when possible. 
This is done not only for efficiency reasons, but\nalso because of \"cultural momentum\": recursive programs are simply harder (for C++ programmers) to think about.\nLike pure Lisp, though, the C++ template mechanism is a functional programming language: as such it rules out the\nuse of data mutation required to maintain loop variables.\nA key difference between the runtime and compile-time factorial functions is the expression of the termination con-\ndition: our meta-factorial uses template specialization as a kind of pattern-matching mechanism to describe the be-\nhavior when Nis zero. The syntactic analogue in the runtime world would require two separate definitions of the\nsame function. In this case the impact of the second definition is minimal, but in large metaprograms the cost of\nmaintaining and understanding the terminating definitions can become significant.\nNote also that a C++ metafunction's return value must be named. The name chosen here, value, is the same one\nused for all numeric returns in the MPL. As we'll see, establishing a consistent naming convention for metafunction\nreturns is crucial to the power of the library.\n1.2.2. Type computations\nHow could we apply our factorial metafunction? We might, for example, produce an array type of an appropri-\nate size to hold all permutations of instances of another type:\n// permutation_holder<T>::type is an array type which can contain\n// all permutations of a given T.\n// unspecialized template for scalars\ntemplate< typename T >\nstruct permutation_holder\n{\ntypedef T type[1][1];\n};\n// specialization for array types\ntemplate< typename T, unsigned N >\nstruct permutation_holder<T[N]>\n{\ntypedef T type[factorial<N>::value][N];\n};\nHere we have introduced the notion of a type computation . Likefactorial above, permutation_holder tem-\nplate is a metafunction. However, where factorial manipulates unsigned integer values, permuta-\ntion_holder accepts and \"returns\" a type (as the nested typedef type). Because the C++ type system provides a\nmuch richer set of expressions than anything we can use as a nontype template argument (e.g. the integers), C++\nmetaprograms tend to be composed mostly of type computations.\n1.2.3. Type sequences\nThe ability to programmatically manipulate collections of types is a central tool of most interesting C++ metapro-\ngrams. Because this capability is so well-supported by the MPL, we'll provide just a brief introduction to the basics\nhere. Later on, we'll revisit the example below to show how it can be implemented using MPL.\nFirst, we'd need a way to represent the collection. One idea might be to store the types in a structure:\nstruct types\n{\nint t1;THE BOOST C++ METAPROGRAMMING LIBRARY\n4long t2;\nstd::vector<double> t3;\n};\nUnfortunately, this arrangement is not susceptible to the compile-time type introspection power that C++ gives us:\nthere's no way to find out what the names of the members are, and even if we assume that they're named according\nto some convention as above, there's no way to know how many members there are. The key to solving this problem\nis to increase the uniformity of the representation. 
If we have a consistent way to get the first type of any sequence and the rest of the sequence, we can easily access all members:
template< typename First, typename Rest >
struct cons
{
typedef First first;
typedef Rest rest;
};
struct nil {};
typedef
cons<int
, cons<long
, cons<std::vector<double>
, nil
> > > my_types;
The structure described by types above is the compile-time analogue of a singly-linked list; it was first introduced by Czarnecki and Eisenecker in [CE98]. Now that we've adjusted the structure so that the C++ template machinery can "peel it apart", let's examine a simple metafunction which does so. Suppose a user wished to find the largest of an arbitrary collection of types. We can apply the recursive metafunction formula which should by now be familiar:
// choose the larger of two types
template<
typename T1
, typename T2
, bool choose1 = (sizeof(T1) > sizeof(T2)) // hands off!
>
struct choose_larger
{
typedef T1 type;
};
// specialization for the case where sizeof(T2) >= sizeof(T1)
template< typename T1, typename T2 >
struct choose_larger< T1,T2,false >
{
typedef T2 type;
};
// get the largest of a cons-list
template< typename T > struct largest;
// specialization to peel apart the cons list
template< typename First, typename Rest >
struct largest< cons<First,Rest> >
: choose_larger< First, typename largest<Rest>::type >
{
// type inherited from base
};
// specialization for loop termination
template< typename First >
struct largest< cons<First,nil> >
{
typedef First type;
};
int main()
{
// print the name of the largest of my_types
std::cout
<< typeid(largest<my_types>::type).name()
<< std::endl
;
}
There are several things worth noticing about this code:
• It uses a few ad-hoc, esoteric techniques, or "hacks". The default template argument choose1 (labeled "hands off!") is one example. Without it, we would have needed yet another template to provide the implementation of choose_larger, or we would have had to provide the computation explicitly as a parameter to the template — perhaps not bad for this example, but it would make choose_larger much less useful and more error-prone. The other hack is the derivation of a specialization of largest from choose_larger. This is a code-saving device which allows the programmer to avoid writing "typedef typename ...::type type" in the template body.
• Even this simple metaprogram uses three separate partial specializations. The largest metafunction uses two specializations. One might expect that this indicates there are two termination conditions, but there are not: one specialization is needed simply to deal with access to the sequence elements. These specializations make the code difficult to read by spreading the definition of a single metafunction over several C++ template definitions. Also, because they are partial specializations, they make the code unusable for a large community of C++ programmers whose compilers don't support that feature.
While these techniques are, of course, a valuable part of the arsenal of any good C++ metaprogrammer, their use tends to make programs written in what is already an unusual style harder to read and harder to write. By encapsulating commonly-used structures and dealing with loop terminations internally, the MPL reduces the need for both tricky hacks and for template specializations.
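By way of contrast — and anticipating the sequences and algorithms of section 2.3 — the same computation can be written against the library itself. The sketch below assumes the facilities documented in the MPL reference (mpl::vector, max_element, sizeof_, and lambda expressions with placeholders); the particular size-comparison predicate is just one workable choice:
#include <boost/mpl/vector.hpp>
#include <boost/mpl/max_element.hpp>
#include <boost/mpl/less.hpp>
#include <boost/mpl/sizeof.hpp>
#include <boost/mpl/deref.hpp>
#include <boost/mpl/placeholders.hpp>
#include <iostream>
#include <typeinfo>
#include <vector>

namespace mpl = boost::mpl;
using namespace mpl::placeholders;

typedef mpl::vector< int, long, std::vector<double> > my_types;

// no hand-written recursion and no termination specializations:
// compare elements by their size and dereference the resulting iterator
typedef mpl::deref<
      mpl::max_element<
          my_types
        , mpl::less< mpl::sizeof_<_1>, mpl::sizeof_<_2> >
      >::type
    >::type largest_type;

int main()
{
    // print the name of the largest of my_types, as before
    std::cout << typeid(largest_type).name() << std::endl;
}
The iteration, the termination condition and the element access are all supplied by the library; no partial specializations appear in the user's code.

1.3.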
Why metaprogramming?\nIt's worth asking why anyone would want to do this. After all, even a simple toy example like the factorial metafunc-\ntion is somewhat esoteric. To show how the type computation can be put to work, let's examine a simple example.\nThe following code produces an array containing all possible permutations of another array:\n// can't return an array in C++, so we need this wrapper\ntemplate< typename T >\nstruct wrapper\n{\nT x;\n};\n// return an array of the N! permutations of 'in'\ntemplate< typename T >\nwrapper< typename permutation_holder<T>::type >\nall_permutations(T const& in)\n{\nwrapper<typename permutation_holder<T>::type> result;\n// copy the unpermutated array to the first result element\nunsigned const N = sizeof(T) / sizeof(**result.x);\nstd::copy(&*in, &*in + N, result.x[0]);\n// enumerate the permutations\nunsigned const result_size = sizeof(result.x) / sizeof(T);THE BOOST C++ METAPROGRAMMING LIBRARY\n6for (T* dst = result.x + 1; dst != result.x + result_size; ++dst)\n{\nT* src = dst - 1;\nstd::copy(*src, *src + N, *dst);\nstd::next_permutation(*dst, *dst + N);\n}\nreturn result;\n}\nThe runtime definition of factorial would be useless in all_permutations above, since in C++ the sizes of\narray members must be computed at compile-time. However, there are alternative approaches; how could we avoid\nmetaprogramming, and what would the consequences be?\n• We could write programs to interpret the metadata directly. In our factorial example, the array size could have\nbeen a runtime quantity; then we'd have been able to use the straightforward factorial function. However, that\nwould imply the use of dynamic allocation, which is often expensive.\n• To carry this further, YACC might be rewritten to accept a pointer-to-function returning tokens from the stream\nto be parsed, and a string containing the grammar description. This approach, however, would impose unaccept-\nable runtime costs for most applications: either the parser would have to treat the grammar nondeterministically,\nexploring the grammar for each parse, or it would have to begin by replicating at runtime the substantial table-\ngeneration and optimization work of the existing YACC for each input grammar.\n• We could replace the compile-time computation with our own analysis. After all, the size of arrays passed to\nall_permutations are always known at compile-time, and thus can be known to its user. We could ask the\nuser to supply the result type explicitly:\ntemplate< typename Result, typename T >\nResult all_permutations(T const& input);\nThe costs to this approach are obvious: we give up expressivity (by requiring the user to explicitly specify imple-\nmentation details), and correctness (by allowing the user to specify them incorrectly). Anyone who has had to\nwrite parser tables by hand will tell you that the impracticality of this approach is the very reason of YACC's ex-\nistence.\nIn a language such as C++, where the metadata can be expressed in the same language as the rest of the user's\nprogram, expressivity is further enhanced: the user can invoke metaprograms directly, without learning a foreign\nsyntax or interrupting the flow of her code.\nSo, the motivation for metaprogramming comes down to the combination of three factors: efficiency, expressivity,\nand correctness. 
While in classical programming there is always a tension between expressivity and correctness on\none hand and efficiency on the other, in the metaprogramming world we wield new power: we can move the compu-\ntation required for expressivity from runtime to compile-time.\n1.4. Why a metaprogramming library?\nOne might just as well ask why we need any generic library:\n• Quality. Code that is appropriate for a general-purpose library is usually incidental to the purpose of its users. To\na library developer, it is the central mission. On average, the containers and algorithms provided by any given\nC++ standard library implementation are more-flexible and better-implemented than the project-specific imple-\nmentations which abound, because library development was treated as an end in itself rather than a task inciden-\ntal to the development of some other application. With a centralized implementation for any given function, opti-\nmizations and improvements are more likely to have been applied.\n• Re-use. More important even than the re-use of code which all libraries provide, a well-designed generic libraryTHE BOOST C++ METAPROGRAMMING LIBRARY\n7establishes a framework of concepts and idioms which establishes a reusable mental model for approaching\nproblems. Just as the C++ Standard Template Library gave us iterator concepts and a function object protocol,\nthe Boost Metaprogramming Library provides type-iterators and metafunction class protocol. A well-considered\nframework of idioms saves the metaprogrammer from considering irrelevant implementation details and allows\nher to concentrate on the problem at hand.\n• Portability. A good library can smooth over the ugly realities of platform differences. While in theory a metapro-\ngramming library is fully generic and shouldn't be concerned with these issues, in practice support for templates\nremains inconsistent even four years after standardization. This should perhaps not be surprising: C++ templates\nare the language's furthest-reaching and most complicated feature, which largely accounts for the power of\nmetaprogramming in C++.\n• Fun. Repeating the same idioms over and over is tedious. It makes programmers tired and reduces productivity.\nFurthermore, when programmers get bored they get sloppy, and buggy code is even more costly than slowly-\nwritten code. Often the most useful libraries are simply patterns that have been \"plucked\" by an astute program-\nmer from a sea of repetition. The MPL helps to reduce boredom by eliminating the need for the most commonly-\nrepeated boilerplate coding patterns.\nAs one can see, the MPL's development is motivated primarily by the same practical, real-world considerations that\njustify the development of any other library. Perhaps this is an indication that template metaprogramming is finally\nready to leave the realm of the esoteric and enter the lingua franca of every day programmers.\n2. Basic usage\n2.1. Conditional type selection\nConditional type selection is the simplest basic construct of C++ template metaprogramming. Veldhuizen [Vel95a]\nwas the first to show how to implement it, and Czarnecki and Eisenecker [CE00] first presented it as a standalone li-\nbrary primitive. The MPL defines the corresponding facility as follows:\ntemplate<\ntypename Condition\n, typename T1\n, typename T2\n>\nstruct if_\n{\ntypedefunspecified type;\n};\nNote that the first template parameter of the template is a type. 
The primitive's semantics intuitively matches its\nname:\ntypedef mpl::if_<mpl::true_,char,long>::type t1;\ntypedef mpl::if_<mpl::false_,char,long>::type t2;\nBOOST_MPL_ASSERT(( is_same< t1, char > ));\nBOOST_MPL_ASSERT(( is_same< t2, long > ));\nThe construct is important because template metaprograms often contain a lot of decision-making code, and, as we\nwill show, spelling it manually every time via (partial) class template specialization quickly becomes impractical.\nThe template is also important from the point of encapsulating the compiler workarounds.\n2.1.1. Delayed evaluationTHE BOOST C++ METAPROGRAMMING LIBRARY\n81Although it would be easy to implement pointed_type using partial specialization to distinguish the case where Tis a pointer, if_is\nlikely to be the right tool for dealing with more complex conditionals. For the purposes of exposition, please suspend disbelief!The way the C++ template instantiation mechanism works imposes some subtle limitations on applicability of the\ntype selection primitive ( if_), compared to a manually implemented equivalent of the selection code. For example,\nsuppose we are implementing a pointed_type traits template such that pointed_type<T>::type instantiated\nfor aTthat is either a plain pointer ( U*),std::auto_ptr<U> , or any of the Boost smart pointers [SPL], e.g.\nboost::scoped_ptr<U> , will give us the pointed type ( U):\nBOOST_MPL_ASSERT(( is_same< pointed_type<my*>::type, my > ));\nBOOST_MPL_ASSERT(( is_same< pointed_type< std::auto_ptr<my> >::type, my > ));\nBOOST_MPL_ASSERT(( is_same< pointed_type< boost::scoped_ptr<my> >::type, my> ));\nUnfortunately, the straightforward application of if_to this problem does not work:1\ntemplate< typename T >\nstruct pointed_type\n: mpl::if_<\nboost::is_pointer<T>\n, typename boost::remove_pointer<T>::type\n, typename T::element_type // #1\n>\n{\n};\n// the following code causes compilation error in line #1:\n// name followed by \"::\" must be a class or namespace name\ntypedef pointed_type<char*>::type result;\nClearly, the expression typename T::element_type is not valid in the case of T == char* , and that's what\nthe compiler is complaining about. Implementing the selection code manually solves the problem:\nnamespace aux {\n// general case\ntemplate< typename T, bool is_pointer = false >\nstruct select_pointed_type\n{\ntypedef typename T::element_type type;\n};\n// specialization for plain pointers\ntemplate< typename T >\nstruct select_pointed_type<T,true>\n{\ntypedef typename boost::remove_pointer<T>::type type;\n};\n}\ntemplate< typename T >\nstruct pointed_type\n: aux::select_pointed_type<\nT, boost::is_pointer<T>::value\n>\n{\n};\nBut this quickly becomes awkward if needs to be done repeatedly, and this awkwardness is compounded when par-\ntial specialization is not available. We can try to work around the problem as follows:\nnamespace aux {\ntemplate< typename T >THE BOOST C++ METAPROGRAMMING LIBRARY\n9struct element_type\n{\ntypedef typename T::element_type type;\n};\n}\ntemplate< typename T >\nstruct pointed_type\n{\ntypedef typename mpl::if_<\nboost::is_pointer<T>\n, typename boost::remove_pointer<T>::type\n, typename aux::element_type<T>::type\n>::type type;\n};\nbut this doesn't work either — the access to the aux::element_type<T> 's nested typemember still forces the\ncompiler to instantiate element_type<T> withT == char* , and that instantiation is, of course, invalid. 
Also,\nalthough in our case this does not lead to a compile error, the boost::remove_pointer<T> template always\ngets instantiated as well, and for the same reason (because we are accessing its nested typemember). Unnecessary\ninstantiation that is not fatal may or may be not a problem, depending on the \"weight\" of the template (how much\nthe instantiation taxes the compiler), but a general rule of thumb would be to avoid such code.\nReturning to our error, to make the above code compile, we need to factor the act of \"asking\"\naux::element_type<T> for its nested typeout of the if_invocation. The fact that both the\nboost::remove_pointer<T> trait template and aux::element_type<T> use the same naming convention\nfor their result types makes the refactoring easier:\ntemplate< typename T >\nstruct pointed_type\n{\nprivate:\ntypedef typename mpl::if_<\nboost::is_pointer<T>\n, boost::remove_pointer<T>\n, aux::element_type<T>\n>::type func_;\npublic:\ntypedef typename func_::type type;\n};\nNow the compiler is guaranteed not to instantiate both boost::remove_pointer<T> and\naux::element_type<T> , even although they are used as actual parameters to the if_template, so we are al-\nlowed to get away with aux::element_type<char*> so long as it won't end up being selected as func_.\nThe above technique is so common in template metaprograms, that it even makes sense to facilitate the selection of\na nested typemember by introducing a high-level equivalent to if_— the one that will do the func_::type op-\neration (that is called [nullary] metafunction class application) as a part of its invocation. The MPL provides such\ntemplate — it's called eval_if . Using it, we can re-write the above code as simple as:\ntemplate< typename T >\nstruct pointed_type\n{\ntypedef typename mpl::eval_if<\nboost::is_pointer<T>\n, boost::remove_pointer<T>\n, aux::element_type<T>\n>::type type;\n};\nTo make our techniques review complete, let's consider a slightly different example — suppose we want to define aTHE BOOST C++ METAPROGRAMMING LIBRARY\n10high-level wrapper around boost::remove_pointer traits template [TTL], which will strip the pointer qualifi-\ncation conditionally. We will call it remove_pointer_if :\ntemplate<\ntypename Condition\n, typename T\n>\nstruct remove_pointer_if\n{\ntypedef typename mpl::if_<\nCondition\n, typename boost::remove_pointer<T>::type\n, T\n>::type type;\n};\nNow the above works the first time, but it suffers from the problem we mentioned earlier —\nboost::remove_pointer<T> gets instantiated even if its result is never used. In the metaprogramming world\ncompilation time is an important resource [Abr01], and it is wasted by unnecessary template instantiations. We've\njust seen how to deal with the problem when both arguments to if_are the results of nullary metafunction class ap-\nplications, but in this example one of the arguments ( T) is just a simple type, so the refactoring just doesn't seem\npossible.\nThe easiest way out of this situation would be to pass to if_a real nullary metafunction instead of T— the one that\nreturns Ton its invocation. The MPL provides a simple way to do it — we just substitute identity<T> and\neval_if forTandif_:\ntemplate<\ntypename Condition\n, typename T\n>\nstruct remove_pointer_if\n{\ntypedef typename mpl::eval_if<\nCondition\n, boost::remove_pointer<T>\n, mpl::identity<T>\n>::type type;\n};\nwhich gives us exactly what we wanted.\n2.2. Metafunctions\n2.2.1. 
The simple form\nIn C++, the basic underlying language construct which allows parameterized compile-time computation is the class\ntemplate ([ISO98], section 14.5.1 [temp.class]). A bare class template is the simplest possible model we could\nchoose for metafunctions: it can take types and/or non-type arguments as actual template parameters, and instantia-\ntion \"returns\" a new type. For example, the following produces a type derived from its arguments:\ntemplate< typename T1, typename T2 >\nstruct derive : T1, T2\n{\n};\nHowever, this model is far too limiting: it restricts the metafunction result not only to class types, but to instantia-\ntions of a given class template, to say nothing of the fact that every metafunction invocation introduces an additional\nlevel of template nesting. While that might be acceptable for this particular metafunction, any model which pre-THE BOOST C++ METAPROGRAMMING LIBRARY\n112In fact it's already broken: apply_twice doesn't even fit the metafunction concept since it requires a template (rather than a type) as its\nfirst parameter, which breaks the metafunction protocol.vented us from \"returning\", say, intis obviously not general enough. To meet this basic requirement, we must rely\non a nested type to provide our return value:\ntemplate< typename T1, typename T2 >\nstruct derive\n{\nstruct type : T1, T2 {};\n};\n// silly specialization, but demonstrates \"returning\" int\ntemplate<>\nstruct derive<void,void>\n{\ntypedef int type;\n};\nVeldhuizen [Vel95a] was first to talk about class templates of this form as \"compile-time functions\", and Czarnecki\nand Eisenecker [CE00] have introduced \"template metafunction\" as an equivalent term (they also use the simpler\nterm \"metafunction\", as do we). Czarnecki and Eisenecker have also recognized the limitations of the simple meta-\nfunction representation and suggested the form that we discuss in the Metafunction classes section.\n2.2.2. Higher-order metafunctions\nWhile syntactically simple, the simple template metafunction form does not always interact optimally with the rest\nof C++. In particular, the simple metafunction form makes it unnecessarily awkward and tedious to define and work\nwith higher-order metafunctions (metafunctions that operate on other metafunctions). In order to pass a simple meta-\nfunction to another template, we need to use template template parameters :\n// returns F(T1,F(T2,T3))\ntemplate<\ntemplate<typename,typename> class F\n, typename T1\n, typename T2\n, typename T3\n>\nstruct apply_twice\n{\ntypedef typename F<\nT1\n, typename F<T2,T3>::type\n>::type type;\n};\n// a new metafunction returning a type derived from T1, T2, and T3\ntemplate<\ntypename T1\n, typename T2\n, typename T3\n>\nstruct derive3\n: apply_twice<derive,T1,T2,T3>\n{\n};\nThis looks different, but it seems to work.2\nHowever, things begin to break down noticeably when we want to \"return\" a metafunction from our metafunction:\n// returns G s.t. G(T1,T2,T3) == F(T1,F(T2,T3))\ntemplate< template<typename,typename> class F >THE BOOST C++ METAPROGRAMMING LIBRARY\n12struct compose_self\n{\ntemplate<\ntypename T1\n, typename T2\n, typename T3\n>\nstruct type\n: apply_twice<F,T1,T2,T3>\n{\n};\n};\nThe first and most obvious problem is that the result of applying compose_self is not itself a type, but a template,\nso it can't be passed in the usual ways to other metafunctions. A more subtle issue, however, is that the metafunction\n\"returned\" is not exactly what we intended. 
Although it acts just like apply_twice , it differs in one important re-\nspect: its identity. In the C++ type system, compose_self<F>::template type<T,U,V> is not a synonym for\napply_twice<F,T,U,V> , and any metaprogram which compared metafunctions would discover that fact.\nBecause C++ makes a strict distinction between type and class template template parameters, reliance on simple\nmetafunctions creates a \"wall\" between metafunctions and metadata, relegating metafunctions to the status of sec-\nond-class citizens. For example, recalling our introduction to type sequences, there's no way to make a conslist of\nmetafunctions:\ntypedef cons<derive, cons<derive3, nil> > derive_functions; // error!\nWe might consider redefining our conscell so we can pass deriveas the head element:\ntemplate <\ntemplate< template<typename T, typename U> class F\n, typename Tail\n>\nstruct cons;\nHowever, now we have another problem: C++ templates are polymorphic with respect to their type arguments, but\nnot with respect to template template parameters. The arity (number of parameters) of any template template param-\neter is strictly enforced, so we stillcan't embed derive3 in aconslist. Moreover, polymorphism betweentypes\nand metafunctions is not supported (the compiler expects one or the other), and as we've seen, the syntax and seman-\ntics of \"returned\" metafunctions is different from that of returned types. Trying to accomplish everything with the\nsimple template metafunction form would seriously limit the applicability of higher-order metafunctions and would\nhave an overall negative effect on the both conceptual and implementation clarity, simplicity and size of the library.\n2.2.3. Metafunction classes\nFortunately, the truism that \"there is no problem in software which can't be solved by adding yet another level of in-\ndirection\" applies here. To elevate metafunctions to the status of first-class objects, the MPL introduces the concept\nof a \"metafunction class\":\n// metafunction class form of derive\nstruct derive\n{\ntemplate< typename N1, typename N2 >\nstruct apply\n{\nstruct type : N1, N2 {};\n};\n};\nThis form should look familiar to anyone acquainted with function objects in STL, with the nested applytemplateTHE BOOST C++ METAPROGRAMMING LIBRARY\n13taking the same role as the runtime function-call operator. In fact, compile-time metafunction classes have the same\nrelationship to metafunctions that runtime function objects have to functions:\n// function form of add\ntemplate< typename T > T add(T x, T y) { return x + y; }\n// function object form of add\nstruct add\n{\ntemplate< typename T >\nT operator()(T x, T y) { return x + y; }\n};\n2.2.4. One size fits all?\nThe metafunction class form solves all the problems with ordinary template metafunction mentioned earlier: since it\nis a regular class, it can be placed in compile-time metadata sequences and manipulated by other metafunctions us-\ning the same protocols as for any other metadata. We thereby avoid the code-duplication needed to provide versions\nof each library component to operate on ordinary metadata and on metafunctions with each distinct supported arity.\nOn the other hand, it seems that accepting metafunction classes as therepresentation for compile-time function enti-\nties imposes code duplication danger as well: if the library's own primitives, algorithms, etc. 
are represented as class\ntemplates, that means that one either cannot reuse these algorithms in the context of higher-order functions, or she\nhave to duplicate all algorithms in the second form, so, for instance, there would be two versions of find:\n// user-friendly form\ntemplate<\ntypename Sequence\n, typename T\n>\nstruct find\n{\ntypedef /* ... */ type;\n};\n// \"metafunction class\" form\nstruct find_func\n{\ntemplate< typename Sequence, typename T >\nstruct apply\n{\ntypedef /* ... */ type;\n};\n};\nOf course, the third option is to eliminate \"user-friendly form\" completely so one would always have to write\ntypedef mpl::find::apply<list,long>::type iter;\nor even\ntypedef mpl::apply< mpl::find,list,long >::type iter;\ninstead of\ntypedef mpl::find<list,long>::type iter;\nThat too would hurt usability, considering that the direct invocations of library's algorithms are far more often-used\nthan passing algorithms as arguments to other algorithms/metafunctions.THE BOOST C++ METAPROGRAMMING LIBRARY\n142.2.5. From metafunction to metafunction class\nThe MPL's answer to this dilemma is lambda expressions . Lambda is the mechanism that enables the library to\ncurry metafunctions and convert them into metafunction classes, so when one wants to pass the findalgorithm as\nan argument to a higher-order metafunction, she just write:\nusing namespace mpl::placeholders;\ntypedef mpl::apply< my_f, mpl::find<_1,_2> >::type result;\nwhere_1and_2areplaceholders for the first and second arguments to the resulting metafunction class. This pre-\nserves the intuitive syntax below for when the user wants to use finddirectly in her code:\ntypedef mpl::find<list,long>::type iter;\nThis functionality is described in more details in the Lambda facility section.\n2.3. Sequences, algorithms, and iterators\n2.3.1. Introduction\nCompile-time iteration over a sequence (of types) is one of the basic concepts of template metaprogramming. Differ-\nences in types of objects being manipulated is the most common point of variability of similar but not identical\ncode/design, and such designs are the direct target for some metaprogramming. Templates were originally designed\nto solve this exact problem (e.g. std::vector ). However, without predefined abstractions/constructs for manipu-\nlating/iterating over sequences of types (as opposed to standalone types), and without known techniques for emulat-\ning these constructs using the current language facilities, their effect on helping high-level metaprogramming hap-\npen has been limited.\nCzarnecki and Eisenecker [CE98], [CE00] were the first to introduce compile-time sequences of types and some\nsimple algorithms on them, although the idea of representing common data structures like trees, lists, etc. at compile\ntime, using class template composition has been around for a while (e.g. most of the expression template libraries\nbuild such trees as a part of their expression \"parsing\" process [Vel95b]). Alexandrescu [Ale01] used lists of types\nand some algorithms on them to implement several design patterns; the accompanying code is known as the Loki li-\nbrary [Loki].\n2.3.2. Algorithms and sequences\nMost of the algorithms in the Boost Metaprogramming Library operate on sequences. 
For example, searching for a\ntype in a list looks like this:\ntypedef mpl::list<char,short,int,long,float,double> types;\ntypedef mpl::find<types,long>::type iter;\nHere,findaccepts two parameters — a sequence to search ( types) and the type to search for ( long) — and re-\nturns an iterator iterpointing to the first element of the sequence such that iter::type is identical to long. If\nno such element exists, iteris identical to end<types>::type . Basically, this is how one would search for a\nvalue in a std::list orstd::vector , except that mpl::find accepts the sequence as a single parameter,\nwhilestd::find takes two iterators. Everything else is pretty much the same — the names are the same, the se-\nmantics are very close, there are iterators, and one can search not only by type, but also by using a predicate:\ntypedef mpl::find_if< types,boost::is_float<_> >::type iter;THE BOOST C++ METAPROGRAMMING LIBRARY\n153A more precise definition of these concepts can be found in the library reference documentation [MPLR].This conceptual/syntactical similarity with the STL is not coincidental. Reusing the conceptual framework of the\nSTL in the compile-time world allows us to apply familiar and sound approaches for dealing with sequential data\nstructures. The algorithms and idioms which programmers already know from the STL can be applied again at com-\npile-time. We consider this to be one of MPL's greatest strengths, distinguishing it from earlier attempts to build a\ntemplate metaprogramming library.\n2.3.3. Sequence concepts\nIn thefindexample above, we searched for the type in a sequence built using the mpl::list template; but list\nis not the only sequence that the library provides. Neither is mpl::find or any other algorithm hard-coded to work\nonly with listsequences. listis just one model of MPL's Forward Sequence concept, and findworks with\nanything that satisfies this concept's requirements. The hierarchy of sequence concepts in MPL is quite simple — a\nForward Sequence is any compile-time entity for which begin<> andend<>produce iterators to the range of its\nelements; a Bidirectional Sequence is aForward Sequence whose iterators satisfy Bidirectional Iterator require-\nments; finally, a Random Access Sequence is aBidirectional Sequence whose iterators satisfy Random Access Itera-\ntorrequirements.3\nDecoupling algorithms from particular sequence implementations (through iterators) allows a metaprogrammer to\ncreate her own sequence types and to retain the rest of the library at her disposal. 
For example, one can define a\ntiny_list for dealing with sequences of three types as follows:\ntemplate< typename TinyList, long Pos >\nstruct tiny_list_item;\ntemplate< typename TinyList, long Pos >\nstruct tiny_list_iterator\n{\ntypedef typename tiny_list_item<TinyList,Pos>::type type;\ntypedef tiny_list_iterator<TinyList, Pos-1> prior;\ntypedef tiny_list_iterator<TinyList, Pos+1> next;\n};\ntemplate< typename T0, typename T1, typename T2 >\nstruct tiny_list\n{\ntypedef tiny_list_iterator<tiny_list, 0> begin;\ntypedef tiny_list_iterator<tiny_list, 3> end;\ntypedef T0 type0;\ntypedef T1 type1;\ntypedef T2 type2;\n};\ntemplate< typename TinyList >\nstruct tiny_list_item<TinyList,0>\n{\ntypedef typename TinyList::type0 type;\n};\ntemplate< typename TinyList >\nstruct tiny_list_item<TinyList,1>\n{\ntypedef typename TinyList::type1 type;\n};\ntemplate< typename TinyList >\nstruct tiny_list_item<TinyList,2>\n{\ntypedef typename TinyList::type2 type;\n};\nand then use it with any of the library algorithms as if it were mpl::list :THE BOOST C++ METAPROGRAMMING LIBRARY\n164Random access is almost as important at compile-time as it is at run-time. For example, searching for an item in a sorted random-access se-\nquence using lower_bound can be much faster than performing the same operation on a forward-access-only list.typedef tiny_list< char,short,int > types;\ntypedef mpl::transform<\ntypes\n, boost::add_pointer<_1>\n>::type pointers;\nAs written, tiny_list is a model of Bidirectional Sequence ; to turn it into a Random Access Sequence , we need to\npromote tiny_list_iterator into aRandom Access Iterator by specializing mpl::advance and\nmpl::distance metafunctions:\nnamespace boost { namespace mpl {\ntemplate< typename TinyList, long Pos, typename N >\nstruct advance< tiny_list_iterator<TinyList,Pos>, N >\n{\ntypedef tiny_list_iterator<\nTinyList\n, Pos + N::value\n> type;\n};\ntemplate< typename TinyList, long Pos1, long Pos2 >\nstruct distance<\ntiny_list_iterator<TinyList,Pos1>\n, tiny_list_iterator<TinyList,Pos2>\n>\n{\ntypedef mpl::integral_c<long, Pos2 - Pos1> type;\n};\n}}\nWhile the tiny_list itself might be not that interesting (after all, it can hold only three elements), if the technique\nabove could be automated so we would be able to define not-so-tiny sequences (with five, ten, twenty, etc. ele-\nments), it would be very valuable.4\nExternal code generation is an option, but there exists a solution within the language. However, it is not a template\nmetaprogramming, but rather preprocessor metaprogramming . In fact, MPL's vector — a fixed-size type se-\nquence that provides random-access iterators — is implemented very much like the above tiny_list — using the\nBoost Preprocessor library [PRE].\n2.3.4. Ad hoc example revisited\nSo, the library provides its users with almost complete compile-time equivalent of the STL framework. Does it help\nthem to solve their metaprogramming tasks? Let's return to our earlier largest example to see if we can rewrite it\nin a better way with what MPL has to offer. 
Well, actually, there is not much to look at, because the MPL imple-\nmentation is a one-liner (we'll spread it out here for readability)5:\ntemplate< typename Sequence >\nstruct largest\n{\ntypedef typename mpl::max_element<\nSequence\n, mpl::less<\nmpl::sizeof_<_1>\n, mpl::sizeof_<_2>\n>\n>::type iter;THE BOOST C++ METAPROGRAMMING LIBRARY\n175Here is another, even more elegant implementation:\ntemplate< typename Sequence >\nstruct largest\n{\ntypedef typename mpl::max_element<\nmpl::transform_view<\nSequence\n, mpl::sizeof_<_>\n>\n>::type iter;\ntypedef typename deref<iter>::type type;\n};\n6Theiter_fold 's interface in the current version of the library is slightly different from the one presented here. Please refer to the MPL ref-\nerence manual for the up-to-date information.typedef typename deref<iter>::type type;\n};\nThere are no more termination conditions with tricky pattern matching, no more partial specializations; and even\nmore importantly, it's obviouswhat the above code does — even although it's all templates — something that one\ncould not say about the original version.\n2.3.5. iter_fold as the main iteration algorithm\nFor the purpose of examining a little bit more of the library's internal structure, let's look at how max_element\nfrom the above example is implemented. One might expect that nowwe will again see all these awkward partial spe-\ncializations, esoteric pattern matching, etc. Well, let's see:\ntemplate<\ntypename Sequence\n, typename Predicate\n>\nstruct max_element\n{\ntypedef typename mpl::iter_fold<\nSequence\n, typename mpl::begin<Sequence>::type\n, if_< less< deref<_1>,deref<_2> >, _2, _1 >\n>::type type;\n};\nThe first thing to notice here is that this algorithm is implemented in terms of another one: iter_fold . In fact, this\nis probably the most important point of the example, because nearly all other generic sequence algorithms in the li-\nbrary are implemented in terms of iter_fold . If a user should ever need to implement her own sequence algo-\nrithm, she'll almost certainly be able to do so using this primitive, which means she won't have to resort to imple-\nmenting hand-crafted iteration, pattern matching of special cases for loop termination, or workarounds for lack of\npartial specialization. It also means that her algorithm will automatically benefit from any optimizations the library\nhas implemented, (e.g. recursion unrolling), and that it will work with any sequence that is a model of ForwardSe-\nquence, because iter_fold does not require anything more of its sequence argument.\niter_fold algorithm is basically a compile-time equivalent of the foldorreducefunctions that comprise the\nbasic and well-known primitives of many functional programming languages. An analogy more familiar to a C++\nprogrammer would be the std::accumulate algorithm from the C++ standard library ([ISO98], section 26.4.1\n[lib.accumulate]). 
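For readers who want the run-time picture in front of them, the following snippet shows that std::accumulate counterpart; it is only an illustration of the analogy and is not part of the MPL.

#include <numeric>
#include <vector>

int sum(std::vector<int> const& v)
{
    // A run-time fold: an initial state (0) is combined with each element
    // in turn by a binary operation (the default operator+ here).
    // iter_fold performs the analogous combination at compile time, over
    // iterators into a type sequence, with a metafunction class taking the
    // place of the binary function object.
    return std::accumulate(v.begin(), v.end(), 0);
}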
However, iter_fold is designed to take advantage of the natural characteristics of recursive\ntraversal: it accepts twometafunction class arguments, the first of which is applied to the state \"on the way in\" and\nthe second of which is applied \"on the way out\".\nThe interface to iter_fold is defined in MPL as follows:6THE BOOST C++ METAPROGRAMMING LIBRARY\n187Ideally, if going this route, all the templates should be re-implemented for every integral type — char,int,short,long, etc.\n8The same technique was suggested by Czarnecki and Eisenecker in [CE00].template<\ntypename Sequence\n, typename InitialState\n, typename ForwardOp\n, typename BackwardOp = _1\n>\nstruct iter_fold\n{\ntypedefunspecified type;\n};\nThe algorithm \"returns\" the result of two-way successive applications of binary ForwardOp andBackwardOp op-\nerations to iterators in range [ begin<Sequence>::type ,end<Sequence>::type ) and previous result of an\noperation; the InitialState is logically placed before the sequence and included in the forward traversal. The re-\nsulttypeis identical to InitialState if the sequence is empty.\nThe library also provides reverse_iter_fold ,fold, andreverse_fold algorithms which wrap iter_fold\nto accommodate its most common usage patterns.\n2.3.6. Sequences of numbers\nWhat we've seen so far were sequences (and algorithms on sequences) of types. It is both possible and easy to ma-\nnipulate compile-time valuesusing the library as well. The only thing to remember is that in C++, class template\nnon-type template parameters give us one more example of non-polymorphic behavior. In other words, if one de-\nclared a metafunction to take a non-type template parameter (e.g. long) it's not possible to pass anything besides\ncompile-time integral constants to it:\ntemplate< long N1, long N2 >\nstruct equal_to\n{\nstatic bool const value = (N1 == N2);\n};\nequal_to<5,5>::value; // OK\nequal_to<int,int>::value; // error!\nAnd of course this doesn't work the other way around either:\ntypedef mpl::list<1,2,3,4,5> numbers; // error!\nWhile this may be an obvious limitation, it imposes yet another dilemma on the library design: on the one hand, we\ndon't want to restrict users to type manipulations only, and on the other hand, full support for integral manipulations\nwould require at least duplication of most of the library facilities7— the same situation as we would have if we had\nchosen to represent metafunctions as ordinary class templates. The solution for this issue is the same as well: we\nrepresent integral values by wrapping them in types.8For example, to create a list of numbers one can write:\ntypedef mpl::list<\nmpl::int_<1>\n, mpl::int_<2>\n, mpl::int_<3>\n, mpl::int_<4>\n, mpl::int_<5>\n> numbers;\nWrapping integral constants into types to make them first-class citizens is important well inside metaprograms,\nwhere one often doesn't know (and doesn't care) if the metafunctions she is using operate on types, integral values,\nother metafunctions, or something else, like compile-time fixed-point or rational numbers.THE BOOST C++ METAPROGRAMMING LIBRARY\n19But, from the user's perspective, the above example is much more verbose than the shorter, incorrect one. Thus, for\nthe purpose of convenience, the library does provide users with a template that takes non-type template parameters,\nbut offers a more compact notation:\ntypedef mpl::list_c<long,1,2,3,4,5> numbers;\nThere is a similar vectorcounterpart as well:\ntypedef mpl::vector_c<long,1,2,3,4,5> numbers;\n2.3.7. 
A variety of sequences
Previous efforts to provide generalized metaprogramming facilities for C++ have always concentrated on cons-style type lists and a few core algorithms like size and at, which are tied to the specific sequence implementation. Such systems have an elegant simplicity reminiscent of the analogous functionality in pure functional Lisp. It is much more time-consuming to implement even a basic set of the sequence algorithms provided by equivalent run-time libraries (the STL in particular), but if we have learned anything from the STL, it is that tying those algorithms' implementations to a specific sequence implementation is a misguided effort!

The truth is that there is no single "best" type sequence implementation, for the same reasons that there will never be a single "best" runtime sequence implementation. Furthermore, there are already quite a number of type list implementations in use today; and just as the STL algorithms can operate on sequences which don't come from STL containers, so the MPL algorithms are designed to work with foreign type sequences.

It may be an eye-opening fact for some that type lists are not the only useful compile-time sequence. Again, the need for a variety of compile-time containers arises for the same reasons that we have lists, vectors, deques, and sets in the C++ standard library: different containers have different functional and performance characteristics which determine not only the applicability and efficiency of particular algorithms, but also the expressiveness or verbosity of the code that uses them. While runtime performance is not an issue for C++ metaprograms, compilation speed is often a significant bottleneck to advanced C++ software development [Abr01].

The MPL provides five built-in sequences: list, list_c (really just a list of value wrappers), vector (a randomly-accessible sequence of fixed maximum size), vector_c, and range_c (a randomly-accessible sequence of consecutive integral values). More important, however, is its ability to adapt to arbitrary sequence types. The only core operations that a sequence is required to provide in order to be used with the library algorithms are the begin<> and end<> metafunctions, which "return" iterators into the sequence. As with the STL, it is the iterators which are used to implement most of the general-purpose sequence algorithms the library provides. Also as with the STL, algorithm specialization is used to take advantage of implementation knowledge about particular sequences: many of the "basic" sequence operations such as back<>, front<>, size<>, and at<> are specialized on sequence type to provide a more efficient implementation than the fully generic version.

2.3.8. Loop/recursion unrolling
Almost coincidentally, loop unrolling can be as important to compile-time iterative algorithms as it is to runtime algorithms. To see why, one must first remember that all "loops" in C++ metaprograms are, in fact, implemented with recursion, and that the template instantiation depth can be a valuable resource in a compiler implementation. In fact, Annex B of the C++ standard ([ISO98], annex B [limits]) recommends a minimum depth of 17 recursively nested template instantiations; but this is far too low for many serious metaprograms, some of which easily exceed the hard-coded instantiation limits of some otherwise excellent compilers.
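As a toy illustration (not taken from the paper or the library) of how quickly that budget is consumed, consider a metafunction whose only job is to recurse; each level of such recursion uses up one level of the compiler's instantiation depth, exactly as each element of a sequence does in a naive linear algorithm:

// Illustration only: instantiating deep<N> requires N nested instantiations.
template< unsigned N >
struct deep : deep< N - 1 >
{
};

template<>
struct deep<0>   // termination of the recursion
{
};

// With the minimum depth of 17 suggested by Annex B, deep<100> already
// fails to compile; most compilers let the limit be raised (GCC and Clang,
// for example, accept a -ftemplate-depth=N command-line option), but deep
// recursion still costs compile time and memory.
typedef deep<100> hundred_levels;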
To see how this works in action, let's examine a straightforward implementation of the fold metafunction, which combines some algorithm state with each element of a sequence:

namespace aux {

// unspecialized version combines the initial state and first element
// and recurses to process the rest
template<
      typename Start
    , typename Finish
    , typename State
    , typename BinaryFunction
    >
struct fold_impl
    : fold_impl<
          typename next<Start>::type
        , Finish
        , typename apply<
              BinaryFunction
            , State
            , typename deref<Start>::type
            >::type
        , BinaryFunction
        >
{
};

// specialization for loop termination
template<
      typename Finish
    , typename State
    , typename BinaryFunction
    >
struct fold_impl<Finish,Finish,State,BinaryFunction>
{
    typedef State type;
};

} // namespace aux

// public interface
template<
      typename Sequence
    , typename State
    , typename ForwardOp
    >
struct fold
    : aux::fold_impl<
          typename begin<Sequence>::type
        , typename end<Sequence>::type
        , State
        , ForwardOp
        >
{
};

Although simple and elegant, this implementation will always incur at least as many levels of recursive template instantiation as there are elements in the input sequence (it could be much more, depending on the complexity of the apply<...> expression, whose depth is added to the overall recursion depth). The library addresses this problem by explicitly "unrolling" the recursion. To apply the technique to our fold example, we begin by factoring out a single step of the algorithm. Our fold_impl_step metafunction has two results: type (the next state), and iterator (the next sequence position).

template<
      typename BinaryFunction
    , typename State
    , typename Start
    , typename Finish
    >
struct fold_impl_step
{
    typedef typename apply<
          BinaryFunction
        , State
        , typename deref<Start>::type
        >::type type;

    typedef typename next<Start>::type iterator;
};

As with our main algorithm implementation, we specialize for the loop termination condition so that the step becomes a no-op:

template<
      typename BinaryFunction
    , typename State
    , typename Finish
    >
struct fold_impl_step<BinaryFunction,State,Finish,Finish>
{
    typedef State type;
    typedef Finish iterator;
};

Now we can reduce fold's instantiation depth by any constant factor N simply by inserting N invocations of fold_impl_step.
Here we've chosen a factor of 4:\ntemplate<\ntypename Start\n, typename Finish\n, typename State\n, typename BinaryFunction\n>\nstruct fold_impl\n{\nprivate:\ntypedef fold_impl_step<\nBinaryFunction\n, State\n, Start\n, Finish\n> next1;\ntypedef fold_impl_step<\nBinaryFunction\n, typename next1::type\n, typename next1::iterator\n, Finish\n> next2;\ntypedef fold_impl_step<\nBinaryFunction\n, typename next2::type\n, typename next2::iterator\n, Finish\n> next3;\ntypedef fold_impl_step<\nBinaryFunction\n, typename next3::type\n, typename next3::iterator\n, Finish\n> next4;\ntypedef fold_impl_step<\ntypename next4::iterator\n, Finish\n, typename next4::type\n, BinaryFunctionTHE BOOST C++ METAPROGRAMMING LIBRARY\n2210This implementation detail is made relatively painless through heavy reliance on the Boost Preprocessor Library [PRE], so only one copy of the\ncode needs to be maintained.> recursion;\npublic:\ntypedef typename recursion::type type;\n};\nThe MPL applies this unrolling technique across all algorithms with an unrolling factor tuned according to the de-\nmands of the C++ implementation in use, and with an option for the user to override the value.10\nThis fact enables users to push beyond the metaprogramming limits they would usually encounter with more naive\nalgorithm implementations. Experiments also show a small (up to 10%) increase in metaprogram instantiation speed\non some compilers when loop unrolling is used.\n3. Lambda facility\nThe MPL's lambda facility allows the inline composition of class templates into \"lambda expressions\", which are\nclasses and can therefore be passed around as ordinary metafunction classes, or transformed into metafunction\nclasses before application using the expression:\ntypedef mpl::lambda<expr>::type func;\nFor example, boost::remove_const traits template from Boost type_traits library [TTL] is a class tem-\nplate, or a metafunction in MPL terminology. The simplest example of an \"inline composition\" of it would be some-\nthing like:\ntypedef boost::remove_const<_1> expr;\nThis forms a so called \"lambda expression\", which is neither a metafunction class, nor a metafunction, yet can be\npassed around everywhere because it's an ordinary C++ class, because all MPL facilities are polymorphic with re-\nspect to their arguments. Now, that lambda expression can be transformed into a metafunction class using the MPL's\nlambdafacility:\ntypedef boost::remove_const<_1> expr;\ntypedef mpl::lambda<expr>::type func;\nThefuncis a unary metafunction class and can be used as such. 
In particular, it can be pass around or invoked\n(applied):\ntypedef mpl::apply<func,int const>::type res;\nBOOST_MPL_ASSERT(( is_same<res, int> ));\nor even\ntypedef func::apply<int const>::type res;\nBOOST_MPL_ASSERT(( is_same<res, int >));\nInline composition is very appealing syntactically when one deals with metafunctions, because it makes the expres-\nsion obvious:\ntypedef mpl::or_<\nmpl::less< mpl::sizeof_<_1>, mpl::int_<16> >\n, boost::is_same<_1,_2>\n> expr;THE BOOST C++ METAPROGRAMMING LIBRARY\n23typedef mpl::lambda<expr>::type func;\nIn fact, that last bit ( typedef lambda<expr>::type func ) is unnecessary, because all MPL algorithms per-\nform this transformation to all of their metafunction class operands internally (a lambda<T>::type expression ap-\nplied to a metafunction class gives back the same metafunction class, so it's safe to apply the expression uncondi-\ntionally).\nThe alternative way to write an equivalent of the above metafunction class would be:\ntypedef mpl::bind<\nmpl::quote2<mpl::or_>\n, mpl::bind< mpl::quote2<mpl::less>\n, mpl::bind< mpl::quote1<mpl::sizeof_>,_1 >\n, mpl::int_<16>\n>\n, mpl::bind< mpl::quote2<boost::is_same>,_1,_2 >\n> func;\nHere, we use mpl::quote ntemplates to convert metafunctions into metafunction classes and then combine them\nusingmpl::bind primitive. The transformation from this form to the above inline lambda expression and vice-\nversa is mechanical, and that is essentially what happens under the hood when we write typedef\nmpl::lambda<expr>::type .\nFor its own metafunctions (algorithms, primitives, etc.), MPL enables us to write the above in a less cumbersome\nway:\ntypedef mpl::bind<\nmpl::or_<>\n, mpl::bind< mpl::less<>, mpl::bind<mpl::sizeof_<>,_1>, mpl::int_<16> >\n, mpl::bind< mpl::quote2<boost::is_same>, _1,_2 >\n> func;\nNote that because is_same is not an MPL primitive, we still have to wrap it using quote2.\n4. Code generation facilities\nThere are cases, especially in the domain of numeric computation, when one wants to perform some part of the cal-\nculations at compile-time, and then pass the results to a run-time part of the program for further processing. For ex-\nample, suppose one has implemented a complex compile-time algorithm that works with fixed-point arithmetic:\n// fixed-point algorithm input\ntypedef mpl::vector<\nfixed_c<-1,2345678>\n, fixed_c<9,0001>\n// ..\n, fixed_c<3,14159>\n> input_data;\n// complex compile-time algorithm\n// ...\ntypedef /*...*/ result_data;\nSuppose the result_data here is a sequence of fixed_c types that keeps the results of the algorithm, and now\none wishes to feed that result to the run-time part of the algorithm. With MPL she can do this:\ndouble my_algorithm()\n{\n// passing the results to the run-time part of the programTHE BOOST C++ METAPROGRAMMING LIBRARY\n24std::vector<double> results;\nresults.reserve(mpl::size<result_data>::value);\nmpl::for_each<numbers,_>(\nboost::bind(&std::vector<double>::push_back, &results, _1)\n);\n// ...\n}\nThefor_each<numbers,_>(...) call is what actually transfers the compile-time result_data into run-time\nresults .for_each is a function template declared as:\ntemplate<\ntypename Seq\n, typename TransformOp\n, typename F\n>\nvoid for_each(F f)\n{\n// ...\n}\nTo call the function, one is required to explicitly provide two actual template parameters, a compile-time sequence\nSeqand a unary transformation metafunction TransformOp , plus a run-time function argument f(in our example,\nnumbers ,_, andboost::bind(...) correspondingly). 
fis a function object which operator() is called for\nevery element in the Seqtranfromed by TransformOp .\nApplying this to our example, the\nmpl::for_each<numbers,_>(\nboost::bind(&std::vector<double>::push_back, &results, _1)\n);\ncall is roughly equivalent to this:\nf(mpl::apply< _,mpl::at_c<result_data,0>::type >::type());\nf(mpl::apply< _,mpl::at_c<result_data,1>::type >::type());\n// ...\nf(mpl::apply< _,mpl::at_c<result_data,n>::type >::type());\nwheren == mpl::size<result_data>::value .\n5. Example: a compile-time FSM generator\nFinite state machines (FSMs) are an important tool for describing and implementing program behavior [HU79],\n[Mar98]. They also are a good example of a domain in which metaprogramming can be applied to reduce the\namount of repetitive and boilerplate operations one must perform in order to implement these simple mathematical\nmodels in code. Below we present a simple state machine generator that has been implemented using Boost\nMetaprogramming Library facilities. The generator takes a compile-time automata description, and converts it into\nC++ code that implements the FSM at run-time.\nThe FSM description is basically a combination of states and events plus a state transition table (STT), which ties\nthem all together. The generator walks through the table and generates the state machine's process_event\nmethod that is the essence of an FSM.\nSuppose we want to implement a simple music player using a finite state machine model. The state transition table\nfor the FSM is shown in the table below. The STT format reflects the way one usually describes the behavior of an\nFSM in plain English. For example, the first line of the table can be read as follows: \"If the model is in the\nstopped state and the play_event is received, then the do_play transition function is called, and the model\ntransitions to the playing state.THE BOOST C++ METAPROGRAMMING LIBRARY\n2511The events need to be passed to action functions, as they may contain some event-specific information for an action.State Event Next state Transition function\nstopped play_event playing do_play\nplaying stop_event stopped do_stop\nplaying pause_event paused do_pause\npaused play_event playing do_resume\npaused stop_event stopped do_stop\nTable 1. Player's state transition table with actions\nThe transition table provides us with a complete formal definition of the target FSM, and there are several ways to\ntransform that definition into code. 
For instance, if we define states as members of an enumeration type, and events\nas classes derived from some base eventclass,11like so:\nclass player\n{\npublic:\n// event declarations\nstruct event;\nstruct play_event;\nstruct stop_event;\nstruct pause_event;\n// \"input\" function\nvoid process_event(event const&); // throws\nprivate:\n// states\nenum state_t { stopped, playing, paused };\n// transition functions\nvoid do_play(play_event const&);\nvoid do_stop(stop_event const&);\nvoid do_pause(pause_event const&);\nvoid do_resume(play_event const&);\nprivate:\nstate_t m_state;\n};\nthen the most straightforward way to derive the FSM implementation from the above table would be something like\nthis:\nvoid player::process_event(event const& e)\n{\nif (m_state == stopped)\n{\nif (typeid(e) == typeid(play_event))\n{\ndo_play(static_cast<play_event const&>(e));\nm_state = playing;\nreturn;\n}\n}\nelse if (m_state == playing)\n{\nif (typeid(e) == typeid(stop_event))\n{\ndo_stop(static_cast<stop_event const&>(e));THE BOOST C++ METAPROGRAMMING LIBRARY\n26m_state = stopped;\nreturn;\n}\nif (typeid(e) == typeid(pause_event))\n{\ndo_pause(static_cast<pause_event const&>(e));\nm_state = paused;\nreturn;\n}\n}\nelse if (m_state == paused)\n{\nif (typeid(e) == typeid(stop_event))\n{\ndo_stop(static_cast<stop_event const&>(e));\nm_state = stopped;\nreturn;\n}\nif (typeid(e) == typeid(play_event))\n{\ndo_play(static_cast<play_event const&>(e));\nm_state = playing;\nreturn;\n}\n}\nelse\n{\nthrow logic_error(\nboost::format(\"unknown state: %d\")\n% static_cast<int>(m_state)\n);\n}\nthrow std::logic_error(\n\"unexpected event: \" + typeid(e).name()\n);\n}\nAlthough there is nothing particularly wrong with implementing an FSM's structure using nested if(orswitch-\ncase) statements, the obvious weakness of this approach is that most of the above code is boilerplate. What one\ntends to do with boilerplate code is to copy and paste it, then change names etc. to adjust it to its new location; and\nthat's where the errors are most likely to creep in. Since all the lines of event processing look alike (structurally), it's\nvery easy to overlook or forget something that needs to be changed, and many such errors won't appear until the run-\ntime.\nThe transition table of our FSM is just five lines long; ideally, we would like the skeleton implementation of the au-\ntomata's controlling logic to be equally short (or, at least, to look equally short, i.e. to be encapsulated in some form\nso we never worry about it).\n5.1. Implementation\nTo represent the STT in a C++ program, we define a transition class template that represents a single line of the\ntable. 
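The definition of the transition template itself is not reproduced at this point. Purely as an illustration of what one row of the table has to carry, a minimal sketch might look like the code below; the nesting inside state_machine<Derived> and all of the member names here are assumptions made for this sketch, not the paper's actual implementation.

template< typename Derived >
struct state_machine
{
    // One row of the state transition table: "in state From, on Event,
    // call 'action' and move to state To".
    template<
          typename From
        , typename Event
        , typename To
        , void (Derived::*action)(Event const&)
        >
    struct transition
    {
        typedef From from;
        typedef Event event;
        typedef To to;

        // Invoked by the generated process_event() when this row matches.
        static void execute(Derived& fsm, Event const& e)
        {
            (fsm.*action)(e);
        }
    };

    // ... the rest of the generator (the synthesized process_event(), etc.)
    // is omitted from this sketch.
};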
Then the table itself can be represented as a sequence of such lines:\ntypedef mpl::list<\ntransition<stopped, play_event, playing, &player::do_play>\n, transition<playing, stop_event, stopped, &player::do_stop>\n, transition<playing, pause_event, paused, &player::do_pause>\n, transition<paused, play_event, playing, &player::do_resume>\n, transition<paused, stop_event, stopped, &player::do_stop>\n>::type transition_table;THE BOOST C++ METAPROGRAMMING LIBRARY\n27Now, the complete FSM will look like this:\nclass player\n: state_machine<player>\n{\nprivate:\ntypedef player self_t;\n// state invariants\nvoid stopped_state_invariant();\nvoid playing_state_invariant();\nvoid paused_state_invariant();\n// states (invariants are passed as non-type template arguments,\n// and are called then the FSM enters the corresponding state)\ntypedef state<0, &self_t::stopped_state_invariant> stopped;\ntypedef state<1, &self_t::playing_state_invariant> playing;\ntypedef state<2, &self_t::paused_state_invariant> paused;\nprivate:\n// event declarations; events are represented as types,\n// and can carry a specific data for each event;\n// but it's not needed for generator, so we define them later\nstruct play_event;\nstruct stop_event;\nstruct pause_event;\n// transition functions\nvoid do_play(play_event const&);\nvoid do_stop(stop_event const&);\nvoid do_pause(pause_event const&);\nvoid do_resume(play_event const&);\n// STT\nfriend class state_machine<player>;\ntypedef mpl::list<\ntransition<stopped, play_event, playing, &player::do_play>\n, transition<playing, stop_event, stopped, &player::do_stop>\n, transition<playing, pause_event, paused, &player::do_pause>\n, transition<paused, play_event, playing, &player::do_resume>\n, transition<paused, stop_event, stopped, &player::do_stop>\n>::type transition_table;\n};\nThat's all &mdash; the above will generate a complete FSM implementation according to our specification. The only\nthing we need before using it is the definition of the event types (that were just forward declared before):\n// event definitions\nstruct player::play_event\n: player::event\n{\n};\n// ...\nThe usage is simple as well:\nint main()\n{\n// usage example\nplayer p;\np.process_event(player::play_event());\np.process_event(player::pause_event());\np.process_event(player::play_event());THE BOOST C++ METAPROGRAMMING LIBRARY\n28p.process_event(player::stop_event());\nreturn 0;\n}\n5.2. Related work\nA notable prior work in the field of automation of general-purpose state machine implementation in C++ is the\nRobert Martin's State Machine Compiler [SMC]. The SMC takes an ASCII description of the machine's state transi-\ntion table and produces C++ code that implements the FSM using a variation of State design pattern [Hun91],\n[GHJ95]. Lafreniere [Laf00] presents another approach, where no external tools are used, and the FSMs are table\ndriven.\n6. Acknowledgements\nPeter Dimov contributed the bindfunctionality without which compile-time lambda expressions wouldn't have\nbeen possible. The MPL implementation would have been much more difficult without Vesa Karvonen's wonderful\nBoost Preprocessor Metaprogramming Library. Authors are also greatly indebted to David B. Held who kindly vol-\nunteered to thoroughly edit this document. Of course, any remaining errors are exclusively ours.\n7. 
References
[Abr01] David Abrahams and Carlos Pinto Coelho, Effects of Metaprogramming Style on Compilation Time, 2001
[Ale01] Andrei Alexandrescu, Modern C++ Design: Generic Programming and Design Patterns Applied, Addison-Wesley, ISBN 0-201-70431-5, 2001
[CE98] Krzysztof Czarnecki and Ulrich Eisenecker, Metalisp
[CE00] Krzysztof Czarnecki and Ulrich Eisenecker, Generative Programming: Methods, Tools, and Applications, Addison-Wesley, ISBN 0-201-30977-7, 2000
[EBNF] ISO/IEC 14977:1996(E), Information technology — Syntactic metalanguage — Extended BNF, ISO/IEC, 1996
[GHJ95] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, Design Patterns, Elements of Reusable Object-Oriented Software, Addison-Wesley, ISBN 0-201-63361-2, 1995
[HU79] Hopcroft and Ullman, Introduction to automata theory, languages and computations, Addison-Wesley, 1979
[Hud89] Paul Hudak, Conception, Evolution, and Application of Functional Programming Languages, ACM Computing Surveys, Association for Computing Machinery (ACM), ISSN 0360-0300, Vol. 21, Issue 3, pp. 359-411, September 1989
[Hun91] Immo Huneke, Finite State Machines: A Model of Behavior for C++, C++ Report, SIGS Publications Inc., ISSN 1040-6042, 1991
[ISO98] ISO/IEC 14882:1998(E), Programming languages — C++, ISO/IEC, 1998
[Joh79] Stephen C. Johnson, Yacc: Yet Another Compiler Compiler, UNIX Programmer's Manual, Vol. 2b, pp. 353-387, 1979
[Laf00] David Lafreniere, State Machine Design in C++, C/C++ User Journal, CMP Media LCC, ISSN 1075-2838, Vol. 18, Issue 5, May 1998
[Loki] The Loki library
[Mar98] Robert C. Martin, UML Tutorial: Finite State Machines, C++ Report, SIGS Publications Inc., ISSN 1040-6042, June 1998
[MPLR] Boost MPL Library Reference Documentation
[PRE] Vesa Karvonen, Boost Preprocessor Metaprogramming library
[SMC] Robert C. Martin, SMC - Finite State Machine Compiler (C++)
[STL94] A. A. Stepanov and M. Lee, The Standard Template Library, Hewlett-Packard Laboratories, 1994
[SPL] Boost Smart Pointer library
[SS75] Gerald J. Sussman and Guy L. Steele Jr., Scheme: An interpreter for extended lambda calculus, MIT AI Memo 349, Massachusetts Institute of Technology, May 1975
[TTL] Boost Type Traits library
[Vel95a] Todd Veldhuizen, Using C++ template metaprograms, C++ Report, SIGS Publications Inc., ISSN 1040-6042, Vol. 7, Issue 4, pp. 36-43, May 1995
[Vel95b] Todd Veldhuizen, Expression templates, C++ Report, SIGS Publications Inc., ISSN 1040-6042, Vol. 7, Issue 5, pp. 26-31, Jun 1995
[Unr] Erwin Unruh, Prime number computation, ANSI X3J16-94-0075/ISO WG21-462
{ "category": "App Definition and Development", "file_name": "mpl_paper.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "Autodiff
Automatic Differentiation C++ Library
Matthew Pulver
June 22, 2019
Contents
1 Description 2
2 Examples 2
2.1 Example 1: Single-variable derivatives 2
2.1.1 Calculate derivatives of $f(x) = x^4$ at $x = 2$. 2
2.2 Example 2: Multi-variable mixed partial derivatives with multi-precision data type 3
2.2.1 Calculate $\frac{\partial^{12} f}{\partial w^3 \partial x^2 \partial y^4 \partial z^3}(11, 12, 13, 14)$ with a precision of about 50 decimal digits, where $f(w, x, y, z) = \exp\left(w \sin\left(\frac{x \log(y)}{z}\right) + \sqrt{\frac{w z}{x y}}\right) + \frac{w^2}{\tan(z)}$. 3
2.3 Example 3: Black-Scholes Option Pricing with Greeks Automatically Calculated 4
2.3.1 Calculate greeks directly from the Black-Scholes pricing function. 4
3 Advantages of Automatic Differentiation 5
4 Mathematics 6
4.1 Truncated Taylor Series 6
4.1.1 Example 6
4.2 Arithmetic 7
4.2.1 Addition 7
4.2.2 Subtraction 7
4.2.3 Multiplication 7
4.2.4 Division 8
4.3 General Functions 8
4.4 Multiple Variables 9
4.4.1 Declaring Multiple Variables 9
5 Writing Functions for Autodiff Compatibility 10
5.1 Piecewise-Rational Functions 10
5.2 Functions That Call Existing Autodiff Functions 11
5.3 New Functions For Which The Derivatives Can Be Calculated 11
6 Function Writing Guidelines 12
6.1 Example 1: $f(x) = \max(0, x)$ 12
6.2 Example 2: $f(x) = \mathrm{sinc}(x)$ 13
6.3 Example 3: $f(x) = \sqrt{x}$ and $f'(0) = \infty$ 14
6.4 Summary 14
7 Acknowledgments 15
1
Copyright © Matthew Pulver 2018-2019.
Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )1 Description\nAutodi\u000b is a header-only C++ library that facilitates the automatic di\u000berentiation (forward mode) of mathematical\nfunctions of single and multiple variables.\nThis implementation is based upon the Taylor series expansion of an analytic function fat the point x0:\nf(x0+\") =f(x0) +f0(x0)\"+f00(x0)\n2!\"2+f000(x0)\n3!\"3+\u0001\u0001\u0001\n=NX\nn=0f(n)(x0)\nn!\"n+O\u0000\n\"N+1\u0001\n:\nThe essential idea of autodi\u000b is the substitution of numbers with polynomials in the evaluation of f(x0). By\nsubstituting the number x0with the \frst-order polynomial x0+\", and using the same algorithm to compute f(x0+\"),\nthe resulting polynomial in \"contains the function's derivatives f0(x0),f00(x0),f000(x0), ... within the coe\u000ecients.\nEach coe\u000ecient is equal to the derivative of its respective order, divided by the factorial of the order.\nIn greater detail, assume one is interested in calculating the \frst Nderivatives of fatx0. Without loss of\nprecision to the calculation of the derivatives, all terms O\u0000\n\"N+1\u0001\nthat include powers of \"greater than Ncan be\ndiscarded. (This is due to the fact that each term in a polynomial depends only upon equal and lower-order terms\nunder arithmetic operations.) Under these truncation rules, fprovides a polynomial-to-polynomial transformation:\nf : x0+\"7!NX\nn=0yn\"n=NX\nn=0f(n)(x0)\nn!\"n:\nC++'s ability to overload operators and functions allows for the creation of a class fvar (forward-mode autodi\u000b\nvariable) that represents polynomials in \". Thus the same algorithm fthat calculates the numeric value of y0=f(x0),\nwhen written to accept and return variables of a generic (template) type, is also used to calculate the polynomialPN\nn=0yn\"n=f(x0+\"). The derivatives f(n)(x0) are then found from the product of the respective factorial n! and\ncoe\u000ecient yn:\ndnf\ndxn(x0) =n!yn:\n2 Examples\n2.1 Example 1: Single-variable derivatives\n2.1.1 Calculate derivatives of f(x) =x4atx= 2.\nIn this example, make fvar<double, Order>(2.0) instantiates the polynomial 2 + \". The Order=5 means that\nenough space is allocated (on the stack) to hold a polynomial of up to degree 5 during the proceeding computation.\nInternally, this is modeled by a std::array<double,6> whose elementsf2, 1, 0, 0, 0, 0 gcorrespond to the\n6 coe\u000ecients of the polynomial upon initialization. Its fourth power, at the end of the computation, is a polynomial\nwith coe\u000ecients y =f16, 32, 24, 8, 1, 0 g. The derivatives are obtained using the formula f(n)(2) = n!\u0003y[n].\n#include <boost/math/differentiation/autodiff.hpp>\n#include <iostream>\ntemplate <typename T>\nT fourth_power(T const& x) {\nT x4 = x * x; // retval in operator*() uses x4's memory via NRVO.\nx4 *= x4; // No copies of x4 are made within operator*=() even when squaring.\nreturn x4; // x4 uses y's memory in main() via NRVO.\n}\nint main() {\nusing namespace boost::math::differentiation;\n2\nCopyright c\rMatthew Pulver 2018{2019. 
Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )constexpr unsigned Order = 5; // Highest order derivative to be calculated.\nauto const x = make_fvar<double, Order>(2.0); // Find derivatives at x=2.\nauto const y = fourth_power(x);\nfor (unsigned i = 0; i <= Order; ++i)\nstd::cout << \"y.derivative(\" << i << \") = \" << y.derivative(i) << std::endl;\nreturn 0;\n}\n/*\nOutput:\ny.derivative(0) = 16\ny.derivative(1) = 32\ny.derivative(2) = 48\ny.derivative(3) = 48\ny.derivative(4) = 24\ny.derivative(5) = 0\n*/\nThe above calculates\ny:derivative (0) = f(2) = x4\f\f\nx=2= 16\ny:derivative (1) = f0(2) = 4\u0001x3\f\f\nx=2= 32\ny:derivative (2) =f00(2) = 4\u00013\u0001x2\f\f\nx=2= 48\ny:derivative (3) =f000(2) =4\u00013\u00012\u0001xjx=2= 48\ny:derivative (4) =f(4)(2) = 4\u00013\u00012\u00011 = 24\ny:derivative (5) =f(5)(2) = 0\n2.2 Example 2: Multi-variable mixed partial derivatives with multi-precision data\ntype\n2.2.1 Calculate@12f\n@w3@x2@y4@z3(11;12;13;14)with a precision of about 50 decimal digits,\nwhere f(w; x; y; z ) = exp\u0010\nwsin\u0010\nxlog(y)\nz\u0011\n+q\nwz\nxy\u0011\n+w2\ntan(z).\nIn this example, make ftuple<float50, Nw, Nx, Ny, Nz>(11, 12, 13, 14) returns a std::tuple of 4 indepen-\ndent fvar variables, with values of 11, 12, 13, and 14, for which the maximum order derivative to be calculated for\neach are 3, 2, 4, 3, respectively. The order of the variables is important, as it is the same order used when calling\nv.derivative(Nw, Nx, Ny, Nz) in the example below.\n#include <boost/math/differentiation/autodiff.hpp>\n#include <boost/multiprecision/cpp_bin_float.hpp>\n#include <iostream>\nusing namespace boost::math::differentiation;\ntemplate <typename W, typename X, typename Y, typename Z>\npromote<W, X, Y, Z> f(const W& w, const X& x, const Y& y, const Z& z) {\nusing namespace std;\nreturn exp(w * sin(x * log(y) / z) + sqrt(w * z / (x * y))) + w * w / tan(z);\n}\nint main() {\nusing float50 = boost::multiprecision::cpp_bin_float_50;\nconstexpr unsigned Nw = 3; // Max order of derivative to calculate for w\nconstexpr unsigned Nx = 2; // Max order of derivative to calculate for x\n3\nCopyright c\rMatthew Pulver 2018{2019. 
Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )constexpr unsigned Ny = 4; // Max order of derivative to calculate for y\nconstexpr unsigned Nz = 3; // Max order of derivative to calculate for z\n// Declare 4 independent variables together into a std::tuple.\nauto const variables = make_ftuple<float50, Nw, Nx, Ny, Nz>(11, 12, 13, 14);\nauto const& w = std::get<0>(variables); // Up to Nw derivatives at w=11\nauto const& x = std::get<1>(variables); // Up to Nx derivatives at x=12\nauto const& y = std::get<2>(variables); // Up to Ny derivatives at y=13\nauto const& z = std::get<3>(variables); // Up to Nz derivatives at z=14\nauto const v = f(w, x, y, z);\n// Calculated from Mathematica symbolic differentiation.\nfloat50 const answer(\"1976.319600747797717779881875290418720908121189218755\");\nstd::cout << std::setprecision(std::numeric_limits<float50>::digits10)\n<< \"mathematica : \" << answer << '\\n'\n<< \"autodiff : \" << v.derivative(Nw, Nx, Ny, Nz) << '\\n'\n<< std::setprecision(3)\n<< \"relative error: \" << (v.derivative(Nw, Nx, Ny, Nz) / answer - 1) << '\\n';\nreturn 0;\n}\n/*\nOutput:\nmathematica : 1976.3196007477977177798818752904187209081211892188\nautodiff : 1976.3196007477977177798818752904187209081211892188\nrelative error: 2.67e-50\n*/\n2.3 Example 3: Black-Scholes Option Pricing with Greeks Automatically Calculated\n2.3.1 Calculate greeks directly from the Black-Scholes pricing function.\nBelow is the standard Black-Scholes pricing function written as a function template, where the price, volatility\n(sigma), time to expiration (tau) and interest rate are template parameters. This means that any Greek based on\nthese 4 variables can be calculated using autodi\u000b. The below example calculates delta and gamma where the variable\nof di\u000berentiation is only the price. For examples of more exotic greeks, see example/black scholes.cpp .\n#include <boost/math/differentiation/autodiff.hpp>\n#include <iostream>\nusing namespace boost::math::constants;\nusing namespace boost::math::differentiation;\n// Equations and function/variable names are from\n// https://en.wikipedia.org/wiki/Greeks_(finance)#Formulas_for_European_option_Greeks\n// Standard normal cumulative distribution function\ntemplate <typename X>\nX Phi(X const& x) {\nreturn 0.5 * erfc(-one_div_root_two<X>() * x);\n}\nenum class CP { call, put };\n// Assume zero annual dividend yield (q=0).\ntemplate <typename Price, typename Sigma, typename Tau, typename Rate>\npromote<Price, Sigma, Tau, Rate> black_scholes_option_price(CP cp,\ndouble K,\nPrice const& S,\nSigma const& sigma,\nTau const& tau,\n4\nCopyright c\rMatthew Pulver 2018{2019. Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )Rate const& r) {\nusing namespace std;\nauto const d1 = (log(S / K) + (r + sigma * sigma / 2) * tau) / (sigma * sqrt(tau));\nauto const d2 = (log(S / K) + (r - sigma * sigma / 2) * tau) / (sigma * sqrt(tau));\nswitch (cp) {\ncase CP::call:\nreturn S * Phi(d1) - exp(-r * tau) * K * Phi(d2);\ncase CP::put:\nreturn exp(-r * tau) * K * Phi(-d2) - S * Phi(-d1);\n}\n}\nint main() {\ndouble const K = 100.0; // Strike price.\nauto const S = make_fvar<double, 2>(105); // Stock price.\ndouble const sigma = 5; // Volatility.\ndouble const tau = 30.0 / 365; // Time to expiration in years. 
(30 days).\ndouble const r = 1.25 / 100; // Interest rate.\nauto const call_price = black_scholes_option_price(CP::call, K, S, sigma, tau, r);\nauto const put_price = black_scholes_option_price(CP::put, K, S, sigma, tau, r);\nstd::cout << \"black-scholes call price = \" << call_price.derivative(0) << '\\n'\n<< \"black-scholes put price = \" << put_price.derivative(0) << '\\n'\n<< \"call delta = \" << call_price.derivative(1) << '\\n'\n<< \"put delta = \" << put_price.derivative(1) << '\\n'\n<< \"call gamma = \" << call_price.derivative(2) << '\\n'\n<< \"put gamma = \" << put_price.derivative(2) << '\\n';\nreturn 0;\n}\n/*\nOutput:\nblack-scholes call price = 56.5136\nblack-scholes put price = 51.4109\ncall delta = 0.773818\nput delta = -0.226182\ncall gamma = 0.00199852\nput gamma = 0.00199852\n*/\n3 Advantages of Automatic Di\u000berentiation\nThe above examples illustrate some of the advantages of using autodi\u000b:\n\u000fElimination of code redundancy. The existence of Nseparate functions to calculate derivatives is a form of\ncode redundancy, with all the liabilities that come with it:\n{Changes to one function require Nadditional changes to other functions. In the 3rdexample above,\nconsider how much larger and inter-dependent the above code base would be if a separate function were\nwritten for each Greek value.\n{Dependencies upon a derivative function for a di\u000berent purpose will break when changes are made to the\noriginal function. What doesn't need to exist cannot break.\n{Code bloat, reducing conceptual integrity. Control over the evolution of code is easier/safer when the code\nbase is smaller and able to be intuitively grasped.\n\u000fAccuracy of derivatives over \fnite di\u000berence methods. Single-iteration \fnite di\u000berence methods always include\na \u0001xfree variable that must be carefully chosen for each application. If \u0001 xis too small, then numerical errors\nbecome large. If \u0001 xis too large, then mathematical errors become large. With autodi\u000b, there are no free\n5\nCopyright c\rMatthew Pulver 2018{2019. Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )variables to set and the accuracy of the answer is generally superior to \fnite di\u000berence methods even with the\nbest choice of \u0001 x.\n4 Mathematics\nIn order for the usage of the autodi\u000b library to make sense, a basic understanding of the mathematics will help.\n4.1 Truncated Taylor Series\nBasic calculus courses teach that a real analytic function f:D!Ris one which can be expressed as a Taylor series\nat a point x02D\u0012R:\nf(x) =f(x0) +f0(x0)(x\u0000x0) +f00(x0)\n2!(x\u0000x0)2+f000(x0)\n3!(x\u0000x0)3+\u0001\u0001\u0001\nOne way of thinking about this form is that given the value of an analytic function f(x0) and its derivatives\nf0(x0); f00(x0); f000(x0); :::evaluated at a point x0, then the value of the function f(x) can be obtained at any other\npoint x2Dusing the above formula.\nLet us make the substitution x=x0+\"and rewrite the above equation to get:\nf(x0+\") =f(x0) +f0(x0)\"+f00(x0)\n2!\"2+f000(x0)\n3!\"3+\u0001\u0001\u0001\nNow consider \"asan abstract algebraic entity that never acquires a numeric value , much like one does in basic algebra\nwith variables like xory. For example, we can still manipulate entities like xyand (1 + 2 x+ 3x2) without having\nto assign speci\fc numbers to them.\nUsing this formula, autodi\u000b goes in the other direction. 
Given a general formula/algorithm for calculating\nf(x0+\"), the derivatives are obtained from the coe\u000ecients of the powers of \"in the resulting computation. The\ngeneral coe\u000ecient for \"nis\nf(n)(x0)\nn!:\nThus to obtain f(n)(x0), the coe\u000ecient of \"nis multiplied by n!.\n4.1.1 Example\nApply the above technique to calculate the derivatives of f(x) =x4atx0= 2.\nThe \frst step is to evaluate f(x0+\") and simply go through the calculation/algorithm, treating \"as an abstract\nalgebraic entity:\nf(x0+\") =f(2 +\")\n= (2 + \")4\n=\u0000\n4 + 4\"+\"2\u00012\n= 16 + 32 \"+ 24\"2+ 8\"3+\"4:\nEquating the powers of \"from this result with the above \"-taylor expansion yields the following equalities:\nf(2) = 16 ; f0(2) = 32 ;f00(2)\n2!= 24;f000(2)\n3!= 8;f(4)(2)\n4!= 1;f(5)(2)\n5!= 0:\nMultiplying both sides by the respective factorials gives\nf(2) = 16 ; f0(2) = 32 ; f00(2) = 48 ; f000(2) = 48 ; f(4)(2) = 24 ; f(5)(2) = 0 :\nThese values can be directly con\frmed by the power rule applied to f(x) =x4.\n6\nCopyright c\rMatthew Pulver 2018{2019. Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )4.2 Arithmetic\nWhat was essentially done above was to take a formula/algorithm for calculating f(x0) from a number x0, and\ninstead apply the same formula/algorithm to a polynomial x0+\". Intermediate steps operate on values of the form\nx=x0+x1\"+x2\"2+\u0001\u0001\u0001+xN\"N\nand the \fnal return value is of this polynomial form as well. In other words, the normal arithmetic operators\n+;\u0000;\u0002;\u0004applied to numbers xare instead applied to polynomials x. Through the overloading of C++ operators\nand functions, \roating point data types are replaced with data types that represent these polynomials. More\nspeci\fcally, C++ types such as double are replaced with std::array<double,N+1> , which hold the above N+ 1\ncoe\u000ecients xi, and are wrapped in a class that overloads all of the arithmetic operators.\nThe logic of these arithmetic operators simply mirror that which is applied to polynomials. We'll look at each of\nthe 4 arithmetic operators in detail.\n4.2.1 Addition\nThe addition of polynomials xandyis done component-wise:\nz=x+y\n= NX\ni=0xi\"i!\n+ NX\ni=0yi\"i!\n=NX\ni=0(xi+yi)\"i\nzi=xi+yi fori2f0;1;2; :::; Ng:\n4.2.2 Subtraction\nSubtraction follows the same form as addition:\nz=x\u0000y\n= NX\ni=0xi\"i!\n\u0000 NX\ni=0yi\"i!\n=NX\ni=0(xi\u0000yi)\"i\nzi=xi\u0000yi fori2f0;1;2; :::; Ng:\n4.2.3 Multiplication\nMultiplication produces higher-order terms:\n7\nCopyright c\rMatthew Pulver 2018{2019. Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )z=x\u0002y\n= NX\ni=0xi\"i! NX\ni=0yi\"i!\n=x0y0+ (x0y1+x1y0)\"+ (x0y2+x1y1+x2y0)\"2+\u0001\u0001\u0001+0\n@NX\nj=0xjyN\u0000j1\nA\"N+O\u0000\n\"N+1\u0001\n=NX\ni=0iX\nj=0xjyi\u0000j\"i+O\u0000\n\"N+1\u0001\nzi=iX\nj=0xjyi\u0000j fori2f0;1;2; :::; Ng:\nIn the case of multiplication, terms involving powers of \"greater than N, collectively denoted by O\u0000\n\"N+1\u0001\n, are\nsimply discarded. Fortunately, the values of zifori\u0014Ndo not depend on any of these discarded terms, so there is\nno loss of precision in the \fnal answer. The only information that is lost are the values of higher order derivatives,\nwhich we are not interested in anyway. 
If we were, then we would have simply chosen a larger value of Nto begin\nwith.\n4.2.4 Division\nDivision is not directly calculated as are the others. Instead, to \fnd the components of z=x\u0004ywe require that\nx=y\u0002z. This yields a recursive formula for the components zi:\nxi=iX\nj=0yjzi\u0000j\n=y0zi+iX\nj=1yjzi\u0000j\nzi=1\ny00\n@xi\u0000iX\nj=1yjzi\u0000j1\nA fori2f0;1;2; :::; Ng:\nIn the case of division, the values for zimust be calculated sequentially, since zidepends on the previously calculated\nvalues z0; z1; :::; z i\u00001.\n4.3 General Functions\nCalling standard mathematical functions such as log() ,cos() , etc. should return accurate higher order derivatives.\nFor example, exp(x) may be written internally as a speci\fc 14th-degree polynomial to approximate exwhen 0 <\nx <1. This would mean that the 15thderivative, and all higher order derivatives, would be 0, however we know that\nd15\ndx15ex=ex. How should such functions whose derivatives are known be written to provide accurate higher order\nderivatives? The answer again comes back to the function's Taylor series.\nTo simplify notation, for a given polynomial x=x0+x1\"+x2\"2+\u0001\u0001\u0001+xN\"Nde\fne\nx\"=x1\"+x2\"2+\u0001\u0001\u0001+xN\"N=NX\ni=1xi\"i:\nThis allows for a concise expression of a general function fofx:\n8\nCopyright c\rMatthew Pulver 2018{2019. Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )f(x) =f(x0+x\")\n=f(x0) +f0(x0)x\"+f00(x0)\n2!x2\n\"+f000(x0)\n3!x3\n\"+\u0001\u0001\u0001+f(N)(x0)\nN!xN\n\"+O\u0000\n\"N+1\u0001\n=NX\ni=0f(i)(x0)\ni!xi\n\"+O\u0000\n\"N+1\u0001\nwhere \"has been substituted with x\"in the \"-taylor series for f(x). This form gives a recipe for calculating f(x) in\ngeneral from regular numeric calculations f(x0),f0(x0),f00(x0), ... and successive powers of the epsilon terms x\".\nFor an application in which we are interested in up to Nderivatives in xthe data structure to hold this information\nis an ( N+ 1)-element array vwhose general element is\nv[i] =f(i)(x0)\ni!fori2f0;1;2; :::; Ng:\n4.4 Multiple Variables\nIn C++, the generalization to mixed partial derivatives with multiple independent variables is conveniently achieved\nwith recursion. To begin to see the recursive pattern, consider a two-variable function f(x; y). 
Since xandyare\nindependent, they require their own independent epsilons \"xand\"y, respectively.\nExpand f(x; y) for x=x0+\"x:\nf(x0+\"x; y) =f(x0; y) +@f\n@x(x0; y)\"x+1\n2!@2f\n@x2(x0; y)\"2\nx+1\n3!@3f\n@x3(x0; y)\"3\nx+\u0001\u0001\u0001+1\nM!@Mf\n@xM(x0; y)\"M\nx+O\u0000\n\"M+1\nx\u0001\n=MX\ni=01\ni!@if\n@xi(x0; y)\"i\nx+O\u0000\n\"M+1\nx\u0001\n:\nNext, expand f(x0+\"x; y) for y=y0+\"y:\nf(x0+\"x; y0+\"y) =NX\nj=01\nj!@j\n@yj MX\ni=0\"i\nx1\ni!@if\n@xi!\n(x0; y0)\"j\ny+O\u0000\n\"M+1\nx\u0001\n+O\u0000\n\"N+1\ny\u0001\n=MX\ni=0NX\nj=01\ni!j!@i+jf\n@xi@yj(x0; y0)\"i\nx\"j\ny+O\u0000\n\"M+1\nx\u0001\n+O\u0000\n\"N+1\ny\u0001\n:\nSimilar to the single-variable case, for an application in which we are interested in up to Mderivatives in xand\nNderivatives in y, the data structure to hold this information is an ( M+ 1)\u0002(N+ 1) array vwhose element at\n(i; j) is\nv[i][j] =1\ni!j!@i+jf\n@xi@yj(x0; y0) for ( i; j)2f0;1;2; :::; Mg\u0002f 0;1;2; :::; Ng:\nThe generalization to additional independent variables follows the same pattern.\n4.4.1 Declaring Multiple Variables\nInternally, independent variables are represented by vectors within orthogonal vector spaces. Because of this, one\nmust be careful when declaring more than one independent variable so that they do not end up in parallel vector\nspaces. This can easily be achieved by following one rule:\n\u000fWhen declaring more than one independent variable, call make ftuple<>() once and only once.\nThe tuple of values returned are independent. Though it is possible to achieve the same result with multiple calls to\nmake fvar , this is an easier and less error-prone method. See Section 2.2 for example usage.\n9\nCopyright c\rMatthew Pulver 2018{2019. Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )5 Writing Functions for Autodi\u000b Compatibility\nIn this section, a general procedure is given for writing new, and transforming existing, C++ mathematical functions\nfor compatibility with autodi\u000b.\nThere are 3 categories of functions that require di\u000berent strategies:\n1. Piecewise-rational functions. These are simply piecewise quotients of polynomials. All that is needed is to\nturn the function parameters and return value into generic (template) types. This will then allow the function\nto accept and return autodi\u000b's fvar types, thereby using autodi\u000b's overloaded arithmetic operators which\ncalculate the derivatives automatically.\n2. Functions that call existing autodi\u000b functions. This is the same as the previous, but may also include calls to\nfunctions that are in the autodi\u000b library. Examples: exp() ,log() ,tgamma() , etc.\n3. New functions for which the derivatives can be calculated. 
This is the most general technique, as it allows for\nthe development of a function which do not fall into the previous two categories.\nFunctions written in any of these ways may then be added to the autodi\u000b library.\n5.1 Piecewise-Rational Functions\nf(x) =1\n1 +x2\nBy simply writing this as a template function, autodi\u000b can calculate derivatives for it:\n#include <boost/math/differentiation/autodiff.hpp>\n#include <iostream>\ntemplate <typename T>\nT rational(T const& x) {\nreturn 1 / (1 + x * x);\n}\nint main() {\nusing namespace boost::math::differentiation;\nauto const x = make_fvar<double, 10>(0);\nauto const y = rational(x);\nstd::cout << std::setprecision(std::numeric_limits<double>::digits10)\n<< \"y.derivative(10) = \" << y.derivative(10) << std::endl;\nreturn 0;\n}\n/*\nOutput:\ny.derivative(10) = -3628800\n*/\nAs simple as f(x) may seem, the derivatives can get increasingly complex as derivatives are taken. For example, the\n10thderivative has the form\nf(10)(x) =\u000036288001\u000055x2+ 330 x4\u0000462x6+ 165 x8\u000011x10\n(1 +x2)11:\nDerivatives of f(x) are useful, and in fact used, in calculating higher order derivatives for arctan( x) for instance,\nsince\narctan(n)(x) =\u0012d\ndx\u0013n\u000011\n1 +x2for 1\u0014n:\n10\nCopyright c\rMatthew Pulver 2018{2019. Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )5.2 Functions That Call Existing Autodi\u000b Functions\nMany of the standard library math function are overloaded in autodi\u000b. It is recommended to use argument-dependent\nlookup (ADL) in order for functions to be written in a way that is general enough to accommodate standard types\n(double ) as well as autodi\u000b types ( fvar ).\nExample:\n#include <boost/math/constants/constants.hpp>\n#include <cmath>\nusing namespace boost::math::constants;\n// Standard normal cumulative distribution function\ntemplate <typename T>\nT Phi(T const& x)\n{\nreturn 0.5 * std::erfc(-one_div_root_two<T>() * x);\n}\nThough Phi(x) is general enough to handle the various fundamental \roating point types, this will not work if xis\nan autodi\u000b fvar variable, since std::erfc does not include a specialization for fvar . The recommended solution is\nto remove the namespace pre\fx std:: from erfc :\n#include <boost/math/constants/constants.hpp>\n#include <boost/math/differentiation/autodiff.hpp>\n#include <cmath>\nusing namespace boost::math::constants;\n// Standard normal cumulative distribution function\ntemplate <typename T>\nT Phi(T const& x)\n{\nusing std::erfc;\nreturn 0.5 * erfc(-one_div_root_two<T>() * x);\n}\nIn this form, when xis of type fvar , the C++ compiler will search for and \fnd a function erfc within the same\nnamespace as fvar , which is in the autodi\u000b library, via ADL. Because of the using-declaration, it will also call\nstd::erfc when xis a fundamental type such as double .\n5.3 New Functions For Which The Derivatives Can Be Calculated\nMathematical functions which do not fall into the previous two categories can be constructed using autodi\u000b helper\nfunctions. This requires a separate function for calculating the derivatives. 
In case you are asking yourself what\ngood is an autodi\u000b library if one needs to supply the derivatives, the answer is that the new function will \ft in with\nthe rest of the autodi\u000b library, thereby allowing for the creation of additional functions via all of the arithmetic\noperators, plus function composition, which was not readily available without the library.\nThe example given here is for cos:\ntemplate <typename RealType, size_t Order>\nfvar<RealType, Order> cos(fvar<RealType, Order> const& cr) {\nusing std::cos;\nusing std::sin;\nusing root_type = typename fvar<RealType, Order>::root_type;\nconstexpr size_t order = fvar<RealType, Order>::order_sum;\nroot_type const d0 = cos(static_cast<root_type>(cr));\nif constexpr (order == 0)\nreturn fvar<RealType, Order>(d0);\nelse {\n11\nCopyright c\rMatthew Pulver 2018{2019. Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )root_type const d1 = -sin(static_cast<root_type>(cr));\nroot_type const derivatives[4]{d0, d1, -d0, -d1};\nreturn cr.apply_derivatives(order,\n[&derivatives](size_t i) { return derivatives[i & 3]; });\n}\n}\nThis uses the helper function fvar::apply derivatives which takes two parameters:\n1. The highest order derivative to be calculated.\n2. A function that maps derivative order to derivative value.\nThe highest order derivative necessary to be calculated is generally equal to fvar::order sum. In the case of sin\nand cos, the derivatives are cyclical with period 4. Thus it is su\u000ecient to store only these 4 values into an array,\nand take the derivative order modulo 4 as the index into this array.\nA second helper function, not shown here, is apply coefficients . This is used the same as apply derivatives\nexcept that the supplied function calculates coe\u000ecients instead of derivatives. The relationship between a coe\u000ecient\nCnand derivative Dnfor derivative order nis\nCn=Dn\nn!:\nInternally, fvar holds coe\u000ecients rather than derivatives, so in case the coe\u000ecient values are more readily available\nthan the derivatives, it can save some unnecessary computation to use apply coefficients . See the de\fnition of\natan for an example.\nBoth of these helper functions use Horner's method when calculating the resulting polynomial fvar . This works\nwell when the derivatives are \fnite, but in cases where derivatives are in\fnite, this can quickly result in NaN values as\nthe computation progresses. In these cases, one can call non-Horner versions of both function which better \\isolate\"\nin\fnite values so that they are less likely to evolve into NaN values.\nThe four helper functions available for constructing new autodi\u000b functions from known coe\u000ecients/derivatives\nare:\n1.fvar::apply coefficients\n2.fvar::apply coefficients nonhorner\n3.fvar::apply derivatives\n4.fvar::apply derivatives nonhorner\n6 Function Writing Guidelines\nAt a high level there is one fairly simple principle, loosely and intuitively speaking, to writing functions for which\nautodi\u000b can e\u000bectively calculate derivatives:\nAutodi\u000b Function Principle (AFP)\nA function whose branches in logic correspond to piecewise analytic calculations over non-singleton inter-\nvals, with smooth transitions between the intervals, and is free of indeterminate forms in the calculated\nvalue and higher order derivatives, will work \fne with autodi\u000b.\nStating this with greater mathematical rigor can be done. 
However what seems to be more practical, in this case, is\nto give examples and categories of examples of what works, what doesn't, and how to remedy some of the common\nproblems that may be encountered. That is the approach taken here.\n6.1 Example 1: f(x) = max(0 ; x)\nOne potential implementation of f(x) = max(0 ; x) is:\n12\nCopyright c\rMatthew Pulver 2018{2019. Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )template<typename T>\nT f(const T& x)\n{\nreturn 0 < x ? x : 0;\n}\nThough this is consistent with Section 5, there are two problems with it:\n1.f(nan) = 0 . This problem is independent of autodi\u000b, but is worth addressing anyway. If there is an indetermi-\nnate form that arises within a calculation and is input into f, then it gets \\covered up\" by this implementation\nleading to an unknowingly incorrect result. Better for functions in general to propagate NaN values, so that\nthe user knows something went wrong and doesn't rely on an incorrect result, and likewise the developer can\ntrack down where the NaN originated from and remedy it.\n2.f0(0) = 0 when autodi\u000b is applied. This is because freturns 0 as a constant when x==0 , wiping out any of\nthe derivatives (or sensitivities) that xwas holding as an autodi\u000b variable. Instead, let us apply the AFP and\nidentify the two intervals over which fis de\fned: (\u00001;0][(0;1). Though the function itself is not analytic\natx= 0, we can attempt somewhat to smooth out this transition point by averaging the calculation of f(x)\natx= 0 from each interval. If x <0 then the result is simply 0, and if 0 < xthen the result is x. The average\nis1\n2(0 +x) which will allow autodi\u000b to calculate f0(0) =1\n2. This is a more reasonable answer.\nA better implementation that resolves both issues is:\ntemplate<typename T>\nT f(const T& x)\n{\nif (x < 0)\nreturn 0;\nelse if (x == 0)\nreturn 0.5*x;\nelse\nreturn x;\n}\n6.2 Example 2: f(x) = sinc( x)\nThe de\fnition of sinc : R!Ris\nsinc(x) =(\n1 if x= 0\nsin(x)\nxotherwise.\nA potential implementation is:\ntemplate<typename T>\nT sinc(const T& x)\n{\nusing std::sin;\nreturn x == 0 ? 1 : sin(x) / x;\n}\nThough this is again consistent with Section 5, and returns correct non-derivative values, it returns a constant when\nx==0 thereby losing all derivative information contained in xand contributions from sinc. For example, sinc00(0) =\u00001\n3,\nhowever y.derivative(2) == 0 when y = sinc(make fvar<double,2>(0)) using the above incorrect implementa-\ntion. Applying the AFP, the intervals upon which separate branches of logic are applied are ( \u00001;0)[[0;0][(0;1).\nThe violation occurs due to the singleton interval [0 ;0], even though a constant function of 1 is technically analytic.\nThe remedy is to de\fne a custom sinc overload and add it to the autodi\u000b library. This has been done. Mathematically,\nit is well-de\fned and free of indeterminate forms, as is the 3rdexpression in the equalities\n1\nxsin(x) =1\nx1X\nn=0(\u00001)n\n(2n+ 1)!x2n+1=1X\nn=0(\u00001)n\n(2n+ 1)!x2n:\n13\nCopyright c\rMatthew Pulver 2018{2019. Distributed under the Boost Software License, Version 1.0.\n(See accompanying \fle LICENSE 10.txt or copy at https://www.boost.org/LICENSE_1_0.txt )The autodi\u000b library contains helper functions to help write function overloads when the derivatives of a function are\nknown. 
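For instance, when every derivative of a function is already known at the expansion point, such an overload can be only a few lines long. Below is a hedged sketch for exp, written in the same apply_derivatives style as the cos overload shown in Section 5.3. The name exp_sketch is made up for this example, and the library's real exp overload (which already exists) may be implemented differently.

#include <boost/math/differentiation/autodiff.hpp>
#include <cmath>
#include <cstddef>

using boost::math::differentiation::fvar;

// Sketch only: every derivative of exp(x) is exp(x) itself, so the same value
// can be supplied for every derivative order via apply_derivatives.
template <typename RealType, size_t Order>
fvar<RealType, Order> exp_sketch(fvar<RealType, Order> const& cr) {
    using std::exp;
    using root_type = typename fvar<RealType, Order>::root_type;
    constexpr size_t order = fvar<RealType, Order>::order_sum;
    root_type const d0 = exp(static_cast<root_type>(cr));
    if constexpr (order == 0)
        return fvar<RealType, Order>(d0);
    else
        return cr.apply_derivatives(order, [&d0](size_t) { return d0; });
}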
This is an advanced feature and documentation for this may be added at a later time.\nFor now, it is worth understanding the ways in which indeterminate forms can occur within a mathematical\ncalculation, and avoid them when possible by rewriting the function. Table 1 compares 3 types of indeterminate\nforms. Assume the product a*bis a positive \fnite value.\nf(x) =\u0010a\nx\u0011\n\u0002(bx2) g(x) =\u0010a\nx\u0011\n\u0002(bx) h(x) =\u0010a\nx2\u0011\n\u0002(bx)\nMathematical\nLimitlim\nx!0f(x) = 0 lim\nx!0g(x) =ab lim\nx!0h(x) =1\nFloating Point\nArithmeticf(0) = inf*0 = nan g(0) = inf*0 = nan h(0) = inf*0 = nan\nTable 1: Automatic di\u000berentiation does not compute limits. Indeterminate forms must be simpli\fed manually.\n(These cases are not meant to be exhaustive.)\nIndeterminate forms result in NaN values within a calculation. Mathematically, if they occur at locally isolated\npoints, then we generally prefer the mathematical limit as the result, even if it is in\fnite. As demonstrated in Table 1,\ndepending upon the nature of the indeterminate form, the mathematical limit can be 0 (no matter the values of\naorb), or ab, or1, but these 3 cases cannot be distinguished by the \roating point result of nan. Floating point\narithmetic does not perform limits (directly), and neither does the autodi\u000b library. Thus it is up to the diligence of\nthe developer to keep a watchful eye over where indeterminate forms can arise.\n6.3 Example 3: f(x) =pxandf0(0) =1\nWhen working with functions that have in\fnite higher order derivatives, this can very quickly result in nans in higher\norder derivatives as the computation progresses, as inf-inf ,inf/inf , and 0*inf result in nan. See Table 2 for an\nexample.\nf(x) f(0) f0(0) f00(0) f000(0)\nsqrt(x) 0 inf -inf inf\nsqr(sqrt(x)+1) 1 inf nan nan\nx+2*sqrt(x)+1 1 inf -inf inf\nTable 2: Indeterminate forms in higher order derivatives. sqr(x) == x*x .\nCalling the autodi\u000b-overloaded implementation of f(x) =pxat the value x==0 results in the 1strow (after the\nheader row) of Table 2, as is mathematically correct. The 2ndrow shows f(x) = (px+ 1)2resulting in nanvalues\nforf00(0) and all higher order derivatives. This is due to the internal arithmetic in which infis added to -inf\nduring the squaring, resulting in a nanvalue for f00(0) and all higher orders. This is typical of infvalues in autodi\u000b.\nWhere they show up, they are correct, however they can quickly introduce nanvalues into the computation upon the\naddition of oppositely signed infvalues, division by inf, or multiplication by 0. It is worth noting that the infection\nofnanonly spreads upward in the order of derivatives, since lower orders do not depend upon higher orders (which\nis also why dropping higher order terms in an autodi\u000b computation does not result in any loss of precision for lower\norder terms.)\nThe resolution in this case is to manually perform the squaring in the computation, replacing the 2ndrow with\nthe 3rd:f(x) =x+ 2px+ 1. Though mathematically equivalent, it allows autodi\u000b to avoid nanvalues sincepxis\nmore \\isolated\" in the computation. That is, the infvalues that unavoidably show up in the derivatives of sqrt(x)\nforx==0 do not have the chance to interact with other infvalues as with the squaring.\n6.4 Summary\nThe AFP gives a high-level uni\fed guiding principle for writing C++ template functions that autodi\u000b can e\u000bectively\nevaluate derivatives for.\nExamples have been given to illustrate some common items to avoid doing:\n1. 
It is not enough for functions to be piecewise continuous. On boundary points between intervals, consider returning the average expression of both intervals, rather than just one of them. Example: max(0, x) at x = 0. In cases where the limits from both sides must match, and they do not, then nan may be a more appropriate value depending on the application.
2. Avoid returning individual constant values (e.g. sinc(0) = 1). Values must be computed uniformly along with other values in its local interval. If that is not possible, then the function must be overloaded to compute the derivatives precisely using the helper functions from Section 5.3.
3. Avoid intermediate indeterminate values in both the value (sinc(x) at x = 0) and derivatives ((sqrt(x) + 1)^2 at x = 0). Try to isolate expressions that may contain infinite values/derivatives so that they do not introduce NaN values into the computation.
7 Acknowledgments
- Kedar Bhat: C++11 compatibility, Boost Special Functions compatibility testing, codecov integration, and feedback.
- Nick Thompson: Initial feedback and help with Boost integration.
- John Maddock: Initial feedback and help with Boost integration.
References
[1] https://en.wikipedia.org/wiki/Automatic_differentiation
[2] Andreas Griewank, Andrea Walther. Evaluating Derivatives. SIAM, 2nd ed. 2008.
Copyright © Matthew Pulver 2018-2019. Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at https://www.boost.org/LICENSE_1_0.txt)
{ "category": "App Definition and Development", "file_name": "autodiff.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "Druid\nA Real-time Analytical Data Store\nFangjin Y ang\nMetamarkets Group, Inc.\nfangjin@metamarkets.comEric Tschetter\necheddar@gmail.comXavier Léauté\nMetamarkets Group, Inc.\nxavier@metamarkets.com\nNelson Ray\nncray86@gmail.comGian Merlino\nMetamarkets Group, Inc.\ngian@metamarkets.comDeep Ganguli\nMetamarkets Group, Inc.\ndeep@metamarkets.com\nABSTRACT\nDruidisanopensource1datastoredesignedforreal-timeexploratory\nanalyticsonlargedatasets. Thesystemcombinesacolumn-oriented\nstorage layout, a distributed, shared-nothing architecture, and an\nadvanced indexing structure to allow for the arbitrary exploration\nof billion-row tables with sub-second latencies. In this paper, we\ndescribeDruid’sarchitecture,anddetailhowitsupportsfastaggre-\ngations, flexible filters, and low latency data ingestion.\nCategories and Subject Descriptors\nH.2.4[DatabaseManagement ]: Systems— Distributeddatabases\nKeywords\ndistributed;real-time;fault-tolerant;highlyavailable;opensource;\nanalytics; column-oriented; OLAP\n1. INTRODUCTION\nIn recent years, the proliferation of internet technology has cre-\natedasurgeinmachine-generatedevents. Individually,theseevents\ncontainminimalusefulinformationandareoflowvalue. Giventhe\ntime and resources required to extract meaning from large collec-\ntionsofevents,manycompanieswerewillingtodiscardthisdatain-\nstead. Althoughinfrastructurehasbeenbuilttohandleevent-based\ndata (e.g. IBM’s Netezza[37], HP’s Vertica[5], and EMC’s Green-\nplum[29]), they are largely sold at high price points and are only\ntargetedtowards those companies who can affordthe offering.\nA few years ago, Google introduced MapReduce [11] as their\nmechanism of leveraging commodity hardware to index the inter-\nnet and analyze logs. The Hadoop [36] project soon followed and\nwaslargelypatternedaftertheinsightsthatcameoutoftheoriginal\nMapReduce paper. Hadoop is currently deployed in many orga-\nnizations to store and analyze large amounts of log data. Hadoop\nhascontributedmuchtohelpingcompaniesconverttheirlow-value\n1http://druid.io/ https://github.com/metamx/druid\nPermission to make digital or hard copies of all or part of this work for personal or\nclassroom use is granted without fee provided that copies are not made or distributed\nfor profit or commercial advantage and that copies bear this notice and the full citation\non the first page. Copyrights for components of this work owned by others than the\nauthor(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or\nrepublish, to post on servers or to redistribute to lists, requires prior specific permission\nand/or a fee. Request permissions from permissions@acm.org.\nSIGMOD’14, June 22–27, 2014, Snowbird, UT, USA.\nCopyright is held by the owner/author(s). Publication rights licensed to ACM.\nACM 978-1-4503-2376-5/14/06 ...$15.00.\nhttp://dx.doi.org/10.1145/2588555.2595631.event streams into high-value aggregates for a variety of applica-\ntions such as business intelligence and A-B testing.\nAs with many great systems, Hadoop has opened our eyes to\na new space of problems. Specifically, Hadoop excels at storing\nandprovidingaccesstolargeamountsofdata,however,itdoesnot\nmakeanyperformanceguaranteesaroundhowquicklythatdatacan\nbe accessed. Furthermore, although Hadoop is a highly available\nsystem,performancedegradesunderheavyconcurrentload. 
Lastly,\nwhileHadoopworkswellforstoringdata,itisnotoptimizedforin-\ngesting data and making that data immediately readable.\nEarlyoninthedevelopmentoftheMetamarketsproduct,weran\nintoeachoftheseissuesandcametotherealizationthatHadoopis\na great back-office, batch processing, and data warehousing sys-\ntem. However, as a company that has product-level guarantees\naroundqueryperformanceanddataavailabilityinahighlyconcur-\nrent environment (1000+ users), Hadoop wasn’t going to meet our\nneeds. Weexploreddifferentsolutionsinthespace,andaftertrying\nbothRelationalDatabaseManagementSystemsandNoSQLarchi-\ntectures, we came to the conclusion that there was nothing in the\nopen source world that could be fully leveraged for our require-\nments. We ended up creating Druid, an open source, distributed,\ncolumn-oriented, real-time analytical data store. In many ways,\nDruidsharessimilaritieswithotherOLAPsystems[30,35,22],in-\nteractivequerysystems[28],main-memorydatabases[14],aswell\naswidelyknowndistributeddatastores[7,12,23]. Thedistribution\nand query model also borrow ideas from current generation search\ninfrastructure [25, 3, 4].\nThispaperdescribesthearchitectureofDruid,exploresthevari-\nousdesigndecisionsmadeincreatinganalways-onproductionsys-\ntem that powers a hosted service, and attempts to help inform any-\nonewhofacesasimilarproblemaboutapotentialmethodofsolving\nit. Druid is deployed in production at several technology compa-\nnies2. The structure of the paper is as follows: we first describe\ntheprobleminSection2. Next,wedetailsystemarchitecturefrom\nthe point of view of how data flows through the system in Section\n3. We then discuss how and why data gets converted into a binary\nformatinSection4. WebrieflydescribethequeryAPIinSection5\nand present performance results in Section 6. Lastly, we leave off\nwithourlessonsfromrunningDruidinproductioninSection7,and\nrelated work in Section 8.\n2. PROBLEM DEFINITION\nDruid was originally designed to solve problems around ingest-\ningandexploringlargequantitiesoftransactionalevents(logdata).\nThis form of timeseries data is commonly found in OLAP work-\n2http://druid.io/druid.htmlTimestamp Page Username Gender City Characters Added Characters Removed\n2011-01-01T01:00:00Z Justin Bieber Boxer Male San Francisco 1800 25\n2011-01-01T01:00:00Z Justin Bieber Reach Male Waterloo 2912 42\n2011-01-01T02:00:00Z Ke$ha Helz Male Calgary 1953 17\n2011-01-01T02:00:00Z Ke$ha Xeno Male Taiyuan 3194 170\nTable1: Sample Druid data for edits that have occurredon Wikipedia.\nflowsandthenatureofthedatatendstobeveryappendheavy. For\nexample,considerthedatashowninTable1. Table1containsdata\nfor edits that have occurred on Wikipedia. Each time a user edits\na page in Wikipedia, an event is generated that contains metadata\nabout the edit. This metadata is comprised of 3 distinct compo-\nnents. First, there is a timestamp column indicating when the edit\nwasmade. Next,thereareasetdimensioncolumnsindicatingvar-\nious attributes about the edit such as the page that was edited, the\nuser who made the edit, and the location of the user. Finally, there\nare a set of metric columns that contain values (usually numeric)\nthat can be aggregated, such as the number of characters added or\nremoved in an edit.\nOur goal is to rapidly compute drill-downs and aggregates over\nthisdata. Wewanttoanswerquestionslike“Howmanyeditswere\nmadeonthepageJustinBieberfrommalesinSanFrancisco?”and\n“Whatistheaveragenumberofcharactersthatwereaddedbypeo-\nplefromCalgaryoverthespanofamonth?”. 
Wealsowantqueries\nover any arbitrary combination of dimensions to return with sub-\nsecond latencies.\nThe need for Druid was facilitated by the fact that existing open\nsource Relational Database Management Systems (RDBMS) and\nNoSQLkey/valuestoreswereunabletoprovidealowlatencydata\ningestionandqueryplatformforinteractiveapplications[40]. Inthe\nearly days of Metamarkets, we were focused on building a hosted\ndashboardthatwouldallowuserstoarbitrarilyexploreandvisualize\nevent streams. The data store powering the dashboard needed to\nreturn queries fast enough that the data visualizations built on top\nof it could provide users with an interactive experience.\nInadditiontothequerylatencyneeds,thesystemhadtobemulti-\ntenant and highly available. The Metamarkets product is used in a\nhighlyconcurrentenvironment. Downtimeiscostlyandmanybusi-\nnessescannotaffordtowaitifasystemisunavailableinthefaceof\nsoftware upgrades or network failure. Downtime for startups, who\noften lack proper internal operations management, can determine\nbusiness success or failure.\nFinally, another challenge that Metamarkets faced in its early\ndays was to allow users and alerting systems to be able to make\nbusiness decisions in “real-time”. The time from when an event is\ncreated to when that event is queryable determines how fast inter-\nestedpartiesareabletoreacttopotentiallycatastrophicsituationsin\ntheirsystems. Popularopensourcedatawarehousingsystemssuch\nas Hadoop were unable to provide the sub-second data ingestion\nlatencies we required.\nTheproblemsofdataexploration,ingestion,andavailabilityspan\nmultipleindustries. SinceDruidwasopensourcedinOctober2012,\nit been deployed as a video, network monitoring, operations mon-\nitoring, and online advertising analytics platform at multiple com-\npanies.\n3. ARCHITECTURE\nADruidclusterconsistsofdifferenttypesofnodesandeachnode\ntype is designed to perform a specific set of things. We believe\nthis design separates concerns and simplifies the complexity of the\noverallsystem. Thedifferentnodetypesoperatefairlyindependentofeachotherandthereisminimalinteractionamongthem. Hence,\nintra-cluster communication failures have minimal impact on data\navailability.\nTosolvecomplexdataanalysisproblems,thedifferentnodetypes\ncome together to form a fully working system. The name Druid\ncomes from the Druid class in many role-playing games: it is a\nshape-shifter, capable of taking on many different forms to fulfill\nvarious different roles in a group. The composition of and flow of\ndata in a Druid cluster are shown in Figure 1.\n3.1 Real-time Nodes\nReal-timenodesencapsulatethefunctionalitytoingestandquery\nevent streams. Events indexed via these nodes are immediately\navailable for querying. The nodes are only concerned with events\nfor some small time range and periodically hand off immutable\nbatches of events they have collected over this small time range to\nothernodesintheDruidclusterthatarespecializedindealingwith\nbatchesofimmutableevents. Real-timenodesleverageZooKeeper\n[19] for coordination with the rest of the Druid cluster. The nodes\nannounce their online state and the data they serve in ZooKeeper.\nReal-time nodes maintain an in-memory index buffer for all in-\ncomingevents. Theseindexesareincrementallypopulatedasevents\nare ingested and the indexes are also directly queryable. Druid be-\nhaves as a row store for queries on events that exist in this JVM\nheap-based buffer. 
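As a rough, self-contained illustration of this ingest-then-query pattern (not Druid's actual implementation; the event fields and the single hard-coded aggregation are simplified stand-ins based on Table 1):

#include <cstdint>
#include <string>
#include <vector>

// Simplified stand-in for a real-time node's in-memory row buffer: events are
// appended as they arrive and are immediately queryable.
struct Event {
    std::int64_t timestamp;    // event time (e.g. milliseconds since epoch)
    std::string  page;         // one example dimension from Table 1
    std::int64_t chars_added;  // one example metric from Table 1
};

class InMemoryBuffer {
public:
    void ingest(Event e) { rows_.push_back(std::move(e)); }

    // Aggregate one metric over a time range, filtered on a dimension value.
    std::int64_t sum_chars_added(std::int64_t start, std::int64_t end,
                                 const std::string& page) const {
        std::int64_t total = 0;
        for (const Event& e : rows_)
            if (e.timestamp >= start && e.timestamp < end && e.page == page)
                total += e.chars_added;
        return total;
    }

private:
    std::vector<Event> rows_;  // grows without bound until persisted
};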
To avoid heap overflow problems, real-time\nnodes persist their in-memory indexes to disk either periodically\nor after some maximum row limit is reached. This persist process\nconverts data stored in the in-memory buffer to a column oriented\nstorage format described in Section 4. Each persisted index is im-\nmutable and real-time nodes load persisted indexes into off-heap\nmemory such that they can still be queried. This process is de-\nscribed in detail in [33] and is illustrated in Figure 2.\nOn a periodic basis, each real-time node will schedule a back-\ngroundtaskthatsearchesforalllocallypersistedindexes. Thetask\nmerges these indexes together and builds an immutable block of\ndata that contains all the events that have been ingested by a real-\ntime node for some span of time. We refer to this block of data as\na “segment”. During the handoff stage, a real-time node uploads\nthissegmenttoapermanentbackupstorage,typicallyadistributed\nfile system such as S3 [12] or HDFS [36], which Druid refers to as\n“deep storage”. The ingest, persist, merge, and handoff steps are\nfluid; there is no data loss during any of the processes.\nFigure 3 illustrates the operations of a real-time node. The node\nstartsat13:37andwillonlyaccepteventsforthecurrenthourorthe\nnext hour. When events are ingested, the node announces that it is\nservingasegmentofdataforanintervalfrom13:00to14:00. Every\n10 minutes (the persist period is configurable), the node will flush\nand persist its in-memory buffer to disk. Near the end of the hour,\nthenodewilllikelyseeeventsfor14:00to15:00. Whenthisoccurs,\nthe node prepares to serve data for the next hour and creates a new\nin-memory index. The node then announces that it is also serving\na segment from 14:00 to 15:00. The node does not immediately\nmerge persisted indexes from 13:00 to 14:00, instead it waits for\na configurable window period for straggling events from 13:00 toReal-time \nNodes\nCoordinator \nNodesBroker Nodes\nHistorical \nNodesMySQL\nZookeeper\nDeep \nStorage\nStreaming \nData\nBatch\nDataClient \nQueries\nQueries\nMetadata\nData/SegmentsDruid Nodes\nExternal DependenciesFigure1: An overviewof a Druid cluster andthe flow of data throughthe cluster.\nevent_23312\nevent_23481\nevent_23593\n...\nevent_1234\nevent_2345\n...event_3456\nevent_4567\n...\nevent_5678\nevent_6789\n...event_7890\nevent_8901\n...Disk and persisted indexesHeap and in-memory index\nPersistevent_34982\nevent_35789\nevent_36791\n...\nevent_1234\nevent_2345\n...event_3456\nevent_4567\n...\nevent_5678\nevent_6789\n...event_7890\nevent_8901\n...Off-heap memory and \npersisted indexes\nLoadQueries\nFigure2: Real-timenodesbuffereventstoanin-memoryindex,\nwhich is regularly persisted to disk. On a periodic basis, per-\nsisted indexes are then merged together before getting handed\noff. Queries will hit both the in-memory and persisted indexes.\n14:00toarrive. Thiswindowperiodminimizestheriskofdataloss\nfromdelaysineventdelivery. Attheendofthewindowperiod,the\nnodemergesallpersistedindexesfrom13:00to14:00intoasingle\nimmutable segment and hands the segment off. Once this segment\nis loaded and queryable somewhere else in the Druid cluster, the\nreal-timenodeflushesallinformationaboutthedataitcollectedfor\n13:00 to 14:00 and unannounces it is serving this data.\n3.1.1 Availability and Scalability\nReal-timenodesareaconsumerofdataandrequireacorrespond-\ningproducertoprovidethedatastream. 
Commonly,fordatadura-\nbility purposes, a message bus such as Kafka [21] sits between the\nproducer and the real-time node as shown in Figure 4. Real-time\nnodes ingest data by reading events from the message bus. The\ntime from event creation to event consumption is ordinarily on the\norder of hundreds of milliseconds.\nThepurposeofthemessagebusinFigure4istwo-fold. First,the\nmessage bus acts as a buffer for incoming events. A message bus\nsuchasKafkamaintainspositionaloffsetsindicatinghowfaracon-\nsumer (a real-time node) has read in an event stream. Consumers\ncanprogrammaticallyupdatetheseoffsets. Real-timenodesupdatethisoffseteachtimetheypersisttheirin-memorybufferstodisk. In\nafailandrecoverscenario,ifanodehasnotlostdisk,itcanreload\nall persisted indexes from disk and continue reading events from\nthe last offset it committed. Ingesting events from a recently com-\nmitted offset greatly reduces a node’s recovery time. In practice,\nweseenodesrecoverfromsuchfailurescenariosinafewseconds.\nThe second purpose of the message bus is to act as a single end-\npoint from which multiple real-time nodes can read events. Multi-\nple real-time nodes can ingest the same set of events from the bus,\ncreating a replication of events. In a scenario where a node com-\npletely fails and loses disk, replicated streams ensure that no data\nis lost. A single ingestion endpoint also allows for data streams\nto be partitioned such that multiple real-time nodes each ingest a\nportion of a stream. This allows additional real-time nodes to be\nseamlessly added. In practice, this model has allowed one of the\nlargestproductionDruidclusterstobeabletoconsumerawdataat\napproximately 500 MB/s (150,000 events/s or 2 TB/hour).\n3.2 Historical Nodes\nHistorical nodes encapsulate the functionality to load and serve\ntheimmutableblocksofdata(segments)createdbyreal-timenodes.\nIn many real-world workflows, most of the data loaded in a Druid\ncluster is immutable and hence, historical nodes are typically the\nmain workers of a Druid cluster. Historical nodes follow a shared-\nnothingarchitectureandthereisnosinglepointofcontentionamong\nthe nodes. The nodes have no knowledge of one another and are\noperationally simple; they only know how to load, drop, and serve\nimmutable segments.\nSimilartoreal-timenodes,historicalnodesannouncetheironline\nstate and the data they are serving in ZooKeeper. Instructions to\nloadanddropsegmentsaresentoverZooKeeperandcontaininfor-\nmationaboutwherethesegmentislocatedindeepstorageandhow\nto decompress and process the segment. Before a historical node\ndownloadsaparticularsegmentfromdeepstorage,itfirstchecksa\nlocalcachethatmaintainsinformationaboutwhatsegmentsalready\nexist on the node. If information about a segment is not present in\nthecache,thehistoricalnodewillproceedtodownloadthesegment\nfrom deep storage. This process is shown in Figure 5. Once pro-\ncessing is complete, the segment is announced in ZooKeeper. At\nthis point, the segment is queryable. The local cache also allows\nforhistoricalnodestobequicklyupdatedandrestarted. 
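A schematic of the segment load path described above might look roughly like the following. This is purely illustrative; the LocalCache, DeepStorage, and Zk types are hypothetical stand-ins rather than Druid classes.

#include <optional>
#include <string>

struct Segment { std::string id; /* columns, indexes, ... */ };

// Hypothetical stand-ins for the local segment cache, deep storage, and the
// ZooKeeper announcement described in the text.
struct LocalCache {
    std::optional<Segment> lookup(const std::string&) const { return std::nullopt; }
    void store(const Segment&) {}
};
struct DeepStorage {
    Segment download(const std::string& id) const { return Segment{id}; }
};
struct Zk {
    void announce_served(const std::string&) {}
};

// A historical node handling a "load" instruction: serve from the local cache
// when possible, otherwise download from deep storage, then announce the
// segment as queryable.
void load_segment(const std::string& id, LocalCache& cache, DeepStorage& deep, Zk& zk) {
    std::optional<Segment> seg = cache.lookup(id);
    if (!seg) {
        seg = deep.download(id);
        cache.store(*seg);
    }
    zk.announce_served(seg->id);
}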
Onstartup,\nthenodeexaminesitscacheandimmediatelyserveswhateverdata\nit finds.13:00 14:00 15:00\n13:37\n- node starts\n- announce segment \nfor data 13:00-14:0013:47\npersist data for 13:00-14:00\n~14:00\n- announce segment \nfor data 14:00-15:0014:10\n- merge and handoff for data 13:00-14:00\n- persist data for 14:00-15:00~14:11\n- unannounce segment \nfor data 13:00-14:00\n13:57\npersist data for 13:00-14:0014:07\npersist data for 13:00-14:00Figure 3: The node starts, ingests data, persists, and periodically hands data off. This process repeats indefinitely. The time periods\nbetween differentreal-time node operations areconfigurable.\nevent_12345\nevent_23456\nevent_34567\nevent_35582\nevent_37193\nevent_78901\nevent_79902\nevent_79932\nevent_89012event_2849219\nevent_120202\n…\nevent_90192\nReal-time\nNode 1\nReal-time\nNode 2\noffset 1\noffset 2\neventseventseventsKafka\nStreaming events\nFigure4: Multiplereal-timenodescanreadfromthesamemes-\nsage bus. Each node maintains its own offset.\nDeep Storage\nSegmentMemory\nDisk\nCache \nEntriesSegmentSegment\ndownload\ncreate keyLoad\nFigure5: Historicalnodesdownloadimmutablesegmentsfrom\ndeep storage. Segments must be loaded in memory before they\ncan be queried.Historicalnodescansupportreadconsistencybecausetheyonly\ndealwithimmutabledata. Immutabledatablocksalsoenableasim-\nple parallelization model: historical nodes can concurrently scan\nand aggregate immutable blocks without blocking.\n3.2.1 Tiers\nHistoricalnodescanbegroupedindifferenttiers,whereallnodes\ninagiventierareidenticallyconfigured. Differentperformanceand\nfault-tolerance parameters can be set for each tier. The purpose of\ntierednodesistoenablehigherorlowerprioritysegmentstobedis-\ntributed according to their importance. For example, it is possible\nto spin up a “hot” tier of historical nodes that have a high num-\nber of cores and large memory capacity. The “hot” cluster can be\nconfigured to download more frequently accessed data. A parallel\n“cold”clustercanalsobecreatedwithmuchlesspowerfulbacking\nhardware. The “cold” cluster would only contain less frequently\naccessed segments.\n3.2.2 Availability\nHistoricalnodesdependonZooKeeperforsegmentloadandun-\nload instructions. Should ZooKeeper become unavailable, histor-\nical nodes are no longer able to serve new data or drop outdated\ndata, however, because the queries are served over HTTP, histori-\ncalnodesarestillabletorespondtoqueryrequestsforthedatathey\nare currently serving. This means that ZooKeeper outages do not\nimpact current data availability on historical nodes.\n3.3 Broker Nodes\nBrokernodesactasqueryrouterstohistoricalandreal-timenodes.\nThey understand the metadata published in ZooKeeper about what\nsegmentsarequeryableandwherethosesegmentsarelocated. Bro-\nkernodesrouteincomingqueriessuchthatthequerieshittheright\nhistorical or real-time nodes. Broker nodes also merge partial re-\nsults from historical and real-time nodes before returning a final\nconsolidated result to the caller.\n3.3.1 Caching\nBroker nodes contain a cache with a LRU [31, 20] invalidation\nstrategy. The cache can use local heap memory or an external dis-tributedkey/valuestoresuchasMemcached[16]. Eachtimeabro-\nker node receives a query, it first maps the query to a set of seg-\nments. Results for certain segments may already exist in the cache\nand there is no need to recompute them. For any results that do\nnotexistinthecache,thebrokernodewillforwardthequerytothe\ncorrecthistoricalandreal-timenodes. 
Oncehistoricalnodesreturn\ntheirresults,thebrokerwillcachetheseresultsonapersegmentba-\nsis for future use. This process is illustrated in Figure 6. Real-time\ndata is never cached and hence requests for real-time data will al-\nwaysbeforwardedtoreal-timenodes. Real-timedataisperpetually\nchanging and caching the results is unreliable.\nThe cache also acts as an additional level of data durability. In\nthe event that all historical nodes fail, it is still possible to query\nresults if those results already exist in the cache.\n3.3.2 Availability\nIn the event of a total ZooKeeper outage, data is still queryable.\nIfbrokernodesareunabletocommunicatetoZooKeeper,theyuse\ntheirlastknownviewoftheclusterandcontinuetoforwardqueries\nto real-time and historical nodes. Broker nodes make the assump-\ntion that the structure of the cluster is the same as it was before the\noutage. In practice, this availability model has allowed our Druid\ncluster to continue serving queries for a significant period of time\nwhile we diagnosed ZooKeeper outages.\n3.4 Coordinator Nodes\nDruidcoordinatornodesareprimarilyinchargeofdatamanage-\nment and distribution on historical nodes. The coordinator nodes\ntell historical nodes to load new data, drop outdated data, replicate\ndata, and move data to load balance. Druid uses a multi-version\nconcurrency control swapping protocol for managing immutable\nsegments in order to maintain stable views. If any immutable seg-\nmentcontainsdatathatiswhollyobsoletedbynewersegments,the\noutdated segment is dropped from the cluster. Coordinator nodes\nundergoaleader-electionprocessthatdeterminesasinglenodethat\nrunsthecoordinatorfunctionality. Theremainingcoordinatornodes\nact as redundant backups.\nA coordinator node runs periodically to determine the current\nstate of the cluster. It makes decisions by comparing the expected\nstate of the cluster with the actual state of the cluster at the time\nof the run. As with all Druid nodes, coordinator nodes maintain\na ZooKeeper connection for current cluster information. Coordi-\nnator nodes also maintain a connection to a MySQL database that\ncontainsadditionaloperationalparametersandconfigurations. One\nof the key pieces of information located in the MySQL database is\na table that contains a list of all segments that should be served by\nhistoricalnodes. Thistablecanbeupdatedbyanyservicethatcre-\natessegments,forexample,real-timenodes. TheMySQLdatabase\nalso contains a rule table that governs how segments are created,\ndestroyed, and replicated in the cluster.\n3.4.1 Rules\nRules govern how historical segments are loaded and dropped\nfromthecluster. Rulesindicatehowsegmentsshouldbeassignedto\ndifferenthistoricalnodetiersandhowmanyreplicatesofasegment\nshould exist in each tier. Rules may also indicate when segments\nshould be dropped entirely from the cluster. Rules are usually set\nfor a period of time. For example, a user may use rules to load the\nmostrecentonemonth’sworthofsegmentsintoa“hot”cluster,the\nmostrecentoneyear’sworthofsegmentsintoa“cold”cluster,and\ndrop any segments that are older.\nThecoordinatornodesloadasetofrulesfromaruletableinthe\nMySQL database. Rules may be specific to a certain data sourceand/or a default set of rules may be configured. The coordinator\nnodewillcyclethroughallavailablesegmentsandmatcheachseg-\nment with the first rule that applies to it.\n3.4.2 Load Balancing\nIn a typical production environment, queries often hit dozens or\neven hundreds of segments. 
Since each historical node has limited\nresources, segments must be distributed among the cluster to en-\nsure that the cluster load is not too imbalanced. Determining opti-\nmalloaddistributionrequiressomeknowledgeaboutquerypatterns\nandspeeds. Typically,queriescoverrecentsegmentsspanningcon-\ntiguoustimeintervalsforasingledatasource. Onaverage,queries\nthat access smaller segments are faster.\nThese query patterns suggest replicating recent historical seg-\nments at a higher rate, spreading out large segments that are close\nintimetodifferenthistoricalnodes,andco-locatingsegmentsfrom\ndifferent data sources. To optimally distribute and balance seg-\nments among the cluster, we developed a cost-based optimization\nprocedurethattakesintoaccountthesegmentdatasource,recency,\nandsize. Theexactdetailsofthealgorithmarebeyondthescopeof\nthis paper and may be discussed in future literature.\n3.4.3 Replication\nCoordinator nodes may tell different historical nodes to load a\ncopy of the same segment. The number of replicates in each tier\nof the historical compute cluster is fully configurable. Setups that\nrequire high levels of fault tolerance can be configured to have a\nhigh number of replicas. Replicated segments are treated the same\nastheoriginalsandfollowthesameloaddistributionalgorithm. By\nreplicatingsegments,singlehistoricalnodefailuresaretransparent\nin the Druid cluster. We use this property for software upgrades.\nWe can seamlessly take a historical node offline, update it, bring it\nbackup,andrepeattheprocessforeveryhistoricalnodeinacluster.\nOverthelasttwoyears,wehavenevertakendowntimeinourDruid\ncluster for software upgrades.\n3.4.4 Availability\nDruid coordinator nodes have ZooKeeper and MySQL as exter-\nnal dependencies. Coordinator nodes rely on ZooKeeper to deter-\nminewhathistoricalnodesalreadyexistinthecluster. IfZooKeeper\nbecomesunavailable,thecoordinatorwillnolongerbeabletosend\ninstructionstoassign,balance,anddropsegments. However,these\noperations do not affectdata availability at all.\nThe design principle for responding to MySQL and ZooKeeper\nfailures is the same: if an external dependency responsible for co-\nordination fails, the cluster maintains the status quo. Druid uses\nMySQLtostoreoperationalmanagementinformationandsegment\nmetadatainformationaboutwhatsegmentsshouldexistintheclus-\nter. IfMySQLgoesdown,thisinformationbecomesunavailableto\ncoordinator nodes. However, this does not mean data itself is un-\navailable. If coordinator nodes cannot communicate to MySQL,\nthey will cease to assign new segments and drop outdated ones.\nBroker, historical, and real-time nodes are still queryable during\nMySQLoutages.\n4. STORAGE FORMAT\nDatatablesinDruid(called datasources )arecollectionsoftimes-\ntamped events and partitioned into a set of segments, where each\nsegmentistypically5–10millionrows. Formally,wedefineaseg-\nment as a collection of rows of data that span some period of time.\nSegmentsrepresentthefundamentalstorageunitinDruidandrepli-\ncation and distribution are done at a segment level.Query for data \nfrom 2013-01-01 \nto 2013-01-08results for segment 2013-01-01/2013-01-02\nresults for segment 2013-01-02/2013-01-03\nresults for segment 2013-01-07/2013-01-08Cache (on broker nodes)\nsegment for data 2013-01-03/2013-01-04\nsegment for data 2013-01-04/2013-01-05\nsegment for data 2013-01-05/2013-01-06\nsegment for data 2013-01-06/2013-01-07Historical and real-time nodes\nQuery for data \nnot in cacheFigure6: Results arecached per segment. 
Queries combine cached results with results computed on historical and real-time nodes.
Druid always requires a timestamp column as a method of simplifying data distribution policies, data retention policies, and first-level query pruning. Druid partitions its data sources into well-defined time intervals, typically an hour or a day, and may further partition on values from other columns to achieve the desired segment size. The time granularity to partition segments is a function of data volume and time range. A data set with timestamps spread over a year is better partitioned by day, and a data set with timestamps spread over a day is better partitioned by hour.
Segments are uniquely identified by a data source identifier, the time interval of the data, and a version string that increases whenever a new segment is created. The version string indicates the freshness of segment data; segments with later versions have newer views of data (over some time range) than segments with older versions. This segment metadata is used by the system for concurrency control; read operations always access data in a particular time range from the segments with the latest version identifiers for that time range.
Druid segments are stored in a column orientation. Given that Druid is best used for aggregating event streams (all data going into Druid must have a timestamp), the advantages of storing aggregate information as columns rather than rows are well documented [1]. Column storage allows for more efficient CPU usage as only what is needed is actually loaded and scanned. In a row oriented data store, all columns associated with a row must be scanned as part of an aggregation. The additional scan time can introduce significant performance degradations [1].
Druid has multiple column types to represent various data formats. Depending on the column type, different compression methods are used to reduce the cost of storing a column in memory and on disk. In the example given in Table 1, the page, user, gender, and city columns only contain strings. Storing strings directly is unnecessarily costly and string columns can be dictionary encoded instead. Dictionary encoding is a common method to compress data and has been used in other data stores such as PowerDrill [17]. In the example in Table 1, we can map each page to a unique integer identifier.
Justin Bieber -> 0
Ke$ha -> 1
This mapping allows us to represent the page column as an integer array where the array indices correspond to the rows of the original data set. For the page column, we can represent the unique pages as follows:
[0, 0, 1, 1]
The resulting integer array lends itself very well to compression methods. Generic compression algorithms on top of encodings are extremely common in column-stores.
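As a small, self-contained illustration of the dictionary encoding just described (this is not Druid's actual column code; the struct and function names below are made up for the example):

#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Dictionary-encode one string column: each distinct value is assigned an
// integer id, and the column itself becomes an array of ids (one per row).
struct EncodedColumn {
    std::vector<std::string> dictionary;  // id -> original string value
    std::vector<std::int32_t> rows;       // row index -> id
};

EncodedColumn dictionary_encode(const std::vector<std::string>& column) {
    EncodedColumn out;
    std::unordered_map<std::string, std::int32_t> ids;
    for (const std::string& value : column) {
        auto [it, inserted] =
            ids.emplace(value, static_cast<std::int32_t>(out.dictionary.size()));
        if (inserted) out.dictionary.push_back(value);
        out.rows.push_back(it->second);
    }
    return out;
}

// The "page" column of Table 1 encodes to ids [0, 0, 1, 1]:
// auto page = dictionary_encode({"Justin Bieber", "Justin Bieber", "Ke$ha", "Ke$ha"});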
Druid uses the LZF [24] compression algorithm.
Similar compression methods can be applied to numeric columns. For example, the characters added and characters removed columns in Table 1 can also be expressed as individual arrays.
Characters Added -> [1800, 2912, 1953, 3194]
Characters Removed -> [25, 42, 17, 170]
In this case, we compress the raw values as opposed to their dictionary representations.
Figure 7: Integer array size versus Concise set size (compressed size in bytes against column cardinality, for sorted and unsorted data).
4.1 Indices for Filtering Data
In many real world OLAP workflows, queries are issued for the aggregated results of some set of metrics where some set of dimension specifications are met. An example query is: “How many Wikipedia edits were done by users in San Francisco who are also male?” This query is filtering the Wikipedia data set in Table 1 based on a Boolean expression of dimension values. In many real world data sets, dimension columns contain strings and metric columns contain numeric values. Druid creates additional lookup indices for string columns such that only those rows that pertain to a particular query filter are ever scanned.
Let us consider the page column in Table 1. For each unique page in Table 1, we can form some representation indicating in which table rows a particular page is seen. We can store this information in a binary array where the array indices represent our rows. If a particular page is seen in a certain row, that array index is marked as 1. For example:
Justin Bieber -> rows [0, 1] -> [1][1][0][0]
Ke$ha -> rows [2, 3] -> [0][0][1][1]
Justin Bieber is seen in rows 0 and 1. This mapping of column values to row indices forms an inverted index [39]. To know which rows contain Justin Bieber or Ke$ha, we can OR together the two arrays.
[1][1][0][0] OR [0][0][1][1] = [1][1][1][1]
This approach of performing Boolean operations on large bitmap sets is commonly used in search engines. Bitmap indices for OLAP workloads are described in detail in [32]. Bitmap compression algorithms are a well-defined area of research [2, 44, 42] and often utilize run-length encoding. Druid opted to use the Concise algorithm [10]. Figure 7 illustrates the number of bytes using Concise compression versus using an integer array. The results were generated on a cc2.8xlarge system with a single thread, 2G heap, 512m young gen, and a forced GC between each run. The data set is a single day's worth of data collected from the Twitter garden hose [41] data stream. The data set contains 2,272,295 rows and 12 dimensions of varying cardinality. As an additional comparison, we also resorted the data set rows to maximize compression.
In the unsorted case, the total Concise size was 53,451,144 bytes and the total integer array size was 127,248,520 bytes.
Overall,\nConcise compressed sets are about 42% smaller than integer ar-\nrays. In the sorted case, the total Concise compressed size was\n43,832,884 bytes and the total integer array size was 127,248,520\nbytes. What is interesting to note is that after sorting, global com-\npression only increased minimally.\n4.2 Storage Engine\nDruid’s persistence components allows for different storage en-\ngines to be plugged in, similar to Dynamo [12]. These storage en-\nginesmaystoredatainanentirelyin-memorystructuresuchasthe\nJVM heap or in memory-mapped structures. The ability to swap\nstorage engines allows for Druid to be configured depending on a\nparticular application’s specifications. An in-memory storage en-\nginemaybeoperationallymoreexpensivethanamemory-mapped\nstorage engine but could be a better alternative if performance is\ncritical. By default, a memory-mapped storage engine is used.\nWhen using a memory-mapped storage engine, Druid relies on\ntheoperatingsystemtopagesegmentsinandoutofmemory. Given\nthat segments can only be scanned if they are loaded in memory, a\nmemory-mapped storage engine allows recent segments to retain\ninmemorywhereassegmentsthatareneverqueriedarepagedout.\nThemaindrawbackwithusingthememory-mappedstorageengine\nis when a query requires more segments to be paged into memory\nthanagivennodehascapacityfor. Inthiscase,queryperformance\nwillsufferfromthecostofpagingsegmentsinandoutofmemory.\n5. QUERY API\nDruid has its own query language and accepts queries as POST\nrequests. Broker, historical, and real-time nodes all share the same\nquery API.\nThe body of the POST request is a JSON object containing key-\nvalue pairs specifying various query parameters. A typical query\nwillcontainthedatasourcename,thegranularityoftheresultdata,\ntime range of interest, the type of request, and the metrics to ag-\ngregate over. The result will also be a JSON object containing the\naggregated metrics over the time period.\nMost query types will also support a filter set. A filter set is a\nBooleanexpressionofdimensionnameandvaluepairs. Anynum-\nber and combination of dimensions and values may be specified.\nWhen a filter set is provided, only the subset of the data that per-\ntainstothefiltersetwillbescanned. Theabilitytohandlecomplex\nnestedfiltersetsiswhatenablesDruidtodrillintodataatanydepth.\nTheexactquerysyntaxdependsonthequerytypeandtheinfor-\nmation requested. A sample count query over a week of data is as\nfollows:\n{\n\"queryType\" : \"timeseries\",\n\"dataSource\" : \"wikipedia\",\n\"intervals\" : \"2013-01-01/2013-01-08\",\n\"filter\" : {\n\"type\" : \"selector\",\n\"dimension\" : \"page\",\n\"value\" : \"Ke$ha\"\n},\n\"granularity\" : \"day\",\n\"aggregations\" : [{\"type\":\"count\", \"name\":\"rows\"}]\n}\nThequeryshownabovewillreturnacountofthenumberofrows\nin the Wikipedia data source from 2013-01-01 to 2013-01-08, fil-\ntered for only those rows where the value of the “page” dimension\nis equal to “Ke$ha”. The results will be bucketed by day and will\nbe a JSON array of the following form:[ {\n\"timestamp\": \"2012-01-01T00:00:00.000Z\",\n\"result\": {\"rows\":393298}\n},\n{\n\"timestamp\": \"2012-01-02T00:00:00.000Z\",\n\"result\": {\"rows\":382932}\n},\n...\n{\n\"timestamp\": \"2012-01-07T00:00:00.000Z\",\n\"result\": {\"rows\": 1337}\n} ]\nDruid supports many types of aggregations including sums on\nfloating-point and integer types, minimums, maximums, and com-\nplex aggregations such as cardinality estimation and approximate\nquantile estimation. 
The results of aggregations can be combined\nin mathematical expressions to form other aggregations. It is be-\nyond the scope of this paper to fully describe the query API but\nmore information can be found online3.\nAsofthiswriting,ajoinqueryforDruidisnotyetimplemented.\nThishasbeenafunctionofengineeringresourceallocationanduse\ncase decisions more than a decision driven by technical merit. In-\ndeed, Druid’s storage format would allow for the implementation\nof joins (there is no loss of fidelity for columns included as dimen-\nsions)andtheimplementationofthemhasbeenaconversationthat\nwe have every few months. To date, we have made the choice that\ntheimplementationcostisnotworththeinvestmentforourorgani-\nzation. The reasons for this decision are generally two-fold.\n1. Scalingjoinquerieshasbeen,inourprofessionalexperience,\na constant bottleneck of working with distributed databases.\n2. The incremental gains in functionality are perceived to be\nof less value than the anticipated problems with managing\nhighly concurrent, join-heavy workloads.\nAjoinqueryisessentiallythemergingoftwoormorestreamsof\ndata based on a shared set of keys. The primary high-level strate-\ngies for join queries we are aware of are a hash-based strategy or a\nsorted-mergestrategy. Thehash-basedstrategyrequiresthatallbut\none data set be available as something that looks like a hash table,\na lookup operation is then performed on this hash table for every\nrow in the “primary” stream. The sorted-merge strategy assumes\nthateachstreamissortedbythejoinkeyandthusallowsforthein-\ncrementaljoiningofthestreams. Eachofthesestrategies,however,\nrequiresthematerializationofsomenumberofthestreamseitherin\nsorted order or in a hash table form.\nWhen all sides of the join are significantly large tables (> 1 bil-\nlion records), materializing the pre-join streams requires complex\ndistributed memory management. The complexity of the memory\nmanagementisonlyamplifiedbythefactthatwearetargetinghighly\nconcurrent, multitenant workloads. This is, as far as we are aware,\nan active academic research problem that we would be willing to\nhelp resolve in a scalable manner.\n6. PERFORMANCE\nDruidrunsinproductionatseveralorganizations,andtodemon-\nstrate its performance, we have chosen to share some real world\nnumbersforthemainproductionclusterrunningatMetamarketsas\nofearly2014. Forcomparisonwithotherdatabaseswealsoinclude\nresults from synthetic workloads on TPC-H data.\n3http://druid.io/docs/latest/Querying.htmlDataSource Dimensions Metrics\na 25 21\nb 30 26\nc 71 35\nd 60 19\ne 29 8\nf 30 16\ng 26 18\nh 78 14\nTable2: Characteristics of productiondata sources.\n6.1 Query Performance in Production\nDruidqueryperformancecanvarysignficantlydependingonthe\nquerybeingissued. Forexample,sortingthevaluesofahighcardi-\nnality dimension based on a given metric is much more expensive\nthan a simple count over a time range. To showcase the average\nquery latencies in a production Druid cluster, we selected 8 of our\nmost queried data sources, described in Table2.\nApproximately30%ofqueriesarestandardaggregatesinvolving\ndifferent types of metrics and filters, 60% of queries are ordered\ngroup bys over one or more dimensions with aggregates, and 10%\nof queries are search queries and metadata retrieval queries. The\nnumber of columns scanned in aggregate queries roughly follows\nan exponential distribution. 
Queries involving a single column are very frequent, and queries involving all columns are very rare.

A few notes about our results:

• The results are from a "hot" tier in our production cluster. There were approximately 50 data sources in the tier and several hundred users issuing queries.
• There was approximately 10.5TB of RAM available in the "hot" tier and approximately 10TB of segments loaded. Collectively, there are about 50 billion Druid rows in this tier. Results for every data source are not shown.
• The hot tier uses Intel® Xeon® E5-2670 processors and consists of 1302 processing threads and 672 total cores (hyperthreaded).
• A memory-mapped storage engine was used (the machine was configured to memory map the data instead of loading it into the Java heap.)

Query latencies are shown in Figure 8 and the queries per minute are shown in Figure 9. Across all the various data sources, average query latency is approximately 550 milliseconds, with 90% of queries returning in less than 1 second, 95% in under 2 seconds, and 99% of queries returning in less than 10 seconds. Occasionally we observe spikes in latency, as observed on February 19, where network issues on the Memcached instances were compounded by very high query load on one of our largest data sources.

6.2 Query Benchmarks on TPC-H Data
We also present Druid benchmarks on TPC-H data. Most TPC-H queries do not directly apply to Druid, so we selected queries more typical of Druid's workload to demonstrate query performance. As a comparison, we also provide the results of the same queries using MySQL using the MyISAM engine (InnoDB was slower in our experiments).

We selected MySQL to benchmark against because of its universal popularity. We chose not to select another open source column store because we were not confident we could correctly tune it for optimal performance.

Our Druid setup used Amazon EC2 m3.2xlarge instance types (Intel® Xeon® E5-2680 v2 @ 2.80GHz) for historical nodes and c3.2xlarge instances (Intel® Xeon® E5-2670 v2 @ 2.50GHz) for broker nodes.

Figure 8: Query latencies of production data sources (mean query latency and 90th/95th/99th percentile latencies, per data source).
Figure 9: Queries per minute of production data sources.
Figure 10: Druid & MySQL benchmarks – 1GB TPC-H data (median query time over 100 runs, single node).
Figure 11: Druid & MySQL benchmarks – 100GB TPC-H data (median query time over 3+ runs, single node).
Our MySQL setup was an Amazon RDS instance that ran on the same m3.2xlarge instance type.

The results for the 1 GB TPC-H data set are shown in Figure 10 and the results of the 100 GB data set are shown in Figure 11. We benchmarked Druid's scan rate at 53,539,211 rows/second/core for a select count(*) equivalent query over a given time interval and 36,246,530 rows/second/core for a select sum(float) type query.

Finally, we present our results of scaling Druid to meet increasing data volumes with the TPC-H 100 GB data set. We observe that when we increased the number of cores from 8 to 48, not all types of queries achieve linear scaling, but the simpler aggregation queries do, as shown in Figure 12.

The increase in speed of a parallel computing system is often limited by the time needed for the sequential operations of the system. In this case, queries requiring a substantial amount of work at the broker level do not parallelize as well.

6.3 Data Ingestion Performance
To showcase Druid's data ingestion latency, we selected several production datasources of varying dimensions, metrics, and event volumes. Our production ingestion setup consists of 6 nodes, totalling 360GB of RAM and 96 cores (12 x Intel® Xeon® E5-2670).

Figure 12: Druid scaling benchmarks – 100GB TPC-H data (per-query times on 8 cores/1 node versus 48 cores/6 nodes, and the resulting speedup factors).

DataSource Dimensions Metrics Peak events/s
s 7 228334.60
t 10 768808.70
u 5 149933.93
v 30 1022240.45
w 35 14135763.17
x 28 646525.85
y 33 24162462.41
z 33 2495747.74
Table 3: Ingestion characteristics of various data sources.

Note that in this setup, several other data sources were being ingested and many other Druid related ingestion tasks were running concurrently on the machines.

Druid's data ingestion latency is heavily dependent on the complexity of the data set being ingested. The data complexity is determined by the number of dimensions in each event, the number of metrics in each event, and the types of aggregations we want to perform on those metrics. With the most basic data set (one that only has a timestamp column), our setup can ingest data at a rate of 800,000 events/second/core, which is really just a measurement of how fast we can deserialize events. Real world data sets are never this simple. Table 3 shows a selection of data sources and their characteristics.

We can see that, based on the descriptions in Table 3, latencies vary significantly and the ingestion latency is not always a factor of the number of dimensions and metrics. We see some lower latencies on simple data sets because that was the rate that the data producer was delivering data. The results are shown in Figure 13.

We define throughput as the number of events a real-time node can ingest and also make queryable. If too many events are sent to the real-time node, those events are blocked until the real-time node has capacity to accept them. The peak ingestion rate we measured in production was 22914.43 events/second/core on a datasource with 30 dimensions and 19 metrics, running an Amazon cc2.8xlarge instance.
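The blocking behavior described above can be pictured as a bounded buffer between the data producer and the indexing thread that makes events queryable. The sketch below is an illustrative assumption about how such backpressure might look in code, not Druid's actual ingestion path.

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <string>

// Illustrative sketch only: if the node cannot keep up, producers wait until
// capacity frees up. The class and its limits are invented for illustration.
class BoundedIngestBuffer {
public:
    explicit BoundedIngestBuffer(std::size_t capacity) : capacity_(capacity) {}

    // Called by the data producer; blocks while the buffer is full.
    void offer(std::string event) {
        std::unique_lock<std::mutex> lock(mutex_);
        not_full_.wait(lock, [this] { return events_.size() < capacity_; });
        events_.push_back(std::move(event));
        not_empty_.notify_one();
    }

    // Called by the indexing thread that makes events queryable.
    std::string take() {
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !events_.empty(); });
        std::string event = std::move(events_.front());
        events_.pop_front();
        not_full_.notify_one();
        return event;
    }

private:
    std::size_t capacity_;
    std::deque<std::string> events_;
    std::mutex mutex_;
    std::condition_variable not_full_, not_empty_;
};
```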
24h moving averageFigure13: Combined cluster ingestion rates.\nThelatencymeasurementswepresentedaresufficienttoaddress\nthestatedproblemsofinteractivity. Wewouldpreferthevariability\nin the latencies to be less. It is still possible to decrease latencies\nby adding additional hardware, but we have not chosen to do so\nbecause infrastructure costs are still a consideration for us.\n7. DRUID IN PRODUCTION\nOver the last few years, we have gained tremendous knowledge\nabout handling production workloads with Druid and have made a\ncouple of interesting observations.\nQuery Patterns.\nDruid is often used to explore data and generate reports on data.\nIn the explore use case, the number of queries issued by a single\nuser are much higher than in the reporting use case. Exploratory\nqueriesofteninvolveprogressivelyaddingfiltersforthesametime\nrange to narrow down results. Users tend to explore short time in-\ntervals of recent data. In the generate report use case, users query\nfor much longer data intervals, but those queries are generally few\nand pre-determined.\nMultitenancy.\nExpensiveconcurrentqueriescanbeproblematicinamultitenant\nenvironment. Queriesforlargedatasourcesmayenduphittingev-\nery historical node in a cluster and consume all cluster resources.\nSmaller, cheaper queries may be blocked from executing in such\ncases. We introduced query prioritization to address these issues.\nEach historical node is able to prioritize which segments it needs\ntoscan. Properqueryplanningiscriticalforproductionworkloads.\nThankfully, queries for a significant amount of data tend to be for\nreporting use cases and can be deprioritized. Users do not expect\nthe same level of interactivity in this use case as when they are ex-\nploring data.\nNode failures.\nSinglenodefailuresarecommonindistributedenvironments,but\nmany nodes failing at once are not. If historical nodes completely\nfailanddonotrecover,theirsegmentsneedtobereassigned,which\nmeansweneedexcessclustercapacitytoloadthisdata. Theamount\nof additional capacity to have at any time contributes to the cost\nof running a cluster. From our experiences, it is extremely rare to\nsee more than 2 nodes completely fail at once and hence, we leave\nenoughcapacityinourclustertocompletelyreassignthedatafrom\n2 historical nodes.Data Center Outages.\nComplete cluster failures are possible, but extremely rare. If\nDruid is only deployed in a single data center, it is possible for\nthe entire data center to fail. In such cases, new machines need\nto be provisioned. As long as deep storage is still available, clus-\nterrecoverytimeisnetworkbound,ashistoricalnodessimplyneed\nto redownload every segment from deep storage. We have experi-\nenced such failures in the past, and the recovery time was several\nhoursintheAmazonAWSecosystemforseveralterabytesofdata.\n7.1 Operational Monitoring\nPropermonitoringiscriticaltorunalargescaledistributedclus-\nter. Each Druid node is designed to periodically emit a set of oper-\national metrics. These metrics may include system level data such\nasCPUusage,availablememory,anddiskcapacity,JVMstatistics\nsuch as garbage collection time, and heap usage, or node specific\nmetrics such as segment scan time, cache hit rates, and data inges-\ntion latencies. Druid also emits per query metrics.\nWe emit metrics from a production Druid cluster and load them\ninto a dedicated metrics Druid cluster. The metrics Druid cluster\nis used to explore the performance and stability of the production\ncluster. 
This dedicated metrics cluster has allowed us to find nu-\nmerousproductionproblems,suchasgradualqueryspeeddegrega-\ntions,lessthanoptimallytunedhardware,andvariousothersystem\nbottlenecks. We also use a metrics cluster to analyze what queries\naremadeinproductionandwhataspectsofthedatausersaremost\ninterested in.\n7.2 Pairing Druid with a Stream Processor\nCurrently, Druid can only understand fully denormalized data\nstreams. Inordertoprovidefullbusinesslogicinproduction,Druid\ncan be paired with a stream processor such as Apache Storm [27].\nA Storm topology consumes events from a data stream, retains\nonly those that are “on-time”, and applies any relevant business\nlogic. This could range from simple transformations, such as id\ntonamelookups,tocomplexoperationssuchasmulti-streamjoins.\nThe Storm topology forwards the processed event stream to Druid\nin real-time. Storm handles the streaming data processing work,\nand Druid is used for responding to queries for both real-time and\nhistorical data.\n7.3 Multiple Data Center Distribution\nLargescaleproductionoutagesmaynotonlyaffectsinglenodes,\nbut entire data centers as well. The tier configuration in Druid co-\nordinatornodesallowforsegmentstobereplicatedacrossmultiple\ntiers. Hence, segments can be exactly replicated across historical\nnodes in multiple data centers. Similarily, query preference can be\nassigned to different tiers. It is possible to have nodes in one data\ncenter act as a primary cluster (and receive all queries) and have a\nredundant cluster in another data center. Such a setup may be de-\nsired if one data center is situated much closer to users.\n8. RELATED WORK\nCattell [6] maintains a great summary about existing Scalable\nSQL and NoSQL data stores. Hu [18] contributed another great\nsummary for streaming databases. Druid, feature-wise, sits some-\nwhere between Google’s Dremel [28] and PowerDrill [17]. Druid\nhas most of the features implemented in Dremel (Dremel handles\narbitrarynesteddatastructureswhileDruidonlyallowsforasingle\nlevel of array-based nesting) and many of the interesting compres-\nsion algorithms mentioned in PowerDrill.\nAlthough Druid builds on many of the same principles as other\ndistributedcolumnardatastores[15],manyofthesedatastoresaredesigned to be more generic key-value stores [23] and do not sup-\nport computation directly in the storage layer. There are also other\ndata stores designed for some of the same data warehousing issues\nthat Druid is meant to solve. These systems include in-memory\ndatabases such as SAP’s HANA [14] and VoltDB [43]. These data\nstoreslackDruid’slowlatencyingestioncharacteristics. Druidalso\nhas native analytical features baked in, similar to ParAccel [34],\nhowever, Druid allows system wide rolling software updates with\nno downtime.\nDruid is similiar to C-Store [38] and LazyBase [8] in that it has\ntwosubsystems,aread-optimizedsubsysteminthehistoricalnodes\nand a write-optimized subsystem in the real-time nodes. Real-time\nnodes are designed to ingest a high volume of append heavy data,\nand do not support data updates. Unlike the two aforementioned\nsystems,DruidismeantforOLAPtransactionsandnotOLTPtrans-\nactions.\nDruid’s low latency data ingestion features share some similar-\nities with Trident/Storm [27] and Spark Streaming [45], however,\nboth systems are focused on stream processing whereas Druid is\nfocused on ingestion and aggregation. 
Stream processors are great\ncomplementstoDruidasameansofpre-processingthedatabefore\nthe data enters Druid.\nThere are a class of systems that specialize in queries on top of\ncluster computing frameworks. Shark [13] is such a system for\nqueriesontopofSpark,andCloudera’sImpala[9]isanothersystem\nfocused on optimizing query performance on top of HDFS. Druid\nhistorical nodes download data locally and only work with native\nDruid indexes. We believe this setup allows for faster query laten-\ncies.\nDruidleveragesauniquecombinationofalgorithmsinitsarchi-\ntecture. Although we believe no other data store has the same set\nof functionality as Druid, some of Druid’s optimization techniques\nsuchasusinginvertedindicestoperformfastfiltersarealsousedin\nother data stores [26].\n9. CONCLUSIONS\nInthispaperwepresentedDruid,adistributed,column-oriented,\nreal-time analytical data store. Druid is designed to power high\nperformanceapplicationsandisoptimizedforlowquerylatencies.\nDruid supports streaming data ingestion and is fault-tolerant. We\ndiscussed Druid benchmarks and summarized key architecture as-\npects such as the storage format, query language, and general exe-\ncution.\n10. ACKNOWLEDGEMENTS\nDruid could not have been built without the help of many great\nengineersatMetamarketsandinthecommunity. Wewanttothank\neveryone that has contributed to the Druid codebase for their in-\nvaluable support.\n11. REFERENCES\n[1] D. J. Abadi, S. R. Madden, and N. Hachem. Column-stores\nvs.row-stores: Howdifferentaretheyreally? In Proceedings\nof the 2008 ACM SIGMOD international conferenceon\nManagement of data , pages 967–980. ACM,2008.\n[2] G. Antoshenkov.Byte-aligned bitmap compression. In Data\nCompressionConference,1995. DCC’95. Proceedings , page\n476. IEEE, 1995.\n[3] Apache. Apache solr.\nhttp://lucene.apache.org/solr/ , February 2013.\n[4] S. Banon. Elasticsearch.\nhttp://www.elasticseach.com/ , July 2013.[5] C. Bear,A. Lamb, and N. Tran. The vertica database: Sql\nrdbms for managing big data. In Proceedingsof the 2012\nworkshop on Management of big data systems , pages 37–38.\nACM,2012.\n[6] R. Cattell. Scalable sql and nosql data stores. ACM SIGMOD\nRecord, 39(4):12–27, 2011.\n[7] F.Chang, J. Dean, S. Ghemawat, W.C. Hsieh, D. A.\nWallach,M. Burrows, T.Chandra, A. Fikes,and R. E.\nGruber.Bigtable: A distributed storage system for structured\ndata.ACM Transactionson Computer Systems (TOCS) ,\n26(2):4, 2008.\n[8] J. Cipar,G. Ganger, K. Keeton, C. B. Morrey III, C. A.\nSoules, and A. Veitch.Lazybase: trading freshness for\nperformanceinascalabledatabase.In Proceedingsofthe7th\nACM europeanconference on Computer Systems , pages\n169–182. ACM,2012.\n[9] Cloudera impala. http://blog.cloudera.com/blog ,\nMarch 2013.\n[10] A. Colantonio and R. Di Pietro. Concise: Compressed\n‘n’composable integer set. Information ProcessingLetters ,\n110(16):644–650,2010.\n[11] J. Dean and S. Ghemawat. Mapreduce: simplified data\nprocessing on largeclusters. Communications of the ACM ,\n51(1):107–113,2008.\n[12] G. DeCandia, D. Hastorun, M. Jampani, G. Kakulapati,\nA. Lakshman, A. Pilchin, S. Sivasubramanian, P.Vosshall,\nand W.Vogels.Dynamo: amazon’s highly available\nkey-value store. In ACM SIGOPS Operating Systems\nReview, volume 41, pages 205–220. ACM, 2007.\n[13] C. Engle, A. Lupher,R. Xin, M. Zaharia, M. J. Franklin,\nS. Shenker,and I. Stoica. Shark: fast data analysis using\ncoarse-grained distributed memory.In Proceedingsof the\n2012 international conferenceon Management of Data ,\npages 689–692. 
ACM, 2012.\n[14] F.Färber,S. K. Cha, J. Primsch, C. Bornhövd, S. Sigg, and\nW.Lehner.Sap hana database: data management for modern\nbusiness applications. ACM Sigmod Record , 40(4):45–51,\n2012.\n[15] B. Fink. Distributed computation on dynamo-style\ndistributed storage: riak pipe. In Proceedingsof the eleventh\nACM SIGPLAN workshop on Erlang workshop , pages\n43–50. ACM,2012.\n[16] B. Fitzpatrick. Distributed caching with memcached. Linux\njournal, (124):72–74, 2004.\n[17] A. Hall, O. Bachmann, R. Büssow,S. Gănceanu, and\nM. Nunkesser.Processing a trillion cells per mouse click.\nProceedingsof the VLDB Endowment , 5(11):1436–1446,\n2012.\n[18] B. Hu. Stream database survey.2011.\n[19] P.Hunt, M. Konar,F.P. Junqueira, and B. Reed. Zookeeper:\nWait-freecoordinationforinternet-scalesystems.In USENIX\nATC, volume 10, 2010.\n[20] C. S. Kim. Lrfu: A spectrum of policies that subsumes the\nleast recently used and least frequently used policies. IEEE\nTransactionson Computers , 50(12), 2001.\n[21] J. Kreps, N.Narkhede, and J. Rao. Kafka: A distributed\nmessaging system for log processing. In Proceedingsof 6th\nInternational Workshopon Networking Meets Databases\n(NetDB), Athens, Greece , 2011.\n[22] T.Lachev. Applied MicrosoftAnalysis Services 2005: And\nMicrosoftBusiness Intelligence Platform . Prologika Press,\n2005.[23] A. Lakshman and P.Malik. Cassandra—a decentralized\nstructured storage system. Operating systems review ,\n44(2):35, 2010.\n[24] Liblzf. http://freecode.com/projects/liblzf , March\n2013.\n[25] LinkedIn. Senseidb. http://www.senseidb.com/ , July\n2013.\n[26] R. MacNicol and B. French. Sybase iq multiplex-designed\nfor analytics. In Proceedingsof the Thirtieth international\nconferenceon Verylargedata bases-Volume30 , pages\n1227–1230.VLDBEndowment, 2004.\n[27] N. Marz. Storm: Distributed and fault-tolerant realtime\ncomputation. http://storm-project.net/ , February\n2013.\n[28] S. Melnik, A. Gubarev,J. J. Long, G. Romer,S. Shivakumar,\nM.Tolton,and T.Vassilakis.Dremel: interactive analysis of\nweb-scale datasets. Proceedingsof the VLDB Endowment ,\n3(1-2):330–339, 2010.\n[29] D. Miner.Unified analytics platform for big data. In\nProceedingsof the WICSA/ECSA 2012 Companion Volume ,\npages 176–176. ACM,2012.\n[30] K.Oehler,J.Gruenes,C.Ilacqua,andM.Perez. IBMCognos\nTM1: The Official Guide . McGraw-Hill, 2012.\n[31] E. J. O’neil, P.E. O’neil, and G. Weikum.The lru-k page\nreplacement algorithm for database disk buffering.In ACM\nSIGMODRecord , volume 22, pages 297–306. ACM, 1993.\n[32] P.O’Neil and D. Quass. Improved query performance with\nvariant indexes. In ACM Sigmod Record , volume 26, pages\n38–49. ACM,1997.\n[33] P.O’Neil, E. Cheng, D. Gawlick, and E. O’Neil. The\nlog-structured merge-tree(lsm-tree). Acta Informatica ,\n33(4):351–385, 1996.\n[34] Paraccel analytic database.\nhttp://www.paraccel.com/resources/Datasheets/\nParAccel-Core-Analytic-Database.pdf , March 2013.\n[35] M. Schrader,D. Vlamis, M. Nader,C. Claterbos, D. Collins,\nM.Campbell, and F.Conrad. Oracle Essbase & Oracle\nOLAP.McGraw-Hill, Inc., 2009.[36] K. Shvachko, H. Kuang, S. Radia, and R. Chansler.The\nhadoop distributed file system. In Mass Storage Systems and\nTechnologies(MSST), 2010 IEEE 26th Symposium on , pages\n1–10. IEEE, 2010.\n[37] M. Singh and B. Leonhardi. Introduction to the ibm netezza\nwarehouse appliance. In Proceedingsof the 2011Conference\nof the Center for Advanced Studies on Collaborative\nResearch, pages 385–386. IBM Corp., 2011.\n[38] M. Stonebraker,D. J. Abadi, A. Batkin, X. 
Chen,\nM. Cherniack, M. Ferreira, E. Lau, A. Lin, S. Madden,\nE. O’Neil, et al. C-store: a column-oriented dbms. In\nProceedingsof the 31st international conferenceon Very\nlargedata bases , pages 553–564. VLDB Endowment, 2005.\n[39] A. Tomasicand H. Garcia-Molina. Performance of inverted\nindices in shared-nothing distributed text document\ninformation retrieval systems. In Parallel and Distributed\nInformation Systems, 1993., Proceedingsof the Second\nInternational Conferenceon ,pages 8–17. IEEE, 1993.\n[40] E. Tschetter.Introducing druid: Real-time analytics at a\nbillion rows per second. http://druid.io/blog/2011/\n04/30/introducing-druid.html , April 2011.\n[41] Twitterpublic streams. https://dev.twitter.com/\ndocs/streaming-apis/streams/public , March 2013.\n[42] S. J.van Schaik and O. de Moor.A memory efficient\nreachability data structure through bit vector compression.In\nProceedingsof the 2011international conferenceon\nManagement of data , pages 913–924. ACM, 2011.\n[43] L. VoltDB.Voltdbtechnical overview.\nhttps://voltdb.com/ , 2010.\n[44] K. Wu,E. J. Otoo, and A. Shoshani. Optimizing bitmap\nindices with efficient compression. ACM Transactionson\nDatabase Systems (TODS) , 31(1):1–38, 2006.\n[45] M. Zaharia, T.Das, H. Li, S. Shenker, and I. Stoica.\nDiscretized streams: an efficient and fault-tolerant model for\nstreamprocessingonlargeclusters.In Proceedingsofthe4th\nUSENIX conferenceon Hot Topicsin Cloud Computing ,\npages 10–10. USENIX Association, 2012." } ]
{ "category": "App Definition and Development", "file_name": "modii658-yang.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "Geometry Template Library for STL-like 2D Operation s \nLucanus Simonson \nIntel Corporation \n2200 Mission College Blvd. \nSanta Clara, CA 95054-1549 \n1 (408) 765-8080 \nlucanus.j.simonson@intel.com Gyuszi Suto \nIntel Corporation \n2200 Mission College Blvd. \nSanta Clara, CA 95054-1549 \n1 (408) 765-8080 \ngyuszi.suto@intel.com \n \nABSTRACT \nThere is a proliferation of geometric algorithms an d data types \nwith no existing mechanism to unify geometric progr amming in \nC++. The geometry template library (GTL) provides geometry \nconcepts and concept mapping through traits as well as algorithms \nparameterized by conceptual geometric data type to provide a \nunified library of fundamental geometric algorithms that is \ninteroperable with existing geometric data types wi thout the need \nfor data copy conversion. Specific concepts and al gorithms \nprovided in GTL focus on high performance/capacity 2D polygon \nmanipulation, especially polygon clipping. The app lication-\nprogramming interface (API) for invoking algorithms is based on \noverloading of generic free functions by concepts. Overloaded \ngeneric operator syntax for polygon clipping Boolea ns (see Figure \n1) and the supporting operator templates are provid ed to make the \nAPI highly productive and abstract away the details of algorithms \nfrom their usage. The library was implemented in I ntel \nCorporation to converge the programming of geometri c \nmanipulations in C++ while providing best in class runtime and \nmemory performance for Booleans operations. This p aper \ndiscusses the specific needs of generic geometry pr ogramming \nand how those needs are met by the concepts-based t ype system \nthat makes the generic API possible. \nCategories and Subject Descriptors \nI.3.5 [ Computer Graphics ]: Computational Geometry and Object \nModeling – Curve, surface, solid and object representations. \nGeneral Terms \nAlgorithms, Performance, Design, Reliability, Stand ardization. \nKeywords \nC++ concepts, geometry, polygon clipping, data mode ling, library \ndesign. \n1. INTRODUCTION \n1.1 Problem Statement and Motivation \nTraditional object-oriented design leads to type sy stems that employ a common base class to integrate new types i nto an \nexisting system. This leads to monolithic software design with \nshared dependencies on the base class, creating bar riers of \nincompatibility between different software systems when they \ncannot share a common base class. Integrating libr ary \nfunctionality into such a software system typically requires \nwrapping the library API with functions that perfor m implicit data \ncopy conversion or that data copy conversion be per formed as an \nexplicit step before the library API can be used. This leads to \ncode and memory bloat in applications and to librar ies that are \nhard to use. The problem presents itself clearly i n geometry. An \napplication will model objects that have a geometri c nature as \nwell as application-specific attributes such as nam es, weights, \ncolors etc. These object models are geometric enti ties, but are \ntypically not syntactically compatible with a geome tric library \nunless there is code coupling through some base geo metric class. \nOften an unnecessary copy conversion of data model to geometric \nobject is required to satisfy syntactic requirement s of a geometry \nlibrary API. 
For example, to use CGAL [8] library algorithms a \npolygon that depends on CGAL header files must be d eclared and \ndata copied into it before that data can be passed into the \nalgorithm. Eliminating this artificial incompatibi lity and allowing \na geometry library’s API to be used directly and no n-intrusively in \nan application with its native object model is the design goal for \nGTL’s interfaces. This allows application level pr ogramming \ntasks to benefit from the thoughtful design of a pr oductive and \nintuitive set of library APIs. \n1.2 C++ Concepts Address the Problem \nComputational geometry is the ideal domain to apply C++ \nConcepts [10] based library design. At a conceptua l level, \ngeometric objects are universally understood. Exis ting geometry \ncodes coming from different sources are not inter-c ompatible due \nlargely to syntactic rather than semantic differenc es. These \nsyntactic differences can be easily resolved by con cept mapping \nthrough C++ traits. However, there are minor seman tic \ndifferences that need to be comprehended in order t o implement a \ntruly generic geometry concept. \nPermission to make digital or hard copies of all or part of this work for \npersonal or classroom use is granted without fee pr ovided that copies are \nnot made or distributed for profit or commercial ad vantage and that \ncopies bear this notice and the full citation on th e first page. To copy \notherwise, or republish, to post on servers or to r edistribute to lists, \nrequires prior specific permission and/or a fee. \nBoostcon’09 , May 3–8, 2009, Aspen, Colorado, USA. \nCopyright 2009 ACM 1-58113-000-0/00/0004…$5.00. \n \nFigure 1. Booleans example: disjoint-union (a xor b ) is c. A generic geometry library must also parameterize t he \nnumerical coordinate data type while at the same ti me providing \nnumerically robust arithmetic. Doing this requires more than \nsimply making the coordinate data type a template p arameter \neverywhere, but also looking up parameterized coord inate type \nconversions, handling the differences between integ er and floating \npoint programming, and making sparing use of high-p recision \ndata types to allow robust calculations. The gener ic geometry \nlibrary must define a type system of concepts that includes a \ncoordinate concept and provide an API based upon it that is both \nintuitive and productive to use yet maintainable an d easily \nextensible. \nWe will compare and contrast different proposed app roaches \nto generic geometry type systems in Section 2. Sec tion 3 will \npresent the approach used in GTL to implement a geo metry \nconcepts API and explain its advantages over other proposals. \nOperator templates and the details of the operator based API for \npolygon set operations (intersection, union, etc.) will be presented \nin Section 4. In Section 5 we will present a gener ic sweep-line \nalgorithmic framework, explain the principles behin d our \nimplementation of sweep-line and how they are refle cted in the \nrequirements placed on its template parameters. Nu merical issues \nfaced and solutions for numerical robustness proble ms \nimplemented in our library will be discussed in Sec tion 6. A \nperformance comparison with several open source com putational \ngeometry libraries will be presented in Section 7 a nd closing \nremarks in Section 8. \n2. Generic Geometry Approaches \nThere are several well known generic programming te chniques \nthat are applicable to the design of a generic geom etry library \nAPI. 
The most important is C++ traits, which provi des the \nnecessary abstraction between the interface of a ge ometry object \nand its use in generic code. In combination with t raits, other \ngeneric programming techniques have been proposed i n the \ndesign of generic geometry libraries including: tag dispatching, \nstatic asserts and substitution-failure-is-not-an-e rror (SFINAE) \ntemplate function overloading. \n2.1 Free Functions and Traits \nThe conventional way to implement a generic API is with \ntemplate functions declared within a namespace. Th is allows \narbitrary objects to be passed directly into templa te functions and \naccessed through their associated traits. There is , however, one \nproblem with this approach when used for computatio nal \ngeometry. How to specify what kind of geometric en tity a given \ntemplate parameter represents? The simplest soluti on is to name \nthe function such that it is clear what kind of geo metry it expects. \nThis prevents generic function name collisions and documents \nwhat expectation is placed upon the template parame ter. \nHowever, it leads to overly long function names, pa rticularly if \ntwo or more kinds of geometry are arguments of a fu nction. Such \ngeneric functions do not lend themselves to generic programming \nat a higher level of abstraction. Consider the center(T) \nfunction. If we name it variously rectangle_center(T) for \nthe rectangle case and polygon_center(T) for polygons we \ncannot abstract away what kind of geometry we are d ealing with \nwhen working with their center points. As an examp le, a generic \nlibrary function that computes the centroid of an i terator range \nover geometry objects using their center points wei ghted by their area would require two definitions that differ only by the names of \nfunctions that compute center and area. As we add triangle and \npolygon-with-holes and circle to the generic librar y, the drawback \nof not being able to abstract away what kind of geo metry is being \nworked with in library code as well as user code be comes \npainfully obvious. \n2.2 Concepts and Static Assert \nA static assert generates a syntax error whenever a concept check \nfails. A boost geometry library proposal from Bran don Kohn [5] \nemploys boost::static_assert on top of generic free \nfunctions and traits. This approach to C++ concept s when applied \nto computational geometry improves the API primaril y by \nproducing more intelligible syntax errors when the wrong type is \npassed into an API. It does not solve the problem of not being \nable to abstract away the geometric concept itself because it still \nrelies on functions having different names for diff erent concepts \nto prevent function name collisions. \n2.3 Tag Dispatching Based Concepts \nA series of boost geometry library proposals from B arend Gehrels \nand Bruno Lalande [2] have culminated into a tag di spatching \nbased API where a generic free function that looks up tag types \nfor the objects passed in to disambiguate otherwise identical \ndispatch functions. These dispatch functions are w rapped in \nstructs to emulate partial specialization of dispat ch functions by \nspecializing the struct. Partial specialization of template functions \nis not legal C++ syntax under the C++03 standard. \n \nThis approach solves the name collision problem. 
I t allows \none center() function, for example, to dispatch to different \nimplementations for various rectangle, circle, poly gon conceptual \ntypes and abstract the concept away when working wi th objects \nthat share the characteristic that they have a cent er. However, it is \nhard to generalize about concepts in a tag dispatch ing API \nbecause concept tags need to be explicit in the dec laration of \ndispatch functions. For all combinations of tags t hat could satisfy \na generic function, a definition of a dispatch func tion that accepts \nthose tags must be provided. For center() this is merely one for \neach concept, but for a function such as distance() it is all pairs \nand becomes cumbersome once the number of concepts in the namespace dispatch { \n template <typename TAG1, typename TAG2, \n typename G1, typename G2> \n struct distance {}; \n template <typename P1, typename P2> \n struct distance<point_tag, point_tag, P1, P2> { \n static typename distance_result<P1, P2>::type \n calculate(const P1& p1, const P2& p2) {…}; \n }; \n template <typename P, typename L> \n struct distance<point_tag,linestring_tag,P,L> { \n template<typename S> \n static typename distance_result<P1, P2>::type \n calculate(const P& point, const L& linestr); \n} \ntemplate <typename G1, typename G2> \ntypename distance_result<G1, G2>::type \ndistance(const G1& g1, const G2& g2) { \n return \n dispatch::distance< tag<G1>::type, \n tag<G2>::type, G1, G2>::calculate(g1, g2); \n} \n system exceeds just a handful. The number of such dispatch \nfunctions needed to implement an API explodes if so me other \nabstraction is not used, such as multi-stage dispat ch. Another way \nto achieve that additional abstraction is inheritan ce sub-typing of \ntags, while SFINAE provides a third. \n3. GTL’s Approach to Generic Geometry \nEmpty concept struct s are defined for the purposes of meta-\nprogramming in terms of concepts and are analogous to tags used \nin tag dispatching. Sub-typing relationships betwe en concepts \n(concept refinements) are implemented by specializi ng meta-\nfunctions that query for such. \n \nEven with concepts inherited from each other (for t ag \ndispatching purposes, for instance) such meta-funct ions would \nstill be convenient for SFINAE checks because inher itance \nrelationships are not easily detected at compile ti me. The use of \nboost::is_base_of could obviate the need for these meta-\nfunctions in GTL. \nTraits related to geometry concepts are broken down into \nmutable and read-only traits structs. A data type that models a \nconcept must provide a specialization for that conc ept’s read-only \ntraits or conform to the default traits definition. It should also do \nthe same for the mutable traits if possible. \nGTL interfaces follow a geometric programming style called \nisotropy, where abstract ideas like orientation and direction are \nprogram data. Direction is a parameter to function calls rather \nthan explicitly coded in function names and handled with flow \ncontrol. The access functions in the traits of a p oint data type \ntherefore defines one get() function that accepts a parameter for \nhorizontal or vertical axis component rather than s eparate x() and \ny() access functions. \nA data type that models a refinement of a concept w ill \nautomatically have read only traits instantiate for the more general \nconcept based upon the traits of the refinement tha t data type \nmodels. 
The programmer need only provide concept m apping \ntraits for the exact concept their object models an d it becomes \nfully integrated into the generic type system. Concept checking is performed by looking up the con cept \nassociated with a given object type by meta-functio n \ngeometry_concept<T> and using that along with pertinent \nconcept refinement relationships through compile ti me logic to \nproduce a yes or no answer for whether a given func tion should \ninstantiate for the arguments provided or result in SFINAE \nbehavior in the compiler. This allows generic func tions to be \noverloaded in GTL. The two generic functions foo() in the \nexample code below differ only by return type, but are not \nambiguous because their return types cannot both be instantiated \nfor the same template argument type. While SFINAE generic \nfunction overloading is quite powerful and flexible , the compiler \nsupport for it is currently inconsistent, requiring significant effort \nand knowledge of compiler idiosyncrasies and their implications \nin order to produce portable code. \n \n3.1 Geometry Concepts Provided by GTL \nGTL provides geometry concepts that are required to support \nplanar polygon manipulation. A summary of these co ncepts can \nbe found in Table 1. \nTable 1. GTL Concepts \nConcept Abbreviation \ncoordinate_concept C \ninterval_concept I \npoint_concept PT \npoint_3d_concept PT3D \nrectangle_concept R \npolygon_90_concept P90 \npolygon_90_with_holes_concept PWH90 \npolygon_45_concept P45 \npolygon_45_with_holes_concept PWH45 \npolygon_concept P \npolygon_with_holes_concept PWH \npolygon_90_set_concept PS90 \npolygon_45_set_concept PS45 \npolygon_set_concept PS \n \nConcept refinement relationships in GTL are shown i n \nFigure 2, with concepts labeled by the abbreviation s listed in \nTable 1. GTL provides algorithms that have been op timized for \nManhattan and 45-degree VLSI layout data, and conce pts specific \nto these restricted geometries are named with 90 an d 45. template <typename T> struct is_integer {}; \ntemplate <> \nstruct is_integer<int> { typedef int type; }; \ntemplate <typename T> struct is_float {}; \ntemplate <> \nstruct is_float<float> { typedef float type; }; \n \ntemplate <typename T> \ntypename is_int<T>::type foo(T input); \ntemplate <typename T> \ntypename is_float<T>::type foo(T input); \nFigure 2. GTL Concept Refinement Diagram template <typename T> \nstruct point_traits { \n typedef T::coordinate_type coordinate_type; \n coordinate_type get(const T& p, \n orientation_2d orient) { return p.get(orient); \n} \ntemplate <typename T> \nstruct point_mutable_traits { \n void set(const T& p, orientation_2d orient, \n coordinate_type value) { \n p.set(orient, value); \n } \n T construct(coordinate_type x, \n coordinate_type y) { return T(x, y); } \n}; \n struct polygon_concept {}; \nstruct rectangle_concept {}; \ntemplate <typename T> \nstruct is_a_polygon_concept{}; \ntemplate <> struct is_a_polygon_concept< \n rectangle_concept> { typedef gtl_yes type; }; A polygon set in our terminology is any object that is \nsuitable for an argument to a polygon set operation (intersection, \nunion, disjoint union, etc.) A vector of polygons is a natural and \nconvenient way to define such an object. Vectors a nd lists of \nobjects that model polygon and rectangle concepts a re \nautomatically models of polygon sets concepts. 
A u ser can define \nthe traits for their polygon data type, register it as a \npolygon_concept type by specializing geometry_concept<T> \nand immediately begin using vectors of those polygo ns as \narguments to GTL APIs that expect objects that mode l \npolygon_set_concept . GTL also provides data structures for \npolygon set objects that store the internal represe ntation suitable \nfor consumption by the Booleans algorithms. \n3.2 Generic Functions Provided \nIt is very important to make use of the concept ref inement \ndefinition of parent concept traits with child conc ept objects to \nallow a complete and symmetric library of generic f unctions to be \nimplemented in a manageable amount of code. O(n) g eneric \nfunctions defined for O(m) conceptual types can all ow O(n * m) \nfunction instantiations that all operate on distinc t conceptual \ntypes. A good example of this is the assign() function that \ncopies the second argument to the first and is prov ided in lieu of a \ngeneric free assignment operator, which is not lega l C++ syntax. \nThe assign() function can be called on any pair of objects \nwhere the second is the same conceptual type as the first or a \nrefinement of the first conceptual type. GTL allow s roughly fifty, \nfunctionally correct and semantically sensible, ins tantiations of \nassign() that accept distinct pairs of conceptual types. T here is, \nhowever, only one SFINAE overload of the generic as sign \nfunction for each of thirteen conceptual types. No nonsensical \ncombination of concepts passed to assign() is allowed to \ncompile and the syntax error generated is simply “n o function \nassign that accepts the arguments...” \nThe assign() function alone turns GTL into a Rosetta-\nstone of geometry data type conversions, but the li brary also \nprovides a great many other useful functions such a s area, \nperimeter, contains, distance, extents etc. Becaus e of the \nextensible design, it is very feasible to add new f unctions and \nconcepts over time that work well with the existing functions and \nconcepts. \n3.3 Bending the Rules with view_as \nSometimes use of GTL APIs with given types would be illegal \nbecause the of a conceptual type mismatch, yet the programmer \nknows that some invariant is true at runtime that t he compiler \ncannot know at compile time. For example, that a p olygon is a \nrectangle, or degenerate. In such cases, the progr ammer might \nwant to view an object of a given conceptual type a s if it were a \nrefinement of that conceptual type. In such cases the programmer \ncan concept-cast the object to the refined concept type with a \nview_as function call. A call to view_as provides read only \naccess to the object through the traits associated with the object. \nFor example, some algorithms may be cheaper to appl y on \nconcepts that place restrictions on the geometry da ta through \nrefinement because they can safely assume certain i nvariants. It is \nmuch faster to compute whether a rectangle fully co ntains a \npolygon than it is to compute whether a polygon ful ly contains a \npolygon. Rather than construct a rectangle from th e polygon we can simply view the polygon as a rectangle if we kn ow that to be \nthe case at runtime. \n \nThe ability to perform concept casting, concept ref inement \nand overload generic functions by concept type resu lts in a \ncomplete C++ concepts-based type system. \n4. Booleans Operator Templates \nThe Booleans algorithms are the core algorithmic ca pability \nprovided by GTL. 
An example of a Boolean XOR opera tion on \ntwo polygons is shown in Figure 1. The geometry co ncepts and \nconcept based object model are focused on providing mechanisms \nfor getting data in and out of the core Booleans in the format of \nthe user’s choosing. This enables the user to dire ctly make use of \nthe API provided by GTL for invoking these algorith ms on their \nown data types. This novel ability to make use of library APIs \nwith application data types motivates us to provide the most \nproductive, intuitive, concise and readable API pos sible. We \noverload the C++ bit-wise Boolean arithmetic operat ors to \nperform geometric Boolean operations because it is immediately \nintuitive, maximally concise, highly readable and p roductive for \nthe user to apply. \n4.1 Supported Operators \nA Boolean operator function call is allowed by the library for any \ntwo objects that model a geometry concept for which an area \nfunction call makes sense. These include the operator& for \nintersection, operator| for union, operator^ for disjoint-union \nand operator– for the and-not/subtract Boolean operation. Self-\nassignment versions of these operators are provided for left hand \nside objects that model the mutable polygon set con cepts, which \nare suitable to store the result of a Boolean. Also supported for \nsuch objects are operator+ /operator- when the right hand side \nis a numeric for inflate/deflate, known as offsetti ng or buffering \noperations. There is no complement operation becau se the ability \nto represent geometries of infinite extent is not e xpected of \napplication geometry types. Nor is such an operati on truly needed \nwhen object ^ rectangle with a suitably large rectangle is \nequivalent for practical purposes. \n4.2 Operator Templates Definition \nTo avoid the unnecessary copying of potentially lar ge data \nstructures as the return value of an operator funct ion call that must \nreturn its result by value, the return value of GTL Boolean \noperators is an operator template. The operator te mplate caches \nreferences to the operator arguments and allocates storage for the \nresult of the operation, which remains empty initia lly, allowing \nthe copy of the operator template to be lightweight when it is \nreturned by value. The operator template lazily pe rforms the \nBoolean operation, storing the output only when fir st requested. if(is_rectilinear(polygon) && \n size(polygon) == 4) { \n //polygon must be a rectangle \n //use cheaper O(n) algorithm \n return contains(view_as< \n rectangle_concept>(polygon), polygon2); \n} else { \n //use O(n log n) Booleans-based algorithm \n return contains(polygon, polygon2); \n} \n Operator templates are expected to be temporaries w hen operators \nare chained. For instance (a + b) - c produces an operator \ntemplate as the result of a + b , passes that into operator- and \nanother operator template is returned by operator- . Only later \nwhen the result of that operator- is requested will both the \nBooleans be performed as the operator templates rec ursively \nperform lazy evaluation of the expression. Because the user is not \nexpected to refer to the operator templates by type , but instead use \nthem only as temporaries, there is little danger of the arguments \ngoing out of scope before the expression is evaluat ed. \n4.3 Exemplary User Code \nThe combination of operator templates with the C++ concepts \nbased type system leads to the ability to write exe mplary user code \nusing the library. 
For instance, in an application that defines its \nown CBoundingBox and CPolyon, the following GTL based \ncode snippet becomes possible: \n \nThe application of five GTL library algorithms is a ccomplished in \nonly two lines of code while the design intent of t he code is clear \nand easy to read. This is with application rather than library data \ntypes and no performance is sacrificed for data cop y to satisfy the \nsyntactic requirements of library interfaces or the operator \nsemantics of C++ that require return by value. Thi s abstracts \naway the low-level details of the algorithms and al lows the user to \nprogram at a higher level of abstraction while at t he same time \npreserving the optimality of the code produced. \n5. Generic Sweep-line for Booleans \nA common way to implement Booleans is to first inte rsect \npolygon edges with an algorithm such as Bentley Ott mann [1]. \nAfter line segment intersection, new vertices are c ommonly \nintroduced on edges where intersections were identi fied along \nwith crosslinks that stitch the input polygons toge ther into a graph \ndata structure. The graph data structure is then t raversed and a \nrules-based algorithm ensures that interior edges a re not traversed. \nTraversing exterior edges yields closed polygons. [ 12] This \ntraditional algorithm has several problems. The gr aph data \nstructure is expensive to construct, expensive to s tore and \nexpensive to traverse. When the graph is traversed to output \npolygons the winding direction can be used to ident ify holes, but \nno information stored within the graph helps to ass ociate those \nholes to the outer polygons, requiring that additio nal computation \nbe performed if that information is needed. The al gorithm leads \nto complex implementations of rule logic because it requires that \ndegeneracy be handled explicitly with logic, making it challenging \nto achieve a reliable implementation of the algorit hm. \nA much better approach to Booleans is the applicati on of \nsweep-line to identify interior edges. GTL provide s a generic \nsweep-line algorithm framework that is used to impl ement line \nsegment intersection, Booleans and related algorith ms such as \nphysical connectivity extraction. 5.1 Better Booleans through Calculus \nOur Booleans algorithm differs from the traditional approaches \nfound in the literature. The algorithm most closel y resembles [11] \nin that it can perform polygon clipping and line se gment \nintersection with a single pass of sweep-line. In our problem \nformulation we model a polygon as a mathematical fu nction of \ntwo variables x and y such that for all x/y points inside the \npolygon the function returns one, and for all point s outside the \npolygon the function returns zero. This view of a polygon is \nuseful because it allows us to reason about the pro blem \nmathematically. \nIf we consider a mathematical function of two varia bles, we \ncan apply the partial derivative with respect to ea ch of those \nvariables, which provides the points at which the f unction value \nchanges and the directions and magnitudes in which it is \nchanging. Because our geometry is piece-wise linea r this reduces \nthe two dimensional definition of the polygon funct ion to a \ncollection of zero dimensional quantities at its ve rtices that are \ndirected impulses with magnitude of positive or neg ative one. 
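The sketch below illustrates this impulse view, and the integration described in the next paragraph, for the simplest rectilinear case: axis-parallel rectangles contribute +1/-1 impulses at their corners, impulses from overlapping shapes are simply summed, and running sums in x and then y recover the coverage count of every cell. The grid and rectangles are invented example data, and a dense grid stands in for the sorted vertex events and sweep-line that the real algorithm uses.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int W = 8, H = 8;
    std::vector<std::vector<int>> impulse(H, std::vector<int>(W, 0));

    // Two overlapping rectangles, each given as half-open ranges [x1,x2) x [y1,y2).
    int rects[2][4] = { {1, 5, 1, 5}, {3, 7, 3, 7} };
    for (const auto& r : rects) {
        impulse[r[2]][r[0]] += 1;   // lower-left corner
        impulse[r[2]][r[1]] -= 1;   // lower-right corner
        impulse[r[3]][r[0]] -= 1;   // upper-left corner
        impulse[r[3]][r[1]] += 1;   // upper-right corner
    }

    // Integrate: prefix sums along x, then along y, reconstruct coverage counts.
    std::vector<std::vector<int>> count = impulse;
    for (int y = 0; y < H; ++y)
        for (int x = 1; x < W; ++x) count[y][x] += count[y][x - 1];
    for (int x = 0; x < W; ++x)
        for (int y = 1; y < H; ++y) count[y][x] += count[y - 1][x];

    // count[y][x] is now 0 outside, 1 inside one rectangle, 2 in the overlap;
    // a union keeps cells with count > 0, an intersection cells with count > 1.
    for (int y = H - 1; y >= 0; --y) {
        for (int x = 0; x < W; ++x) std::printf("%d", count[y][x]);
        std::printf("\n");
    }
    return 0;
}
```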
\n \nIntegrating with respect to x and y allows us to re construct \nthe two dimensional polygon function from these zer o \ndimensional derivative quantities. \n \nThis integration with respect to x and y in mathema tical \nterms is analogous to programmatically sweeping fro m left to \nright and from bottom to top along the sweep-line a nd \naccumulating partial sums. Because the polygons ar e piecewise \nlinear this summation is discreet rather than conti nuous and is \ntherefore computationally simple. What this mathem atical model \nfor calculus of polygons allows us to do is superim pose multiple \noverlapping polygons by decomposing them into verte x objects \nthat carry data about direction and magnitude of ch ange along the \nedges that project out of those vertices. Because these vertices are \nzero-dimensional quantities they can be superimpose d simply by \nplacing them together in a collection, trivially so rting them in the \norder they are to be scanned and summing any that h ave the same \npoint in common. When scanned, their magnitudes ar e summed \n(integrated) onto intervals of the sweep-line data structure. The \nsweep-line data structure should ideally be a binar y tree that Figure 4. Integrating polygon-derivative reproduces polygon Figure 3. Derivative of a polygon void foo(list<CPolygon>& result, \nconst list<CPolygon>& a, \nconst list<CPolygon>& b) { \nCBoundingBox domainExtent; \ngtl::extents(domainExtent, a); \nresult += (b & domainExtent) ^ (a - 10); \n} \n provides amortized log(n) lookup, insertion and rem oval of these \nsums, keyed by the lower bound of the interval (whi ch of course \nchanges as the sweep-line moves.) Each such interv al on the \nsweep-line data structure stores the count of the n umber of \npolygons the sweep-line is currently intersecting a long that \ninterval. Notably, the definition allows for count s to be negative. \nA union operation is performed by retaining all edg es for which \nthe count above is greater than zero and the count below is less \nthan or equal to zero or visa-versa. Vertical edge s are a special \ncase because they are parallel to our sweep-line bu t are easily \nhandled by summing them from bottom to top as we pr ogress \nalong the sweep-line. \n \nThe sequence of steps to perform a Boolean OR (unio n) \noperation on two polygons is shown in Figure 5. Th e two input \npolygons are shown overlapping in Figure 5 a. They are \ndecomposed into their derivative points as shown in Figure 5 b. \nLine segment intersection points are inserted as sh own in Figure 5 \nc. These intersection points carry no derivative d ata quantities \nbecause no change in direction of edges takes place at intersection \npoints. The result of a pass of sweep-line to remo ve interior \npoints through integration and output updated deriv ative \nquantities is shown in Figure 5 d. Note that it is the same data-\nformat as the input shown in Figure 5 b and is in f act the \nderivative of the output polygons. This facilitate s the chaining \ntogether of multiple Booleans operations without th e need to \nconvert to and from polygons in between. Note that one point in \nFigure 5 d. has no derivative vector quantities ass igned to it. That \npoint is collinear with the previous and next point in the output \npolygon and therefore doesn’t represent a change in direction of \nedges. 
It is retained because preserving the topol ogy of collinear \npoints in the output is a requirement for some mesh ing algorithms \nthat their input polygons be “linearly consistent.” Such collinear \npoints can be trivially discarded if undesired. A final pass of \nsweep-line can either integrate the output polygon derivative from \nFigure 5 d to form polygons with holes as shown in Figure 5 e or \nkeyhole out the holes to the outer shells as shown in Figure 5 f. It \nis possible to perform line segment intersection, i nterior point \nremoval and form output polygons in a single pass o f sweep-line. \nWe break it down into separate steps for convenienc e. The \ncomputation required for interior point removal, up dating of \nderivative quantities and formation of output polyg ons increases the computational complexity of sweep-line based ge neralized \nline segment intersection such as that described by [1] by only a \nconstant factor whether performed as a single pass or separated \ninto multiple passes. The algorithm presented here is therefore \noptimal because it is well known that polygon clipp ing is bounded \nby the complexity of line segment intersection, as can be trivially \nproven because line segment intersection could be i mplemented \nwith our polygon-clipping algorithm. \nThe output polygons can contain holes, and the inpu t \npolygons can likewise contain holes. Moreover, the output holes \ncan be associated to their outer shells as addition al data available \nin the output or geometrically by keyholing. The o utput can \neasily be obtained as the non-overlapping trapezoid \ndecomposition of polygons sliced along the sweep-li ne orientation \nsimilar to [11]. All of these polygonal forms of o utput are legal \ninputs to the algorithm, and it is closed both on t he polygon \ndomain as well as the polygon derivative domain mea ning that it \nconsumes its own output. The other advantage of th is algorithm \nover the traditional previous polygon clipping algo rithms is that it \ncorrectly handles all degeneracy in inputs implicit ly with the same \nlogic path that handles the normal case. Our algor ithm reduces \nthe complex logic of categorizing states to simple arithmetic \napplied while scanning. It is robust to negative p olygon counts \n(holes outside of shells), high order overlaps of i ntersections and \nedges, co-linear and duplicate points, zero length edges, zero \ndegree angles and self-intersecting/self-overlappin g polygons, all \nby simply applying the same calculus of summing der ivative \nvalues that are easily computed by inspecting each polygon \nvertex. To our knowledge this polygon-derivative d ata-modeling \nand algorithm for polygon clipping has not appeared in past \nliterature and is novel. \n5.2 Generic Booleans Algorithmic Framework \nThe scanning of geometry for a Boolean in GTL perfo rms \nintegration with respect to x and y of changes in c ounts of the \nnumber of polygons from left-to-right/bottom-to-top . The sweep-\nline data structure stores the current count of the number of \npolygons that overlap intervals of the sweep-line. We employ the \nstl map for our sweep-line data structure using a s imilar technique \nas described in [9] to implement a comparison funct or that \ndepends upon the position of the sweep-line. The c ount data type \nstored as the value of the map element is a templat e parameter of \nthe sweep-line algorithm. It is required to be add able, and \ngenerally conform to the integral behaviors. 
The count data type stored as the value of the map element is a template parameter of the sweep-line algorithm. It is required to be addable and to generally conform to integral behaviors. An integer is a valid data type for the count and is used to implement unary Boolean operations. A pair of integers can be used to implement binary Boolean operations such as intersection. A map of property value to count can be used to perform sweep-line on an arbitrary number of input geometry "layers" in a single pass. Other template parameters include an output functor, an output data structure and, of course, the coordinate data type.

Figure 5. Sequence of a Boolean OR (union) operation

template <typename coordinate_type>
struct boolean_op {
  template <typename count, typename output_f>
  struct sweep_line {
    template <typename output_c, typename input_i>
    void scan(output_c& o, input_i b, input_i e);
  };
};

The generic algorithm takes care of all the details of intersecting polygon edges and summing counts, while the output functor, count data type and output data structure control what is done with that information. In this way the algorithm can be adapted to perform multiple operations with minimal effort. The seven simple Booleans supported by GTL employ output functors that differ only in the logic they apply to the count data type:

//intersect
count[0] > 0 && count[1] > 0
//union
count[0] > 0 || count[1] > 0
//self-union
count > 0
//disjoint-union
(count[0] > 0) ^ (count[1] > 0)
//subtract
(count[0] > 0) && !(count[1] > 0)
//self-intersect
count > 1
//self-xor
count % 2

If the logic applied by these output functors to the count results in true on one side of an edge and false on the other, then that edge is exterior and is appended to the output data structure. If partial polygons are stored as part of the count data structure in the sweep-line tree, then the output functor can construct output polygons.

Also implemented with the generic Booleans framework are property merge and connectivity extraction. By using a map of property to polygon count as the data type for the counts stored on the sweep-line, together with an appropriate output functor and output data structure, the connectivity graph of n nodes of polygonal inputs can be computed in a single pass of the algorithm, providing a solution to the spatial overlay join problem. An example of the output of this algorithm for the geometry in Figure 6 a is shown in Figure 6 b. Similarly, the geometry of all unique combinations of overlap between n polygonal inputs can be computed and output by the property merge output functor to a map of polygon sets keyed by sets of property values. An example of the output of property merge for the geometry in Figure 6 a is shown in Figure 6 c. The property merge algorithm is a generalization of two-input Boolean operations to n inputs that solves the n-layer map overlay problem. The generic algorithm can easily be adapted to implement other sweep-line based algorithms, including domain-specific algorithms such as capacitance estimation.
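As an illustration of how one of these predicates plugs into the framework, the sketch below (our own names and simplified interface, not GTL's actual one) wraps the intersect predicate in an output functor: the sweep hands it an edge together with the counts above and below that edge, and the edge is appended to the output exactly when the predicate changes across it.

// Sketch only: an output functor for a binary AND (intersect) operation.
// The edge is exterior, and therefore kept, exactly when the predicate
// evaluates differently on the two sides of the edge.
#include <utility>
#include <vector>

struct Edge { long x1, y1, x2, y2; };
typedef std::pair<int, int> BinaryCount;     // one polygon count per operand

struct IntersectOutput {
  std::vector<Edge> edges;                   // the output data structure
  static bool predicate(const BinaryCount& c)
  { return c.first > 0 && c.second > 0; }    // the //intersect rule above
  void operator()(const Edge& e,
                  const BinaryCount& above, const BinaryCount& below) {
    if (predicate(above) != predicate(below))
      edges.push_back(e);                    // exterior edge of the result
  }
};

Swapping in any of the other six predicates, while leaving the sweep untouched, is all that distinguishes the seven simple Booleans in this simplified picture.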
5.3 Offsetting/Buffering Operations
In addition to Booleans, GTL also provides the capability to offset polygons by "inflating" or "deflating" them by a given resizing value. Polygons grow to encompass all points within the resizing distance of their original geometry; if the resizing distance is negative, polygons shrink. This implies that circular arcs be inserted at protruding corners when a polygon is resized. Such circular arcs are segmented to make the output polygonal. Other options for handling such corners include inserting a single edge instead of an arc, simply maintaining the original topology, or leaving the corner region unfilled. The resizing operations are accomplished by a union operation on the original polygons with a collection of trapezoids constructed from their edges, of width equal to the resizing distance, together with polygons at the corners generated from the two adjacent edge trapezoids. An example of the shapes created from the input 45-degree geometry in Figure 7 a is shown in Figure 7 b, and the result of the union between those shapes and the original, producing the output geometry of an inflate operation, is shown in Figure 7 c. Deflate is accomplished by substituting subtraction for union.

Figure 6. Connectivity Extraction and Property Merge
Figure 7. Resize Example: inflate of a polygon with a hole
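The following sketch illustrates the edge-trapezoid construction under our own simplifying assumptions: floating-point coordinates, counter-clockwise input, and no corner fill, whereas GTL works on integer coordinates and generates the corner polygons as described above. The union and subtraction steps themselves are omitted.

// Sketch only: build one offset rectangle per polygon edge.  Unioning these
// rectangles, the corner fill shapes and the original polygon produces an
// inflate; substituting subtraction produces a deflate.
#include <cmath>
#include <vector>

struct Pt { double x, y; };

std::vector< std::vector<Pt> >
edge_trapezoids(const std::vector<Pt>& poly, double distance)
{
  std::vector< std::vector<Pt> > result;
  const std::size_t n = poly.size();
  for (std::size_t i = 0; i < n; ++i) {
    Pt a = poly[i], b = poly[(i + 1) % n];
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0) continue;                    // skip zero-length edges
    double nx = dy / len, ny = -dx / len;        // outward normal for CCW input
    Pt b2 = { b.x + nx * distance, b.y + ny * distance };
    Pt a2 = { a.x + nx * distance, a.y + ny * distance };
    std::vector<Pt> quad;
    quad.push_back(a);  quad.push_back(b);
    quad.push_back(b2); quad.push_back(a2);
    result.push_back(quad);
  }
  return result;
}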
6. Numerical Robustness
There are three problems in integer arithmetic that must be overcome to implement generalized line segment intersection for polygon clipping: integer overflow, integer truncation of fractional results, and integer snapping of intersection points. Overflow and truncation of fractional results make computing the result of very innocent looking algebraic expressions all but impossible with built-in integer types. The common practice of resorting to floating point arithmetic in these cases is clearly not suitable because the error it introduces is even more problematic. Intersection points must be snapped to the integer grid at the output of the algorithm. However, snapping an intersection point causes a small lateral movement of the line segments it is inserted on. This movement can cause a line segment to cross to the other side of a vertex than was the case in the input, introducing new intersections. If these new intersections have not yet been reached by the forward progress of the line segment intersection sweep-line, they might be handled naturally by the algorithm; however, it is just as likely that they are introduced prior to the current position of the sweep-line, and the algorithm will not have the opportunity to handle them during its forward progress.

A choice must be made about how to handle spurious intersection points introduced by intersection point snapping. It is impossible to both output the idealized "correct" topology of intersected line segments and at the same time output fully intersected line segments with their end points on the integer grid such that no two line segments intersect except at their end points. The invariant that output line segments not intersect except at their end points is crucial because it is a requirement of algorithms that would consume the output. Topologically, the important consideration for polygon clipping is that the output edges describe closed figures. Violating this invariant would, at best, cause polygons to be "dropped" during subsequent execution and, at worst, result in undefined behavior. It is obvious that merging of vertices and the insertion of new vertices are both topological changes that preserve the property of the network that all closed cycles remain closed. These topological changes are allowed to occur as the result of snapping intersection points because we choose to enforce the invariant that no line segments at the output intersect except at their end points.

6.1 Solving Overflow and Truncation
Overflow is easy to handle if the appropriate data types are available. Thirty-two bit integers can be promoted to sixty-four bit, and sixty-four bit can be promoted to multi-precision integers. However, in generic code it becomes impossible to be explicit about when to cast and what to cast to; the same algorithm might be applied to several different coordinate data types when instantiated with different template parameters. We provide indirect access to the appropriate data types through coordinate traits, a coordinate concept and a meta-function: high_precision_type<T>. The coordinate traits allow the lookup of which data type to use for area, difference, Euclidean distance, etc. The coordinate concept is used to provide algorithms that apply these data types correctly, to ease the burden of common operations such as computing the absolute distance between two coordinate values in one-dimensional space. The high precision type is used where built-in data types would not be sufficient. It defaults to long double, which is the highest precision built-in data type, but still potentially insufficient. By specializing for a specific coordinate data type such as integer, a multi-precision rational such as the gmp mpq type [3] can be specified. This can be done outside the GTL library itself, making it easy to integrate license-encumbered numerical data types with GTL and its Boost license without the need for the GTL code itself to depend on license-encumbered header files.

Handling integer truncation of fractional results can be done either by applying the high-precision type (preferably a multi-precision rational) or by algebraic manipulation that minimizes the need for division and other operations that may produce fractional results. Some examples of this are distance comparison, slope comparison and intersection point computation. When comparing the distances between two points it is not necessary to apply the square root operation, because that function is monotonic. When comparing slopes we use the cross product as a substitute for the naive implementation; this avoids division and produces reliable results when performed with integer data types of sufficient precision. Comparing intersection coordinates can also use the cross product to avoid division, because computing the intersection point of two line segments can be algebraically manipulated to require only a single division operation per coordinate, which is performed last.
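The following sketch illustrates this kind of algebraic manipulation with our own minimal helpers; a 64-bit integer stands in for the promoted type that the coordinate traits and high_precision_type<T> machinery would normally supply, and the coordinates are assumed small enough for the products to fit in 64 bits.

// Sketch only: comparisons rewritten to avoid sqrt and division.
#include <cstdint>

struct Pt { std::int32_t x, y; };

// Compare |ab| with |cd| by comparing squared lengths; sqrt is monotonic.
bool closer(const Pt& a, const Pt& b, const Pt& c, const Pt& d)
{
  std::int64_t abx = std::int64_t(b.x) - a.x, aby = std::int64_t(b.y) - a.y;
  std::int64_t cdx = std::int64_t(d.x) - c.x, cdy = std::int64_t(d.y) - c.y;
  return abx * abx + aby * aby < cdx * cdx + cdy * cdy;
}

// Compare the slope of ab with the slope of cd using the cross product:
// (by-ay)/(bx-ax) < (dy-cy)/(dx-cx)  <=>  (by-ay)*(dx-cx) < (dy-cy)*(bx-ax)
// when both denominators are positive (segments oriented left to right).
bool slope_less(const Pt& a, const Pt& b, const Pt& c, const Pt& d)
{
  std::int64_t lhs = (std::int64_t(b.y) - a.y) * (std::int64_t(d.x) - c.x);
  std::int64_t rhs = (std::int64_t(d.y) - c.y) * (std::int64_t(b.x) - a.x);
  return lhs < rhs;
}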
6.2 Solving Spurious Intersections
Non-integer intersection points need to be snapped to the integer grid in the output. We snap each intersection point to the integer grid at the time it is identified. We do this by taking the floor of the fractional value. Integer truncation is platform dependent, but frequently snaps toward zero, which is undesirable because it is not uniformly consistent. Because the integer grid is uniform, the distance a point can be snapped by taking the floor is bounded to a 1x1 unit integer grid region. Our current approach differs from the similar approach described by John Hobby [4] in that he rounds to the nearest integer.

Because the distance a segment can move is bounded, it is predictable. That means we can predict the distance a segment might move due to a future intersection event and handle any problems it would cause pro-actively in the execution of line segment intersection. There are two types of intersection artifacts created by snapping. The first is caused when a line segment moves laterally and crosses a vertex, causing intersection with edges that would not otherwise be intersected. The second is when an output line sub-segment is lengthened by snapping and its end point crosses a stationary line segment. The second case is functionally equivalent to the first, since it does not matter whether a point moves to cross an edge or an edge moves to cross a point. Both can be handled by the same strategy, so we focus on the case of the line segment moving in the description of our strategy. That strategy relies upon the following lemma: all artifacts take place only when a vertex lies within a distance of a line segment bounded by the maximum distance an intersection point can be snapped. This lemma can be trivially proven because the distance that segments can move is bounded, and it is obviously impossible for two non-intersecting line segments to cross each other without one first crossing an end point of the other. Moreover, since the direction of snapping is known to be always downward, it follows that a vertex can only be crossed by a line segment if that line segment intersects the 1x1 integer unit grid box with that vertex in its lower left corner. In these cases, we intersect the line segment with those vertices pro-actively, such that if a future intersection causes the line segment to move, the output topology cannot contain spurious intersection artifacts due to that event. Because the vertex is intersected and not other edges, no additional line segment intersections need be introduced, and no propagation of intersection artifacts through the network can take place. This method is known as snap-rounding and has been much discussed in the literature [4].

Given an algorithm that finds intersections between line segments, it is easy to find intersections with the 1x1 integer grid boxes at segment end points and snapped intersection points by modeling them as several tiny line segments called a widget. Any line segment that intersects the unit grid box will intersect at least one of the segments of the widget shown in Figure 8.

Figure 8. Example: Vertex/Segment Intersection Widget

Importantly, intersection events are detected by the algorithm based only on the input line segment geometry and never on that of the intersected line segments it has produced. Otherwise, numerical error could propagate forward with cascading, increased severity to reach arbitrarily large magnitudes. If such were the case, no assurance of numerical robustness could reasonably be made. The intersection point of two line segments is computed with a single terminal division per coordinate:

//Segment 1: (x11,y11) to (x12, y12)
//Segment 2: (x21,y21) to (x22, y22)
x = (x11 * dy1 * dx2 - x21 * dy2 * dx1 +
     y21 * dx1 * dx2 - y11 * dx1 * dx2) /
    (dy1 * dx2 - dy2 * dx1);
y = (y11 * dx1 * dy2 - y21 * dx2 * dy1 +
     x21 * dy1 * dy2 - x11 * dy1 * dy2) /
    (dx1 * dy2 - dx2 * dy1);
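A minimal sketch of the floor-based snapping described at the start of this subsection follows; the helper names are ours, long long stands in for the high-precision type, and overflow handling is omitted. The exact coordinate is a ratio of integers, and flooring that ratio, rather than letting built-in division truncate toward zero, keeps the snap direction uniform.

// Sketch only: snap an exact rational coordinate num/den to the integer grid
// by taking the floor, independent of sign.  C++ integer division truncates
// toward zero, so a nonzero remainder with mismatched signs must adjust by one.
long long floor_div(long long num, long long den)
{
  long long q = num / den;
  long long r = num % den;
  if (r != 0 && ((r < 0) != (den < 0)))   // numerator and denominator differ in sign
    --q;
  return q;
}

// Snapped x coordinate of the intersection of two segments, using the single
// terminal division of the listing above; dx1, dy1, dx2, dy2 are the segment
// deltas and the denominator is assumed non-zero (segments not parallel).
long long snapped_intersection_x(long long x11, long long y11,
                                 long long dx1, long long dy1,
                                 long long x21, long long y21,
                                 long long dx2, long long dy2)
{
  long long num = x11 * dy1 * dx2 - x21 * dy2 * dx1
                + y21 * dx1 * dx2 - y11 * dx1 * dx2;
  long long den = dy1 * dx2 - dy2 * dx1;
  return floor_div(num, den);
}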
If the snapping direction is uniform, it can be arranged so that vertices snap forward in the scanning direction, allowing evaluation of the widget to be performed by the same sweep-line that finds the intersections. This is our intention. However, currently our arbitrary-angle polygon Booleans apply a much simpler line segment intersection algorithm, implemented to validate the sweep-line version of robust line segment intersection, which is still a work in progress. It compares all pairs of line segments that overlap in their x coordinate range, and all vertices and snapped intersection points with all segments that overlap the x coordinate of those points. It has O(n^2) worst-case runtime complexity, but in the common case it has an expected runtime of O(n^(3/2)) and, in practice, performs nearly as well as the expected O(n log n) runtime of the optimal algorithm, making its use practical even for quite large input data sets.

The combination of handling overflow and applying rational data types to overcome truncation errors, together with the strategy for mitigating errors introduced by intersection point snapping, allows 100% robust integer line segment intersection. The algorithm approximates output intersection points to within one integer unit in x and y and may intersect line segments with points that lie within one integer unit in x and y. This approximates the ideal "correct" output to the extent practical with integer coordinates. The algorithm could be enhanced to round to the closest integer grid point when snapping intersections and to make intersecting segments to nearby vertices predicated upon whether it later becomes necessary to do so. As a practical matter, however, these measures would provide very little benefit to accuracy. That benefit, and more, can be obtained more easily by scaling up the input and applying higher precision integer arithmetic if necessary, which is easily accomplished using GTL.

7. Experimental Results
We benchmarked our own GTL polygon clipping Booleans algorithm against the GPC [7], PolyBoolean [6] and CGAL [8] polygon clippers. We benchmarked the three GTL algorithms, Manhattan, 45-degree, and general Booleans, against all three. These benchmarks were performed on a two-package, 8-core, 3.0 GHz Xeon with 32 GB of RAM, 32 KB of L1 cache and 6 MB of L2 cache. Hyper-threading was disabled. None of the algorithms tested were threaded and all ran in the same process. We compiled the benchmark executable using gcc 4.2.2 with the -O3 and -finline-limit=400 optimization flags.

Inputs consisted of small arbitrary triangles, arbitrarily distributed over square domains of progressively increasing size. Runtimes measured were the wall-clock execution time of the intersection operation on the geometry contained within two such domains. The overlapping triangles in each domain had to be merged first with GTL to make them legal inputs for the other three libraries' Boolean operations. For the Manhattan (axis-aligned rectilinear) benchmark we used small arbitrary rectangles instead of triangles.

Results of our benchmarking are shown in Figures 9, 10 and 11. Note that in Figure 11 the last two data points for PolyBoolean are absent. PolyBoolean suffered from unexplained crashes as well as from erroneously returning an error code due to a bug in its computation of whether a hole is contained within a polygon. This prevented PolyBoolean from successfully processing large data sets. CGAL had a similar problem that prevented it from processing data sets larger than those in Figure 11.
We conclude that this is a bug in CGAL because both GTL and GPC were always successful in processing the same polygons. This issue with CGAL was observed regardless of which kernel was employed.

Figure 9. GPC/GTL Scalability Comparison
Figure 10. Rectilinear Scalability Comparison
Figure 11. Small Scale Performance Comparison

All libraries performed roughly within the range of 2X faster to 2X slower than GTL for the small inputs shown in Figure 11. We feel that such small constant-factor variation is not significant, since it could likely be remedied by profile-guided performance tuning of implementation details. We did not apply empirical complexity measurement to the data sets in the General Performance plot because non-linearity in micro-architecture performance, as memory footprints start to exceed the L1 cache size, renders such analysis on very small input sizes faulty.

While successful at processing all inputs, the GPC library's runtime scaled sub-optimally for large inputs, as can be seen in Figure 10. The empirical runtime complexity of GPC from that plot is n^2.6, which can be clearly seen in its steep slope relative to GTL. We were unable to measure CGAL or PolyBoolean for this benchmark because of the bugs that effectively prevented them from processing inputs larger than those shown in Figure 11. Also in Figure 9 we show the portion of GTL runtime spent in the core Boolean sweep as gtlb. Note that the runtime of GTL is dominated by the currently suboptimal line segment intersection, which we plan to rectify eventually by integrating line segment intersection into the core Boolean sweep at a constant-factor overhead.

All libraries were successful in processing large-scale Manhattan polygon inputs. There is a 100X variability in runtimes, however, as can be seen in Figure 10. The Manhattan Booleans algorithm in GTL is labeled gtl90 in the figure, and the 45-degree Booleans algorithm is labeled gtl45. Note that the 45-degree algorithm is optimal, computing line segment intersection in the same sweep as the Boolean operation, and performs within a small constant factor of the similar 90-degree algorithm. Again, we show the portion of the general Booleans algorithm labeled gtlb. We believe that, when upgraded with optimal line segment intersection, the general Booleans algorithm could perform closer to the gtlb curve than the current performance, which is labeled gtl. GPC and PolyBoolean both turn in suboptimal n^1.8 runtime scaling in this benchmark. CGAL appears to be optimal for this benchmark, scaling at a near-linear n^1.07. Frequently we have observed that O(n log n) algorithms show an empirical scaling factor of less than one for input ranges that are modest in size, as we see in both log-log plots for gtlb as well as for gtl90. This is because the micro-architecture has advanced features, such as speculative memory pre-fetch, that become more effective as input vector sizes grow. However, it clearly demonstrates that empirical scaling observations must be interpreted cautiously when drawing conclusions about algorithmic complexity and optimality. Our review of the GPC and PolyBoolean code leads us to believe that their line segment intersection algorithms should perform at around n^1.5 log n on the test data we generated. Our conclusion that they are suboptimal is not based upon empirical data alone.
8. Conclusion
Our C++ Concepts based API for planar polygon manipulations makes these powerful algorithms readily accessible to application developers. Improvements in our Booleans algorithm over prior work free users of that API from the hassles of accommodating library restrictions and conventions imposed upon input geometries, while the C++ Concepts based API frees them from syntactic restrictions on how the algorithms may be applied. Because our library compares favorably with similar open-source libraries, in terms of both performance and feature set, while providing a superior API based upon generic programming techniques, we feel that it is a good candidate for acceptance into Boost and plan to pursue review this year.

9. Acknowledgments
Our thanks to Fernando Cacciola for technical guidance and editorial review and to Intel for supporting our work.

10. References
[1] Bentley, J.L., Ottmann, T.A. Algorithms for reporting and counting geometric intersections. IEEE Transactions on Computers, 9, (C-28), 643-647.
[2] Gehrels, B., Lalande, B. Generic Geometry Library, 2009. Retrieved February 17, 2009, from Boost: https://svn.boost.org/svn/boost/sandbox/ggl
[3] GMP Gnu Multi-Precision Library, 2009. Retrieved August 9, 2008, from gmplib.org: http://gmplib.org
[4] Hobby, J. Practical segment intersection with finite precision output. Technical Report 93/2-27, Bell Laboratories (Lucent Technologies), 1993.
[5] Kohn, B. Generative Geometry Library, 2008. Retrieved July 22, 2008, from Boost: http://www.boostpro.com/vault/index.php?action=downloadfile&filename=generative_geometry_algorithms.zip&directory=Math - Geometry&
[6] Leonov, M. PolyBoolean, 2009. Retrieved March 15, 2009, from Complex A5 Co. Ltd.: http://www.complex-a5.ru/polyboolean/index.html
[7] Murta, A. GPC General Polygon Clipper library, 2009. Retrieved March 15, 2009, from The University of Manchester: http://www.cs.man.ac.uk/~toby/alan/software/
[8] Pion, S. CGAL 3.3.1, 2007. Retrieved October 10, 2008, from CGAL: http://www.cgal.org
[9] Ruud, B. Building a Mutable Set, 2003. Retrieved March 3, 2009, from Dr. Dobb's Portal: http://www.ddj.com/cpp/184401664
[10] Dos Reis, G. and Stroustrup, B. 2006. Specifying C++ concepts. In Conference Record of the 33rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (Charleston, South Carolina, USA, January 11-13, 2006). POPL '06. ACM, New York, NY, 295-308. DOI= http://doi.acm.org/10.1145/1111037.1111064
[11] Vatti, B. R. 1992. A generic solution to polygon clipping. Commun. ACM 35, 7 (Jul. 1992), 56-63. DOI= http://doi.acm.org/10.1145/129902.129906
[12] Weiler, K. 1980. Polygon comparison using a graph representation. In Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques (Seattle, Washington, United States, July 14-18, 1980). SIGGRAPH '80. ACM, New York, NY, 10-18. DOI= http://doi.acm.org/10.1145/800250.8074
{ "category": "App Definition and Development", "file_name": "GTL_boostcon2009.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "2009-05-08Lecture\nheld at the Boost Library Conference 2009Joachim Faulhaber\nSlide Design by Chih-Hao Tsaihttp://www.chtsai.orgCopyright © Joachim Faulhaber 2009Distributed under Boost Software Licence 1.0Updated version 3.2.0 2009-12-02An Introduction to the \nInterval Template Library2\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Lecture Outline\nBackground and Motivation\nDesign\nExamples\nSemantics\nImplementation\nFuture Works\nAvailability3\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Background and Motivation\nInterval containers simplified the implementation of \ndate and time related tasks\nDecomposing “histories” of attributed events into \nsegments with constant attributes.\nWorking with time grids, e.g. a grid of months.\nAggregations of values associated to date or time \nintervals.\n… that occurred frequently in programs like\nBilling modules\nTherapy scheduling programs\nHospital and controlling statistics4\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nBackground is the date time problem domain ...\n… but the scope of the Itl as a generic library is more \ngeneral: \nan interval_set is a set\n that is implemented as a set of intervals \nan interval_map is a map\n that is implemented as a map of interval value pairs5\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Aspects\nThere are two aspects in the design of interval \ncontainers\nFundamental aspect\ninterval_set <int> mySet;\nmySet.insert(42);\nbool has_answer = mySet.contains(42);interval_set <int> mySet;\nmySet.insert(42);\nbool has_answer = mySet.contains(42);\nOn the fundamental aspect an interval_set can be used \njust as a set of elements\nSet theoretic operations are supported\nAbstracts from sequential and segmental information\nSegmental aspect\nAllows to access and iterate over the segments of \ninterval containers6\nSlide Design by Chih-Hao Tsai http://www.chtsai.org\n Copyright © Joachim Faulhaber 2009Design\nAddability and Subtractability\nAll of itl's (interval) containers are Addable and \nSubtractable \nThey implement operators += , +, -= and -\n+= -=\n sets set union set difference\n maps ? ?\nA possible implementation for maps\nPropagate addition/subtraction to the associated values \n. . . or aggregate on overlap\n. . . 
  . . . or aggregate on collision

Design: Aggregate on overlap

  (I -> a) + (J -> b)  yields  (I-J -> a), (I∩J -> a + b), (J-I -> b)

  Decompositional effect on the intervals, accumulative effect on the associated values.
  (I, J: intervals; a, b: associated values)

Design: Aggregate on overlap, a minimal example

typedef itl::set<string> guests;
interval_map<time, guests> party;

party += make_pair(
  interval<time>::rightopen(20:00, 22:00), guests("Mary"));
party += make_pair(
  interval<time>::rightopen(21:00, 23:00), guests("Harry"));

// party now contains
[20:00, 21:00) -> {"Mary"}
[21:00, 22:00) -> {"Harry","Mary"}   //guest sets aggregated
[22:00, 23:00) -> {"Harry"}

Design: The Itl's class templates

  Granularity   Style        Sets                    Maps
  interval      joining      interval_set            interval_map
                separating   separate_interval_set
                splitting    split_interval_set      split_interval_map
  element                    set                     map

Design: Interval Combining Styles: Joining
Intervals are joined on overlap or on touch
. . . for maps, if associated values are equal.
Keeps interval_maps and sets in a minimal form.

  interval_set:
    {[1 3)}      + [2 4)       = {[1 4)}
                 + [4 5)       = {[1 5)}
  interval_map:
    {[1 3)->1}   + [2 4)->1    = {[1 2)->1  [2 3)->2  [3 4)->1}
                 + [4 5)->1    = {[1 2)->1  [2 3)->2  [3 5)->1}

Design: Interval Combining Styles: Splitting
Intervals are split on overlap and kept separate on touch.
All interval borders are preserved (insertion memory).

  split_interval_set:
    {[1 3)}      + [2 4)       = {[1 2)  [2 3)  [3 4)}
                 + [4 5)       = {[1 2)  [2 3)  [3 4)  [4 5)}
  split_interval_map:
    {[1 3)->1}   + [2 4)->1    = {[1 2)->1  [2 3)->2  [3 4)->1}
                 + [4 5)->1    = {[1 2)->1  [2 3)->2  [3 4)->1  [4 5)->1}

Design: Interval Combining Styles: Separating
Intervals are joined on overlap but kept separate on touch.
Preserves borders that are never crossed (preserves a hidden grid).

  separate_interval_set:
    {[1 3)}      + [2 4)       = {[1 4)}
                 + [4 5)       = {[1 4)  [4 5)}

Examples: A few instances of intervals (interval.cpp)

interval<int> int_interval = interval<int>::closed(3,7);
interval<double> sqrt_interval
  = interval<double>::rightopen(1/sqrt(2.0), sqrt(2.0));
interval<std::string> city_interval
  = interval<std::string>::leftopen("Barcelona", "Boston");
interval<boost::ptime> time_interval
  = interval<boost::ptime>::open(
      time_from_string("2008-05-20 19:30"),
      time_from_string("2008-05-20 23:00"));

Examples: A way to iterate over months and weeks (month_and_week_grid.cpp)

#include <boost/itl/gregorian.hpp>          //boost::gregorian plus adapter code
#include <boost/itl/split_interval_set.hpp>

// A split_interval_set of gregorian dates as date_grid.
typedef split_interval_set<boost::gregorian::date> date_grid;

// Compute a date_grid of months using boost::gregorian.
date_grid month_grid(const interval<date>& scope)
{
  date_grid month_grid;
  // Compute a date_grid of months using boost::gregorian.
  . . .
  return month_grid;
}

// Compute a date_grid of weeks using boost::gregorian.
date_grid week_grid(const interval<date>& scope)
{
  date_grid week_grid;
  // Compute a date_grid of weeks using boost::gregorian.
  . . .
  return week_grid;
}

Examples: A way to iterate over months and weeks

void month_and_time_grid()
{
  date someday = day_clock::local_day();
  date thenday = someday + months(2);
  interval<date> scope = interval<date>::rightopen(someday, thenday);

  // An intersection of the month and week grids ...
  date_grid month_and_week_grid
    = month_grid(scope) & week_grid(scope);

  // ... allows to iterate months and weeks. Whenever a month
  // or a week changes there is a new interval.
  for(date_grid::iterator it = month_and_week_grid.begin();
      it != month_and_week_grid.end(); it++)
  { . . . }

  // We can also intersect the grid into an interval_map to make
  // sure that all intervals are within months and week bounds.
  interval_map<boost::gregorian::date, some_type> accrual;
  compute_some_result(accrual, scope);
  accrual &= month_and_week_grid;
}

Examples: Aggregating with interval_maps
Computing averages via implementing operator += (partys_guest_average.cpp)

class counted_sum
{
public:
  counted_sum():_sum(0),_count(0){}
  counted_sum(int sum):_sum(sum),_count(1){}

  int sum()const   { return _sum; }
  int count()const { return _count; }
  double average()const
  { return _count==0 ? 0.0 : _sum/static_cast<double>(_count); }

  counted_sum& operator += (const counted_sum& right)
  { _sum += right.sum(); _count += right.count(); return *this; }

private:
  int _sum;
  int _count;
};

bool operator == (const counted_sum& left, const counted_sum& right)
{ return left.sum()==right.sum() && left.count()==right.count(); }
Examples: Aggregating with interval_maps
Computing averages via implementing operator +=

void partys_height_average()
{
  interval_map<ptime, counted_sum> height_sums;

  height_sums += (
    make_pair(
      interval<ptime>::rightopen(
        time_from_string("2008-05-20 19:30"),
        time_from_string("2008-05-20 23:00")),
      counted_sum(165))                    // Mary is 1,65 m tall.
  );

  // Add height of more party guests . . .

  interval_map<ptime, counted_sum>::iterator height_sum_ =
    height_sums.begin();
  while(height_sum_ != height_sums.end())
  {
    interval<ptime> when = height_sum_->first;
    double height_average = (*height_sum_++).second.average();

    cout << "[" << when.first() << " - " << when.upper() << ")"
         << ": " << height_average << " cm" << endl;
  }
}

Examples
Interval containers allow to express a variety of date and time operations in an easy way.
Example man_power.cpp ...
Subtract weekends and holidays from an interval_set:
  worktime -= weekends(scope)
  worktime -= german_reunification_day
Intersect an interval_map with an interval_set:
  claudias_working_hours &= worktime
Subtract an interval_set from an interval_map:
  claudias_working_hours -= claudias_absense_times
Adding interval_maps:
  interval_map<date, int> manpower;
  manpower += claudias_working_hours;
  manpower += bodos_working_hours;

Examples: Interval_maps can also be intersected
Example user_groups.cpp

typedef boost::itl::set<string> MemberSetT;
typedef interval_map<date, MemberSetT> MembershipT;

void user_groups()
{
  . . .
  MembershipT med_users;
  // Compute membership of medical staff
  med_users += make_pair(member_interval_1, MemberSetT("Dr.Jekyll"));
  med_users += . . .

  MembershipT admin_users;
  // Compute membership of administration staff
  admin_users += make_pair(member_interval_2, MemberSetT("Mr.Hyde"));
  . . .

  MembershipT all_users   = med_users + admin_users;
  MembershipT super_users = med_users & admin_users;
  . . .
}

Semantics
The semantics of itl sets is based on a concept itl::Set.
itl::set, interval_set, split_interval_set and separate_interval_set are models of concept itl::Set.

// Abstract part
empty set:         Set::Set()
subset relation:   bool Set::contained_in(const Set& s2)const
equality:          bool is_element_equal(const Set& s1, const Set& s2)
set union:         Set& operator += (Set& s1, const Set& s2)
                   Set  operator +  (const Set& s1, const Set& s2)
set difference:    Set& operator -= (Set& s1, const Set& s2)
                   Set  operator -  (const Set& s1, const Set& s2)
set intersection:  Set& operator &= (Set& s1, const Set& s2)
                   Set  operator &  (const Set& s1, const Set& s2)

// Part related to sequential ordering
sorting order:     bool operator < (const Set& s1, const Set& s2)
lexicographical equality:
                   bool operator == (const Set& s1, const Set& s2)

Semantics
The semantics of itl maps is based on a concept itl::Map.
itl::map, interval_map and split_interval_map are models of concept itl::Map.

// Abstract part
empty map:         Map::Map()
submap relation:   bool Map::contained_in(const Map& m2)const
equality:          bool is_element_equal(const Map& m1, const Map& m2)
map union:         Map& operator += (Map& m1, const Map& m2)
                   Map  operator +  (const Map& m1, const Map& m2)
map difference:    Map& operator -= (Map& m1, const Map& m2)
                   Map  operator -  (const Map& m1, const Map& m2)
map intersection:  Map& operator &= (Map& m1, const Map& m2)
                   Map  operator &  (const Map& m1, const Map& m2)

// Part related to sequential ordering
sorting order:     bool operator < (const Map& m1, const Map& m2)
lexicographical equality:
                   bool operator == (const Map& m1, const Map& m2)
Semantics
Defining semantics of itl concepts via sets of laws (aka C++0x axioms).
Checking law sets via automatic testing: a Law Based Test Automaton, LaBatea.
  Generate a law instance
  Apply the law to the instance
  Collect violations

  Commutativity<T a, U b, +>:  a + b = b + a;

Semantics: Lexicographical Ordering and Equality
For all itl containers operator < implements a strict weak ordering.
The induced equivalence of this ordering is lexicographical equality, which is implemented as operator ==.
This is in line with the semantics of SortedAssociativeContainers.

Semantics: Subset Ordering and Element Equality
For all itl containers the function contained_in implements a partial ordering.
The induced equivalence of this ordering is equality of elements, which is implemented as the function is_element_equal.

Semantics: itl::Sets
All itl sets implement a Set Algebra, which is to say they satisfy a "classical" set of laws ...
... using is_element_equal as equality:
  Associativity, Neutrality, Commutativity (for + and &)
  Distributivity, DeMorgan, Symmetric Difference
Most of the itl sets satisfy the classical set of laws even if ...
... lexicographical equality (operator ==) is used.
The differences reflect proper inequalities in sequence that occur for separate_interval_set and split_interval_set.

Semantics: Concept Induction / Concept Transition
The semantics of itl::Maps appears to be determined by the codomain type of the map:

                        is model of      if    example
  Map<D,Monoid>         Monoid                 interval_map<int, string>
  Map<D,Set>            Set              C1    interval_map<int, set<int>>
  Map<D,CommutMonoid>   CommutMonoid           interval_map<int, unsigned>
  Map<D,AbelianGroup>   AbelianGroup     C2    interval_map<int, int, total>

Conditions C1 and C2 restrict the Concept Induction to specific map traits:
  C1: Value pairs that carry a neutral element as associated value are always deleted (trait: absorbs_neutrons).
  C2: The map is total: non-existing keys are implicitly mapped to neutral elements (trait: is_total).
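As a small, hand-rolled illustration of what one generated law instance amounts to (this is not LaBatea itself; the header name and exact call signatures are assumptions based on the preceding slides), the commutativity law for interval_set addition can be checked with the operations of the itl::Set concept, using is_element_equal as the equality:

// Sketch only: one hand-written instance of Commutativity<T a, U b, +>,
// checked with is_element_equal, the equality of the itl::Set concept.
// LaBatea generates such instances, applies the law and collects violations
// automatically.
#include <boost/itl/interval_set.hpp>   // assumed header, by analogy to the slides
#include <cassert>

using namespace boost::itl;

void check_commutativity_instance()
{
  interval_set<int> a, b;
  a += interval<int>::rightopen(1, 5);
  b += interval<int>::closed(3, 9);

  assert(is_element_equal(a + b, b + a));   // a + b = b + a
}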
Implementation
Itl containers are implemented based on std::set and std::map.
Basic operations like adding and subtracting intervals or interval value pairs perform with a time complexity between* amortized O(log n) and O(n), where n is the number of intervals of a container.
Operations like addition and subtraction of whole containers have a worst case complexity of O(m log(n+m)), where n and m are the numbers of intervals of the containers to combine.
*: Consult the library documentation for more detailed information.

Future Works
Implementing interval_maps of sets more efficiently.
Revision of features of the extended itl (itl_plus.zip):
  Decomposition of histories: k histories h1, ..., hk with attribute types A1, ..., Ak are "decomposed" to a product history of tuples of attribute sets:
    (h1<T,A1>, ..., hk<T,Ak>) -> h<T, (set<A1>, ..., set<Ak>)>
  Cubes (generalized crosstables): applying aggregate on collision to maps of tuple value pairs in order to organize hierarchical data and their aggregates.

Availability
Itl project on sourceforge (version 2.0.1): http://sourceforge.net/projects/itl
Latest version on boost vault/Containers (3.2.0): http://www.boostpro.com/vault/ -> containers
  itl_3_2_0.zip: Core itl in preparation for boost
  itl_plus_3_2_0.zip: Extended itl including histories, cubes and automatic validation (LaBatea)
Online documentation at http://www.herold-faulhaber.de/
  Doxygen generated docs (version 2.0.1): http://www.herold-faulhaber.de/itl/
  Latest boost style documentation (version 3.2.0): http://www.herold-faulhaber.de/boost_itl/doc/libs/itl/doc/html/

Availability
Boost sandbox: https://svn.boost.org/svn/boost/sandbox/itl/
Core itl: interval containers proposed for boost:
  https://svn.boost.org/svn/boost/sandbox/itl/boost/itl/
  https://svn.boost.org/svn/boost/sandbox/itl/libs/itl/
Extended itl_xt: interval_bitset, "histories", cubes:
  https://svn.boost.org/svn/boost/sandbox/itl/boost/itl_xt/
  https://svn.boost.org/svn/boost/sandbox/itl/libs/itl_xt/
Validater LaBatea: compiles with msvc-8.0 or newer, gcc-4.3.2 or newer:
  https://svn.boost.org/svn/boost/sandbox/itl/boost/validate/
  https://svn.boost.org/svn/boost/sandbox/itl/libs/validate/
{ "category": "App Definition and Development", "file_name": "intro_to_itl.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "Planning Guide\nAbstract\nThis book provides recommendations for those considering the use of VoltDB.\nV11.3Planning Guide\nV11.3\nCopyright © 2008-2022 Volt Active Data, Inc.\nThe text and illustrations in this document are licensed under the terms of the GNU Affero General Public License Version 3 as published by the\nFree Software Foundation. See the GNU Affero General Public License ( http://www.gnu.org/licenses/ ) for more details.\nMany of the core VoltDB database features described herein are part of the VoltDB Community Edition, which is licensed under the GNU Affero\nPublic License 3 as published by the Free Software Foundation. Other features are specific to the VoltDB Enterprise Edition and VoltDB Pro, which\nare distributed by Volt Active Data, Inc. under a commercial license.\nThe VoltDB client libraries, for accessing VoltDB databases programmatically, are licensed separately under the MIT license.\nYour rights to access and use VoltDB features described herein are defined by the license you received when you acquired the software.\nVoltDB is a trademark of Volt Active Data, Inc.\nVoltDB software is protected by U.S. Patent Nos. 9,600,514, 9,639,571, 10,067,999, 10,176,240, and 10,268,707. Other patents pending.\nThis document was generated on March 07, 2022.Table of Contents\nPreface ............................................................................................................................. vi\n1. Organization of this Manual ..................................................................................... vi\n2. Other Resources ..................................................................................................... vi\n1. The Planning Process ....................................................................................................... 1\n2. Proof of Concept ............................................................................................................. 2\n2.1. Effective Partitioning ............................................................................................. 2\n2.2. Designing the Stored Procedures .............................................................................. 2\n3. Choosing Hardware ......................................................................................................... 4\n3.1. The Dimensions of Hardware Sizing ........................................................................ 4\n3.2. Sizing for Throughput ............................................................................................ 4\n3.3. Sizing for Capacity ............................................................................................... 5\n3.4. Sizing for Durability .............................................................................................. 7\n4. Sizing Memory ............................................................................................................... 9\n4.1. Planning for Database Capacity ............................................................................... 9\n4.1.1. Sizing Database Tables .............................................................................. 10\n4.1.2. Sizing Database Indexes ............................................................................. 11\n4.1.3. An Example of Database Sizing .................................................................. 12\n4.2. Distributing Data in a Cluster ................................................................................ 12\n4.2.1. 
Data Memory Usage in Clusters .................................................................. 13\n4.2.2. Memory Requirements for High Availability (K-Safety) ................................... 13\n4.3. Planning for the Server Process (Java Heap Size) ...................................................... 13\n4.3.1. Attributes that Affect Heap Size .................................................................. 14\n4.3.2. Guidelines for Determining Java Heap Size ................................................... 14\n5. Benchmarking ............................................................................................................... 16\n5.1. Benchmarking for Performance .............................................................................. 16\n5.1.1. Designing Your Benchmark Application ....................................................... 16\n5.1.2. How to Measure Throughput ...................................................................... 18\n5.1.3. How to Measure Latency ........................................................................... 19\n5.2. Determining Sites Per Host ................................................................................... 20\niiiList of Figures\n5.1. Determining Optimal Throughput and Latency ................................................................. 17\n5.2. Determining Optimal Sites Per Host ............................................................................... 20\nivList of Tables\n3.1. Quick Estimates for Memory Usage By Datatype ............................................................... 6\n4.1. Memory Requirements For Tables By Datatype ................................................................ 11\nvPreface\nThis book provides information to help those considering the use of VoltDB. Choosing the appropriate\ntechnology is more than deciding between one set of features and another. It is important to understand\nhow the product fits within the existing technology landscape and what it requires in terms of systems,\nsupport, etc. This books provides guidelines for evaluating VoltDB, including sizing hardware, memory,\nand disks based on the specific requirements of your application and the VoltDB features you plan to use.\n1. Organization of this Manual\nThis book is divided into 5 chapters:\n•Chapter 1, The Planning Process\n•Chapter 2, Proof of Concept\n•Chapter 3, Choosing Hardware\n•Chapter 4, Sizing Memory\n•Chapter 5, Benchmarking\n2. Other Resources\nThis book assumes you are already familiar with the VoltDB feature set. The choice of features, especially\nthose related to availability and durability, will impact sizing. Therefore, if you are new to VoltDB, we\nencourage you to visit the VoltDB web site to familiarize yourself with the product options and features.\nYou may also find it useful to review the following books to better understand the process of designing,\ndeveloping, and managing VoltDB applications:\n•VoltDB Tutorial , a quick introduction to the product and is recommended for new users\n•Using VoltDB , a complete reference to the features and functions of the VoltDB product\n•VoltDB Administrator's Guide , information on managing VoltDB clusters\nThese books and more resources are available on the web from http://www.voltdb.com/ .\nviChapter 1. The Planning Process\nWelcome to VoltDB, a best-in-class database designed specifically for high volume transactional applica-\ntions. 
Since you are reading this book, we assume you are considering the use of VoltDB for an existing or\nplanned database application. The goal of this book is to help you understand the impact of such a decision\non your computing environment, with a particular focus on hardware requirements.\nTechnology evaluation normally involves several related but separate activities:\n•Feature Evaluation\nThe goal of the feature evaluation is to determine how well a product's features match up to the needs of\nyour application. For VoltDB, we strongly encourage you to visit our website and review the available\nproduct overviews and technical whitepapers to see if VoltDB is right for you. If you need additional\ninformation, please feel free to contact us directly.\n•Proof of Concept\nThe proof of concept, or POC, is usually a small application that emulates the key business requirements\nusing the proposed technology. The goal of the POC is to verify, on a small scale, that the technology\nperforms as expected for the target usage.\n•Hardware Planning\nOnce you determine that VoltDB is a viable candidate for your application, the next step is to deter-\nmine what hardware environment is needed to run it. Hardware sizing requires an understanding of the\nrequirements of the business application (volume, throughput, and availability needs) as well as the\ntechnology. The primary goal of this book is to provide the necessary information about VoltDB to help\nyou perform this sizing exercise against the needs of your specific application.\n•Benchmarking\nHaving determined the feasibility of the technology, the final activity is to perform benchmarking to\nevaluate its performance against the expected workload. Benchmarking is often performed against the\nproof of concept or a similar prototype application. Benchmarking can help validate and refine the\nhardware sizing calculations.\nLet's assume you have already performed a feature evaluation, which is why you are reading this book.\nYou are now ready to take the next step. The following chapters provide practical advice when building a\nproof of concept, sizing hardware, and benchmarking a solution with VoltDB.\nNote that this book does not help with the detailed application design itself. For recommendations on\napplication design we recommend the other books about VoltDB. In particular, Using VoltDB .\n1Chapter 2. Proof of Concept\nA proof of concept (POC) is a small application that tests the key requirements of the proposed solution. For\ndatabase applications, the POC usually focuses on a few critical transactions, verifying that the database\ncan support the proposed schema, queries, and, ultimately, the expected volume and throughput. (More\non this in the chapter on Benchmarking.)\nA POC is not a full prototype. Instead, it is just enough code to validate that the technology meets the need.\nDepending upon the specific business requirements, each POC emphasizes different database functional-\nity. Some may focus primarily on capacity, some on scalability, some on throughput, etc.\nWhatever the business requirements, there are two key aspects of VoltDB that must be designed correctly\nto guarantee a truly effective proof of concept. The following sections discuss the use of partitioning and\nstored procedures in POCs.\n2.1. Effective Partitioning\nVoltDB is a distributed database. The data is partitioned automatically, based on a partitioning column you,\nas the application developer, specify. 
You do not need to determine where each record goes — VoltDB\ndoes that for you.\nHowever, to be effective, you much choose your partitioning columns carefully. The best partitioning\ncolumn is not always the most obvious one.\nThe important thing to keep in mind is that VoltDB partitions both the data and the work. For best perfor-\nmance you want to partition the database tables and associated queries so that the most common transac-\ntions can be run in parallel. That is, the transactions are, in VoltDB parlance, \"single-partitioned\".\nTo be single-partitioned, a transaction must only contain queries that access tables based on a specific\nvalue for the partitioning column. In other words, if a transaction is partitioned on the EmpID column of the\nEmployee table (and that is the partitioning column for the table), all queries in the transaction accessing\nthe Employee table must include the constraint WHERE Employee.EmpID = {value} .\nTo make single-partitioned transactions easier to create, not all tables have to be partitioned. Tables that\nare not updated frequently can be replicated, meaning they can be accessed in any single-partitioned trans-\naction, no matter what the partitioning key value.\nWhen planning the partitioning schema for your database, the important questions to ask yourself are:\n•Which are the critical, most frequent queries? (These are the transactions you want to be single-parti-\ntioned.)\n•For each critical query, what database tables does it access and using what column?\n•Can any of those tables be replicated? (Replicating smaller, less frequently updated tables makes joins\nin single-partitioned procedures easier.)\n2.2. Designing the Stored Procedures\nDesigning the schema and transactions to be single-partitioned is one thing. It is equally important to make\nsure that the stored procedures operate in a way that lets VoltDB do its job effectively.\nThe first step is to write the transactions as stored procedures that are loaded into the schema. Do not write\ncritical transactions as ad hoc queries to the database. VoltDB provides the @AdHoc system procedure\n2Proof of Concept\nfor executing arbitrary SQL queries, which can be helpful when building early prototypes to test queries\nor occasionally validate the content of the database. But @AdHoc queries often run as multi-partitioned\ntransactions and, therefore, should not be used for critical or repetitive transactions.\nThe next step is to ensure that single-partitioned stored procedures are correctly identified as such in the\nschema using the PARTITION ON clause in the CREATE PROCEDURE statement and specifying the\nappropriate partitioning column.\nFinally, when designing for maximum throughput, use asynchronous calls to invoke the stored procedures\nfrom within the POC application. Asynchronous calls allow VoltDB to queue the transactions (and their\nresponses), avoiding any delays between when a transaction completes, the results are returned to the POC\napplication, and the next procedure is invoked.\nChapter 5, Benchmarking later in this book provides additional suggestions for effectively designing and\ntesting proof of concept applications.\n3Chapter 3. Choosing Hardware\nVoltDB is designed to provide world class throughput on commodity hardware. You do not need the latest\nor most expensive hardware to achieve outstanding performance. 
In fact, a few low- to mid-range servers\ncan easily outperform a single high end server, since VoltDB throughput tends to scale linearly with the\nnumber of servers in the cluster.\nPeople often ask us at VoltDB \"what type of servers should we use and how many?\" The good news is\nthat VoltDB is very flexible. It works well on a variety of configurations. The bad news is that the true\nanswer to the question is \"it depends.\" There is no one configuration that is perfect for all situations.\nLike any technology question, there are trade offs to be made when choosing the \"right\" hardware for your\napplication. This chapter explains what those trade offs are and provides some general rules of thumb that\ncan help when choosing hardware for running a VoltDB database.\n3.1. The Dimensions of Hardware Sizing\nThere are three key dimensions to sizing individual servers: the number and speed of the processors, the\ntotal amount of memory, and the size and type of disks available. When sizing hardware for a distributed\ndatabase such as VoltDB, there is a fourth dimension to consider: the number of servers to use.\nEach of these dimensions impacts different aspects of VoltDB performance. The number of processors\naffects how many partitions can be run on each server and, as a result, throughput. The available memory\nobviously impacts capacity, or the volume of data that can be stored. The size, number, and type of disks\nimpacts the performance of availability features such as snapshots and command logging.\nHowever, they also interact. The more memory per server, the longer it takes to write a snapshot or for a\nnode to rejoin after a failure. So increasing the number of servers but reducing the amount of memory per\nserver may reduce the impact of durability on overall database performance. These are the sorts of trade\noffs that need to be considered.\nThe following sections discuss hardware sizing for three key aspects of a VoltDB application:\n•Throughput\n•Capacity\n•Durability\n3.2. Sizing for Throughput\nThe minimum hardware requirements for running a VoltDB database server is a 64-bit machine with two\nor more processor cores. The more cores the server has, the more VoltDB partitions can potentially run on\nthat server. The more unique partitions, the more throughput is possible with a well partitioned workload.\nHowever, the number of processor cores is not the only constraint on throughput. Different aspects of the\nserver configuration impact different characteristics of the database process.\nFor example, although the more physical cores a server has increases the number of partitions that server\ncan potentially handle, at some point the number of transactions being received and data being returned\nexceeds the capacity of the network port for that server. As a consequence, going beyond 12-16 cores on a\n4Choosing Hardware\nsingle machine may provide little value to a VoltDB database, since the server's network adapter becomes\nthe gating factor.\nRule of Thumb\nVoltDB runs best on servers with between 4 and 16 cores.\nIt should be noted that the preceding discussion refers to physical processor cores. Some servers support\nhyperthreading. Hyperthreading is essentially the virtualization of processor cores, doubling the reported\nnumber of cores. For example, a system with 4 cores and hyperthreading acts like an 8 core machine.\nThese virtualized cores can improve VoltDB performance, particularly for servers with a small (2 or 4)\nnumber of physical cores. 
However, the more physical cores the system has, the less improvement is seen\nin VoltDB performance. Therefore, hyperthreading is not recommended for VoltDB servers with more\nthan 8 physical cores.\nThe alternative to adding processors to an individual server for improving throughput is to add more\nservers. If a single 4-core server can handle 3 partitions, two such servers can handle 6 partitions, three\ncan handle 9, etc. This is how VoltDB provides essentially linear scaling in throughput.\nBut again, there are limits. For peak performance it is key that network latency and disruption between\nthe cluster nodes be kept to a minimum. In other words, all nodes of the cluster should be on the same\nnetwork switch. Obviously, the capacity of the network switch constrains the number of nodes that it can\nsupport. (A 32 port switch is not uncommon.)\nRule of Thumb\nBest performance is achieved with clusters of 2-32 servers connected to the same network switch.\nIt is possible to run a VoltDB cluster across switches. (For example, this is almost always the case in cloud\nenvironments.) However, latency between the cluster nodes will have a negative impact on performance\nand may ultimately limit overall throughput. In these situations, it is best to benchmark different configu-\nrations to determine exactly what performance can be expected.\nFinally, it should be noted that the speed of the processor cores may not have a significant impact on\noverall throughput. Processor speed affects the time it takes to execute individual transactions, which may\nbe only a small percentage of overall throughput. For workloads with very compute-intensive transactions,\nfaster processors can improve overall performance. But for many small or simple transactions, improved\nprocessor speed will have little or no impact.\n3.3. Sizing for Capacity\nThe second aspect of database sizing is capacity. Capacity describes the maximum volume of data that\nthe database can hold.\nSince VoltDB is an in-memory database, the capacity is constrained by the total memory of all of the nodes\nin the cluster. Of course, one can never size servers too exactly. It is important to allow for growth over\ntime and to account for other parts of the database server that use memory.\nChapter 4, Sizing Memory explains in detail how memory is assigned by the VoltDB server for database\ncontent. Use that chapter to perform accurate sizing when you have a known schema. However, as a rough\nestimate, you can use the following table to approximate the space required for each column. By adding\nup the columns for each table and index (including index pointers) and then multiplying by the expected\nnumber of rows, you can determine the total amount of memory required to store the database contents.\n5Choosing Hardware\nTable 3.1. Quick Estimates for Memory Usage By Datatype\nDatatype Bytes in Table Bytes in Index\nTINYINT 1 1\nSMALLINT 2 2\nINTEGER 4 4\nBIGINT 8 8\nDOUBLE 8 8\nDECIMAL 16 16\nTIMESTAMP 8 8\nVARCHARa or VARBINARY (less than 64 bytes) length + 1 length + 1\nVARCHARa or VARBINARY (64 bytes or greater) length 8\nindex pointers n/a 40\naFor VARCHAR columns declared in characters, rather than in bytes, the length is calculated as four bytes for every character.\nIn other words, for storage calculations a string column declared as VARCHAR(16) has the same length as a column declared as\nVARCHAR(64 BYTES).\nYou must also account for the memory required by the server process itself. 
If you know how many tables\nthe database will contain and how many sites per host will be used, you can calculate the server process\nmemory requirements using the following formula:\n384MB + (10MB X number of tables) + (128MB X sites per host)\nThis formula assumes you use K-safety, which is recommended for all production environments. If the\ncluster is also the master database for database replication, you should increase the multiplier for sites per\nhost from 128 to 256 megabytes:\n384MB + (10MB X number of tables) + (256MB X sites per host)\nIf you do not know how many tables the database will contain or how many sites per host you expect to use,\nyou can use 2 gigabytes as a rough estimate for the server process size for moderately sized databases and\nservers. But be aware that you may need to increase that estimate once your actual configuration is defined.\nFinally, your estimate of the memory required for the server overall is the combination of the memory\nrequired for the content and the memory for the server process itself, plus 30% as a buffer.\n Server memory = ( content + server process ) + 30% \nWhen sizing for a cluster, where the content is distributed across the servers, the calculation for the memory\nrequired for content on each server is the total content size divided by the number of servers, plus some\npercentage for replicated tables. For example, if 20% of the tables are replicated, a rough estimate of the\nspace required for each server is given by the following equation:\n Per server memory = ( ( content / servers) + 20% + server ) + 30% \nWhen sizing memory for VoltDB servers, it is important to keep in mind the following points:\n•Memory usage includes not only storage for the data, but also temporary storage for processing trans-\nactions, managing queues, and the server processes themselves.\n•Even in the best partitioning schemes, partitioning is never completely balanced. Make allowances for\nvariations in load across the servers.\n6Choosing Hardware\n•If memory usage exceeds approximately 70% of total memory, the operating system can start paging\nand swapping, severely impacting performance.\nRule of Thumb\nKeep memory usage per server within 50-70% of total memory.\nMemory technology and density is advancing so rapidly, (similar to the increase in processor cores per\nserver), it is feasible to configure a small number of servers with extremely large memory capacities that\nprovide capacity and performance equivalent to a larger number of smaller servers. However, the amount\nof memory in use can impact the performance of other aspects of database management, such as snapshots\nand failure recovery. The next section discusses some of the trade offs to consider when sizing for these\nfeatures.\n3.4. Sizing for Durability\nDurability refers to the ability of a database to withstand — or recover from — unexpected events. VoltDB\nhas several features that increase the durability of the database, including K-Safety, snapshots, command\nlogging, and database replication\nK-Safety replicates partitions to provide redundancy as a protection against server failure. Note that when\nyou enable K-Safety, you are replicating the unique partitions across the available hardware. So the hard-\nware resources — particularly servers and memory — for any one copy are being reduced. 
The easiest\nway to size hardware for a K-Safe cluster is to size the initial instance of the database, based on projected\nthroughput and capacity, then multiply the number of servers by the number of replicas you desire (that\nis, the K-Safety value plus one).\nRule of Thumb\nWhen using K-Safety, configure the number of cluster nodes as a whole multiple of the number\nof copies of the database (that is, K+1).\nK-Safety has no real performance impact under normal conditions. However, the cluster configuration can\naffect performance when recovering from a failure. In a K-Safe cluster, when a failed server rejoins, it\ngets copies of all of its partitions from the other members of the cluster. The larger (in size of memory)\nthe partitions are, the longer they can take to be restored. Since it is possible for the restore action to\nblock database transactions, it is important to consider the trade off of a few large servers that are easier\nto manage against more small servers that can recover in less time.\nTwo of the other durability features — snapshots and command logs — have only a minimal impact on\nmemory and processing power. However, these features do require persistent storage on disk.\nMost VoltDB disk-based features, such as snapshots, export overflow, network partitions, and so on, can\nbe supported on standard disk technology, such as SATA drives. They can also share space on a single\ndisk, assuming the disk has sufficient capacity, since disk I/O is interleaved with other work.\nCommand logging, on the other hand, is time dependent and must keep up with the transactions on the\nserver. The chapter on command logging in Using VoltDB discusses in detail the trade offs between asyn-\nchronous and synchronous logging and the appropriate hardware to use for each. But to summarize:\n•Use fast disks (such as battery-backed cache drives) for synchronous logging\n•Use SATA or other commodity drives for asynchronous logging. However, it is still a good idea to use\na dedicated drive for the command logs to avoid concurrency issues between the logs and other disk\nactivity.\n7Choosing Hardware\nRule of Thumb\nWhen using command logging, whether synchronous or asynchronous, use a dedicated drive for\nthe command logs. Other disk activity (including command log snapshots) can share a separate\ndrive.\nFinally, database replication (DR) does not impact the sizing for memory or processing power of the\nservers. But it does require duplicates of the initial hardware for each additional cluster. For example, when\nusing passive DR, you should double the estimated number of servers — one copy for the master cluster\nand one for the replica. When using cross datacenter replication (XDCR) you will need one complete copy\nfor each of the clusters participating in the XDCR relationship.\nRule of Thumb\nWhen using database replication, multiply the number of servers needed by the number of clusters\ninvolved — two for passive DR (master and replica); two or more to match the number of clusters\nin a XDCR environment.\n8Chapter 4. Sizing Memory\nAn important aspect of system sizing is planning for the memory required to support the application.\nBecause VoltDB is an in-memory database, allocating sufficient memory is vital.\nSection 3.3, “Sizing for Capacity” provides some simple equations for estimating the memory requirements\nof a prospective application. 
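For instance, applying those equations to a hypothetical configuration (the numbers are purely illustrative) of a 40-table schema running K-safe with 8 sites per host, where each server holds roughly 20GB of data, gives a quick estimate:

    384MB + (10MB X 40 tables) + (128MB X 8 sites per host) = 1,808MB, or about 1.8GB, for the server process
    ( 20GB content + 1.8GB server process ) + 30% = approximately 28GB per server

Keeping that 28GB within the recommended 50-70% of total memory implies servers with roughly 40GB to 56GB of physical RAM.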
If you are already at the stage where the database schema is well-defined\nand want more precise measurements of potential memory usage, this chapter provides details about how\nmemory gets allocated.\nFor VoltDB databases, there are three aspects of memory sizing the must be considered:\n•Understanding how much memory is required to store the data itself; that is, the contents of the database\n•Evaluating how that data is distributed across the cluster, based on the proportion of partitioned to\nreplicated tables and the K-safety value\n•Determining the memory requirements of the server process\nThe sum of the estimated data requirements per server and the java heap size required by the server process\nprovide the total memory requirement for each server.\n4.1. Planning for Database Capacity\nTo plan effectively for database capacity, you must know in advance both the structure of the data and\nthe projected volume. This means you must have at least a preliminary database schema, including tables,\ncolumns, and indexes, as well as the expected number of rows for each table.\nIt is often useful to write this initial sizing information down, for example in a spreadsheet. Your planning\nmay even allow for growth, assigning values for both the initial volume and projected long-term growth.\nFor example, here is a simplified example of a spreadsheet for a database supporting a flight reservation\nsystem:\nName Type SizeInitial Volume Future Volume\nFlight Replicated table 5,000 20,000\n- FlightByID Index 5,000 20,000\n- FlightByDepartTime Index 5,000 20,000\nAirport Replicated table 10,000 10,000\n- AirportByCode Index 10,000 10,000\nReservation Table 100,000 200,000\n- ReservByFlight Index 100,000 200,000\nCustomer Table 200,000 1,000,000\n- CustomerByID Index 200,000 1,000,000\n- CustomerByName Index 200,000 1,000,000\nUsing the database schema, it is possible to calculate the size of the individual table records and indexes,\nwhich when multiplied by the volume projections gives a good estimate the the total memory needed to\nstore the database contents. The following sections explain how to calculate the size column for individual\ntable rows and indexes.\n9Sizing Memory\n4.1.1. Sizing Database Tables\nThe size of individual table rows depends on the number and datatype of the columns in the table. For\nfixed-size datatypes, such as INTEGER and TIMESTAMP, the column is stored inline in the table record\nusing the specified number of bytes. Table 4.1, “Memory Requirements For Tables By Datatype” specifies\nthe length (in bytes) of fixed size datatypes.\nFor variable length datatypes, such as VARCHAR and VARBINARY, how the data is stored and, con-\nsequently, how much space it requires, depends on both the actual length of the data and the maximum\npossible length. If the maximum length is less than 64 bytes, the data is stored inline in the tuple as fixed-\nlength data consuming the maximum number of bytes plus one for the length. So, for example, a VAR-\nCHAR(32 BYTES) column takes up 33 bytes, no matter how long the actual data is.\nNote that VARCHAR columns can be declared in characters (the default) or bytes. For storage calculations,\nvariable-length strings declared in characters are considered to consume 4 bytes for every character. In\nother words, a variable length string declared as VARCHAR(8) in characters consume the same amount\nof space as a string declared as VARCHAR(32 BYTES).\nIf the maximum length is 64 bytes or more, the data is stored in pooled memory rather than inline. 
To do\nthis, there is an 8-byte pointer stored inline in the tuple, a 24-byte string reference object, and the space\nrequired to store the data itself in the pool. Within the pool, the data is stored as a 4-byte length, an 8-byte\nreverse pointer to the string reference object, and the data.\nTo complicate the calculation somewhat, data stored in pooled memory is not stored as arbitrary lengths.\nInstead, data is incremented to the smallest appropriate \"pool size\", where pool sizes are powers of 2 and\nintermediary values. In other words, pool sizes include 2, 4, 6 (2+4), 8, 12 (8+4), 16, 24 (8+16), 32 and so\non up to a maximum of 1 megabyte for data plus 12 bytes for the pointer and length. For example, if the\nLastName column in the Customer table is defined as VARCHAR(32) (that is, a maximum length of 128\nbytes) and the actual content is 95 bytes, the column consumes 160 bytes:\n 8 Inline pointer\n 24 String reference object\n 4 Data length\n 8 Reverse pointer\n 95 Data\n 107 128 Pool total / incremented to next pool size\n 160 Total\nNote that if a variable length column is defined with a maximum length greater than or equal to 64 bytes,\nit is not stored inline, even if the actual contents is less than 64 bytes. Variable length columns are stored\ninline only if the maximum length is less than 64 bytes.\nTable 4.1, “Memory Requirements For Tables By Datatype” summarizes the memory requirements for\neach datatype.\n10Sizing Memory\nTable 4.1. Memory Requirements For Tables By Datatype\nDatatype Size (in bytes) Notes\nTINYINT 1\nSMALLINT 2\nINTEGER 4\nBIGINT 8\nDOUBLE 8\nDECIMAL 16\nTIMESTAMP 8\nVARCHAR (<64 bytes) maximum size + 1 Stored inline\nVARBINARY (<64 bytes) maximum size + 1 Stored inline\nVARCHAR (>= 64 bytes) 32 + (actual size + 12 +\npadding)Pooled resource. Total size includes an\n8-byte inline pointer, a 24-byte reference\npointer, plus the pooled resource itself.\nVARBINARY (>=64 bytes) 32 + (actual size + 12 +\npadding)Same as VARCHAR.\nFor tables with variable length columns less than 64 bytes, memory usage can be sized very accurately\nusing the preceding table. However, for tables with variable length columns greater than 64 bytes, sizing\nis approximate at best. Besides the variability introduced by pooling, any sizing calculation must be based\non an estimate of the average length of data in the variable columns .\nFor the safest and most conservative estimates, you can use the maximum length when calculating variable\nlength columns. If, on the other hand, there are many variable length columns or you know the data will\nvary widely, you can use an estimated average or 90th percentile figure, to avoid over-estimating memory\nconsumption.\n4.1.2. Sizing Database Indexes\nIndexes are sized in a way similar to tables, where the size and number of the index columns determine\nthe size of the index.\nVoltDB uses tree indexes. You can calculate the size of the individual index entries by adding up the size\nfor each column in the index plus 40 bytes for overhead (pointers and lengths). The size of the columns are\nidentical to the sizes when sizing tables, as described in Table 4.1, “Memory Requirements For Tables By\nDatatype” , with the exception of non-inlined binary data. 
For variable length columns equal to or greater\nthan 64 bytes in length, the index only contains an 8-byte pointer; the data itself is not replicated.\nSo, for example, the CustomerByName index on the Customer table, which is an index containing the\nVARCHAR(32) fields LastName and FirstName, has a length of 56 bytes for each entry:\n 8 Pointer to LastName\n 8 Pointer to FirstName\n 40 Overhead\n 56 Total\nThe following equation summarizes how to calculate the size of an index.\n (sum-of-column-sizes + 8 + 32) * rowcount \n11Sizing Memory\n4.1.3. An Example of Database Sizing\nUsing the preceding formulas it is possible to size the sample flight database mentioned earlier. For exam-\nple, it is possible to size the individual rows of the Flight table based on the schema columns and datatypes.\nThe following table demonstrates the sizing of the Flight table.\nColumn Datatype Size in Bytes\nFlightID INTEGER 4\nCarrier VARCHAR(32) 160\nDepartTime TIMESTAMP 8\nArrivalTime TIMESTAMP 8\nOrigin VARCHAR(3\nBYTES)4\nDestination VARCHAR(3\nBYTES)4\nDestination VARCHAR(3\nBYTES)4\nTotal: 192\nThe same calculations can be done for the other tables and indexes. When combined with the expected vol-\numes (described in Section 4.1, “Planning for Database Capacity” ), you get a estimate for the total memory\nrequired for storing the database content of approximately 500 megabytes, as shown in the following table.\nName Type SizeFinal Volume Total Size\nFlight Replicated table 184 20,000 3,840,000\n- FlightByID Index 36 20,000 1,040,008\n- FlightByDepartTime Index 48 20,000 960,000\nAirport Replicated Table 484 10,000 4,840,000\n- AirportByCode Index 44 10,000 440,000\nReservation Table 243 200,000 48,600,000\n- ReservByFlight Index 36 200,000 10,400,008\nCustomer Table 324 1,000,000 324,000,000\n- CustomerByID Index 36 1,000,000 52,000,008\n- CustomerByName Index 56 1,000,000 56,000,000\nTotal: 502,120,024\n4.2. Distributing Data in a Cluster\nIn the simplest case, a single server, the sum of the sizing calculations in the previous section gives you\nan accurate estimate of the memory required for the database content. However, VoltDB scales best in a\ncluster environment. In that case, you need to determine how much of the data will be handled by each\nserver, which is affected by the number of servers, the number and size of partitioned and replicated tables,\nand the level of availability, or K-safety, required.\nThe following sections explain how to determine the distribution (and therefore sizing) of partitioned and\nreplicated tables in a cluster and the impact of K-safety.\n12Sizing Memory\n4.2.1. Data Memory Usage in Clusters\nAlthough it is tempting to simply divide the total memory required by the database content by the number\nof servers, this is not an accurate formula for two reasons:\n•Not all data is partitioned. Replicated tables (and their indexes) appear on all servers.\n•Few if any partitioning schemes provide perfectly even distribution. 
It is important to account for some\nvariation in the distribution.\nTo accurately calculate the memory usage per server in a cluster, you must account for all replicated tables\nand indexes plus each server's portion of the partitioned tables and indexes.\n Data per server = replicated tables + (partitioned tables/number of servers) \nUsing the sample sizing for the Flight database described in Section 4.1.3, “An Example of Database\nSizing”, the total memory required for the replicated tables and indexes (for the Flight and Airport tables) is\nonly approximately 12 megabytes. The memory required for the remaining partitioned tables and indexes\nis approximately 490 megabytes. Assuming the database is run on two servers, the total memory required\nfor data on each server is approximately 256 megabytes:\n 12 Replicated data\n 2 Number of servers\n 490 245 Paritioned data total / per server\n 256 Total\nOf course, no partitioning scheme is perfect. So it is a good idea to provide additional space (say 20% to\n30%) to allow for any imbalance in the partitioning.\n4.2.2. Memory Requirements for High Availability (K-Safety)\nThe features you plan to use with a VoltDB database also impact capacity planning, most notably K-Safety.\nK-Safety improves availability by replicating data across the cluster, allowing the database to survive\nindividual node failures.\nBecause K-Safety involves replication, it also increases the memory requirements for storing the replicated\ndata. Perhaps the easiest way to size a K-Safe cluster is to size a non-replicated cluster, then multiply by\nthe K-Safety value plus one.\nFor example, let's assume you plan to run a database with a K-Safety value of 2 (in other words, three\ncopies) on a 6-node cluster. The easiest way to determine the required memory capacity per server is\nto calculate the memory requirements for a 2-node (non K-Safe) cluster, then create three copies of that\nhardware and memory configuration.\n4.3. Planning for the Server Process (Java Heap\nSize)\nThe preceding sections explain how to calculate the total memory required for storing the database content\nand indexes. You must also account for the database process itself, which runs within the Java heap.\nCalculating the memory required by the sever process both helps you define the total memory needed\nfor planning purposes but also identifies the Java heap size that you will need to set in production when\nstarting the database.\n13Sizing Memory\nIt is impossible to define an exact formula for the optimal heap size. But the following basic guidelines\ncan be used to accommodate differing hardware configurations and database designs.\n4.3.1. Attributes that Affect Heap Size\nThe database features that have the most direct impact on the server process and, therefore, the Java heap\nrequirements are:\n•Schema size, in terms of number tables and stored procedures\n•The number of sites per host\n•The features in use, specifically K-safety and/or database replication\nThe schema size affects the base requirements for the server process. The more tables the schema has and\nthe more stored procedures it contains, the more heap space it will take up. In particular, it is important to\nprovide enough heap so the schema can be updated, no matter what other features are enabled.\nThe general rule of thumb is a base Java heap size of 384MB, plus 10MB for every table in the schema.\nStored procedures don't impact the heap size as much as the number of tables do. 
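As an illustrative example only, a schema with 50 tables would start from a base heap of 384MB + (10MB X 50 tables) = 884MB; the feature-specific additions described below come on top of that figure.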
However, if you have\nlots of stored procedures (that is, hundreds or thousands of them) it is a good idea to add additional space\nto the base heap size for good measure.\nBeyond the base heap size, use of K-safety and database replication (for the master database) each increases\nthe requirements for Java heap space, with the increase proportional to the number of sites per host. In\ngeneral each feature requires an additional 128MB for every site per host.\nFor example, a K-safe cluster with 4 sites per host requires an additional 512MB, while a K-safe cluster\nwith 8 sites per host requires an extra gigabyte. If that cluster is also the master cluster for database repli-\ncation, those extra heap requirements are doubled to 1GB and 2GB, respectively.\nNote that the Java heap requirements for features are based solely on the sites per host, not the number of\nnodes in the cluster or the K-safety value. Any K-safety value greater than zero has the same requirements,\nin terms of the server process requirements.\n4.3.2. Guidelines for Determining Java Heap Size\nThe recommended method for determining the appropriate heap size for a VoltDB cluster node is the\nfollowing:\nStep #1 Calculate the base Java heap requirement using the following formula:\n384MB + (10MB X number of tables) = base Java heap size\nBe sure to allow for growth if you expect to add tables in the future. Also, if you expect to\nhave large numbers of stored procedures (200 or more), increment the base heap size accord-\ningly. Note that where memory is available, additional Java heap space beyond the minimum\nsettings may provide performance benefits during operational events, such as schema updates\nand node rejoins.\nStep #2 Based on the hardware to be used in production (specifically cores per server), and perfor-\nmance testing of the proposed schema, determine the optimal number of sites per host. 8 sites\nper host is the default. Setting the sites per host greater than 24 is not recommended.\nStep #3 Determine which database features will be used in production. K-safety, network partition\ndetection, and command logging are recommended for all production environments. Database\nreplication (DR) is an optional feature that provides additional durability.\n14Sizing Memory\nIf K-safety is in use, but DR is not, multiply the number of sites per host by 128MB to\ndetermine the feature-specific Java heap requirements. If K-safety is enabled and the cluster\nis the master database for DR, multiply the number of sites per host by 256MB.\nStep #4 Add the base Java heap requirements defined in Step #1 to the feature-specific requirements\nin Step #3 to determine the recommended Java heap size for the production servers.\n15Chapter 5. Benchmarking\nBenchmarking is the process of evaluating performance against a known baseline. For database applica-\ntions, you can benchmark against other technologies, against a target metric (such as a specific number\nof transactions per second), or you can use the same application on different hardware configurations to\ndetermine which produces the best results.\nFor VoltDB applications, benchmarking is useful for establishing metrics with regards to:\n•Optimum throughput\n•Optimum number of sites per host\n5.1. 
Benchmarking for Performance\nWhen establishing performance criteria for database applications, there are usually two key metrics:\n•Throughput — how many transactions can be completed at one time, usually measured in transactions\nper second, or TPS\n•Latency — how long each individual transaction takes, usually measured as the average or percentile\nof the latency for a sample number of transactions\nNote that neither of these metrics is exact. For many databases, throughput can vary depending upon\nthe type of transaction; whether it is a read or write operation. One of the advantages of VoltDB is that\nthroughput does not change significantly for write versus read operations. However, VoltDB throughput\ndoes change when there are many multi-partitioned transactions versus a single-partitioned transaction\nworkload. This is why it is important to design your schema and stored procedures correctly when bench-\nmarking a VoltDB database.\nSimilarly, latency can vary, both in how it is measured and what impacts it. You can measure latency as\nthe time from when the client issues a transaction request until the response is received, or from when the\ndatabase server receives the request until it queues the response.\nThe former measurement, latency from the client application's perspective, is perhaps the most accurate\n\"real world\" metric. However, this metric includes both database latency and any network latency between\nthe client and the server. The latter measurement, latency from the database perspective, is a more accurate\nmeasurement of the technology's capability. This metric includes the time required to process the transac-\ntion itself (that is, the stored procedures and the database queries it contains) as well as time the request\nspends in the queue waiting to be executed.\n5.1.1. Designing Your Benchmark Application\nThere is a relationship between throughput, latency, and system configuration. Throughput is a combina-\ntion of the amount of time it takes to execute a transaction (which is part of latency) and the number of\ntransactions that can be run in parallel (that is, the percentage of single-partitioned transactions plus the\nnumber of unique partitions, which is a combination of sites per host and number of servers).\nThis is why it is important that benchmarking for performance be done in conjunction with benchmarking\nfor server configuration (as discussed later). Different configurations will result in different values for\nthroughput and latency.\n16Benchmarking\nYour benchmark results are also affected by the design of the application itself.\n5.1.1.1. Good Application Design\nAs mentioned before, throughput and latency are not abstract numbers. They are a consequence of the\nschema, stored procedures, application, and server configuration in use. As with any VoltDB solution, to\naffectively benchmark performance you must start with a well-designed application. Specifically:\n•Partition all tables (except those that are small and primarily read-only)\n•Make all frequent transactions single-partitioned\n•Use asynchronous procedure calls\nSee Chapter 2, Proof of Concept for more information on writing effective VoltDB applications.\nIt is common practice to start benchmarking with the proof of concept, since the POC includes an initial\nschema and the key transactions required by the solution. 
If you decide to proceed with development, it is also a good idea to benchmark the application periodically throughout the process to make sure you understand how the additional functions impact your overall performance targets.

However, before benchmarking the POC, it is important to decide how you are going to measure performance. The following sections provide useful information for determining what and how to measure performance in a VoltDB application.

5.1.1.2. Rate Limiting

Latency and throughput are related. To measure throughput, your test application can make stored procedure calls as fast as it can and then measure how many are completed in a second (TPS). However, if the test application invokes more transactions than the server can complete in that time frame, the additional invocations must wait in the queue before being processed. The time these transactions wait in the queue will result in the latency measurement suddenly spiking.

There is an optimal throughput for any given application and system configuration. As the rate of invocations from the client application increases, there is a direct increase in TPS while latency stays relatively flat. However, as the invocation rate approaches and surpasses the optimal throughput (the limit of what the current configuration can process), latency increases dramatically and TPS may even drop, as shown in Figure 5.1, “Determining Optimal Throughput and Latency”.

Figure 5.1. Determining Optimal Throughput and Latency

If your test application "fire hoses" the database server — that is, it sends invocations as fast as it can — all you can measure is the misleading throughput and latency on the right side of the preceding chart. To determine the optimal rate, you need to be able to control the rate at which your benchmark application submits transaction requests. This process is called rate limiting.

At its simplest, rate limiting is constraining the number of invocations issued by the application. For example, the following program loop constrains the application to invoking the SignIn stored procedure a maximum of 10 times per millisecond, or 10,000 times per second.

boolean done = false;
long maxtxns = 10;
while (!done) {
    long txns = 0;
    long millisecs = System.currentTimeMillis();
    // Stay inside the current millisecond, issuing at most maxtxns invocations;
    // myClient, id, and SignInCallback are defined elsewhere in the application.
    while (millisecs == System.currentTimeMillis()) {
        if (txns++ < maxtxns) {
            myClient.callProcedure(new SignInCallback(),
                                   "SignIn",
                                   id, millisecs);
        }
    }
}

You could use a command line argument to parameterize the rate limit value maxtxns and then use multiple runs of the application to create a graph similar to Figure 5.1, “Determining Optimal Throughput and Latency”. An even better approach is to use a limit on latency to automatically control the invocation rate and let the application close in on the optimal throughput value.

Rate limiting based on latency works by keeping a variable for the target latency as well as a variable for the maximum allowable throughput (such as maxtxns in the preceding example). The application measures both the average throughput and latency for a set period (every 1 to 5 seconds, for example). If the average latency exceeds the goal, reduce the maximum transactions per second, then repeat. After the same period, if the latency still exceeds the goal, reduce the maximum transaction rate again.
If the latency does meet\nthe goal, incrementally increase the maximum transaction rate.\nBy using this mix of rate limiting and automated adjustment based on a latency goal, the test application\nwill eventually settle on an optimal throughput rate for the current configuration. This is the method used\nby the sample applications (such as voter and Voltkv) that are provided with the VoltDB software.\n5.1.2. How to Measure Throughput\nNormally for benchmarking it is necessary to \"instrument\" the application. That is, add code to measure\nand report on the benchmark data. Although it is possible (and easy) to instrument a VoltDB application,\nit is not necessary.\nThe easiest way to measure throughput for a VoltDB database is to monitor the database while the bench-\nmark application is running. You can monitor a database using the VoltDB Management Console, which\nis available from any VoltDB server and provides a graphical display of the overall throughput for the\ndatabase cluster.\nFor more detailed information, you can instrument your application by using a variable to track the number\nof completed transactions (incrementing the variable in the asynchronous procedure callback). You then\nperiodically report the average TPS by dividing the number of transactions by the number of seconds since\nthe last report. This approach lets you configure the reporting to whatever increment you choose.\n18Benchmarking\nSee the voter sample application for an example of an instrumented benchmarking application. The\nREADME for the voter application explains how to customize the reporting through the use of command\nline arguments.\n5.1.3. How to Measure Latency\nIt is also possible to get a sense of the overall latency without instrumenting your application using the\nVoltDB Management Console. The latency graph provided by the Management Console shows the average\nlatency, measured every few seconds, as reported by the server. At times, this method can produce a\ndramatic sawtooth graph with extreme highs and lows. In this case, the best way to interpret the overall\nlatency of the application is to imagine a line drawn across the high points of the graph.\nFor a more accurate benchmark, you can use latency metrics built into the VoltDB system procedures\nand Java client interface. To instrument your client application, you can use the ClientResponse interface\nreturned by stored procedures invocations to measure latency. Part of the ClientResponse class are the\ngetClientRoundTrip() and getClusterRoundTrip() methods. Both methods return an integer value repre-\nsenting the number of milliseconds required to process the transaction. The getClientRoundTrip() method\nreports total latency, including the network round trip from the client application to the server. The get-\nClusterRoundTrip() method reports an estimate of the latency associated with the database only.\nIt is easy to use these methods to collect and report custom latency information. 
For example, the following\ncode fragment captures information about the minimum, maximum, and combined latency during each\nprocedure callback.\nstatic class PocCallback implements ProcedureCallback {\n @Override\n public void clientCallback(ClientResponse response) {\n txncount++;\n int latency = response.getClusterRoundTrip();\n if (latency < minlatency) minlatency = latency;\n if (latency > maxlatency) maxlatency = latency;\n sumlatency += latency;\n .\n .\n .\nThe benchmarking application can then use this information to periodically report specific latency values,\nlike so:\nif (System.currentTimeMillis() >= nextreport ) {\n // report latency\n printf(\"Min latency: %d\\n\" +\n \"Max latency: %d\\n\" + \n \"Average latency: %d\\n\",\n minlatency, maxlatency, sumlatency/txncount);\n // reset variables\n txncount=0;\n minlatency = 5000;\n maxlatency = 0;\n sumlatency = 0;\n // report every 5 secsonds\n nextreport = System.currentTimeMillis() + 5000;\n}\n19Benchmarking\n5.2. Determining Sites Per Host\nAnother important goal for benchmarking is determining the optimal number of sites per host. Each Volt-\nDB server can host multiple partitions, or \"sites\". The ideal number of sites per host is related to the number\nof processor cores available on the server. However, it is not an exact one-to-one relationship. Usually,\nthe number of sites is slightly lower than the number of cores.\nThe equation becomes even more complex with hyperthreading, which \"virtualizes\" multiple processors\nfor each physical core. Hyperthreading can improve the number of sites per host that a VoltDB server can\nsupport. But again, not in direct proportion to a non-hyperthreaded server.\nImportant\nIn general, VoltDB performs best with between 4 and 16 sites per host. However, you should\nnever exceed 24 sites per host, even if the number of processor cores might support more, because\nthe processing required to manage so many sites begins to conflict with the data processing.\nThe easiest way to determine the optimal number of sites per host is by testing, or benchmarking, against\nthe target application. The process for determining the correct number of sites for a specific hardware\nconfiguration is as follows:\n1.Create a benchmark application that measures the optimal throughput, as described in Section 5.1,\n“Benchmarking for Performance” .\n2.Run the benchmark application multiple times, each time increasing the number of sites per host for\nthe database.\n3.Make note of the optimal throughput for each run and graph the optimal TPS against the number of\nsites per host.\nAs the number of sites per host increases, the optimal throughput increases as well. However, at some\npoint, the number of sites exceeds the number of threads that the hardware can support, at which point the\nthroughput levels out, or even decreases, as contention occurs in the processor. Figure 5.2, “Determining\nOptimal Sites Per Host” shows the results graph of a hypothetical benchmark of sites per host for a quad\ncore server. In this case, the optimal number of sites per host turned out to be three.\nFigure 5.2. Determining Optimal Sites Per Host\nBy graphing the relationship between throughput and partitions using a benchmark application, it is pos-\nsible to maximize database performance for the specific hardware configuration you will be using.\n20" } ]
{ "category": "App Definition and Development", "file_name": "PlanningGuide.pdf", "project_name": "VoltDB", "subcategory": "Database" }
[ { "data": "Function Output Iterator\nAuthor : David Abrahams, Jeremy Siek, Thomas Witt\nContact : dave@boost-consulting.com ,jsiek@osl.iu.edu ,witt@ive.uni-hannover.de\nOrganization :Boost Consulting , Indiana University Open Systems Lab , University of\nHanover Institute for Transport Railway Operation and Construction\nDate : 2004-11-01\nCopyright : Copyright David Abrahams, Jeremy Siek, and Thomas Witt 2003.\nabstract: The function output iterator adaptor makes it easier to create custom output\niterators. The adaptor takes a unary function and creates a model of Output Itera-\ntor. Each item assigned to the output iterator is passed as an argument to the unary\nfunction. The motivation for this iterator is that creating a conforming output iterator\nis non-trivial, particularly because the proper implementation usually requires a proxy\nobject.\nTable of Contents\nHeader\nfunction_output_iterator requirements\nfunction_output_iterator models\nfunction_output_iterator operations\nExample\nHeader\n#include <boost/function_output_iterator.hpp>\ntemplate <class UnaryFunction>\nclass function_output_iterator {\npublic:\ntypedef std::output_iterator_tag iterator_category;\ntypedef void value_type;\ntypedef void difference_type;\ntypedef void pointer;\ntypedef void reference;\nexplicit function_output_iterator();\nexplicit function_output_iterator(const UnaryFunction& f);\n/* see below */ operator*();\n1function_output_iterator& operator++();\nfunction_output_iterator& operator++(int);\nprivate:\nUnaryFunction m_f; // exposition only\n};\nfunction_output_iterator requirements\nUnaryFunction must be Assignable and Copy Constructible.\nfunction_output_iterator models\nfunction_output_iterator is a model of the Writable and Incrementable Iterator concepts.\nfunction_output_iterator operations\nexplicit function_output_iterator(const UnaryFunction& f = UnaryFunction());\nEffects: Constructs an instance of function_output_iterator with m_fconstructed from\nf.\noperator*();\nReturns: An object rof unspecified type such that r = t is equivalent to m_f(t) for all\nt.\nfunction_output_iterator& operator++();\nReturns: *this\nfunction_output_iterator& operator++(int);\nReturns: *this\nExample\nstruct string_appender\n{\nstring_appender(std::string& s)\n: m_str(&s)\n{}\nvoid operator()(const std::string& x) const\n{\n*m_str += x;\n}\nstd::string* m_str;\n};\nint main(int, char*[])\n{\n2std::vector<std::string> x;\nx.push_back(\"hello\");\nx.push_back(\" \");\nx.push_back(\"world\");\nx.push_back(\"!\");\nstd::string s = \"\";\nstd::copy(x.begin(), x.end(),\nboost::make_function_output_iterator(string_appender(s)));\nstd::cout << s << std::endl;\nreturn 0;\n}\n3" } ]
{ "category": "App Definition and Development", "file_name": "function_output_iterator.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "New Iterator Concepts\nAuthor : David Abrahams, Jeremy Siek, Thomas Witt\nContact : dave@boost-consulting.com ,jsiek@osl.iu.edu ,witt@styleadvisor.com\nOrganization :Boost Consulting , Indiana University Open Systems Lab ,Zephyr Asso-\nciates, Inc.\nDate : 2004-11-01\nNumber : This is a revised version of n1550=03-0133, which was accepted for Tech-\nnical Report 1 by the C++ standard committee’s library working group.\nThis proposal is a revision of paper n1297, n1477, and n1531.\nCopyright : Copyright David Abrahams, Jeremy Siek, and Thomas Witt 2003.\nAbstract: We propose a new system of iterator concepts that treat access and positioning\nindependently. This allows the concepts to more closely match the requirements of\nalgorithms and provides better categorizations of iterators that are used in practice.\nTable of Contents\nMotivation\nImpact on the Standard\nPossible (but not proposed) Changes to the Working Paper\nChanges to Algorithm Requirements\nDeprecations\nvector<bool>\nDesign\nProposed Text\nAddition to [lib.iterator.requirements]\nIterator Value Access Concepts [lib.iterator.value.access]\nReadable Iterators [lib.readable.iterators]\nWritable Iterators [lib.writable.iterators]\nSwappable Iterators [lib.swappable.iterators]\nLvalue Iterators [lib.lvalue.iterators]\nIterator Traversal Concepts [lib.iterator.traversal]\nIncrementable Iterators [lib.incrementable.iterators]\nSingle Pass Iterators [lib.single.pass.iterators]\nForward Traversal Iterators [lib.forward.traversal.iterators]\nBidirectional Traversal Iterators [lib.bidirectional.traversal.iterators]\nRandom Access Traversal Iterators [lib.random.access.traversal.iterators]\nInteroperable Iterators [lib.interoperable.iterators]\n1Addition to [lib.iterator.synopsis]\nAddition to [lib.iterator.traits]\nFootnotes\nMotivation\nThe standard iterator categories and requirements are flawed because they use a single hierarchy of\nconcepts to address two orthogonal issues: iterator traversal and value access . As a result, many\nalgorithms with requirements expressed in terms of the iterator categories are too strict. Also, many\nreal-world iterators can not be accurately categorized. A proxy-based iterator with random-access\ntraversal, for example, may only legally have a category of “input iterator”, so generic algorithms are\nunable to take advantage of its random-access capabilities. The current iterator concept hierarchy is\ngeared towards iterator traversal (hence the category names), while requirements that address value\naccess sneak in at various places. The following table gives a summary of the current value access\nrequirements in the iterator categories.\nValue Access Requirements in Existing Iterator Categories\nOutput Iterator *i = a\nInput Iterator *iis convertible to T\nForward Iterator *iisT&(orconst T& once issue 200 is resolved)\nRandom Access Iterator i[n] is convertible to T(also i[n] = t is required for mutable\niterators once issue 299 is resolved)\nBecause iterator traversal and value access are mixed together in a single hierarchy, many useful\niterators can not be appropriately categorized. For example, vector<bool>::iterator is almost a\nrandom access iterator, but the return type is not bool& (seeissue 96 and Herb Sutter’s paper J16/99-\n0008 = WG21 N1185). Therefore, the iterators of vector<bool> only meet the requirements of input\niterator and output iterator. This is so nonintuitive that the C++ standard contradicts itself on this\npoint. 
In paragraph 23.2.4/1 it says that a vector is a sequence that supports random access iterators.\nAnother difficult-to-categorize iterator is the transform iterator, an adaptor which applies a unary\nfunction object to the dereferenced value of the some underlying iterator (see transform iterator ). For\nunary functions such as times , the return type of operator* clearly needs to be the result_type of\nthe function object, which is typically not a reference. Because random access iterators are required to\nreturn lvalues from operator* , if you wrap int* with a transform iterator, you do not get a random\naccess iterator as might be expected, but an input iterator.\nA third example is found in the vertex and edge iterators of the Boost Graph Library . These iterators\nreturn vertex and edge descriptors, which are lightweight handles created on-the-fly. They must be\nreturned by-value. As a result, their current standard iterator category is input_iterator_tag , which\nmeans that, strictly speaking, you could not use these iterators with algorithms like min_element() .\nAs a temporary solution, the concept Multi-Pass Input Iterator was introduced to describe the vertex\nand edge descriptors, but as the design notes for the concept suggest, a better solution is needed.\nIn short, there are many useful iterators that do not fit into the current standard iterator categories.\nAs a result, the following bad things happen:\n•Iterators are often mis-categorized.\n•Algorithm requirements are more strict than necessary, because they cannot separate the need for\nrandom access or bidirectional traversal from the need for a true reference return type.\n2Impact on the Standard\nThis proposal for TR1 is a pure extension. Further, the new iterator concepts are backward-compatible\nwith the old iterator requirements, and old iterators are forward-compatible with the new iterator\nconcepts. That is to say, iterators that satisfy the old requirements also satisfy appropriate concepts in\nthe new system, and iterators modeling the new concepts will automatically satisfy the appropriate old\nrequirements.\nPossible (but not proposed) Changes to the Working Paper\nThe extensions in this paper suggest several changes we might make to the working paper for the next\nstandard. These changes are not a formal part of this proposal for TR1.\nChanges to Algorithm Requirements\nThe algorithms in the standard library could benefit from the new iterator concepts because the new\nconcepts provide a more accurate way to express their type requirements. The result is algorithms that\nare usable in more situations and have fewer type requirements.\nFor the next working paper (but not for TR1), the committee should consider the following changes\nto the type requirements of algorithms. 
These changes are phrased as textual substitutions, listing the\nalgorithms to which each textual substitution applies.\nForward Iterator - >Forward Traversal Iterator and Readable Iterator\nfind_end, adjacent_find, search, search_n, rotate_copy, lower_bound, upper_bound,\nequal_range, binary_search, min_element, max_element\nForward Iterator (1) - >Single Pass Iterator and Readable Iterator, Forward Iterator (2) - >Forward\nTraversal Iterator and Readable Iterator\nfind_first_of\nForward Iterator - >Readable Iterator and Writable Iterator\niter_swap\nForward Iterator - >Single Pass Iterator and Writable Iterator\nfill, generate\nForward Iterator - >Forward Traversal Iterator and Swappable Iterator\nrotate\nForward Iterator (1) - >Swappable Iterator and Single Pass Iterator, Forward Iterator (2) - >Swap-\npable Iterator and Incrementable Iterator\nswap_ranges\nForward Iterator - >Forward Traversal Iterator and Readable Iterator and Writable Iterator\nremove, remove_if, unique\nForward Iterator - >Single Pass Iterator and Readable Iterator and Writable Iterator\nreplace, replace_if\nBidirectional Iterator - >Bidirectional Traversal Iterator and Swappable Iterator reverse\nBidirectional Iterator - >Bidirectional Traversal Iterator and Readable and Swappable Iterator\npartition\n3Bidirectional Iterator (1) - >Bidirectional Traversal Iterator and Readable Iterator, Bidirectional\nIterator (2) - >Bidirectional Traversal Iterator and Writable Iterator\ncopy_backwards\nBidirectional Iterator - >Bidirectional Traversal Iterator and Swappable Iterator and Readable Iterator\nnext_permutation, prev_permutation\nBidirectional Iterator - >Bidirectional Traversal Iterator and Readable Iterator and Writable Iterator\nstable_partition, inplace_merge\nBidirectional Iterator - >Bidirectional Traversal Iterator and Readable Iterator reverse_copy\nRandom Access Iterator - >Random Access Traversal Iterator and Readable and Writable Iterator\nrandom_shuffle, sort, stable_sort, partial_sort, nth_element, push_heap, pop_heap\nmake_heap, sort_heap\nInput Iterator (2) - >Incrementable Iterator and Readable Iterator equal, mismatch\nInput Iterator (2) - >Incrementable Iterator and Readable Iterator transform\nDeprecations\nFor the next working paper (but not for TR1), the committee should consider deprecating the old\niterator tags, and std::iterator traits, since it will be superceded by individual traits metafunctions.\nvector<bool>\nFor the next working paper (but not for TR1), the committee should consider reclassifying vec-\ntor<bool>::iterator as a Random Access Traversal Iterator and Readable Iterator and Writable\nIterator.\nDesign\nThe iterator requirements are to be separated into two groups. One set of concepts handles the syntax\nand semantics of value access:\n•Readable Iterator\n•Writable Iterator\n•Swappable Iterator\n•Lvalue Iterator\nThe access concepts describe requirements related to operator* and operator-> , including the\nvalue_type ,reference , and pointer associated types.\nThe other set of concepts handles traversal:\n•Incrementable Iterator\n•Single Pass Iterator\n•Forward Traversal Iterator\n•Bidirectional Traversal Iterator\n•Random Access Traversal Iterator\n4The refinement relationships for the traversal concepts are in the following diagram.\nIn addition to the iterator movement operators, such as operator++ , the traversal concepts also\ninclude requirements on position comparison such as operator== and operator< . 
The reason for the\nfine grain slicing of the concepts into the Incrementable and Single Pass is to provide concepts that are\nexact matches with the original input and output iterator requirements.\nThis proposal also includes a concept for specifying when an iterator is interoperable with another\niterator, in the sense that int* is interoperable with int const* .\n•Interoperable Iterators\nThe relationship between the new iterator concepts and the old are given in the following diagram.\nLike the old iterator requirements, we provide tags for purposes of dispatching based on the traversal\nconcepts. The tags are related via inheritance so that a tag is convertible to another tag if the concept\nassociated with the first tag is a refinement of the second tag.\nOur design reuses iterator_traits<Iter>::iterator_category to indicate an iterator’s traversal\ncapability. To specify capabilities not captured by any old-style iterator category, an iterator designer\ncan use an iterator_category type that is convertible to both the the most-derived old iterator\n5category tag which fits, and the appropriate new iterator traversal tag.\nWe do not provide tags for the purposes of dispatching based on the access concepts, in part because\nwe could not find a way to automatically infer the right access tags for old-style iterators. An iterator’s\nwritability may be dependent on the assignability of its value_type and there’s no known way to detect\nwhether an arbitrary type is assignable. Fortunately, the need for dispatching based on access capability\nis not as great as the need for dispatching based on traversal capability.\nA difficult design decision concerned the operator[] . The direct approach for specifying operator[]\nwould have a return type of reference ; the same as operator* . However, going in this direction would\nmean that an iterator satisfying the old Random Access Iterator requirements would not necessarily be\na model of Readable or Writable Lvalue Iterator. Instead we have chosen a design that matches the\npreferred resolution of issue 299 :operator[] is only required to return something convertible to the\nvalue_type (for a Readable Iterator), and is required to support assignment i[n] = t (for a Writable\nIterator).\nProposed Text\nAddition to [lib.iterator.requirements]\nIterator Value Access Concepts [lib.iterator.value.access]\nIn the tables below, Xis an iterator type, ais a constant object of type X,Risstd::iterator_traits<X>::reference ,\nTisstd::iterator_traits<X>::value_type , and vis a constant object of type T.\nReadable Iterators [lib.readable.iterators]\nA class or built-in type Xmodels the Readable Iterator concept for value type Tif, in addition to X\nbeing Assignable and Copy Constructible, the following expressions are valid and respect the stated\nsemantics. Uis the type of any specified member of type T.\nReadable Iterator Requirements (in addition to Assignable and Copy Constructible)\nExpression Return Type Note/Precondition\niterator_traits<X>::value_type T Any non-reference, non-cv-qualified type\n*a Convertible to T pre: ais dereferenceable. If a == b then *a\nis equivalent to *b.\na->m U& pre: pre: (*a).m is well-defined. Equivalent to\n(*a).m .\nWritable Iterators [lib.writable.iterators]\nA class or built-in type Xmodels the Writable Iterator concept if, in addition to Xbeing Copy Con-\nstructible, the following expressions are valid and respect the stated semantics. 
Writable Iterators have\nan associated set of value types .\nWritable Iterator Requirements (in addition to Copy Constructible)\nExpression Return Type Precondition\n*a = o pre: The type of ois in the set of\nvalue types of X\n6Swappable Iterators [lib.swappable.iterators]\nA class or built-in type Xmodels the Swappable Iterator concept if, in addition to Xbeing Copy Con-\nstructible, the following expressions are valid and respect the stated semantics.\nSwappable Iterator Requirements (in addition to Copy Constructible)\nExpression Return Type Postcondition\niter_swap(a, b) void the pointed to values are exchanged\n[Note: An iterator that is a model of the Readable Iterator andWritable Iterator concepts is also a\nmodel of Swappable Iterator .--end note ]\nLvalue Iterators [lib.lvalue.iterators]\nTheLvalue Iterator concept adds the requirement that the return type of operator* type be a reference\nto the value type of the iterator.\nLvalue Iterator Requirements\nExpression Return\nTypeNote/Assertion\n*a T& Tiscviterator_traits<X>::value_type\nwhere cvis an optional cv-qualification.\npre: ais dereferenceable.\nIfXis aWritable Iterator then a == b if and only if *ais the same object as *b. IfXis aReadable\nIterator then a == b implies *ais the same object as *b.\nIterator Traversal Concepts [lib.iterator.traversal]\nIn the tables below, Xis an iterator type, aandbare constant objects of type X,randsare mutable\nobjects of type X,Tisstd::iterator_traits<X>::value_type , and vis a constant object of type T.\nIncrementable Iterators [lib.incrementable.iterators]\nA class or built-in type Xmodels the Incrementable Iterator concept if, in addition to Xbeing Assignable\nand Copy Constructible, the following expressions are valid and respect the stated semantics.\nIncrementable Iterator Requirements (in addition to Assignable, Copy Constructible)\nExpression Return Type Assertion\n++r X& &r == &++r\nr++\n*r++\niterator_traversal<X>::type Convertible to incre-\nmentable_traversal_tag\nIfXis aWritable Iterator then X a(r++); is equivalent to X a(r); ++r; and*r++ = o is equivalent\nto*r = o; ++r . 
IfXis aReadable Iterator then T z(*r++); is equivalent to T z(*r); ++r; .\nSingle Pass Iterators [lib.single.pass.iterators]\nA class or built-in type Xmodels the Single Pass Iterator concept if the following expressions are valid\nand respect the stated semantics.\n7Single Pass Iterator Requirements (in addition to Incrementable Iterator and Equality Comparable)\nExpression Return Type Oper-\national\nSemanticsAssertion/ Pre-\n/Post-condition\n++r X& pre: ris dereferenceable;\npost: ris dereference-\nable or ris past-the-end\na == b convertible to bool ==is an equivalence rela-\ntion over its domain\na != b convertible to bool !(a == b)\niterator_traits<X>::difference_type A signed integral type\nrepresenting the distance\nbetween iterators\niterator_traversal<X>::type Convertible to sin-\ngle_pass_traversal_tag\nForward Traversal Iterators [lib.forward.traversal.iterators]\nA class or built-in type Xmodels the Forward Traversal Iterator concept if, in addition to Xmeeting\nthe requirements of Default Constructible and Single Pass Iterator, the following expressions are valid\nand respect the stated semantics.\nForward Traversal Iterator Requirements (in addition to Default Constructible and Single Pass Iterator)\nExpression Return Type Assertion/Note\nX u; X& note: umay have a singu-\nlar value.\n++r X& r == s and ris deref-\nerenceable implies ++r ==\n++s.\niterator_traversal<X>::type Convertible to for-\nward_traversal_tag\nBidirectional Traversal Iterators [lib.bidirectional.traversal.iterators]\nA class or built-in type Xmodels the Bidirectional Traversal Iterator concept if, in addition to Xmeeting\nthe requirements of Forward Traversal Iterator, the following expressions are valid and respect the stated\nsemantics.\nBidirectional Traversal Iterator Requirements (in addition to Forward Traversal Iterator)\nExpression Return Type Operational\nSemanticsAssertion/ Pre-\n/Post-condition\n--r X& pre: there exists s\nsuch that r == ++s .\npost: sis dereference-\nable.\n++(--r) == r .--r\n== --s implies r ==\ns.&r == &--r .\nr-- convertible to const X& {\nX tmp = r;\n--r;\nre-\nturn tmp;\n}\n8Bidirectional Traversal Iterator Requirements (in addition to Forward Traversal Iterator)\nExpression Return Type Operational\nSemanticsAssertion/ Pre-\n/Post-condition\niterator_traversal<X>::type Convertible to bidirec-\ntional_traversal_tag\nRandom Access Traversal Iterators [lib.random.access.traversal.iterators]\nA class or built-in type Xmodels the Random Access Traversal Iterator concept if the following\nexpressions are valid and respect the stated semantics. In the table below, Distance isitera-\ntor_traits<X>::difference_type andnrepresents a constant object of type Distance .\nRandom Access Traversal Iterator Requirements (in addition to Bidirectional Traversal Iterator)\nExpression Return Type Operational Se-\nmanticsAssertion/ Pre-\ncondition\nr += n X& {\nDistance m = n;\nif (m >= 0)\nwhile (m--)\n++r;\nelse\nwhile (m++)\n--r;\nreturn r;\n}\na + n ,n + a X { X tmp = a; re-\nturn tmp += n;\n}\nr -= n X& return r += -n\na - n X { X tmp = a; re-\nturn tmp -= n;\n}\nb - a Distance a < b ? 
dis-\ntance(a,b) : -\ndistance(b,a)pre: there exists a\nvalue nofDistance\nsuch that a + n ==\nb.b == a + (b -\na).\na[n] convertible to T *(a + n) pre: a is a Readable\nIterator\na[n] = v convertible to T *(a + n) = v pre: a is a Writable\nIterator\na < b convertible to bool b - a > 0 <is a total ordering\nrelation\na > b convertible to bool b < a >is a total ordering\nrelation\na >= b convertible to bool !(a < b)\na <= b convertible to bool !(a > b)\niterator_traversal<X>::type Convertible to ran-\ndom_access_traversal_tag\n9Interoperable Iterators [lib.interoperable.iterators]\nA class or built-in type Xthat models Single Pass Iterator is interoperable with a class or built-in\ntype Ythat also models Single Pass Iterator if the following expressions are valid and respect the\nstated semantics. In the tables below, xis an object of type X,yis an object of type Y,Distance is\niterator_traits<Y>::difference_type , and nrepresents a constant object of type Distance .\nExpres-\nsionReturn Type Assertion/Precondition/Postcondition\ny = x Y post: y == x\nY(x) Y post: Y(x) == x\nx == y convertible to bool ==is an equivalence relation over its domain.\ny == x convertible to bool ==is an equivalence relation over its domain.\nx != y convertible to bool bool(a==b) != bool(a!=b) over its domain.\ny != x convertible to bool bool(a==b) != bool(a!=b) over its domain.\nIfXandYboth model Random Access Traversal Iterator then the following additional requirements\nmust be met.\nExpres-\nsionReturn Type Operational Se-\nmanticsAssertion/ Precondition\nx < y convertible to bool y - x > 0 <is a total ordering relation\ny < x convertible to bool x - y > 0 <is a total ordering relation\nx > y convertible to bool y < x >is a total ordering relation\ny > x convertible to bool x < y >is a total ordering relation\nx >= y convertible to bool !(x < y)\ny >= x convertible to bool !(y < x)\nx <= y convertible to bool !(x > y)\ny <= x convertible to bool !(y > x)\ny - x Distance distance(Y(x),y) pre: there exists a value nofDistance\nsuch that x + n == y .y == x + (y -\nx).\nx - y Distance distance(y,Y(x)) pre: there exists a value nofDistance\nsuch that y + n == x .x == y + (x -\ny).\nAddition to [lib.iterator.synopsis]\n// lib.iterator.traits, traits and tags\ntemplate <class Iterator> struct is_readable_iterator;\ntemplate <class Iterator> struct iterator_traversal;\nstruct incrementable_traversal_tag { };\nstruct single_pass_traversal_tag : incrementable_traversal_tag { };\nstruct forward_traversal_tag : single_pass_traversal_tag { };\nstruct bidirectional_traversal_tag : forward_traversal_tag { };\nstruct random_access_traversal_tag : bidirectional_traversal_tag { };\nAddition to [lib.iterator.traits]\nThe is_readable_iterator class template satisfies the UnaryTypeTrait requirements.\n10Given an iterator type X,is_readable_iterator<X>::value yields true if, for an object aof type\nX,*ais convertible to iterator_traits<X>::value_type , and false otherwise.\niterator_traversal<X>::type is\ncategory-to-traversal (iterator_traits<X>::iterator_category)\nwhere category-to-traversal is defined as follows\ncategory-to-traversal (C) =\nif (C is convertible to incrementable_traversal_tag)\nreturn C;\nelse if (C is convertible to random_access_iterator_tag)\nreturn random_access_traversal_tag;\nelse if (C is convertible to bidirectional_iterator_tag)\nreturn bidirectional_traversal_tag;\nelse if (C is convertible to forward_iterator_tag)\nreturn forward_traversal_tag;\nelse if (C is convertible to 
input_iterator_tag)\nreturn single_pass_traversal_tag;\nelse if (C is convertible to output_iterator_tag)\nreturn incrementable_traversal_tag;\nelse\nthe program is ill-formed\nFootnotes\nThe UnaryTypeTrait concept is defined in n1519 ; the LWG is considering adding the requirement that\nspecializations are derived from their nested ::type .\n11" } ]
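A minimal sketch of how an algorithm can dispatch on the proposed traversal tags, assuming the Boost.Iterator implementation of this proposal (the traversal tags and the iterator_traversal metafunction are provided by <boost/iterator/iterator_categories.hpp>); it is illustrative only and not part of the proposed text.

#include <boost/iterator/iterator_categories.hpp>
#include <iostream>
#include <list>
#include <vector>

template <class Iterator>
void describe(Iterator, boost::random_access_traversal_tag)
{
    std::cout << "random access traversal\n";
}

template <class Iterator>
void describe(Iterator, boost::single_pass_traversal_tag)
{
    std::cout << "single pass traversal (or a refinement short of random access)\n";
}

template <class Iterator>
void describe(Iterator it)
{
    // iterator_traversal<X>::type maps the old iterator_category to the
    // corresponding new-style traversal tag, as specified above.
    describe(it, typename boost::iterator_traversal<Iterator>::type());
}

int main()
{
    std::vector<int> v(3);
    std::list<int> l(3);
    describe(v.begin());  // selects the random access overload
    describe(l.begin());  // bidirectional tag converts to the single pass overload
    return 0;
}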
{ "category": "App Definition and Development", "file_name": "new-iter-concepts.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "Permutation Iterator\nAuthor : Toon Knapen, David Abrahams, Roland Richter, Jeremy Siek\nContact : dave@boost-consulting.com ,jsiek@osl.iu.edu\nOrganization :Boost Consulting , Indiana University Open Systems Lab\nDate : 2004-11-01\nCopyright : Copyright Toon Knapen, David Abrahams, Roland Richter, and Jeremy\nSiek 2003.\nabstract: The permutation iterator adaptor provides a permuted view of a given range.\nThat is, the view includes every element of the given range but in a potentially different\norder.\nTable of Contents\nIntroduction\nReference\npermutation_iterator requirements\npermutation_iterator models\npermutation_iterator operations\nExample\nIntroduction\nThe adaptor takes two arguments:\n•an iterator to the range V on which the permutation will be applied\n•the reindexing scheme that defines how the elements of V will be permuted.\nNote that the permutation iterator is not limited to strict permutations of the given range V. The\ndistance between begin and end of the reindexing iterators is allowed to be smaller compared to the\nsize of the range V, in which case the permutation iterator only provides a permutation of a subrange\nof V. The indexes neither need to be unique. In this same context, it must be noted that the past the\nend permutation iterator is completely defined by means of the past-the-end iterator to the indices.\nReference\ntemplate< class ElementIterator\n, class IndexIterator\n, class ValueT = use_default\n, class CategoryT = use_default\n, class ReferenceT = use_default\n1, class DifferenceT = use_default >\nclass permutation_iterator\n{\npublic:\npermutation_iterator();\nexplicit permutation_iterator(ElementIterator x, IndexIterator y);\ntemplate< class OEIter, class OI-\nIter, class V, class C, class R, class D >\npermutation_iterator(\npermutation_iterator<OEIter, OIIter, V, C, R, D> const& r\n, typename enable_if_convertible<OEIter, ElementIterator>::type* = 0\n, typename enable_if_convertible<OIIter, IndexIterator>::type* = 0\n);\nreference operator*() const;\npermutation_iterator& operator++();\nElementIterator const& base() const;\nprivate:\nElementIterator m_elt; // exposition only\nIndexIterator m_order; // exposition only\n};\ntemplate <class ElementIterator, class IndexIterator>\npermutation_iterator<ElementIterator, IndexIterator>\nmake_permutation_iterator( ElementIterator e, IndexIterator i);\npermutation_iterator requirements\nElementIterator shall model Random Access Traversal Iterator. IndexIterator shall model Read-\nable Iterator. 
The value type of the IndexIterator must be convertible to the difference type of\nElementIterator .\npermutation_iterator models\npermutation_iterator models the same iterator traversal concepts as IndexIterator and the same\niterator access concepts as ElementIterator .\nIfIndexIterator models Single Pass Iterator and ElementIterator models Readable Iterator then\npermutation_iterator models Input Iterator.\nIfIndexIterator models Forward Traversal Iterator and ElementIterator models Readable Lvalue\nIterator then permutation_iterator models Forward Iterator.\nIfIndexIterator models Bidirectional Traversal Iterator and ElementIterator models Readable\nLvalue Iterator then permutation_iterator models Bidirectional Iterator.\nIfIndexIterator models Random Access Traversal Iterator and ElementIterator models Readable\nLvalue Iterator then permutation_iterator models Random Access Iterator.\npermutation_iterator<E1, X, V1, C2, R1, D1> is interoperable with permutation_iterator<E2,\nY, V2, C2, R2, D2> if and only if Xis interoperable with YandE1is convertible to E2.\npermutation_iterator operations\nIn addition to those operations required by the concepts that permutation_iterator models, permu-\ntation_iterator provides the following operations.\npermutation_iterator();\n2Effects: Default constructs m_elt andm_order .\nexplicit permutation_iterator(ElementIterator x, IndexIterator y);\nEffects: Constructs m_elt from xandm_order from y.\ntemplate< class OEIter, class OIIter, class V, class C, class R, class D >\npermutation_iterator(\npermutation_iterator<OEIter, OIIter, V, C, R, D> const& r\n, typename enable_if_convertible<OEIter, ElementIterator>::type* = 0\n, typename enable_if_convertible<OIIter, IndexIterator>::type* = 0\n);\nEffects: Constructs m_elt from r.m_elt andm_order from y.m_order .\nreference operator*() const;\nReturns: *(m_elt + *m_order)\npermutation_iterator& operator++();\nEffects: ++m_order\nReturns: *this\nElementIterator const& base() const;\nReturns: m_order\ntemplate <class ElementIterator, class IndexIterator>\npermutation_iterator<ElementIterator, IndexIterator>\nmake_permutation_iterator(ElementIterator e, IndexIterator i);\nReturns: permutation_iterator<ElementIterator, IndexIterator>(e, i)\nExample\nusing namespace boost;\nint i = 0;\ntypedef std::vector< int > element_range_type;\ntypedef std::list< int > index_type;\nstatic const int element_range_size = 10;\nstatic const int index_size = 4;\nelement_range_type elements( element_range_size );\nfor(element_range_type::iterator el_it = elements.begin() ; el_it != ele-\nments.end() ; ++el_it)\n*el_it = std::distance(elements.begin(), el_it);\nindex_type indices( index_size );\nfor(index_type::iterator i_it = indices.begin() ; i_it != in-\ndices.end() ; ++i_it )\n*i_it = element_range_size -\nindex_size + std::distance(indices.begin(), i_it);\n3std::reverse( indices.begin(), indices.end() );\ntypedef permutation_iterator< element_range_type::iterator, in-\ndex_type::iterator > permutation_type;\npermutation_type begin = make_permutation_iterator( elements.begin(), in-\ndices.begin() );\npermutation_type it = begin;\npermutation_type end = make_permutation_iterator( elements.begin(), in-\ndices.end() );\nstd::cout << \"The original range is : \";\nstd::copy( elements.begin(), ele-\nments.end(), std::ostream_iterator< int >( std::cout, \" \" ) );\nstd::cout << \"\\n\";\nstd::cout << \"The reindexing scheme is : \";\nstd::copy( indices.begin(), in-\ndices.end(), std::ostream_iterator< int >( std::cout, \" \" ) 
);\nstd::cout << \"\\n\";\nstd::cout << \"The permutated range is : \";\nstd::copy( begin, end, std::ostream_iterator< int >( std::cout, \" \" ) );\nstd::cout << \"\\n\";\nstd::cout << \"Elements at even indices in the permutation : \";\nit = begin;\nfor(i = 0; i < index_size / 2 ; ++i, it+=2 ) std::cout << *it << \" \";\nstd::cout << \"\\n\";\nstd::cout << \"Permutation backwards : \";\nit = begin + (index_size);\nassert( it != begin );\nfor( ; it-- != begin ; ) std::cout << *it << \" \";\nstd::cout << \"\\n\";\nstd::cout << \"Iterate backward with stride 2 : \";\nit = begin + (index_size - 1);\nfor(i = 0 ; i < index_size / 2 ; ++i, it-=2 ) std::cout << *it << \" \";\nstd::cout << \"\\n\";\nThe output is:\nThe original range is : 0 1 2 3 4 5 6 7 8 9\nThe reindexing scheme is : 9 8 7 6\nThe permutated range is : 9 8 7 6\nElements at even indices in the permutation : 9 7\nPermutation backwards : 6 7 8 9\nIterate backward with stride 2 : 6 8\nThe source code for this example can be found here.\n4" } ]
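A further minimal sketch (not part of the documentation above), assuming only the header <boost/iterator/permutation_iterator.hpp>: because ElementIterator below is a mutable random access iterator, the permuted view is writable, so assigning through it modifies the selected elements of the underlying range.

#include <boost/iterator/permutation_iterator.hpp>
#include <iostream>
#include <vector>

int main()
{
    std::vector<char> elements;
    for (char c = 'a'; c <= 'e'; ++c) elements.push_back(c);

    std::vector<int> indices;
    indices.push_back(4);
    indices.push_back(2);
    indices.push_back(0);

    typedef boost::permutation_iterator<
        std::vector<char>::iterator, std::vector<int>::iterator
    > perm_t;

    perm_t it  = boost::make_permutation_iterator(elements.begin(), indices.begin());
    perm_t end = boost::make_permutation_iterator(elements.begin(), indices.end());

    // Assign through the permuted view: positions 4, 2, 0 of the element range.
    for (; it != end; ++it)
        *it = 'X';

    for (std::vector<char>::iterator p = elements.begin(); p != elements.end(); ++p)
        std::cout << *p;
    std::cout << std::endl;   // prints XbXdX
    return 0;
}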
{ "category": "App Definition and Development", "file_name": "permutation_iterator.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "pointee and indirect reference\nAuthor : David Abrahams\nContact : dave@boost-consulting.com\nOrganization :Boost Consulting\nDate : 2005-02-27\nCopyright : Copyright David Abrahams 2004.\nabstract: Provides the capability to deduce the referent types of pointers, smart pointers\nand iterators in generic code.\nOverview\nHave you ever wanted to write a generic function that can operate on any kind of dereferenceable object?\nIf you have, you’ve probably run into the problem of how to determine the type that the object “points\nat”:\ntemplate <class Dereferenceable>\nvoid f(Dereferenceable p)\n{\nwhat-goes-here? value = *p;\n...\n}\npointee\nIt turns out to be impossible to come up with a fully-general algorithm to do determine what-goes-here\ndirectly, but it is possible to require that pointee<Dereferenceable>::type is correct. Naturally,\npointee has the same difficulty: it can’t determine the appropriate ::type reliably for all Derefer-\nenceable s, but it makes very good guesses (it works for all pointers, standard and boost smart pointers,\nand iterators), and when it guesses wrongly, it can be specialized as necessary:\nnamespace boost\n{\ntemplate <class T>\nstruct pointee<third_party_lib::smart_pointer<T> >\n{\ntypedef T type;\n};\n}\nindirect_reference\nindirect_reference<T>::type is rather more specialized than pointee , and is meant to be used to\nforward the result of dereferencing an object of its argument type. Most dereferenceable types just\n1return a reference to their pointee, but some return proxy references or return the pointee by value.\nWhen that information is needed, call on indirect_reference .\nBoth of these templates are essential to the correct functioning of indirect_iterator .\nReference\npointee\ntemplate <class Dereferenceable>\nstruct pointee\n{\ntypedef /* see below */ type;\n};\nRequires: For an object xof type Dereferenceable ,*xis well-formed. If ++xis ill-formed\nit shall neither be ambiguous nor shall it violate access control, and Dereference-\nable::element_type shall be an accessible type. Otherwise iterator_traits<Dereferenceable>::value_type\nshall be well formed. [Note: These requirements need not apply to explicit or partial\nspecializations of pointee ]\ntype is determined according to the following algorithm, where xis an object of type Dereference-\nable :\nif ( ++x is ill-formed )\n{\nreturn ‘‘Dereferenceable::element_type‘‘\n}\nelse if (‘‘*x‘‘ is a mutable reference to\nstd::iterator_traits<Dereferenceable>::value_type)\n{\nreturn iterator_traits<Dereferenceable>::value_type\n}\nelse\n{\nreturn iterator_traits<Dereferenceable>::value_type const\n}\nindirect_reference\ntemplate <class Dereferenceable>\nstruct indirect_reference\n{\ntypedef /* see below */ type;\n};\nRequires: For an object xof type Dereferenceable ,*xis well-formed. If ++xis ill-formed\nit shall neither be ambiguous nor shall it violate access control, and pointee<Dereferenceable>::type&\nshall be well-formed. Otherwise iterator_traits<Dereferenceable>::reference\nshall be well formed. [Note: These requirements need not apply to explicit or partial\nspecializations of indirect_reference ]\ntype is determined according to the following algorithm, where xis an object of type Dereference-\nable :\n2if ( ++x is ill-formed )\nreturn ‘‘pointee<Dereferenceable>::type&‘‘\nelse\nstd::iterator_traits<Dereferenceable>::reference\n3" } ]
{ "category": "App Definition and Development", "file_name": "pointee.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "Broker or API GatewayFunctioninvokerFunction runtimeEvent APIEvent Protocol (HTTP , AMQP , Kafka, ..)\nEvent Message(body+envelope)\nPlatformServicesInternal protocol(out of scope)EXTERNAL EVENTSINTERNAL EVENTSCloud Events Problem Scope and Relations\n2\n3\n1Per language\nEvent-Mapper/ OrchestrationMapping APIMapping SpecSource Service(out of scope for now)functiongetLambdaEventSource(event){if(event.Records&&event.Records[0].cf)return'isCloudfront';if(event.configRuleId&&event.configRuleName&&event.configRuleArn)return'isAwsConfig';if(event.Records&&(event.Records[0].eventSource==='aws:codecommit'))return'isCodeCommit';if(event.authorizationToken===\"incoming-client-token\")return'isApiGatewayAuthorizer';if(event.StackId&&event.RequestType&&event.ResourceType)return'isCloudFormation';if(event.Records&&(event.Records[0].eventSource==='aws:ses'))return'isSes';if(event.pathParameters&&event.pathParameters.proxy)return'isApiGatewayAwsProxy';if(event.source==='aws.events')return'isScheduledEvent';if(event.awslogs&&event.awslogs.data)return'isCloudWatchLogs';if(event.Records&&(event.Records[0].EventSource==='aws:sns'))return'isSns';if(event.Records&&(event.Records[0].eventSource==='aws:dynamodb'))return'isDynamoDb';if(event.records&&event.records[0].approximateArrivalTimestamp)return'isKinesisFirehose';if(event.records&&event.deliveryStreamArn&&event.deliveryStreamArn.startsWith('arn:aws:kinesis:'))return'isKinesisFirehose';if(event.eventType==='SyncTrigger'&&event.identityId&&event.identityPoolId)return'isCognitoSyncTrigger';if(event.Records&&event.Records[0].eventSource==='aws:kinesis')return'isKinesis';if(event.Records&&event.Records[0].eventSource==='aws:s3')return'isS3';if(event.operation&&event.message)return'isMobileBackend';}Determining Your Event Metadata TodayEvents May Go Beyond Text & JSON\ntype Event interface {// GetIDreturns the unique ID of the eventGetID() ID// GetTriggerInforetrunsa trigger info provider (stuff from the event listener, protocol specific) GetTriggerInfo() TriggerInfoProvider// GetSchemareturns the Event Body SchemaGetSchema() string// GetContentTypereturns the content type of the body (data blob)GetContentType() string// GetBodyreturns the body (data blob) of the eventGetBody() []byte// GetHeaderreturns the header (context attributes) by name GetHeader(string) string// GetHeadersloads all headers (context attributes) into a map of string GetHeaders() map[string]string// GetFieldreturns the field (within the data/body) by name/path as an interface{}GetField(string) interface{}// GetFieldsloads all data/body fields into a map of string / interface{}GetFields() map[string]interface{}// GetTimestampreturns when the event originatedGetTimestamp() time.Time…}Example Event API (Go)Event API provide a way to pass the event envelope metadata (Schema, type, source, ..) and the event message data/bodyThe API should not expose any event/schema specific information, the event data is accessed via the data/body or the de-serialized version of the data (e.g. using Fields method)Envelopattributesareprovidedasamapofstringstoallowextensibility / optional data" } ]
{ "category": "App Definition and Development", "file_name": "2018-02-22-CloudEvents.pdf", "project_name": "CloudEvents", "subcategory": "Streaming & Messaging" }
[ { "data": "Indirect Iterator\nAuthor : David Abrahams, Jeremy Siek, Thomas Witt\nContact : dave@boost-consulting.com ,jsiek@osl.iu.edu ,witt@ive.uni-hannover.de\nOrganization :Boost Consulting , Indiana University Open Systems Lab , University of\nHanover Institute for Transport Railway Operation and Construction\nDate : 2004-11-01\nCopyright : Copyright David Abrahams, Jeremy Siek, and Thomas Witt 2003.\nabstract: indirect_iterator adapts an iterator by applying an extra dereference inside\nofoperator*() . For example, this iterator adaptor makes it possible to view a con-\ntainer of pointers (e.g. list<foo*> ) as if it were a container of the pointed-to type\n(e.g. list<foo> ).indirect_iterator depends on two auxiliary traits, pointee and\nindirect_reference , to provide support for underlying iterators whose value_type\nis not an iterator.\nTable of Contents\nindirect_iterator synopsis\nindirect_iterator requirements\nindirect_iterator models\nindirect_iterator operations\nExample\nindirect_iterator synopsis\ntemplate <\nclass Iterator\n, class Value = use_default\n, class CategoryOrTraversal = use_default\n, class Reference = use_default\n, class Difference = use_default\n>\nclass indirect_iterator\n{\npublic:\ntypedef /* see below */ value_type;\ntypedef /* see below */ reference;\ntypedef /* see below */ pointer;\ntypedef /* see below */ difference_type;\ntypedef /* see below */ iterator_category;\n1indirect_iterator();\nindirect_iterator(Iterator x);\ntemplate <\nclass Iterator2, class Value2, class Category2\n, class Reference2, class Difference2\n>\nindirect_iterator(\nindirect_iterator<\nIterator2, Value2, Category2, Reference2, Difference2\n> const& y\n, typename enable_if_convertible<Iterator2, Itera-\ntor>::type* = 0 // exposition\n);\nIterator const& base() const;\nreference operator*() const;\nindirect_iterator& operator++();\nindirect_iterator& operator--();\nprivate:\nIterator m_iterator; // exposition\n};\nThe member types of indirect_iterator are defined according to the following pseudo-code, where\nVisiterator_traits<Iterator>::value_type\nif (Value is use_default) then\ntypedef remove_const<pointee<V>::type>::type value_type;\nelse\ntypedef remove_const<Value>::type value_type;\nif (Reference is use_default) then\nif (Value is use_default) then\ntypedef indirect_reference<V>::type reference;\nelse\ntypedef Value& reference;\nelse\ntypedef Reference reference;\nif (Value is use_default) then\ntypedef pointee<V>::type* pointer;\nelse\ntypedef Value* pointer;\nif (Difference is use_default)\ntypedef iterator_traits<Iterator>::difference_type difference_type;\nelse\ntypedef Difference difference_type;\nif (CategoryOrTraversal is use_default)\ntypedef iterator-category (\niterator_traversal<Iterator>::type,‘‘reference‘‘,‘‘value_type‘‘\n) iterator_category;\nelse\n2typedef iterator-category (\nCategoryOrTraversal,‘‘reference‘‘,‘‘value_type‘‘\n) iterator_category;\nindirect_iterator requirements\nThe expression *v, where vis an object of iterator_traits<Iterator>::value_type , shall be valid\nexpression and convertible to reference .Iterator shall model the traversal concept indicated by it-\nerator_category .Value ,Reference , and Difference shall be chosen so that value_type ,reference ,\nanddifference_type meet the requirements indicated by iterator_category .\n[Note: there are further requirements on the iterator_traits<Iterator>::value_type if the\nValue parameter is not use_default , as implied by the algorithm for deducing the default for the\nvalue_type 
member.]\nindirect_iterator models\nIn addition to the concepts indicated by iterator_category and by iterator_traversal<indirect_iterator>::type ,\na specialization of indirect_iterator models the following concepts, Where vis an object of itera-\ntor_traits<Iterator>::value_type :\n•Readable Iterator if reference(*v) is convertible to value_type .\n•Writable Iterator if reference(*v) = t is a valid expression (where tis an object of\ntype indirect_iterator::value_type )\n•Lvalue Iterator if reference is a reference type.\nindirect_iterator<X,V1,C1,R1,D1> is interoperable with indirect_iterator<Y,V2,C2,R2,D2>\nif and only if Xis interoperable with Y.\nindirect_iterator operations\nIn addition to the operations required by the concepts described above, specializations of indirect_iterator\nprovide the following operations.\nindirect_iterator();\nRequires: Iterator must be Default Constructible.\nEffects: Constructs an instance of indirect_iterator with a default-constructed m_iterator .\nindirect_iterator(Iterator x);\nEffects: Constructs an instance of indirect_iterator with m_iterator copy constructed\nfrom x.\ntemplate <\nclass Iterator2, class Value2, unsigned Access, class Traversal\n, class Reference2, class Difference2\n>\nindirect_iterator(\nindirect_iterator<\nIterator2, Value2, Access, Traversal, Reference2, Difference2\n> const& y\n, typename enable_if_convertible<Iterator2, Iterator>::type* = 0 // expo-\nsition\n);\n3Requires: Iterator2 is implicitly convertible to Iterator .\nEffects: Constructs an instance of indirect_iterator whose m_iterator subobject is\nconstructed from y.base() .\nIterator const& base() const;\nReturns: m_iterator\nreference operator*() const;\nReturns: **m_iterator\nindirect_iterator& operator++();\nEffects: ++m_iterator\nReturns: *this\nindirect_iterator& operator--();\nEffects: --m_iterator\nReturns: *this\nExample\nThis example prints an array of characters, using indirect_iterator to access the array of characters\nthrough an array of pointers. Next indirect_iterator is used with the transform algorithm to copy\nthe characters (incremented by one) to another array. A constant indirect iterator is used for the source\nand a mutable indirect iterator is used for the destination. 
The last part of the example prints the\noriginal array of characters, but this time using the make_indirect_iterator helper function.\nchar characters[] = \"abcdefg\";\nconst int N = sizeof(characters)/sizeof(char) - 1; // -\n1 since characters has a null char\nchar* pointers_to_chars[N]; // at the end.\nfor (int i = 0; i < N; ++i)\npointers_to_chars[i] = &characters[i];\n// Example of using indirect_iterator\nboost::indirect_iterator<char**, char>\nindirect_first(pointers_to_chars), indirect_last(pointers_to_chars + N);\nstd::copy(indirect_first, indi-\nrect_last, std::ostream_iterator<char>(std::cout, \",\"));\nstd::cout << std::endl;\n// Example of making mutable and constant indirect iterators\nchar mutable_characters[N];\nchar* pointers_to_mutable_chars[N];\nfor (int j = 0; j < N; ++j)\npointers_to_mutable_chars[j] = &mutable_characters[j];\n4boost::indirect_iterator<char* const*> muta-\nble_indirect_first(pointers_to_mutable_chars),\nmutable_indirect_last(pointers_to_mutable_chars + N);\nboost::indirect_iterator<char* const*, char const> const_indirect_first(pointers_to_chars),\nconst_indirect_last(pointers_to_chars + N);\nstd::transform(const_indirect_first, const_indirect_last,\nmutable_indirect_first, std::bind1st(std::plus<char>(), 1));\nstd::copy(mutable_indirect_first, mutable_indirect_last,\nstd::ostream_iterator<char>(std::cout, \",\"));\nstd::cout << std::endl;\n// Example of using make_indirect_iterator()\nstd::copy(boost::make_indirect_iterator(pointers_to_chars),\nboost::make_indirect_iterator(pointers_to_chars + N),\nstd::ostream_iterator<char>(std::cout, \",\"));\nstd::cout << std::endl;\nThe output is:\na,b,c,d,e,f,g,\nb,c,d,e,f,g,h,\na,b,c,d,e,f,g,\nThe source code for this example can be found here.\n5" } ]
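A further minimal sketch (not part of the documentation above): because the default value_type and reference are deduced through pointee and indirect_reference, indirect_iterator also works when the underlying value_type is a smart pointer rather than a raw pointer; this example assumes boost::shared_ptr and the header shown below.

#include <boost/iterator/indirect_iterator.hpp>
#include <boost/shared_ptr.hpp>
#include <iostream>
#include <vector>

int main()
{
    std::vector<boost::shared_ptr<int> > owners;
    for (int i = 0; i < 3; ++i)
        owners.push_back(boost::shared_ptr<int>(new int(i * 10)));

    typedef std::vector<boost::shared_ptr<int> >::iterator base_iter;
    boost::indirect_iterator<base_iter> first(owners.begin()), last(owners.end());

    int sum = 0;
    for (; first != last; ++first)
    {
        *first += 1;    // writes through the shared_ptr
        sum += *first;
    }
    std::cout << sum << std::endl;  // (0+1) + (10+1) + (20+1) = 33
    return 0;
}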
{ "category": "App Definition and Development", "file_name": "indirect_iterator.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "event_23312\nevent_23481\nevent_23593\n...\nevent_1234\nevent_2345\n...event_3456\nevent_4567\n...\nevent_5678\nevent_6789\n...event_7890\nevent_8901\n...Disk and persisted indexesHeap and in-memory index\nPersistevent_34982\nevent_35789\nevent_36791\n...\nevent_1234\nevent_2345\n...event_3456\nevent_4567\n...\nevent_5678\nevent_6789\n...event_7890\nevent_8901\n...Off-heap memory and \npersisted indexes\nLoadQueries\n" } ]
{ "category": "App Definition and Development", "file_name": "realtime_flow.pdf", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "Boost.P ool\nStephen Clear y\nCopyright © 2000-2006 Stephen Clear y\nCopyright © 2011 P aul A. Bristow\nDistrib uted under the Boost Softw are License, Version 1.0. (See accompan ying file LICENSE_1_0.txt or cop y at\nhttp://www .boost.or g/LICENSE_1_0.txt )\nTable of Contents\nIntroduction and Ov ervie w.......................................................................................................................................2\nDocumentation Naming and F ormatting Con ventions ............................................................................................2\nIntroduction ..................................................................................................................................................2\nHow do I use Pool? ........................................................................................................................................3\nInstallation ....................................................................................................................................................3\nBuilding the Test Programs ..............................................................................................................................3\nBoost Pool Interf aces - What interf aces are pro vided and when to use each one. .........................................................3\nPool in More Depth ......................................................................................................................................10\nBoost.Pool C++ Reference .....................................................................................................................................22\nHeader <boost/pool/object_pool.hpp> ..............................................................................................................22\nHeader <boost/pool/pool.hpp> ........................................................................................................................25\nHeader <boost/pool/pool_alloc.hpp> ................................................................................................................31\nHeader <boost/pool/poolfwd.hpp> ...................................................................................................................41\nHeader <boost/pool/simple_se gregated_storage.hpp> ..........................................................................................41\nHeader <boost/pool/singleton_pool.hpp> ..........................................................................................................45\nAppendices .........................................................................................................................................................50\nAppendix A: History .....................................................................................................................................50\nAppendix B: F AQ.........................................................................................................................................50\nAppendix C: Ackno wledgements .....................................................................................................................50\nAppendix D: Tests........................................................................................................................................50\nAppendix E: 
Tickets......................................................................................................................................50\nAppendix F: Other Implementations .................................................................................................................51\nAppendix G: References ................................................................................................................................51\nAppendix H: Future plans ..............................................................................................................................52\nIndexes...............................................................................................................................................................53\n1\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/Introduction and Over view\nDocumentation Naming and Formatting Con ventions\nThis documentation mak es use of the follo wing naming and formatting con ventions.\n•Code is in fixedwidthfont and is syntax-highlighted in color .\n•Replaceable te xt that you will need to supply is in italics .\n•Free functions are rendered in the codefont follo wed by (), as in free_function ().\n•If a name refers to a class template, it is specified lik e this: class_template <>; that is, it is in code font and its name is follo wed\nby <> to indicate that it is a class template.\n•If a name refers to a function-lik e macro, it is specified lik e this: MACRO(); that is, it is uppercase in code font and its name is\nfollowed by () to indicate that it is a function-lik e macro. Object-lik e macros appear without the trailing ().\n•Names that refer to concepts in the generic programming sense are specified in CamelCase.\nNote\nIn addition, notes such as this one specify non-essential information that pro vides additional background or rationale.\nFinally , you can mentally add the follo wing to an y code fragments in this document:\n// Include all of Pool files\n#include <boost/pool.hpp>\nIntroduction\nWhat is P ool?\nPool allocation is a memory allocation scheme that is v ery f ast, b ut limited in its usage. F or more information on pool allocation\n(also called simple se gregated stor age, see concepts concepts and Simple Se gregated Storage ).\nWhy should I use P ool?\nUsing Pools gi ves you more control o ver ho w memory is used in your program. F or example, you could ha ve a situation where you\nwant to allocate a b unch of small objects at one point, and then reach a point in your program where none of them are needed an y\nmore. Using pool interf aces, you can choose to run their destructors or just drop them of f into obli vion; the pool interf ace will\nguarantee that there are no system memory leaks.\nWhen should I use P ool?\nPools are generally used when there is a lot of allocation and deallocation of small objects. 
Another common usage is the situation\nabove, where man y objects may be dropped out of memory .\nIn general, use Pools when you need a more efficient w ay to do unusual memory control.\nWhic h pool allocator should I use?\npool_allocator is a more general-purpose solution, geared to wards efficiently servicing requests for an y number of contiguous\nchunks.\nfast_pool_allocator is also a general-purpose solution b ut is geared to wards efficiently servicing requests for one chunk at a\ntime; it will w ork for contiguous chunks, b ut not as well as pool_allocator .\n2Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/If you are seriously concerned about performance, use fast_pool_allocator when dealing with containers such as std::list ,\nand use pool_allocator when dealing with containers such as std::vector .\nHow do I use P ool?\nSee the Pool Interf aces section that co vers the dif ferent Pool interf aces supplied by this library .\nLibrar y Structure and Dependencies\nForward declarations of all the e xposed symbols for this library are in the header made inscope by #include\n<boost/pool/poolfwd.hpp>.\nThe library may use macros, which will be prefix ed with BOOST_POOL_ . The e xception to this rule are the include file guards, which\n(for file xxx.hpp) is BOOST_xxx_HPP .\nAll e xposed symbols defined by the library will be in namespace boost::. All symbols used only by the implementation will be in\nnamespace boost::details::pool.\nEvery header used only by the implementation is in the subdirectory /detail/.\nAny header in the library may include an y other header in the library or an y system-supplied header at its discretion.\nInstallation\nThe Boost Pool library is a header -only library . That means there is no .lib, .dll, or .so to b uild; just add the Boost directory to your\ncompiler's include file path, and you should be good to go!\nBuilding the Test Pr ograms\nA jamfile.v2 is pro vided which can be run is the usual w ay, for e xample:\nboost\\libs\\pool\\test>bjam-a>pool_test .log\nBoost P ool Interfaces - What interfaces are pr ovided and when\nto use eac h one .\nIntroduction\nThere are se veral interf aces pro vided which allo w users great fle xibility in ho w the y want to use Pools. Re view the concepts document\nto get the basic understanding of ho w the v arious pools w ork.\nTerminology and Tradeoffs\nObject Usa ge vs. Singleton Usa ge\nObject Usage is the method where each Pool is an object that may be created and destro yed. Destro ying a Pool implicitly frees all\nchunks that ha ve been allocated from it.\nSingleton Usage is the method where each Pool is an object with static duration; that is, it will not be destro yed until program e xit.\nPool objects with Singleton Usage may be shared; thus, Singleton Usage implies thread-safety as well. System memory allocated\nby Pool objects with Singleton Usage may be freed through release_memory or pur ge_memory .\nOut-of-Memor y Conditions: Exceptions vs. Null Return\nSome Pool interf aces thro w exceptions when out-of-memory; others will return0. In general, unless mandated by the Standard,\nPool interf aces will al ways prefer to return0 instead of thro wing an e xception.\n3Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/Ordered ver sus unor dered\nAn ordered pool maintains it's free list in order of the address of each free block - this is the most efficient w ay if you're lik ely to\nallocate arrays of objects. 
However, freeing an object can be O(N) in the number of currently free blocks, which can be prohibitively expensive in some situations.

An unordered pool does not maintain its free list in any particular order; as a result, allocation and freeing of single objects is very fast, but allocating arrays may be slow (and in particular the pool may not be aware that it contains enough free memory for the allocation request, and unnecessarily allocate more memory).

Pool Interfaces

pool

The pool interface is a simple Object Usage interface with Null Return.

pool is a fast memory allocator, and guarantees proper alignment of all allocated chunks.

pool.hpp provides two UserAllocator classes and a template class pool, which extends and generalizes the framework provided by the Simple Segregated Storage solution. For information on other pool-based interfaces, see the other Pool Interfaces.

Synopsis

There are two UserAllocator classes provided. Both of them are in pool.hpp.

The default value for the template parameter UserAllocator is always default_user_allocator_new_delete.

struct default_user_allocator_new_delete
{
  typedef std::size_t size_type;
  typedef std::ptrdiff_t difference_type;

  static char * malloc(const size_type bytes)
  { return new (std::nothrow) char[bytes]; }
  static void free(char * const block)
  { delete [] block; }
};

struct default_user_allocator_malloc_free
{
  typedef std::size_t size_type;
  typedef std::ptrdiff_t difference_type;

  static char * malloc(const size_type bytes)
  { return reinterpret_cast<char *>(std::malloc(bytes)); }
  static void free(char * const block)
  { std::free(block); }
};

template <typename UserAllocator = default_user_allocator_new_delete>
class pool
{
  private:
    pool(const pool &);
    void operator=(const pool &);

  public:
    typedef UserAllocator user_allocator;
    typedef typename UserAllocator::size_type size_type;
    typedef typename UserAllocator::difference_type difference_type;

    explicit pool(size_type requested_size);
    ~pool();

    bool release_memory();
    bool purge_memory();

    bool is_from(void * chunk) const;
    size_type get_requested_size() const;

    void * malloc();
    void * ordered_malloc();
    void * ordered_malloc(size_type n);

    void free(void * chunk);
    void ordered_free(void * chunk);
    void free(void * chunks, size_type n);
    void ordered_free(void * chunks, size_type n);
};

Example:

void func()
{
  boost::pool<> p(sizeof(int));
  for (int i = 0; i < 10000; ++i)
  {
    int * const t = p.malloc();
    ... // Do something with t; don't take the time to free() it.
  }
} // on function exit, p is destroyed, and all malloc()'ed ints are implicitly freed.
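The example above allocates single chunks with malloc(). Where arrays of chunks are needed, the ordered functions shown in the synopsis can be used instead. A minimal sketch along the same lines (the function name array_func is illustrative only, and the pool header is assumed to be included):

void array_func()
{
  boost::pool<> p(sizeof(int));
  // Request room for 100 ints as one contiguous run of chunks;
  // ordered_malloc() keeps the free list ordered and returns 0 on failure.
  int * const a = static_cast<int *>(p.ordered_malloc(100));
  if (a)
    p.ordered_free(a, 100); // return the whole run to the ordered free list
} // any chunks still held are released when p is destroyed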
Object_pool

The template class object_pool interface is an Object Usage interface with Null Return, but is aware of the type of the object for which it is allocating chunks. On destruction, any chunks that have been allocated from that object_pool will have their destructors called.

object_pool.hpp provides a template type that can be used for fast and efficient memory allocation. It also provides automatic destruction of non-deallocated objects.

For information on other pool-based interfaces, see the other Pool Interfaces.

Synopsis

template <typename ElementType, typename UserAllocator = default_user_allocator_new_delete>
class object_pool
{
  private:
    object_pool(const object_pool &);
    void operator=(const object_pool &);

  public:
    typedef ElementType element_type;
    typedef UserAllocator user_allocator;
    typedef typename pool<UserAllocator>::size_type size_type;
    typedef typename pool<UserAllocator>::difference_type difference_type;

    object_pool();
    ~object_pool();

    element_type * malloc();
    void free(element_type * p);
    bool is_from(element_type * p) const;

    element_type * construct();
    // other construct() functions
    void destroy(element_type * p);
};

Template Parameters

ElementType

The template parameter is the type of object to allocate/deallocate. It must have a non-throwing destructor.

UserAllocator

Defines the method that the underlying Pool will use to allocate memory from the system. The default is default_user_allocator_new_delete. See User Allocators for details.

Example:

struct X { ... }; // has destructor with side-effects.

void func()
{
  boost::object_pool<X> p;
  for (int i = 0; i < 10000; ++i)
  {
    X * const t = p.malloc();
    ... // Do something with t; don't take the time to free() it.
  }
} // on function exit, p is destroyed, and all destructors for the X objects are called.
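Where each object's constructor and destructor should run individually, construct() and destroy() can be used instead of malloc() and free(). A minimal sketch along the same lines, reusing the X type from the example above (the function name construct_func is illustrative only):

void construct_func()
{
  boost::object_pool<X> p;
  X * const obj = p.construct(); // allocate a chunk and default-construct an X in it
  if (obj)                       // Null Return interface: 0 means out of memory
    p.destroy(obj);              // run ~X() and return the chunk to the pool
} // any X not destroyed here is destructed automatically by ~object_pool()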
Singleton_pool

The singleton_pool interface at singleton_pool.hpp is a Singleton Usage interface with Null Return. It is just the same as the pool interface but with Singleton Usage instead.

Synopsis

template <typename Tag, unsigned RequestedSize,
          typename UserAllocator = default_user_allocator_new_delete>
struct singleton_pool
{
  public:
    typedef Tag tag;
    typedef UserAllocator user_allocator;
    typedef typename pool<UserAllocator>::size_type size_type;
    typedef typename pool<UserAllocator>::difference_type difference_type;

    static const unsigned requested_size = RequestedSize;

  private:
    static pool<size_type> p; // exposition only!

    singleton_pool();

  public:
    static bool is_from(void * ptr);

    static void * malloc();
    static void * ordered_malloc();
    static void * ordered_malloc(size_type n);

    static void free(void * ptr);
    static void ordered_free(void * ptr);
    static void free(void * ptr, std::size_t n);
    static void ordered_free(void * ptr, size_type n);

    static bool release_memory();
    static bool purge_memory();
};

Notes

The underlying pool p referenced by the static functions in singleton_pool is actually declared in a way so that it is:

•Thread-safe if there is only one thread running before main() begins and after main() ends. All of the static functions of singleton_pool synchronize their access to p.
•Guaranteed to be constructed before it is used, so that the simple static object in the synopsis above would actually be an incorrect implementation. The actual implementation to guarantee this is considerably more complicated.

Note that a different underlying pool p exists for each different set of template parameters, including implementation-specific ones.

Template Parameters

Tag

The Tag template parameter allows different unbounded sets of singleton pools to exist. For example, the pool allocators use two tag classes to ensure that the two different allocator types never share the same underlying singleton pool.

Tag is never actually used by singleton_pool.

RequestedSize

The requested size of memory chunks to allocate. This is passed as a constructor parameter to the underlying pool. Must be greater than 0.

UserAllocator

Defines the method that the underlying pool will use to allocate memory from the system. See User Allocators for details.

Example:

struct MyPoolTag { };

typedef boost::singleton_pool<MyPoolTag, sizeof(int)> my_pool;

void func()
{
  for (int i = 0; i < 10000; ++i)
  {
    int * const t = my_pool::malloc();
    ... // Do something with t; don't take the time to free() it.
  }
  // Explicitly free all malloc()'ed ints.
  my_pool::purge_memory();
}

pool_allocator

The pool_allocator interface is a Singleton Usage interface with Exceptions. It is built on the singleton_pool interface, and provides a Standard Allocator-compliant class (for use in containers, etc.).

Introduction

pool_alloc.hpp provides two template types that can be used for fast and efficient memory allocation. These types both satisfy the Standard Allocator requirements [20.1.5] and the additional requirements in [20.1.5/4], so they can be used with Standard or user-supplied containers.

For information on other pool-based interfaces, see the other Pool Interfaces.

Synopsis

struct pool_allocator_tag {};

template <typename T,
          typename UserAllocator = default_user_allocator_new_delete>
class pool_allocator
{
  public:
    typedef UserAllocator user_allocator;
    typedef T value_type;
    typedef value_type * pointer;
    typedef const value_type * const_pointer;
    typedef value_type & reference;
    typedef const value_type & const_reference;
    typedef typename pool<UserAllocator>::size_type size_type;
    typedef typename pool<UserAllocator>::difference_type difference_type;

    template <typename U>
    struct rebind
    { typedef pool_allocator<U, UserAllocator> other; };

  public:
    pool_allocator();
    pool_allocator(const pool_allocator &);
    // The following is not explicit, mimicking std::allocator [20.4.1]
    template <typename U>
    pool_allocator(const pool_allocator<U, UserAllocator> &);
    pool_allocator & operator=(const pool_allocator &);
    ~pool_allocator();

    static pointer address(reference r);
    static const_pointer address(const_reference s);
    static size_type max_size();
    static void construct(pointer ptr, const value_type & t);
    static void destroy(pointer ptr);

    bool operator==(const pool_allocator &) const;
    bool operator!=(const pool_allocator &) const;

    static pointer allocate(size_type n);
    static pointer allocate(size_type n, pointer);
    static void deallocate(pointer ptr, size_type n);
};

struct fast_pool_allocator_tag {};

template <typename T,
          typename UserAllocator = default_user_allocator_new_delete>
class fast_pool_allocator
{
  public:
    typedef UserAllocator user_allocator;
    typedef T value_type;
    typedef value_type * pointer;
    typedef const value_type * const_pointer;
    typedef value_type & reference;
    typedef const value_type & const_reference;
    typedef typename pool<UserAllocator>::size_type size_type;
    typedef typename pool<UserAllocator>::difference_type difference_type;

    template <typename U>
    struct rebind
    { typedef fast_pool_allocator<U, UserAllocator> other; };
  public:
    fast_pool_allocator();
    fast_pool_allocator(const fast_pool_allocator &);
    // The following is not explicit, mimicking std::allocator [20.4.1]
    template <typename U>
    fast_pool_allocator(const fast_pool_allocator<U, UserAllocator> &);
    fast_pool_allocator & operator=(const fast_pool_allocator &);
    ~fast_pool_allocator();

    static pointer address(reference r);
    static const_pointer address(const_reference s);
    static size_type max_size();
    static void construct(pointer ptr, const value_type & t);
    static void destroy(pointer ptr);

    bool operator==(const fast_pool_allocator &) const;
    bool operator!=(const fast_pool_allocator &) const;

    static pointer allocate(size_type n);
    static pointer allocate(size_type n, pointer);
    static void deallocate(pointer ptr, size_type n);

    static pointer allocate();
    static void deallocate(pointer ptr);
};

Template Parameters

T

The first template parameter is the type of object to allocate/deallocate.

UserAllocator

Defines the method that the underlying Pool will use to allocate memory from the system. See User Allocators for details.

Example:

void func()
{
  std::vector<int, boost::pool_allocator<int> > v;
  for (int i = 0; i < 10000; ++i)
    v.push_back(13);
} // Exiting the function does NOT free the system memory allocated by the pool allocator.
  // You must call
  //   boost::singleton_pool<boost::pool_allocator_tag, sizeof(int)>::release_memory();
  // in order to force freeing the system memory.
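The advice earlier in this document is to prefer fast_pool_allocator for node-based containers such as std::list. A minimal sketch along the same lines (assuming <list> and pool_alloc.hpp are included; note that the underlying singleton_pool is keyed on the size of the container's internal node type rather than sizeof(int), so the exact release call to use depends on the standard library implementation):

void func()
{
  std::list<int, boost::fast_pool_allocator<int> > l;
  for (int i = 0; i < 10000; ++i)
    l.push_back(13);
} // As above, exiting the function does NOT free the system memory held by the
  // underlying singleton_pool; it is only returned by an explicit release_memory()
  // or purge_memory() call on that pool, or at program exit.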
Pool in More Depth

Basic ideas behind pooling

Dynamic memory allocation has been a fundamental part of most computer systems since roughly 1960... [1]

Everyone uses dynamic memory allocation. If you have ever called malloc or new, then you have used dynamic memory allocation. Most programmers have a tendency to treat the heap as a “magic bag”: we ask it for memory, and it magically creates some for us. Sometimes we run into problems because the heap is not magic.

The heap is limited. Even on large systems (i.e., not embedded) with huge amounts of virtual memory available, there is a limit. Everyone is aware of the physical limit, but there is a more subtle, 'virtual' limit, that limit at which your program (or the entire system) slows down due to the use of virtual memory. This virtual limit is much closer to your program than the physical limit, especially if you are running on a multitasking system. Therefore, when running on a large system, it is considered nice to make your program use as few resources as necessary, and release them as soon as possible. When using an embedded system, programmers usually have no memory to waste.

The heap is complicated. It has to satisfy any type of memory request, for any size, and do it fast. The common approaches to memory management have to do with splitting the memory up into portions, and keeping them ordered by size in some sort of a tree or list structure. Add in other factors, such as locality and estimating lifetime, and heaps quickly become very complicated. So complicated, in fact, that there is no known perfect answer to the problem of how to do dynamic memory allocation. The diagrams below illustrate how most common memory managers work: for each chunk of memory, it uses part of that memory to maintain its internal tree or list structure. Even when a chunk is malloc'ed out to a program, the memory manager must save some information in it - usually just its size. Then, when the block is free'd, the memory manager can easily tell how large it is.

Dynamic memory allocation is often inefficient

Because of the complication of dynamic memory allocation, it is often inefficient in terms of time and/or space. Most memory allocation algorithms store some form of information with each memory block, either the block size or some relational information, such as its position in the internal tree or list structure. It is common for such header fields to take up one machine word in a block that is being used by the program. The obvious disadvantage, then, is when small objects are dynamically allocated. For example, if ints were dynamically allocated, then automatically the algorithm will reserve space for the header fields as well, and we end up with a 50% waste of memory. Of course, this is a worst-case scenario. However, more modern programs are making use of small objects on the heap; and that is making this problem more and more apparent. Wilson et al. state that an average-case memory overhead is about ten to twenty percent [2]. This memory overhead will grow higher as more programs use more smaller objects. It is this memory overhead that brings programs closer to the virtual limit.

In larger systems, the memory overhead is not as big of a problem (compared to the amount of time it would take to work around it), and thus is often ignored. However, there are situations where many allocations and/or deallocations of smaller objects are taking place as part of a time-critical algorithm, and in these situations, the system-supplied memory allocator is often too slow.

Simple segregated storage addresses both of these issues. Almost all memory overhead is done away with, and all allocations can take place in a small amount of (amortized) constant time. However, this is done at the loss of generality; simple segregated storage can only allocate memory chunks of a single size.

Simple Segregated Storage

Simple Segregated Storage is the basic idea behind the Boost Pool library. Simple Segregated Storage is the simplest, and probably the fastest, memory allocation/deallocation algorithm. It begins by partitioning a memory block into fixed-size chunks. Where the block comes from is not important until implementation time. A Pool is some object that uses Simple Segregated Storage in this fashion. To illustrate:

Each of the chunks in any given block are always the same size. This is the fundamental restriction of Simple Segregated Storage: you cannot ask for chunks of different sizes. For example, you cannot ask a Pool of integers for a character, or a Pool of characters for an integer (assuming that characters and integers are different sizes).

Simple Segregated Storage works by interleaving a free list within the unused chunks. For example:

By interleaving the free list inside the chunks, each Simple Segregated Storage only has the overhead of a single pointer (the pointer to the first element in the list). It has no memory overhead for chunks that are in use by the process.

Simple Segregated Storage is also extremely fast. In the simplest case, memory allocation is merely removing the first chunk from the free list, an O(1) operation. In the case where the free list is empty, another block may have to be acquired and partitioned, which would result in an amortized O(1) time. Memory deallocation may be as simple as adding that chunk to the front of the free list, an O(1) operation. However, more complicated uses of Simple Segregated Storage may require a sorted free list, which makes deallocation O(N).
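To make the interleaving concrete, here is a hypothetical sketch of the idea (not the library's actual code; the real simple_segregated_storage::segregate() described later also takes an end pointer, and this sketch assumes sz >= partition_sz, partition_sz >= sizeof(void *), and a suitably aligned block):

#include <cstddef>

// Partition a block of sz bytes into partition_sz-sized chunks and thread a
// free list through them: each unused chunk stores, in its own first bytes,
// a pointer to the next unused chunk.
void * segregate_sketch(void * block, std::size_t sz, std::size_t partition_sz)
{
  char * const first = static_cast<char *>(block);
  char * const last  = first + (sz / partition_sz - 1) * partition_sz;
  for (char * p = first; p != last; p += partition_sz)
    *reinterpret_cast<void **>(p) = p + partition_sz; // link to the next chunk
  *reinterpret_cast<void **>(last) = 0;                // terminate the free list
  return block;                                        // head of the (ordered) free list
}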
Simple Segregated Storage gives faster execution and less memory overhead than a system-supplied allocator, but at the loss of generality. A good place to use a Pool is in situations where many (noncontiguous) small objects may be allocated on the heap, or if allocation and deallocation of the same-sized objects happens repeatedly.

Guaranteeing Alignment - How we guarantee alignment portably.

Terminology

Review the concepts section if you are not already familiar with it. Remember that a block is a contiguous section of memory, which is partitioned or segregated into fixed-size chunks. These chunks are what are allocated and deallocated by the user.

Overview

Each Pool has a single free list that can extend over a number of memory blocks. Thus, Pool also has a linked list of allocated memory blocks. Each memory block, by default, is allocated using new[], and all memory blocks are freed on destruction. It is the use of new[] that allows us to guarantee alignment.

Proof of Concept: Guaranteeing Alignment

Each block of memory is allocated as a POD type (specifically, an array of characters) through operator new[]. Let POD_size be the number of characters allocated.

Predicate 1: Arrays may not have padding

This follows from the following quote:

[5.3.3/2] (Expressions::Unary expressions::Sizeof) ... When applied to an array, the result is the total number of bytes in the array. This implies that the size of an array of n elements is n times the size of an element.

Therefore, arrays cannot contain padding, though the elements within the arrays may contain padding.

Predicate 2: Any block of memory allocated as an array of characters through operator new[] (hereafter referred to as the block) is properly aligned for any object of that size or smaller

This follows from:

•[3.7.3.1/2] (Basic concepts::Storage duration::Dynamic storage duration::Allocation functions) "... The pointer returned shall be suitably aligned so that it can be converted to a pointer of any complete object type and then used to access the object or array in the storage allocated ..."
•[5.3.4/10] (Expressions::Unary expressions::New) "... For arrays of char and unsigned char, the difference between the result of the new-expression and the address returned by the allocation function shall be an integral multiple of the most stringent alignment requirement (3.9) of any object type whose size is no greater than the size of the array being created. [Note: Because allocation functions are assumed to return pointers to storage that is appropriately aligned for objects of any type, this constraint on array allocation overhead permits the common idiom of allocating character arrays into which objects of other types will later be placed."

Consider: imaginary object type Element of a size which is a multiple of some actual object size; assume sizeof(Element) > POD_size

Note that an object of that size can exist.
One object of that size is an array of the "actual" objects.

Note that the block is properly aligned for an Element. This directly follows from Predicate 2.

Corollary 1: The block is properly aligned for an array of Elements

This follows from Predicates 1 and 2, and the following quote:

[3.9/9] (Basic concepts::Types) "An object type is a (possibly cv-qualified) type that is not a function type, not a reference type, and not a void type."

(Specifically, array types are object types.)

Corollary 2: For any pointer p and integer i, if p is properly aligned for the type it points to, then p + i (when well-defined) is properly aligned for that type; in other words, if an array is properly aligned, then each element in that array is properly aligned

There are no quotes from the Standard to directly support this argument, but it fits the common conception of the meaning of "alignment".

Note that the conditions for p + i being well-defined are outlined in [5.7/5]. We do not quote that here, but only make note that it is well-defined if p and p + i both point into or one past the same array.

Let: sizeof(Element) be the least common multiple of the sizes of several actual objects (T1, T2, T3, ...)

Let: block be a pointer to the memory block, pe be (Element *) block, and pn be (Tn *) block

Corollary 3: For each integer i, such that pe + i is well-defined, then for each n, there exists some integer jn such that pn + jn is well-defined and refers to the same memory address as pe + i

This follows naturally, since the memory block is an array of Elements, and for each n, sizeof(Element) % sizeof(Tn) == 0; thus, the boundary of each element in the array of Elements is also a boundary of each element in each array of Tn.

Theorem: For each integer i, such that pe + i is well-defined, that address (pe + i) is properly aligned for each type Tn

Since pe + i is well-defined, then by Corollary 3, pn + jn is well-defined. It is properly aligned from Predicate 2 and Corollaries 1 and 2.

Use of the Theorem

The proof above covers alignment requirements for cutting chunks out of a block. The implementation uses actual object sizes of:

•The requested object size (requested_size); this is the size of chunks requested by the user
•void * (pointer to void); this is because we interleave our free list through the chunks
•size_type; this is because we store the size of the next block within each memory block

Each block also contains a pointer to the next block; but that is stored as a pointer to void and cast when necessary, to simplify alignment requirements to the three types above.

Therefore, alloc_size is defined to be the largest of the sizes above, rounded up to be a multiple of all three sizes. This guarantees alignment provided all alignments are powers of two: something that appears to be true on all known platforms.
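For instance (a hypothetical illustration, not taken from the library's documentation): on a platform where sizeof(void *) == sizeof(size_type) == 8, a requested_size of 12 gives alloc_size = 24, since 12 is the largest of the three sizes and 24 is the smallest multiple of 12 that is also a multiple of 8. Every chunk cut from a block therefore occupies 24 bytes and satisfies the alignment of all three types.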
A Look at the Memory Block

Each memory block consists of three main sections. The first section is the part that chunks are cut out of, and contains the interleaved free list. The second section is the pointer to the next block, and the third section is the size of the next block.

Each of these sections may contain padding as necessary to guarantee alignment for each of the next sections. The size of the first section is number_of_chunks * lcm(requested_size, sizeof(void *), sizeof(size_type)); the size of the second section is lcm(sizeof(void *), sizeof(size_type)); and the size of the third section is sizeof(size_type).

Here's an example memory block, where requested_size == sizeof(void *) == sizeof(size_type) == 4.

To show a visual example of possible padding, here's an example memory block where requested_size == 8 and sizeof(void *) == sizeof(size_type) == 4.

How Contiguous Chunks are Handled

The theorem above guarantees all alignment requirements for allocating chunks and also implementation details such as the interleaved free list. However, it does so by adding padding when necessary; therefore, we have to treat allocations of contiguous chunks in a different way.

Using array arguments similar to the above, we can translate any request for contiguous memory for n objects of requested_size into a request for m contiguous chunks. m is simply ceil(n * requested_size / alloc_size), where alloc_size is the actual size of the chunks.

To illustrate, here's an example memory block, where requested_size == 1 and sizeof(void *) == sizeof(size_type) == 4.

Then, when the user deallocates the contiguous memory, we can split it up into chunks again.

Note that the implementation provided for allocating contiguous chunks uses a linear instead of quadratic algorithm. This means that it may not find contiguous free chunks if the free list is not ordered. Thus, it is recommended to always use an ordered free list when dealing with contiguous allocation of chunks. (In the example above, if Chunk 1 pointed to Chunk 3 pointed to Chunk 2 pointed to Chunk 4, instead of being in order, the contiguous allocation algorithm would have failed to find any of the contiguous chunks.)
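To put numbers on that translation (a hypothetical illustration): with requested_size == 1 and alloc_size == 4, as in the example block above, a request for 11 contiguous objects becomes m = ceil(11 * 1 / 4) = 3 chunks, covering 12 bytes, of which the last byte is unused padding.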
Simple Segregated Storage (Not for the faint of heart - Embedded programmers only!)

Introduction

simple_segregated_storage.hpp provides a template class simple_segregated_storage that controls access to a free list of memory chunks.

Note that this is a very simple class, with unchecked preconditions on almost all its functions. It is intended to be the fastest and smallest possible quick memory allocator - for example, something to use in embedded systems. This class delegates many difficult preconditions to the user (especially alignment issues). For more general usage, see the other Pool Interfaces.

Synopsis

template <typename SizeType = std::size_t>
class simple_segregated_storage
{
  private:
    simple_segregated_storage(const simple_segregated_storage &);
    void operator=(const simple_segregated_storage &);

  public:
    typedef SizeType size_type;

    simple_segregated_storage();
    ~simple_segregated_storage();

    static void * segregate(void * block,
        size_type nsz, size_type npartition_sz,
        void * end = 0);
    void add_block(void * block,
        size_type nsz, size_type npartition_sz);
    void add_ordered_block(void * block,
        size_type nsz, size_type npartition_sz);

    bool empty() const;

    void * malloc();
    void free(void * chunk);
    void ordered_free(void * chunk);
    void * malloc_n(size_type n, size_type partition_sz);
    void free_n(void * chunks, size_type n,
        size_type partition_sz);
    void ordered_free_n(void * chunks, size_type n,
        size_type partition_sz);
};

Semantics

An object of type simple_segregated_storage<SizeType> is empty if its free list is empty. If it is not empty, then it is ordered if its free list is ordered. A free list is ordered if repeated calls to malloc() will result in a constantly-increasing sequence of values, as determined by std::less<void *>. A member function is order-preserving if the free list maintains its order orientation (that is, an ordered free list is still ordered after the member function call).

Table 1. Symbol Table

Symbol                 Meaning
Store                  simple_segregated_storage<SizeType>
t                      value of type Store
u                      value of type const Store
block, chunk, end      values of type void *
partition_sz, sz, n    values of type Store::size_type

Table 2. Template Parameters

Parameter   Default        Requirements
SizeType    std::size_t    An unsigned integral type

Table 3. Typedefs

Symbol      Type
size_type   SizeType

Table 4. Constructors, Destructors, and State

Store()
    Return type: not used. Post-condition: empty(). Constructs a new Store.

(&t)->~Store()
    Return type: not used. Destructs the Store.

u.empty()
    Return type: bool. Returns true if u is empty. Order-preserving.

Table 5. Segregation

Store::segregate(block, sz, partition_sz, end)
    Return type: void *.
    Pre-conditions: partition_sz >= sizeof(void *); partition_sz = sizeof(void *) * i, for some integer i; sz >= partition_sz; block is properly aligned for an array of objects of size partition_sz; block is properly aligned for an array of void *.
    Interleaves a free list through the memory block specified by block of size sz bytes, partitioning it into as many partition_sz-sized chunks as possible. The last chunk is set to point to end, and a pointer to the first chunk is returned (this is always equal to block). This interleaved free list is ordered. O(sz).

Store::segregate(block, sz, partition_sz)
    Return type: void *. Pre-conditions: same as above.
    Semantically equivalent to Store::segregate(block, sz, partition_sz, 0).

t.add_block(block, sz, partition_sz)
    Return type: void. Pre-conditions: same as above. Post-condition: !t.empty().
    Segregates the memory block specified by block of size sz bytes into partition_sz-sized chunks, and adds that free list to its own. If t was empty before this call, then it is ordered after this call. O(sz).
t.add_ordered_block(block, sz, partition_sz)
    Return type: void. Pre-conditions: same as above. Post-condition: !t.empty().
    Segregates the memory block specified by block of size sz bytes into partition_sz-sized chunks, and merges that free list into its own. Order-preserving. O(sz).

Table 6. Allocation and Deallocation

t.malloc()
    Return type: void *. Pre-condition: !t.empty().
    Takes the first available chunk from the free list and returns it. Order-preserving. O(1).

t.free(chunk)
    Return type: void. Pre-condition: chunk was previously returned from a call to t.malloc(). Post-condition: !t.empty().
    Places chunk back on the free list. Note that chunk may not be 0. O(1).

t.ordered_free(chunk)
    Return type: void. Pre-condition: same as above. Post-condition: !t.empty().
    Places chunk back on the free list. Note that chunk may not be 0. Order-preserving. O(N) with respect to the size of the free list.

t.malloc_n(n, partition_sz)
    Return type: void *.
    Attempts to find a contiguous sequence of n partition_sz-sized chunks. If found, removes them all from the free list and returns a pointer to the first. If not found, returns 0. It is strongly recommended (but not required) that the free list be ordered, as this algorithm will fail to find a contiguous sequence unless it is contiguous in the free list as well. Order-preserving. O(N) with respect to the size of the free list.

t.free_n(chunk, n, partition_sz)
    Return type: void. Pre-condition: chunk was previously returned from a call to t.malloc_n(n, partition_sz). Post-condition: !t.empty(). Semantically equivalent to t.add_block(chunk, n * partition_sz, partition_sz).
    Assumes that chunk actually refers to a block of chunks spanning n * partition_sz bytes; segregates and adds in that block. Note that chunk may not be 0. O(n).

t.ordered_free_n(chunk, n, partition_sz)
    Return type: void. Pre-condition: same as above. Post-condition: same as above. Semantically equivalent to t.add_ordered_block(chunk, n * partition_sz, partition_sz).
    Same as above, except it merges in the free list. Order-preserving. O(N + n) where N is the size of the free list.

The UserAllocator Concept

Pool objects need to request memory blocks from the system, which the Pool then splits into chunks to allocate to the user. By specifying a UserAllocator template parameter to various Pool interfaces, users can control how those system memory blocks are allocated.

In the following table, UserAllocator is a UserAllocator type, block is a value of type char *, and n is a value of type UserAllocator::size_type.

Table 7. UserAllocator Requirements

UserAllocator::size_type
    An unsigned integral type that can represent the size of the largest object to be allocated.

UserAllocator::difference_type
    A signed integral type that can represent the difference of any two pointers.

UserAllocator::malloc(n)
    Result: char *. Attempts to allocate n bytes from the system. Returns 0 if out-of-memory.

UserAllocator::free(block)
    Result: void. block must have been previously returned from a call to UserAllocator::malloc.

There are two UserAllocator classes provided in this library: default_user_allocator_new_delete and default_user_allocator_malloc_free, both in pool.hpp. The default value for the template parameter UserAllocator is always default_user_allocator_new_delete.
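As an illustration of the concept, a user-supplied allocator might look like the following minimal sketch. The name logging_allocator and the use of std::cerr are illustrative only, not part of the library; the point is simply that it provides the typedefs and the two static functions from the requirements table above.

#include <cstddef>
#include <new>
#include <iostream>

struct logging_allocator
{
  typedef std::size_t size_type;
  typedef std::ptrdiff_t difference_type;

  static char * malloc(const size_type bytes)
  {
    std::cerr << "pool requests " << bytes << " bytes\n"; // trace each system request
    return new (std::nothrow) char[bytes];                // 0 on failure, as required
  }
  static void free(char * const block)
  {
    delete [] block;
  }
};

// It can then be plugged into any of the interfaces, for example:
//   boost::pool<logging_allocator> p(sizeof(int));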
Boost.Pool C++ Reference

Header <boost/pool/object_pool.hpp>

Provides a template type boost::object_pool<T, UserAllocator> that can be used for fast and efficient memory allocation of objects of type T. It also provides automatic destruction of non-deallocated objects.

namespace boost {
  template <typename T, typename UserAllocator> class object_pool;
}

Class template object_pool

boost::object_pool — A template class that can be used for fast and efficient memory allocation of objects. It also provides automatic destruction of non-deallocated objects.

Synopsis

// In header: <boost/pool/object_pool.hpp>

template <typename T, typename UserAllocator>
class object_pool : protected boost::pool<UserAllocator> {
public:
  // types
  typedef T element_type;                                        // ElementType.
  typedef UserAllocator user_allocator;                          // User allocator.
  typedef pool<UserAllocator>::size_type size_type;              // pool<UserAllocator>::size_type
  typedef pool<UserAllocator>::difference_type difference_type;  // pool<UserAllocator>::difference_type

  // construct/copy/destruct
  explicit object_pool(const size_type = 32, const size_type = 0);
  ~object_pool();

  // protected member functions
  pool<UserAllocator> & store();
  const pool<UserAllocator> & store() const;

  // protected static functions
  static void * & nextof(void * const);

  // public member functions
  element_type * malloc();
  void free(element_type * const);
  bool is_from(element_type * const) const;
  element_type * construct();
  template <typename Arg1, ... class ArgN>
    element_type * construct(Arg1 &, ... ArgN &);
  void destroy(element_type * const);
  size_type get_next_size() const;
  void set_next_size(const size_type);
};

Description

T  The type of object to allocate/deallocate. T must have a non-throwing destructor.

UserAllocator  Defines the allocator that the underlying Pool will use to allocate memory from the system. See User Allocators for details.

Class object_pool is a template class that can be used for fast and efficient memory allocation of objects. It also provides automatic destruction of non-deallocated objects.

When the object pool is destroyed, the destructor for type T is called for each allocated T that has not yet been deallocated. O(N).

Whenever an object of type ObjectPool needs memory from the system, it will request it from its UserAllocator template parameter. The amount requested is determined using a doubling algorithm; that is, each time more system memory is allocated, the amount of system memory requested is doubled.
Users may control the doubling algorithm by the parameters passed to the object_pool's constructor.

object_pool public construct/copy/destruct

1. explicit object_pool(const size_type arg_next_size = 32,
                        const size_type arg_max_size = 0);

   Constructs a new (empty by default) ObjectPool.

   Requires: next_size != 0.

2. ~object_pool();

object_pool protected member functions

1. pool<UserAllocator> & store();

   Returns: The underlying boost::pool storage used by *this.

2. const pool<UserAllocator> & store() const;

   Returns: The underlying boost::pool storage used by *this.

object_pool protected static functions

1. static void * & nextof(void * const ptr);

   Returns: The next memory block after ptr (for the sake of code readability :)

object_pool public member functions

1. element_type * malloc();

   Allocates memory that can hold one object of type ElementType.

   If out of memory, returns 0.

   Amortized O(1).

2. void free(element_type * const chunk);

   De-allocates memory that holds a chunk of type ElementType.

   Note that p may not be 0.

   Note that the destructor for p is not called. O(N).

3. bool is_from(element_type * const chunk) const;

   Returns false if chunk was allocated from some other pool or may be returned as the result of a future allocation from some other pool. Otherwise, the return value is meaningless.

   Note
   This function may NOT be used to reliably test random pointer values!

   Returns: true if chunk was allocated from *this or may be returned as the result of a future allocation from *this.

4. element_type * construct();

   Returns: A pointer to an object of type T, allocated in memory from the underlying pool and default constructed. The returned object can be freed by a call to destroy. Otherwise the returned object will be automatically destroyed when *this is destroyed.

5. template <typename Arg1, ... class ArgN>
   element_type * construct(Arg1 &, ... ArgN &);

   Note
   Since the number and type of arguments to this function is totally arbitrary, a simple system has been set up to automatically generate template construct functions. This system is based on the macro preprocessor m4, which is standard on UNIX systems and also available for Win32 systems.

   detail/pool_construct.m4, when run with m4, will create the file detail/pool_construct.ipp, which only defines the construct functions for the proper number of arguments. The number of arguments may be passed into the file as an m4 macro, NumberOfArguments; if not provided, it will default to 3.

   For each different number of arguments (1 to NumberOfArguments), a template function is generated. There are the same number of template parameters as there are arguments, and each argument's type is a reference to that (possibly cv-qualified) template argument. Each possible permutation of the cv-qualifications is also generated.

   Because each permutation is generated for each possible number of arguments, the included file size grows exponentially in terms of the number of constructor arguments, not linearly. For the sake of rational compile times, only use as many arguments as you need.

   detail/pool_construct.bat and detail/pool_construct.sh are also provided to call m4, defining NumberOfArguments to be their command-line parameter. See these files for more details.

   Returns: A pointer to an object of type T, allocated in memory from the underlying pool and constructed from arguments Arg1 to ArgN. The returned object can be freed by a call to destroy. Otherwise the returned object will be automatically destroyed when *this is destroyed.
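As a brief, hypothetical illustration of these generated overloads (the type point and the function func are examples, not part of the library, and assume the construct() overloads for this arity have been generated as described above):

struct point
{
  point(int x, int y) : x(x), y(y) {}
  int x, y;
};

void func()
{
  boost::object_pool<point> p;
  int x = 1, y = 2;
  point * pt = p.construct(x, y); // allocate a chunk and construct point(1, 2) in it
  p.destroy(pt);                  // run ~point() and return the chunk to the pool
}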
6. void destroy(element_type * const chunk);

   Destroys an object allocated with construct.

   Equivalent to: p->~ElementType(); this->free(p);

   Requires: p must have been previously allocated from *this via a call to construct.

7. size_type get_next_size() const;

   Returns: The number of chunks that will be allocated next time we run out of memory.

8. void set_next_size(const size_type x);

   Set a new number of chunks to allocate the next time we run out of memory.

   Parameters: x  wanted next_size (must not be zero).

Header <boost/pool/pool.hpp>

Provides class pool: a fast memory allocator that guarantees proper alignment of all allocated chunks, and which extends and generalizes the framework provided by the simple segregated storage solution. Also provides two UserAllocator classes which can be used in conjunction with pool.

namespace boost {
  struct default_user_allocator_new_delete;
  struct default_user_allocator_malloc_free;
  template <typename UserAllocator> class pool;
}

Struct default_user_allocator_new_delete

boost::default_user_allocator_new_delete — Allocator used as the default template parameter for a UserAllocator template parameter. Uses new and delete.

Synopsis

// In header: <boost/pool/pool.hpp>

struct default_user_allocator_new_delete {
  // types
  typedef std::size_t size_type;          // An unsigned integral type that can represent the size of the largest object to be allocated.
  typedef std::ptrdiff_t difference_type; // A signed integral type that can represent the difference of any two pointers.

  // public static functions
  static char * malloc(const size_type);
  static void free(char * const);
};

Description

default_user_allocator_new_delete public static functions

1. static char * malloc(const size_type bytes);

   Attempts to allocate n bytes from the system. Returns 0 if out-of-memory.

2. static void free(char * const block);

   Attempts to de-allocate block.

   Requires: block must have been previously returned from a call to UserAllocator::malloc.

Struct default_user_allocator_malloc_free

boost::default_user_allocator_malloc_free — UserAllocator used as template parameter for pool and object_pool. Uses malloc and free internally.

Synopsis

// In header: <boost/pool/pool.hpp>

struct default_user_allocator_malloc_free {
  // types
  typedef std::size_t size_type;          // An unsigned integral type that can represent the size of the largest object to be allocated.
  typedef std::ptrdiff_t difference_type; // A signed integral type that can represent the difference of any two pointers.
  // public static functions
  static char * malloc(const size_type);
  static void free(char * const);
};

Description

default_user_allocator_malloc_free public static functions

1. static char * malloc(const size_type bytes);

2. static void free(char * const block);

Class template pool

boost::pool — A fast memory allocator that guarantees proper alignment of all allocated chunks.

Synopsis

// In header: <boost/pool/pool.hpp>

template <typename UserAllocator>
class pool :
  protected boost::simple_segregated_storage<UserAllocator::size_type>
{
public:
  // types
  typedef UserAllocator user_allocator;                    // User allocator.
  typedef UserAllocator::size_type size_type;              // An unsigned integral type that can represent the size of the largest object to be allocated.
  typedef UserAllocator::difference_type difference_type;  // A signed integral type that can represent the difference of any two pointers.

  // construct/copy/destruct
  explicit pool(const size_type, const size_type = 32, const size_type = 0);
  ~pool();

  // private member functions
  void * malloc_need_resize();
  void * ordered_malloc_need_resize();

  // protected member functions
  simple_segregated_storage<size_type> & store();
  const simple_segregated_storage<size_type> & store() const;
  details::PODptr<size_type> find_POD(void * const) const;
  size_type alloc_size() const;

  // protected static functions
  static bool is_from(void * const, char * const, const size_type);
  static void * & nextof(void * const);

  // public member functions
  bool release_memory();
  bool purge_memory();
  size_type get_next_size() const;
  void set_next_size(const size_type);
  size_type get_max_size() const;
  void set_max_size(const size_type);
  size_type get_requested_size() const;
  void * malloc();
  void * ordered_malloc();
  void * ordered_malloc(size_type);
  void free(void * const);
  void ordered_free(void * const);
  void free(void * const, const size_type);
  void ordered_free(void * const, const size_type);
  bool is_from(void * const) const;
};

Description

Whenever an object of type pool needs memory from the system, it will request it from its UserAllocator template parameter. The amount requested is determined using a doubling algorithm; that is, each time more system memory is allocated, the amount of system memory requested is doubled.

Users may control the doubling algorithm by using the following extensions:

Users may pass an additional constructor parameter to pool. This parameter is of type size_type, and is the number of chunks to request from the system the first time that object needs to allocate system memory. The default is 32. This parameter may not be 0.

Users may also pass an optional third parameter to pool's constructor. This parameter is of type size_type, and sets a maximum size for allocated chunks.
When this parameter takes the default value of 0, then there is no upper limit on chunk size.

Finally, if the doubling algorithm results in no memory being allocated, the pool will backtrack just once, halving the chunk size and trying again.

UserAllocator type - the method that the Pool will use to allocate memory from the system.

There are essentially two ways to use class pool: the client can call malloc() and free() to allocate and free single chunks of memory; this is the most efficient way to use a pool, but does not allow for the efficient allocation of arrays of chunks. Alternatively, the client may call ordered_malloc() and ordered_free(), in which case the free list is maintained in an ordered state, and efficient allocation of arrays of chunks is possible. However, this latter option can suffer from poor performance when large numbers of allocations are performed.

pool public construct/copy/destruct

1. explicit pool(const size_type nrequested_size,
                 const size_type nnext_size = 32, const size_type nmax_size = 0);

   Constructs a new empty Pool that can be used to allocate chunks of size RequestedSize.

   Parameters:
     nmax_size        the maximum number of chunks to allocate in one block.
     nnext_size       of type size_type; the number of chunks to request from the system the first time that object needs to allocate system memory. The default is 32. This parameter may not be 0.
     nrequested_size  requested chunk size.

2. ~pool();

   Destructs the Pool, freeing its list of memory blocks.

pool private member functions

1. void * malloc_need_resize();

   Called if malloc() or ordered_malloc() needs to resize the free list: no memory is left in any of our storages, so a new storage is made and a chunk is allocated from it after the resize.

   Returns: a pointer to the new chunk, or 0 if out-of-memory.

2. void * ordered_malloc_need_resize();

   Called if ordered_malloc() needs to resize the free list: no memory is left in any of our storages, so a new storage is made.

   Returns: a pointer to the new chunk.

pool protected member functions

1. simple_segregated_storage<size_type> & store();

   Returns: pointer to store.

2. const simple_segregated_storage<size_type> & store() const;

   Returns: pointer to store.

3. details::PODptr<size_type> find_POD(void * const chunk) const;

   Finds which PODptr storage memory this chunk was allocated from.

   Returns: the PODptr that holds this chunk.

4. size_type alloc_size() const;

   Calculated size of the memory chunks that will be allocated by this Pool.

   Returns: allocated size.

pool protected static functions

1. static bool is_from(void * const chunk, char * const i,
                       const size_type sizeof_i);

   Returns false if chunk was allocated from some other pool, or may be returned as the result of a future allocation from some other pool. Otherwise, the return value is meaningless.

   Note that this function may not be used to reliably test random pointer values.

   Parameters:
     chunk     the chunk to check for membership of this pool.
     i         memory chunk at i with element size sizeof_i.
     sizeof_i  element size (size of the chunk area of that block, not the total size of that block).

   Returns: true if chunk was allocated or may be returned as the result of a future allocation.

2. static void * & nextof(void * const ptr);

   Returns: Pointer dereferenced. (Provided and used for the sake of code readability :)

pool public member functions

1. bool release_memory();

   pool must be ordered.
   Frees every memory block that doesn't have any allocated chunks.

   Returns: true if at least one memory block was freed.

2. bool purge_memory();

   pool must be ordered. Frees every memory block.

   This function invalidates any pointers previously returned by allocation functions of t.

   Returns: true if at least one memory block was freed.

3. size_type get_next_size() const;

   Number of chunks to request from the system the next time that object needs to allocate system memory. This value should never be 0.

   Returns: next_size.

4. void set_next_size(const size_type nnext_size);

   Set number of chunks to request from the system the next time that object needs to allocate system memory. This value should never be set to 0.

   Returns: nnext_size.

5. size_type get_max_size() const;

   Returns: max_size.

6. void set_max_size(const size_type nmax_size);

   Set max_size.

7. size_type get_requested_size() const;

   Returns: the requested size passed into the constructor. (This value will not change during the lifetime of a Pool object.)

8. void * malloc();

   Allocates a chunk of memory. Searches in the list of memory blocks for a block that has a free chunk, and returns that free chunk if found. Otherwise, creates a new memory block and adds its free list to pool's free list.

   Returns: a free chunk from that block. If a new memory block cannot be allocated, returns 0. Amortized O(1).

9. void * ordered_malloc();

   Same as malloc, only merges the free lists, to preserve order. Amortized O(1).

   Returns: a free chunk from that block. If a new memory block cannot be allocated, returns 0. Amortized O(1).

10. void * ordered_malloc(size_type n);

    Gets the address of n chunks, allocating new memory if not already available. Same as malloc, only allocates enough contiguous chunks to cover n * requested_size bytes. Amortized O(n).

    Returns: the address of the n chunks if allocated ok; 0 if not enough memory for n chunks.

11. void free(void * const chunk);

    Deallocates a chunk of memory. Note that chunk may not be 0. O(1).

    chunk must have been previously returned by t.malloc() or t.ordered_malloc().

12. void ordered_free(void * const chunk);

    Same as above, but is order-preserving.

    Note that chunk may not be 0. O(N) with respect to the size of the free list. chunk must have been previously returned by t.malloc() or t.ordered_malloc().

13. void free(void * const chunks, const size_type n);

    Assumes that chunks actually refers to a block of chunks spanning n * partition_sz bytes; deallocates each chunk in that block. Note that chunks may not be 0. O(n).

    chunks must have been previously returned by t.ordered_malloc(n).

14. void ordered_free(void * const chunks, const size_type n);

    Assumes that chunks actually refers to a block of chunks spanning n * partition_sz bytes; deallocates each chunk in that block. Note that chunks may not be 0. Order-preserving. O(N + n) where N is the size of the free list.
    chunks must have been previously returned by t.malloc() or t.ordered_malloc().

15. bool is_from(void * const chunk) const;

    Returns: true if chunk was allocated from u or may be returned as the result of a future allocation from u. Returns false if chunk was allocated from some other pool or may be returned as the result of a future allocation from some other pool. Otherwise, the return value is meaningless. Note that this function may not be used to reliably test random pointer values.

Header <boost/pool/pool_alloc.hpp>

C++ Standard Library compatible pool-based allocators.

This header provides two template types - pool_allocator and fast_pool_allocator - that can be used for fast and efficient memory allocation in conjunction with the C++ Standard Library containers.

These types both satisfy the Standard Allocator requirements [20.1.5] and the additional requirements in [20.1.5/4], so they can be used with either Standard or user-supplied containers.

In addition, the fast_pool_allocator also provides an additional allocation and an additional deallocation function:

PoolAlloc::allocate()
    Return type: T *. Semantically equivalent to PoolAlloc::allocate(1).

PoolAlloc::deallocate(p)
    Return type: void. Semantically equivalent to PoolAlloc::deallocate(p, 1).

The typedef user_allocator publishes the value of the UserAllocator template parameter.

Notes

If the allocation functions run out of memory, they will throw std::bad_alloc.

The underlying Pool type used by the allocators is accessible through the Singleton Pool Interface. The identifying tag used for pool_allocator is pool_allocator_tag, and the tag used for fast_pool_allocator is fast_pool_allocator_tag. All template parameters of the allocators (including implementation-specific ones) determine the type of the underlying Pool, with the exception of the first parameter T, whose size is used instead.

Since the size of T is used to determine the type of the underlying Pool, each allocator for different types of the same size will share the same underlying pool. The tag class prevents pools from being shared between pool_allocator and fast_pool_allocator. For example, on a system where sizeof(int) == sizeof(void *), pool_allocator<int> and pool_allocator<void *> will both allocate/deallocate from/to the same pool.

If there is only one thread running before main() starts and after main() ends, then both allocators are completely thread-safe.

Compiler and STL Notes

A number of common STL libraries contain bugs in their usage of allocators. Specifically, they pass null pointers to the deallocate function, which is explicitly forbidden by the Standard [20.1.5 Table 32]. PoolAlloc will work around these libraries if it detects them; currently, workarounds are in place for: Borland C++ (Builder and command-line compiler) with default (RogueWave) library, ver. 5 and earlier, and STLport (with any compiler), ver. 4.0 and earlier.
namespace boost {
  struct pool_allocator_tag;
  template <typename T, typename UserAllocator, typename Mutex,
            unsigned NextSize, unsigned MaxSize>
    class pool_allocator;
  template <typename UserAllocator, typename Mutex, unsigned NextSize,
            unsigned MaxSize>
    class pool_allocator<void, UserAllocator, Mutex, NextSize, MaxSize>;
  struct fast_pool_allocator_tag;
  template <typename T, typename UserAllocator, typename Mutex,
            unsigned NextSize, unsigned MaxSize>
    class fast_pool_allocator;
  template <typename UserAllocator, typename Mutex, unsigned NextSize,
            unsigned MaxSize>
    class fast_pool_allocator<void, UserAllocator, Mutex, NextSize, MaxSize>;
}

Struct pool_allocator_tag

boost::pool_allocator_tag

Synopsis

// In header: <boost/pool/pool_alloc.hpp>

struct pool_allocator_tag {
};

Description

Simple tag type used by pool_allocator as an argument to the underlying singleton_pool.

Class template pool_allocator

boost::pool_allocator — A C++ Standard Library conforming allocator, based on an underlying pool.

Synopsis

// In header: <boost/pool/pool_alloc.hpp>

template <typename T, typename UserAllocator, typename Mutex,
          unsigned NextSize, unsigned MaxSize>
class pool_allocator {
public:
  // types
  typedef T value_type;                  // value_type of template parameter T.
  typedef UserAllocator user_allocator;  // allocator that defines the method that the underlying Pool will use to allocate memory from the system.
  typedef Mutex mutex;                   // typedef mutex publishes the value of the template parameter Mutex.
  typedef value_type * pointer;
  typedef const value_type * const_pointer;
  typedef value_type & reference;
  typedef const value_type & const_reference;
  typedef pool<UserAllocator>::size_type size_type;
  typedef pool<UserAllocator>::difference_type difference_type;

  // member classes/structs/unions
  // Nested class rebind allows for transformation from pool_allocator<T> to
  // pool_allocator<U>.
  template <typename U>
  struct rebind {
    // types
    typedef pool_allocator<U, UserAllocator, Mutex, NextSize, MaxSize> other;
  };

  // construct/copy/destruct
  pool_allocator();
  template <typename U>
    pool_allocator(const pool_allocator<U, UserAllocator, Mutex, NextSize, MaxSize> &);

  // public member functions
  bool operator==(const pool_allocator &) const;
  bool operator!=(const pool_allocator &) const;

  // public static functions
  static pointer address(reference);
  static const_pointer address(const_reference);
  static size_type max_size();
  static void construct(const pointer, const value_type &);
  static void destroy(const pointer);
  static pointer allocate(const size_type);
  static pointer allocate(const size_type, const void *);
  static void deallocate(const pointer, const size_type);

  // public data members
  static const unsigned next_size; // next_size publishes the value of the template parameter NextSize.
};

Description

Template parameters for pool_allocator are defined as follows:

T  Type of object to allocate/deallocate.

UserAllocator  Defines the method that the underlying Pool will use to allocate memory from the system.
See User Allocators for\ndetails.\nMutex Allows the user to determine the type of synchronization to be used on the underlying singleton_pool .\nNextSize The v alue of this parameter is passed to the underlying singleton_pool when it is created.\nMaxSize Limit on the maximum size used.\nNote\nThe underlying singleton_pool used by the this allocator constructs a pool instance that is ne ver fr eed. This means\nthat memory allocated by the allocator can be still used after main() has completed, b ut may mean that some memory\nchecking programs will complain about leaks.\npool_allocator pub lic construct/cop y/destruct\n1.pool_allocator ();\nResults in def ault construction of the underlying singleton_pool IFF an instance of this allocator is constructed during global\ninitialization ( required to ensure construction of singleton_pool IFF an instance of this allocator is constructed during global\ninitialization. See tick et #2359 for a complete e xplanation at http://svn.boost.or g/trac/boost/tick et/2359 ) .\n2.template <typename U>\npool_allocator (constpool_allocator <U,UserAllocator ,Mutex,NextSize ,MaxSize >&);\nResults in the def ault construction of the underlying singleton_pool , this is required to ensure construction of singleton_pool\nIFF an instance of this allocator is constructed during global initialization. See tick et #2359 for a complete e xplanation at ht-\ntp://svn.boost.or g/trac/boost/tick et/2359 .\npool_allocator pub lic member functions\n1.booloperator ==(constpool_allocator &)const;\n2.booloperator !=(constpool_allocator &)const;\npool_allocator pub lic static functions\n1.staticpointer address(reference r);\n2.staticconst_pointer address(const_reference s);\n3.staticsize_type max_size ();\n4.staticvoidconstruct (constpointer ptr,constvalue_type & t);\n34Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/5.staticvoiddestroy(constpointer ptr);\n6.staticpointer allocate (constsize_type n);\n7.staticpointer allocate (constsize_type n,constvoid* const);\nallocate n bytes\nParameters: nbytes to allocate.\n8.staticvoiddeallocate (constpointer ptr,constsize_type n);\nDeallocate n bytes from ptr\nParameters: n number of bytes to deallocate.\nptr location to deallocate from.\nStruct template rebind\nboost::pool_allocator::rebind — Nested class rebind allo ws for transformation from pool_allocator<T> to pool_allocator<U>.\nSynopsis\n// In header: < boost/pool/pool_alloc.hpp >\n// Nested class rebind allows for transformation from pool_allocator<T> to\n// pool_allocator<U>.\ntemplate <typename U>\nstructrebind{\n// types\ntypedef pool_allocator <U,UserAllocator ,Mutex,NextSize ,MaxSize >other;\n};\nDescription\nNested class rebind allo ws for transformation from pool_allocator<T> to pool_allocator<U> via the member typedef other .\nSpecializations\n•Class template pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>\nClass template pool_allocator<v oid, UserAllocator , Mute x, NextSiz e, MaxSiz e>\nboost::pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize> — Specialization of pool_allocator<v oid>.\n35Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/Synopsis\n// In header: < boost/pool/pool_alloc.hpp >\ntemplate <typename UserAllocator ,typename Mutex,unsigned NextSize ,\nunsigned MaxSize >\nclasspool_allocator <void,UserAllocator ,Mutex,NextSize ,MaxSize>{\npublic:\n// types\ntypedef void* pointer;\ntypedef constvoid*const_pointer ;\ntypedef void value_type ;\n// 
member classes/structs/unions\n// Nested class rebind allows for transformation from pool_allocator<T> to\n // pool_allocator<U>.\ntemplate <typename U>\nstructrebind{\n// types\ntypedef pool_allocator <U,UserAllocator ,Mutex,NextSize ,MaxSize >other;\n};\n};\nDescription\nSpecialization of pool_allocator for type v oid: required by the standard to mak e this a conforming allocator type.\nStruct template rebind\nboost::pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>::rebind — Nested class rebind allo ws for transformation\nfrom pool_allocator<T> to pool_allocator<U>.\nSynopsis\n// In header: < boost/pool/pool_alloc.hpp >\n// Nested class rebind allows for transformation from pool_allocator<T> to\n// pool_allocator<U>.\ntemplate <typename U>\nstructrebind{\n// types\ntypedef pool_allocator <U,UserAllocator ,Mutex,NextSize ,MaxSize >other;\n};\nDescription\nNested class rebind allo ws for transformation from pool_allocator<T> to pool_allocator<U> via the member typedef other .\nStruct fast_pool_allocator_ta g\nboost::f ast_pool_allocator_tag —Simple tag type used by fast_pool_allocator as a template parameter to the underlying singleton_pool .\n36Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/Synopsis\n// In header: < boost/pool/pool_alloc.hpp >\nstructfast_pool_allocator_tag {\n};\nClass template fast_pool_allocator\nboost::f ast_pool_allocator —A C++ Standard Library conforming allocator geared to wards allocating single chunks.\nSynopsis\n// In header: < boost/pool/pool_alloc.hpp >\ntemplate <typename T,typename UserAllocator ,typename Mutex,\nunsigned NextSize ,unsigned MaxSize >\nclassfast_pool_allocator {\npublic:\n// types\ntypedef T value_type ;\ntypedef UserAllocator user_allocator ;\ntypedef Mutex mutex;\ntypedef value_type * pointer;\ntypedef constvalue_type * const_pointer ;\ntypedef value_type & reference ;\ntypedef constvalue_type & const_reference ;\ntypedef pool<UserAllocator >::size_type size_type ;\ntypedef pool<UserAllocator >::difference_type difference_type ;\n// member classes/structs/unions\n// Nested class rebind allows for transformation from fast_pool_allocator<T>\n // to fast_pool_allocator<U>.\ntemplate <typename U>\nstructrebind{\n// types\ntypedef fast_pool_allocator <U,UserAllocator ,Mutex,NextSize ,MaxSize >other;\n};\n// construct/copy/destruct\nfast_pool_allocator ();\ntemplate <typename U>\nfast_pool_allocator (constfast_pool_allocator <U,UserAllocator ,Mutex,NextSize ,MaxSize >&);\n// public member functions\nvoidconstruct (constpointer,constvalue_type &);\nvoiddestroy(constpointer);\nbooloperator ==(constfast_pool_allocator &)const;\nbooloperator !=(constfast_pool_allocator &)const;\n// public static functions\nstaticpointer address(reference );\nstaticconst_pointer address(const_reference );\nstaticsize_type max_size ();\nstaticpointer allocate (constsize_type );\nstaticpointer allocate (constsize_type ,constvoid*);\nstaticpointer allocate ();\n37Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/staticvoiddeallocate (constpointer,constsize_type );\nstaticvoiddeallocate (constpointer);\n// public data members\nstaticconstunsigned next_size ;\n};\nDescription\nWhile class template pool_allocator is a more general-purpose solution geared to wards efficiently servicing requests for an y\nnumber of contiguous chunks, fast_pool_allocator is also a general-purpose solution, b ut is geared to wards efficiently servicing\nrequests for one chunk at a time; 
it will w ork for contiguous chunks, b ut not as well as pool_allocator .\nIf you are seriously concerned about performance, use fast_pool_allocator when dealing with containers such as std::list ,\nand use pool_allocator when dealing with containers such as std::vector .\nThe template parameters are defined as follo ws:\nTType of object to allocate/deallocate.\nUserAllocator . Defines the method that the underlying Pool will use to allocate memory from the system. See User Allocators for\ndetails.\nMutex Allows the user to determine the type of synchronization to be used on the underlying singleton_pool .\nNextSize The v alue of this parameter is passed to the underlying Pool when it is created.\nMaxSize Limit on the maximum size used.\nNote\nThe underlying singleton_pool used by the this allocator constructs a pool instance that is ne ver fr eed. This means\nthat memory allocated by the allocator can be still used after main() has completed, b ut may mean that some memory\nchecking programs will complain about leaks.\nfast_pool_allocator pub lic construct/cop y/destruct\n1.fast_pool_allocator ();\nEnsures construction of the underlying singleton_pool IFF an instance of this allocator is constructed during global initializ-\nation. See tick et #2359 for a complete e xplanation at http://svn.boost.or g/trac/boost/tick et/2359 .\n2.template <typename U>\nfast_pool_allocator (constfast_pool_allocator <U,UserAllocator ,Mutex,NextSize ,MaxSize\n>&);\nEnsures construction of the underlying singleton_pool IFF an instance of this allocator is constructed during global initializ-\nation. See tick et #2359 for a complete e xplanation at http://svn.boost.or g/trac/boost/tick et/2359 .\nfast_pool_allocator pub lic member functions\n1.voidconstruct (constpointer ptr,constvalue_type & t);\n2.voiddestroy(constpointer ptr);\n38Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/Destro y ptr using destructor .\n3.booloperator ==(constfast_pool_allocator &)const;\n4.booloperator !=(constfast_pool_allocator &)const;\nfast_pool_allocator pub lic static functions\n1.staticpointer address(reference r);\n2.staticconst_pointer address(const_reference s);\n3.staticsize_type max_size ();\n4.staticpointer allocate (constsize_type n);\n5.staticpointer allocate (constsize_type n,constvoid* const);\nAllocate memory .\n6.staticpointer allocate ();\nAllocate memory .\n7.staticvoiddeallocate (constpointer ptr,constsize_type n);\nDeallocate memory .\n8.staticvoiddeallocate (constpointer ptr);\ndeallocate/free\nStruct template rebind\nboost::f ast_pool_allocator::rebind — Nested class rebind allo ws for transformation from f ast_pool_allocator<T> to f ast_pool_alloc-\nator<U>.\n39Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/Synopsis\n// In header: < boost/pool/pool_alloc.hpp >\n// Nested class rebind allows for transformation from fast_pool_allocator<T>\n// to fast_pool_allocator<U>.\ntemplate <typename U>\nstructrebind{\n// types\ntypedef fast_pool_allocator <U,UserAllocator ,Mutex,NextSize ,MaxSize >other;\n};\nDescription\nNested class rebind allo ws for transformation from f ast_pool_allocator<T> to f ast_pool_allocator<U> via the member typedef other .\nSpecializations\n•Class template f ast_pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>\nClass template fast_pool_allocator<v oid, UserAllocator , Mute x, NextSiz e,\nMaxSiz e>\nboost::f ast_pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize> — 
Specialization of fast_pool_allocator<void>.

Synopsis

// In header: < boost/pool/pool_alloc.hpp >

template <typename UserAllocator, typename Mutex, unsigned NextSize,
          unsigned MaxSize>
class fast_pool_allocator<void, UserAllocator, Mutex, NextSize, MaxSize> {
public:
  // types
  typedef void * pointer;
  typedef const void * const_pointer;
  typedef void value_type;

  // member classes/structs/unions
  // Nested class rebind allows for transformation from fast_pool_allocator<T>
  // to fast_pool_allocator<U>.
  template <typename U>
  struct rebind {
    // types
    typedef fast_pool_allocator<U, UserAllocator, Mutex, NextSize, MaxSize> other;
  };
};

Description

Specialization of fast_pool_allocator<void> required to make the allocator standard-conforming.

Struct template rebind

boost::fast_pool_allocator<void, UserAllocator, Mutex, NextSize, MaxSize>::rebind — Nested class rebind allows for transformation from fast_pool_allocator<T> to fast_pool_allocator<U>.

Synopsis

// In header: < boost/pool/pool_alloc.hpp >

// Nested class rebind allows for transformation from fast_pool_allocator<T>
// to fast_pool_allocator<U>.
template <typename U>
struct rebind {
  // types
  typedef fast_pool_allocator<U, UserAllocator, Mutex, NextSize, MaxSize> other;
};

Description

Nested class rebind allows for transformation from fast_pool_allocator<T> to fast_pool_allocator<U> via the member typedef other.

Header < boost/pool/poolfwd.hpp >

Forward declarations of all public (non-implementation) classes.

Header < boost/pool/simple_segregated_storage.hpp >

Simple Segregated Storage.

A simple segregated storage implementation: simple segregated storage is the basic idea behind the Boost Pool library. Simple segregated storage is the simplest, and probably the fastest, memory allocation/deallocation algorithm. It begins by partitioning a memory block into fixed-size chunks. Where the block comes from is not important until implementation time. A Pool is some object that uses Simple Segregated Storage in this fashion.
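As a minimal sketch of the idea (the buffer and the 256-byte partition size below are illustrative choices, not anything required by the library), a client hands the storage object a raw block, states the partition size, and then draws chunks from the resulting free list:

#include <boost/pool/simple_segregated_storage.hpp>
#include <cstddef>
#include <vector>

int main()
{
    // The client supplies the block; the storage object never allocates memory itself.
    std::vector<char> buffer(1024);

    boost::simple_segregated_storage<std::size_t> storage;

    // Partition the block into 256-byte chunks and merge them into the free list.
    storage.add_block(&buffer.front(), buffer.size(), 256);

    void * chunk = storage.malloc();   // take one chunk from the free list
    // ... use the chunk ...
    storage.free(chunk);               // hand it back

    return 0;
}

The storage object never owns the underlying block, so the client remains responsible for the buffer's lifetime and for the alignment and size requirements spelled out in the reference below.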
BOOST_POOL_VALIDATE_INTERNALS

namespace boost {
  template <typename SizeType> class simple_segregated_storage;
}

Class template simple_segregated_storage

boost::simple_segregated_storage — Simple Segregated Storage is the simplest, and probably the fastest, memory allocation/deallocation algorithm. It is responsible for partitioning a memory block into fixed-size chunks: where the block comes from is determined by the client of the class.

Synopsis

// In header: < boost/pool/simple_segregated_storage.hpp >

template <typename SizeType>
class simple_segregated_storage {
public:
  // types
  typedef SizeType size_type;

  // construct/copy/destruct
  simple_segregated_storage(const simple_segregated_storage &);
  simple_segregated_storage();
  simple_segregated_storage & operator=(const simple_segregated_storage &);

  // private static functions
  static void * try_malloc_n(void *&, size_type, size_type);

  // protected member functions
  void * find_prev(void *);

  // protected static functions
  static void *& nextof(void * const);

  // public member functions
  void add_block(void * const, const size_type, const size_type);
  void add_ordered_block(void * const, const size_type, const size_type);
  bool empty() const;
  void * malloc();
  void free(void * const);
  void ordered_free(void * const);
  void * malloc_n(size_type, size_type);
  void free_n(void * const, const size_type, const size_type);
  void ordered_free_n(void * const, const size_type, const size_type);

  // public static functions
  static void * segregate(void *, size_type, size_type, void * = 0);
};

Description

Template class simple_segregated_storage controls access to a free list of memory chunks. Please note that this is a very simple class, with preconditions on almost all its functions. It is intended to be the fastest and smallest possible quick memory allocator - e.g., something to use in embedded systems. This class delegates many difficult preconditions to the user (i.e., alignment issues).

An object of type simple_segregated_storage<SizeType> is empty if its free list is empty. If it is not empty, then it is ordered if its free list is ordered. A free list is ordered if repeated calls to malloc() will result in a constantly-increasing sequence of values, as determined by std::less<void *>. A member function is order-preserving if the free list maintains its order orientation (that is, an ordered free list is still ordered after the member function call).

simple_segregated_storage public construct/copy/destruct

1. simple_segregated_storage(const simple_segregated_storage &);

2. simple_segregated_storage();
Construct empty storage area.
Postconditions: empty()

3. simple_segregated_storage & operator=(const simple_segregated_storage &);

simple_segregated_storage private static functions

1. static void *
try_malloc_n(void *& start, size_type n, size_type partition_size);
Requires: (n > 0), (start != 0), (nextof(start) != 0)
Postconditions: (start != 0)
The function attempts to find n contiguous chunks of size partition_size in the free list, starting at start. If it succeeds, it returns the last chunk in that contiguous sequence, so that the sequence is known by [start, {retval}]. If it fails, it does so either because it is at the end of the free list or because it hits a non-contiguous chunk. In either case, it will return 0, and set start to the last considered chunk. You are at the end of the free list if nextof(start) == 0.
Otherwise, start points to the last chunk in the contiguous\nsequence, and ne xtof(start) points to the first chunk in the ne xt contiguous sequence (assuming an ordered\nfree list).\nsimple_segregated_storage protected member functions\n1.void*find_prev (void* ptr);\nTraverses the free list referred to by \"first\", and returns the iterator pre vious to where \"ptr\" w ould go if it w as in the free list. Returns\n0 if \"ptr\" w ould go at the be ginning of the free list (i.e., before \"first\").\nNote\nNote that this function finds the location pre vious to where ptr w ould go if it w as in the free list. It does not find\nthe entry in the free list before ptr (unless ptr is already in the free list). Specifically , find_pre v(0) will return 0,\nnot the last entry in the free list.\nReturns: location pre vious to where ptr w ould go if it w as in the free list.\nsimple_segregated_storage protected static functions\n1.staticvoid*&nextof(void*const ptr);\nThe return v alue is just *ptr cast to the appropriate type. ptr must not be 0. (F or the sak e of code readability :)\nAs an e xample, let us assume that we w ant to truncate the free list after the first chunk. That is, we w ant to set *first to 0; this\nwill result in a free list with only one entry . The normal w ay to do this is to first cast first to a pointer to a pointer to v oid, and\nthen dereference and assign (*static_cast<v oid **>(first) = 0;). This can be done more easily through the use of this con venience\nfunction (ne xtof(first) = 0;).\nReturns: dereferenced pointer .\nsimple_segregated_storage pub lic member functions\n1.voidadd_block (void*const block,constsize_type nsz,\nconstsize_type npartition_sz );\nAdd block Se gregate this block and mer ge its free list into the free list referred to by \"first\".\nRequires: Same as se gregate.\nPostconditions: !empty()\n43Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/2.voidadd_ordered_block (void*const block,constsize_type nsz,\nconstsize_type npartition_sz );\nadd block (ordered into list) This (slo wer) v ersion of add_block se gregates the block and mer ges its free list into our free list in\nthe proper order .\n3.boolempty()const;\nReturns: true only if simple_se gregated_storage is empty .\n4.void*malloc();\nCreate a chunk.\nRequires: !empty() Increment the \"first\" pointer to point to the ne xt chunk.\n5.voidfree(void*const chunk);\nFree a chunk.\nRequires: chunk w as pre viously returned from a malloc() referring to the same free list.\nPostconditions: !empty()\n6.voidordered_free (void*const chunk);\nThis (slo wer) implementation of 'free' places the memory back in the list in its proper order .\nRequires: chunk w as pre viously returned from a malloc() referring to the same free list\nPostconditions: !empty().\n7.void*malloc_n (size_type n,size_type partition_size );\nAttempts to find a contiguous sequence of n partition_sz-sized chunks. If found, remo ves them all from the free list and returns\na pointer to the first. If not found, returns 0. It is strongly recommended (b ut not required) that the free list be ordered, as this al-\ngorithm will f ail to find a contiguous sequence unless it is contiguous in the free list as well. Order -preserving. 
O(N) with respect\nto the size of the free list.\n8.voidfree_n(void*const chunks,constsize_type n,\nconstsize_type partition_size );\nNote\nIf you're allocating/deallocating n a lot, you should be using an ordered pool.\nRequires: chunks w as pre viously allocated from *this with the same v alues for n and partition_size.\nPostconditions: !empty()\n9.voidordered_free_n (void*const chunks,constsize_type n,\nconstsize_type partition_size );\nFree n chunks from order list.\nRequires: chunks w as pre viously allocated from *this with the same v alues for n and partition_size.\nn should not be zero (n == 0 has no ef fect).\n44Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/simple_segregated_storage pub lic static functions\n1.staticvoid*\nsegregate (void* block,size_type nsz,size_type npartition_sz ,\nvoid* end =0);\nSegregate block into chunks.\nRequires: npartition_sz >= sizeof(v oid *)\nnpartition_sz = sizeof(v oid *) * i, for some inte ger i\nnsz >= npartition_sz\nBlock is properly aligned for an array of object of size npartition_sz and array of v oid *. The requirements abo ve\nguarantee that an y pointer to a chunk (which is a pointer to an element in an array of npartition_sz) may be cast\nto void **.\nMacr o BOOST_POOL_V ALID ATE_INTERNALS\nBOOST_POOL_V ALID ATE_INTERN ALS\nSynopsis\n// In header: < boost/pool/simple_segregated_storage.hpp >\nBOOST_POOL_VALIDATE_INTERNALS\nHeader < boost/pool/singleton_pool.hpp >\nThe singleton_pool class allo ws other pool interf aces for types of the same size to share the same underlying pool.\nHeader singleton_pool.hpp pro vides a template class singleton_pool , which pro vides access to a pool as a singleton object.\nnamespace boost{\ntemplate <typename Tag,unsigned RequestedSize ,typename UserAllocator ,\ntypename Mutex,unsigned NextSize ,unsigned MaxSize >\nclasssingleton_pool ;\n}\nClass template singleton_pool\nboost::singleton_pool\n45Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/Synopsis\n// In header: < boost/pool/singleton_pool.hpp >\ntemplate <typename Tag,unsigned RequestedSize ,typename UserAllocator ,\ntypename Mutex,unsigned NextSize ,unsigned MaxSize >\nclasssingleton_pool {\npublic:\n// types\ntypedef Tag tag;\ntypedef Mutex mutex; // The type of mutex used to ↵\nsynchonise access to this pool (default details::pool::default_mutex ). \ntypedef UserAllocator user_allocator ;// The user-allocator used ↵\nby this pool, default = default_user_allocator_new_delete . \ntypedef pool<UserAllocator >::size_type size_type ; // size_type of user allocator. \ntypedef pool<UserAllocator >::difference_type difference_type ;// difference_type of user ↵\nallocator. \n// member classes/structs/unions\nstructobject_creator {\n// construct/copy/destruct\nobject_creator ();\n// public member functions\nvoiddo_nothing ()const;\n};\n// construct/copy/destruct\nsingleton_pool ();\n// public static functions\nstaticvoid*malloc();\nstaticvoid*ordered_malloc ();\nstaticvoid*ordered_malloc (constsize_type );\nstaticboolis_from(void*const);\nstaticvoidfree(void*const);\nstaticvoidordered_free (void*const);\nstaticvoidfree(void*const,constsize_type );\nstaticvoidordered_free (void*const,constsize_type );\nstaticboolrelease_memory ();\nstaticboolpurge_memory ();\n// private static functions\nstaticpool_type &get_pool ();\n// public data members\nstaticconstunsigned requested_size ;// The size of each chunk allocated by this pool. 
\nstaticconstunsigned next_size ;// The number of chunks to allocate on the first allocation. \nstaticpool<UserAllocator >p;// For exposition only! \n};\nDescription\nThe singleton_pool class allo ws other pool interf aces for types of the same size to share the same pool. Template parameters are as\nfollows:\nTag User -specified type to uniquely identify this pool: allo ws dif ferent unbounded sets of singleton pools to e xist.\nRequestedSize The size of each chunk returned by member function malloc() .\nUserAllocator User allocator , default = default_user_allocator_ne w_delete .\n46Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/Mutex This class is the type of mute x to use to protect simultaneous access to the underlying Pool. Can be an y Boost.Thread Mute x\ntype or boost::details::pool::null_mutex . It is e xposed so that users may declare some singleton pools normally (i.e., with\nsynchronization), b ut some singleton pools without synchronization (by specifying boost::details::pool::null_mutex ) for\nefficienc y reasons. The member typedef mutex exposes the v alue of this template parameter . The def ault for this parameter is\nboost::details::pool::def ault_mute x which is a synon ym for either boost::details::pool::null_mutex (when threading support\nis turned of f in the compiler (so BOOST_HAS_THREADS is not set), or threading support has ben e xplicitly disabled with\nBOOST_DISABLE_THREADS (Boost-wide disabling of threads) or BOOST_POOL_NO_MT (this library only)) or for\nboost::mutex (when threading support is enabled in the compiler).\nNextSize The v alue of this parameter is passed to the underlying Pool when it is created and specifies the number of chunks to allocate\nin the first allocation request (def aults to 32). The member typedef static const value next_size exposes the v alue of this\ntemplate parameter .\nMaxSize The v alue of this parameter is passed to the underlying Pool when it is created and specifies the maximum number of chunks\nto allocate in an y single allocation request (def aults to 0).\nNotes:\nThe underlying pool p referenced by the static functions in singleton_pool is actually declared in a w ay that is:\n1 Thread-safe if there is only one thread running before main() be gins and after main() ends -- all of the static functions of\nsingleton_pool synchronize their access to p.\n2 Guaranteed to be constructed before it is used -- thus, the simple static object in the synopsis abo ve would actually be an incorrect\nimplementation. The actual implementation to guarantee this is considerably more complicated.\n3 Note too that a dif ferent underlying pool p e xists for each dif ferent set of template parameters, including implementation-specific\nones.\n4 The underlying pool is constructed \"as if\" by:\npool<UserAllocator> p(RequestedSize, Ne xtSize, MaxSize);\nNote\nThe underlying pool constructed by the singleton is ne ver fr eed. This means that memory allocated by a\nsingleton_pool can be still used after main() has completed, b ut may mean that some memory checking programs\nwill complain about leaks from singleton_pool .\nsingleton_pool pub lic types\n1.typedef Tagtag;\nThe Tag template parameter uniquely identifies this pool and allo ws dif ferent unbounded sets of singleton pools to e xist. F or ex-\nample, the pool allocators use tw o tag classes to ensure that the tw o dif ferent allocator types ne ver share the same underlying\nsingleton pool. 
singleton_pool public construct/copy/destruct

1. singleton_pool();

singleton_pool public static functions

1. static void * malloc();
Equivalent to SingletonPool::p.malloc(); synchronized.

2. static void * ordered_malloc();
Equivalent to SingletonPool::p.ordered_malloc(); synchronized.

3. static void * ordered_malloc(const size_type n);
Equivalent to SingletonPool::p.ordered_malloc(n); synchronized.

4. static bool is_from(void * const ptr);
Equivalent to SingletonPool::p.is_from(chunk); synchronized.
Returns: true if chunk is from the underlying pool.

5. static void free(void * const ptr);
Equivalent to SingletonPool::p.free(chunk); synchronized.

6. static void ordered_free(void * const ptr);
Equivalent to SingletonPool::p.ordered_free(chunk); synchronized.

7. static void free(void * const ptr, const size_type n);
Equivalent to SingletonPool::p.free(chunk, n); synchronized.

8. static void ordered_free(void * const ptr, const size_type n);
Equivalent to SingletonPool::p.ordered_free(chunk, n); synchronized.

9. static bool release_memory();
Equivalent to SingletonPool::p.release_memory(); synchronized.

10. static bool purge_memory();
Equivalent to SingletonPool::p.purge_memory(); synchronized.

singleton_pool private static functions

1. static pool_type & get_pool();

Struct object_creator

boost::singleton_pool::object_creator

Synopsis

// In header: < boost/pool/singleton_pool.hpp >

struct object_creator {
  // construct/copy/destruct
  object_creator();

  // public member functions
  void do_nothing() const;
};

Description

object_creator public construct/copy/destruct

1. object_creator();

object_creator public member functions

1. void do_nothing() const;

Appendices

Appendix A: History

Version 2.0.0, January 11, 2011

Documentation and testing revision.

Features:

• Fix issues 1252, 4960, 5526, 5700, 2696.

• Documentation converted, rewritten and revised by Paul A. Bristow using Quickbook and Doxygen, for html and pdf, based on Stephen Cleary's html version, revised 05 December, 2006. This used Opera 11.0 and html_to_quickbook.css as a special display format. On the Opera full taskbar (choose enable full taskbar): View, Style, Manage modes, Display. Choose add \boost-sandbox\boost_docs\trunk\doc\style\html\conversion\html_to_quickbook.css to My Style Sheet. Html pages are then displayed as Quickbook and can be copied and pasted into quickbook files using your favored text editor for Quickbook.

Version 1.0.0, January 1, 2000

First release.

Appendix B: FAQ

Why should I use Pool?

Using Pools gives you more control over how memory is used in your program. For example, you could have a situation where you want to allocate a bunch of small objects at one point, and then reach a point in your program where none of them are needed any more. Using pool interfaces, you can choose to run their destructors or just drop them off into oblivion; the pool interface will guarantee that there are no system memory leaks.

When should I use Pool?

Pools are generally used when there is a lot of allocation and deallocation of small objects.
Another common usage is the situation above, where many objects may be dropped out of memory.

In general, use Pools when you need a more efficient way to do unusual memory control.

Appendix C: Acknowledgements

Many, many thanks to the Boost peers, notably Jeff Garland, Beman Dawes, Ed Brey, Gary Powell, Peter Dimov, and Jens Maurer for providing helpful suggestions!

Appendix D: Tests

See folder boost/libs/pool/test/.

Appendix E: Tickets

Report and view bugs and features by adding a ticket at Boost.Trac. Existing open tickets for this library alone can be viewed here. Existing tickets for this library - including closed ones - can be viewed here.

Appendix F: Other Implementations

Pool allocators are found in many programming languages, and in many variations. The beginnings of many implementations may be found in common programming literature; some of these are given below. Note that none of these are complete implementations of a Pool; most of them leave some aspects of a Pool as a user exercise. However, in each case, even though some aspects are missing, these examples use the same underlying concept of Simple Segregated Storage described in this document.

1. The C++ Programming Language, 3rd ed., by Bjarne Stroustrup, Section 19.4.2. Missing aspects:

• Not portable.

• Cannot handle allocations of arbitrary numbers of objects (this was left as an exercise).

• Not thread-safe.

• Suffers from the static initialization problem.

2. MicroC/OS-II: The Real-Time Kernel, by Jean J. Labrosse, Chapter 7 and Appendix B.04.

• An example of the Simple Segregated Storage scheme at work in the internals of an actual OS.

• Missing aspects:

• Not portable (though this is OK, since it is part of its own OS).

• Cannot handle allocations of arbitrary numbers of blocks (which is also OK, since this feature is not needed).

• Requires non-intuitive user code to create and destroy the Pool.

3. Efficient C++: Performance Programming Techniques, by Dov Bulka and David Mayhew, Chapters 6 and 7.

• This is a good example of iteratively developing a Pool solution.

• However, their premise (that the system-supplied allocation mechanism is hopelessly inefficient) is flawed on every system I've tested on.

• Run their timings on your system before you accept their conclusions.

• Missing aspect: Requires non-intuitive user code to create and destroy the Pool.

4. Advanced C++: Programming Styles and Idioms, by James O. Coplien, Section 3.6.

• Has examples of both static and dynamic pooling, but missing aspects:

• Not thread-safe.

• The static pooling example is not portable.

Appendix G: References

1. Doug Lea, A Memory Allocator. See http://gee.cs.oswego.edu/dl/html/malloc.html

2. Paul R. Wilson, Mark S. Johnstone, Michael Neely, and David Boles, Dynamic Storage Allocation: A Survey and Critical Review, in International Workshop on Memory Management, September 1995, pg. 28, 36.
See ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps

Appendix H: Future plans

Another pool interface will be written: a base class for per-class pool allocation.

This "pool_base" interface will be Singleton Usage with Exceptions, and built on the singleton_pool interface.
- Ho w we guarantee alignment portably ., 13\npool, 4\nStruct def ault_user_allocator_ne w_delete, 25\nnextof\nClass template object_pool, 22, 23\nClass template pool, 27, 29\nClass template simple_se gregated_storage, 42, 43\nO\nobjects\nAppendix B: F AQ, 50\nAppendix F: Other Implementations, 51\nBasic ideas behind pooling, 10\nBoost Pool Interf aces - What interf aces are pro vided and when to use each one., 3\nClass template f ast_pool_allocator , 38\nClass template object_pool, 22, 23, 24\n70Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/Class template pool, 27, 28, 29, 30\nClass template pool_allocator , 33\nClass template simple_se gregated_storage, 42, 45\nClass template singleton_pool, 46\nDocumentation Naming and F ormatting Con ventions, 2\nGuaranteeing Alignment - Ho w we guarantee alignment portably ., 13, 14\nHeader < boost/pool/object_pool.hpp >, 22\nHeader < boost/pool/simple_se gregated_storage.hpp >, 41\nHeader < boost/pool/singleton_pool.hpp >, 45\nHow Contiguous Chunks are Handled, 15\nIntroduction, 2\nObject_pool, 6\npool, 4\npool_allocator , 8\nSegregation, 16\nSimple Se gregated Storage, 12\nSimple Se gregated Storage (Not for the f aint of heart - Embedded programmers only!), 16\nSingleton_pool, 7\nStruct def ault_user_allocator_malloc_free, 26\nStruct def ault_user_allocator_ne w_delete, 25\nStruct object_creator , 48, 49\nThe UserAllocator Concept, 21\nUserAllocator Requirements, 21\nobject_creator\nClass template singleton_pool, 46\nStruct object_creator , 49\nObject_pool\nallocation, 6\nautomatic, 6\nchunk, 6\nconstruct, 6\ndestro y, 6\ndifference_type, 6\nelement_type, 6\nfree, 6\ninterf ace, 6\nis_from, 6\nmalloc, 6\nmemory , 6\nobjects, 6\nobject_pool, 6\nsize, 6\nsize_type, 6\ntemplate, 6\nuser_allocator , 6\nobject_pool\nClass template object_pool, 22, 23\nHeader < boost/pool/object_pool.hpp >, 22\nObject_pool, 6\nStruct def ault_user_allocator_malloc_free, 26\nordered\nAllocation and Deallocation, 16\nBasic ideas behind pooling, 10\nBoost Pool Interf aces - What interf aces are pro vided and when to use each one., 3\nClass template pool, 27, 28, 29, 30, 31\nClass template simple_se gregated_storage, 42, 43, 44\nClass template singleton_pool, 46, 48\nConstructors, Destructors, and State, 16\n71Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/How Contiguous Chunks are Handled, 15\npool, 4\npool_allocator , 8\nSegregation, 16\nSimple Se gregated Storage (Not for the f aint of heart - Embedded programmers only!), 16\nSingleton_pool, 7\nordered_free\nClass template pool, 27, 30, 31\nClass template simple_se gregated_storage, 42, 44\nClass template singleton_pool, 46, 48\npool, 4\nSimple Se gregated Storage (Not for the f aint of heart - Embedded programmers only!), 16\nSingleton_pool, 7\nordered_free_n\nClass template simple_se gregated_storage, 42, 44\nSimple Se gregated Storage (Not for the f aint of heart - Embedded programmers only!), 16\nordered_malloc\nClass template pool, 27, 30\nClass template singleton_pool, 46, 48\npool, 4\nSingleton_pool, 7\nordered_malloc_need_resize\nClass template pool, 27, 28\nother\nClass template f ast_pool_allocator , 37\nClass template f ast_pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>, 40\nClass template pool_allocator , 33\nClass template pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>, 36\npool_allocator , 8\nStruct template rebind, 35, 36, 40, 41\novervie w\nGuaranteeing Alignment - Ho w we 
guarantee alignment portably ., 13\nIntroduction and Ov ervie w, 2\nP\npadding\nGuaranteeing Alignment - Ho w we guarantee alignment portably ., 13\nHow Contiguous Chunks are Handled, 15\npointer\nClass template f ast_pool_allocator , 37\nClass template f ast_pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>, 40\nClass template pool_allocator , 33\nClass template pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>, 36\npool_allocator , 8\npool\nalignment, 4\nblock, 4\nchunk, 4\nClass template object_pool, 22\nClass template pool, 27\ndefault_user_allocator_malloc_free, 4\ndefault_user_allocator_ne w_delete, 4\ndifference_type, 4\nfree, 4\ninterf ace, 4\nis_from, 4\nmalloc, 4\n72Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/memory , 4\nnew, 4\nobjects, 4\nordered, 4\nordered_free, 4\nordered_malloc, 4\npool, 4\npurge_memory , 4\nrelease_memory , 4\nsegregated, 4\nsize, 4\nsize_type, 4\ntemplate, 4\nuser_allocator , 4\nPool Interf aces\ninterf ace, 4\npool_allocator\naddress, 8\nallocate, 8\nallocation, 8\nClass template pool_allocator , 33\nClass template pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>, 36\nconstruct, 8\nconst_pointer , 8\nconst_reference, 8\ndeallocate, 8\ndestro y, 8\ndifference_type, 8\nfast_pool_allocator , 8\nfast_pool_allocator_tag, 8\ninterf ace, 8\nmax_size, 8\nmemory , 8\nobjects, 8\nordered, 8\nother , 8\npointer , 8\npool_allocator , 8\npool_allocator_tag, 8\nrebind, 8\nreference, 8\nsingleton, 8\nsingleton_pool, 8\nsize, 8\nsize_type, 8\nStruct template rebind, 35, 36\ntemplate, 8\nuser_allocator , 8\nvalue_type, 8\npool_allocator_tag\npool_allocator , 8\nStruct pool_allocator_tag, 32\nportable\nAppendix F: Other Implementations, 51\nGuaranteeing Alignment - Ho w we guarantee alignment portably ., 13\npurge_memory\nClass template pool, 27, 29\nClass template singleton_pool, 46, 48\n73Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/pool, 4\nSingleton_pool, 7\nR\nrebind\nClass template f ast_pool_allocator , 37\nClass template f ast_pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>, 40\nClass template pool_allocator , 33\nClass template pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>, 36\npool_allocator , 8\nStruct template rebind, 35, 36, 40, 41\nreference\nClass template f ast_pool_allocator , 37\nClass template pool_allocator , 33\npool_allocator , 8\nrelease_memory\nClass template pool, 27, 29\nClass template singleton_pool, 46, 48\npool, 4\nSingleton_pool, 7\nS\nsegregate\nClass template simple_se gregated_storage, 42, 45\nSimple Se gregated Storage (Not for the f aint of heart - Embedded programmers only!), 16\nsegregated\nAppendix F: Other Implementations, 51\nAppendix G: References, 51\nBasic ideas behind pooling, 10\nClass template object_pool, 23\nClass template pool, 27, 28, 29\nClass template simple_se gregated_storage, 41, 42, 43, 44, 45\nGuaranteeing Alignment - Ho w we guarantee alignment portably ., 13\nHeader < boost/pool/pool.hpp >, 25\nHeader < boost/pool/simple_se gregated_storage.hpp >, 41\nIntroduction, 2\nMacro BOOST_POOL_V ALID ATE_INTERN ALS, 45\npool, 4\nSimple Se gregated Storage, 12\nSimple Se gregated Storage (Not for the f aint of heart - Embedded programmers only!), 16\nSymbol Table, 16\nSegregation\nblock, 16\nchunk, 16\nmemory , 16\nobjects, 16\nordered, 16\nsize, 16\nset_max_size\nClass template pool, 27, 30\nset_ne xt_size\nClass template object_pool, 22, 
25\nClass template pool, 27, 29\nSimple Se gregated Storage\nallocation, 12\nblock, 12\nchunk, 12\n74Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/deallocation, 12\nmemory , 12\nobjects, 12\nsegregated, 12\nsize, 12\nSimple Se gregated Storage (Not for the f aint of heart - Embedded programmers only!)\nadd_block, 16\nadd_ordered_block, 16\nalignment, 16\nblock, 16\nchunk, 16\nfree, 16\nfree_n, 16\ninterf ace, 16\nmalloc, 16\nmalloc_n, 16\nmemory , 16\nobjects, 16\nordered, 16\nordered_free, 16\nordered_free_n, 16\nsegregate, 16\nsegregated, 16\nsimple_se gregated_storage, 16\nsize, 16\nsize_type, 16\ntemplate, 16\nsimple_se gregated_storage\nClass template pool, 27\nClass template simple_se gregated_storage, 42\nSimple Se gregated Storage (Not for the f aint of heart - Embedded programmers only!), 16\nsingleton\nAppendix H: Future plans, 52\nBoost Pool Interf aces - What interf aces are pro vided and when to use each one., 3\nClass template f ast_pool_allocator , 38\nClass template pool_allocator , 33, 34\nClass template singleton_pool, 45, 46, 47, 48\nHeader < boost/pool/pool_alloc.hpp >, 31\nHeader < boost/pool/singleton_pool.hpp >, 45\npool_allocator , 8\nSingleton_pool, 7\nStruct f ast_pool_allocator_tag, 36\nStruct object_creator , 48, 49\nStruct pool_allocator_tag, 32\nSingleton_pool\nchunk, 7\ndifference_type, 7\nfree, 7\ninterf ace, 7\nis_from, 7\nmain, 7\nmalloc, 7\nmemory , 7\nobjects, 7\nordered, 7\nordered_free, 7\nordered_malloc, 7\npurge_memory , 7\n75Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/release_memory , 7\nsingleton, 7\nsingleton_pool, 7\nsize, 7\nsize_type, 7\ntag, 7\ntemplate, 7\nuser_allocator , 7\nsingleton_pool\nAppendix H: Future plans, 52\nClass template f ast_pool_allocator , 38\nClass template pool_allocator , 33, 34\nClass template singleton_pool, 45, 46, 47, 48\nHeader < boost/pool/singleton_pool.hpp >, 45\npool_allocator , 8\nSingleton_pool, 7\nStruct f ast_pool_allocator_tag, 36\nStruct object_creator , 48, 49\nStruct pool_allocator_tag, 32\nsize\nAllocation and Deallocation, 16\nBasic ideas behind pooling, 10\nClass template f ast_pool_allocator , 37, 38, 39\nClass template object_pool, 22, 23, 24, 25\nClass template pool, 27, 28, 29, 30, 31\nClass template pool_allocator , 33, 34, 35\nClass template simple_se gregated_storage, 41, 42, 43, 44, 45\nClass template singleton_pool, 46, 48\nGuaranteeing Alignment - Ho w we guarantee alignment portably ., 13, 14\nHeader < boost/pool/pool_alloc.hpp >, 31\nHeader < boost/pool/simple_se gregated_storage.hpp >, 41\nHeader < boost/pool/singleton_pool.hpp >, 45\nHow Contiguous Chunks are Handled, 15\nObject_pool, 6\npool, 4\npool_allocator , 8\nSegregation, 16\nSimple Se gregated Storage, 12\nSimple Se gregated Storage (Not for the f aint of heart - Embedded programmers only!), 16\nSingleton_pool, 7\nStruct def ault_user_allocator_malloc_free, 26\nStruct def ault_user_allocator_ne w_delete, 25\nSymbol Table, 16\nTemplate P arameters, 16\nThe UserAllocator Concept, 21\nTypedefs, 16\nUserAllocator Requirements, 21\nsizeof\nGuaranteeing Alignment - Ho w we guarantee alignment portably ., 13\nHeader < boost/pool/pool_alloc.hpp >, 31\nHow Contiguous Chunks are Handled, 15\nsize_type\nClass template f ast_pool_allocator , 37\nClass template object_pool, 22\nClass template pool, 27\nClass template pool_allocator , 33\nClass template simple_se gregated_storage, 42\nClass template singleton_pool, 46\n76Boost.Pool\nXML to PDF by 
RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/Object_pool, 6\npool, 4\npool_allocator , 8\nSimple Se gregated Storage (Not for the f aint of heart - Embedded programmers only!), 16\nSingleton_pool, 7\nStruct def ault_user_allocator_malloc_free, 26\nStruct def ault_user_allocator_ne w_delete, 25\nStruct def ault_user_allocator_malloc_free\nblock, 26\ndefault_user_allocator_malloc_free, 26\ndifference_type, 26\nfree, 26\nheaders, 26\nmalloc, 26\nobjects, 26\nobject_pool, 26\nsize, 26\nsize_type, 26\ntemplate, 26\nStruct def ault_user_allocator_ne w_delete\nblock, 26\ndefault_user_allocator_ne w_delete, 25\ndifference_type, 25\nfree, 25, 26\nheaders, 25\nmalloc, 25, 26\nmemory , 25\nnew, 25\nobjects, 25\nsize, 25\nsize_type, 25\ntemplate, 25\nStruct f ast_pool_allocator_tag\nfast_pool_allocator_tag, 37\nheaders, 37\nsingleton, 36\nsingleton_pool, 36\ntemplate, 36\nStruct object_creator\nheaders, 49\nobjects, 48, 49\nobject_creator , 49\nsingleton, 48, 49\nsingleton_pool, 48, 49\nStruct pool_allocator_tag\nheaders, 32\npool_allocator_tag, 32\nsingleton, 32\nsingleton_pool, 32\nStruct template rebind\nfast_pool_allocator , 40, 41\nheaders, 35, 36, 40, 41\nother , 35, 36, 40, 41\npool_allocator , 35, 36\nrebind, 35, 36, 40, 41\ntemplate, 35, 36, 39, 40, 41\nSymbol Table\nblock, 16\n77Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/chunk, 16\nsegregated, 16\nsize, 16\nT\ntag\nClass template singleton_pool, 46, 47\nSingleton_pool, 7\ntemplate\nClass template f ast_pool_allocator , 37, 38, 40\nClass template f ast_pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>, 40\nClass template object_pool, 22, 24\nClass template pool, 26, 27\nClass template pool_allocator , 32, 33, 34, 35\nClass template pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>, 35, 36\nClass template simple_se gregated_storage, 41, 42\nClass template singleton_pool, 45, 46, 47\nDocumentation Naming and F ormatting Con ventions, 2\nHeader < boost/pool/object_pool.hpp >, 22\nHeader < boost/pool/pool.hpp >, 25\nHeader < boost/pool/pool_alloc.hpp >, 31\nHeader < boost/pool/simple_se gregated_storage.hpp >, 41\nHeader < boost/pool/singleton_pool.hpp >, 45\nObject_pool, 6\npool, 4\npool_allocator , 8\nSimple Se gregated Storage (Not for the f aint of heart - Embedded programmers only!), 16\nSingleton_pool, 7\nStruct def ault_user_allocator_malloc_free, 26\nStruct def ault_user_allocator_ne w_delete, 25\nStruct f ast_pool_allocator_tag, 36\nStruct template rebind, 35, 36, 39, 40, 41\nTemplate P arameters, 16\nThe UserAllocator Concept, 21\nTemplate P arameters\nsize, 16\ntemplate, 16\ntry_malloc_n\nClass template simple_se gregated_storage, 42, 43\nTypedefs\nsize, 16\nU\nUserAllocator Concept\nblock, 21\nchunk, 21\nconcepts, 21\ninterf ace, 21\nmemory , 21\nobjects, 21\nsize, 21\ntemplate, 21\nUserAllocator Requirements\nblock, 21\nmalloc, 21\nmemory , 21\nobjects, 21\n78Boost.Pool\nXML to PDF by RenderX XEP XSL-FO F ormatter , visit us at http://www .renderx.com/size, 21\nuser_allocator\nClass template f ast_pool_allocator , 37\nClass template object_pool, 22\nClass template pool, 27\nClass template pool_allocator , 33\nClass template singleton_pool, 46\nObject_pool, 6\npool, 4\npool_allocator , 8\nSingleton_pool, 7\nV\nvalue_type\nClass template f ast_pool_allocator , 37\nClass template f ast_pool_allocator<v oid, UserAllocator , Mute x, Ne xtSize, MaxSize>, 40\nClass template pool_allocator , 33\nClass template pool_allocator<v oid, 
UserAllocator, Mutex, NextSize, MaxSize>, 36\npool_allocator, 8" } ]
{ "category": "App Definition and Development", "file_name": "pool.pdf", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "The Next Evolution in \nCI/CD Technology\nA WHITEPAPER PUBLISHED BY THE\nCD FOUNDATION’S CDEVENTS PROJECT TEAM\nUpdated May 2023CDEVENTS - THE NEXT EVOLUTION IN CI/CD TECHNOLOGY - https://cdevents.dev\nThis whitepaper describes the newest technology in CI/CD – CDEvents. It is intended for DevOps \nEngineers, Project Managers/Directors, CTOs, and Cloud Architects who are interested in evolving \ntheir DevOps pipelines to become more scalable, robust, measurable, and visible, using a \ntechnology-agnostic solution to provide interoperability.\nThe technology described is still in its very early stages, and the concepts in it could change quite \nsubstantially as we go along, so please join us to make sure this technology evolves into something \nwe could all benefit from.\nWhat is CDEvents \nToday’s CI/CD systems often comprise of services that do not talk to each other in a standardized \nway. Such services include pipeline orchestrators, build/test tools, deployment tools, metrics \ncollectors and visualizers. This leads to problems related to interoperability, notification of failure \nissues, and poor automation.\nThe Continuous Delivery Foundation’s CDEvents project has been created to solve the \ninteroperability problem. The mission of the CDEvents project is to define standards for an \nevent-based CI/CD pipeline to support CI/CD systems with a decoupled architecture.\nThe CDEvents project focuses on both event-based CI/CD standards and best practices for \nevent-driven CI/CD systems. The CDEvents project aims to define the common language of the \nCI/CD ecosystem events, so it provides a vocabulary, a specification as well as SDKs. CDEVENTS - THE NEXT EVOLUTION IN CI/CD TECHNOLOGY - https://cdevents.devCDEvents Benefits\nCDEvents delivers:\n• Easy to scale pipelines\n• Increased automation between workflows\n• A simple way to enhance or modify workflows\n• Standardized notifications for metrics collectors and visualizers\nA decoupled CI/CD architecture is easy to scale and makes the CI/CD pipelines more resilient to \nfailures, which is critical as the end-to-end software production and delivery pipelines grow more \nand more complex, not least in a microservices architecture with thousands of independent \npipelines. Using CDEvents also increases automation when connecting workflows from different \nsystems to each other, and as a result, empowers tracing/visualizing/auditing of the connected \nworkflows through these events. Additionally, CDEvents make it super easy to switch between \ndifferent CI/CD tooling to enhance or modify your workflows quickly.\nThe Goal of the CDEvents Project \nThe CDEvents project’s mission is to standardize an event protocol specification that caters to \ntechnology-agnostic machine-to-machine communication in CI/CD systems. This specification will \nbe published, reviewed, and agreed upon between relevant Linux Foundation projects/members. \nThe CDEvents project aims to provide reference implementations such as event consumers/\nlisteners and event producers/senders on top of for example CloudEvents.\nHistory\nBefore we dive in further, a bit of history. The Continuous Delivery Foundation’s Interoperability \nSpecial Interest Group(SIG) was created in early 2020 to discuss and research interoperability in the \nCD space. 
History
Before we dive in further, a bit of history. The Continuous Delivery Foundation's Interoperability Special Interest Group (SIG) was created in early 2020 to discuss and research interoperability in the CD space. One of the workstreams of the SIG was focused on interoperability through 'events.' In early 2021 the workstream was transformed into a SIG of its own, and towards the end of that year, the CDEvents project was created. The project was proposed as a CDF incubating project and was accepted by the CDF Technology Oversight Committee in December 2021.
CDEvents Specification
To define CDEvents the contributors understood the need to define a common standard and vocabulary. Following is a description of the specification.
CDEvents Topics
Events provide interoperability between CI/CD tooling through topics. Part of the mission of the CDEvents project is to determine the best usage and process for CDEvents and to define a common standard. The CDEvents project team is working to define:
• When are events suited for triggers, audits, monitoring, and management?
• Common guidelines for at-least-once, at-most-once, exactly-once, and ordering logic
• When to apply particular strategies
• Events to be used by tools for orchestration/workflows
• Pipeline to pipeline communication via events
• Tracing/auditing/graphing/visualizing of the entire process, e.g., through events showing what has occurred
• CDEvents metrics, e.g., how many versions have been deployed, how many PRs (Pull Requests) have been raised, and how many events have been issued?
• How are events related and how are they ordered (links vs trace context)?
CDEvents Vocabulary
Most CI/CD platforms define their own abstractions, data model, and nomenclature. The Interoperability SIG has already been collecting this level of data from various platforms. Many labels are shared across platforms, but sometimes the same label bears different meanings in different projects. To achieve interoperability through events, a nomenclature with shared semantics across platforms was seen to be essential. This nomenclature has its roots in the "Rosetta Stone" for CI/CD, first initiated through the Interoperability SIG in CDF. The CDEvents project will continuously revise its vocabulary based on the evolution of that document and related publications until the first official release of the CDEvents protocol specification is published.
To achieve shared semantics, the CDEvents project first created a vocabulary describing six 'buckets' to group the different but common CDEvents together.
• Core Events: includes core events related to core activities and orchestration that need to exist to be able to deterministically and continuously deliver software to users.
• Source Code Version Control Events: events emitted by changes in source code or by the creation, modification, or deletion of new repositories that hold source code.
• Continuous Integration Events: includes events related to building, testing, packaging, and releasing software artifacts, usually binaries.
• Continuous Deployment Events: include events related to environments where the artifacts produced by the integration pipelines actually run. These are services running in a specific environment (dev, QA, production), or embedded software running on a specific hardware platform.
• Continuous Operations Events: include events related to the operation of services deployed in target environments, tracking of incidents, and their resolution. Incidents, and their resolution, can be detected by a number of different actors, like the end-user, a quality gate, a monitoring system, an SRE through a ticketing system, or even the service itself.
• CloudEvents Binding for CDEvents: the CloudEvents Binding for CDEvents defines how CDEvents are mapped to CloudEvents headers and body.
Within each 'phase,' a few abstractions have been defined. For instance, the 'Core Events' phase defines "Task Runs" and "Pipeline Runs". The 'Continuous Integration Events' phase defines "Build," "Test Case," "Test Suite," and "Artifact."
These phases can also be considered as different profiles of the vocabulary that can be adopted independently. Also notice that the term 'pipeline' is used to denote a pipeline, workflow, and related concepts. We also use the term 'task' to denote a job/stage/step.
With the vocabulary defined, CDEvents can be easily assigned to each phase. Within each phase, abstractions can be assigned. For instance, the Core phase defines "Task Runs" and "Pipeline Runs". A Pipeline Run can be:
• Queued
• Started
• Finished
While these six 'phases' define the most common CI/CD activities, they are not exhaustive. In the future, other activities may be included, for instance, monitoring.
CDEvents Format
CDEvents can be encapsulated in different message/stream/event envelopes, and the first such binding prepared by the CDEvents project uses CloudEvents with CDEvents-specific extensions and payload structure, which is based on the CDEvents vocabulary.
CDEvents producers may use the payload to provide extra context to the event's consumer. The payload, however, is not meant to transport large amounts of data. Data such as logs or software artifacts should be linked from the event and not embedded into the events. CDEvents follows the CloudEvents recommendation on event size and size limits.
All CDEvents contain information such as the type of event, the source of the event, the time the event occurred, and a unique identifier. Depending on its type, an event also contains multiple other attributes, of which some are mandatory and some are optional.
For more information about the CDEvents format, please visit the CDEvents Documentation site.
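To make the attributes above concrete, here is a minimal sketch (Python, standard library only) of a CDEvents-style payload as it might be carried inside a CloudEvents envelope. The field names, type string, and version numbers are illustrative placeholders rather than authoritative values from the specification; consult the CDEvents documentation for the exact schema.

import json
import uuid
from datetime import datetime, timezone

def make_pipeline_run_started_event(pipeline_name: str, run_url: str) -> dict:
    """Builds an illustrative CDEvents-style payload for a 'pipeline run started'
    occurrence. Field names and the type string are examples only."""
    return {
        "context": {
            "version": "0.4.1",                          # spec version (example value)
            "id": str(uuid.uuid4()),                      # unique identifier of this event
            "source": "/ci/our-pipeline-orchestrator",    # producer of the event (example)
            "type": "dev.cdevents.pipelinerun.started.0.2.0",  # example event type string
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
        "subject": {
            "id": pipeline_name,
            "type": "pipelineRun",
            "content": {"pipelineName": pipeline_name, "url": run_url},
        },
    }

if __name__ == "__main__":
    event = make_pipeline_run_started_event("build-and-test", "https://ci.example.com/runs/123")
    print(json.dumps(event, indent=2))

In the CloudEvents binding described above, context values such as the id, source, type, and timestamp are mirrored onto the CloudEvents envelope, so generic CloudEvents tooling can route the event without having to understand the CDEvents payload.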
CDEvents Use Cases
Use cases are key to understanding CDEvents. When defining CDEvents and their attributes, we must know what minimal set of information is needed to satisfy a particular use case. There are two root use cases:
• The first use case is interoperability, making it possible for one CI/CD tool to consume events produced by another without the need for 'static' imperative definitions. This use case focuses on how to make CI/CD tools work together in a more automated, streamlined manner.
• The second one is observability and metrics. Essential to improving the CI/CD pipeline is the ability of the pipeline to collect events from different CI/CD tools. This collection is essential for the pipeline to correlate CDEvents and process them consistently, building an end-to-end view of the overall CI/CD workflow.
Use Case One: Interoperability
In most enterprise organizations, it is impossible to have one CI/CD setup to rule all development projects. Different languages, platforms, and tooling might be required for each. For this reason, most organizations want to let teams choose their own optimal CI/CD setup. Let's consider the following use case. In our fictional organization, many of the software development teams prefer to use Zuul for its dependency handling and scalability. But other teams prefer GoCD. Some teams started using GitLab and prefer a central platform for source code and the project tools. Other teams prefer Jenkins and rely on a wide variety of plugins. To further complicate things, our fictional company doesn't build all the software in-house. They instead use suppliers.
Our fictional company therefore needs to understand and receive software modules built using different tooling. The problem is that Zuul artifacts are a bit different from GoCD artifacts, which are a bit different from GitLab artifacts, or artifacts produced by a custom Jenkins build. The traditional solution is to write custom translation or "glue code" to be able to understand and receive all these diverse built software modules.
And this diversity does not apply only to building artifacts. It can apply to many steps in the pipeline, including:
• Source changes
• Build activities
• Test runs
• Failures
• Compositions (multiple artifacts)
• Announcements
In a CDEvents-based system, it is unnecessary to develop custom 'glue code' for each activity. To allow our fictional company to announce new artifacts in a standardized format, CDEvents has predefined the format, taking care of the interoperability between the diverse build systems.
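As a hedged illustration of that point, the sketch below (Python) shows a consumer that reacts to artifact announcements by matching on a shared event type string rather than on tool-specific payloads, so the same handler serves events coming from any of the build systems named above. The type string and field layout reuse the illustrative payload shape from the earlier sketch and are not authoritative.

import json

# One handler for artifact announcements, regardless of whether Zuul, GoCD,
# GitLab, or Jenkins produced the event; they all share the same type string.
def on_artifact_published(event: dict) -> None:
    subject = event.get("subject", {})
    print(f"new artifact {subject.get('id')} from {event['context']['source']}")

# Illustrative type string; consult the CDEvents specification for real values.
HANDLERS = {
    "dev.cdevents.artifact.published.0.2.0": on_artifact_published,
}

def dispatch(raw_event: str) -> None:
    event = json.loads(raw_event)
    handler = HANDLERS.get(event["context"]["type"])
    if handler is None:
        return  # not an event type this service cares about
    handler(event)

if __name__ == "__main__":
    sample = {
        "context": {"type": "dev.cdevents.artifact.published.0.2.0", "source": "/ci/zuul"},
        "subject": {"id": "pkg:example/service@1.4.2"},
    }
    dispatch(json.dumps(sample))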
Use Case Two: Visualization and Metrics
Consider the following CI/CD setup. Code is written and maintained on GitHub. When changes are made, they go through different tests, maintained by different teams, which use different technologies. Some tests run in GitHub directly as GitHub Actions. Some others are executed in Jenkins and others as Tekton pipelines. Releases are managed through Tekton as well, while deployments are managed with Argo. Keptn is used to manage remediation strategies on production clusters.
If all these systems supported events in some predefined format, they could be easily collected. When they are not unified, teams must build event collectors that support multiple ways of collecting payloads. For example, both Tekton and Keptn use CloudEvents, but there is no shared semantics for interacting between them.
The goal is to have all platforms share the same format for events, allowing a standard event collector across all tools and platforms managing CI/CD pipelines. For example, to visualize the flow of a change from when it's written, through the test, release, deploy, and possibly rollback, there needs to be enough information in the events to be able to correlate the data across all tools. CDEvents addresses unifying the data through a standard event collector.
Tracking metrics across the CI/CD pipelines is critical to improving development processes by answering the question 'How effective is the DevOps setup?' To answer that question, metrics need to be commonly defined, collected, and visualized. CDEvents collects data from heterogeneous sources, making it possible to store and process it consistently.
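To sketch what "commonly defined, collected, and visualized" can mean in practice, the fragment below (Python, following the same illustrative event shape as the earlier sketches) folds a stream of already-parsed events into a per-service deployment count. A real collector would consume events from a broker or webhook and feed a metrics or visualization backend rather than return an in-memory counter.

from collections import Counter

def deployment_counts(events: list[dict]) -> Counter:
    """Counts deployment events per subject id across all producing tools.
    The type prefix below is an example, not a value taken from the spec."""
    counts: Counter = Counter()
    for event in events:
        if event["context"]["type"].startswith("dev.cdevents.service.deployed"):
            counts[event["subject"]["id"]] += 1
    return counts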
CDEvents Proof of Concept
The CDEvents contributors completed a Proof of Concept using Tekton and Keptn. The PoC shows a combined effort between Keptn and Tekton. In the PoC, Tekton played the role of the pipeline executor doing the heavy lifting with building and deploying, whereas Keptn handled the business decision. More information about the PoC can be found on the PoC GitHub page.
Your Next Steps (Call to Action)
Get involved in the CDEvents project at the Continuous Delivery Foundation. CDEvents will be critical as we move away from traditional monolithic development models to cloud-native models where decoupled applications require thousands of CI/CD workflows. Building a standard CDEvents protocol specification that can be easily supported by all CI/CD tooling is required. Contributing to the CDEvents team is a way for you to get involved in solving this critical piece of the CI/CD puzzle. Get involved by going to https://cdevents.dev/community/. You will find community information there, which is the easiest way to get started.
Conclusion
CDEvents is the next evolution of CI/CD pipeline orchestration and visualization. Every DevOps Engineer has understood the challenges of building 'plugins,' 'glue code,' and one-off scripts to build a single CI/CD pipeline. CDEvents will revolutionize the way pipelines are coordinated and unified, providing the end-to-end CI/CD pipeline visibility and data collection needed for both processing and tracking key workflow metrics. And most critically, as we move into a cloud-native architecture with microservices, scaling the end-to-end CI/CD pipeline to thousands of workflows is already happening. CDEvents will allow your pipeline to scale, and make it easy to stand up a new workflow as often as needed.
Learn more at cdevents.dev
About the Continuous Delivery Foundation
The Continuous Delivery Foundation (CDF) serves as the vendor-neutral home of many of the fastest-growing projects for continuous integration/continuous delivery (CI/CD). It fosters vendor-neutral collaboration between the industry's top developers, end-users, and vendors to further CI/CD best practices and industry specifications. Its mission is to grow and sustain projects that are part of the broad and growing continuous delivery ecosystem." } ]
{ "category": "App Definition and Development", "file_name": "CDEvents_Whitepaper.pdf", "project_name": "CDEvents", "subcategory": "Streaming & Messaging" }
[ { "data": "Contents \nConcepts \nScheduler \nFifoWorker \nExceptionTranslator \nStateBase \nSimpleState \nState \nEvent \nstate_machine.hpp \nClass template state_machine \nasynchronous_state_machine.hpp \nClass template asynchronous_state_machine \nevent_processor.hpp \nClass template event_processor \nfifo_scheduler.hpp \nClass template fifo_scheduler \nexception_translator.hpp \nClass template exception_translator \nnull_exception_translator.hpp \nClass null_exception_translator \n \nsimple_state.hpp \nEnum history_mode \nClass template simple_state \nstate.hpp \nClass template state \nshallow_history.hpp \nClass template shallow_history \ndeep_history.hpp \nClass template deep_history \n \nevent_base.hpp \nClass event_base \nevent.hpp \nClass template event \n \ntransition.hpp \nClass template transition \nin_state_reaction.hpp \nClass template in_state_reaction \ntermination.hpp \nClass template termination \ndeferral.hpp \nClass template deferral \ncustom_reaction.hpp \nClass template custom_reaction \nresult.hpp \nClass result \nThe Boost Statechart Library \nReference Page 1 of 35 The Boost Statechart Library - Reference \n2008/01/06Concepts \nScheduler concept \nA Scheduler type defines the following: \n/circle6What is passed to the constructors of event_processor<> subtypes and how the lifetime of such objects \nis managed \n/circle6Whether or not multiple event_processor<> subtype objects can share the same queue and sched uler \nthread \n/circle6How events are added to the schedulers' queue \n/circle6Whether and how to wait for new events when the sch edulers' queue runs empty \n/circle6Whether and what type of locking is used to ensure thread-safety \n/circle6Whether it is possible to queue events for no longe r existing event_processor<> subtype objects and \nwhat happens when such an event is processed \n/circle6What happens when one of the serviced event_processor<> subtype objects propagates an exception \nFor a Scheduler type S and an object cpc of type const S::processor_context the following \nexpressions must be well-formed and have the indica ted results: \nTo protect against abuse, all members of S::processor_context should be declared private. As a result, \nevent_processor<> must be a friend of S::processor_context . \nFifoWorker concept \nA FifoWorker type defines the following: \n/circle6Whether and how to wait for new work items when the internal work queue runs empty \n/circle6Whether and what type of locking is used to ensure thread-safety \nFor a FifoWorker type F, an object f of that type, a const object cf of that type, a parameterless function object \nw of arbitrary type and an unsigned long value n the following expressions/statements must be well- formed \nand have the indicated results: Expression Type Result \ncpc.my_scheduler() S & A reference to the scheduler \ncpc.my_handle() S::processor_handle The handle identifying the event_processor<> subtype object \nExpression/Statement Type Effects/Result \nF::work_item boost::function0< \nvoid > \nF() or F( false ) FConstructs a non-blocking (see below) object of the \nFifoWorker type. In single-threaded builds the seco nd \nexpression is not well-formed \nF( true ) FConstructs a blocking (see below) object of the FifoWorker \ntype. 
Not well-formed in single-threaded builds \nf.queue_work_item \n( w ); Constructs and queues an object of type F::work_item , \npassing w as the only argument \nf.terminate(); Creates and queues an object of type F::work_item that, \nwhen later executed in operator()() , leads to a modification \nof internal state so that terminated() henceforth returns \ntrue \ntrue if terminate() has been called and the resulting work \nitem has been executed in operator()() . Returns false Page 2 of 35 The Boost Statechart Library - Reference \n2008/01/06ExceptionTranslator concept \nAn ExceptionTranslator type defines how C++ excepti ons occurring during state machine operation are tr anslated \nto exception events. \nFor an ExceptionTranslator object et , a parameterless function object a of arbitrary type returning result and a \nfunction object eh of arbitrary type taking a const event_base & parameter and returning result the \nfollowing expression must be well-formed and have t he indicated results: \nStateBase concept \nA StateBase type is the common base of all states o f a given state machine type. \nstate_machine<>::state_base_type is a model of the StateBase concept. \nFor a StateBase type S and a const object cs of that type the following expressions must be wel l-formed and \nhave the indicated results: cf.terminated(); bool otherwise \n \nMust only be called from the thread that also calls \noperator()() \nf( n ); unsigned long Enters a loop that, with each cycle, dequeues and c alls \noperator()() on the oldest work item in the queue. \nThe loop is left and the number of executed work it ems \nreturned if one or more of the following conditions are met: \n/circle6f.terminated() == true \n/circle6The application is single-threaded and the internal \nqueue is empty \n/circle6The application is multi- threaded and the internal queue \nis empty and the worker was created as non-blocking \n/circle6n != 0 and the number of work items that have been \nprocessed since operator()() was called equals n \nIf the queue is empty and none of the above conditi ons are \nmet then the thread calling operator()() is put into a wait \nstate until f.queue_work_item() is called from another \nthread. \n \nMust only be called from exactly one thread \nf(); unsigned long Has exactly the same semantics as f( n ); with n == 0 (see \nabove) \nExpression Type Effects/Result \net \n( a, eh ); result 1. Attempts to execute return a(); \n2. If a() propagates an exception, the exception is caught \n3. Inside the catch block calls eh , passing a suitable stack-allocated model of the \nEvent concept \n4. Returns the result returned by eh \nExpression Type Result \ncs.outer_state_ptr() const S * 0 if cs is an outermost state , a pointer to the direct outer state of \ncs otherwise \nA value unambiguously identifying the most-derived type of Page 3 of 35 The Boost Statechart Library - Reference \n2008/01/06SimpleState concept \nA SimpleState type defines one state of a particula r state machine. \nFor a SimpleState type S and a pointer pS pointing to an object of type S allocated with new the following \nexpressions/statements must be well-formed and have the indicated effects/results: \nState concept \nA State is a refinement of SimpleState (that is, except for the default constructor a Sta te type must also satisfy \nSimpleState requirements). 
For a State type S, a pointer pS of type S * pointing to an object of type S allocated \nwith new , and an object mc of type state< S, C, I, h >::my_context the following \nexpressions/statements must be well-formed: cs.dynamic_type() S::id_type cs . S::id_type values are comparable with operator==() \nand operator!=() . An unspecified collating order can be \nestablished with std::less< S::id_type > . In contrast to \ntypeid( cs ) , this function is available even on platforms that \ndo not support C++ RTTI (or have been configured to not \nsupport it) \ncs.custom_dynamic_type_ptr< \n Type >() const Type \n*A pointer to the custom type identifier or 0. If != 0 , Type must \nmatch the type of the previously set pointer. This function is \nonly available if \nBOOST_STATECHART_USE_NATIVE_RTTI is not defined \nExpression/Statement Type Effects/Result/Notes \nsimple_state < \n S, C, I, h > * pB = \n pS; simple_state< S, C, I, h > must be an \nunambiguous public base of S. See \nsimple_state<> documentation for the \nrequirements and semantics of C, I and h\nnew S() S * Enters the state S. Certain functions must not be \ncalled from S::S() , see simple_state<> \ndocumentation for more information \npS->exit(); Exits the state S (first stage). The definition of \nan exit member function within models of the \nSimpleState concept is optional since \nsimple_state<> already defines the following \npublic member: void exit() {} . exit() is \nnot called when a state is exited while an \nexception is pending, see \nsimple_state<>::terminate() for more \ninformation \ndelete pS; Exits the state S (second stage) \nS::reactions An mpl::list<> that is either \nempty or contains instantiations of \nthe custom_reaction , \nin_state_reaction , deferral , \ntermination or transition class \ntemplates. If there is only a single \nreaction then it can also be \ntypedef ed directly, without \nwrapping it into an mpl::list<> The declaration of a reactions member \ntypedef within models of the SimpleState \nconcept is optional since simple_state<> \nalready defines the following public member: \ntypedef mpl::list<> reactions; \nExpression/Statement Type Effects/Result/Notes Page 4 of 35 The Boost Statechart Library - Reference \n2008/01/06Event concept \nA Event type defines an event for which state machi nes can define reactions. \nFor a Event type E and a pointer pCE of type const E * pointing to an object of type E allocated with new the \nfollowing expressions/statements must be well-forme d and have the indicated effects/results: \nHeader <boost/statechart/state_machine.hpp> \nClass template state_machine \nThis is the base class template of all synchronous state machines. \nClass template state_machine parameters \nClass template state_machine synopsis state < S, C, I, h > * \n pB = pS; state< S, C, I, h > must be an unambiguous public base of S. See \nstate<> documentation for the requirements and semantics o f C, I and h\nnew S( mc ) S * Enters the state S. No restrictions exist regarding the functions tha t can be \ncalled from S::S() (in contrast to the constructors of models of the \nSimpleState concept). 
mc must be forwarded to state< S, C, I, h \n>::state() \nExpression/Statement Type Effects/Result/Notes \nconst event < E > * pCB = pCE; event< E > must be an unambiguous public base of E\nnew E( *pCE ) E * Makes a copy of pE \nTemplate parameter Requirements Semantics Default \nMostDerived The most-derived \nsubtype of this class \ntemplate \nInitialState A model of the \nSimpleState or State \nconcepts. The \nContext argument \npassed to the \nsimple_state<> or \nstate<> base of \nInitialState must \nbe MostDerived . \nThat is, \nInitialState must \nbe an outermost state \nof this state machine The state that is entered when \nstate_machine<> \n::initiate() is called \nAllocator A model of the \nstandard Allocator \nconcept Allocator::rebind<>::other \nis used to allocate and deallocate \nall simple_state subtype \nobjects and internal objects of \ndynamic storage duration std::allocator< void > \nExceptionTranslator A model of the \nExceptionTranslator \nconcept see ExceptionTranslator concept null_exception_translator Page 5 of 35 The Boost Statechart Library - Reference \n2008/01/06namespace boost \n{ \nnamespace statechart \n{ \n template< \n class MostDerived, \n class InitialState, \n class Allocator = std::allocator< void >, \n class ExceptionTranslator = null_exception_tran slator > \n class state_machine : noncopyable \n { \n public: \n typedef MostDerived outermost_context_type; \n \n void initiate (); \n void terminate (); \n bool terminated () const; \n \n void process_event ( const event_base & ); \n \n template< class Target > \n Target state_cast () const; \n template< class Target > \n Target state_downcast () const; \n \n // a model of the StateBase concept \n typedef implementation-defined state_base_type; \n // a model of the standard Forward Iterator c oncept \n typedef implementation-defined state_iterator; \n \n state_iterator state_begin () const; \n state_iterator state_end () const; \n \n void unconsumed_event ( const event_base & ) {} \n \n protected: \n state_machine (); \n ~state_machine (); \n \n void post_event ( \n const intrusive_ptr< const event_base > & ); \n void post_event ( const event_base & ); \n }; \n} \n} \nClass template state_machine constructor and destructor \nstate_machine(); \nEffects : Constructs a non-running state machine \n~state_machine(); \nEffects : Destructs the currently active outermost state an d all its direct and indirect inner states. Innermo st states \nare destructed first. Other states are destructed a s soon as all their direct and indirect inner state s have been \ndestructed. The inner states of each state are dest ructed according to the number of their orthogonal region. The \nstate in the orthogonal region with the highest num ber is always destructed first, then the state in t he region with \nthe second-highest number and so on \nNote : Does not attempt to call any exit member functions Page 6 of 35 The Boost Statechart Library - Reference \n2008/01/06Class template state_machine modifier functions \nvoid initiate(); \nEffects : \n1. Calls terminate() \n2. Constructs a function object action with a parameter-less operator()() returning result that \na. enters (constructs) the state specified with the InitialState template parameter \nb. enters the tree formed by the direct and indirect inner initial states of InitialState depth first. \nThe inner states of each state are entered accordin g to the number of their orthogonal region. 
The sta te \nin orthogonal region 0 is always entered first, the n the state in region 1 and so on \n3. Constructs a function object exceptionEventHandler with an operator()() returning result \nand accepting an exception event parameter that pro cesses the passed exception event, with the followi ng \ndifferences to the processing of normal events: \n/circle6From the moment when the exception has been thrown until right after the execution of the exception \nevent reaction, states that need to be exited are o nly destructed but no exit member functions are \ncalled \n/circle6Reaction search always starts with the outermost unstable state \n/circle6As for normal events, reaction search moves outward when the current state cannot handle the event. \nHowever, if there is no outer state (an outermost state has been reached) the reaction search is \nconsidered unsuccessful. That is, exception events will never be dispatched to orthogonal regions \nother than the one that caused the exception event \n/circle6Should an exception be thrown during exception even t reaction search or reaction execution then the \nexception is propagated out of the exceptionEventHandler function object (that is, \nExceptionTranslator is not used to translate exceptions thrown while processi ng an exception \nevent) \n/circle6If no reaction could be found for the exception eve nt or if the state machine is not stable after \nprocessing the exception event, the original except ion is rethrown. Otherwise, a result object is \nreturned equal to the one returned by simple_state<>::discard_event() \n4. Passes action and exceptionEventHandler to ExceptionTranslator::operator()() . If \nExceptionTranslator::operator()() throws an exception, the exception is propagated t o the \ncaller. If the caller catches the exception, the cu rrently active outermost state and all its direct a nd indirect \ninner states are destructed. Innermost states are d estructed first. Other states are destructed as soo n as all \ntheir direct and indirect inner states have been de structed. The inner states of each state are destru cted \naccording to the number of their orthogonal region. The state in the orthogonal region with the highes t \nnumber is always destructed first, then the state i n the region with the second-highest number and so on. \nContinues with step 5 otherwise (the return value i s discarded) \n5. Processes all posted events (see process_event() ). Returns to the caller if there are no more poste d \nevents \nThrows : Any exceptions propagated from ExceptionTranslator::operator()() . Exceptions never \noriginate in the library itself but only in code su pplied through template parameters: \n/circle6Allocator::rebind<>::other::allocate() \n/circle6state constructors \n/circle6react member functions \n/circle6exit member functions \n/circle6transition-actions \nvoid terminate(); \nEffects : \n1. Constructs a function object action with a parameter-less operator()() returning result that \nterminates the currently active outermost state, discards all remaining events and clears all history \ninformation \n2. 
Constructs a function object exceptionEventHandler with an operator()() returning result \nand accepting an exception event parameter that pro cesses the passed exception event, with the followi ng \ndifferences to the processing of normal events: Page 7 of 35 The Boost Statechart Library - Reference \n2008/01/06/circle6From the moment when the exception has been thrown until right after the execution of the exception \nevent reaction, states that need to be exited are o nly destructed but no exit member functions are \ncalled \n/circle6Reaction search always starts with the outermost unstable state \n/circle6As for normal events, reaction search moves outward when the current state cannot handle the event. \nHowever, if there is no outer state (an outermost state has been reached) the reaction search is \nconsidered unsuccessful. That is, exception events will never be dispatched to orthogonal regions \nother than the one that caused the exception event \n/circle6Should an exception be thrown during exception even t reaction search or reaction execution then the \nexception is propagated out of the exceptionEventHandler function object (that is, \nExceptionTranslator is not used to translate exceptions thrown while processi ng an exception \nevent) \n/circle6If no reaction could be found for the exception eve nt or if the state machine is not stable after \nprocessing the exception event, the original except ion is rethrown. Otherwise, a result object is \nreturned equal to the one returned by simple_state<>::discard_event() \n3. Passes action and exceptionEventHandler to ExceptionTranslator::operator()() . If \nExceptionTranslator::operator()() throws an exception, the exception is propagated t o the \ncaller. If the caller catches the exception, the cu rrently active outermost state and all its direct a nd indirect \ninner states are destructed. Innermost states are d estructed first. Other states are destructed as soo n as all \ntheir direct and indirect inner states have been de structed. The inner states of each state are destru cted \naccording to the number of their orthogonal region. The state in the orthogonal region with the highes t \nnumber is always destructed first, then the state i n the region with the second-highest number and so on. \nOtherwise, returns to the caller \nThrows : Any exceptions propagated from ExceptionTranslator::operator() . Exceptions never \noriginate in the library itself but only in code su pplied through template parameters: \n/circle6Allocator::rebind<>::other::allocate() \n/circle6state constructors \n/circle6react member functions \n/circle6exit member functions \n/circle6transition-actions \nvoid process_event( const event_base & ); \nEffects : \n1. Selects the passed event as the current event (he nceforth referred to as currentEvent ) \n2. Starts a new reaction search \n3. Selects an arbitrary but in this reaction search not yet visited state from all the currently active innermost \nstates . If no such state exists then continues with step 10 \n4. Constructs a function object action with a parameter-less operator()() returning result that does \nthe following: \na. Searches a reaction suitable for currentEvent , starting with the current innermost state and \nmoving outward until a state defining a reaction fo r the event is found. Returns \nsimple_state<>::forward_event() if no reaction has been found \nb. Executes the found reaction. 
If the reaction resu lt is equal to the return value of \nsimple_state<>::forward_event() then resumes the reaction search (step a). Returns the \nreaction result otherwise \n5. Constructs a function object exceptionEventHandler returning result and accepting an exception \nevent parameter that processes the passed exception event, with the following differences to the proce ssing \nof normal events: \n/circle6From the moment when the exception has been thrown until right after the execution of the exception \nevent reaction, states that need to be exited are o nly destructed but no exit member functions are \ncalled \n/circle6If the state machine is stable when the exception e vent is processed then exception event reaction \nsearch starts with the innermost state that was las t visited during the last normal event reaction sea rch \n(the exception event was generated as a result of t his normal reaction search) \n/circle6If the state machine is unstable when the exception event is processed then excepti on event reaction \nsearch starts with the outermost unstable state \n/circle6As for normal events, reaction search moves outward when the current state cannot handle the event. Page 8 of 35 The Boost Statechart Library - Reference \n2008/01/06However, if there is no outer state (an outermost state has been reached) the reaction search is \nconsidered unsuccessful. That is, exception events will never be dispatched to orthogonal regions \nother than the one that caused the exception event \n/circle6Should an exception be thrown during exception even t reaction search or reaction execution then the \nexception is propagated out of the exceptionEventHandler function object (that is, \nExceptionTranslator is not used to translate exceptions thrown while processi ng an exception \nevent) \n/circle6If no reaction could be found for the exception eve nt or if the state machine is not stable after \nprocessing the exception event, the original except ion is rethrown. Otherwise, a result object is \nreturned equal to the one returned by simple_state<>::discard_event() \n6. Passes action and exceptionEventHandler to ExceptionTranslator::operator()() . If \nExceptionTranslator::operator()() throws an exception, the exception is propagated t o the \ncaller. If the caller catches the exception, the cu rrently active outermost state and all its direct a nd indirect \ninner states are destructed. Innermost states are d estructed first. Other states are destructed as soo n as all \ntheir direct and indirect inner states have been de structed. The inner states of each state are destru cted \naccording to the number of their orthogonal region. The state in the orthogonal region with the highes t \nnumber is always destructed first, then the state i n the region with the second-highest number and so on. \nOtherwise continues with step 7 \n7. If the return value of ExceptionTranslator::operator()() is equal to the one of \nsimple_state<>::forward_event() then continues with step 3 \n8. If the return value of ExceptionTranslator::operator()() is equal to the one of \nsimple_state<>::defer_event() then the return value of \ncurrentEvent. intrusive_from_this () is stored in a state- specific queue. Continues with step 11 \n9. If the return value of ExceptionTranslator::operator()() is equal to the one of \nsimple_state<>::discard_event() then continues with step 11 \n10. Calls static_cast< MostDerived * >( this )->unconsumed_ev ent \n( currentEvent ) . 
If unconsumed_event() throws an exception, the exception is propagated t o \nthe caller. Such an exception never leads to the de struction of any states (in contrast to exceptions \npropagated from ExceptionTranslator::operator()() ) \n11. If the posted events queue is non-empty then deq ueues the first event, selects it as currentEvent and \ncontinues with step 2. Returns to the caller otherw ise \nThrows : Any exceptions propagated from MostDerived::unconsumed_event() or \nExceptionTranslator::operator() . Exceptions never originate in the library itself but only in code \nsupplied through template parameters: \n/circle6Allocator::rebind<>::other::allocate() \n/circle6state constructors \n/circle6react member functions \n/circle6exit member functions \n/circle6transition-actions \n/circle6MostDerived::unconsumed_event() \nvoid post_event( \n const intrusive_ptr< const event_base > & ); \nEffects : Pushes the passed event into the posted events qu eue \nThrows : Any exceptions propagated from Allocator::allocate() \nvoid post_event( const event_base & evt ); \nEffects : post_event( evt.intrusive_from_this() ); \nThrows : Any exceptions propagated from Allocator::allocate() \nvoid unconsumed_event( const event_base & evt ); \nEffects : None \nNote : This function (or, if present, the equally named derived class member function) is called by process_event () \nwhenever a dispatched event did not trigger a react ion, see process_event () effects, point 10 for more information. Page 9 of 35 The Boost Statechart Library - Reference \n2008/01/06Class template state_machine observer functions \nbool terminated() const; \nReturns : true , if the machine is terminated. Returns false otherwise \nNote : Is equivalent to state_begin() == state_end() \ntemplate< class Target > \nTarget state_cast() const; \nReturns : Depending on the form of Target either a reference or a pointer to const if at least one of the \ncurrently active states can successfully be dynamic_cast to Target . Returns 0 for pointer targets and throws \nstd::bad_cast for reference targets otherwise. Target can take either of the following forms: const \nClass * or const Class & \nThrows : std::bad_cast if Target is a reference type and none of the active states can be dynamic_cast \nto Target \nNote : The search sequence is the same as for process_event () \ntemplate< class Target > \nTarget state_downcast() const; \nRequires : For reference targets the compiler must support p artial specialization of class templates, otherwise a \ncompile-time error will result. The type denoted by Target must be a model of the SimpleState or State concepts \nReturns : Depending on the form of Target either a reference or a pointer to const if Target is equal to the \nmost-derived type of a currently active state. Retu rns 0 for pointer targets and throws std::bad_cast for \nreference targets otherwise. Target can take either of the following forms: const Class * or const \nClass & \nThrows : std::bad_cast if Target is a reference type and none of the active states has a most derived type \nequal to Target \nNote : The search sequence is the same as for process_event () \nstate_iterator state_begin() const; \nstate_iterator state_end() const; \nReturn : Iterator objects, the range [ state_begin() , state_end() ) refers to all currently active innermost \nstates . 
For an object i of type state_iterator, *i returns a const state_base_type & and i.operator->() returns a const state_base_type *

Note: The position of a given innermost state in the range is arbitrary. It may change with each call to a modifier function. Moreover, all iterators are invalidated whenever a modifier function is called

Header <boost/statechart/asynchronous_state_machine.hpp>

Class template asynchronous_state_machine

This is the base class template of all asynchronous state machines.

Class template asynchronous_state_machine parameters

Template parameter: MostDerived
Requirements: The most-derived subtype of this class template

Template parameter: InitialState
Requirements: A model of the SimpleState or State concepts. The Context argument passed to the simple_state<> or state<> base of InitialState must be MostDerived. That is, InitialState must be an outermost state of this state machine
Semantics: The state that is entered when the state machine is initiated through the Scheduler object

Template parameter: Scheduler
Requirements: A model of the Scheduler concept
Semantics: see Scheduler concept
Default: fifo_scheduler<>

Template parameter: Allocator
Requirements: A model of the standard Allocator concept
Default: std::allocator< void >

Template parameter: ExceptionTranslator
Requirements: A model of the ExceptionTranslator concept
Semantics: see ExceptionTranslator concept
Default: null_exception_translator

Class template asynchronous_state_machine synopsis

namespace boost
{
namespace statechart
{
  template<
    class MostDerived,
    class InitialState,
    class Scheduler = fifo_scheduler<>,
    class Allocator = std::allocator< void >,
    class ExceptionTranslator = null_exception_translator >
  class asynchronous_state_machine :
    public state_machine<
      MostDerived, InitialState, Allocator, ExceptionTranslator >,
    public event_processor< Scheduler >
  {
    protected:
      typedef asynchronous_state_machine my_base;

      asynchronous_state_machine(
        typename event_processor< Scheduler >::my_context ctx );
      ~asynchronous_state_machine();
  };
}
}

Class template asynchronous_state_machine constructor and destructor

asynchronous_state_machine(
  typename event_processor< Scheduler >::my_context ctx );

Effects: Constructs a non-running asynchronous state machine
Note: Users cannot create asynchronous_state_machine<> subtype objects directly. This can only be done through an object of the Scheduler class

~asynchronous_state_machine();

Effects: Destructs the state machine
Note: Users cannot destruct asynchronous_state_machine<> subtype objects directly. This can only be done through an object of the Scheduler class

Header <boost/statechart/event_processor.hpp>

Class template event_processor

This is the base class template of all types that process events. asynchronous_state_machine<> is just one possible event processor implementation.
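For orientation, here is a minimal sketch of such an alternative processor. The name LoggingProcessor and its behaviour are hypothetical and not part of the library; it simply derives from event_processor<>, forwards the my_context parameter and implements the three private virtual hooks. Like any event processor, it can only be created through a Scheduler object (for example via fifo_scheduler<>::create_processor<>(), see below):

#include <boost/statechart/event_processor.hpp>
#include <boost/statechart/fifo_scheduler.hpp>
#include <boost/statechart/event_base.hpp>
#include <iostream>

namespace sc = boost::statechart;

class LoggingProcessor : public sc::event_processor< sc::fifo_scheduler<> >
{
  public:
    // my_context is the protected typedef inherited from event_processor<>
    LoggingProcessor( my_context ctx ) :
      sc::event_processor< sc::fifo_scheduler<> >( ctx ) {}

  private:
    // The three private virtual hooks, called by initiate(),
    // process_event() and terminate() respectively
    virtual void initiate_impl() { std::cout << "initiated\n"; }
    virtual void process_event_impl( const sc::event_base & )
    { std::cout << "event dispatched to this processor\n"; }
    virtual void terminate_impl() { std::cout << "terminated\n"; }
};

Such a processor would then be instantiated with, for example, fifo_scheduler<>::create_processor< LoggingProcessor >().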
\nClass template event_processor parameters \nClass template event_processor synopsis \nnamespace boost \n{ \nnamespace statechart \n{ \n template< class Scheduler > \n class event_processor \n { \n public: \n virtual ~event_processor (); \n \n Scheduler & my_scheduler () const; \n \n typedef typename Scheduler::processor_handle \n processor_handle; \n processor_handle my_handle () const; \n \n void initiate (); \n void process_event ( const event_base & evt ); \n void terminate (); \n \n protected: \n typedef const typename Scheduler::processor_c ontext & \n my_context; \n event_processor ( my_context ctx ); \n \n private: \n virtual void initiate_impl() = 0; \n virtual void process_event_impl( \n const event_base & evt ) = 0; \n virtual void terminate_impl() = 0; \n }; \n} \n} \nClass template event_processor constructor and destructor \nevent_processor( my_context ctx ); \nEffects : Constructs an event processor object and stores c opies of the reference returned by \nmyContext.my_scheduler() and the object returned by myContext.my_handle() \nNote : Users cannot create event_processor<> subtype objects directly. This can only be done th rough an \nobject of the Scheduler class Template parameter Requirements Semantics \nScheduler A model of the Scheduler concept see Scheduler concept Page 12 of 35 The Boost Statechart Library - Reference \n2008/01/06virtual ~event_processor(); \nEffects : Destructs an event processor object \nNote : Users cannot destruct event_processor<> subtype objects directly. This can only be done th rough an \nobject of the Scheduler class \nClass template event_processor modifier functions \nvoid initiate(); \nEffects : initiate_impl(); \nThrows : Any exceptions propagated from the implementation of initiate_impl() \nvoid process_event( const event_base & evt ); \nEffects : process_event_impl( evt ); \nThrows : Any exceptions propagated from the implementation of process_event_impl() \nvoid terminate(); \nEffects : terminate_impl(); \nThrows : Any exceptions propagated from the implementation of terminate_impl() \nClass template event_processor observer functions \nScheduler & my_scheduler() const; \nReturns : The Scheduler reference obtained in the constructor \nprocessor_handle my_handle() const; \nReturns : The processor_handle object obtained in the constructor \nHeader <boost/statechart/fifo_scheduler.hpp> \nClass template fifo_scheduler \nThis class template is a model of the Scheduler concept. 
\nClass template fifo_scheduler parameters \nClass template fifo_scheduler synopsis \nnamespace boost \n{ \nnamespace statechart \n{ \n template< Template \nparameter Requirements Semantics Default \nFifoWorker A model of the FifoWorker concept see FifoWorker \nconcept fifo_worker<> \nAllocator A model of the standard Allocator \nconcept std::allocator< void > Page 13 of 35 The Boost Statechart Library - Reference \n2008/01/06 class FifoWorker = fifo_worker<>, \n class Allocator = std::allocator< void > > \n class fifo_scheduler : noncopyable \n { \n public: \n fifo_scheduler ( bool waitOnEmptyQueue = false ); \n \n typedef implementation-defined processor_handle; \n \n class processor_context : noncopyable \n { \n processor_context( \n fifo_scheduler & scheduler, \n const processor_handle & theHandle ); \n \n fifo_scheduler & my_scheduler() const; \n const processor_handle & my_handle() const; \n \n friend class fifo_scheduler; \n friend class event_processor< fifo_schedule r >; \n }; \n \n template< class Processor > \n processor_handle create_processor (); \n template< class Processor, typename Param1 > \n processor_handle create_processor ( Param1 param1 ); \n \n // More create_processor overloads \n \n void destroy_processor ( processor_handle processor ); \n \n void initiate_processor ( processor_handle processor ); \n void terminate_processor ( processor_handle processor ); \n \n typedef intrusive_ptr< const event_base > eve nt_ptr_type; \n \n void queue_event ( \n const processor_handle & processor, \n const event_ptr_type & pEvent ); \n \n typedef typename FifoWorker::work_item work_i tem; \n \n void queue_work_item ( const work_item & item ); \n \n void terminate (); \n bool terminated () const; \n \n unsigned long operator() ( \n unsigned long maxEventCount = 0 ); \n }; \n} \n} \nClass template fifo_scheduler constructor \nfifo_scheduler( bool waitOnEmptyQueue = false ); \nEffects : Constructs a fifo_scheduler<> object. In multi-threaded builds, waitOnEmptyQueue is \nforwarded to the constructor of a data member of ty pe FifoWorker . In single-threaded builds, the \nFifoWorker data member is default-constructed \nNote : In single-threaded builds the fifo_scheduler<> constructor does not accept any parameters and Page 14 of 35 The Boost Statechart Library - Reference \n2008/01/06operator()() thus always returns to the caller when the event q ueue is empty \nClass template fifo_scheduler modifier functions \ntemplate< class Processor > \nprocessor_handle create_processor(); \nRequires : The Processor type must be a direct or indirect subtype of the event_processor class template \nEffects : Creates and passes to FifoWorker::queue_work_item() an object of type \nFifoWorker::work_item that, when later executed in FifoWorker::operator()() , leads to a call to \nthe constructor of Processor , passing an appropriate processor_context object as the only argument \nReturns : A processor_handle object that henceforth identifies the created even t processor object \nThrows : Any exceptions propagated from FifoWorker::work_item() and \nFifoWorker::queue_work_item() \nCaution : The current implementation of this function makes an (indirect) call to global operator new() . 
\nUnless global operator new() is replaced, care must be taken when to call this function in applications with \nhard real-time requirements \ntemplate< class Processor, typename Param1 > \nprocessor_handle create_processor( Param1 param1 ); \nRequires : The Processor type must be a direct or indirect subtype of the event_processor class template \nEffects : Creates and passes to FifoWorker::queue_work_item() an object of type \nFifoWorker::work_item that, when later executed in FifoWorker::operator()() , leads to a call to \nthe constructor of Processor , passing an appropriate processor_context object and param1 as \narguments \nReturns : A processor_handle object that henceforth identifies the created even t processor object \nThrows : Any exceptions propagated from FifoWorker::work_item() and \nFifoWorker::queue_work_item() \nNote : boost::ref() and boost::cref() can be used to pass arguments by reference rather than by copy. \nfifo_scheduler<> has 5 additional create_processor<> overloads, allowing to pass up to 6 custom \narguments to the constructors of event processors \nCaution : The current implementation of this and all other overloads make (indirect) calls to global operator \nnew() . Unless global operator new() is replaced, care must be taken when to call these overloads in \napplications with hard real-time requirements \nvoid destroy_processor( processor_handle processor ); \nRequires : processor was obtained from a call to one of the create_processor<>() overloads on the \nsame fifo_scheduler<> object \nEffects : Creates and passes to FifoWorker::queue_work_item() an object of type \nFifoWorker::work_item that, when later executed in FifoWorker::operator()() , leads to a call to \nthe destructor of the event processor object associ ated with processor . The object is silently discarded if the \nevent processor object has been destructed before \nThrows : Any exceptions propagated from FifoWorker::work_item() and \nFifoWorker::queue_work_item() \nCaution : The current implementation of this function leads to an (indirect) call to global operator delete() \n(the call is made when the last processor_handle object associated with the event processor object is \ndestructed). Unless global operator delete() is replaced, care must be taken when to call this function in \napplications with hard real-time requirements \nvoid initiate_processor( processor_handle processor ); \nRequires : processor was obtained from a call to one of the create_processor() overloads on the same \nfifo_scheduler<> object \nEffects : Creates and passes to FifoWorker::queue_work_item() an object of type \nFifoWorker::work_item that, when later executed in FifoWorker::operator()() , leads to a call to \ninitiate () on the event processor object associated with processor . The object is silently discarded if the \nevent processor object has been destructed before Page 15 of 35 The Boost Statechart Library - Reference \n2008/01/06Throws : Any exceptions propagated from FifoWorker::work_item() and \nFifoWorker::queue_work_item() \nvoid terminate_processor( processor_handle processo r ); \nRequires : processor was obtained from a call to one of the create_processor<>() overloads on the \nsame fifo_scheduler<> object \nEffects : Creates and passes to FifoWorker::queue_work_item() an object of type \nFifoWorker::work_item that, when later executed in FifoWorker::operator()() , leads to a call to \nterminate () on the event processor object associated with processor . 
The object is silently discarded if the \nevent processor object has been destructed before \nThrows : Any exceptions propagated from FifoWorker::work_item() and \nFifoWorker::queue_work_item() \nvoid queue_event( \n const processor_handle & processor, \n const event_ptr_type & pEvent ); \nRequires : pEvent.get() != 0 and processor was obtained from a call to one of the \ncreate_processor<>() overloads on the same fifo_scheduler<> object \nEffects : Creates and passes to FifoWorker::queue_work_item() an object of type \nFifoWorker::work_item that, when later executed in FifoWorker::operator()() , leads to a call to \nprocess_event ( *pEvent ) on the event processor object associated with processor . The object is \nsilently discarded if the event processor object ha s been destructed before \nThrows : Any exceptions propagated from FifoWorker::work_item() and \nFifoWorker::queue_work_item() \nvoid queue_work_item( const work_item & item ); \nEffects : FifoWorker::queue_work_item( item ); \nThrows : Any exceptions propagated from the above call \nvoid terminate(); \nEffects : FifoWorker::terminate() \nThrows : Any exceptions propagated from the above call \nunsigned long operator()( unsigned long maxEventCou nt = 0 ); \nRequires : Must only be called from exactly one thread \nEffects : FifoWorker::operator()( maxEventCount ) \nReturns : The return value of the above call \nThrows : Any exceptions propagated from the above call \nClass template fifo_scheduler observer functions \nbool terminated() const; \nRequires : Must only be called from the thread that also cal ls operator()() \nReturns : FifoWorker::terminated(); \nHeader <boost/statechart/exception_translator.hpp> \nClass template exception_translator \nThis class template is a model of the ExceptionTranslator concept. Page 16 of 35 The Boost Statechart Library - Reference \n2008/01/06Class template exception_translator parameters \nClass template exception_translator synopsis & semantics \nnamespace boost \n{ \nnamespace statechart \n{ \n class exception_thrown : public event< exception_ thrown > {}; \n \n template< class ExceptionEvent = exception_thrown > \n class exception_translator \n { \n public: \n template< class Action, class ExceptionEventH andler > \n result operator()( \n Action action, \n ExceptionEventHandler eventHandler ) \n { \n try \n { \n return action(); \n } \n catch( ... ) \n { \n return eventHandler( ExceptionEvent() ); \n } \n } \n }; \n} \n} \nHeader <boost/statechart/ \nnull_exception_translator.hpp> \nClass null_exception_translator \nThis class is a model of the ExceptionTranslator concept. \nClass null_exception_translator synopsis & semantics \nnamespace boost \n{ \nnamespace statechart \n{ \n class null_exception_translator \n { \n public: \n template< class Action, class ExceptionEventH andler > \n result operator()( \n Action action, ExceptionEventHandler ) Template \nparameter Requirements Semantics Default \nExceptionEvent A model of the Event \nconcept The type of event that is dispatched when an \nexception is propagated into the framework exception_thrown Page 17 of 35 The Boost Statechart Library - Reference \n2008/01/06 { \n return action(); \n } \n }; \n} \n} \nHeader <boost/statechart/simple_state.hpp> \nEnum history_mode \nDefines the history type of a state. 
\nnamespace boost \n{ \nnamespace statechart \n{ \n enum history_mode \n { \n has_no_history, \n has_shallow_history, \n has_deep_history, \n has_full_history // shallow & deep \n }; \n} \n} \nClass template simple_state \nThis is the base class template for all models of t he SimpleState concept. Such models must not call any of the \nfollowing simple_state<> member functions from their constructors: \nvoid post_event ( \n const intrusive_ptr< const event_base > & ); \nvoid post_event ( const event_base & ); \n \ntemplate< \n class HistoryContext, \n implementation-defined-unsigned-integer-type \n orthogonalPosition > \nvoid clear_shallow_history (); \ntemplate< \n class HistoryContext, \n implementation-defined-unsigned-integer-type \n orthogonalPosition > \nvoid clear_deep_history (); \n \noutermost_context_type & outermost_context (); \nconst outermost_context_type & outermost_context () const; \n \ntemplate< class OtherContext > \nOtherContext & context (); \ntemplate< class OtherContext > \nconst OtherContext & context () const; \n \ntemplate< class Target > \nTarget state_cast () const; \ntemplate< class Target > \nTarget state_downcast () const; Page 18 of 35 The Boost Statechart Library - Reference \n2008/01/06 \nstate_iterator state_begin () const; \nstate_iterator state_end () const; \nStates that need to call any of these member functi ons from their constructors must derive from the state class \ntemplate. \nClass template simple_state parameters \nClass template simple_state synopsis \nnamespace boost \n{ \nnamespace statechart \n{ \n template< \n class MostDerived, \n class Context, \n class InnerInitial = unspecified , \n history_mode historyMode = has_no_history > \n class simple_state : implementation-defined \n { \n public: \n // by default, a state has no reactions \n typedef mpl::list<> reactions; \n \n // see template parameters \n template< implementation-defined-unsigned-integer-type \n innerOrthogonalPosition > \n struct orthogonal Template \nparameter Requirements Semantics Default \nMostDerived The most-derived subtype of this class template \nContext A most-derived direct or indirect subtype of the \nstate_machine or asynchronous_state_machine class \ntemplates or a model of the SimpleState or State concepts or \nan instantiation of the simple_state<>::orthogonal class \ntemplate. Must be a complete type Defines the \nstates' position \nin the state \nhierarchy \nInnerInitial An mpl::list<> containing models of the SimpleState or \nState concepts or instantiations of the shallow_history or \ndeep_history class templates. If there is only a single inner \ninitial state that is not a template instantiation then it can also \nbe passed directly, without wrapping it into an mpl::list<> . \nThe Context argument passed to the simple_state<> or \nstate<> base of each state in the list must correspond to the \northogonal region it belongs to. That is, the first state in the \nlist must pass MostDerived::orthogonal< 0 > , the second \nMostDerived::orthogonal< 1 > and so forth. \nMostDerived::orthogonal< 0 > and MostDerived are \nsynonymous Defines the \ninner initial \nstate for each \northogonal \nregion. 
By \ndefault, a state \ndoes not have \ninner states unspecified \nhistoryMode One of the values defined in the history_mode enumeration Defines \nwhether the \nstate saves \nshallow, deep \nor both \nhistories upon \nexit has_no_history Page 19 of 35 The Boost Statechart Library - Reference \n2008/01/06 { \n // implementation-defined \n }; \n \n typedef typename Context::outermost_context_t ype \n outermost_context_type; \n \n outermost_context_type & outermost_context (); \n const outermost_context_type & outermost_context () const; \n \n template< class OtherContext > \n OtherContext & context (); \n template< class OtherContext > \n const OtherContext & context () const; \n \n template< class Target > \n Target state_cast () const; \n template< class Target > \n Target state_downcast () const; \n \n // a model of the StateBase concept \n typedef implementation-defined state_base_type; \n // a model of the standard Forward Iterator c oncept \n typedef implementation-defined state_iterator; \n \n state_iterator state_begin () const; \n state_iterator state_end () const; \n \n void post_event ( \n const intrusive_ptr< const event_base > & ); \n void post_event ( const event_base & ); \n \n result discard_event (); \n result forward_event (); \n result defer_event (); \n template< class DestinationState > \n result transit (); \n template< \n class DestinationState, \n class TransitionContext, \n class Event > \n result transit ( \n void ( TransitionContext::* )( const Event & ), \n const Event & ); \n result terminate (); \n \n template< \n class HistoryContext, \n implementation-defined-unsigned-integer-type \n orthogonalPosition > \n void clear_shallow_history (); \n template< \n class HistoryContext, \n implementation-defined-unsigned-integer-type \n orthogonalPosition > \n void clear_deep_history (); \n \n static id_type static_type (); \n \n template< class CustomId > \n static const CustomId * custom_static_type_ptr (); Page 20 of 35 The Boost Statechart Library - Reference \n2008/01/06 \n template< class CustomId > \n static void custom_static_type_ptr ( const CustomId * ); \n \n // see transit () or terminate () effects \n void exit() {} \n \n protected: \n simple_state (); \n ~simple_state (); \n }; \n} \n} \nClass template simple_state constructor and destructor \nsimple_state(); \nEffects : Constructs a state object \n~simple_state(); \nEffects : Pushes all events deferred by the state into the posted events queue \nClass template simple_state modifier functions \nvoid post_event( \n const intrusive_ptr< const event_base > & pEvt ); \nRequires : If called from a constructor of a direct or indir ect subtype then the most-derived type must directl y or \nindirectly derive from the state class template . All direct and indirect callers must be exception- neutral \nEffects : outermost_context (). post_event ( pEvt ); \nThrows : Whatever the above call throws \nvoid post_event( const event_base & evt ); \nRequires : If called from a constructor of a direct or indir ect subtype then the most-derived type must directl y or \nindirectly derive from the state class template . All direct and indirect callers must be exception- neutral \nEffects : outermost_context (). post_event ( evt ); \nThrows : Whatever the above call throws \nresult discard_event(); \nRequires : Must only be called from within react member functions, which are called by \ncustom_reaction<> instantiations. 
All direct and indirect callers mu st be exception-neutral \nEffects : Instructs the state machine to discard the curren t event and to continue with the processing of the \nremaining events (see state_machine<>::process_event () for details) \nReturns : A result object. The user-supplied react member function must return this object to its cal ler \nresult forward_event(); \nRequires : Must only be called from within react member functions, which are called by \ncustom_reaction<> instantiations. All direct and indirect callers mu st be exception-neutral \nEffects : Instructs the state machine to forward the curren t event to the next state (see \nstate_machine<>::process_event () for details) \nReturns : A result object. The user-supplied react member function must return this object to its cal ler \nresult defer_event(); Page 21 of 35 The Boost Statechart Library - Reference \n2008/01/06Requires : Must only be called from within react member functions, which are called by \ncustom_reaction<> instantiations. All direct and indirect callers mu st be exception-neutral \nEffects : Instructs the state machine to defer the current event and to continue with the processing of the re maining \nevents (see state_machine<>::process_event () for details) \nReturns : A result object. The user-supplied react member function must return this object to its cal ler \nThrows : Any exceptions propagated from Allocator::rebind<>::other::allocate() (the template \nparameter passed to the base class of outermost_context_type ) \ntemplate< class DestinationState > \nresult transit(); \nRequires : Must only be called from within react member functions, which are called by \ncustom_reaction<> instantiations. All direct and indirect callers mu st be exception-neutral \nEffects : \n1. Exits all currently active direct and indirect in ner states of the innermost common context of this state and \nDestinationState . Innermost states are exited first. Other states a re exited as soon as all their direct \nand indirect inner states have been exited. The inn er states of each state are exited according to the number \nof their orthogonal region. The state in the orthog onal region with the highest number is always exite d first, \nthen the state in the region with the second-highes t number and so on. \nThe process of exiting a state consists of the foll owing steps: \n1. If there is an exception pending that has not yet b een handled successfully then only step 5 is execut ed \n2. Calls the exit member function (see synopsis ) of the most-derived state object. If exit() throws \nthen steps 3 and 4 are not executed \n3. If the state has shallow history then shallow his tory information is saved \n4. If the state is an innermost state then deep hist ory information is saved for all direct and indirec t outer \nstates that have deep history \n5. The state object is destructed \n2. Enters (constructs) the state that is both a dire ct inner state of the innermost common context and either the \nDestinationState itself or a direct or indirect outer state of DestinationState \n3. Enters (constructs) the tree formed by the direct and indirect inner states of the previously entere d state \ndown to the DestinationState and beyond depth first. The inner states of each s tate are entered \naccording to the number of their orthogonal region. The state in orthogonal region 0 is always entered first, \nthen the state in region 1 and so on \n4. 
Instructs the state machine to discard the curren t event and to continue with the processing of the remaining \nevents (see state_machine<>::process_event () for details) \nReturns : A result object. The user-supplied react member function must return this object to its cal ler \nThrows : Any exceptions propagated from: \n/circle6Allocator::rebind<>::other::allocate() (the template parameter passed to the base class o f \noutermost_context_type ) \n/circle6state constructors \n/circle6exit member functions \nCaution : Inevitably destructs this state before returning to the calling react member function, which must \ntherefore not attempt to access anything except sta ck objects before returning to its caller \ntemplate< \n class DestinationState, \n class TransitionContext, \n class Event > \nresult transit( \n void ( TransitionContext::* )( const Event & ), \n const Event & ); \nRequires : Must only be called from within react member functions, which are called by \ncustom_reaction<> instantiations. All direct and indirect callers mu st be exception-neutral \nEffects : Page 22 of 35 The Boost Statechart Library - Reference \n2008/01/061. Exits all currently active direct and indirect in ner states of the innermost common context of this state and \nDestinationState . Innermost states are exited first. Other states a re exited as soon as all their direct \nand indirect inner states have been exited. The inn er states of each state are exited according to the number \nof their orthogonal region. The state in the orthog onal region with the highest number is always exite d first, \nthen the state in the region with the second-highes t number and so on. \nThe process of exiting a state consists of the foll owing steps: \n1. If there is an exception pending that has not yet b een handled successfully then only step 5 is execut ed \n2. Calls the exit member function (see synopsis ) of the most-derived state object. If exit() throws \nthen steps 3 and 4 are not executed \n3. If the state has shallow history then shallow his tory information is saved \n4. If the state is an innermost state then deep hist ory information is saved for all direct and indirec t outer \nstates that have deep history \n5. The state object is destructed \n2. Executes the passed transition action, forwarding the passed event \n3. Enters (constructs) the state that is both a dire ct inner state of the innermost common context and either the \nDestinationState itself or a direct or indirect outer state of DestinationState \n4. Enters (constructs) the tree formed by the direct and indirect inner states of the previously entere d state \ndown to the DestinationState and beyond depth first. The inner states of each s tate are entered \naccording to the number of their orthogonal region. The state in orthogonal region 0 is always entered first, \nthen the state in region 1 and so on \n5. Instructs the state machine to discard the curren t event and to continue with the processing of the remaining \nevents (see state_machine<>::process_event () for details) \nReturns : A result object. 
The user-supplied react member function must return this object to its cal ler \nThrows : Any exceptions propagated from: \n/circle6Allocator::rebind<>::other::allocate() (the template parameter passed to the base class o f \noutermost_context_type ) \n/circle6state constructors \n/circle6exit member functions \n/circle6the transition action \nCaution : Inevitably destructs this state before returning to the calling react member function, which must \ntherefore not attempt to access anything except sta ck objects before returning to its caller \nresult terminate(); \nRequires : Must only be called from within react member functions, which are called by \ncustom_reaction<> instantiations. All direct and indirect callers mu st be exception-neutral \nEffects : Exits this state and all its direct and indirect inner states. Innermost states are exited first. Ot her states are \nexited as soon as all their direct and indirect inn er states have been exited. The inner states of eac h state are exited \naccording to the number of their orthogonal region. The state in the orthogonal region with the highes t number is \nalways exited first, then the state in the region w ith the second-highest number and so on. \nThe process of exiting a state consists of the foll owing steps: \n1. If there is an exception pending that has not yet been handled successfully then only step 5 is exec uted \n2. Calls the exit member function (see synopsis ) of the most-derived state object. If exit() throws then \nsteps 3 and 4 are not executed \n3. If the state has shallow history then shallow his tory information is saved \n4. If the state is an innermost state then deep hist ory information is saved for all direct and indirec t outer states \nthat have deep history \n5. The state object is destructed \nAlso instructs the state machine to discard the cur rent event and to continue with the processing of t he remaining \nevents (see state_machine<>::process_event () for details) \nReturns : A result object. The user-supplied react member function must return this object to its cal ler \nThrows : Any exceptions propagated from: \n/circle6Allocator::rebind<>::other::allocate() (the template parameter passed to the base class o f \noutermost_context_type , used to allocate space to save history) \n/circle6exit member functions Page 23 of 35 The Boost Statechart Library - Reference \n2008/01/06Note : If this state is the only currently active inner state of its direct outer state then the direct out er state is \nterminated also. The same applies recursively for a ll indirect outer states \nCaution : Inevitably destructs this state before returning to the calling react member function, which must \ntherefore not attempt to access anything except sta ck objects before returning to its caller \ntemplate< \n class HistoryContext, \n implementation-defined-unsigned-integer-type \n orthogonalPosition > \nvoid clear_shallow_history(); \nRequires : If called from a constructor of a direct or indir ect subtype then the most-derived type must directl y or \nindirectly derive from the state class template. 
The historyMode argument passed to the \nsimple_state<> or state<> base of HistoryContext must be equal to has_shallow_history or \nhas_full_history \nEffects : Clears the shallow history of the orthogonal regi on specified by orthogonalPosition of the state \nspecified by HistoryContext \nThrows : Any exceptions propagated from Allocator::rebind<>::other::allocate() (the template \nparameter passed to the base class of outermost_context_type ) \ntemplate< \n class HistoryContext, \n implementation-defined-unsigned-integer-type \n orthogonalPosition > \nvoid clear_deep_history(); \nRequires : If called from a constructor of a direct or indir ect subtype then the most-derived type must directl y or \nindirectly derive from the state class template. The historyMode argument passed to the \nsimple_state<> or state<> base of HistoryContext must be equal to has_deep_history or \nhas_full_history \nEffects : Clears the deep history of the orthogonal region specified by orthogonalPosition of the state \nspecified by HistoryContext \nThrows : Any exceptions propagated from Allocator::rebind<>::other::allocate() (the template \nparameter passed to the base class of outermost_context_type ) \nClass template simple_state observer functions \noutermost_context_type & outermost_context(); \nRequires : If called from a constructor of a direct or indir ect subtype then the most-derived type must directl y or \nindirectly derive from the state class template. If called from a destructor of a d irect or indirect subtype then the \nstate_machine<> subclass portion must still exist \nReturns : A reference to the outermost context, which is al ways the state machine this state belongs to \nconst outermost_context_type & outermost_context() const; \nRequires : If called from a constructor of a direct or indir ect subtype then the most-derived type must directl y or \nindirectly derive from the state class template. If called from a destructor of a d irect or indirect subtype then the \nstate_machine<> subclass portion must still exist \nReturns : A reference to the const outermost context, which is always the state machine this state belongs to \ntemplate< class OtherContext > \nOtherContext & context(); \nRequires : If called from a constructor of a direct or indir ect subtype then the most-derived type must directl y or \nindirectly derive from the state class template. If called from a destructor of a d irect or indirect subtype with a \nstate_machine<> subtype as argument then the state_machine<> subclass portion must still exist \nReturns : A reference to a direct or indirect context \ntemplate< class OtherContext > Page 24 of 35 The Boost Statechart Library - Reference \n2008/01/06const OtherContext & context() const; \nRequires : If called from a constructor of a direct or indir ect subtype then the most-derived type must directl y or \nindirectly derive from the state class template. 
If called from a destructor of a d irect or indirect subtype with a \nstate_machine<> subtype as argument then the state_machine<> subclass portion must still exist \nReturns : A reference to a const direct or indirect context \ntemplate< class Target > \nTarget state_cast() const; \nRequires : If called from a constructor of a direct or indir ect subtype then the most-derived type must directl y or \nindirectly derive from the state class template \nReturns : Has exactly the same semantics as state_machine<>::state_cast <>() \nThrows : Has exactly the same semantics as state_machine<>::state_cast <>() \nNote : The result is unspecified if this function is called when the machine is unstable \ntemplate< class Target > \nTarget state_downcast() const; \nRequires : If called from a constructor of a direct or indir ect subtype then the most-derived type must directl y or \nindirectly derive from the state class template. Moreover, state_machine<>::state_downcast <>() \nrequirements also apply \nReturns : Has exactly the same semantics as state_machine<>::state_downcast <>() \nThrows : Has exactly the same semantics as state_machine<>::state_downcast <>() \nNote : The result is unspecified if this function is called when the machine is unstable \nstate_iterator state_begin() const; \nstate_iterator state_end() const; \nRequire : If called from a constructor of a direct or indir ect subtype then the most-derived type must directl y or \nindirectly derive from the state class template \nReturn : Have exactly the same semantics as state_machine<>::state_begin () and \nstate_machine<>::state_end () \nNote : The result is unspecified if these functions are called when the machine is unstable \nClass template simple_state static functions \nstatic id_type static_type(); \nReturns : A value unambiguously identifying the type of MostDerived \nNote : id_type values are comparable with operator==() and operator!=() . An unspecified collating \norder can be established with std::less< id_type > \ntemplate< class CustomId > \nstatic const CustomId * custom_static_type_ptr(); \nRequires : If a custom type identifier has been set then CustomId must match the type of the previously set \npointer \nReturns : The pointer to the custom type identifier for MostDerived or 0 \nNote : This function is not available if BOOST_STATECHART_USE_NATIVE_RTTI is defined \ntemplate< class CustomId > \nstatic void custom_static_type_ptr( const CustomId * ); \nEffects : Sets the pointer to the custom type identifier fo r MostDerived \nNote : This function is not available if BOOST_STATECHART_USE_NATIVE_RTTI is defined Page 25 of 35 The Boost Statechart Library - Reference \n2008/01/06Header <boost/statechart/ state.hpp > \nClass template state \nThis is the base class template for all models of t he State concept. 
Such models typically need to call at lea st one of \nthe following simple_state<> member functions from their constructors: \nvoid post_event ( \n const intrusive_ptr< const event_base > & ); \nvoid post_event ( const event_base & ); \n \ntemplate< \n class HistoryContext, \n implementation-defined-unsigned-integer-type \n orthogonalPosition > \nvoid clear_shallow_history (); \ntemplate< \n class HistoryContext, \n implementation-defined-unsigned-integer-type \n orthogonalPosition > \nvoid clear_deep_history (); \n \noutermost_context_type & outermost_context (); \nconst outermost_context_type & outermost_context () const; \n \ntemplate< class OtherContext > \nOtherContext & context (); \ntemplate< class OtherContext > \nconst OtherContext & context () const; \n \ntemplate< class Target > \nTarget state_cast () const; \ntemplate< class Target > \nTarget state_downcast () const; \n \nstate_iterator state_begin () const; \nstate_iterator state_end () const; \nStates that do not need to call any of these member functions from their constructors should rather de rive from the \nsimple_state class template, what saves the implementation of t he forwarding constructor. \nClass template state synopsis \nnamespace boost \n{ \nnamespace statechart \n{ \n template< \n class MostDerived, \n class Context, \n class InnerInitial = unspecified , \n history_mode historyMode = has_no_history > \n class state : public simple_state< \n MostDerived, Context, InnerInitial, historyMode > \n { \n protected: \n struct my_context \n { \n // implementation-defined Page 26 of 35 The Boost Statechart Library - Reference \n2008/01/06 }; \n \n typedef state my_base; \n \n state( my_context ctx ); \n ~state(); \n }; \n} \n} \nDirect and indirect subtypes of state<> must provide a constructor with the same signature as the state<> \nconstructor, forwarding the context parameter. \nHeader <boost/statechart/shallow_history.hpp> \nClass template shallow_history \nThis class template is used to specify a shallow hi story transition target or a shallow history inner initial state. \nClass template shallow_history parameters \nClass template shallow_history synopsis \nnamespace boost \n{ \nnamespace statechart \n{ \n template< class DefaultState > \n class shallow_history \n { \n // implementation-defined \n }; \n} \n} \nHeader <boost/statechart/deep_history.hpp> \nClass template deep_history \nThis class template is used to specify a deep histo ry transition target or a deep history inner initia l state. The \ncurrent deep history implementation has some limitations . \nClass template deep_history parameters Template \nparameter Requirements Semantics \nDefaultState A model of the SimpleState or State concepts. The type passed as \nContext argument to the simple_state<> or state<> base \nof DefaultState must itself pass has_shallow_history or \nhas_full_history as historyMode argument to its simple_state<> or \nstate<> base The state that is \nentered if shallow \nhistory is not \navailable \nTemplate \nparameter Requirements Semantics Page 27 of 35 The Boost Statechart Library - Reference \n2008/01/06Class template deep_history synopsis \nnamespace boost \n{ \nnamespace statechart \n{ \n template< class DefaultState > \n class deep_history \n { \n // implementation-defined \n }; \n} \n} \nHeader <boost/statechart/event_base.hpp> \nClass event_base \nThis is the common base of all events. 
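As a small illustration of this interface (the event names below are hypothetical), dynamic_type() can be compared against the static_type() of concrete event classes to identify an event that is only available through an event_base reference; this works even when C++ RTTI is disabled:

#include <boost/statechart/event_base.hpp>
#include <boost/statechart/event.hpp>
#include <iostream>

namespace sc = boost::statechart;

struct EvKeyPress : sc::event< EvKeyPress > {};
struct EvMouseMove : sc::event< EvMouseMove > {};

// identifies an event through the common base interface only
void identify( const sc::event_base & evt )
{
  if ( evt.dynamic_type() == EvKeyPress::static_type() )
    std::cout << "key press\n";
  else if ( evt.dynamic_type() == EvMouseMove::static_type() )
    std::cout << "mouse move\n";
  else
    std::cout << "other event\n";
}

int main()
{
  identify( EvKeyPress() );
  identify( EvMouseMove() );
  return 0;
}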
\nClass event_base synopsis \nnamespace boost \n{ \nnamespace statechart \n{ \n class event_base \n { \n public: \n intrusive_ptr< const event_base > \n intrusive_from_this () const; \n \n typedef implementation-defined id_type; \n \n id_type dynamic_type () const; \n \n template< typename CustomId > \n const CustomId * custom_dynamic_type_ptr () const; \n \n protected: \n event_base ( unspecified-parameter ); \n virtual ~event_base (); \n }; \n} \n} \nClass event_base constructor and destructor \nevent_base( unspecified-parameter ); \nEffects : Constructs the common base portion of an event DefaultState A model of the SimpleState or State concepts. The type passed as \nContext argument to the simple_state<> or state<> base \nof DefaultState must itself pass has_deep_history or \nhas_full_history as historyMode argument to its simple_state<> or \nstate<> base The state that is \nentered if deep \nhistory is not \navailable Page 28 of 35 The Boost Statechart Library - Reference \n2008/01/06virtual ~event_base(); \nEffects : Destructs the common base portion of an event \nClass event_base observer functions \nintrusive_ptr< const event_base > intrusive_from_th is() const; \nReturns : Another intrusive_ptr< const event_base > referencing this if this is already \nreferenced by an intrusive_ptr<> . Otherwise, returns an intrusive_ptr< const event_base > \nreferencing a newly created copy of the most-derive d object \nid_type dynamic_type() const; \nReturns : A value unambiguously identifying the most-derive d type \nNote : id_type values are comparable with operator==() and operator!=() . An unspecified collating \norder can be established with std::less< id_type > . In contrast to typeid( cs ) , this function is \navailable even on platforms that do not support C++ RTTI (or have been configured to not support it) \ntemplate< typename CustomId > \nconst CustomId * custom_dynamic_type_ptr() const; \nRequires : If a custom type identifier has been set then CustomId must match the type of the previously set \npointer \nReturns : A pointer to the custom type identifier or 0 \nNote : This function is not available if BOOST_STATECHART_USE_NATIVE_RTTI is defined \nHeader <boost/statechart/event.hpp> \nClass template event \nThis is the base class template of all events. 
\nClass template event parameters \nClass template event synopsis \nnamespace boost \n{ \nnamespace statechart \n{ \n template< class MostDerived, class Allocator = st d::allocator< void > > \n class event : implementation-defined \n { \n public: \n static void * operator new ( std::size_t size ); Template \nparameter Requirements Semantics Default \nMostDerived The most-derived \nsubtype of this class \ntemplate \nAllocator A model of the \nstandard Allocator \nconcept Allocator::rebind< MostDerived >::other is \nused to allocate and deallocate all event subtype \nobjects of dynamic storage duration, see operator \nnew std::allocator< \nvoid > Page 29 of 35 The Boost Statechart Library - Reference \n2008/01/06 static void operator delete ( void * pEvent ); \n \n static id_type static_type (); \n \n template< class CustomId > \n static const CustomId * custom_static_type_ptr (); \n \n template< class CustomId > \n static void custom_static_type_ptr ( const CustomId * ); \n \n protected: \n event (); \n virtual ~event (); \n }; \n} \n} \nClass template event constructor and destructor \nevent(); \nEffects : Constructs an event \nvirtual ~event(); \nEffects : Destructs an event \nClass template event static functions \nstatic void * operator new( std::size_t size ); \nEffects : Allocator::rebind< MostDerived >::other().allocate( 1, static_cast< \nMostDerived * >( 0 ) ); \nReturns : The return value of the above call \nThrows : Whatever the above call throws \nstatic void operator delete( void * pEvent ); \nEffects : Allocator::rebind< MostDerived >::other().deallocat e( static_cast< \nMostDerived * >( pEvent ), 1 ); \nstatic id_type static_type(); \nReturns : A value unambiguously identifying the type of MostDerived \nNote : id_type values are comparable with operator==() and operator!=() . An unspecified collating \norder can be established with std::less< id_type > \ntemplate< class CustomId > \nstatic const CustomId * custom_static_type_ptr(); \nRequires : If a custom type identifier has been set then CustomId must match the type of the previously set \npointer \nReturns : The pointer to the custom type identifier for MostDerived or 0 \nNote : This function is not available if BOOST_STATECHART_USE_NATIVE_RTTI is defined \ntemplate< class CustomId > \nstatic void custom_static_type_ptr( const CustomId * ); \nEffects : Sets the pointer to the custom type identifier fo r MostDerived Page 30 of 35 The Boost Statechart Library - Reference \n2008/01/06Note : This function is not available if BOOST_STATECHART_USE_NATIVE_RTTI is defined \nHeader <boost/statechart/transition.hpp> \nClass template transition \nThis class template is used to specify a transition reaction. Instantiations of this template can appe ar in the \nreactions member typedef in models of the SimpleState and State concepts. \nClass template transition parameters \nClass template transition synopsis \nnamespace boost \n{ \nnamespace statechart \n{ \n template< \n class Event, \n class Destination, \n class TransitionContext = unspecified , \n void ( TransitionContext::*pTransitionAction )( \n const Event & ) = unspecified > \n class transition \n { \n // implementation-defined \n }; \n} \n} \nClass template transition semantics \nWhen executed, one of the following calls to a memb er function of the state for which the reaction was defined is Template \nparameter Requirements Semantics Default \nEvent A model of the Event concept or the class \nevent_base The event triggering the \ntransition. 
If \nevent_base is specified, \nthe transition is triggered \nby all models of the \nEvent concept \nDestination A model of the SimpleState or State concepts or \nan instantiation of the shallow_history or \ndeep_history class templates. The source state \n(the state for which this transition is defined) \nand Destination must have a common direct \nor indirect context The destination state to \nmake a transition to \nTransitionContext A common context of the source and \nDestination state The state of which the \ntransition action is a \nmember unspecified \npTransitionAction A pointer to a member function of \nTransitionContext . The member function \nmust accept a const Event & parameter and \nreturn void The transition action that \nis executed during the \ntransition. By default no \ntransition action is \nexecuted unspecified Page 31 of 35 The Boost Statechart Library - Reference \n2008/01/06made: \n/circle6transit< Destination >() , if no transition action was specified \n/circle6transit< Destination >( pTransitionAction, currentEvent ) , if a transition action \nwas specified \nHeader <boost/statechart/in_state_reaction.hpp> \nClass template in_state_reaction \nThis class template is used to specify an in-state reaction. Instantiations of this template can appea r in the \nreactions member typedef in models of the SimpleState and State concepts. \nClass template in_state_reaction parameters \nClass template in_state_reaction synopsis \nnamespace boost \n{ \nnamespace statechart \n{ \n template< \n class Event, \n class ReactionContext = unspecified , \n void ( ReactionContext::*pAction )( \n const Event & ) = unspecified > \n class in_state_reaction \n { \n // implementation-defined \n }; \n} \n} \nClass template in_state_reaction semantics \nWhen executed then the following happens: \n1. If an action was specified, pAction is called, passing the triggering event as the onl y argument \n2. A call is made to the discard_event member function of the state for which the reaction was defined Template \nparameter Requirements Semantics Default \nEvent A model of the Event concept or the \nclass event_base The event triggering the in-state \nreaction. If event_base is \nspecified, the in-state reaction is \ntriggered by all models of the \nEvent concept \nReactionContext Either the state defining the in-state \nreaction itself or one of it direct or \nindirect contexts The state of which the action is a \nmember unspecified \npAction A pointer to a member function of \nReactionContext . The member \nfunction must accept a const Event & \nparameter and return void The action that is executed during \nthe in-state reaction unspecified Page 32 of 35 The Boost Statechart Library - Reference \n2008/01/06Header <boost/statechart/ termination.hpp > \nClass template termination \nThis class template is used to specify a terminatio n reaction. Instantiations of this template can app ear in the \nreactions member typedef in models of the SimpleState and State concepts. \nClass template termination parameters \nClass template termination synopsis \nnamespace boost \n{ \nnamespace statechart \n{ \n template< class Event > \n class termination \n { \n // implementation-defined \n }; \n} \n} \nClass template termination semantics \nWhen executed, a call is made to the terminate member function of the state for which the reactio n was \ndefined. \nHeader <boost/statechart/deferral.hpp> \nClass template deferral \nThis class template is used to specify a deferral r eaction. 
Instantiations of this template can appear in the \nreactions member typedef in models of the SimpleState and State concepts. \nClass template deferral parameters \nClass template deferral synopsis \nnamespace boost \n{ \nnamespace statechart Template \nparameter Requirements Semantics \nEvent A model of the Event concept \nor the class event_base The event triggering the termination. If event_base is \nspecified, the termination is triggered by all mode ls of the \nEvent concept \nTemplate \nparameter Requirements Semantics \nEvent A model of the Event concept or \nthe class event_base The event triggering the deferral. If event_base is specified, \nthe deferral is triggered by all models of the Event concept Page 33 of 35 The Boost Statechart Library - Reference \n2008/01/06{ \n template< class Event > \n class deferral \n { \n // implementation-defined \n }; \n} \n} \nClass template deferral semantics \nWhen executed, a call is made to the defer_event member function of the state for which the reactio n was \ndefined. \nHeader <boost/statechart/custom_reaction.hpp> \nClass template custom_reaction \nThis class template is used to specify a custom rea ction. Instantiations of this template can appear i n the \nreactions member typedef in models of the SimpleState and State concepts. \nClass template custom_reaction parameters \nClass template custom_reaction synopsis \nnamespace boost \n{ \nnamespace statechart \n{ \n template< class Event > \n class custom_reaction \n { \n // implementation-defined \n }; \n} \n} \nClass template custom_reaction semantics \nWhen executed, a call is made to the user-supplied react member function of the state for which the reactio n was \ndefined. The react member function must have the following signature: \nresult react( const Event & ); \nand must call exactly one of the following reaction functions and return the obtained result object: \nresult discard_event (); \nresult forward_event (); \nresult defer_event (); \ntemplate< class DestinationState > Template \nparameter Requirements Semantics \nEvent A model of the Event concept \nor the class event_base The event triggering the custom reaction. If event_base is \nspecified, the custom reaction is triggered by all models of the \nEvent concept Page 34 of 35 The Boost Statechart Library - Reference \n2008/01/06result transit (); \ntemplate< \n class DestinationState, \n class TransitionContext, \n class Event > \nresult transit ( \n void ( TransitionContext::* )( const Event & ), \n const Event & ); \nresult terminate (); \nHeader <boost/statechart/result.hpp> \nClass result \nDefines the nature of the reaction taken in a user- supplied react member function (called when a \ncustom_reaction is executed). Objects of this type are always obta ined by calling one of the reaction \nfunctions and must be returned from the react member function immediately. \nnamespace boost \n{ \nnamespace statechart \n{ \n class result \n { \n public: \n result ( const result & other ); \n ~result (); \n \n private: \n // Result objects are not assignable \n result & operator=( const result & other ); \n }; \n} \n} \nClass result constructor and destructor \nresult( const result & other ); \nRequires : other is not consumed \nEffects : Copy-constructs a new result object and marks other as consumed. 
That is, result has destructive copy semantics

~result();

Requires: this is marked as consumed
Effects: Destructs the result object

Revised 06 January, 2008

Copyright © 2003-2008 Andreas Huber Dönni

Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)