Sets, Multisets, and Multimaps

We conclude this chapter by examining several additional abstractions that are closely related to the map ADT, and that can be implemented using data structures similar to those for a map.

- A set is an unordered collection of elements, without duplicates, that typically supports efficient membership tests. In essence, elements of a set are like keys of a map, but without any auxiliary values.
- A multiset (also known as a bag) is a set-like container that allows duplicates.
- A multimap is similar to a traditional map, in that it associates values with keys; however, in a multimap the same key can be mapped to multiple values. For example, the index of this book maps a given term to one or more locations at which the term occurs elsewhere in the book.

The Set ADT

Python provides support for representing the mathematical notion of a set through the built-in classes frozenset and set, with frozenset being an immutable form. Both of those classes are implemented using hash tables in Python. Python's collections module defines abstract base classes that essentially mirror these built-in classes. Although the choice of names is counterintuitive, the abstract base class collections.Set matches the concrete frozenset class, while the abstract base class collections.MutableSet is akin to the concrete set class.

In our own discussion, we equate the "set ADT" with the behavior of the built-in set class (and thus, the collections.MutableSet base class). We begin by listing what we consider to be the five most fundamental behaviors for a set S:

  S.add(e): Add element e to the set. This has no effect if the set already contains e.
  S.discard(e): Remove element e from the set, if present. This has no effect if the set does not contain e.
  e in S: Return True if the set contains element e. In Python, this is implemented with the special __contains__ method.
  len(S): Return the number of elements in set S. In Python, this is implemented with the special method __len__.
  iter(S): Generate an iteration of all elements of the set. In Python, this is implemented with the special method __iter__.
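As a quick illustration, the five fundamental behaviors above can be exercised directly with Python's built-in set class, which realizes the set ADT:

```python
# Demonstration of the five fundamental set behaviors, using
# Python's built-in set class as the concrete realization of the set ADT.
S = set()
S.add(5)
S.add(9)
S.add(5)                  # no effect; 5 is already in the set
assert len(S) == 2        # len reports the number of (distinct) elements
assert 5 in S             # membership test via __contains__
S.discard(9)              # removes 9
S.discard(7)              # no effect; 7 is not in the set
assert list(iter(S)) == [5]
```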
In the next section, we will see that the above five methods suffice for deriving all other behaviors of a set. Those remaining behaviors can be naturally grouped as follows. We begin by describing the following additional operations for removing one or more elements from a set S:

  S.remove(e): Remove element e from the set. If the set does not contain e, raise a KeyError.
  S.pop(): Remove and return an arbitrary element from the set. If the set is empty, raise a KeyError.
  S.clear(): Remove all elements from the set.

The next group of behaviors perform Boolean comparisons between two sets.

  S == T: Return True if sets S and T have identical contents.
  S != T: Return True if sets S and T are not equivalent.
  S <= T: Return True if set S is a subset of set T.
  S < T: Return True if set S is a proper subset of set T.
  S >= T: Return True if set S is a superset of set T.
  S > T: Return True if set S is a proper superset of set T.
  S.isdisjoint(T): Return True if sets S and T have no common elements.

Finally, there exists a variety of behaviors that either update an existing set, or compute a new set instance, based on classical set theory operations.

  S | T: Return a new set representing the union of sets S and T.
  S |= T: Update set S to be the union of S and set T.
  S & T: Return a new set representing the intersection of sets S and T.
  S &= T: Update set S to be the intersection of S and set T.
  S ^ T: Return a new set representing the symmetric difference of sets S and T, that is, a set of elements that are in precisely one of S or T.
  S ^= T: Update set S to become the symmetric difference of itself and set T.
  S - T: Return a new set containing elements in S but not T.
  S -= T: Update set S to remove all common elements with set T.
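The comparison and set-theoretic operators above can all be demonstrated with small built-in sets:

```python
# Boolean comparisons and classical set operations on built-in sets.
S = {1, 2, 3}
T = {3, 4}
assert S | T == {1, 2, 3, 4}      # union
assert S & T == {3}               # intersection
assert S ^ T == {1, 2, 4}         # symmetric difference
assert S - T == {1, 2}            # difference
assert {1, 2} < S and S <= S      # proper subset vs. subset
assert S.isdisjoint({7, 8})       # no common elements
S |= T                            # in-place union updates S itself
assert S == {1, 2, 3, 4}
```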
Maps, Hash Tables, and Skip Lists

Python's MutableSet Abstract Base Class

To aid in the creation of user-defined set classes, Python's collections module provides a MutableSet abstract base class (just as it provides the MutableMapping abstract base class discussed earlier). The MutableSet base class provides concrete implementations for all methods described above, except for five core behaviors (add, discard, and the special methods __contains__, __len__, and __iter__) that must be implemented by any concrete subclass. This design is an example of what is known as the template method pattern, as the concrete methods of the MutableSet class rely on the presumed abstract methods that will subsequently be provided by a subclass.

For the purpose of illustration, we examine algorithms for implementing several of the derived methods of the MutableSet base class. For example, to determine if one set is a proper subset of another, we must verify two conditions: a proper subset must have size strictly smaller than that of its superset, and each element of the subset must be contained in the superset. An implementation of the corresponding __lt__ method based on this logic is given in the following code fragment.

    def __lt__(self, other):                 # supports syntax S < T
        """Return True if this set is a proper subset of other."""
        if len(self) >= len(other):
            return False                     # proper subset must have strictly smaller size
        for e in self:
            if e not in other:
                return False                 # not a subset, since element missing from other
        return True                          # success; all conditions are met

Code Fragment: A possible implementation of the MutableSet.__lt__ method, which tests if one set is a proper subset of another.

As another example, we consider the computation of the union of two sets. The set ADT includes two forms for computing a union. The syntax S | T should produce a new set that has contents equal to the union of existing sets S and T. This operation is implemented through the special method __or__ in Python. Another syntax, S |= T, is used to update existing set S to become the union of itself and set T. Therefore, all elements of T that are not already contained in S should be added to S. We note that this "in-place" operation may be implemented more efficiently than if we were to rely on the first form, using the syntax S = S | T, in which identifier S is reassigned to a new set instance that represents the union. For convenience, Python's built-in set class supports named versions of these behaviors, with S.union(T) equivalent to S | T, and S.update(T) equivalent to S |= T (yet, those named versions are not formally provided by the MutableSet abstract base class).
    def __or__(self, other):                 # supports syntax S | T
        """Return a new set that is the union of two existing sets."""
        result = type(self)()                # create new instance of concrete class
        for e in self:
            result.add(e)
        for e in other:
            result.add(e)
        return result

Code Fragment: An implementation of the MutableSet.__or__ method, which computes the union of two existing sets.

An implementation of the behavior that computes a new set as the union of two others is given in the form of the __or__ special method, in the code fragment above. An important subtlety in this implementation is the instantiation of the resulting set. Since the MutableSet class is designed as an abstract base class, instances must belong to a concrete subclass. When computing the union of two such concrete instances, the result should presumably be an instance of the same class as the operands. The function type(self) returns a reference to the actual class of the instance identified as self, and the subsequent parentheses in the expression type(self)() call the default constructor for that class.

In terms of efficiency, we analyze such set operations while letting n denote the size of S and m denote the size of set T for an operation such as S | T. If the concrete sets are implemented with hashing, the expected running time of the implementation above is O(n + m), because it loops over both sets, performing constant-time operations in the form of a containment check and a possible insertion into the result.

Our implementation of the in-place version of the union is given in the next code fragment, in the form of the __ior__ special method that supports the syntax S |= T. Notice that in this case, we do not create a new set instance; instead we modify and return the existing set, after updating its contents to reflect the union operation. The in-place version of the union has expected running time O(m), where m is the size of the second set, because we only have to loop through that second set.

    def __ior__(self, other):                # supports syntax S |= T
        """Modify this set to be the union of itself and another set."""
        for e in other:
            self.add(e)
        return self                          # technical requirement of in-place operator

Code Fragment: An implementation of the MutableSet.__ior__ method, which performs an in-place union of one set with another.
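The template method pattern described above can be seen end to end with a small experiment. The sketch below defines a toy concrete subclass of Python's actual collections.abc.MutableSet that supplies only the five core behaviors (here backed, inefficiently, by an unsorted Python list; the class name ListSet is our own invention for illustration). All richer behaviors, such as the | and |= operators, are then inherited from the abstract base class:

```python
from collections.abc import MutableSet

class ListSet(MutableSet):
    """A toy set storing elements in an unsorted Python list.

    Only the five core behaviors are implemented; everything else
    (comparisons, |, &, ^, |=, ...) is inherited from MutableSet.
    """
    def __init__(self, items=()):
        self._data = []
        for e in items:
            self.add(e)
    def __contains__(self, e):
        return e in self._data
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)
    def add(self, e):
        if e not in self._data:              # enforce "no duplicates"
            self._data.append(e)
    def discard(self, e):
        if e in self._data:
            self._data.remove(e)

A = ListSet([1, 2, 3])
B = ListSet([3, 4])
U = A | B                     # __or__ inherited from the ABC mixin
assert isinstance(U, ListSet) # result is an instance of the concrete class
assert sorted(U) == [1, 2, 3, 4]
A |= B                        # in-place union via inherited __ior__
assert sorted(A) == [1, 2, 3, 4]
```

Note how the inherited __or__ produces a new ListSet: the abstract base class constructs the result from the operands' own concrete class, just as our type(self)() idiom does.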
Implementing Sets, Multisets, and Multimaps

Sets

Although sets and maps have very different public interfaces, they are really quite similar. A set is simply a map in which keys do not have associated values. Any data structure used to implement a map can be modified to implement the set ADT with similar performance guarantees. We could trivially adapt any map class by storing set elements as keys, and using None as an irrelevant value, but such an implementation is unnecessarily wasteful. An efficient set implementation should abandon the _Item composite that we use in our MapBase class and instead store set elements directly in a data structure.

Multisets

The same element may occur several times in a multiset. All of the data structures we have seen can be reimplemented to allow for duplicates to appear as separate elements. However, another way to implement a multiset is by using a map in which the map key is a (distinct) element of the multiset, and the associated value is a count of the number of occurrences of that element within the multiset. In fact, that is essentially what we did earlier when computing the frequency of words within a document.

Python's standard collections module includes a definition for a class named Counter that is in essence a multiset. Formally, the Counter class is a subclass of dict, with the expectation that values are integers, and with additional functionality like a most_common(n) method that returns a list of the n most common elements. The standard __iter__ reports each element only once (since those are formally the keys of the dictionary); there is another method, named elements(), that iterates through the multiset with each element being repeated according to its count.

Multimaps

Although there is no multimap in Python's standard libraries, a common implementation approach is to use a standard map in which the value associated with a key is itself a container class storing any number of associated values. We give an example of such a MultiMap class in the code fragment below. Our implementation uses the standard dict class as the map, and a list of values as a composite value in the dictionary. We have designed the class so that a different map implementation can easily be substituted by overriding the class-level _MapType attribute.
class MultiMap:
    """A multimap class built upon use of an underlying map for storage."""
    _MapType = dict                           # map type; can be redefined by subclass

    def __init__(self):
        """Create a new empty multimap instance."""
        self._map = self._MapType()           # create map instance for storage
        self._n = 0

    def __iter__(self):
        """Iterate through all (k,v) pairs in multimap."""
        for k, secondary in self._map.items():
            for v in secondary:
                yield (k, v)

    def add(self, k, v):
        """Add pair (k,v) to multimap."""
        container = self._map.setdefault(k, [])   # create empty list, if needed
        container.append(v)
        self._n += 1

    def pop(self, k):
        """Remove and return arbitrary (k,v) with key k (or raise KeyError)."""
        secondary = self._map[k]              # may raise KeyError
        v = secondary.pop()
        if len(secondary) == 0:
            del self._map[k]                  # no pairs left
        self._n -= 1
        return (k, v)

    def find(self, k):
        """Return arbitrary (k,v) pair with given key (or raise KeyError)."""
        secondary = self._map[k]              # may raise KeyError
        return (k, secondary[0])

    def find_all(self, k):
        """Generate iteration of all (k,v) pairs with given key."""
        secondary = self._map.get(k, [])      # empty list, by default
        for v in secondary:
            yield (k, v)

Code Fragment: An implementation of a MultiMap using a dict for storage. The __len__ method, which returns self._n, is omitted from this listing.
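To show the multimap design in action, here is a condensed, runnable rendering of the same dict-of-lists approach (repeated so that the example is self-contained, and including the __len__ method omitted from the listing), together with a brief usage example in the spirit of a book index mapping a term to multiple locations:

```python
class MultiMap:
    """Condensed multimap: each key maps to a list of associated values."""
    def __init__(self):
        self._map = {}                        # underlying dict for storage
        self._n = 0                           # total number of (k, v) pairs
    def __len__(self):
        return self._n
    def add(self, k, v):
        self._map.setdefault(k, []).append(v) # create empty list, if needed
        self._n += 1
    def pop(self, k):
        secondary = self._map[k]              # may raise KeyError
        v = secondary.pop()
        if len(secondary) == 0:
            del self._map[k]                  # no pairs left for this key
        self._n -= 1
        return (k, v)
    def find(self, k):
        return (k, self._map[k][0])           # may raise KeyError
    def find_all(self, k):
        for v in self._map.get(k, []):        # empty list, by default
            yield (k, v)

# Usage: an index-like multimap from terms to page numbers.
m = MultiMap()
m.add('hashing', 10)
m.add('hashing', 45)
m.add('skip list', 7)
assert len(m) == 3
assert m.find('hashing') == ('hashing', 10)
assert list(m.find_all('hashing')) == [('hashing', 10), ('hashing', 45)]
assert m.pop('skip list') == ('skip list', 7)
assert len(m) == 2
```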
Exercises

For help with exercises, please visit the site www.wiley.com/college/goodrich.

Reinforcement

- Give a concrete implementation of the pop method in the context of the MutableMapping class, relying only on the five primary abstract methods of that class.
- Give a concrete implementation of the items() method in the context of the MutableMapping class, relying only on the five primary abstract methods of that class. What would its running time be if directly applied to the UnsortedTableMap subclass?
- Give a concrete implementation of the items() method directly within the UnsortedTableMap class, ensuring that the entire iteration runs in O(n) time.
- What is the worst-case running time for inserting n key-value pairs into an initially empty map that is implemented with the UnsortedTableMap class?
- Reimplement the UnsortedTableMap class using the PositionalList class rather than a Python list.
- Which of the hash table collision-handling schemes could tolerate a load factor above 1 and which could not?
- Our Position classes for lists and trees support the __eq__ method so that two distinct position instances are considered equivalent if they refer to the same underlying node in a structure. For positions to be allowed as keys in a hash table, there must be a definition for the __hash__ method that is consistent with this notion of equivalence. Provide such a __hash__ method.
- What would be a good hash code for a vehicle identification number that is a string of numbers and letters of the form "…", where a "9" represents a digit and an "X" represents a letter?
- Draw the hash table that results from using the hash function h(k) = (…) mod … to hash the given keys, assuming collisions are handled by chaining.
- What is the result of the previous exercise, assuming collisions are handled by linear probing?
- Show the result of the exercise above, assuming collisions are handled by quadratic probing, up to the point where the method fails.
- What is the result of the exercise above when collisions are handled by double hashing using the secondary hash function h'(k) = … mod …?
- What is the worst-case time for putting n entries in an initially empty hash table, with collisions resolved by chaining? What is the best case?
- Show the result of rehashing the hash table shown in the figure above into a table of size …, using the new hash function h(k) = … mod ….
- Our HashMapBase class maintains a load factor of at most …. Reimplement that class to allow the user to specify the maximum load, and adjust the concrete subclasses accordingly.
- Give a pseudo-code description of an insertion into a hash table that uses quadratic probing to resolve collisions, assuming we also use the trick of replacing deleted entries with a special "deactivated entry" object.
- Modify our ProbeHashMap to use quadratic probing.
- Explain why a hash table is not suited to implement a sorted map.
- Describe how a sorted list implemented as a doubly linked list could be used to implement the sorted map ADT.
- What is the worst-case asymptotic running time for performing n deletions from a SortedTableMap instance that initially contains … entries?
- Consider the following variant of the _find_index method, in the context of the SortedTableMap class:

    def _find_index(self, k, low, high):
        if high < low:
            return high + 1
        else:
            mid = (low + high) // 2
            if self._table[mid]._key < k:
                return self._find_index(k, mid + 1, high)
            else:
                return self._find_index(k, low, mid - 1)

  Does this always produce the same result as the original version? Justify your answer.
- What is the expected running time of the methods for maintaining a maxima set if we insert n pairs such that each pair has lower cost and performance than the one before it? What is contained in the sorted map at the end of this series of operations? What if each pair had lower cost and higher performance than the one before it?
- Draw an example skip list that results from performing the following series of operations on the skip list shown in the figure above: del S[…], S[…] = …, S[…] = …, del S[…]. Record your coin flips, as well.
- Give a pseudo-code description of the __delitem__ map operation when using a skip list.
- Give a concrete implementation of the pop method, in the context of the MutableSet abstract base class, that relies only on the five core set behaviors described earlier.
- Give a concrete implementation of the isdisjoint method in the context of the MutableSet abstract base class, relying only on the five primary abstract methods of that class. Your algorithm should run in O(min(n, m)), where n and m denote the respective cardinalities of the two sets.
- What abstraction would you use to manage a database of friends' birthdays in order to support efficient queries such as "find all friends whose birthday is today" and "find the friend who will be the next to celebrate a birthday"?

Creativity

- Earlier we gave an implementation of the method setdefault as it might appear in the MutableMapping abstract base class. While that method accomplishes the goal in a general fashion, its efficiency is less than ideal. In particular, when the key is new, there will be a failed search due to the initial use of __getitem__, and then a subsequent insertion via __setitem__. For a concrete implementation, such as the UnsortedTableMap, this is twice the work because a complete scan of the table will take place during the failed __getitem__, and then another complete scan of the table takes place due to the implementation of __setitem__. A better solution is for the UnsortedTableMap class to override setdefault to provide a direct solution that performs a single search. Give such an implementation of UnsortedTableMap.setdefault.
- Repeat the previous exercise for the ProbeHashMap class.
- Repeat the previous exercise for the ChainHashMap class.
- For an ideal compression function, the capacity of the bucket array for a hash table should be a prime number. Therefore, we consider the problem of locating a prime number in a range [M, 2M]. Implement a method for finding such a prime by using the sieve algorithm. In this algorithm, we allocate a 2M-cell Boolean array A, such that cell i is associated with the integer i. We then initialize the array cells to all be "true" and we "mark off" all the cells that are multiples of 2, 3, 5, 7, and so on. This process can stop after it reaches a number larger than √(2M). (Hint: Consider a bootstrapping method for finding the primes up to √(2M).)
- Perform experiments on our ChainHashMap and ProbeHashMap classes to measure their efficiency using random key sets and varying limits on the load factor (see the exercise above).
- Our implementation of separate chaining in ChainHashMap conserves memory by representing empty buckets in the table as None, rather than as empty instances of a secondary structure. Because many of these buckets will hold a single item, a better optimization is to have those slots of the table directly reference the _Item instance, and to reserve use of secondary containers for buckets that have two or more items. Modify our implementation to provide this additional optimization.
- Computing a hash code can be expensive, especially for lengthy keys. In our hash table implementations, we compute the hash code when first inserting an item, and recompute each item's hash code each time we resize our table. Python's dict class makes an interesting trade-off. The hash code is computed once, when an item is inserted, and the hash code is stored as an extra field of the item composite, so that it need not be recomputed. Reimplement our HashTableBase class to use such an approach.
- Describe how to perform a removal from a hash table that uses linear probing to resolve collisions where we do not use a special marker to represent deleted elements. That is, we must rearrange the contents so that it appears that the removed entry was never inserted in the first place.
- The quadratic probing strategy has a clustering problem related to the way it looks for open slots. Namely, when a collision occurs at bucket h(k), it checks buckets A[(h(k) + i²) mod N], for i = 1, 2, …. Show that i² mod N will assume at most (N + 1)/2 distinct values, for N prime, as i ranges from 1 to N − 1. As part of this justification, note that i² mod N = (N − i)² mod N for all i. A better strategy is to choose a prime N such that N mod 4 = 3 and then to check the buckets A[(h(k) ± i²) mod N] as i ranges from 1 to (N − 1)/2, alternating between plus and minus. Show that this alternate version is guaranteed to check every bucket in A.
- Refactor our ProbeHashMap design so that the sequence of secondary probes for collision resolution can be more easily customized. Demonstrate your new framework by providing separate concrete subclasses for linear probing and quadratic probing.
- Design a variation of binary search for performing the multimap operation find_all(k), implemented with a sorted search table that includes duplicates, and show that it runs in time O(s + log n), where n is the number of elements in the dictionary and s is the number of items with given key k.
- Although keys in a map are distinct, the binary search algorithm can be applied in a more general setting in which an array stores possibly duplicative elements in nondecreasing order. Consider the goal of identifying the index of the leftmost element with key greater than or equal to given k. Does the _find_index method as originally given guarantee such a result? Does the _find_index method as given in the earlier exercise guarantee such a result? Justify your answers.
- Suppose we are given two sorted search tables S and T, each with n entries (with S and T being implemented with arrays). Describe an O(log² n)-time algorithm for finding the kth smallest key in the union of the keys from S and T (assuming no duplicates).
- Give an O(log n)-time solution for the previous problem.
- Suppose that each row of an n × n array A consists of 1's and 0's such that, in any row of A, all the 1's come before any 0's in that row. Assuming A is already in memory, describe a method running in O(n log n) time (not O(n²) time!) for counting the number of 1's in A.
- Given a collection C of n cost-performance pairs (c, p), describe an algorithm for finding the maxima pairs of C in O(n log n) time.
- Show that the methods above(p) and prev(p) are not actually needed to efficiently implement a map using a skip list. That is, we can implement insertions and deletions in a skip list using a strictly top-down, scan-forward approach, without ever using the above or prev methods. (Hint: In the insertion algorithm, first repeatedly flip the coin to determine the level where you should start inserting the new entry.)
- Describe how to modify a skip-list representation so that index-based operations, such as retrieving the item at index j, can be performed in O(log n) expected time.
- For sets S and T, the syntax S ^ T returns a new set that is the symmetric difference, that is, a set of elements that are in precisely one of S or T. This syntax is supported by the special __xor__ method. Provide an implementation of that method in the context of the MutableSet abstract base class, relying only on the five primary abstract methods of that class.
- In the context of the MutableSet abstract base class, describe a concrete implementation of the __and__ method, which supports the syntax S & T for computing the intersection of two existing sets.
- An inverted file is a critical data structure for implementing a search engine or the index of a book. Given a document D, which can be viewed as an unordered, numbered list of words, an inverted file is an ordered list of words, L, such that, for each word w in L, we store the indices of the places in D where w appears. Design an efficient algorithm for constructing L from D.
- Python's collections module provides an OrderedDict class that is unrelated to our sorted map abstraction. An OrderedDict is a subclass of the standard hash-based dict class that retains the expected O(1) performance for the primary map operations, but that also guarantees that the __iter__ method reports items of the map according to first-in, first-out (FIFO) order. That is, the key that has been in the dictionary the longest is reported first. (The order is unaffected when the value for an existing key is overwritten.) Describe an algorithmic approach for achieving such performance.

Projects

- Perform a comparative analysis that studies the collision rates for various hash codes for character strings, such as various polynomial hash codes for different values of the parameter a. Use a hash table to determine collisions, but only count collisions where different strings map to the same hash code (not if they map to the same location in this hash table). Test these hash codes on text files found on the Internet.
- Perform a comparative analysis as in the previous exercise, but for telephone numbers instead of character strings.
- Implement an OrderedDict class, as described in the exercise above, ensuring that the primary map operations run in O(1) expected time.
- Design a Python class that implements the skip-list data structure. Use this class to create a complete implementation of the sorted map ADT.
- Extend the previous project by providing a graphical animation of the skip-list operations. Visualize how entries move up the skip list during insertions and are linked out of the skip list during removals. Also, in a search operation, visualize the scan-forward and drop-down actions.
- Write a spell-checker class that stores a lexicon of words, W, in a Python set, and implements a method, check(s), which performs a spell check on the string s with respect to the set of words, W. If s is in W, then the call to check(s) returns a list containing only s, as it is assumed to be spelled correctly in this case. If s is not in W, then the call to check(s) returns a list of every word in W that might be a correct spelling of s. Your program should be able to handle all the common ways that s might be a misspelling of a word in W, including swapping adjacent characters in a word, inserting a single character in between two adjacent characters in a word, deleting a single character from a word, and replacing a character in a word with another character. For an extra challenge, consider phonetic substitutions as well.
Chapter Notes

Hashing is a well-studied technique. The reader interested in further study is encouraged to explore the book by Knuth, as well as the book by Vitter and Chen. Skip lists were introduced by Pugh. Our analysis of skip lists is a simplification of a presentation given by Motwani and Raghavan. For a more in-depth analysis of skip lists, please see the various research papers on skip lists that have appeared in the data structures literature. The exercise noted above was contributed by James Lee.
Search Trees

Contents

- Binary Search Trees
  - Navigating a Binary Search Tree
  - Searches
  - Insertions and Deletions
  - Python Implementation
  - Performance of a Binary Search Tree
- Balanced Search Trees
  - Python Framework for Balancing Search Trees
- AVL Trees
  - Update Operations
  - Python Implementation
- Splay Trees
  - Splaying
  - When to Splay
  - Python Implementation
  - Amortized Analysis of Splaying
- (2,4) Trees
  - Multiway Search Trees
  - (2,4)-Tree Operations
- Red-Black Trees
  - Red-Black Tree Operations
  - Python Implementation
- Exercises
Binary Search Trees

In an earlier chapter we introduced the tree data structure and demonstrated a variety of applications. One important use is as a search tree. In this chapter, we use a search tree structure to efficiently implement a sorted map. The three most fundamental methods of a map M are:

  M[k]: Return the value v associated with key k in map M, if one exists; otherwise raise a KeyError; implemented with the __getitem__ method.
  M[k] = v: Associate value v with key k in map M, replacing the existing value if the map already contains an item with key equal to k; implemented with the __setitem__ method.
  del M[k]: Remove from map M the item with key equal to k; if M has no such item, then raise a KeyError; implemented with the __delitem__ method.

The sorted map ADT includes additional functionality, guaranteeing that an iteration reports keys in sorted order, and supporting additional searches such as find_gt(k) and find_range(start, stop).

Binary trees are an excellent data structure for storing items of a map, assuming we have an order relation defined on the keys. In this context, a binary search tree is a binary tree T with each position p storing a key-value pair (k, v) such that:

- Keys stored in the left subtree of p are less than k.
- Keys stored in the right subtree of p are greater than k.

An example of such a binary search tree is given in the figure below. As a matter of convenience, we will not diagram the values associated with keys in this chapter, since those values do not affect the placement of items within a search tree.

Figure: A binary search tree with integer keys. We omit the display of associated values in this chapter, since they are not relevant to the order of items within a search tree.
Navigating a Binary Search Tree

We begin by demonstrating that a binary search tree hierarchically represents the sorted order of its keys. In particular, the structural property regarding the placement of keys within a binary search tree assures the following important consequence regarding an inorder traversal of the tree.

Proposition: An inorder traversal of a binary search tree visits positions in increasing order of their keys.

Justification: We prove this by induction on the size of a subtree. If a subtree has at most one item, its keys are trivially visited in order. More generally, an inorder traversal of a (sub)tree consists of a recursive traversal of the (possibly empty) left subtree, followed by a visit of the root, and then a recursive traversal of the (possibly empty) right subtree. By induction, a recursive inorder traversal of the left subtree will produce an iteration of the keys in that subtree in increasing order. Furthermore, by the binary search tree property, all keys in the left subtree are strictly smaller than that of the root. Therefore, visiting the root just after that subtree extends the increasing order of keys. Finally, by the search tree property, all keys in the right subtree are strictly greater than the root, and by induction, an inorder traversal of that subtree will visit those keys in increasing order.

Since an inorder traversal can be executed in linear time, a consequence of this proposition is that we can produce a sorted iteration of the keys of a map in linear time, when represented as a binary search tree.

Although an inorder traversal is typically expressed using top-down recursion, we can provide nonrecursive descriptions of operations that allow more fine-grained navigation among the positions of a binary search tree relative to the order of their keys. Our generic binary tree ADT is defined as a positional structure, allowing direct navigation using methods such as parent(p), left(p), and right(p). With a binary search tree, we can provide additional navigation based on the natural order of the keys stored in the tree. In particular, we can support the following methods, akin to those provided by a PositionalList:

  first(): Return the position containing the least key, or None if the tree is empty.
  last(): Return the position containing the greatest key, or None if the tree is empty.
  before(p): Return the position containing the greatest key that is less than that of position p (i.e., the position that would be visited immediately before p in an inorder traversal), or None if p is the first position.
  after(p): Return the position containing the least key that is greater than that of position p (i.e., the position that would be visited immediately after p in an inorder traversal), or None if p is the last position.
The "first" position of a binary search tree can be located by starting a walk at the root and continuing to the left child, as long as a left child exists. By symmetry, the last position is reached by repeated steps rightward starting at the root. The successor of a position, after(p), is determined by the following algorithm.

Algorithm after(p):
  if right(p) is not None then          {successor is leftmost position in p's right subtree}
    walk = right(p)
    while left(walk) is not None do
      walk = left(walk)
    return walk
  else                                  {successor is nearest ancestor having p in its left subtree}
    walk = p
    ancestor = parent(walk)
    while ancestor is not None and walk == right(ancestor) do
      walk = ancestor
      ancestor = parent(walk)
    return ancestor

Code Fragment: Computing the successor of a position in a binary search tree.

The rationale for this process is based purely on the workings of an inorder traversal. If p has a right subtree, that right subtree is recursively traversed immediately after p is visited, and so the first position to be visited after p is the leftmost position within the right subtree. If p does not have a right subtree, then the flow of control of an inorder traversal returns to p's parent. If p were in the right subtree of that parent, then the parent's subtree traversal is complete and the flow of control progresses to its parent, and so on. Once an ancestor is reached in which the recursion is returning from its left subtree, then that ancestor becomes the next position visited by the inorder traversal, and thus is the successor of p. Notice that the only case in which no such ancestor is found is when p was the rightmost (last) position of the full tree, in which case there is no successor. A symmetric algorithm can be defined to determine the predecessor of a position, before(p).

At this point, we note that the running time of a single call to after(p) or before(p) is bounded by the height h of the full tree, because it is found after either a single downward walk or a single upward walk. While the worst-case running time is O(h), we note that either of these methods runs in O(1) amortized time, in that a series of n calls to after(p), starting at the first position, will execute in a total of O(n) time. We leave a formal justification of this fact to an exercise, but intuitively the upward and downward paths mimic steps of the inorder traversal.
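The after(p) algorithm above can be sketched on a minimal linked-node tree. This is an illustrative sketch, not the book's TreeMap implementation; the Node class and function names are assumptions made for the example.

```python
# Illustrative sketch of the after(p) successor algorithm on a bare
# linked-node binary search tree (not the book's positional TreeMap).
class Node:
    def __init__(self, key, parent=None):
        self.key = key
        self.parent = parent
        self.left = None
        self.right = None

def successor(p):
    """Return the node visited immediately after p in an inorder traversal."""
    if p.right is not None:            # successor is leftmost in p's right subtree
        walk = p.right
        while walk.left is not None:
            walk = walk.left
        return walk
    walk = p                           # else: nearest ancestor having walk in its left subtree
    ancestor = walk.parent
    while ancestor is not None and walk is ancestor.right:
        walk = ancestor
        ancestor = ancestor.parent
    return ancestor
```

Starting from the leftmost node and repeatedly calling successor visits every key in sorted order, which is the amortized-O(1)-per-step iteration described above.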
Searches

The most important consequence of the structural property of a binary search tree is its namesake search algorithm. We can attempt to locate a particular key in a binary search tree by viewing it as a decision tree. In this case, the question asked at each position p is whether the desired key k is less than, equal to, or greater than the key stored at position p, which we denote as p.key(). If the answer is "less than," then the search continues in the left subtree. If the answer is "equal," then the search terminates successfully. If the answer is "greater than," then the search continues in the right subtree. Finally, if we reach an empty subtree, then the search terminates unsuccessfully. (The accompanying figure shows (a) a successful search for a key, and (b) an unsuccessful search that terminates because there is no subtree to the left of the sought key.)

We describe this approach in the following code fragment. If key k occurs in a subtree rooted at p, a call to TreeSearch(T, p, k) results in the position at which the key is found; in this case, the __getitem__ map operation would return the associated value at that position. In the event of an unsuccessful search, the TreeSearch algorithm returns the final position explored on the search path (which we will later make use of when determining where to insert a new item in a search tree).

Algorithm TreeSearch(T, p, k):
  if k == p.key() then
    return p                                  {successful search}
  else if k < p.key() and T.left(p) is not None then
    return TreeSearch(T, T.left(p), k)        {recur on left subtree}
  else if k > p.key() and T.right(p) is not None then
    return TreeSearch(T, T.right(p), k)       {recur on right subtree}
  return p                                    {unsuccessful search}

Code Fragment: Recursive search in a binary search tree.
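The TreeSearch pseudocode translates directly into Python on a bare linked-node tree; this sketch is for illustration only, with a stand-in Node class rather than the book's positional tree.

```python
# Illustrative Python rendering of the TreeSearch pseudocode on plain
# linked nodes (a stand-in for the book's positional tree).
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def tree_search(node, k):
    """Return the node holding key k if present; otherwise return the
    last node visited on the (unsuccessful) search path."""
    if k == node.key:
        return node                        # successful search
    elif k < node.key and node.left is not None:
        return tree_search(node.left, k)   # recur on left subtree
    elif k > node.key and node.right is not None:
        return tree_search(node.right, k)  # recur on right subtree
    return node                            # unsuccessful search
```

Note that an unsuccessful search does not return None; it returns the position where the search "fell off" the tree, which is exactly where an insertion would attach a new node.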
Analysis of Binary Tree Searching

The analysis of the worst-case running time of searching in a binary search tree T is simple. Algorithm TreeSearch is recursive and executes a constant number of primitive operations for each recursive call. Each recursive call of TreeSearch is made on a child of the previous position. That is, TreeSearch is called on the positions of a path of T that starts at the root and goes down one level at a time. Thus, the number of such positions is bounded by h + 1, where h is the height of T. In other words, since we spend O(1) time per position encountered in the search, the overall search runs in O(h) time, where h is the height of the binary search tree T. (The accompanying figure illustrates this running time using a standard caricature of a binary search tree as a big triangle and a path from the root as a zig-zag line: O(1) time is spent at each level of the height-h tree, for O(h) total time.)

In the context of the sorted map ADT, the search will be used as a subroutine for implementing the __getitem__ method, as well as for the __setitem__ and __delitem__ methods, since each of these begins by trying to locate an existing item with a given key. To implement sorted map operations such as find_lt and find_gt, we will combine this search with traversal methods before and after. All of these operations will run in worst-case O(h) time for a tree with height h. We can use a variation of this technique to implement the find_range method in time O(s + h), where s is the number of items reported.

Admittedly, the height h of T can be as large as the number of entries, n, but we expect that it is usually much smaller. Indeed, later in this chapter we show various strategies to maintain an upper bound of O(log n) on the height of a search tree T.
Insertions and Deletions

Algorithms for inserting or deleting entries of a binary search tree are fairly straightforward, although not trivial.

Insertion

The map command M[k] = v, as supported by the __setitem__ method, begins with a search for key k (assuming the map is nonempty). If found, that item's existing value is reassigned. Otherwise, a node for the new item can be inserted into the underlying tree in place of the empty subtree that was reached at the end of the failed search. The binary search tree property is sustained by that placement (note that it is placed exactly where a search would expect it). Pseudo-code for such a TreeInsert algorithm is given in the following code fragment.

Algorithm TreeInsert(T, k, v):
  Input: A search key k to be associated with value v
  p = TreeSearch(T, T.root(), k)
  if k == p.key() then
    set p's value to v
  else if k < p.key() then
    add node with item (k, v) as left child of p
  else
    add node with item (k, v) as right child of p

Code Fragment: Algorithm for inserting a key-value pair into a map that is represented as a binary search tree.

An example of insertion into a binary search tree is shown in the accompanying figure: finding the position to insert is shown in (a), and the resulting tree is shown in (b).
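The insertion strategy can be sketched on plain linked nodes: on a failed search, the new node is attached exactly where the search fell off the tree. The names here are illustrative, not the book's code.

```python
# Illustrative sketch of TreeInsert on plain linked nodes: a failed
# search ends at the node under which the new item must be attached.
class Node:
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.left = None
        self.right = None

def tree_insert(root, k, v):
    """Insert (k, v) into the subtree rooted at root; return the root."""
    if root is None:
        return Node(k, v)                  # tree was empty
    p = root
    while True:
        if k == p.key:
            p.value = v                    # reassign existing value
            return root
        elif k < p.key:
            if p.left is None:
                p.left = Node(k, v)        # attach at end of search path
                return root
            p = p.left
        else:
            if p.right is None:
                p.right = Node(k, v)
                return root
            p = p.right

def inorder_keys(p):
    """Yield keys in sorted order (witnessing the BST property)."""
    if p is not None:
        yield from inorder_keys(p.left)
        yield p.key
        yield from inorder_keys(p.right)
```

After any sequence of insertions, an inorder traversal reports the keys in sorted order, confirming that each placement preserved the binary search tree property.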
Deletion

Deleting an item from a binary search tree T is a bit more complex than inserting a new item because the location of the deletion might be anywhere in the tree. (In contrast, insertions are always enacted at the bottom of a path.) To delete an item with key k, we begin by calling TreeSearch(T, T.root(), k) to find the position p of T storing an item with key equal to k. If the search is successful, we distinguish between two cases (of increasing difficulty):

If p has at most one child, the deletion of the node at position p is easily implemented. When introducing update methods for the LinkedBinaryTree class earlier in the book, we declared a nonpublic utility _delete(p) that deletes a node at position p and replaces it with its child (if any), presuming that p has at most one child. That is precisely the desired behavior. It removes the item with key k from the map while maintaining all other ancestor-descendant relationships in the tree, thereby assuring the upkeep of the binary search tree property. (See the first accompanying figure.)

If position p has two children, we cannot simply remove the node from T since this would create a "hole" and two orphaned children. Instead, we proceed as follows (see the second accompanying figure):

  We locate position r containing the item having the greatest key that is strictly less than that of position p, that is, r = before(p) by the notation introduced earlier. Because p has two children, its predecessor is the rightmost position of the left subtree of p.

  We use r's item as a replacement for the one being deleted at position p. Because r has the immediately preceding key in the map, any items in p's right subtree will have keys greater than r, and any other items in p's left subtree will have keys less than r. Therefore, the binary search tree property is satisfied after the replacement.

  Having used r's item as a replacement for p, we instead delete the node at position r from the tree. Fortunately, since r was located as the rightmost position in a subtree, r does not have a right child. Therefore, its deletion can be performed using the first (and simpler) approach.

As with searching and insertion, this algorithm for deletion involves the traversal of a single path downward from the root, possibly moving an item between two positions of this path, and removing a node from that path and promoting its child. Therefore, it executes in time O(h), where h is the height of the tree.
Figure: Deletion from a binary search tree, where the item to delete is stored at a position p with one child: (a) before the deletion; (b) after the deletion.

Figure: Deletion from a binary search tree, where the item to delete is stored at a position p with two children, and replaced by its predecessor r: (a) before the deletion; (b) after the deletion.
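The two deletion cases just described can be sketched on plain linked nodes with parent pointers. This is an illustrative sketch, not the book's TreeMap code; for simplicity it transfers the predecessor's key into p rather than manipulating positions.

```python
# Illustrative sketch of BST deletion. A node with two children first
# receives the item of its inorder predecessor (the rightmost node of
# its left subtree); that predecessor has no right child, so it can
# then be spliced out using the simple one-child case.
class Node:
    def __init__(self, key, parent=None):
        self.key = key
        self.parent = parent
        self.left = None
        self.right = None

def delete(root, p):
    """Delete node p from the tree; return the (possibly new) root."""
    if p.left is not None and p.right is not None:
        r = p.left                         # locate inorder predecessor
        while r.right is not None:
            r = r.right
        p.key = r.key                      # transfer item, then delete r
        p = r                              # r has at most one child
    child = p.left if p.left is not None else p.right
    if child is not None:
        child.parent = p.parent            # promote child in place of p
    if p.parent is None:
        return child                       # deleted the root
    if p.parent.left is p:
        p.parent.left = child
    else:
        p.parent.right = child
    return root
```

In each case a single downward path is traversed and one node is removed, matching the O(h) bound stated above.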
Python Implementation

In the code fragments that follow, we define a TreeMap class that implements the sorted map ADT using a binary search tree. In fact, our implementation is more general. We support all of the standard map operations, all additional sorted map operations, and positional operations including first(), last(), find_position(k), before(p), after(p), and delete(p).

Our TreeMap class takes advantage of multiple inheritance for code reuse, inheriting from the LinkedBinaryTree class for our representation as a positional binary tree, and from the MapBase class to provide us with the key-value composite item and the concrete behaviors from the collections.MutableMapping abstract base class. We subclass the nested Position class to support more specific key() and value() accessors for our map, rather than the element() syntax inherited from the tree ADT.

We define several nonpublic utilities, most notably a _subtree_search(p, k) method that corresponds to the TreeSearch algorithm given earlier. That returns a position, ideally one that contains the key k, or otherwise the last position that is visited on the search path. We rely on the fact that the final position during an unsuccessful search is either the nearest key less than k or the nearest key greater than k. This search utility becomes the basis for the public find_position(k) method, and also for internal use when searching, inserting, or deleting items from a map, as well as for the robust searches of the sorted map ADT.

When making structural modifications to the tree, we rely on nonpublic update methods, such as _add_right, that are inherited from the LinkedBinaryTree class. It is important that these inherited methods remain nonpublic, as the search tree property could be violated through misuse of such operations.

Finally, we note that our code is peppered with calls to presumed methods named _rebalance_insert, _rebalance_delete, and _rebalance_access. These methods serve as hooks for future use when balancing search trees; we discuss them later in this chapter. We conclude with a brief guide to the organization of our code:

  Code Fragment: Beginning of TreeMap class, including redefined Position class and nonpublic search utilities.
  Code Fragment: Positional methods first(), last(), before(p), after(p), and the find_position(k) accessor.
  Code Fragment: Selected methods of the sorted map ADT: find_min(), find_ge(k), and find_range(start, stop); related methods are omitted for the sake of brevity.
  Code Fragment: __getitem__(k), __setitem__(k, v), and __iter__().
  Code Fragment: Deletion either by position, as delete(p), or by key, as __delitem__(k).
class TreeMap(LinkedBinaryTree, MapBase):
  """Sorted map implementation using a binary search tree."""

  #--------------------- override Position class ---------------------
  class Position(LinkedBinaryTree.Position):
    def key(self):
      """Return key of map's key-value pair."""
      return self.element()._key

    def value(self):
      """Return value of map's key-value pair."""
      return self.element()._value

  #------------------------- nonpublic utilities -------------------------
  def _subtree_search(self, p, k):
    """Return Position of p's subtree having key k, or last node searched."""
    if k == p.key():                                   # found match
      return p
    elif k < p.key():                                  # search left subtree
      if self.left(p) is not None:
        return self._subtree_search(self.left(p), k)
    else:                                              # search right subtree
      if self.right(p) is not None:
        return self._subtree_search(self.right(p), k)
    return p                                           # unsuccessful search

  def _subtree_first_position(self, p):
    """Return Position of first item in subtree rooted at p."""
    walk = p
    while self.left(walk) is not None:                 # keep walking left
      walk = self.left(walk)
    return walk

  def _subtree_last_position(self, p):
    """Return Position of last item in subtree rooted at p."""
    walk = p
    while self.right(walk) is not None:                # keep walking right
      walk = self.right(walk)
    return walk

Code Fragment: Beginning of a TreeMap class based on a binary search tree.
  def first(self):
    """Return the first Position in the tree (or None if empty)."""
    return self._subtree_first_position(self.root()) if len(self) > 0 else None

  def last(self):
    """Return the last Position in the tree (or None if empty)."""
    return self._subtree_last_position(self.root()) if len(self) > 0 else None

  def before(self, p):
    """Return the Position just before p in the natural order.

    Return None if p is the first position.
    """
    self._validate(p)                      # inherited from LinkedBinaryTree
    if self.left(p):
      return self._subtree_last_position(self.left(p))
    else:
      # walk upward
      walk = p
      above = self.parent(walk)
      while above is not None and walk == self.left(above):
        walk = above
        above = self.parent(walk)
      return above

  def after(self, p):
    """Return the Position just after p in the natural order.

    Return None if p is the last position.
    """
    # symmetric to before(p)
    self._validate(p)
    if self.right(p):
      return self._subtree_first_position(self.right(p))
    else:
      walk = p
      above = self.parent(walk)
      while above is not None and walk == self.right(above):
        walk = above
        above = self.parent(walk)
      return above

  def find_position(self, k):
    """Return position with key k, or else neighbor (or None if empty)."""
    if self.is_empty():
      return None
    else:
      p = self._subtree_search(self.root(), k)
      self._rebalance_access(p)            # hook for balanced tree subclasses
      return p

Code Fragment: Navigational methods of the TreeMap class.
  def find_min(self):
    """Return (key,value) pair with minimum key (or None if empty)."""
    if self.is_empty():
      return None
    else:
      p = self.first()
      return (p.key(), p.value())

  def find_ge(self, k):
    """Return (key,value) pair with least key greater than or equal to k.

    Return None if there does not exist such a key.
    """
    if self.is_empty():
      return None
    else:
      p = self.find_position(k)            # may not find exact match
      if p.key() < k:                      # p's key is too small
        p = self.after(p)
      return (p.key(), p.value()) if p is not None else None

  def find_range(self, start, stop):
    """Iterate all (key,value) pairs such that start <= key < stop.

    If start is None, iteration begins with minimum key of map.
    If stop is None, iteration continues through the maximum key of map.
    """
    if not self.is_empty():
      if start is None:
        p = self.first()
      else:
        # we initialize p with logic similar to find_ge
        p = self.find_position(start)
        if p.key() < start:
          p = self.after(p)
      while p is not None and (stop is None or p.key() < stop):
        yield (p.key(), p.value())
        p = self.after(p)

Code Fragment: Some of the sorted map operations for the TreeMap class.
  def __getitem__(self, k):
    """Return value associated with key k (raise KeyError if not found)."""
    if self.is_empty():
      raise KeyError('Key Error: ' + repr(k))
    else:
      p = self._subtree_search(self.root(), k)
      self._rebalance_access(p)            # hook for balanced tree subclasses
      if k != p.key():
        raise KeyError('Key Error: ' + repr(k))
      return p.value()

  def __setitem__(self, k, v):
    """Assign value v to key k, overwriting existing value if present."""
    if self.is_empty():
      leaf = self._add_root(self._Item(k, v))   # from LinkedBinaryTree
    else:
      p = self._subtree_search(self.root(), k)
      if p.key() == k:
        p.element()._value = v                  # replace existing item's value
        self._rebalance_access(p)               # hook for balanced tree subclasses
        return
      else:
        item = self._Item(k, v)
        if p.key() < k:
          leaf = self._add_right(p, item)       # inherited from LinkedBinaryTree
        else:
          leaf = self._add_left(p, item)        # inherited from LinkedBinaryTree
    self._rebalance_insert(leaf)                # hook for balanced tree subclasses

  def __iter__(self):
    """Generate an iteration of all keys in the map in order."""
    p = self.first()
    while p is not None:
      yield p.key()
      p = self.after(p)

Code Fragment: Map operations for accessing and inserting items in the TreeMap class. Reverse iteration can be implemented with __reversed__, using a symmetric approach to __iter__.
  def delete(self, p):
    """Remove the item at given Position."""
    self._validate(p)                           # inherited from LinkedBinaryTree
    if self.left(p) and self.right(p):          # p has two children
      replacement = self._subtree_last_position(self.left(p))
      self._replace(p, replacement.element())   # from LinkedBinaryTree
      p = replacement
    # now p has at most one child
    parent = self.parent(p)
    self._delete(p)                             # inherited from LinkedBinaryTree
    self._rebalance_delete(parent)              # if root deleted, parent is None

  def __delitem__(self, k):
    """Remove item associated with key k (raise KeyError if not found)."""
    if not self.is_empty():
      p = self._subtree_search(self.root(), k)
      if k == p.key():
        self.delete(p)                          # rely on positional version
        return                                  # successful deletion complete
      self._rebalance_access(p)                 # hook for balanced tree subclasses
    raise KeyError('Key Error: ' + repr(k))

Code Fragment: Support for deleting an item from a TreeMap, located either by position or by key.

Performance of a Binary Search Tree

An analysis of the operations of our TreeMap class is given in the table that follows. Almost all operations have a worst-case running time that depends on h, where h is the height of the current tree. This is because most operations rely on a constant amount of work for each node along a particular path of the tree, and the maximum path length within a tree is proportional to the height of the tree. Most notably, our implementations of map operations __getitem__, __setitem__, and __delitem__ each begin with a call to the _subtree_search utility, which traces a path downward from the root of the tree, using O(1) time at each node to determine how to continue the search. Similar paths are traced when looking for a replacement during a deletion, or when computing a position's inorder predecessor or successor. We note that although a single call to the after method has worst-case running time of O(h), the successive calls made during a call to __iter__ require a total of O(n) time, since each edge is traced at most twice; in a sense, those calls have O(1) amortized time bounds. A similar argument can be used to prove the O(s + h) worst-case bound for a call to find_range that reports s results.
Operation                                                   Running Time
k in T                                                      O(h)
T[k], T[k] = v                                              O(h)
T.delete(p), del T[k]                                       O(h)
T.find_position(k)                                          O(h)
T.first(), T.last(), T.find_min(), T.find_max()             O(h)
T.before(p), T.after(p)                                     O(h)
T.find_lt(k), T.find_le(k), T.find_gt(k), T.find_ge(k)      O(h)
T.find_range(start, stop)                                   O(s + h)
iter(T), reversed(T)                                        O(n)

Table: Worst-case running times of the operations for a TreeMap T. We denote the current height of the tree with h, and the number of items reported by find_range as s. The space usage is O(n), where n is the number of items stored in the map.

A binary search tree T is therefore an efficient implementation of a map with n entries only if its height is small. In the best case, T has height ceil(log(n+1)) - 1, which yields logarithmic-time performance for all the map operations. In the worst case, however, T has height n, in which case it would look and feel like an ordered list implementation of a map. Such a worst-case configuration arises, for example, if we insert items with keys in increasing or decreasing order. (The accompanying figure shows an example of a binary search tree with linear height, obtained by inserting entries with keys in increasing order.)

We can nevertheless take comfort that, on average, a binary search tree with n keys generated from a random series of insertions and removals of keys has expected height O(log n); the justification of this statement is beyond the scope of the book, requiring careful mathematical language to precisely define what we mean by a random series of insertions and removals, and sophisticated probability theory. In applications where one cannot guarantee the random nature of updates, it is better to rely on variations of search trees, presented in the remainder of this chapter, that guarantee a worst-case height of O(log n), and thus O(log n) worst-case time for searches, insertions, and deletions.
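The gap between the best and worst cases can be demonstrated with a small experiment (illustrative, not from the book): inserting the same keys in sorted order produces a degenerate linear-height tree, while inserting them in a median-first order produces a tree of logarithmic height.

```python
# Illustrative experiment: insertion order drives the height of an
# unbalanced binary search tree.
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, k):
    """Standard (unbalanced) BST insertion; returns the root."""
    if root is None:
        return Node(k)
    p = root
    while True:
        if k < p.key:
            if p.left is None:
                p.left = Node(k)
                return root
            p = p.left
        elif k > p.key:
            if p.right is None:
                p.right = Node(k)
                return root
            p = p.right
        else:
            return root                    # key already present

def height(p):
    """Number of nodes on the longest root-to-leaf path."""
    if p is None:
        return 0
    return 1 + max(height(p.left), height(p.right))

def median_first(keys):
    """Yield sorted keys in an order that produces a balanced tree."""
    if keys:
        mid = len(keys) // 2
        yield keys[mid]
        yield from median_first(keys[:mid])
        yield from median_first(keys[mid + 1:])
```

With 63 keys, sorted insertion yields height 63 (a chain), while median-first insertion yields the perfect height of 6, matching the ceil(log(n+1)) best case cited above.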
Balanced Search Trees

In the closing of the previous section, we noted that if we could assume a random series of insertions and removals, the standard binary search tree supports O(log n) expected running times for the basic map operations. However, we may only claim O(n) worst-case time, because some sequences of operations may lead to an unbalanced tree with height proportional to n. In the remainder of this chapter, we explore four search tree algorithms that provide stronger performance guarantees. Three of the four data structures (AVL trees, splay trees, and red-black trees) are based on augmenting a standard binary search tree with occasional operations to reshape the tree and reduce its height.

The primary operation to rebalance a binary search tree is known as a rotation. During a rotation, we "rotate" a child to be above its parent, as diagrammed in the accompanying figure: a rotation can be performed to transform the left formation into the right, or the right formation into the left. Note that all keys in subtree T1 are less than that of position x, all keys in subtree T2 are between those of positions x and y, and all keys in subtree T3 are greater than that of position y.

To maintain the binary search tree property through a rotation, we note that if position x was a left child of position y prior to the rotation (and therefore the key of x is less than the key of y), then y becomes the right child of x after the rotation, and vice versa. Furthermore, we must relink the subtree of items with keys that lie between the keys of the two positions that are being rotated. For example, in the figure the subtree labeled T2 represents items with keys that are known to be greater than that of position x and less than that of position y. In the first configuration of that figure, T2 is the right subtree of position x; in the second configuration, it is the left subtree of position y.

Because a single rotation modifies a constant number of parent-child relationships, it can be implemented in O(1) time with a linked binary tree representation.
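A single rotation can be sketched on plain linked nodes with parent pointers; this is an illustrative version, not the book's positional-tree implementation. It shows how the "middle" subtree T2 is transferred between the rotated nodes while the inorder sequence is preserved.

```python
# Illustrative sketch of a single rotation on linked nodes with parent
# pointers. Rotating x above its parent y transfers the middle subtree
# (T2) and preserves the inorder sequence of keys.
class Node:
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.left = None
        self.right = None

def relink(parent, child, make_left_child):
    """Link child (possibly None) under parent on the requested side."""
    if make_left_child:
        parent.left = child
    else:
        parent.right = child
    if child is not None:
        child.parent = parent

def rotate(root, x):
    """Rotate node x above its parent; return the (possibly new) root."""
    y = x.parent                           # we assume this exists
    z = y.parent                           # grandparent (possibly None)
    if z is None:
        root = x                           # x becomes the new root
        x.parent = None
    else:
        relink(z, x, y is z.left)          # x becomes a direct child of z
    if x is y.left:                        # transfer of middle subtree
        relink(y, x.right, True)           # x.right becomes left child of y
        relink(x, y, False)                # y becomes right child of x
    else:
        relink(y, x.left, False)           # x.left becomes right child of y
        relink(x, y, True)                 # y becomes left child of x
    return root

def inorder(p):
    if p is not None:
        yield from inorder(p.left)
        yield p.key
        yield from inorder(p.right)
```

Only a constant number of links are rewritten, which is why a rotation runs in O(1) time.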
In the context of a tree-balancing algorithm, a rotation allows the shape of a tree to be modified while maintaining the search tree property. If used wisely, this operation can be performed to avoid highly unbalanced tree configurations. For example, a rightward rotation from the first formation of the figure to the second reduces the depth of each node in subtree T1 by one, while increasing the depth of each node in subtree T3 by one. (Note that the depth of nodes in subtree T2 is unaffected by the rotation.)

One or more rotations can be combined to provide broader rebalancing within a tree. One such compound operation we consider is a trinode restructuring. For this manipulation, we consider a position x, its parent y, and its grandparent z. The goal is to restructure the subtree rooted at z in order to reduce the overall path length to x and its subtrees. Pseudo-code for a restructure(x) method is given in the code fragment below and illustrated in the accompanying figure. In describing a trinode restructuring, we temporarily rename the positions x, y, and z as a, b, and c, so that a precedes b and b precedes c in an inorder traversal of T. There are four possible orientations mapping x, y, and z to a, b, and c, which are unified into one case by our relabeling. The trinode restructuring replaces z with the node identified as b, makes the children of this node be a and c, and makes the children of a and c be the four previous children of x, y, and z (other than x and y), while maintaining the inorder relationships of all the nodes in T.

Algorithm restructure(x):
  Input: A position x of a binary search tree T that has both a parent y and a grandparent z
  Output: Tree T after a trinode restructuring (which corresponds to a single or double rotation) involving positions x, y, and z
  1: Let (a, b, c) be a left-to-right (inorder) listing of the positions x, y, and z, and let (T1, T2, T3, T4) be a left-to-right (inorder) listing of the four subtrees of x, y, and z not rooted at x, y, or z.
  2: Replace the subtree rooted at z with a new subtree rooted at b.
  3: Let a be the left child of b and let T1 and T2 be the left and right subtrees of a, respectively.
  4: Let c be the right child of b and let T3 and T4 be the left and right subtrees of c, respectively.

Code Fragment: The trinode restructuring operation in a binary search tree.

In practice, the modification of a tree T caused by a trinode restructuring operation can be implemented through case analysis either as a single rotation (parts (a) and (b) of the figure) or as a double rotation (parts (c) and (d)). The double rotation arises when position x has the middle of the three relevant keys and is first rotated above its parent, and then above what was originally its grandparent. In any of the cases, the trinode restructuring is completed with O(1) running time.
Figure: Schematic illustration of a trinode restructuring operation: (a) and (b) require a single rotation; (c) and (d) require a double rotation.
Python Framework for Balancing Search Trees

Our TreeMap class, introduced earlier, is a concrete map implementation that does not perform any explicit balancing operations. However, we designed that class to also serve as a base class for other subclasses that implement more advanced tree-balancing algorithms. In our inheritance hierarchy, LinkedBinaryTree and MapBase together serve as the base classes of TreeMap, which in turn is the parent of AVLTreeMap, SplayTreeMap, and RedBlackTreeMap (each defined later in this chapter).

Hooks for Rebalancing Operations

Our implementation of the basic map operations includes strategic calls to three nonpublic methods that serve as hooks for rebalancing algorithms:

  A call to _rebalance_insert(p) is made from within the __setitem__ method immediately after a new node is added to the tree at position p.

  A call to _rebalance_delete(p) is made each time a node has been deleted from the tree, with position p identifying the parent of the node that has just been removed. Formally, this hook is called from within the public delete(p) method, which is indirectly invoked by the public __delitem__(k) behavior.

  We also provide a hook, _rebalance_access(p), that is called when an item at position p of a tree is accessed through a public method such as __getitem__. This hook is used by the splay tree structure (discussed later) to restructure a tree so that more frequently accessed items are brought closer to the root.

We provide trivial declarations of these three methods in the code fragment below, having bodies that do nothing (using the pass statement). A subclass of TreeMap may override any of these methods to implement a nontrivial action to rebalance a tree. This is another example of the template method design pattern.
  def _rebalance_insert(self, p):
    pass

  def _rebalance_delete(self, p):
    pass

  def _rebalance_access(self, p):
    pass

Code Fragment: Additional code for the TreeMap class, providing stubs for the rebalancing hooks.

Nonpublic Methods for Rotating and Restructuring

A second form of support for balanced search trees is our inclusion of nonpublic utility methods _rotate and _restructure that, respectively, implement a single rotation and a trinode restructuring (described at the beginning of this section). Although these methods are not invoked by the public TreeMap operations, we promote code reuse by providing these implementations in this class so that they are inherited by all balanced-tree subclasses.

Our implementations are provided in the code fragment that follows. To simplify the code, we define an additional _relink utility that properly links parent and child nodes to each other, including the special case in which a "child" is a None reference. The focus of the _rotate method then becomes redefining the relationship between the parent and child, relinking a rotated node directly to its original grandparent, and shifting the "middle" subtree (that labeled as T2 in the earlier figure) between the rotated nodes. For the trinode restructuring, we determine whether to perform a single or double rotation, as originally described in the figure.

Factory for Creating Tree Nodes

We draw attention to an important subtlety in the design of both our TreeMap class and the original LinkedBinaryTree subclass. The low-level definition of a node is provided by the nested _Node class within LinkedBinaryTree. Yet, several of our tree-balancing strategies require that auxiliary information be stored at each node to guide the balancing process. Those classes will override the nested _Node class to provide storage for an additional field.

Whenever we add a new node to the tree, as within the _add_right method of the LinkedBinaryTree, we intentionally instantiate the node using the syntax self._Node, rather than the qualified name LinkedBinaryTree._Node. This is vital to our framework. When the expression self._Node is applied to an instance of a tree (sub)class, Python's name resolution follows the inheritance structure. If a subclass has overridden the definition for the _Node class, instantiation of self._Node relies on the newly defined node class. This technique is an example of the factory method design pattern, as we provide a subclass the means to control the type of node that is created within methods of the parent class.
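The effect of the self._Node factory idiom can be seen in a toy example; the class names here are illustrative, not the book's code.

```python
# Illustrative demonstration of the factory method pattern via
# self._Node: a subclass that overrides the nested _Node class
# automatically has its node type instantiated by parent-class methods.
class BaseTree:
    class _Node:
        __slots__ = '_element'
        def __init__(self, element):
            self._element = element

    def _make_node(self, element):
        # self._Node resolves through the instance's class hierarchy,
        # so an overridden _Node in a subclass is used automatically.
        return self._Node(element)

class BalancedTree(BaseTree):
    class _Node(BaseTree._Node):
        __slots__ = '_height'              # extra field for balancing
        def __init__(self, element):
            super().__init__(element)
            self._height = 0
```

Even though _make_node is defined only in BaseTree, calling it on a BalancedTree instance produces a BalancedTree._Node carrying the extra _height field.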
  def _relink(self, parent, child, make_left_child):
    """Relink parent node with child node (we allow child to be None)."""
    if make_left_child:                    # make it a left child
      parent._left = child
    else:                                  # make it a right child
      parent._right = child
    if child is not None:                  # make child point to parent
      child._parent = parent

  def _rotate(self, p):
    """Rotate Position p above its parent."""
    x = p._node
    y = x._parent                          # we assume this exists
    z = y._parent                          # grandparent (possibly None)
    if z is None:
      self._root = x                       # x becomes root
      x._parent = None
    else:
      self._relink(z, x, y == z._left)     # x becomes a direct child of z
    # now rotate x and y, including transfer of middle subtree
    if x == y._left:
      self._relink(y, x._right, True)      # x._right becomes left child of y
      self._relink(x, y, False)            # y becomes right child of x
    else:
      self._relink(y, x._left, False)      # x._left becomes right child of y
      self._relink(x, y, True)             # y becomes left child of x

  def _restructure(self, x):
    """Perform trinode restructure of Position x with parent/grandparent."""
    y = self.parent(x)
    z = self.parent(y)
    if (x == self.right(y)) == (y == self.right(z)):   # matching alignments
      self._rotate(y)                      # single rotation (of y)
      return y                             # y is new subtree root
    else:                                  # opposite alignments
      self._rotate(x)                      # double rotation (of x)
      self._rotate(x)
      return x                             # x is new subtree root

Code Fragment: Additional code for the TreeMap class, providing nonpublic utilities for balanced search tree subclasses.
AVL Trees

The TreeMap class, which uses a standard binary search tree as its data structure, should be an efficient map data structure, but its worst-case performance for the various operations is linear time, because it is possible that a series of operations results in a tree with linear height. In this section, we describe a simple balancing strategy that guarantees worst-case logarithmic running time for all the fundamental map operations.

Definition of an AVL Tree

The simple correction is to add a rule to the binary search tree definition that will maintain a logarithmic height for the tree. Although we originally defined the height of a subtree rooted at position p of a tree to be the number of edges on the longest path from p to a leaf, it is easier for explanation in this section to consider the height to be the number of nodes on such a longest path. By this definition, a leaf position has height 1, while we trivially define the height of a "null" child to be 0.

In this section, we consider the following height-balance property, which characterizes the structure of a binary search tree T in terms of the heights of its nodes.

Height-Balance Property: For every position p of T, the heights of the children of p differ by at most 1.

Any binary search tree T that satisfies the height-balance property is said to be an AVL tree, named after the initials of its inventors: Adel'son-Vel'skii and Landis. An example of an AVL tree is shown in the accompanying figure, where the keys of the items are shown inside the nodes, and the heights of the nodes are shown above the nodes (with empty subtrees having height 0).
An immediate consequence of the height-balance property is that a subtree of an AVL tree is itself an AVL tree. The height-balance property has also the important consequence of keeping the height small, as shown in the following proposition.

Proposition: The height of an AVL tree storing n entries is O(log n).

Justification: Instead of trying to find an upper bound on the height of an AVL tree directly, it turns out to be easier to work on the "inverse problem" of finding a lower bound on the minimum number of nodes n(h) of an AVL tree with height h. We will show that n(h) grows at least exponentially. From this, it will be an easy step to derive that the height of an AVL tree storing n entries is O(log n).

We begin by noting that n(1) = 1 and n(2) = 2, because an AVL tree of height 1 must have exactly one node and an AVL tree of height 2 must have at least two nodes. Now, an AVL tree with the minimum number of nodes having height h, for h >= 3, is such that both its subtrees are AVL trees with the minimum number of nodes: one with height h - 1 and the other with height h - 2. Taking the root into account, we obtain the following formula that relates n(h) to n(h - 1) and n(h - 2), for h >= 3:

    n(h) = 1 + n(h - 1) + n(h - 2).                              (Formula 1)

At this point, the reader familiar with the properties of Fibonacci progressions will already see that n(h) is a function exponential in h. To formalize that observation, we proceed as follows.

Formula 1 implies that n(h) is a strictly increasing function of h. Thus, we know that n(h - 1) > n(h - 2). Replacing n(h - 1) with n(h - 2) in Formula 1 and dropping the 1, we get, for h >= 3,

    n(h) > 2 * n(h - 2).                                         (Formula 2)

Formula 2 indicates that n(h) at least doubles each time h increases by 2, which intuitively means that n(h) grows exponentially. To show this fact in a formal way, we apply Formula 2 repeatedly, yielding the following series of inequalities:

    n(h) > 2 * n(h - 2)
         > 4 * n(h - 4)
         > 8 * n(h - 6)
         ...
         > 2^i * n(h - 2i).

That is, n(h) > 2^i * n(h - 2i), for any integer i such that h - 2i >= 1. Since we already know the values of n(1) and n(2), we pick i so that h - 2i is equal to either 1 or 2. That is, we pick i = ceil(h/2) - 1.

By substituting the above value of i in the formula above, we obtain, for h >= 3,

    n(h) > 2^(ceil(h/2) - 1) * n(h - 2*ceil(h/2) + 2)
         >= 2^(ceil(h/2) - 1) * n(1)
         >= 2^(h/2 - 1).                                         (Formula 3)

By taking logarithms of both sides of Formula 3, we obtain log(n(h)) > h/2 - 1, from which we get

    h < 2 log(n(h)) + 2,                                         (Formula 4)

which implies that an AVL tree storing n entries has height at most 2 log n + 2.

By this proposition and the analysis of binary search trees given earlier, the operation __getitem__, in a map implemented with an AVL tree, runs in time O(log n), where n is the number of items in the map. Of course, we still have to show how to maintain the height-balance property after an insertion or deletion.
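The recurrence in the justification can be checked numerically. This small script (illustrative, not from the book) computes n(h) and verifies the exponential lower bound n(h) > 2^(h/2 - 1).

```python
from functools import lru_cache

# Numerical check of the minimum node count n(h) of an AVL tree of
# height h (height measured in nodes, as in the text), and of the
# derived lower bound n(h) > 2**(h/2 - 1).
@lru_cache(maxsize=None)
def min_avl_nodes(h):
    """Minimum number of nodes in an AVL tree of height h (h >= 1)."""
    if h == 1:
        return 1
    if h == 2:
        return 2
    # root plus minimal subtrees of heights h-1 and h-2
    return 1 + min_avl_nodes(h - 1) + min_avl_nodes(h - 2)
```

The values 1, 2, 4, 7, 12, ... follow a Fibonacci-like progression, consistent with the remark that n(h) is exponential in h.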
Figure: An example insertion in an AVL tree: (a) after adding a new node at leaf position p, some ancestors of p become unbalanced; (b) a trinode restructuring restores the height-balance property. We show the heights of nodes above them, and we identify the nodes x, y, and z and the subtrees T1, T2, T3, and T4 participating in the trinode restructuring.

We restore the balance of the nodes in the binary search tree T by a simple "search-and-repair" strategy. In particular, let z be the first position we encounter in going up from p toward the root of T such that z is unbalanced. Also, let y denote the child of z with higher height (and note that y must be an ancestor of p). Finally, let x be the child of y with higher height (there cannot be a tie, and position x must also be an ancestor of p, possibly p itself). We rebalance the subtree rooted at z by calling the trinode restructuring method, restructure(x), originally described earlier. An example of such a restructuring in the context of an AVL insertion is portrayed in the figures.

To formally argue the correctness of this process in reestablishing the AVL height-balance property, we consider the implication of z being the nearest ancestor of p that became unbalanced after the insertion of p. It must be that the height of y increased by one due to the insertion, and that it is now 2 greater than that of its sibling. Since y remains balanced, it must be that it formerly had subtrees with equal heights, and that the subtree containing x has increased its height by one. That subtree increased either because x = p, and thus its height changed from 0 to 1, or because x previously had equal-height subtrees and the height of the one containing p has increased by 1. Letting h >= 0 denote the height of the tallest child of x, this scenario might be portrayed as in the figure below.

After the trinode restructuring, we see that each of x, y, and z has become balanced. Furthermore, the node that becomes the root of the subtree after the restructuring has height h + 2, which is precisely the height that z had before the insertion of the new item.
Therefore, any ancestor of z that became temporarily unbalanced becomes balanced again, and this one restructuring restores the height-balance property globally.
Figure: Rebalancing of a subtree during a typical insertion into an AVL tree: (a) before the insertion; (b) after an insertion in one subtree causes an imbalance at z; (c) after restoring balance with a trinode restructuring. Notice that the overall height of the subtree after the insertion is the same as before the insertion.
Deletion

Recall that a deletion from a regular binary search tree results in the structural removal of a node having either zero or one children. Such a change may violate the height-balance property in an AVL tree. In particular, if position p represents the parent of the removed node in tree T, there may be an unbalanced node on the path from p to the root of T (see the figure below). In fact, there can be at most one such unbalanced node (the justification of this fact is left as an exercise).

Figure: Deletion of an item from an AVL tree: (a) after removing a node, an ancestor of the removed node becomes unbalanced; (b) a (single) rotation restores the height-balance property.

As with insertion, we use trinode restructuring to restore balance in the tree T. In particular, let z be the first unbalanced position encountered going up from p toward the root of T. Also, let y be the child of z with larger height (note that position y is the child of z that is not an ancestor of p), and let x be the child of y defined as follows: if one of the children of y is taller than the other, let x be the taller child of y; else (both children of y have the same height), let x be the child of y on the same side as y (that is, if y is the left child of z, let x be the left child of y, else let x be the right child of y). In any case, we then perform a restructure(x) operation.

The restructured subtree is rooted at the middle position denoted as b in the description of the trinode restructuring operation. The height-balance property is guaranteed to be locally restored within the subtree of b. Unfortunately, this trinode restructuring may reduce the height of the subtree rooted at b by 1, which may cause an ancestor of b to become unbalanced. So, after rebalancing z, we continue walking up T looking for unbalanced positions. If we find another, we perform a restructure operation to restore its balance, and continue marching up T looking for more, all the way to the root. Still, since the height of T is O(log n), where n is the number of entries, O(log n) trinode restructurings are sufficient to restore the height-balance property.
Performance of AVL Trees

By the height proposition, the height of an AVL tree with n items is guaranteed to be O(log n). Because the standard binary search tree operations had running times bounded by the height, and because the additional work in maintaining balance factors and restructuring an AVL tree can be bounded by the length of a path in the tree, the traditional map operations run in worst-case logarithmic time with an AVL tree. We summarize these results in the table below, and illustrate this performance in the accompanying figure.

    Operation                                               Running Time
    ------------------------------------------------------  -------------
    k in T                                                  O(log n)
    T[k] = v                                                O(log n)
    del T[k]                                                O(log n)
    T.find_position(k)                                      O(log n)
    T.first(), T.last(), T.find_min(), T.find_max()         O(log n)
    T.before(p), T.after(p)                                 O(log n)
    T.find_lt(k), T.find_le(k), T.find_gt(k), T.find_ge(k)  O(log n)
    T.find_range(start, stop)                               O(s + log n)
    iter(T), reversed(T)                                    O(n)

Table: Worst-case running times of operations for an n-item sorted map realized as an AVL tree T, with s denoting the number of items reported by find_range.

Figure: Illustrating the running time of searches and updates in an AVL tree. The time performance is O(1) per level, broken into a down phase, which typically involves searching, and an up phase, which typically involves updating height values and performing local trinode restructurings (rotations). The worst-case time is O(log n).
Python Implementation

A complete implementation of an AVLTreeMap class is provided in the code fragments below. It inherits from the standard TreeMap class and relies on the balancing framework described earlier. We highlight two important aspects of our implementation. First, the AVLTreeMap overrides the definition of the nested _Node class, as shown below, in order to provide support for storing the height of the subtree stored at a node. We also provide several utilities involving heights of nodes, and the corresponding positions.

To implement the core logic of the AVL balancing strategy, we define a utility, named _rebalance, that suffices as a hook for restoring the height-balance property after an insertion or a deletion. Although the inherited behaviors for insertion and deletion are quite different, the necessary post-processing for an AVL tree can be unified. In both cases, we trace an upward path from the position p at which the change took place, recalculating the height of each position based on the (updated) heights of its children, and using a trinode restructuring operation if an imbalanced position is reached. If we reach an ancestor with height that is unchanged by the overall map operation, or if we perform a trinode restructuring that results in the subtree having the same height it had before the map operation, we stop the process; no further ancestor's height will change. To detect the stopping condition, we record the "old" height of each node and compare it to the newly calculated height.

class AVLTreeMap(TreeMap):
  """Sorted map implementation using an AVL tree."""

  #-------------------------- nested _Node class --------------------------
  class _Node(TreeMap._Node):
    """Node class for AVL maintains height value for balancing."""
    __slots__ = '_height'         # additional data member to store height

    def __init__(self, element, parent=None, left=None, right=None):
      super().__init__(element, parent, left, right)
      self._height = 0            # will be recomputed during balancing

    def left_height(self):
      return self._left._height if self._left is not None else 0

    def right_height(self):
      return self._right._height if self._right is not None else 0

Code Fragment: AVLTreeMap class (continued in the next fragment).
  #------------------------- positional-based utility methods -------------------------
  def _recompute_height(self, p):
    p._node._height = 1 + max(p._node.left_height(), p._node.right_height())

  def _isbalanced(self, p):
    return abs(p._node.left_height() - p._node.right_height()) <= 1

  def _tall_child(self, p, favorleft=False):  # parameter controls tiebreaker
    if p._node.left_height() + (1 if favorleft else 0) > p._node.right_height():
      return self.left(p)
    else:
      return self.right(p)

  def _tall_grandchild(self, p):
    child = self._tall_child(p)
    # if child is on left, favor left grandchild; else favor right grandchild
    alignment = (child == self.left(p))
    return self._tall_child(child, alignment)

  def _rebalance(self, p):
    while p is not None:
      old_height = p._node._height            # trivially 0 if new node
      if not self._isbalanced(p):             # imbalance detected!
        # perform trinode restructuring, setting p to resulting root,
        # and recompute new local heights after the restructuring
        p = self._restructure(self._tall_grandchild(p))
        self._recompute_height(self.left(p))
        self._recompute_height(self.right(p))
      self._recompute_height(p)               # adjust for recent changes
      if p._node._height == old_height:       # has height changed?
        p = None                              # no further changes needed
      else:
        p = self.parent(p)                    # repeat with parent

  #--------------------------- override balancing hooks ---------------------------
  def _rebalance_insert(self, p):
    self._rebalance(p)

  def _rebalance_delete(self, p):
    self._rebalance(p)

Code Fragment: AVLTreeMap class (continued from the previous fragment).
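Because the AVLTreeMap above depends on the book's TreeMap framework, it cannot run on its own. The following independent sketch (all names are mine, not the book's) implements AVL insertion directly with recursive rotations, so the rebalancing logic can be exercised end to end:

```python
# Minimal standalone AVL insertion sketch (independent of the chapter's
# TreeMap framework; all names here are illustrative).

class Node:
    __slots__ = 'key', 'left', 'right', 'height'
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def _h(n):
    return n.height if n else 0

def _fix(n):                         # recompute height from children
    n.height = 1 + max(_h(n.left), _h(n.right))
    return n

def _rot_left(n):                    # promote n.right over n
    r = n.right
    n.right, r.left = r.left, n
    _fix(n)
    return _fix(r)

def _rot_right(n):                   # promote n.left over n
    l = n.left
    n.left, l.right = l.right, n
    _fix(n)
    return _fix(l)

def insert(n, key):
    """Insert key into subtree n; return the new subtree root."""
    if n is None:
        return Node(key)
    if key < n.key:
        n.left = insert(n.left, key)
    elif key > n.key:
        n.right = insert(n.right, key)
    _fix(n)
    bal = _h(n.left) - _h(n.right)
    if bal > 1:                      # left-heavy
        if _h(n.left.left) < _h(n.left.right):
            n.left = _rot_left(n.left)     # zig-zag becomes zig-zig
        return _rot_right(n)
    if bal < -1:                     # right-heavy
        if _h(n.right.right) < _h(n.right.left):
            n.right = _rot_right(n.right)
        return _rot_left(n)
    return n

root = None
for k in range(1, 128):              # sorted insertions: worst case for a plain BST
    root = insert(root, k)
```

After 127 sorted insertions the height stays logarithmic, whereas an unbalanced binary search tree would degenerate to height 127.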
Splay Trees

The next search tree structure we study is known as a splay tree. This structure is conceptually quite different from the other balanced search trees we discuss in this chapter, for a splay tree does not strictly enforce a logarithmic upper bound on the height of the tree. In fact, there are no additional height, balance, or other auxiliary data associated with the nodes of this tree.

The efficiency of splay trees is due to a certain move-to-root operation, called splaying, that is performed at the bottommost position p reached during every insertion, deletion, or even a search. (In essence, this is a tree variant of the move-to-front heuristic that we explored for lists earlier.) Intuitively, a splay operation causes more frequently accessed elements to remain nearer to the root, thereby reducing the typical search times. The surprising thing about splaying is that it allows us to guarantee a logarithmic amortized running time for insertions, deletions, and searches.

Splaying

Given a node x of a binary search tree T, we splay x by moving x to the root of T through a sequence of restructurings. The particular restructurings we perform are important, for it is not sufficient to move x to the root of T by just any sequence of restructurings. The specific operation we perform to move x up depends upon the relative positions of x, its parent y, and (if it exists) x's grandparent z. There are three cases that we consider.

Zig-zig: The node x and its parent y are both left children or both right children (see the figure below). We promote x, making y a child of x and z a child of y, while maintaining the inorder relationships of the nodes in T.

Figure: Zig-zig: (a) before; (b) after. There is another symmetric configuration where x and y are left children.
Zig-zag: One of x and y is a left child and the other is a right child (see the figure below). In this case, we promote x by making x have y and z as its children, while maintaining the inorder relationships of the nodes in T.

Figure: Zig-zag: (a) before; (b) after. There is another symmetric configuration where x is a right child and y is a left child.

Zig: x does not have a grandparent (see the figure below). In this case, we perform a single rotation to promote x over y, making y a child of x, while maintaining the relative inorder relationships of the nodes in T.

Figure: Zig: (a) before; (b) after. There is another symmetric configuration where x is originally a left child of y.

We perform a zig-zig or a zig-zag when x has a grandparent, and we perform a zig when x has a parent but not a grandparent. A splaying step consists of repeating these restructurings at x until x becomes the root of T. An example of the splaying of a node is shown in the figures that follow.
Figure: Example of splaying a node: (a) splaying the node starts with a zig-zag; (b) after the zig-zag; (c) the next step will be a zig-zig. (Continues in the next figure.)
Figure: Example of splaying a node: (d) after the zig-zig; (e) the next step is again a zig-zig; (f) after the zig-zig. (Continued from the previous figure.)
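The three restructuring cases can be exercised with a small standalone sketch. This recursive formulation (all names are mine, not the book's code) performs the same bottom-up zig, zig-zig, and zig-zag rotations; it assumes the splayed key is present in the tree:

```python
# Sketch of splaying expressed recursively (illustrative names; assumes the
# target key is present in the tree).

class Node:
    __slots__ = 'key', 'left', 'right'
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def rotate_right(y):                 # promote y.left over y
    x = y.left
    y.left, x.right = x.right, y
    return x

def rotate_left(y):                  # promote y.right over y
    x = y.right
    y.right, x.left = x.left, y
    return x

def splay(root, key):
    """Splay the node with `key` (assumed present) to the root; return new root."""
    if root.key == key:
        return root
    if key < root.key:
        y = root.left
        if key == y.key:                      # zig
            return rotate_right(root)
        if key < y.key:                       # zig-zig: rotate parent first
            y.left = splay(y.left, key)
            root = rotate_right(root)
        else:                                 # zig-zag: rotate x over its parent
            y.right = splay(y.right, key)
            root.left = rotate_left(y)
        return rotate_right(root)             # then promote x over the top
    else:
        y = root.right
        if key == y.key:                      # zig (mirror)
            return rotate_left(root)
        if key > y.key:                       # zig-zig (mirror)
            y.right = splay(y.right, key)
            root = rotate_left(root)
        else:                                 # zig-zag (mirror)
            y.left = splay(y.left, key)
            root.right = rotate_right(y)
        return rotate_left(root)

def bst_insert(n, key):                       # plain BST insert, no balancing
    if n is None:
        return Node(key)
    if key < n.key:
        n.left = bst_insert(n.left, key)
    else:
        n.right = bst_insert(n.right, key)
    return n

root = None
for k in [8, 4, 12, 2, 6, 10, 14, 1]:
    root = bst_insert(root, k)
root = splay(root, 1)                # the deepest node moves to the root
```

Note that the inorder sequence of keys is unchanged by splaying; only the shape of the tree changes.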
When to Splay

The rules that dictate when splaying is performed are as follows:

- When searching for key k, if k is found at position p, we splay p; else we splay the leaf position at which the search terminates unsuccessfully. For example, the splaying in the preceding figures would be performed after a corresponding successful or unsuccessful search.
- When inserting key k, we splay the newly created internal node where k gets inserted. We show a sequence of insertions in a splay tree in the figure below.

Figure: A sequence of insertions in a splay tree: (a) initial tree; (b) after an insertion, but before a zig step; (c) after splaying; (d) after another insertion, but before a zig-zag step; (e) after splaying; (f) after a further insertion, but before a zig-zig step; (g) after splaying.
- When deleting key k, we splay the position p that is the parent of the removed node; recall that, by the removal algorithm for binary search trees, the removed node may be that originally containing k, or a descendant node with a replacement key. An example of splaying following a deletion is shown in the figure below.

Figure: Deletion from a splay tree: (a) the deletion of an item from the root node is performed by moving to the root the key of its inorder predecessor w, deleting w, and splaying the parent p of w; (b) splaying p starts with a zig-zig; (c) after the zig-zig; (d) the next step is a zig; (e) after the zig.
Python Implementation

Although the mathematical analysis of a splay tree's performance is complex (see the next subsection), the implementation of splay trees is a rather simple adaptation to a standard binary search tree. The code fragment below provides a complete implementation of a SplayTreeMap class, based upon the underlying TreeMap class and use of the balancing framework described earlier. It is important to note that our original TreeMap class makes calls to the _rebalance_access method, not just from within the __getitem__ method, but also during __setitem__ when modifying the value associated with an existing key, and after any map operations that result in a failed search.

class SplayTreeMap(TreeMap):
  """Sorted map implementation using a splay tree."""

  #--------------------------------- splay operation --------------------------------
  def _splay(self, p):
    while p != self.root():
      parent = self.parent(p)
      grand = self.parent(parent)
      if grand is None:
        # zig case
        self._rotate(p)
      elif (parent == self.left(grand)) == (p == self.left(parent)):
        # zig-zig case
        self._rotate(parent)            # move PARENT up
        self._rotate(p)                 # then move p up
      else:
        # zig-zag case
        self._rotate(p)                 # move p up
        self._rotate(p)                 # move p up again

  #--------------------------- override balancing hooks ---------------------------
  def _rebalance_insert(self, p):
    self._splay(p)

  def _rebalance_delete(self, p):
    if p is not None:
      self._splay(p)

  def _rebalance_access(self, p):
    self._splay(p)

Code Fragment: Complete implementation of the SplayTreeMap class.
Amortized Analysis of Splaying

After a zig-zig or zig-zag, the depth of position p decreases by two, and after a zig the depth of p decreases by one. Thus, if p has depth d, splaying p consists of a sequence of floor(d/2) zig-zigs and/or zig-zags, plus one final zig if d is odd. Since a single zig-zig, zig-zag, or zig affects a constant number of nodes, it can be done in O(1) time. Thus, splaying a position p in a binary search tree T takes time O(d), where d is the depth of p in T. In other words, the time for performing a splaying step for a position p is asymptotically the same as the time needed just to reach that position in a top-down search from the root of T.

Worst-Case Time

In the worst case, the overall running time of a search, insertion, or deletion in a splay tree of height h is O(h), since the position we splay might be the deepest position in the tree. Moreover, it is possible for h to be as large as n. Thus, from a worst-case point of view, a splay tree is not an attractive data structure.

In spite of its poor worst-case performance, a splay tree performs well in an amortized sense. That is, in a sequence of intermixed searches, insertions, and deletions, each operation takes on average logarithmic time. We perform the amortized analysis of splay trees using the accounting method.

Amortized Performance of Splay Trees

For our analysis, we note that the time for performing a search, insertion, or deletion is proportional to the time for the associated splaying. So let us consider only splaying time.

Let T be a splay tree with n keys, and let w be a node of T. We define the size n(w) of w as the number of nodes in the subtree rooted at w. Note that this definition implies that the size of a nonleaf node is one more than the sum of the sizes of its children. We define the rank r(w) of a node w as the logarithm in base 2 of the size of w, that is,

    r(w) = log(n(w)).

Clearly, the root of T has the maximum size, n, and the maximum rank, log n, while each leaf has size 1 and rank 0.

We use cyber-dollars to pay for the work we perform in splaying a position p in T, and we assume that one cyber-dollar pays for a zig, while two cyber-dollars pay for a zig-zig or a zig-zag. Hence, the cost of splaying a position at depth d is d cyber-dollars. We keep a virtual account storing cyber-dollars at each position of T. Note that this account exists only for the purpose of our amortized analysis, and does not need to be included in a data structure implementing the splay tree T.
An Accounting Analysis of Splaying

When we perform a splaying, we pay a certain number of cyber-dollars (the exact value of the payment will be determined at the end of our analysis). We distinguish three cases:

- If the payment is equal to the splaying work, then we use it all to pay for the splaying.
- If the payment is greater than the splaying work, we deposit the excess in the accounts of several nodes.
- If the payment is less than the splaying work, we make withdrawals from the accounts of several nodes to cover the deficiency.

We show below that a payment of O(log n) cyber-dollars per operation is sufficient to keep the system working, that is, to ensure that each node keeps a nonnegative account balance.

An Accounting Invariant for Splaying

We use a scheme in which transfers are made between the accounts of the nodes to ensure that there will always be enough cyber-dollars to withdraw for paying for splaying work when needed. In order to use the accounting method to perform our analysis of splaying, we maintain the following invariant:

    Before and after a splaying, each node w of T has r(w) cyber-dollars in its account.

Note that the invariant is "financially sound," since it does not require us to make a preliminary deposit to endow a tree with zero keys.

Let r(T) be the sum of the ranks of all the nodes of T. To preserve the invariant after a splaying, we must make a payment equal to the splaying work plus the total change in r(T). We refer to a single zig, zig-zig, or zig-zag operation in a splaying as a splaying substep. Also, we denote the rank of a node w of T before and after a splaying substep with r(w) and r'(w), respectively. The following proposition gives an upper bound on the change of r(T) caused by a single splaying substep. We will repeatedly use this lemma in our analysis of a full splaying of a node to the root.
Proposition: Let d be the variation of r(T) caused by a single splaying substep (a zig, zig-zig, or zig-zag) for a node x in T. We have the following:

- d <= 3(r'(x) - r(x)) - 2 if the substep is a zig-zig or zig-zag;
- d <= 3(r'(x) - r(x)) if the substep is a zig.

Justification: We use the fact that, if a > 0, b > 0, and c >= a + b, then

    log a + log b <= 2 log c - 2.            (*)

Let us consider the change in r(T) caused by each type of splaying substep.

Zig-zig: (Recall the zig-zig figure.) Since the size of each node is one more than the size of its two children, note that only the ranks of x, y, and z change in a zig-zig operation, where y is the parent of x and z is the parent of y. Also, r'(x) = r(z), r'(y) <= r'(x), and r(y) >= r(x). Thus,

    d = r'(x) + r'(y) + r'(z) - r(x) - r(y) - r(z)
      = r'(y) + r'(z) - r(x) - r(y)
      <= r'(x) + r'(z) - 2r(x).

Note that n(x) + n'(z) <= n'(x). Thus, by (*), r(x) + r'(z) <= 2r'(x) - 2, that is,

    r'(z) <= 2r'(x) - r(x) - 2.

This inequality and the one above imply

    d <= r'(x) + (2r'(x) - r(x) - 2) - 2r(x) = 3(r'(x) - r(x)) - 2.

Zig-zag: (Recall the zig-zag figure.) Again, by the definition of size and rank, only the ranks of x, y, and z change, where y denotes the parent of x and z denotes the parent of y. Also, r'(x) = r(z) and r(x) <= r(y). Thus,

    d = r'(x) + r'(y) + r'(z) - r(x) - r(y) - r(z)
      = r'(y) + r'(z) - r(x) - r(y)
      <= r'(y) + r'(z) - 2r(x).

Note that n'(y) + n'(z) <= n'(x); hence, by (*), r'(y) + r'(z) <= 2r'(x) - 2. Thus,

    d <= 2r'(x) - 2 - 2r(x) <= 3(r'(x) - r(x)) - 2.

Zig: (Recall the zig figure.) In this case, only the ranks of x and y change, where y denotes the parent of x. Also, r'(y) <= r(y) and r'(x) >= r(x). Thus,

    d = r'(y) + r'(x) - r(y) - r(x) <= r'(x) - r(x) <= 3(r'(x) - r(x)).
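The logarithm fact (*) used in the justification can be spot-checked numerically. A quick sketch (not part of the analysis; the function name is mine):

```python
# Numeric spot-check of the fact: if a, b > 0 and a + b <= c,
# then log2(a) + log2(b) <= 2*log2(c) - 2.
import math

def log_fact_holds(a, b, c):
    """Check log2(a) + log2(b) <= 2*log2(c) - 2, given a + b <= c."""
    assert a + b <= c
    # small tolerance: equality is attained when a == b and c == a + b
    return math.log2(a) + math.log2(b) <= 2 * math.log2(c) - 2 + 1e-9

for a in range(1, 40):
    for b in range(1, 40):
        assert log_fact_holds(a, b, a + b)
```

The inequality follows from the arithmetic-geometric mean bound ab <= ((a+b)/2)^2 <= (c/2)^2, with equality when a = b and c = a + b.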
Proposition: Let T be a splay tree with root t, and let D be the total variation of r(T) caused by splaying a node x at depth d. We have

    D <= 3(r(t) - r(x)) - d + 2.

Justification: Splaying node x consists of p = ceil(d/2) splaying substeps, each of which is a zig-zig or a zig-zag, except possibly the last one, which is a zig if d is odd. Let r0(x) = r(x) be the initial rank of x, and for i = 1, ..., p, let ri(x) be the rank of x after the ith substep and di be the variation of r(T) caused by the ith substep. By the preceding proposition, the total variation D of r(T) caused by splaying x is

    D = sum of di, for i = 1, ..., p
      <= (sum of (3(ri(x) - r(i-1)(x)) - 2), for i = 1, ..., p) + 2
      = 3(rp(x) - r0(x)) - 2p + 2
      <= 3(r(t) - r(x)) - d + 2.

By this proposition, if we make a payment of 3(r(t) - r(x)) + 2 cyber-dollars toward the splaying of node x, we have enough cyber-dollars to maintain the invariant, keeping r(w) cyber-dollars at each node w in T, and pay for the entire splaying work, which costs d cyber-dollars. Since the size of the root t is n, its rank is r(t) = log n. Given that r(x) >= 0, the payment to be made for splaying is O(log n) cyber-dollars.

To complete our analysis, we have to compute the cost for maintaining the invariant when a node is inserted or deleted. When inserting a new node w into a splay tree with n keys, the ranks of all the ancestors of w are increased. Namely, let w0, w1, ..., wd be the ancestors of w, where w0 = w, wi is the parent of w(i-1), and wd is the root. For i = 1, ..., d, let n'(wi) and n(wi) be the size of wi after and before the insertion, respectively, and let r'(wi) and r(wi) be the rank of wi after and before the insertion. We have

    n'(wi) = n(wi) + 1.

Also, since n(wi) + 1 <= n(w(i+1)), for i = 0, 1, ..., d - 1, we have the following for each i in this range:

    r'(wi) = log(n'(wi)) = log(n(wi) + 1) <= log(n(w(i+1))) = r(w(i+1)).

Thus, the total variation of r(T) caused by the insertion is at most

    r'(wd) + sum of (r'(wi) - r(w(i+1))), for i = 0, ..., d - 1
      <= r'(wd)
      <= log(n + 1),

since each term of the sum is nonpositive. Therefore, a payment of O(log n) cyber-dollars is sufficient to maintain the invariant when a new node is inserted.
When deleting a node w from a splay tree with n keys, the ranks of all the ancestors of w are decreased. Thus, the total variation of r(T) caused by the deletion is negative, and we do not need to make any payment to maintain the invariant when a node is deleted. Therefore, we may summarize our amortized analysis in the following proposition (which is sometimes called the "balance proposition" for splay trees):

Proposition: Consider a sequence of m operations on a splay tree, each one a search, insertion, or deletion, starting from a splay tree with zero keys. Also, let ni be the number of keys in the tree after operation i, and n be the total number of insertions. The total running time for performing the sequence of operations is

    O(m + sum of log ni, for i = 1, ..., m),

which is O(m log n).

In other words, the amortized running time of performing a search, insertion, or deletion in a splay tree is O(log n), where n is the size of the splay tree at the time. Thus, a splay tree can achieve logarithmic-time amortized performance for implementing a sorted map ADT. This amortized performance matches the worst-case performance of AVL trees, (2,4) trees, and red-black trees, but it does so using a simple binary tree that does not need any extra balance information stored at each of its nodes. In addition, splay trees have a number of other interesting properties that are not shared by these other balanced search trees. We explore one such additional property in the following proposition (which is sometimes called the "static optimality" proposition for splay trees):

Proposition: Consider a sequence of m operations on a splay tree, each one a search, insertion, or deletion, starting from a splay tree with zero keys. Also, let f(i) denote the number of times the entry i is accessed in the splay tree, that is, its frequency, and let n denote the total number of entries. Assuming that each entry is accessed at least once, the total running time for performing the sequence of operations is

    O(m + sum of f(i) log(m / f(i)), for i = 1, ..., n).

We omit the proof of this proposition, but it is not as hard to justify as one might imagine. The remarkable thing is that this
proposition states that the amortized running time of accessing an entry i is O(log(m / f(i))).
(2,4) Trees

In this section, we consider a data structure known as a (2,4) tree. It is a particular example of a more general structure known as a multiway search tree, in which internal nodes may have more than two children. Other forms of multiway search trees will be discussed later.

Multiway Search Trees

Recall that general trees are defined so that internal nodes may have many children. In this section, we discuss how general trees can be used as multiway search trees. Map items stored in a search tree are pairs of the form (k, v), where k is the key and v is the value associated with the key.

Definition of a Multiway Search Tree

Let w be a node of an ordered tree. We say that w is a d-node if w has d children. We define a multiway search tree to be an ordered tree T that has the following properties (which are illustrated in the figure below):

- Each internal node of T has at least two children. That is, each internal node is a d-node such that d >= 2.
- Each internal d-node w of T with children c1, ..., cd stores an ordered set of d - 1 key-value pairs (k1, v1), ..., (k(d-1), v(d-1)), where k1 <= ... <= k(d-1).
- Let us conventionally define k0 = -infinity and kd = +infinity. For each item (k, v) stored at a node in the subtree of w rooted at ci, for i = 1, ..., d, we have that k(i-1) <= k <= ki.

That is, if we think of the set of keys stored at w as including the special fictitious keys k0 = -infinity and kd = +infinity, then a key k stored in the subtree of T rooted at a child node ci must be "in between" two keys stored at w. This simple viewpoint gives rise to the rule that a d-node stores d - 1 regular keys, and it also forms the basis of the algorithm for searching in a multiway search tree.

By the above definition, the external nodes of a multiway search tree do not store any data and serve only as "placeholders." These external nodes can be efficiently represented by None references, as has been our convention with binary search trees. However, for the sake of exposition, we will discuss these as actual nodes that do not store anything. Based on this definition, there is an interesting relationship between the number of key-value pairs and the number of external nodes in a multiway search tree.

Proposition: An n-item multiway search tree has n + 1 external nodes.

We leave the justification of this proposition as an exercise.
Figure: (a) A multiway search tree T; (b) a search path in T for a key (unsuccessful search); (c) a search path in T for a key (successful search).
Searching in a Multiway Tree

Searching for an item with key k in a multiway search tree T is simple. We perform such a search by tracing a path in T starting at the root (see the figure above). When we are at a d-node w during this search, we compare the key k with the keys k1, ..., k(d-1) stored at w. If k = ki for some i, the search is successfully completed. Otherwise, we continue the search in the child ci of w such that k(i-1) < k < ki (recall that we conventionally define k0 = -infinity and kd = +infinity). If we reach an external node, then we know that there is no item with key k in T, and the search terminates unsuccessfully.

Data Structures for Representing Multiway Search Trees

Earlier we discussed a linked data structure for representing a general tree. This representation can also be used for a multiway search tree. When using a general tree to implement a multiway search tree, we must store at each node one or more key-value pairs associated with that node. That is, we need to store with w a reference to some collection that stores the items for w.

During a search for key k in a multiway search tree, the primary operation needed when navigating a node is finding the smallest key at that node that is greater than or equal to k. For this reason, it is natural to model the information at a node itself as a sorted map, allowing use of the find_ge(k) method. We say such a map serves as a secondary data structure to support the primary data structure represented by the entire multiway search tree. This reasoning may at first seem like a circular argument, since we need a representation of a (secondary) ordered map to represent a (primary) ordered map. We can avoid any circular dependence, however, by using the bootstrapping technique, where we use a simple solution to a problem to create a new, more advanced solution.

In the context of a multiway search tree, a natural choice for the secondary structure at each node is the SortedTableMap discussed earlier. Because we want to determine the associated value in case of a match for key k, and otherwise the corresponding child ci such that k(i-1) < k < ki, we recommend having each key ki in the secondary structure map to the pair (vi, ci).

With such a realization of a multiway search tree T, processing a d-node w while searching for an item of T with key k can be performed using a binary search operation in O(log d) time. Let dmax denote the maximum number of children of any node of T, and let h denote the height of T. The search time in a multiway search tree is therefore O(h log dmax). If dmax is a constant, the running time for performing a search is O(h).

The primary efficiency goal for a multiway search tree is to keep the height as small as possible. We next discuss a strategy that caps dmax at 4 while guaranteeing a height h that is logarithmic in n, the total number of items stored in the map.
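The navigation step described above can be sketched with a sorted key table per node and binary search. The structure and names below are illustrative, not the book's classes; None child references play the role of external nodes:

```python
# Sketch of searching a multiway search tree using a sorted key table at each
# node and binary search (illustrative names, not the book's code).
from bisect import bisect_left

class MultiwayNode:
    def __init__(self, keys, values, children=None):
        self.keys = keys                  # sorted keys k_1 < ... < k_{d-1}
        self.values = values              # value v_i paired with key k_i
        # children c_1 ... c_d; None entries act as external-node placeholders
        self.children = children if children else [None] * (len(keys) + 1)

def search(node, k):
    """Return the value associated with key k, or None if k is absent."""
    while node is not None:
        j = bisect_left(node.keys, k)     # smallest index with keys[j] >= k
        if j < len(node.keys) and node.keys[j] == k:
            return node.values[j]         # successful search
        node = node.children[j]           # descend into the child between keys
    return None                           # reached an external node

# A tiny 2-level example tree:
leaf1 = MultiwayNode([3, 8], ['c', 'h'])
leaf2 = MultiwayNode([14], ['n'])
root = MultiwayNode([11], ['k'], [leaf1, leaf2])
```

Here `bisect_left` plays the role of the find_ge(k) operation of the secondary sorted map, so each d-node is processed in O(log d) time.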
(2,4)-Tree Operations

A multiway search tree that keeps the secondary data structures stored at each node small and also keeps the primary multiway tree balanced is the (2,4) tree, which is sometimes called a 2-4 tree or 2-3-4 tree. This data structure achieves these goals by maintaining two simple properties (see the accompanying figure):

Size Property: Every internal node has at most four children.
Depth Property: All the external nodes have the same depth.

[Figure: A (2,4) tree.]

Again, we assume that external nodes are empty and, for the sake of simplicity, we describe our search and update methods assuming that external nodes are real nodes, although this latter requirement is not strictly needed.

Enforcing the size property for (2,4) trees keeps the nodes in the multiway search tree simple. It also gives rise to the alternative name 2-3-4 tree, since it implies that each internal node in the tree has 2, 3, or 4 children. Another implication of this rule is that we can represent the secondary map stored at each internal node using an unordered list or an ordered array, and still achieve O(1)-time performance for all operations (since d_max = 4). The depth property, on the other hand, enforces an important bound on the height of a (2,4) tree.

Proposition: The height of a (2,4) tree storing n items is O(log n).

Justification: Let h be the height of a (2,4) tree T storing n items. We justify the proposition by showing the claim

    (1/2) log(n + 1) <= h <= log(n + 1).

To justify this claim, note first that, by the size property, we can have at most 4 nodes at depth 1, at most 4^2 nodes at depth 2, and so on. Thus, the number of external nodes in T is at most 4^h. Likewise, by the depth property and the definition of a (2,4) tree, we must have at least 2 nodes at depth 1, at least 2^2 nodes at depth 2, and so on. Thus, the number of external nodes in T is at least 2^h. In addition, by an earlier proposition, the number of external nodes in T is n + 1. Therefore, we obtain

    2^h <= n + 1 <= 4^h.

Taking the logarithm in base 2 of the terms in the above inequalities, we get that

    h <= log(n + 1) <= 2h,

which justifies our claim when the terms are rearranged.

This proposition states that the size and depth properties are sufficient for keeping a multiway tree balanced. Moreover, it implies that performing a search in a (2,4) tree takes O(log n) time and that the specific realization of the secondary structures at the nodes is not a crucial design choice, since the maximum number of children d_max is a constant. Maintaining the size and depth properties requires some effort after performing insertions and deletions in a (2,4) tree, however. We discuss these operations next.

Insertion

To insert a new item (k, v), with key k, into a (2,4) tree T, we first perform a search for k. Assuming that T has no item with key k, this search terminates unsuccessfully at an external node z. Let w be the parent of z. We insert the new item into node w and add a new child y (an external node) to w on the left of z.

Our insertion method preserves the depth property, since we add a new external node at the same level as existing external nodes. Nevertheless, it may violate the size property. Indeed, if a node w was previously a 4-node, then it would become a 5-node after the insertion, which causes the tree T to no longer be a (2,4) tree. This type of violation of the size property is called an overflow at node w, and it must be resolved in order to restore the properties of a (2,4) tree. Let c1, ..., c5 be the children of w, and let k1, ..., k4 be the keys stored at w. To remedy the overflow at node w, we perform a split operation on w as follows (see the accompanying figure):

- Replace w with two nodes w' and w'', where
  - w' is a 3-node with children c1, c2, c3 storing keys k1 and k2,
  - w'' is a 2-node with children c4, c5 storing key k4.
- If w is the root of T, create a new root node u; else, let u be the parent of w.
- Insert key k3 into u and make w' and w'' children of u, so that if w was child i of u, then w' and w'' become children i and i + 1 of u, respectively.

As a consequence of a split operation on node w, a new overflow may occur at the parent u of w. If such an overflow occurs, it triggers in turn a split at node u. A split operation either eliminates the overflow or propagates it into the parent of the current node. We show a sequence of insertions in a (2,4) tree in the figures below.
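The split step can be sketched concretely. In this minimal illustration (not the book's implementation) a node is modeled as a plain (keys, children) pair; the function name and the representation are assumptions of this example:

```python
def split_overflowed_node(keys, children):
    """Split an overflowed 5-node into a 3-node and a 2-node.

    keys must be the four sorted keys [k1, k2, k3, k4] and children the
    five subtrees [c1, ..., c5].  Returns (w1, k3, w2), where w1 is a
    3-node (keys k1, k2 with children c1..c3), k3 is the key promoted
    to the parent, and w2 is a 2-node (key k4 with children c4, c5).
    """
    assert len(keys) == 4 and len(children) == 5
    w1 = (keys[:2], children[:3])    # 3-node keeps k1, k2 and c1..c3
    w2 = (keys[3:], children[3:])    # 2-node keeps k4 and c4, c5
    return w1, keys[2], w2           # k3 moves up to the parent
```

For instance, splitting a node with keys [4, 6, 12, 15] promotes 12 to the parent and leaves a 3-node holding 4 and 6 beside a 2-node holding 15.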
[Figure: A node split: (a) overflow at a 5-node w; (b) the third key of w inserted into the parent u of w; (c) node w replaced with a 3-node w' and a 2-node w''.]

[Figure: An insertion in a (2,4) tree that causes a cascading split: (a) before the insertion; (b) an insertion causing an overflow; (c) a split; (d) after the split a new overflow occurs; (e) another split, creating a new root node; (f) final tree.]
[Figure: A sequence of insertions into a (2,4) tree: (a) initial tree with one item; (b) an insertion; (c) an insertion; (d) an insertion causing an overflow; (e) a split, which causes the creation of a new root node; (f) after the split; (g) an insertion; (h) an insertion causing an overflow; (i) a split; (j) after the split; (k) an insertion; (l) an insertion.]
Analysis of Insertion in a (2,4) Tree

Because d_max is at most 4, the original search for the placement of a new key uses O(1) time at each level, and thus O(log n) time overall, since the height of the tree is O(log n) by the proposition above. The modifications to a single node to insert a new key and child can be implemented to run in O(1) time, as can a single split operation. The number of cascading split operations is bounded by the height of the tree, and so that phase of the insertion process also runs in O(log n) time. Therefore, the total time to perform an insertion in a (2,4) tree is O(log n).

Deletion

Let us now consider the removal of an item with key k from a (2,4) tree T. We begin such an operation by performing a search in T for an item with key k. Removing an item from a (2,4) tree can always be reduced to the case where the item to be removed is stored at a node z whose children are external nodes. Suppose, for instance, that the item with key k that we wish to remove is stored in the ith item (ki, vi) at a node z that has only internal-node children. In this case, we swap the item (ki, vi) with an appropriate item that is stored at a node with external-node children as follows (see the accompanying figure):

1. We find the rightmost internal node w in the subtree rooted at the ith child of z, noting that the children of node w are all external nodes.
2. We swap the item (ki, vi) at z with the last item of w.

Once we ensure that the item to remove is stored at a node z with only external-node children (because either it was already at z or we swapped it into z), we simply remove the item from z and remove the ith external node of z.

Removing an item (and a child) from a node z as described above preserves the depth property, for we always remove an external child from a node z with only external children. However, in removing such an external node, we may violate the size property at z. Indeed, if z was previously a 2-node, then it becomes a 1-node with no items after the removal, which is not allowed in a (2,4) tree. This type of violation of the size property is called an underflow at node z. To remedy an underflow, we check whether an immediate sibling of z is a 3-node or a 4-node. If we find such a sibling s, then we perform a transfer operation, in which we move a child of s to z, a key of s to the parent u of z and s, and a key of u to z (see the accompanying figure). If z has only one sibling, or if both immediate siblings of z are 2-nodes, then we perform a fusion operation, in which we merge z with a sibling, creating a new node z', and move a key from the parent u of z to z'.
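The key movements performed by transfer and fusion can be sketched at the level of key lists alone. This is an illustrative sketch, not the book's implementation: child pointers, which must be moved analogously, are omitted, and the function names and list representation are assumptions of this example:

```python
def transfer(underfull, sibling, parent_key, sibling_on_left):
    """Underflow remedy when a sibling is a 3-node or 4-node: one key of
    the sibling moves up to the parent, and the old separating parent key
    moves down into the underfull node.
    Returns (new_underfull, new_sibling, new_parent_key)."""
    if sibling_on_left:
        promoted = sibling[-1]                     # sibling's last key goes up
        return [parent_key] + underfull, sibling[:-1], promoted
    promoted = sibling[0]                          # sibling's first key goes up
    return underfull + [parent_key], sibling[1:], promoted


def fuse(underfull, sibling, parent_key, sibling_on_left):
    """Underflow remedy when the sibling is a 2-node: merge the two nodes,
    pulling the separating key down from the parent."""
    if sibling_on_left:
        return sibling + [parent_key] + underfull
    return underfull + [parent_key] + sibling
```

For example, with an empty underfull node, a left sibling holding keys [5, 8], and separating parent key 10, a transfer promotes 8 and demotes 10; a fusion with a 2-node sibling [5] yields the merged node [5, 10].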
[Figure: A sequence of removals from a (2,4) tree: (a) a removal causing an underflow; (b) a transfer operation; (c) after the transfer operation; (d) a removal causing an underflow; (e) a fusion operation; (f) after the fusion operation; (g) a removal; (h) after the removal.]
A fusion operation at node z may cause a new underflow to occur at the parent u of z, which in turn triggers a transfer or fusion at u (see the accompanying figure). Hence, the number of fusion operations is bounded by the height of the tree, which is O(log n) by the proposition above. If an underflow propagates all the way up to the root, then the root is simply deleted.

[Figure: A propagating sequence of fusions in a (2,4) tree: (a) a removal causing an underflow; (b) a fusion, which causes another underflow; (c) a second fusion operation, which causes the root to be removed; (d) final tree.]

Performance of (2,4) Trees

The asymptotic performance of a (2,4) tree is identical to that of an AVL tree in terms of the sorted map ADT, with guaranteed logarithmic bounds for most operations. The time complexity analysis for a (2,4) tree having n key-value pairs is based on the following:

- The height of a (2,4) tree storing n entries is O(log n), by the proposition above.
- A split, transfer, or fusion operation takes O(1) time.
- A search, insertion, or removal of an entry visits O(log n) nodes.

Thus, (2,4) trees provide for fast map search and update operations. (2,4) trees also have an interesting relationship to the data structure we discuss next.
Red-Black Trees

Although AVL trees and (2,4) trees have a number of nice properties, they also have some disadvantages. For instance, AVL trees may require many restructure operations (rotations) to be performed after a deletion, and (2,4) trees may require many split or fusing operations to be performed after an insertion or removal. The data structure we discuss in this section, the red-black tree, does not have these drawbacks; it uses O(1) structural changes after an update in order to stay balanced.

Formally, a red-black tree is a binary search tree with nodes colored red and black in a way that satisfies the following properties:

Root Property: The root is black.
Red Property: The children of a red node (if any) are black.
Depth Property: All nodes with zero or one children have the same black depth, defined as the number of black ancestors. (Recall that a node is its own ancestor.)

An example of a red-black tree is shown in the accompanying figure.

[Figure: An example of a red-black tree, with "red" nodes drawn in white.]

We can make the red-black tree definition more intuitive by noting an interesting correspondence between red-black trees and (2,4) trees (excluding their trivial external nodes). Namely, given a red-black tree, we can construct a corresponding (2,4) tree by merging every red node w into its parent, storing the entry from w at its parent, and with the children of w becoming ordered children of the parent. For example, the red-black tree shown above corresponds to the (2,4) tree shown earlier, as illustrated in the next figure. The depth property of the red-black tree corresponds to the depth property of the (2,4) tree, since exactly one black node of the red-black tree contributes to each node of the corresponding (2,4) tree.

Conversely, we can transform any (2,4) tree into a corresponding red-black tree by coloring each node black and then performing the following transformations, as illustrated in the accompanying figure:
[Figure: An illustration that the red-black tree shown earlier corresponds to the (2,4) tree shown earlier, based on the highlighted grouping of red nodes with their black parents.]

- If w is a 2-node, then keep the (black) children of w as is.
- If w is a 3-node, then create a new red node y, give w's last two (black) children to y, and make the first child of w and y be the two children of w.
- If w is a 4-node, then create two new red nodes y and z, give w's first two (black) children to y, give w's last two (black) children to z, and make y and z be the two children of w.

Notice that a red node always has a black parent in this construction.

Proposition: The height of a red-black tree storing n entries is O(log n).

[Figure: Correspondence between a node of a (2,4) tree and nodes of a red-black tree: (a) 2-node; (b) 3-node; (c) 4-node.]
Justification: Let T be a red-black tree storing n entries, and let h be the height of T. We justify this proposition by establishing the following fact:

    log(n + 1) <= h <= 2 log(n + 1).

Let d be the common black depth of all nodes of T having zero or one children. Let T' be the (2,4) tree associated with T, and let h' be the height of T' (excluding trivial leaves). Because of the correspondence between red-black trees and (2,4) trees, we know that h' = d. Hence, by the previous proposition, d = h' <= log(n + 1). By the red property, h <= 2d. Thus, we obtain h <= 2 log(n + 1). The other inequality, log(n + 1) <= h, follows from an earlier proposition and the fact that T has n nodes.

Red-Black Tree Operations

The algorithm for searching in a red-black tree T is the same as that for a standard binary search tree. Thus, searching in a red-black tree takes time proportional to the height of the tree, which is O(log n) by the proposition above.

The correspondence between (2,4) trees and red-black trees provides important intuition that we will use in our discussion of how to perform updates in red-black trees; in fact, the update algorithms for red-black trees can seem mysteriously complex without this intuition. Split and fuse operations of a (2,4) tree will be effectively mimicked by recoloring neighboring red-black tree nodes. A rotation within a red-black tree will be used to change orientations of a 3-node between the two forms shown in part (b) of the correspondence figure.

Insertion

Now consider the insertion of a key-value pair (k, v) into a red-black tree T. The algorithm initially proceeds as in a standard binary search tree. Namely, we search for k in T until we reach a null subtree, and we introduce a new leaf x at that position, storing the item. In the special case that x is the only node of T, and thus the root, we color it black. In all other cases, we color x red. This action corresponds to inserting (k, v) into a node of the (2,4) tree T' with external children. The insertion preserves the root and depth properties of T, but it may violate the red property. Indeed, if x is not the root of T and the parent y of x is red, then we have a parent and a child (namely, y and x) that are both red. Note that by the root property, y cannot be the root of T, and by the red property (which was previously satisfied), the parent z of y must be black. Since x and its parent are red, but x's grandparent z is black, we call this violation of the red property a double red at node x. To remedy a double red, we consider two cases.
Case 1: The Sibling s of y Is Black (or None). (See the accompanying figure.) In this case, the double red denotes the fact that we have added the new node to a corresponding 3-node of the (2,4) tree T', effectively creating a malformed 4-node. This formation has one red node (y) that is the parent of another red node (x), while we want it to have the two red nodes as siblings instead. To fix this problem, we perform a trinode restructuring of T. The trinode restructuring is done by the operation restructure(x), which consists of the following steps (see again the figure):

1. Take node x, its parent y, and grandparent z, and temporarily relabel them as a, b, and c, in left-to-right order, so that a, b, and c will be visited in this order by an inorder tree traversal.
2. Replace the grandparent z with the node labeled b, and make nodes a and c the children of b, keeping inorder relationships unchanged.

After performing the restructure(x) operation, we color b black and we color a and c red. Thus, the restructuring eliminates the double-red problem. Notice that the portion of any path through the restructured part of the tree is incident to exactly one black node, both before and after the trinode restructuring. Therefore, the black depth of the tree is unaffected.

[Figure: Restructuring a red-black tree to remedy a double red: (a) the four configurations for x, y, and z before restructuring; (b) after restructuring.]
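The relabeling step of restructure(x) can be isolated as a tiny pure function. The sketch below is illustrative only; the function name and the Boolean-parameter interface are assumptions, but it covers all four configurations of x, y, and z:

```python
def inorder_relabel(x, y, z, x_is_left_child, y_is_left_child):
    """Relabel node x, its parent y, and grandparent z as (a, b, c) in
    left-to-right (inorder) order; b becomes the root of the restructured
    subtree, with a and c as its children."""
    if y_is_left_child:                  # y is the left child of z
        return (x, y, z) if x_is_left_child else (y, x, z)
    else:                                # y is the right child of z
        return (z, y, x) if not x_is_left_child else (z, x, y)
```

In two of the four configurations the middle node b is y (a single rotation suffices); in the other two it is x (a double rotation).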
Case 2: The Sibling s of y Is Red. (See the accompanying figure.) In this case, the double red denotes an overflow in the corresponding (2,4) tree T'. To fix the problem, we perform the equivalent of a split operation. Namely, we do a recoloring: we color y and s black and their parent z red (unless z is the root, in which case it remains black). Notice that unless z is the root, the portion of any path through the affected part of the tree is incident to exactly one black node, both before and after the recoloring. Therefore, the black depth of the tree is unaffected by the recoloring unless z is the root, in which case it is increased by one.

However, it is possible that the double-red problem reappears after such a recoloring, albeit higher up in the tree T, since z may have a red parent. If the double-red problem reappears at z, then we repeat the consideration of the two cases at z. Thus, a recoloring either eliminates the double-red problem at node x, or propagates it to the grandparent z of x. We continue going up T performing recolorings until we finally resolve the double-red problem (with either a final recoloring or a trinode restructuring). Thus, the number of recolorings caused by an insertion is no more than half the height of tree T, that is, O(log n) by the proposition above.

[Figure: Recoloring to remedy the double-red problem: (a) before recoloring and the corresponding 5-node in the associated (2,4) tree before the split; (b) after recoloring and the corresponding nodes in the associated (2,4) tree after the split.]

As further examples, the next two figures show a sequence of insertion operations in a red-black tree.
[Figure: A sequence of insertions in a red-black tree: (a) initial tree; (b) an insertion; (c) an insertion causing a double red; (d) after restructuring; (e) an insertion causing a double red; (f) after recoloring (the root remains black); (g) an insertion; (h) an insertion; (i) an insertion causing a double red; (j) after restructuring; (k) an insertion causing a double red; (l) after recoloring. (Continued in the next figure.)]

[Figure: A sequence of insertions in a red-black tree (continued): (m) an insertion causing a double red; (n) after restructuring; (o) an insertion causing a double red; (p) after recoloring there is again a double red, to be handled by a restructuring; (q) after restructuring.]
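After any such sequence of insertions, the resulting tree must satisfy all three red-black properties. A small validator can confirm this; the sketch below assumes, purely for illustration, that a node is a (key, is_red, left, right) tuple with None for a null subtree:

```python
def check_red_black(node, is_root=True):
    """Verify the root, red, and depth properties below node.  Returns the
    number of black nodes on every path to a null subtree, or raises
    ValueError if any property is violated."""
    if node is None:
        return 0
    key, red, left, right = node
    if is_root and red:
        raise ValueError('root property violated')
    if red and any(c is not None and c[1] for c in (left, right)):
        raise ValueError('red property violated: red node %r has a red child' % key)
    lh = check_red_black(left, is_root=False)
    rh = check_red_black(right, is_root=False)
    if lh != rh:                      # unequal black heights below this node
        raise ValueError('depth property violated at %r' % key)
    return lh + (0 if red else 1)
```

For example, a black root with two red leaf children passes the check with a black height of 1, while a tree whose null subtrees see different black counts raises ValueError.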
Deletion

Deleting an item with key k from a red-black tree T initially proceeds as for a binary search tree. Structurally, the process results in the removal of a node that has at most one child (either that originally containing key k or its inorder predecessor) and the promotion of its remaining child (if any).

If the removed node was red, this structural change does not affect the black depths of any paths in the tree, nor introduce any red violations, and so the resulting tree remains a valid red-black tree. In the corresponding (2,4) tree T', this case denotes the shrinking of a 3-node or 4-node. If the removed node was black, then it either had zero children or it had one child that was a red leaf (because the null subtree of the removed node has black height 0). In the latter case, the removed node represents the black part of a corresponding 3-node, and we restore the red-black properties by recoloring the promoted child to black.

The more complex case is when a (nonroot) black leaf is removed. In the corresponding (2,4) tree, this denotes the removal of an item from a 2-node. Without rebalancing, such a change results in a deficit of one for the black depth along the path leading to the deleted item. By necessity, the removed node must have a sibling whose subtree has black height 1 (given that this was a valid red-black tree prior to the deletion of the black leaf).

To remedy this scenario, we consider a more general setting with a node z that is known to have two subtrees, T_heavy and T_light, such that the root of T_light (if any) is black and such that the black depth of T_heavy is exactly one more than that of T_light, as portrayed in the accompanying figure. In the case of a removed black leaf, z is the parent of that leaf and T_light is trivially the empty subtree that remains after the deletion. We describe the more general case of a deficit because our algorithm for rebalancing the tree will, in some cases, push the deficit higher in the tree (just as the resolution of a deletion in a (2,4) tree sometimes cascades upward). We let y denote the root of T_heavy. (Such a node exists because T_heavy has black height at least 1.)

[Figure: Portrayal of a deficit between the black heights of subtrees T_light and T_heavy of node z. The gray color in illustrating y and z denotes the fact that these nodes may be colored either black or red.]
We consider three possible cases to remedy a deficit.

Case 1: Node y Is Black and Has a Red Child x. (See the accompanying figure.) We perform a trinode restructuring, as originally described for insertion. The operation restructure(x) takes the node x, its parent y, and grandparent z, labels them temporarily left to right as a, b, and c, and replaces z with the node labeled b, making it the parent of the other two. We color a and c black, and give b the former color of z.

Notice that the path to T_light in the result includes one additional black node after the restructure, thereby resolving its deficit. In contrast, the number of black nodes on paths to any of the other three subtrees illustrated in the figure remains unchanged.

Resolving this case corresponds to a transfer operation in the (2,4) tree T' between the two children of the node containing z's item. The fact that y has a red child assures us that it represents either a 3-node or a 4-node. In effect, the item previously stored at z is demoted to become a new 2-node to resolve the deficiency, while an item stored at y or its child is promoted to take the place of the item previously stored at z.

[Figure: Resolving a black deficit in T_light by performing a trinode restructuring as restructure(x). Two possible configurations are shown (two other configurations are symmetric). The gray color of z in the left figures denotes the fact that this node may be colored either red or black. The root of the restructured portion is given that same color, while the children of that node are both colored black in the result.]
Case 2: Node y Is Black and Both Children of y Are Black (or None). Resolving this case corresponds to a fusion operation in the corresponding (2,4) tree T', as y must represent a 2-node. We do a recoloring: we color y red, and, if z is red, we color it black (see the accompanying figure). This does not introduce any red violation, because y does not have a red child.

In the case that z was originally red, and thus the parent in the corresponding (2,4) tree is a 3-node or 4-node, this recoloring resolves the deficit. The path leading to T_light includes one additional black node in the result, while the recoloring did not affect the number of black nodes on the path to the subtrees of T_heavy.

In the case that z was originally black, and thus the parent in the corresponding (2,4) tree is a 2-node, the recoloring has not increased the number of black nodes on the path to T_light; in fact, it has reduced the number of black nodes on the path to T_heavy. After this step, the two children of z will have the same black height. However, the entire tree rooted at z has become deficient, thereby propagating the problem higher in the tree; we must repeat consideration of all three cases at the parent of z as a remedy.

[Figure: Resolving a black deficit in T_light by a recoloring operation: (a) when z is originally red, reversing the colors of y and z resolves the black deficit in T_light, ending the process; (b) when z is originally black, recoloring causes the entire subtree of z to have a black deficit, requiring a cascading remedy.]
Case 3: Node y Is Red. (See the accompanying figure.) Because y is red and T_heavy has black depth at least 1, z must be black and the two subtrees of y must each have a black root and a black depth equal to that of T_heavy. In this case, we perform a rotation about y and z, and then recolor y black and z red. This denotes a reorientation of a 3-node in the corresponding (2,4) tree.

This does not immediately resolve the deficit, as the new subtree of z is an old subtree of y with a black root and a black height equal to that of the original T_heavy. We reapply the algorithm to resolve the deficit at z, knowing that the new child that is the root of T_heavy is now black, and therefore that either Case 1 applies or Case 2 applies. Furthermore, the next application will be the last, because Case 1 is always terminal and Case 2 will be terminal given that z is red.

[Figure: A rotation and recoloring about red node y and black node z, assuming a black deficit at z. This amounts to a change of orientation in the corresponding 3-node of a (2,4) tree. This operation does not affect the black depth of any paths through this portion of the tree. Furthermore, because y was originally red, the new subtree of z must have a black root and must have black height equal to the original T_heavy. Therefore, a black deficit remains at node z after the transformation.]

In the accompanying figure, we show a sequence of deletions on a red-black tree. A dashed edge in those figures represents a branch with a black deficiency that has not yet been resolved. We illustrate a Case 1 restructuring in parts (c) and (d). We illustrate a Case 2 recoloring in parts (f) and (g). Finally, we show an example of a Case 3 rotation between parts (i) and (j), concluding with a Case 2 recoloring in part (k).
[Figure: A sequence of deletions from a red-black tree: (a) initial tree; (b) a removal; (c) a removal causing a black deficit, handled by restructuring; (d) after restructuring; (e) a removal; (f) a removal causing a black deficit, handled by recoloring; (g) after recoloring; (h) a removal; (i) a removal causing a black deficit, handled initially by a rotation; (j) after the rotation the black deficit needs to be handled by a recoloring; (k) after the recoloring.]
Performance of Red-Black Trees

The asymptotic performance of a red-black tree is identical to that of an AVL tree or a (2,4) tree in terms of the sorted map ADT, with guaranteed logarithmic time bounds for most operations. The primary advantage of a red-black tree is that an insertion or deletion requires only a constant number of restructuring operations. (This is in contrast to AVL trees and (2,4) trees, both of which require a logarithmic number of structural changes per map operation in the worst case.) That is, an insertion or deletion in a red-black tree requires logarithmic time for a search, and may require a logarithmic number of recoloring operations that cascade upward. Yet we show, in the following propositions, that there are a constant number of rotations or restructure operations for a single map operation.

Proposition: The insertion of an item in a red-black tree storing n items can be done in O(log n) time and requires O(log n) recolorings and at most one trinode restructuring.

Justification: Recall that an insertion begins with a downward search, the creation of a new leaf node, and then a potential upward effort to remedy a double-red violation. There may be logarithmically many recoloring operations due to an upward cascading of Case 2 applications, but a single application of the Case 1 action eliminates the double-red problem with a trinode restructuring. Therefore, at most one restructuring operation is needed for a red-black tree insertion.

Proposition: The algorithm for deleting an item from a red-black tree with n items takes O(log n) time and performs O(log n) recolorings and at most two restructuring operations.

Justification: A deletion begins with the standard binary search tree deletion algorithm, which requires time proportional to the height of the tree; for red-black trees, that height is O(log n). The subsequent rebalancing takes place along an upward path from the parent of a deleted node. We considered three cases to remedy a resulting black deficit. Case 1 requires a trinode restructuring operation, yet completes the process, so this case is applied at most once. Case 2 may be applied logarithmically many times, but it only involves a recoloring of up to two nodes per application. Case 3 requires a rotation, but this case can only apply once, because if the rotation does not resolve the problem, the very next action will be a terminal application of either Case 1 or Case 2. In the worst case, there will be O(log n) recolorings from Case 2, a single rotation from Case 3, and a trinode restructuring from Case 1.
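The first proposition can also be checked empirically. The sketch below is a self-contained, insert-only toy red-black tree, not the book's RedBlackTreeMap; every name in it is invented for this example. Its insert method returns the number of trinode restructurings performed, which, per the proposition, should never exceed one:

```python
class _RBNode:
    __slots__ = ('key', 'red', 'left', 'right', 'parent')

    def __init__(self, key, parent=None):
        self.key, self.red, self.parent = key, True, parent   # new nodes are red
        self.left = self.right = None


class RBTreeSketch:
    """Insert-only red-black tree whose insert() reports restructurings."""

    def __init__(self):
        self.root = None

    def insert(self, key):
        """Insert key; return the number of trinode restructurings (0 or 1)."""
        if self.root is None:
            self.root = _RBNode(key)
            self.root.red = False                  # root property
            return 0
        cur = self.root
        while True:                                # standard BST descent
            if key < cur.key:
                if cur.left is None:
                    node = cur.left = _RBNode(key, cur)
                    break
                cur = cur.left
            else:
                if cur.right is None:
                    node = cur.right = _RBNode(key, cur)
                    break
                cur = cur.right
        x = node
        while x.parent is not None and x.parent.red:    # double red at x
            y = x.parent
            z = y.parent                           # black (root is always black)
            uncle = z.left if y is z.right else z.right
            if uncle is not None and uncle.red:    # Case 2: recolor ("split")
                y.red = uncle.red = False
                if z.parent is not None:           # root stays black
                    z.red = True
                x = z                              # double red may move up to z
            else:                                  # Case 1: trinode restructuring
                b = self._restructure(x, y, z)
                b.red = False                      # middle becomes black,
                b.left.red = b.right.red = True    # its children become red
                return 1                           # Case 1 is terminal
        return 0

    def _restructure(self, x, y, z):
        """Replace grandparent z by the inorder middle b of x, y, z."""
        if y is z.left:
            if x is y.left:
                a, b, c = x, y, z
                t2, t3 = x.right, y.right
            else:
                a, b, c = y, x, z
                t2, t3 = x.left, x.right
        else:
            if x is y.right:
                a, b, c = z, y, x
                t2, t3 = y.left, x.left
            else:
                a, b, c = z, x, y
                t2, t3 = x.left, x.right
        parent = z.parent
        if parent is None:
            self.root = b
        elif z is parent.left:
            parent.left = b
        else:
            parent.right = b
        b.parent = parent
        b.left, b.right = a, c
        a.parent = c.parent = b
        a.right, c.left = t2, t3                   # reattach inner subtrees
        if t2 is not None:
            t2.parent = a
        if t3 is not None:
            t3.parent = c
        return b
```

The Case 2 branch mirrors the recoloring cascade, which may loop up the tree, while the Case 1 branch returns immediately, matching the claim that a single restructuring is terminal.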
Python Implementation

A complete implementation of a RedBlackTreeMap class is provided in the code fragments that follow. It inherits from the standard TreeMap class and relies on the balancing framework described earlier. We begin, in the first code fragment, by overriding the definition of the nested _Node class to introduce an additional Boolean field to denote the current color of a node. Our constructor intentionally sets the color of a new node to red to be consistent with our approach for inserting items. We define several additional utility functions, at the top of the second code fragment, that aid in setting the color of nodes and querying various conditions.

When an element has been inserted as a leaf in the tree, the _rebalance_insert hook is called, allowing us the opportunity to modify the tree. The new node is red by default, so we need only look for the special case of the new node being the root (in which case it should be colored black), or the possibility that we have a double-red violation because the new node's parent is also red. To remedy such violations, we closely follow the case analysis described earlier.

The rebalancing after a deletion also follows the case analysis described earlier. An additional challenge is that by the time the rebalance hook is called, the old node has already been removed from the tree. That hook is invoked on the parent of the removed node. Some of the case analysis depends on knowing about the properties of the removed node. Fortunately, we can reverse engineer that information by relying on the red-black tree properties. In particular, if p denotes the parent of the removed node, it must be that:

- If p has no children, the removed node was a red leaf.
- If p has one child, the removed node was a black leaf, causing a deficit, unless that one remaining child is a red leaf.
- If p has two children, the removed node was a black node with one red child, which was promoted.

class RedBlackTreeMap(TreeMap):
  """Sorted map implementation using a red-black tree."""

  class _Node(TreeMap._Node):
    """Node class for red-black tree maintains bit that denotes color."""
    __slots__ = '_red'                    # add additional data member to the Node class

    def __init__(self, element, parent=None, left=None, right=None):
      super().__init__(element, parent, left, right)
      self._red = True                    # new node red by default

Code Fragment: Beginning of the RedBlackTreeMap class (continued in the next fragment).
  #--------------- positional-based utility methods ---------------
  # we consider a nonexistent child to be trivially black
  def _set_red(self, p): p._node._red = True
  def _set_black(self, p): p._node._red = False
  def _set_color(self, p, make_red): p._node._red = make_red
  def _is_red(self, p): return p is not None and p._node._red
  def _is_red_leaf(self, p): return self._is_red(p) and self.is_leaf(p)

  def _get_red_child(self, p):
    """Return a red child of p (or None if no such child)."""
    for child in (self.left(p), self.right(p)):
      if self._is_red(child):
        return child
    return None

  #--------------- support for insertions ---------------
  def _rebalance_insert(self, p):
    self._resolve_red(p)                  # new node is always red

  def _resolve_red(self, p):
    if self.is_root(p):
      self._set_black(p)                  # make root black
    else:
      parent = self.parent(p)
      if self._is_red(parent):            # double red problem
        uncle = self.sibling(parent)
        if not self._is_red(uncle):       # Case 1: misshapen 4-node
          middle = self._restructure(p)   # do trinode restructuring
          self._set_black(middle)         # and then fix colors
          self._set_red(self.left(middle))
          self._set_red(self.right(middle))
        else:                             # Case 2: overfull 5-node
          grand = self.parent(parent)
          self._set_red(grand)            # grandparent becomes red
          self._set_black(self.left(grand))   # its children become black
          self._set_black(self.right(grand))
          self._resolve_red(grand)        # recur at red grandparent

Code Fragment: Continuation of the RedBlackTreeMap class (continued from the previous fragment and concluded in the next).
  #--------------- support for deletions ---------------
  def _rebalance_delete(self, p):
    if len(self) == 1:
      self._set_black(self.root())        # special case: ensure that root is black
    elif p is not None:
      n = self.num_children(p)
      if n == 1:                          # deficit exists unless child is a red leaf
        c = next(self.children(p))
        if not self._is_red_leaf(c):
          self._fix_deficit(p, c)
      elif n == 2:                        # removed black node with red child
        if self._is_red_leaf(self.left(p)):
          self._set_black(self.left(p))
        else:
          self._set_black(self.right(p))

  def _fix_deficit(self, z, y):
    """Resolve black deficit at z, where y is the root of z's heavier subtree."""
    if not self._is_red(y):               # y is black; will apply Case 1 or 2
      x = self._get_red_child(y)
      if x is not None:                   # Case 1: y is black and has red child x; do "transfer"
        old_color = self._is_red(z)
        middle = self._restructure(x)
        self._set_color(middle, old_color)    # middle gets old color of z
        self._set_black(self.left(middle))    # children become black
        self._set_black(self.right(middle))
      else:                               # Case 2: y is black, but no red children; recolor as "fusion"
        self._set_red(y)
        if self._is_red(z):
          self._set_black(z)              # this resolves the problem
        elif not self.is_root(z):
          self._fix_deficit(self.parent(z), self.sibling(z))  # recur upward
    else:                                 # Case 3: y is red; rotate misaligned 3-node and repeat
      self._rotate(y)
      self._set_black(y)
      self._set_red(z)
      if z == self.right(y):
        self._fix_deficit(z, self.left(z))
      else:
        self._fix_deficit(z, self.right(z))

Code Fragment: Conclusion of the RedBlackTreeMap class (continued from the previous fragment).
Exercises

For help with exercises, please visit the site www.wiley.com/college/goodrich.

Reinforcement

R-: If we insert the entries (…), (…), (…), (…), and (…), in this order, into an initially empty binary search tree, what will it look like?
R-: Insert, into an empty binary search tree, entries with keys … (in this order). Draw the tree after each insertion.
R-: How many different binary search trees can store the keys {…}?
R-: Dr. Amongus claims that the order in which a fixed set of entries is inserted into a binary search tree does not matter; the same tree results every time. Give a small example that proves he is wrong.
R-: Dr. Amongus claims that the order in which a fixed set of entries is inserted into an AVL tree does not matter; the same AVL tree results every time. Give a small example that proves he is wrong.
R-: Our implementation of the TreeMap._subtree_search utility relies on recursion. For a large unbalanced tree, Python's default limit on recursive depth may be prohibitive. Give an alternative implementation of that method that does not rely on the use of recursion.
R-: Do the trinode restructurings in the referenced figures result in single or double rotations?
R-: Draw the AVL tree resulting from the insertion of an entry with key … into the referenced AVL tree.
R-: Draw the AVL tree resulting from the removal of the entry with key … from the referenced AVL tree.
R-: Explain why performing a rotation in an n-node binary tree, when using the array-based representation of a binary tree, takes Ω(n) time.
R-: Give a schematic figure showing the heights of subtrees during a deletion operation in an AVL tree that triggers a trinode restructuring, for the case in which the two children of the node denoted as y start with equal heights. What is the net effect on the height of the rebalanced subtree due to the deletion operation?
R-: Repeat the previous problem, considering the case in which y's children start with different heights.
R-: The rules for deletion in an AVL tree specifically require that when the two subtrees of the node denoted as y have equal height, child x should be chosen to be "aligned" with y (so that x and y are both left children or both right children). To better understand this requirement, repeat the earlier exercise assuming we picked the misaligned choice of x. Why might there be a problem in restoring the AVL property with that choice?
R-: Perform the following sequence of operations in an initially empty splay tree and draw the tree after each set of operations: (a) insert keys … in this order; (b) search for keys … in this order; (c) delete keys … in this order.
R-: What does a splay tree look like if its entries are accessed in increasing order by their keys?
R-: Is the referenced search tree a (2,4) tree? Why or why not?
R-: An alternative way of performing a split at a node w in a (2,4) tree is to partition w into w' and w'', with w' being a …-node and w'' a …-node. Which of the keys … do we store at w's parent? Why?
R-: Dr. Amongus claims that a (2,4) tree storing a set of entries will always have the same structure, regardless of the order in which the entries are inserted. Show that he is wrong.
R-: Draw four different red-black trees that correspond to the same (2,4) tree.
R-: Consider the set of keys K = {…}. (a) Draw a (2,4) tree storing K as its keys using the fewest number of nodes. (b) Draw a (2,4) tree storing K as its keys using the maximum number of nodes.
R-: Consider the sequence of keys (…). Draw the result of inserting entries with these keys (in the given order) into (a) an initially empty (2,4) tree; (b) an initially empty red-black tree.
R-: For the following statements about red-black trees, provide a justification for each true statement and a counterexample for each false one: (a) a subtree of a red-black tree is itself a red-black tree; (b) a node that does not have a sibling is red; (c) there is a unique (2,4) tree associated with a given red-black tree; (d) there is a unique red-black tree associated with a given (2,4) tree.
R-: Explain why you would get the same output in an inorder listing of the entries in a binary search tree T independent of whether T is maintained to be an AVL tree, splay tree, or red-black tree.
22,184 | - consider tree storing , entries what is the worst-case height of in the following casesa is binary search tree is an avl tree is splay tree is ( tree is red-black tree - draw an example of red-black tree that is not an avl tree - let be red-black tree and let be the position of the parent of the original node that is deleted by the standard search tree deletion algorithm prove that if has zero childrenthe removed node was red leaf - let be red-black tree and let be the position of the parent of the original node that is deleted by the standard search tree deletion algorithm prove that if has one childthe deletion has caused black deficit at pexcept for the case when the one remaining child is red leaf - let be red-black tree and let be the position of the parent of the original node that is deleted by the standard search tree deletion algorithm prove that if has two childrenthe removed node was black and had one red child creativity - explain how to use an avl tree or red-black tree to sort comparable elements in ( log ntime in the worst case - can we use splay tree to sort comparable elements in ( log ntime in the worst casewhy or why notc- repeat exercise - for the treemap class - show that any -node binary tree can be converted to any other -node binary tree using (nrotations - for key that is not found in binary search tree prove that both the greatest key less than and the least key greater than lie on the path traced by the search for - in section we claim that the find range method of binary search tree executes in ( htime where is the number of items found within the range and is the height of the tree our implementationin code fragment begins by searching for the starting keyand then repeatedly calling the after method until reaching the end of the range each call to after is guaranteed to run in (htime this suggests weaker (shbound for find rangesince it involves (scalls to after prove that this implementation achieves the stronger ( hbound |
- Describe how to perform an operation remove_range(start, stop) that removes all the items whose keys fall within range(start, stop) in a sorted map that is implemented with a binary search tree T, and show that this method runs in time O(s + h), where s is the number of items removed and h is the height of T.
- Repeat the previous problem using an AVL tree, achieving a running time of O(s + log n). Why doesn't the solution to the previous problem trivially result in an O(s + log n) algorithm for AVL trees?
- Suppose we wish to support a new method count_range(start, stop) that determines how many keys of a sorted map fall in the specified range. We could clearly implement this in O(s + h) time by adapting our approach to find_range. Describe how to modify the search tree structure to support O(h) worst-case time for count_range.
- If the approach described in the previous problem were implemented as part of the TreeMap class, what additional modifications (if any) would be necessary to a subclass such as AVLTreeMap in order to maintain support for the new method?
- Draw a schematic of an AVL tree such that a single remove operation could require Ω(log n) trinode restructurings (or rotations) from a leaf to the root in order to restore the height-balance property.
- In our AVL implementation, each node stores the height of its subtree, which is an arbitrarily large integer. The space usage for an AVL tree can be reduced by instead storing the balance factor of a node, which is defined as the height of its left subtree minus the height of its right subtree. Thus, the balance factor of a node is always equal to -1, 0, or 1, except during an insertion or removal, when it may become temporarily equal to -2 or +2. Reimplement the AVLTreeMap class storing balance factors rather than subtree heights.
- If we maintain a reference to the position of the leftmost node of a binary search tree, then the operation find_min can be performed in O(1) time. Describe how the implementation of the other map methods would need to be modified to maintain a reference to the leftmost position.
- If the approach described in the previous problem were implemented as part of the TreeMap class, what additional modifications (if any) would be necessary to a subclass such as AVLTreeMap in order to accurately maintain the reference to the leftmost position?
- Describe a modification to the binary search tree implementation having worst-case O(1)-time performance for the methods after(p) and before(p), without adversely affecting the asymptotics of any other methods.
- If the approach described in the previous problem were implemented as part of the TreeMap class, what additional modifications (if any) would be necessary to a subclass such as AVLTreeMap in order to maintain the efficiency?
- For a standard binary search tree, the table of running times earlier in this chapter claims O(h)-time performance for the delete(p) method. Explain why delete(p) would run in O(1) time if given a solution to the exercise providing O(1)-time after(p) and before(p).
- Describe a modification to the binary search tree data structure that would support the following two index-based operations for a sorted map in O(h) time, where h is the height of the tree: at_index(i), which returns the position of the item at index i of a sorted map, and index_of(p), which returns the index of the item at position p of a sorted map.
- Draw a splay tree T1 together with the sequence of updates that produced it, and a red-black tree T2 on the same set of ten entries, such that a preorder traversal of T1 would be the same as a preorder traversal of T2.
- Show that the nodes that become temporarily unbalanced in an AVL tree during an insertion may be nonconsecutive on the path from the newly inserted node to the root.
- Show that at most one node in an AVL tree becomes temporarily unbalanced after the immediate deletion of a node as part of the standard __delitem__ map operation.
- Let T and U be (2,4) trees storing n and m entries, respectively, such that all the entries in T have keys less than the keys of all the entries in U. Describe an O(log n + log m)-time method for joining T and U into a single tree that stores all the entries in T and U.
- Repeat the previous problem for red-black trees T and U.
- Justify the proposition from earlier in this chapter.
- The Boolean indicator used to mark nodes in a red-black tree as being "red" or "black" is not strictly needed when we have distinct keys. Describe a scheme for implementing a red-black tree without adding any extra space to standard binary search tree nodes.
- Let T be a red-black tree storing n entries, and let k be the key of an entry in T. Show how to construct from T, in O(log n) time, two red-black trees T1 and T2, such that T1 contains all the keys of T less than k, and T2 contains all the keys of T greater than k. This operation destroys T.
- Show that the nodes of any AVL tree T can be colored "red" and "black" so that T becomes a red-black tree.
- The standard splaying step requires two passes: one downward pass to find the node x to splay, followed by an upward pass to splay the node x. Describe a method for splaying and searching for x in one downward pass. Each substep now requires that you consider the next two nodes in the path down to x, with a possible zig substep performed at the end. Describe how to perform the zig-zig, zig-zag, and zig steps.
- Consider a variation of splay trees, called half-splay trees, where splaying a node at depth d stops as soon as the node reaches depth d/2. Perform an amortized analysis of half-splay trees.
- Describe a sequence of accesses to an n-node splay tree T, where n is odd, that results in T consisting of a single chain of nodes such that the path down T alternates between left children and right children.
- As a positional structure, our TreeMap implementation has a subtle flaw. A Position instance associated with a key-value pair (k, v) should remain valid as long as that item remains in the map. In particular, that position should be unaffected by calls to insert or delete other items in the collection. Our algorithm for deleting an item from a binary search tree may fail to provide such a guarantee, in particular because of our rule for using the inorder predecessor of a key as a replacement when deleting a key that is located in a node with two children. Give an explicit series of Python commands that demonstrates such a flaw.
- How might the TreeMap implementation be changed to avoid the flaw described in the previous problem?

Projects

- Perform an experimental study to compare the speed of our AVL tree, splay tree, and red-black tree implementations for various sequences of operations.
- Redo the previous exercise, including an implementation of skip lists.
- Implement the map ADT using a (2,4) tree.
- Redo the previous exercise, including all methods of the sorted map ADT.
- Redo the (2,4) tree implementation, providing positional support, as we did for binary search trees, so as to include methods first(), last(), before(p), after(p), and find_position(k). Each item should have a distinct position in this abstraction, even though several items may be stored at a single node of a tree.
- Write a Python class that can take any red-black tree and convert it into its corresponding (2,4) tree, and can take any (2,4) tree and convert it into its corresponding red-black tree.
- In describing multisets and multimaps, we describe a general approach for adapting a traditional map by storing all duplicates within a secondary container as a value in the map. Give an alternative implementation of a multimap using a binary search tree such that each entry of the map is stored at a distinct node of the tree. With the existence of duplicates, we redefine the search tree property so that all items in the left subtree of a position p with key k have keys that are less than or equal to k, while all items in the right subtree of p have keys that are greater than or equal to k. Use the public interface given earlier for the multimap.
- Prepare an implementation of splay trees that uses top-down splaying, as described in the earlier exercise. Perform extensive experimental studies to compare its performance to the standard bottom-up splaying implemented in this chapter.
- The mergeable heap ADT is an extension of the priority queue ADT consisting of operations add(k, v), min(), remove_min(), and merge(h), where the merge(h) operation performs a union of the mergeable heap h with the present one, incorporating all items into the current one while emptying h. Describe a concrete implementation of the mergeable heap ADT that achieves O(log n) performance for all its operations, where n denotes the size of the resulting heap for the merge operation.
- Write a program that performs a simple n-body simulation, called "Jumping Leprechauns." This simulation involves n leprechauns, numbered 1 to n. It maintains a gold value g_i for each leprechaun i, which begins with each leprechaun starting out with a million dollars worth of gold, that is, g_i = 1,000,000 for each i. In addition, the simulation also maintains, for each leprechaun i, a place on the horizon, which is represented as a double-precision floating-point number, x_i. In each iteration of the simulation, the simulation processes the leprechauns in order. Processing a leprechaun i during this iteration begins by computing a new place on the horizon for i, which is determined by the assignment x_i = x_i + r * g_i, where r is a random floating-point number between -1 and 1. The leprechaun i then steals half the gold from the nearest leprechauns on either side of him and adds this gold to his gold value, g_i. Write a program that can perform a series of iterations in this simulation for a given number, n, of leprechauns. You must maintain the set of horizon positions using a sorted map data structure described in this chapter.
Chapter Notes

Some of the data structures discussed in this chapter are extensively covered by Knuth in his Sorting and Searching book, and by Mehlhorn. AVL trees are due to Adel'son-Vel'skii and Landis, who invented this class of balanced search trees in 1962. Binary search trees, AVL trees, and hashing are described in Knuth's Sorting and Searching book. Average-height analyses for binary search trees can be found in the books by Aho, Hopcroft, and Ullman and by Cormen, Leiserson, Rivest, and Stein. The handbook by Gonnet and Baeza-Yates contains a number of theoretical and experimental comparisons among map implementations. Aho, Hopcroft, and Ullman discuss (2,3) trees, which are similar to (2,4) trees. Red-black trees were defined by Bayer. Variations and interesting properties of red-black trees are presented in a paper by Guibas and Sedgewick. The reader interested in learning more about different balanced tree data structures is referred to the books by Mehlhorn and by Tarjan, and the book by Mehlhorn and Tsakalidis. Knuth's book is excellent additional reading that includes early approaches to balancing trees. Splay trees were invented by Sleator and Tarjan.
Sorting and Selection

Contents

- Why Study Sorting Algorithms?
- Merge-Sort
  - Divide-and-Conquer
  - Array-Based Implementation of Merge-Sort
  - The Running Time of Merge-Sort
  - Merge-Sort and Recurrence Equations
  - Alternative Implementations of Merge-Sort
- Quick-Sort
  - Randomized Quick-Sort
  - Additional Optimizations for Quick-Sort
- Studying Sorting through an Algorithmic Lens
  - Lower Bound for Sorting
  - Linear-Time Sorting: Bucket-Sort and Radix-Sort
- Comparing Sorting Algorithms
- Python's Built-In Sorting Functions
  - Sorting According to a Key Function
- Selection
  - Prune-and-Search
  - Randomized Quick-Select
  - Analyzing Randomized Quick-Select
- Exercises
Why Study Sorting Algorithms?

Much of this chapter focuses on algorithms for sorting a collection of objects. Given a collection, the goal is to rearrange the elements so that they are ordered from smallest to largest (or to produce a new copy of the sequence with such an order). As we did when studying priority queues, we assume that such a consistent order exists. In Python, the natural order of objects is typically defined using the < operator, having the following properties:

- Irreflexive property: k < k never holds.
- Transitive property: if k1 < k2 and k2 < k3, then k1 < k3.

The transitive property is important, as it allows us to infer the outcome of certain comparisons without taking the time to perform those comparisons, thereby leading to more efficient algorithms.

Sorting is among the most important, and well studied, of computing problems. Data sets are often stored in sorted order, for example, to allow for efficient searches with the binary search algorithm. Many advanced algorithms for a variety of problems rely on sorting as a subroutine.

Python has built-in support for sorting data, in the form of the sort method of the list class that rearranges the contents of a list, and the built-in sorted function that produces a new list containing the elements of an arbitrary collection in sorted order. Those built-in functions use advanced algorithms (some of which we will describe in this chapter) and they are highly optimized. A programmer should typically rely on calls to the built-in sorting functions, as it is rare to have a special enough circumstance to warrant implementing a sorting algorithm from scratch.

With that said, it remains important to have a deep understanding of sorting algorithms. Most immediately, when calling the built-in function, it is good to know what to expect in terms of efficiency and how that may depend upon the initial order of elements or the type of objects that are being sorted. More generally, the ideas and approaches that have led to advances in the development of sorting algorithms carry over to algorithm development in many
other areas of computing.

We have introduced several sorting algorithms already in this book:

- Insertion-sort
- Selection-sort
- Bubble-sort
- Heap-sort

In this chapter, we present four other sorting algorithms, called merge-sort, quick-sort, bucket-sort, and radix-sort, and then discuss the advantages and disadvantages of the various algorithms. Later in the chapter, we will explore another technique used in Python for sorting data according to an order other than the natural order defined by the < operator.
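The distinction between the two built-in tools just mentioned is worth seeing concretely. A minimal sketch:

```python
data = [5, 2, 9, 1]

# sorted returns a new list in sorted order, leaving the original unchanged
result = sorted(data)
print(result)        # [1, 2, 5, 9]
print(data)          # [5, 2, 9, 1]

# list.sort rearranges the same list in place and returns None
outcome = data.sort()
print(data)          # [1, 2, 5, 9]
print(outcome)       # None
```

Both rely on the < operator of the stored elements, which is why the irreflexive and transitive properties above matter.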
Merge-Sort

Divide-and-Conquer

The first two algorithms we describe in this chapter, merge-sort and quick-sort, use recursion in an algorithmic design pattern called divide-and-conquer. We have already seen the power of recursion in describing algorithms in an elegant manner. The divide-and-conquer pattern consists of the following three steps:

1. Divide: If the input size is smaller than a certain threshold (say, one or two elements), solve the problem directly using a straightforward method and return the solution so obtained. Otherwise, divide the input data into two or more disjoint subsets.
2. Conquer: Recursively solve the subproblems associated with the subsets.
3. Combine: Take the solutions to the subproblems and merge them into a solution to the original problem.

Using Divide-and-Conquer for Sorting

We will first describe the merge-sort algorithm at a high level, without focusing on whether the data is an (array-based) Python list or a linked list; we will soon give concrete implementations for each. To sort a sequence S with n elements using the three divide-and-conquer steps, the merge-sort algorithm proceeds as follows:

1. Divide: If S has zero or one element, return S immediately; it is already sorted. Otherwise (S has at least two elements), remove all the elements from S and put them into two sequences, S1 and S2, each containing about half of the elements of S; that is, S1 contains the first ⌊n/2⌋ elements of S, and S2 contains the remaining ⌈n/2⌉ elements.
2. Conquer: Recursively sort sequences S1 and S2.
3. Combine: Put back the elements into S by merging the sorted sequences S1 and S2 into a sorted sequence.

In reference to the divide step, we recall that the notation ⌊x⌋ indicates the floor of x, that is, the largest integer k such that k ≤ x. Similarly, the notation ⌈x⌉ indicates the ceiling of x, that is, the smallest integer m such that x ≤ m.
We can visualize an execution of the merge-sort algorithm by means of a binary tree T, called the merge-sort tree. Each node of T represents a recursive invocation (or call) of the merge-sort algorithm. We associate with each node v of T the sequence S that is processed by the invocation associated with v. The children of node v are associated with the recursive calls that process the subsequences S1 and S2 of S. The external nodes of T are associated with individual elements of S, corresponding to instances of the algorithm that make no recursive calls.

The first figure below summarizes an execution of the merge-sort algorithm by showing the input and output sequences processed at each node of the merge-sort tree; the step-by-step evolution of the merge-sort tree is shown in the figures that follow. This algorithm visualization in terms of the merge-sort tree helps us analyze the running time of the merge-sort algorithm. In particular, since the size of the input sequence roughly halves at each recursive call of merge-sort, the height of the merge-sort tree is about log n (recall that the base of log is 2 if omitted).

Figure: Merge-sort tree T for an execution of the merge-sort algorithm on an input sequence: (a) input sequences processed at each node of T; (b) output sequences generated at each node of T.
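The claim that the merge-sort tree has height about log n can be checked empirically with a small instrumented sketch. The helper name merge_sort_height is our own; it traces the same splitting rule as merge-sort without doing any sorting:

```python
import math

def merge_sort_height(S, depth=0):
    """Return the height of the merge-sort recursion tree for sequence S."""
    n = len(S)
    if n < 2:
        return depth                      # external node: no further recursive calls
    mid = n // 2                          # same splitting rule as merge-sort
    left = merge_sort_height(S[:mid], depth + 1)
    right = merge_sort_height(S[mid:], depth + 1)
    return max(left, right)

# the height matches the ceiling of log2(n) for every size tried
for n in (1, 2, 8, 100, 1000):
    assert merge_sort_height(list(range(n))) == math.ceil(math.log2(n))
```

This agrees with the proposition stated shortly: the merge-sort tree on a sequence of size n has height ⌈log n⌉.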
Figure: Visualization of an execution of merge-sort. Each node of the tree represents a recursive call of merge-sort. The nodes drawn with dashed lines represent calls that have not been made yet. The node drawn with thick lines represents the current call. The empty nodes drawn with thin lines represent completed calls. The remaining nodes (drawn with thin lines and not empty) represent calls that are waiting for a child invocation to return. (Continues in the next figure.)
Figure: Visualization of an execution of merge-sort (combined with the preceding and following figures).
Figure: Visualization of an execution of merge-sort (continued from the preceding figure). Several invocations are omitted between panels (m) and (n). Note the merging of two halves performed in step (p).

Proposition: The merge-sort tree associated with an execution of merge-sort on a sequence of size n has height ⌈log n⌉.

We leave the justification of this proposition as a simple exercise. We will use this proposition to analyze the running time of the merge-sort algorithm.

Having given an overview of merge-sort and an illustration of how it works, let us consider each of the steps of this divide-and-conquer algorithm in more detail. Dividing a sequence of size n involves separating it at the element with index ⌈n/2⌉, and recursive calls can be started by passing these smaller sequences as parameters. The difficult step is combining the two sorted sequences into a single sorted sequence. Thus, before we present our analysis of merge-sort, we need to say more about how this is done.
Array-Based Implementation of Merge-Sort

We begin by focusing on the case when a sequence of items is represented as an (array-based) Python list. The merge function (shown in the code fragment below) is responsible for the subtask of merging two previously sorted sequences, S1 and S2, with the output copied into S. We copy one element during each pass of the while loop, conditionally determining whether the next element should be taken from S1 or S2. The divide-and-conquer merge-sort algorithm is given in the code fragment that follows.

We illustrate a step of the merge process in the accompanying figure. During the process, index i represents the number of elements of S1 that have been copied to S, while index j represents the number of elements of S2 that have been copied to S. Assuming S1 and S2 both have at least one uncopied element, we copy the smaller of the two elements being considered. Since i + j objects have been previously copied, the next element is placed in S[i+j]. If we reach the end of one of the sequences, we must copy the next element from the other.

def merge(S1, S2, S):
    """Merge two sorted Python lists S1 and S2 into properly sized list S."""
    i = j = 0
    while i + j < len(S):
        if j == len(S2) or (i < len(S1) and S1[i] < S2[j]):
            S[i+j] = S1[i]      # copy ith element of S1 as next item of S
            i += 1
        else:
            S[i+j] = S2[j]      # copy jth element of S2 as next item of S
            j += 1

Code Fragment: An implementation of the merge operation for Python's array-based list class.

Figure: A step in the merge of two sorted arrays for which S2[j] < S1[i]. We show the arrays before the copy step in (a) and after it in (b).
def merge_sort(S):
    """Sort the elements of Python list S using the merge-sort algorithm."""
    n = len(S)
    if n < 2:
        return                  # list is already sorted
    # divide
    mid = n // 2
    S1 = S[0:mid]               # copy of first half
    S2 = S[mid:n]               # copy of second half
    # conquer (with recursion)
    merge_sort(S1)              # sort copy of first half
    merge_sort(S2)              # sort copy of second half
    # merge results
    merge(S1, S2, S)            # merge sorted halves back into S

Code Fragment: An implementation of the recursive merge-sort algorithm for Python's array-based list class (using the merge function defined in the preceding code fragment).

The Running Time of Merge-Sort

We begin by analyzing the running time of the merge algorithm. Let n1 and n2 be the number of elements of S1 and S2, respectively. It is clear that the operations performed inside each pass of the while loop take O(1) time. The key observation is that during each iteration of the loop, one element is copied from either S1 or S2 into S (and that element is considered no further). Therefore, the number of iterations of the loop is n1 + n2. Thus, the running time of algorithm merge is O(n1 + n2).

Having analyzed the running time of the merge algorithm used to combine subproblems, let us analyze the running time of the entire merge-sort algorithm, assuming it is given an input sequence of n elements. For simplicity, we restrict our attention to the case where n is a power of 2. We leave it to an exercise to show that the result of our analysis also holds when n is not a power of 2.

When evaluating the merge-sort recursion, we rely on the analysis technique introduced earlier in the book: we account for the amount of time spent within each recursive call, but exclude any time spent waiting for successive recursive calls to terminate. In the case of our merge_sort function, we account for the time to divide the sequence into two subsequences, and the call to merge to combine the two sorted sequences, but we exclude the two recursive calls to merge_sort.
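The linear bound on merging can be observed directly: each pass of the merge loop copies exactly one element, so the number of iterations equals n1 + n2. A small self-contained sketch (the helper name merge_count is ours, not from the text; it mirrors the merge code fragment but also counts copies):

```python
def merge_count(S1, S2):
    """Merge two sorted lists; return (merged list, number of copy steps)."""
    S = [None] * (len(S1) + len(S2))
    i = j = copies = 0
    while i + j < len(S):
        if j == len(S2) or (i < len(S1) and S1[i] < S2[j]):
            S[i + j] = S1[i]    # take next element from S1
            i += 1
        else:
            S[i + j] = S2[j]    # take next element from S2
            j += 1
        copies += 1             # exactly one element copied per iteration
    return S, copies

merged, copies = merge_count([2, 5, 8], [1, 3, 9, 10])
print(merged)   # [1, 2, 3, 5, 8, 9, 10]
print(copies)   # 7, which is n1 + n2 = 3 + 4
```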
A merge-sort tree, as portrayed in the earlier figures, can guide our analysis. Consider a recursive call associated with a node v of the merge-sort tree T. The divide step at node v is straightforward; this step runs in time proportional to the size of the sequence for v, based on the use of slicing to create copies of the two list halves. We have already observed that the merging step also takes time that is linear in the size of the merged sequence. If we let i denote the depth of node v, the time spent at node v is O(n/2^i), since the size of the sequence handled by the recursive call associated with v is equal to n/2^i.

Looking at the tree T more globally, as shown in the figure below, we see that, given our definition of "time spent at a node," the running time of merge-sort is equal to the sum of the times spent at the nodes of T. Observe that T has exactly 2^i nodes at depth i. This simple observation has an important consequence, for it implies that the overall time spent at all the nodes of T at depth i is O(2^i · n/2^i), which is O(n). By the earlier proposition, the height of T is ⌈log n⌉. Thus, since the time spent at each of the ⌈log n⌉ + 1 levels of T is O(n), we have the following result:

Proposition: Algorithm merge-sort sorts a sequence S of size n in O(n log n) time, assuming two elements of S can be compared in O(1) time.

Figure: A visual analysis of the running time of merge-sort. Each node represents the time spent in a particular recursive call, labeled with the size of its subproblem. The time per level is O(n) at every depth (one node of size n at depth 0, two nodes of size n/2 at depth 1, four of size n/4 at depth 2, and so on), and with O(log n) levels the total time is O(n log n).
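To tie the proposition to working code, here is a self-contained run of the algorithm. The function bodies mirror the two code fragments earlier in this section, repeated here so the example can execute on its own:

```python
def merge(S1, S2, S):
    """Merge sorted lists S1 and S2 into properly sized list S."""
    i = j = 0
    while i + j < len(S):
        if j == len(S2) or (i < len(S1) and S1[i] < S2[j]):
            S[i + j] = S1[i]
            i += 1
        else:
            S[i + j] = S2[j]
            j += 1

def merge_sort(S):
    """Sort Python list S in place using merge-sort."""
    n = len(S)
    if n < 2:
        return                      # already sorted
    mid = n // 2
    S1, S2 = S[:mid], S[mid:]       # divide
    merge_sort(S1)                  # conquer
    merge_sort(S2)
    merge(S1, S2, S)                # combine

data = [85, 24, 63, 45, 17, 31, 96, 50]
merge_sort(data)
print(data)   # [17, 24, 31, 45, 50, 63, 85, 96]
```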