Acknowledgements

Writing this short book has been a fun and rewarding experience. We would like to thank, in no particular order, the following people who have helped us during the writing of this book.

Sonu Kapoor generously hosted our book which, when we released the first draft, received over thirteen thousand downloads; without his generosity this book would not have been able to reach so many people. Jon Skeet provided us with an alarming number of suggestions throughout, for which we are eternally grateful. Jon also edited this book.

We would also like to thank those who provided the odd suggestion via email to us. All feedback was listened to and you will no doubt see some content influenced by your suggestions.

A special thank you also goes out to those who helped publicise this book, from Microsoft's Channel 9 weekly show (thanks Dan!) to the many bloggers who helped spread the word. You gave us an audience and for that we are extremely grateful.

Thank you to all who contributed in some way to this book. The programming community never ceases to amaze us in how willing its constituents are to give time to projects such as this one.
About the Authors

Granville Barnett
Granville is currently a Ph.D. candidate at Queensland University of Technology (QUT) working on parallelism at the Microsoft QUT eResearch Centre. He also holds a degree in Computer Science, and is a Microsoft MVP. His main interests are in programming languages and compilers. Granville can be contacted via one of two places: either his personal website or his blog.

Luca Del Tongo
Luca is currently studying for his masters degree in Computer Science at Florence. His main interests vary from web development to research fields such as data mining and computer vision. Luca also maintains an Italian blog.
Introduction

What this book is, and what it isn't

This book provides implementations of common and uncommon algorithms in pseudocode which is language independent and provides for easy porting to most imperative programming languages. It is not a definitive book on the theory of data structures and algorithms.

For the most part this book presents implementations devised by the authors themselves, based on the concepts underpinning the respective algorithms, so it is more than possible that our implementations differ from those considered the norm.

You should use this book alongside another on the same subject, but one that contains formal proofs of the algorithms in question. In this book we use the abstract big Oh notation to depict the run time complexity of algorithms so that the book appeals to a larger audience.

Assumed knowledge

We have written this book with few assumptions of the reader, but some have been necessary in order to keep the book as concise and approachable as possible. We assume that the reader is familiar with the following:

1. Big Oh notation
2. An imperative programming language
3. Object oriented concepts

Big Oh notation

For run time complexity analysis we use big Oh notation extensively so it is vital that you are familiar with the general concepts to determine which is the best algorithm for you in certain scenarios. We have chosen to use big Oh notation for a few reasons, the most important of which is that it provides an abstract measurement by which we can judge the performance of algorithms without using mathematical proofs.
[Figure: algorithmic run time expansion]

The figure above shows some of the run times, to demonstrate how important it is to choose an efficient algorithm. For the sanity of our graph we have omitted cubic O(n³) and exponential O(2ⁿ) run times. Cubic and exponential algorithms should only ever be used for very small problems (if ever!); avoid them if feasibly possible.

The following list explains some of the most common big Oh notations:

O(1) constant: the operation doesn't depend on the size of its input, e.g. adding a node to the tail of a linked list where we always maintain a pointer to the tail node.

O(n) linear: the run time complexity is proportionate to the size of n.

O(log n) logarithmic: normally associated with algorithms that break the problem into smaller chunks per each invocation, e.g. searching a binary search tree (a concrete sketch follows this list).

O(n log n): usually associated with an algorithm that breaks the problem into smaller chunks per each invocation, and then takes the results of these smaller chunks and stitches them back together, e.g. quick sort.

O(n²) quadratic: e.g. bubble sort.

O(n³) cubic: very rare.

O(2ⁿ) exponential: incredibly rare.

If you encounter either of the latter two items (cubic and exponential) this is really a signal for you to review the design of your algorithm. While prototyping algorithm designs you may just have the intention of solving the problem irrespective of how fast it works. We would strongly advise that you always review your algorithm design and optimise where possible, particularly loops and recursive calls, so that you can get the most efficient run times for your algorithms.
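To make the logarithmic case concrete, here is a minimal sketch in Java (one of the target languages listed later in this chapter) of a binary search over a sorted array; each iteration halves the remaining search space, which is exactly the behaviour O(log n) describes. The class and method names are our own illustration, not part of the DSA project.

    // Binary search over a sorted array: O(log n).
    // Each iteration discards half of the remaining candidates.
    public final class BinarySearchExample {
        static int indexOf(int[] sorted, int value) {
            int low = 0, high = sorted.length - 1;
            while (low <= high) {
                int mid = low + (high - low) / 2; // avoids overflow of (low + high)
                if (sorted[mid] == value) {
                    return mid;        // found
                } else if (sorted[mid] < value) {
                    low = mid + 1;     // search the right half
                } else {
                    high = mid - 1;    // search the left half
                }
            }
            return -1; // not present
        }

        public static void main(String[] args) {
            int[] data = {2, 3, 5, 8, 13, 21, 34};
            System.out.println(indexOf(data, 13)); // prints 4
        }
    }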
The biggest asset that big Oh notation gives us is that it allows us to essentially discard things like hardware. If you have two sorting algorithms, one with a quadratic run time and the other with a logarithmic run time, then the logarithmic algorithm will always be faster than the quadratic one when the data set becomes suitably large. This applies even if the former is run on a machine that is far faster than the latter. Why? Because big Oh notation isolates a key factor in algorithm analysis: growth. An algorithm with a quadratic run time grows faster than one with a logarithmic run time; at some point, as the input grows, the logarithmic algorithm will become faster than the quadratic algorithm.

Big Oh notation also acts as a communication tool. Picture the scene: you are having a meeting with some fellow developers within your product group. You are discussing prototype algorithms for node discovery in massive networks. Several minutes elapse after you and two others have discussed your respective algorithms and how they work. Does this give you a good idea of how fast each respective algorithm is? No. The result of such a discussion will tell you more about the high level algorithm design rather than its efficiency. Replay the scene back in your head, but this time as well as talking about algorithm design each respective developer states the asymptotic run time of their algorithm. Using the latter approach you not only get a good general idea about the algorithm design, but also key efficiency data which allows you to make better choices when it comes to selecting an algorithm fit for purpose.

Some readers may actually work in a product group where they are given budgets per feature. Each feature holds with it a budget that represents its uppermost time bound. If you save some time in one feature it doesn't necessarily give you a buffer for the remaining features. Imagine you are working on an application, and you are in the team that is developing the routines that will essentially spin up everything that is required when the application is started. Everything is great until your boss comes in and tells you that the start up time should not exceed a certain number of milliseconds. The efficiency of every algorithm that is invoked during start up in this example is absolutely key to a successful product. Even if you don't have these budgets you should still strive for optimal solutions.

Taking a quantitative approach to many software development properties will make you a far superior programmer. Measuring one's work is critical to success.

Imperative programming language

All examples are given in a pseudo-imperative coding format and so the reader must know the basics of some imperative mainstream programming language to port the examples effectively. We have written this book with the following target languages in mind:

1. C++
2. C#
3. Java
The reason that we are explicit in this requirement is simple: all our implementations are based on an imperative thinking style. If you are a functional programmer you will need to apply various aspects from the functional paradigm to produce efficient solutions with respect to your functional language, whether it be Haskell, F#, OCaml, etc.

Two of the languages that we have listed (C# and Java) target virtual machines which provide various things like security sandboxing, and memory management via garbage collection algorithms. It is trivial to port our implementations to these languages. When porting to C++ you must remember to use pointers for certain things. For example, when we describe a linked list node as having a reference to the next node, this description is in the context of a managed environment. In C++ you should interpret the reference as a pointer to the next node, and so on. For programmers who have a fair amount of experience with their respective language these subtleties will present no issue, which is why we really do emphasise that the reader must be comfortable with at least one imperative language in order to successfully port the pseudo-implementations in this book.

It is essential that the user is familiar with primitive imperative language constructs before reading this book, otherwise you will just get lost. Some algorithms presented in this book can be confusing to follow even for experienced programmers!

Object oriented concepts

For the most part this book does not use features that are specific to any one language. In particular, we never provide data structures or algorithms that work on generic types; this is in order to make the samples as easy to follow as possible. However, to appreciate the designs of our data structures you will need to be familiar with the following object oriented (OO) concepts:

1. Inheritance
2. Encapsulation
3. Polymorphism

This is especially important if you are planning on looking at the C# target that we have implemented (more on that later), which makes extensive use of the OO concepts listed above. As a final note it is also desirable that the reader is familiar with interfaces, as the C# target uses interfaces throughout the sorting algorithms.

Pseudocode

Throughout this book we use pseudocode to describe our solutions. For the most part interpreting the pseudocode is trivial as it looks very much like a more abstract C++ or C#, but there are a few things to point out:

1. Pre-conditions should always be enforced.
2. Post-conditions represent the result of applying algorithm a to data structure d.
3. The type of parameters is inferred.
4. All primitive language constructs are explicitly begun and ended.
5. If an algorithm has a return type it will often be presented in the post-condition, but where the return type is sufficiently obvious it may be omitted for the sake of brevity.

Most algorithms in this book require parameters, and because we assign no explicit type to those parameters the type is inferred from the contexts in which they are used, and the operations performed upon them. Additionally, the name of the parameter usually acts as the biggest clue to its type. For instance n is a pseudo-name for a number and so you can assume, unless otherwise stated, that n translates to an integer that has the same number of bits as a word on the machine; similarly l is a pseudo-name for a list, where a list is a resizeable array (e.g. a vector).

The last major point of reference is that we always explicitly end a language construct. For instance if we wish to close the scope of a for loop we will explicitly state end for rather than leaving the interpretation of when scopes are closed to the reader. While implicit scope closure works well in simple code, in complex cases it can lead to ambiguity.

The pseudocode style that we use within this book is rather straightforward. All algorithms start with a simple algorithm signature, e.g.

  algorithm AlgorithmName(arg1, arg2, ..., argN)
    ...
  end AlgorithmName

Immediately after the algorithm signature we list any Pre or Post conditions:

  algorithm AlgorithmName(n)
    Pre: n is the value to compute the factorial of
         n >= 0
    Post: the factorial of n has been computed
    // ...
  end AlgorithmName

The example above describes an algorithm by the name of AlgorithmName, which takes a single numeric parameter n. The pre and post conditions follow the algorithm signature; you should always enforce the pre-conditions of an algorithm when porting it to your language of choice. Normally what is listed as a pre-condition is critical to the algorithm's operation. This may cover things like the actual parameter not being null, or that the collection passed in must contain at least n items. The post-condition mainly describes the effect of the algorithm's operation. An example of a post-condition might be "the list has been sorted in ascending order".

Because everything we describe is language independent you will need to make your own mind up on how best to handle pre-conditions. For example, in the C# target we have implemented, we consider non-conformance to pre-conditions to be exceptional cases. We provide a message in the exception to tell the caller why the algorithm has failed to execute normally.
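As an illustration of enforcing a pre-condition when porting, here is a minimal Java sketch (our own example, not taken from the DSA project) that treats a violated pre-condition as an exceptional case, following the approach described above.

    // Factorial with an enforced pre-condition: n must be >= 0.
    // A violated pre-condition is treated as an exceptional case.
    public final class Factorial {
        static long factorial(int n) {
            if (n < 0) {
                // Tell the caller why the algorithm failed to execute normally.
                throw new IllegalArgumentException("n must be >= 0 but was " + n);
            }
            long result = 1;
            for (int i = 2; i <= n; i++) {
                result *= i;
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(factorial(5)); // prints 120
        }
    }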
Tips for working through the examples

As with most books you get out what you put in, and so we recommend that in order to get the most out of this book you work through each algorithm with a pen and paper to track things like variable names, recursive calls, etc.

The best way to work through algorithms is to set up a table, and in that table give each variable its own column and continuously update these columns. This will help you keep track of and visualise the mutations that are occurring throughout the algorithm. Often while working through algorithms in such a way you can intuitively map relationships between data structures rather than trying to work out a few values on paper and the rest in your head. We suggest you put everything on paper irrespective of how trivial some variables and calculations may be, so that you always have a point of reference.

When dealing with recursive algorithm traces we recommend you do the same as the above, but also have a table that records function calls and who they return to. This approach is a far cleaner way than drawing out an elaborate map of function calls with arrows to one another, which gets large quickly and simply makes things more complex to follow. Track everything in a simple and systematic way to make your time studying the implementations far easier.

Book outline

We have split this book into two parts: Part 1 provides a discussion and pseudo-implementations of common and uncommon data structures, and Part 2 provides algorithms of varying purposes from sorting to string operations. The reader doesn't have to read the book sequentially from beginning to end: chapters can be read independently from one another. We suggest that in Part 1 you read each chapter in its entirety, but in Part 2 you can get away with just reading the section of a chapter that describes the algorithm you are interested in.

Each of the chapters on data structures presents initially the algorithms concerned with:

1. Insertion
2. Deletion
3. Searching

The previous list represents what we believe in the vast majority of cases to be the most important algorithms for each respective data structure.

For all readers we recommend that before looking at any algorithm you quickly look at the appendix, which contains a table listing the various symbols used within our algorithms and their meaning. One keyword that we would like to point out here is yield. You can think of yield in the same light as return. The return keyword causes the method to exit and returns control to the caller, whereas yield returns each value to the caller. With yield, control only returns to the caller when all values to return to the caller have been exhausted.
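Java, for example, has no direct yield-style generator, so one simple way to port algorithms that use yield is to collect each yielded value into a list and return the list to the caller. The following minimal sketch is entirely our own illustration of this translation.

    import java.util.ArrayList;
    import java.util.List;

    // Porting "yield" to Java: collect each yielded value into a list
    // and hand the whole list back to the caller at the end.
    public final class YieldExample {
        static List<Integer> firstNaturals(int count) {
            List<Integer> yielded = new ArrayList<>();
            for (int i = 1; i <= count; i++) {
                yielded.add(i); // stands in for "yield i"
            }
            return yielded;
        }

        public static void main(String[] args) {
            System.out.println(firstNaturals(5)); // prints [1, 2, 3, 4, 5]
        }
    }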
Testing

All the data structures and algorithms have been tested using a minimised test driven development style: on paper we flesh out the pseudocode algorithm, we then transcribe these tests into unit tests, satisfying them one by one. When all the test cases have been progressively satisfied we consider that algorithm suitably tested.

For the most part algorithms have fairly obvious cases which need to be satisfied. Some, however, have many areas which can prove to be more complex to satisfy. With such algorithms we will point out the test cases which are tricky and the corresponding portions of pseudocode within the algorithm that satisfy that respective case.

As you become more familiar with the actual problem you will be able to intuitively identify areas which may cause problems for your algorithm's implementation. This in some cases will yield an overwhelming list of concerns which will hinder your ability to design an algorithm greatly. When you are bombarded with such a vast amount of concerns look at the overall problem again and sub-divide the problem into smaller problems. Solving the smaller problems and then composing them is a far easier task than clouding your mind with too many little details.

The only type of testing that we use in the implementation of all that is provided in this book are unit tests. Because unit tests contribute such a core piece of creating somewhat more stable software we invite the reader to view the appendix, which describes testing in more depth.

Where can I get the code?

This book doesn't provide any code specifically aligned with it, however we do actively maintain an open source project that houses a C# implementation of all the pseudocode listed. The project is named Data Structures and Algorithms (DSA) and can be found online.

Final messages

We have just a few final messages to the reader that we hope you digest before you embark on reading this book:

1. Understand how the algorithm works first in an abstract sense; and
2. Always work through the algorithms on paper to understand how they achieve their outcome.

If you always follow these key points, you will get the most out of this book. All readers are encouraged to provide suggestions, feature requests, and bugs so we can further improve our implementations.
Part 1: Data Structures
Linked Lists

Linked lists can be thought of from a high level perspective as being a series of nodes. Each node has at least a single pointer to the next node, and in the last node's case a null pointer representing that there are no more nodes in the linked list.

In DSA our implementations of linked lists always maintain head and tail pointers so that insertion at either the head or tail of the list is a constant time operation. Random insertion is excluded from this and will be a linear operation. As such, linked lists in DSA have the following characteristics:

1. Insertion is O(1)
2. Deletion is O(n)
3. Searching is O(n)

Out of the three operations the one that stands out is that of insertion. In DSA we chose to always maintain pointers (or more aptly references) to the node(s) at the head and tail of the linked list, and so performing a traditional insertion to either the front or back of the linked list is an O(1) operation. An exception to this rule is performing an insertion before a node that is neither the head nor tail in a singly linked list. When the node we are inserting before is somewhere in the middle of the linked list (known as random insertion) the complexity is O(n). In order to add before the designated node we need to traverse the linked list to find that node's current predecessor. This traversal yields an O(n) run time.

This data structure is trivial, but linked lists have a few key points which at times make them very attractive:

1. the list is dynamically resized, thus it incurs no copy penalty like an array or vector would eventually incur; and
2. insertion is O(1).

Singly Linked List

Singly linked lists are one of the most primitive data structures you will find in this book. Each node that makes up a singly linked list consists of a value, and a reference to the next node (if any) in the list.
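As a concrete illustration, a singly linked list node might be sketched in Java as follows. This is a minimal sketch of our own, not the DSA project's implementation.

    // A singly linked list node: a value plus a reference to the next node.
    // "next" is null for the last node in the list.
    class Node {
        int value;
        Node next;

        Node(int value) {
            this.value = value;
            this.next = null; // no successor yet
        }
    }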
[Figure: a singly linked list node]
[Figure: a singly linked list populated with integers]

Insertion

In general when people talk about insertion with respect to linked lists of any form they implicitly refer to the adding of a node to the tail of the list. When you use an API like that of DSA and you see a general purpose method that adds a node to the list, you can assume that you are adding the node to the tail of the list, not the head.

Adding a node to a singly linked list has only two cases:

1. head = ∅, in which case the node we are adding is now both the head and tail of the list; or
2. we simply need to append our node onto the end of the list, updating the tail reference appropriately.

  algorithm Add(value)
    Pre: value is the value to add to the list
    Post: value has been placed at the tail of the list
    n ← node(value)
    if head = ∅
      head ← n
      tail ← n
    else
      tail.Next ← n
      tail ← n
    end if
  end Add

As an example of the previous algorithm consider adding a sequence of integers to an empty list; each new value ends up at the tail, as shown in the second figure above.
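A direct Java port of the Add algorithm, reusing the hypothetical Node class sketched earlier, might look like this; again a sketch of our own rather than the DSA project's code.

    // Maintains head and tail pointers so tail insertion is O(1).
    class SinglyLinkedList {
        private Node head;
        private Node tail;

        void add(int value) {
            Node n = new Node(value);
            if (head == null) {
                // Case 1: empty list, the new node is both head and tail.
                head = n;
                tail = n;
            } else {
                // Case 2: append after the current tail.
                tail.next = n;
                tail = n;
            }
        }
    }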
Searching

Searching a linked list is straightforward: we simply traverse the list, checking the value we are looking for against the value of each node in the linked list. The algorithm listed in this section is very similar to that used for traversal (defined later in this chapter).

  algorithm Contains(head, value)
    Pre: head is the head node in the list
         value is the value to search for
    Post: the item is either in the linked list, true; otherwise false
    n ← head
    while n ≠ ∅ and n.Value ≠ value
      n ← n.Next
    end while
    if n = ∅
      return false
    end if
    return true
  end Contains

Deletion

Deleting a node from a linked list is straightforward, but there are a few cases we need to account for:

1. the list is empty; or
2. the node to remove is the only node in the linked list; or
3. we are removing the head node; or
4. we are removing the tail node; or
5. the node to remove is somewhere in between the head and tail; or
6. the item to remove doesn't exist in the linked list

The algorithm whose cases we have described will remove a node from anywhere within a list, irrespective of whether the node is the head etc. If you know that items will only ever be removed from the head or tail of the list then you can create much more concise algorithms. In the case of always removing from the front of the linked list, deletion becomes an O(1) operation.
  algorithm Remove(head, value)
    Pre: head is the head node in the list
         value is the value to remove from the list
    Post: value is removed from the list, true; otherwise false
    if head = ∅
      // case 1
      return false
    end if
    n ← head
    if n.Value = value
      if head = tail
        // case 2
        head ← ∅
        tail ← ∅
      else
        // case 3
        head ← head.Next
      end if
      return true
    end if
    while n.Next ≠ ∅ and n.Next.Value ≠ value
      n ← n.Next
    end while
    if n.Next ≠ ∅
      if n.Next = tail
        // case 4
        tail ← n
      end if
      // this is only case 5 if the conditional above was false
      n.Next ← n.Next.Next
      return true
    end if
    // case 6
    return false
  end Remove

Traversing the list

Traversing a singly linked list is the same as that of traversing a doubly linked list (defined later in this chapter). You start at the head of the list and continue until you come across a node that is ∅. The two cases are as follows:

1. node = ∅, we have exhausted all nodes in the linked list; or
2. we must update the node reference to be node.Next.

The algorithm described is a very simple one that makes use of a simple while loop to check the first case.
  algorithm Traverse(head)
    Pre: head is the head node in the list
    Post: the items in the list have been traversed
    n ← head
    while n ≠ ∅
      yield n.Value
      n ← n.Next
    end while
  end Traverse

Traversing the list in reverse order

Traversing a singly linked list in a forward manner (i.e. left to right) is simple as demonstrated above. However, what if we wanted to traverse the nodes in the linked list in reverse order for some reason? The algorithm to perform such a traversal is very simple: just as in the deletion algorithm we will need to acquire a reference to the predecessor of a node, even though the fundamental characteristics of the nodes that make up a singly linked list make this an expensive operation. For each node, finding its predecessor is an O(n) operation, so over the course of traversing the whole list backwards the cost becomes O(n²). A figure below depicts the following algorithm being applied to a linked list of integers.

  algorithm ReverseTraversal(head, tail)
    Pre: head and tail belong to the same list
    Post: the items in the list have been traversed in reverse order
    if tail ≠ ∅
      curr ← tail
      while curr ≠ head
        prev ← head
        while prev.Next ≠ curr
          prev ← prev.Next
        end while
        yield curr.Value
        curr ← prev
      end while
      yield curr.Value
    end if
  end ReverseTraversal

This algorithm is only of real interest when we are using singly linked lists, as you will soon see that doubly linked lists (defined in the next section) make reverse list traversal simple and efficient.

Doubly Linked List

Doubly linked lists are very similar to singly linked lists. The only difference is that each node has a reference to both the next and previous nodes in the list.
[Figure: reverse traversal of a singly linked list]
[Figure: a doubly linked list node]
The following algorithms for the doubly linked list are exactly the same as those listed previously for the singly linked list:

1. Searching (defined earlier)
2. Traversal (defined earlier)

Insertion

The only major difference between the doubly linked list Add algorithm and the singly linked list version is that we need to remember to bind the previous pointer of n to the previous tail node if n was not the first node to be inserted into the list.

  algorithm Add(value)
    Pre: value is the value to add to the list
    Post: value has been placed at the tail of the list
    n ← node(value)
    if head = ∅
      head ← n
      tail ← n
    else
      n.Previous ← tail
      tail.Next ← n
      tail ← n
    end if
  end Add

The figure below shows the doubly linked list after adding the sequence of integers used in the singly linked list example.

[Figure: a doubly linked list populated with integers]

Deletion

As you may have guessed, the cases that we use for deletion in a doubly linked list are exactly the same as those defined for the singly linked list. Like insertion we have the added task of binding an additional reference (Previous) to the correct value.
22,418
  algorithm Remove(head, value)
    Pre: head is the head node in the list
         value is the value to remove from the list
    Post: value is removed from the list, true; otherwise false
    if head = ∅
      return false
    end if
    if value = head.Value
      if head = tail
        head ← ∅
        tail ← ∅
      else
        head ← head.Next
        head.Previous ← ∅
      end if
      return true
    end if
    n ← head.Next
    while n ≠ ∅ and value ≠ n.Value
      n ← n.Next
    end while
    if n = tail
      tail ← tail.Previous
      tail.Next ← ∅
      return true
    else if n ≠ ∅
      n.Previous.Next ← n.Next
      n.Next.Previous ← n.Previous
      return true
    end if
    return false
  end Remove

Reverse Traversal

Singly linked lists have a forward only design, which is why the reverse traversal algorithm defined earlier required some creative invention. Doubly linked lists make reverse traversal as simple as forward traversal, except that we start at the tail node and update the pointers in the opposite direction. The figure below shows the reverse traversal algorithm in action.
22,419
[Figure: doubly linked list reverse traversal]

  algorithm ReverseTraversal(tail)
    Pre: tail is the tail node of the list to traverse
    Post: the list has been traversed in reverse order
    n ← tail
    while n ≠ ∅
      yield n.Value
      n ← n.Previous
    end while
  end ReverseTraversal

Summary

Linked lists are good to use when you have an unknown number of items to store. Using a data structure like an array would require you to specify the size up front; exceeding that size involves invoking a resizing algorithm which has a linear run time. You should also use linked lists when you will only remove nodes at either the head or tail of the list, to maintain a constant run time. This requires maintaining pointers to the nodes at the head and tail of the list, but the memory overhead will pay for itself if this is an operation you will be performing many times.

What linked lists are not very good for is random insertion, accessing nodes by index, and searching. At the expense of a little memory (in most cases a few bytes would suffice) and a few more read/writes you could maintain a count variable that tracks how many items are contained in the list, so that accessing such a primitive property is a constant operation. You just need to update count during the insertion and deletion algorithms.

Singly linked lists should be used when you are only performing basic insertions. In general doubly linked lists are more accommodating for non-trivial operations on a linked list.

We recommend the use of a doubly linked list when you require forwards and backwards traversal. For most cases this requirement is present. For example, consider a token stream that you want to parse in a recursive descent fashion. Sometimes you will have to backtrack in order to create the correct parse tree. In this scenario a doubly linked list is best as its design makes bi-directional traversal much simpler and quicker than that of a singly linked
22,420
list.
Binary Search Tree

Binary search trees (BSTs) are very simple to understand. We start with a root node with value x, where the left subtree of x contains nodes with values < x and the right subtree contains nodes whose values are ≥ x. Each node follows the same rules with respect to nodes in their left and right subtrees.

BSTs are of interest because they have operations which are favourably fast: insertion, look up, and deletion can all be done in O(log n) time. It is important to note that the O(log n) times for these operations can only be attained if the BST is reasonably balanced; for a tree data structure with self balancing properties see the AVL tree covered later in the book.

In the following examples you can assume, unless used as a parameter alias, that root is a reference to the root node of the tree.

[Figure: simple unbalanced binary search tree]
Insertion

As mentioned previously, insertion is an O(log n) operation provided that the tree is moderately balanced.

  algorithm Insert(value)
    Pre: value has passed custom type checks for type T
    Post: value has been placed in the correct location in the tree
    if root = ∅
      root ← node(value)
    else
      InsertNode(root, value)
    end if
  end Insert

  algorithm InsertNode(current, value)
    Pre: current is the node to start from
    Post: value has been placed in the correct location in the tree
    if value < current.Value
      if current.Left = ∅
        current.Left ← node(value)
      else
        InsertNode(current.Left, value)
      end if
    else
      if current.Right = ∅
        current.Right ← node(value)
      else
        InsertNode(current.Right, value)
      end if
    end if
  end InsertNode

The insertion algorithm is split for a good reason. The first algorithm (non-recursive) checks a very core base case: whether or not the tree is empty. If the tree is empty then we simply create our root node and finish. In all other cases we invoke the recursive InsertNode algorithm, which simply guides us to the first appropriate place in the tree to put value. Note that at each stage we perform a binary chop: we either choose to recurse into the left subtree or the right by comparing the new value with that of the current node. For any totally ordered type, no value can simultaneously satisfy the conditions to place it in both subtrees.
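A hedged Java sketch of the two-part insertion, with hypothetical class names of our own choosing, might look as follows; values less than the current node go left, all others go right.

    // A minimal binary search tree insert in Java (our own sketch).
    // Values < node go left; values >= node go right.
    class BstNode {
        int value;
        BstNode left, right;
        BstNode(int value) { this.value = value; }
    }

    class BinarySearchTree {
        private BstNode root;

        void insert(int value) {
            if (root == null) {
                root = new BstNode(value); // base case: empty tree
            } else {
                insertNode(root, value);
            }
        }

        private void insertNode(BstNode current, int value) {
            if (value < current.value) {
                if (current.left == null) current.left = new BstNode(value);
                else insertNode(current.left, value);
            } else {
                if (current.right == null) current.right = new BstNode(value);
                else insertNode(current.right, value);
            }
        }
    }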
Searching

Searching a BST is even simpler than insertion. The pseudocode is self-explanatory but we will look briefly at the premise of the algorithm nonetheless. We have talked previously about insertion: we go either left or right, with the right subtree containing values that are ≥ x where x is the value of the node we are inserting. When searching, the rules are made a little more atomic and at any one time we have four cases to consider:

1. the root = ∅, in which case value is not in the BST; or
2. root.Value = value, in which case value is in the BST; or
3. value < root.Value, we must inspect the left subtree of root for value; or
4. value > root.Value, we must inspect the right subtree of root for value.

  algorithm Contains(root, value)
    Pre: root is the root node of the tree, value is what we would like to locate
    Post: value is either located or not
    if root = ∅
      return false
    end if
    if root.Value = value
      return true
    else if value < root.Value
      return Contains(root.Left, value)
    else
      return Contains(root.Right, value)
    end if
  end Contains
Deletion

Removing a node from a BST is fairly straightforward, with four cases to consider:

1. the value to remove is a leaf node; or
2. the value to remove has a right subtree, but no left subtree; or
3. the value to remove has a left subtree, but no right subtree; or
4. the value to remove has both a left and right subtree, in which case we promote the largest value in the left subtree.

There is also an implicit fifth case whereby the node to be removed is the only node in the tree. This case is already covered by the first, but should be noted as a possibility nonetheless.

Of course in a BST a value may occur more than once. In such a case the first occurrence of that value in the BST will be removed.

[Figure: binary search tree deletion cases: #1 leaf node; #2 right subtree, no left subtree; #3 left subtree, no right subtree; #4 both a right subtree and a left subtree]

The Remove algorithm given below relies on two further helper algorithms named FindParent and FindNode, which are described later in this chapter.
  algorithm Remove(value)
    Pre: value is the value of the node to remove
         root is the root node of the BST
         Count is the number of items in the BST
    Post: the node with value is removed if found, in which case yields true; otherwise false
    nodeToRemove ← FindNode(value)
    if nodeToRemove = ∅
      return false // value not in BST
    end if
    parent ← FindParent(value)
    if Count = 1
      root ← ∅ // we are removing the only node in the BST
    else if nodeToRemove.Left = ∅ and nodeToRemove.Right = ∅
      // case #1: leaf node
      if nodeToRemove.Value < parent.Value
        parent.Left ← ∅
      else
        parent.Right ← ∅
      end if
    else if nodeToRemove.Left = ∅ and nodeToRemove.Right ≠ ∅
      // case #2: right subtree, no left subtree
      if nodeToRemove.Value < parent.Value
        parent.Left ← nodeToRemove.Right
      else
        parent.Right ← nodeToRemove.Right
      end if
    else if nodeToRemove.Left ≠ ∅ and nodeToRemove.Right = ∅
      // case #3: left subtree, no right subtree
      if nodeToRemove.Value < parent.Value
        parent.Left ← nodeToRemove.Left
      else
        parent.Right ← nodeToRemove.Left
      end if
    else
      // case #4: both subtrees present
      largestValue ← nodeToRemove.Left
      while largestValue.Right ≠ ∅
        // find the largest value in the left subtree of nodeToRemove
        largestValue ← largestValue.Right
      end while
      // set the parent's Right pointer of largestValue to ∅
      FindParent(largestValue.Value).Right ← ∅
      nodeToRemove.Value ← largestValue.Value
    end if
    Count ← Count − 1
    return true
  end Remove
Finding the parent of a given node

The purpose of this algorithm is simple: to return a reference (or pointer) to the parent node of the one with the given value. We have found that such an algorithm is very useful, especially when performing extensive tree transformations.

  algorithm FindParent(value, root)
    Pre: value is the value of the node we want to find the parent of
         root is the root node of the BST and is ≠ ∅
    Post: a reference to the parent node of value if found; otherwise ∅
    if value = root.Value
      return ∅
    end if
    if value < root.Value
      if root.Left = ∅
        return ∅
      else if root.Left.Value = value
        return root
      else
        return FindParent(value, root.Left)
      end if
    else
      if root.Right = ∅
        return ∅
      else if root.Right.Value = value
        return root
      else
        return FindParent(value, root.Right)
      end if
    end if
  end FindParent

A special case in the above algorithm is when the specified value does not exist in the BST, in which case we return ∅. Callers to this algorithm must take account of this possibility unless they are already certain that a node with the specified value exists.

Attaining a reference to a node

This algorithm is very similar to FindParent, but instead of returning a reference to the parent of the node with the specified value, it returns a reference to the node itself. Again, ∅ is returned if the value isn't found.
  algorithm FindNode(root, value)
    Pre: value is the value of the node we want to find
         root is the root node of the BST
    Post: a reference to the node with value if found; otherwise ∅
    if root = ∅
      return ∅
    end if
    if root.Value = value
      return root
    else if value < root.Value
      return FindNode(root.Left, value)
    else
      return FindNode(root.Right, value)
    end if
  end FindNode

Astute readers will have noticed that the FindNode algorithm is exactly the same as the Contains algorithm (defined earlier) with the modification that we are returning a reference to a node, not true or false. Given FindNode, the easiest way of implementing Contains is to call FindNode and compare the return value with ∅.

Finding the smallest and largest values in the binary search tree

To find the smallest value in a BST you simply traverse the nodes in the left subtree of the BST, always going left upon each encounter with a node, terminating when you find a node with no left subtree. The opposite is the case when finding the largest value in the BST. Both algorithms are incredibly simple, and are listed simply for completeness.

The base case in both the FindMin and FindMax algorithms is when the Left (FindMin) or Right (FindMax) node reference is ∅, in which case we have reached the last node.

  algorithm FindMin(root)
    Pre: root is the root node of the BST
         root ≠ ∅
    Post: the smallest value in the BST is located
    if root.Left = ∅
      return root.Value
    end if
    return FindMin(root.Left)
  end FindMin
  algorithm FindMax(root)
    Pre: root is the root node of the BST
         root ≠ ∅
    Post: the largest value in the BST is located
    if root.Right = ∅
      return root.Value
    end if
    return FindMax(root.Right)
  end FindMax

Tree Traversals

There are various strategies which can be employed to traverse the items in a tree; the choice of strategy depends on which node visitation order you require. In this section we will touch on the traversals that DSA provides on all data structures that derive from BinarySearchTree.

Preorder

When using the preorder algorithm, you visit the root first, then traverse the left subtree, and finally traverse the right subtree. An example of preorder traversal is shown in the figure below.

  algorithm Preorder(root)
    Pre: root is the root node of the BST
    Post: the nodes in the BST have been visited in preorder
    if root ≠ ∅
      yield root.Value
      Preorder(root.Left)
      Preorder(root.Right)
    end if
  end Preorder

Postorder

This algorithm is very similar to preorder, however the value of the node is yielded after traversing both subtrees. An example of postorder traversal is shown in the figure below.

  algorithm Postorder(root)
    Pre: root is the root node of the BST
    Post: the nodes in the BST have been visited in postorder
    if root ≠ ∅
      Postorder(root.Left)
      Postorder(root.Right)
      yield root.Value
    end if
  end Postorder
[Figure: preorder visit binary search tree example]
[Figure: postorder visit binary search tree example]
Inorder

Another variation of the traversal algorithms defined previously is that of inorder traversal, where the value of the current node is yielded in between traversing the left subtree and the right subtree. An example of inorder traversal is shown in the figure below.

[Figure: inorder visit binary search tree example]

  algorithm Inorder(root)
    Pre: root is the root node of the BST
    Post: the nodes in the BST have been visited in inorder
    if root ≠ ∅
      Inorder(root.Left)
      yield root.Value
      Inorder(root.Right)
    end if
  end Inorder

One of the beauties of inorder traversal is that values are yielded in their comparison order. In other words, when traversing a populated BST with the inorder strategy, the yielded sequence has the property that each value is ≤ the value that follows it.
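A minimal Java sketch of inorder traversal, reusing the hypothetical BstNode class from the earlier insertion sketch, might look as follows; collecting into a list stands in for yield, and on a BST the list comes out in ascending order.

    import java.util.ArrayList;
    import java.util.List;

    // Inorder traversal: left subtree, current value, right subtree.
    // On a BST this yields the values in ascending (comparison) order.
    class InorderExample {
        static void inorder(BstNode root, List<Integer> out) {
            if (root != null) {
                inorder(root.left, out);
                out.add(root.value); // stands in for "yield root.Value"
                inorder(root.right, out);
            }
        }
    }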
Breadth First

Traversing a tree in breadth first order yields the values of all nodes of a particular depth in the tree before any deeper ones. In other words, given a depth d we would visit the values of all nodes at d in a left to right fashion, then we would proceed to d + 1 and so on, until we had no more nodes to visit. An example of breadth first traversal is shown in the figure below.

Traditionally breadth first traversal is implemented using a list (vector, resizeable array, etc.) to store the values of the nodes visited in breadth first order, and then a queue to store those nodes that have yet to be visited.

[Figure: breadth first visit binary search tree example]
  algorithm BreadthFirst(root)
    Pre: root is the root node of the BST
    Post: the nodes in the BST have been visited in breadth first order
    q ← queue
    while root ≠ ∅
      yield root.Value
      if root.Left ≠ ∅
        q.Enqueue(root.Left)
      end if
      if root.Right ≠ ∅
        q.Enqueue(root.Right)
      end if
      if !q.IsEmpty()
        root ← q.Dequeue()
      else
        root ← ∅
      end if
    end while
  end BreadthFirst
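A hedged Java port of BreadthFirst, again reusing the hypothetical BstNode class, can use java.util.ArrayDeque as the queue of nodes that have yet to be visited:

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Queue;

    // Breadth first traversal: visit all nodes at depth d before depth d+1,
    // using a queue to hold nodes that have yet to be visited.
    class BreadthFirstExample {
        static List<Integer> breadthFirst(BstNode root) {
            List<Integer> visited = new ArrayList<>();
            Queue<BstNode> queue = new ArrayDeque<>();
            while (root != null) {
                visited.add(root.value); // stands in for "yield root.Value"
                if (root.left != null) queue.offer(root.left);
                if (root.right != null) queue.offer(root.right);
                root = queue.poll(); // null when the queue is empty, ending the loop
            }
            return visited;
        }
    }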
Summary

A binary search tree is a good solution when you need to represent types that are ordered according to some custom rules inherent to that type. With logarithmic insertion, lookup, and deletion it is very efficient. Traversal remains linear, but there are many ways in which you can visit the nodes of a tree. Trees are recursive data structures, so typically you will find that many algorithms that operate on a tree are recursive.

The run times presented in this chapter are based on a pretty big assumption: that the binary search tree's left and right subtrees are reasonably balanced. We can only attain logarithmic run times for the algorithms presented earlier when this is true. A binary search tree does not enforce such a property, and the run times for these operations on a pathologically unbalanced tree become linear; such a tree is effectively just a linked list. Later in the book we will examine an AVL tree that enforces self-balancing properties to help attain logarithmic run times.

Heap

A heap can be thought of as a simple tree data structure, however a heap usually employs one of two strategies:

1. min heap; or
2. max heap

Each strategy determines the properties of the tree and its values. If you were to choose the min heap strategy then each parent node would have a value that is ≤ than its children; for example, the node at the root of the tree will have the smallest value in the tree. The opposite is true for the max heap strategy. In this book you should assume that a heap employs the min heap strategy unless otherwise stated.

Unlike other tree data structures, like the binary search tree defined earlier, a heap is generally implemented as an array rather than a series of nodes which each have references to other nodes. The nodes are conceptually the same, however, having at most two children.

The figures below show how a tree (not a heap data structure) would be represented as an array. The array in the first figure is the result of simply adding values in a top-to-bottom, left-to-right fashion; the second shows arrows to the direct left and right child of each value in the array.

Representing a tree as an array is central to this chapter, and because this property is key to understanding it, a later figure shows a step by step process to represent a tree data structure as an array. In that figure you can assume that the default capacity of our array is eight.

Using just an array is often not sufficient as we have to be up front about the size of the array to use for the heap. Often the run time behaviour of a program can be unpredictable when it comes to the size of its internal data structures, so we need to choose a more dynamic data structure that contains the following properties:

1. we can specify an initial size of the array for scenarios where we know the upper storage limit required; and
2. the data structure encapsulates resizing algorithms to grow the array as required at run time
[Figure: array representation of a simple tree data structure]
[Figure: direct children of the nodes in an array representation of a tree data structure]

Such dynamic array data structures go by names like Vector, ArrayList, or List depending on your language of choice.

The figures do not specify how we would handle adding null references to the heap. This varies from case to case; sometimes null values are prohibited entirely, in other cases we may treat them as being smaller than any non-null value, or indeed greater than any non-null value. You will have to resolve this ambiguity yourself, having studied your requirements. For the sake of clarity we will avoid the issue by prohibiting null values.

Because we are using an array we need some way to calculate the index of a parent node, and the children of a node. The required expressions for this are defined as follows for a node at index:

1. (index − 1) / 2 (parent index)
2. 2 * index + 1 (left child)
3. 2 * index + 2 (right child)

In the figure that follows, (a) represents the calculation of the right child of a node, and (b) calculates the index of the parent of a node.

Insertion

Designing an algorithm for heap insertion is simple, but we must ensure that heap order is preserved after each insertion. Generally this is a post-insertion operation. Inserting a value into the next free slot in an array is simple: we just need to keep track of the next free index in the array as a counter, and increment it after each insertion. Inserting our value into the heap is the first part of the algorithm; the second is validating heap order. In the case of min-heap ordering this requires us to swap the values of a parent and its child if the value of the child is < the value of its parent. We must do this for each subtree containing the value we just inserted.
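The index expressions listed above translate directly into code; the following is a minimal Java sketch of our own:

    // Index arithmetic for a binary heap stored in an array.
    // For the node at position index:
    final class HeapIndex {
        static int parent(int index) { return (index - 1) / 2; }
        static int left(int index)   { return 2 * index + 1; }
        static int right(int index)  { return 2 * index + 2; }
    }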
[Figure: converting a tree data structure to its array counterpart]
[Figure: calculating node properties]

The run time efficiency for heap insertion is O(log n). The run time is a by-product of verifying heap order, as the first part of the algorithm (the actual insertion into the array) is O(1). The figure below shows the steps of inserting a sequence of values into a min-heap.
[Figure: inserting values into a min-heap]
  algorithm Add(value)
    Pre: value is the value to add to the heap
         Count is the number of items in the heap
    Post: the value has been added to the heap
    heap[Count] ← value
    Count ← Count + 1
    MinHeapify()
  end Add

  algorithm MinHeapify()
    Pre: Count is the number of items in the heap
         heap is the array used to store the heap items
    Post: the heap has preserved min heap ordering
    i ← Count − 1
    while i > 0 and heap[i] < heap[(i − 1) / 2]
      Swap(heap[i], heap[(i − 1) / 2])
      i ← (i − 1) / 2
    end while
  end MinHeapify

The design of the MaxHeapify algorithm is very similar to that of the MinHeapify algorithm; the only difference is that the < operator in the second condition of entering the while loop is changed to >.
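Putting the pieces together, a minimal array-backed min-heap insertion in Java (our own sketch, with an assumed initial capacity of eight and simple doubling growth) might look like this:

    // A minimal array-backed min-heap insert (our own sketch).
    // After appending at the next free slot, sift the value up
    // until its parent is no larger than it.
    class MinHeap {
        private int[] heap = new int[8]; // assumed initial capacity
        private int count;

        void add(int value) {
            if (count == heap.length) {
                heap = java.util.Arrays.copyOf(heap, heap.length * 2); // grow as required
            }
            heap[count] = value;
            count++;
            minHeapifyUp();
        }

        private void minHeapifyUp() {
            int i = count - 1;
            while (i > 0 && heap[i] < heap[(i - 1) / 2]) {
                int tmp = heap[i];               // swap child with parent
                heap[i] = heap[(i - 1) / 2];
                heap[(i - 1) / 2] = tmp;
                i = (i - 1) / 2;                 // continue from the parent's index
            }
        }
    }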
Deletion

Just as for insertion, deleting an item involves ensuring that heap ordering is preserved. The algorithm for deletion has three steps:

1. find the index of the value to delete
2. put the last value in the heap at the index location of the item to delete
3. verify heap ordering for each subtree which used to include the value

  algorithm Remove(value)
    Pre: value is the value to remove from the heap
         left and right are updated aliases for 2 * index + 1 and 2 * index + 2 respectively
         Count is the number of items in the heap
         heap is the array used to store the heap items
    Post: value is located in the heap and removed, true; otherwise false
    // step 1
    index ← FindIndex(heap, value)
    if index < 0
      return false
    end if
    Count ← Count − 1
    // step 2
    heap[index] ← heap[Count]
    // step 3
    while left < Count and (heap[index] > heap[left] or heap[index] > heap[right])
      // promote the smallest key from the subtree
      if heap[left] < heap[right]
        Swap(heap, left, index)
        index ← left
      else
        Swap(heap, right, index)
        index ← right
      end if
    end while
    return true
  end Remove

The figure below shows the Remove algorithm visually, removing a value from a heap. In the figure you can assume that we have specified that the backing array of the heap should have an initial capacity of eight.

Please note that in our deletion algorithm we don't default the removed value in the heap array. If you are using a heap for reference types, i.e. objects that are allocated on a heap, you will want to free that memory. This is important in both unmanaged and managed languages. In the latter we will want to null that empty hole so that the garbage collector can reclaim that memory. If we were to not null that hole then the object could still be reached and thus won't be garbage collected.

Searching

Searching a heap is merely a matter of traversing the items in the heap array sequentially, so this operation has a run time complexity of O(n). The search can be thought of as one that uses a breadth first traversal, as defined in the binary search tree chapter, to visit the nodes within the heap to check for the presence of a specified item.
[Figure: deleting an item from a heap]
  algorithm Contains(value)
    Pre: value is the value to search the heap for
         Count is the number of items in the heap
         heap is the array used to store the heap items
    Post: value is located in the heap, in which case true; otherwise false
    i ← 0
    while i < Count and heap[i] ≠ value
      i ← i + 1
    end while
    if i < Count
      return true
    else
      return false
    end if
  end Contains

The problem with the previous algorithm is that we don't take advantage of the properties which all values of a heap hold, that is the property of the heap strategy being used. For instance if we had a heap that didn't contain a given value we would have to exhaust the whole backing heap array before we could determine that it wasn't present in the heap. Factoring in what we know about the heap we can optimise the search algorithm by including logic which makes use of the properties presented by a certain heap strategy.

Optimising to deterministically state that a value is in the heap is not that straightforward, however the problem is a very interesting one. As an example consider a min-heap that doesn't contain a given value v. We can only rule that v is not in the heap if, for every node at the current level we are traversing, v is greater than the node's parent and less than the node itself. If this is the case then v cannot be in the heap and so we can provide an answer without traversing the rest of the heap. If this property is not satisfied for any level of nodes that we are inspecting then the algorithm will indeed fall back to inspecting all the nodes in the heap. The optimisation that we present can be very common and so we feel that the extra logic within the loop is justified to prevent the expensive worst case run time.

The following algorithm is specifically designed for a min-heap. To tailor the algorithm for a max-heap the two comparison operations in the else if condition within the inner while loop should be flipped.
  algorithm Contains(value)
    Pre: value is the value to search the heap for
         Count is the number of items in the heap
         heap is the array used to store the heap items
    Post: value is located in the heap, in which case true; otherwise false
    start ← 0
    nodes ← 1
    while start < Count
      start ← nodes − 1
      end ← nodes + start
      count ← 0
      while start < Count and start < end
        if value = heap[start]
          return true
        else if value > Parent(heap[start]) and value < heap[start]
          count ← count + 1
        end if
        start ← start + 1
      end while
      if count = nodes
        return false
      end if
      nodes ← nodes * 2
    end while
    return false
  end Contains

The new Contains algorithm determines if the value is not in the heap by checking whether count = nodes. In such an event we can confirm that for every node n at the current level, value > Parent(n) and value < n; thus there is no possible way that value is in the heap. As an example consider the figure below. If we are searching that min-heap for a value that is not present, it is obvious that we don't need to search the whole heap to determine it is not there: we can verify this after traversing the nodes in the second level of the heap, as the expression defined previously holds true.

Traversal

As mentioned in the searching section, traversal of a heap is usually done like that of any other array data structure, which our heap implementation is based upon. As a result you traverse the array starting at the initial array index (0 in most languages) and then visit each value within the array until you have reached the upper bound of the heap. You will note that in the search algorithm we use Count as this upper bound rather than the actual physical bound of the allocated array. Count is used to partition the conceptual heap from the actual array implementation of the heap: we only care about the items in the heap, not the whole array, as the latter may contain various other bits of data as a result of heap mutation.
[Figure: determining that a value is not in the heap after inspecting the nodes of the second level]
[Figure: living and dead space in the heap backing array]

If you have followed the advice we gave in the deletion algorithm then a heap that has been mutated several times will contain some form of default value for items no longer in the heap. Potentially you will have at most LengthOf(heapArray) − Count garbage values in the backing heap array data structure. The garbage values of course vary from platform to platform; to make things simple, assume the garbage value of a reference type is ∅ and that of a value type is 0.

The second figure above shows a heap that you can assume has been mutated many times. For this example we can further assume that at some point the items in certain indexes actually contained references to live objects of type T. In the figure, subscript notation is used to disambiguate separate objects of T.

From what you have read thus far you will most likely have picked up that traversing the heap in any other order would be of little benefit. The heap property only holds for the subtree of each node, and so traversing a heap in any other fashion requires some creative intervention. Heaps are not usually traversed in any other way than the one prescribed previously.

Summary

Heaps are most commonly used to implement priority queues (see the queues chapter for a sample implementation) and to facilitate heap sort. As discussed in both the insertion and deletion sections, a heap maintains heap order according to the selected ordering strategy. These strategies are referred to as min-heap
and max heap. The former strategy enforces that the value of a parent node is less than that of each of its children; the latter enforces that the value of the parent is greater than that of each of its children.

When you come across a heap and you are not told what strategy it enforces you should assume that it uses the min-heap strategy. If the heap can be configured otherwise, e.g. to use max-heap, then this will often require you to state this explicitly. The heap abides progressively to a strategy during the invocation of the insertion and deletion algorithms. The cost of such a policy is that upon each insertion and deletion we invoke algorithms that have logarithmic run time complexities. While the cost of maintaining the strategy might not seem overly expensive, it does still come at a price. We will also have to factor in the cost of dynamic array expansion at some stage. This will occur if the number of items within the heap outgrows the space allocated in the heap's backing array. It may be in your best interest to research a good initial starting size for your heap array; this will assist in minimising the impact of dynamic array resizing.
Sets

A set contains a number of values, in no particular order. The values within the set are distinct from one another. Generally set implementations tend to check that a value is not in the set before adding it, avoiding the issue of repeated values from ever occurring.

This section does not cover set theory in depth; rather it demonstrates briefly the ways in which the values of sets can be defined, and common operations that may be performed upon them.

The notation A = {a1, ..., an} defines a set A whose values are listed within the curly braces. Given such a set we can say that some value x is a member of A, denoted by x ∈ A, and that some other value is not a member of A, denoted by ∉.

Often defining a set by manually stating its members is tiresome, and more importantly the set may contain a large number of values. A more concise way of defining a set and its members is by providing a series of properties that the values of the set must satisfy. For example, from the definition A = {x | x > 0, x % 2 = 0} the set A contains only positive integers that are even. x is an alias to the current value we are inspecting, and to the right hand side of | are the properties that x must satisfy to be in the set A: in this example x must be > 0, and the remainder of the arithmetic expression x / 2 must be 0. You will be able to note from the previous definition of the set A that the set can contain an infinite number of values, and that the values of the set A will be all even integers that are members of the natural numbers set N, where N = {1, 2, 3, ...}.

Finally in this brief introduction to sets we will cover set intersection and union, both of which are very common operations (amongst many others) performed on sets. The union set can be defined as A ∪ B = {x | x ∈ A or x ∈ B}, and intersection as A ∩ B = {x | x ∈ A and x ∈ B}. The figure below demonstrates set intersection and union graphically. Given two example sets A and B, their union contains the distinct members of both, and their intersection contains only the values that are members of both sets.

Both set union and intersection are sometimes provided within the framework associated with mainstream languages. This is the case in .NET, where such algorithms exist as extension methods defined in the type System.Linq.Enumerable. As a result DSA does not provide implementations of
[Figure: (a) A ∩ B; (b) A ∪ B]

these algorithms. Most of the algorithms defined in System.Linq.Enumerable deal mainly with sequences rather than sets exclusively.

Set union can be implemented as a simple traversal of both sets, adding each item of the two sets to a new union set.

  algorithm Union(set1, set2)
    Pre: set1 and set2 ≠ ∅
         union is a set
    Post: a union of set1 and set2 has been created
    foreach item in set1
      union.Add(item)
    end foreach
    foreach item in set2
      union.Add(item)
    end foreach
    return union
  end Union

The run time of our Union algorithm is O(m + n), where m is the number of items in the first set and n is the number of items in the second set. This runtime applies only to sets that exhibit O(1) insertions.
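A hedged Java sketch of Union, our own illustration using java.util.HashSet (whose add operation already ignores duplicates, keeping members distinct):

    import java.util.HashSet;
    import java.util.Set;

    // Set union: add every item of both sets to a new set.
    class SetUnionExample {
        static <T> Set<T> union(Set<T> set1, Set<T> set2) {
            Set<T> union = new HashSet<>(set1); // copy the first set
            union.addAll(set2);                 // then add every item of the second
            return union;
        }
    }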
Set intersection is also trivial to implement. The only major thing worth pointing out about our algorithm is that we traverse the set containing the fewest items. We can do this because if we have exhausted all the items in the smaller of the two sets then there are no more items that are members of both sets; thus we have no more items to add to the intersection set.

  algorithm Intersection(set1, set2)
    Pre: set1 and set2 ≠ ∅
         intersection and smallerSet are sets
    Post: an intersection of set1 and set2 has been created
    if set1.Count < set2.Count
      smallerSet ← set1
    else
      smallerSet ← set2
    end if
    foreach item in smallerSet
      if set1.Contains(item) and set2.Contains(item)
        intersection.Add(item)
      end if
    end foreach
    return intersection
  end Intersection

The run time of our Intersection algorithm is O(n), where n is the number of items in the smaller of the two sets. Just like our Union algorithm, a linear runtime can only be attained when operating on a set with O(1) insertion.
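And a matching Java sketch of Intersection, again our own illustration; it traverses the smaller of the two sets as described above:

    import java.util.HashSet;
    import java.util.Set;

    // Set intersection: traverse the smaller set and keep only the
    // items that are members of both sets.
    class SetIntersectionExample {
        static <T> Set<T> intersection(Set<T> set1, Set<T> set2) {
            Set<T> smallerSet = (set1.size() < set2.size()) ? set1 : set2;
            Set<T> largerSet = (smallerSet == set1) ? set2 : set1;
            Set<T> intersection = new HashSet<>();
            for (T item : smallerSet) {
                if (largerSet.contains(item)) { // item is in both sets
                    intersection.add(item);
                }
            }
            return intersection;
        }
    }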
Ordered

An ordered set is similar to an unordered set in the sense that its members are distinct, but an ordered set enforces some predefined comparison on each of its members to produce a set whose members are ordered appropriately.

In earlier versions of DSA we used a binary search tree (defined in §) as the internal backing data structure for our ordered set. From later versions onwards we replaced the binary search tree with an AVL tree, primarily because AVL is balanced. The ordered set has its order realised by performing an inorder traversal upon its backing tree data structure, which yields the correct ordered sequence of set members. Because an ordered set in DSA is simply a wrapper for an AVL tree that additionally ensures that the tree contains unique items, you should read § to learn more about the run time complexities associated with its operations.

Summary

Sets provide a way of having a collection of unique objects, either ordered or unordered.

When implementing a set (either ordered or unordered) it is key to select the correct backing data structure. As we discussed in §, because we check first if the item is already contained within the set before adding it we need this check to be as quick as possible. For unordered sets we can rely on the use of a hash table and use the key of an item to determine whether or not it is already contained within the set. Using a hash table this check results in a near constant run time complexity. Ordered sets cost a little more for this check; however the logarithmic growth that we incur by using a binary search tree as its backing data structure is acceptable.

Another key property of sets implemented using the approach we describe is that both have favourably fast look-up times. Just like the check before insertion, for a hash table this run time complexity should be near constant. Ordered sets as described perform a binary chop at each stage when searching for the existence of an item, yielding a logarithmic run time.

We can use sets to facilitate many algorithms that would otherwise be a little less clear in their implementation. For example in § we use an unordered set to assist in the construction of an algorithm that determines the number of repeated words within a string.
Queues

Queues are an essential data structure that are found in vast amounts of software, from user mode to kernel mode applications that are core to the system. Fundamentally they honour a first in first out (FIFO) strategy; that is, the item first put into the queue will be the first served, the second item added to the queue will be the second to be served, and so on.

A traditional queue only allows you to access the item at the front of the queue; when you add an item to the queue that item is placed at the back of the queue.

Historically queues always have the following three core methods:

Enqueue: places an item at the back of the queue;
Dequeue: retrieves the item at the front of the queue, and removes it from the queue;
Peek: retrieves the item at the front of the queue without removing it from the queue (this operation is sometimes referred to as Front).

As an example to demonstrate the behaviour of a queue we will walk through a scenario whereby we invoke each of the previously mentioned methods, observing the mutations upon the queue data structure. The following list describes the operations performed upon the queue in the figure:

1. Enqueue an item
2. Enqueue a second item
3. Enqueue a third item
4. Enqueue a fourth item
5. Enqueue a fifth item
6. Dequeue()
7. Peek()
8. Enqueue another item
9. Peek()
10. Dequeue()

Standard Queue

A queue is implicitly like that described prior to this section. In DSA we don't provide a standard queue because queues are so popular and such a core data structure that you will find pretty much every mainstream library provides a queue data structure that you can use with your language of choice. In this section we will discuss how you can, if required, implement an efficient queue data structure.

The main property of a queue is that we have access to the item at the front of the queue. The queue data structure can be efficiently implemented using a singly linked list (defined in §). A singly linked list provides O(1) insertion and deletion run time complexities. The reason we have an O(1) run time complexity for deletion is because we only ever remove items from the front of queues (with the Dequeue operation). Since we always have a pointer to the item at the head of a singly linked list, removal is simply a case of returning the value of the old head node, and then modifying the head pointer to be the next node of the old head node. The run time complexity for searching a queue remains the same as that of a singly linked list: O(n).

Priority Queue

Unlike a standard queue where items are ordered in terms of who arrived first, a priority queue determines the order of its items by using a form of custom comparer to see which item has the highest priority. Other than the items in a priority queue being ordered by priority it remains the same as a normal queue: you can only access the item at the front of the queue.

A sensible implementation of a priority queue is to use a heap data structure (defined in §). Using a heap we can look at the first item in the queue by simply returning the item at the first index within the heap array. A heap provides us with the ability to construct a priority queue where the items with the highest priority are either those with the smallest value, or those with the largest.

Double Ended Queue

Unlike the queues we have talked about previously in this chapter, a double ended queue allows you to access the items at both the front and back of the queue. A double ended queue is commonly known as a deque, which is the name we will here on in refer to it as.

A deque applies no prioritization strategy to its items like a priority queue does; items are added in order to either the front or back of the deque. The former properties of the deque are denoted by the programmer utilising the data structure's exposed interface.
Figure: Queue mutations
Deques provide front and back specific versions of common queue operations, e.g. you may want to enqueue an item to the front of the queue rather than the back, in which case you would use a method with a name along the lines of EnqueueFront. The following list identifies operations that are commonly supported by deques:

EnqueueFront
EnqueueBack
DequeueFront
DequeueBack
PeekFront
PeekBack

The figure shows a deque after the invocation of the following methods (in order):

1. EnqueueBack an item
2. EnqueueFront an item
3. EnqueueBack a second item
4. EnqueueFront a second item
5. DequeueFront()
6. DequeueBack()

The operations have a one-to-one translation in terms of behaviour with those of a normal queue, or priority queue. In some cases the set of algorithms that add an item to the back of the deque may be named as they are with normal queues, e.g. EnqueueBack may simply be called Enqueue, and so on. Some frameworks also specify explicit behaviours that data structures must adhere to. This is certainly the case in .NET where most collections implement an interface which requires the data structure to expose a standard Add method. In such a scenario you can safely assume that the Add method will simply enqueue an item to the back of the deque.

With respect to algorithmic run time complexities a deque is the same as a normal queue. That is, enqueueing an item to the back of the queue is O(1); additionally enqueueing an item to the front of the queue is also an O(1) operation.

A deque is a wrapper data structure that uses either an array, or a doubly linked list. Using an array as the backing data structure would require the programmer to be explicit about the size of the array up front; this would provide an obvious advantage if the programmer could deterministically state the maximum number of items the deque would contain at any one time. Unfortunately in most cases this doesn't hold; as a result the backing array will inherently incur the expense of invoking a resizing algorithm, which would most likely be an O(n) operation. Such an approach would also leave the library developer
Figure: Deque data structure after several mutations
to look at array minimization techniques as well; it could be that after several invocations of the resizing algorithm and various mutations on the deque later we have an array taking up a considerable amount of memory, yet we are only using a small percentage of that memory. An algorithm as described would also be O(n), yet its invocation would be harder to gauge strategically.

To bypass all the aforementioned issues a deque typically uses a doubly linked list as its backing data structure. While a node that has two pointers consumes more memory than its array item counterpart, it makes redundant the need for expensive resizing algorithms as the data structure increases in size dynamically. With a language that targets a garbage collected virtual machine, memory reclamation is an opaque process, as the nodes that are no longer referenced become unreachable and are thus marked for collection upon the next invocation of the garbage collection algorithm. With C++, or any other language that uses explicit memory allocation and deallocation, it will be up to the programmer to decide when the memory that stores the object can be freed. (A C# sketch of a deque backed by a doubly linked list appears at the end of this chapter.)

Summary

With normal queues we have seen that those who arrive first are dealt with first; that is, they are dealt with in a first-in-first-out (FIFO) order. Queues can be ever so useful; for example the Windows CPU scheduler uses a different queue for each priority of process to determine which should be the next process to utilise the CPU for a specified time quantum. Normal queues have constant insertion and deletion run times. Searching a queue is fairly unusual; typically you are only interested in the item at the front of the queue. Despite that, searching is usually exposed on queues and typically the run time is linear.

In this chapter we have also seen priority queues where those at the front of the queue have the highest priority and those near the back have the lowest. One implementation of a priority queue is to use a heap data structure as its backing store, so the run times for insertion, deletion, and searching are the same as those for a heap (defined in §).

Queues are a very natural data structure, and while they are fairly primitive they can make many problems a lot simpler. For example the breadth first search defined in § makes extensive use of queues.
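As promised above, here is a minimal C# sketch of a deque that wraps the framework's doubly linked LinkedList<T>; the Deque class and its method names are our own, and every operation shown is O(1). A normal FIFO queue falls out of this by using only EnqueueBack and DequeueFront.

using System;
using System.Collections.Generic;

public class Deque<T>
{
    private readonly LinkedList<T> items = new LinkedList<T>();

    public int Count { get { return items.Count; } }

    public void EnqueueFront(T item) { items.AddFirst(item); }
    public void EnqueueBack(T item) { items.AddLast(item); }

    public T DequeueFront()
    {
        T value = PeekFront();
        items.RemoveFirst();
        return value;
    }

    public T DequeueBack()
    {
        T value = PeekBack();
        items.RemoveLast();
        return value;
    }

    public T PeekFront()
    {
        if (items.Count == 0) throw new InvalidOperationException("Deque is empty");
        return items.First.Value;
    }

    public T PeekBack()
    {
        if (items.Count == 0) throw new InvalidOperationException("Deque is empty");
        return items.Last.Value;
    }
}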
AVL Tree

In the early 60's Adelson-Velsky and Landis invented the first self-balancing binary search tree data structure, calling it an AVL tree.

An AVL tree is a binary search tree (BST, defined in §) with a self-balancing condition stating that the difference between the height of the left and right subtrees cannot be more than one; see the figure. This condition, restored after each tree modification, forces the general shape of an AVL tree. Before continuing, let us focus on why balance is so important. Consider a binary search tree obtained by starting with an empty tree and inserting some values in ascending order. The BST in the figure represents the worst case scenario in which the running time of all common operations such as search, insertion and deletion is O(n). By applying a balance condition we ensure that the worst case running time of each common operation is O(log n). The height of an AVL tree with n nodes is O(log n) regardless of the order in which values are inserted.

The AVL balance condition, known also as the node balance factor, represents an additional piece of information stored for each node. This is combined with a technique that efficiently restores the balance condition for the tree. In an AVL tree the inventors make use of a well-known technique called tree rotation.

Figure: The left and right subtrees of an AVL tree differ in height by at most 1
Figure: Unbalanced binary search tree
Figure: AVL trees built from two different insertion orders of the same values
Tree Rotations

A tree rotation is a constant time operation on a binary search tree that changes the shape of a tree while preserving standard BST properties. There are left and right rotations; both of them decrease the height of a BST by moving smaller subtrees down and larger subtrees up.

Figure: Tree left and right rotations
algorithm LeftRotation(node)
  Pre: node.Right ≠ ∅
  Post: node.Right is the new root of the subtree, node has become node.Right's left child, and BST properties are preserved
  rightNode ← node.Right
  node.Right ← rightNode.Left
  rightNode.Left ← node
end LeftRotation

algorithm RightRotation(node)
  Pre: node.Left ≠ ∅
  Post: node.Left is the new root of the subtree, node has become node.Left's right child, and BST properties are preserved
  leftNode ← node.Left
  node.Left ← leftNode.Right
  leftNode.Right ← node
end RightRotation

The right and left rotation algorithms are symmetric. Only pointers are changed by a rotation, resulting in an O(1) runtime complexity; the other fields present in the nodes are not changed. (A C# sketch of these two rotations appears at the end of this chapter.)

Tree Rebalancing

The algorithm that we present in this section verifies that the left and right subtrees differ at most in height by 1. If this property is not present then we perform the correct rotation.

Notice that we use two new algorithms that represent double rotations. These algorithms are named LeftAndRightRotation, and RightAndLeftRotation. The algorithms are self documenting in their names, e.g. LeftAndRightRotation first performs a left rotation and then subsequently a right rotation.
algorithm CheckBalance(current)
  Pre: current is the node to start balancing from
  Post: current's height has been updated; if needed, tree balance is restored through rotations
  if current.Left = ∅ and current.Right = ∅
    current.Height ← -1
  else
    current.Height ← Max(Height(current.Left), Height(current.Right)) + 1
  end if
  if Height(current.Left) - Height(current.Right) > 1
    if Height(current.Left.Left) - Height(current.Left.Right) > 0
      RightRotation(current)
    else
      LeftAndRightRotation(current)
    end if
  else if Height(current.Left) - Height(current.Right) < -1
    if Height(current.Right.Left) - Height(current.Right.Right) < 0
      LeftRotation(current)
    else
      RightAndLeftRotation(current)
    end if
  end if
end CheckBalance

Insertion

AVL insertion operates first by inserting the given value the same way as BST insertion and then by applying rebalancing techniques if necessary. The latter is only performed if the AVL property no longer holds, that is the left and right subtrees' heights differ by more than 1. Each time we insert a node into an AVL tree we go down the tree to find the correct point at which to insert the node, in the same manner as for BST insertion; then we travel up the tree from the inserted node and check that the node balancing property has not been violated. If the property hasn't been violated then we need not rebalance the tree; the opposite is true if the balancing property has been violated.
algorithm Insert(value)
  Pre: value has passed custom type checks for type T
  Post: value has been placed in the correct location in the tree
  if root = ∅
    root ← node(value)
  else
    InsertNode(root, value)
  end if
end Insert

algorithm InsertNode(current, value)
  Pre: current is the node to start from
  Post: value has been placed in the correct location in the tree while preserving tree balance
  if value < current.Value
    if current.Left = ∅
      current.Left ← node(value)
    else
      InsertNode(current.Left, value)
    end if
  else
    if current.Right = ∅
      current.Right ← node(value)
    else
      InsertNode(current.Right, value)
    end if
  end if
  CheckBalance(current)
end InsertNode

Deletion

Our balancing algorithm is like the one presented for our BST (defined in §). The major difference is that we have to ensure that the tree still adheres to the AVL balance property after the removal of the node. If the tree doesn't need to be rebalanced and the value we are removing is contained within the tree then no further steps are required. However, when the value is in the tree and its removal upsets the AVL balance property then we must perform the correct rotation(s).
algorithm Remove(value)
  Pre: value is the value of the node to remove; root is the root node of the AVL
  Post: node with value is removed and tree rebalanced if found, in which case yields true; otherwise false
  nodeToRemove ← root
  parent ← ∅
  Stack path
  path.Push(root)
  while nodeToRemove ≠ ∅ and nodeToRemove.Value ≠ value
    parent ← nodeToRemove
    if value < nodeToRemove.Value
      nodeToRemove ← nodeToRemove.Left
    else
      nodeToRemove ← nodeToRemove.Right
    end if
    path.Push(nodeToRemove)
  end while
  if nodeToRemove = ∅
    return false // value not in AVL
  end if
  parent ← FindParent(value)
  if count = 1 // count keeps track of the # of nodes in the AVL
    root ← ∅ // we are removing the only node in the AVL
  else if nodeToRemove.Left = ∅ and nodeToRemove.Right = ∅ // case #1
    if nodeToRemove.Value < parent.Value
      parent.Left ← ∅
    else
      parent.Right ← ∅
    end if
  else if nodeToRemove.Left = ∅ and nodeToRemove.Right ≠ ∅ // case #2
    if nodeToRemove.Value < parent.Value
      parent.Left ← nodeToRemove.Right
    else
      parent.Right ← nodeToRemove.Right
    end if
  else if nodeToRemove.Left ≠ ∅ and nodeToRemove.Right = ∅ // case #3
    if nodeToRemove.Value < parent.Value
      parent.Left ← nodeToRemove.Left
    else
      parent.Right ← nodeToRemove.Left
    end if
  else // case #4
    largestValue ← nodeToRemove.Left
    while largestValue.Right ≠ ∅
      // find the largest value in the left subtree of nodeToRemove
      largestValue ← largestValue.Right
    end while
    // set the parent's Right pointer of largestValue to ∅
    FindParent(largestValue.Value).Right ← ∅
    nodeToRemove.Value ← largestValue.Value
  end if
  while path.Count > 0
    CheckBalance(path.Pop()) // we trackback to the root node checking balance
  end while
  count ← count - 1
  return true
end Remove

Summary

The AVL tree is a sophisticated self balancing tree. It can be thought of as the smarter, younger brother of the binary search tree. Unlike its older brother the AVL tree avoids worst case linear complexity runtimes for its operations. The AVL tree guarantees via the enforcement of balancing algorithms that the left and right subtrees differ in height by at most 1, which yields at most a logarithmic runtime complexity.
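As promised in the tree rotations section, here is a minimal C# sketch of the two single rotations. The Node type is a hypothetical stand-in, and rather than mutating a parent pointer each method returns the new subtree root, a common convention we have assumed here.

public class Node
{
    public int Value;
    public Node Left, Right;
}

public static class AvlRotations
{
    // node.Right becomes the new root of the subtree; O(1), only pointers change.
    public static Node LeftRotation(Node node)
    {
        Node rightNode = node.Right;
        node.Right = rightNode.Left;  // adopt the right child's left subtree
        rightNode.Left = node;        // the old root becomes the left child
        return rightNode;
    }

    // Mirror image of LeftRotation: node.Left becomes the new subtree root.
    public static Node RightRotation(Node node)
    {
        Node leftNode = node.Left;
        node.Left = leftNode.Right;
        leftNode.Right = node;
        return leftNode;
    }
}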
Algorithms
Sorting

All the sorting algorithms in this chapter use data structures of a specific type to demonstrate sorting, e.g. a 32 bit integer is often used, as its associated operations (e.g. <, >, etc.) are clear in their behaviour. The algorithms discussed can easily be translated into generic sorting algorithms within your respective language of choice.

Bubble Sort

One of the most simple forms of sorting is that of comparing each item with every other item in some list; however, as the description may imply, this form of sorting is not particularly efficient: O(n²). In its most simple form bubble sort can be implemented as two loops.

algorithm BubbleSort(list)
  Pre: list ≠ ∅
  Post: list has been sorted into values of ascending order
  for i ← 0 to list.Count - 1
    for j ← 0 to list.Count - 1
      if list[i] < list[j]
        Swap(list[i], list[j])
      end if
    end for
  end for
  return list
end BubbleSort

Merge Sort

Merge sort is an algorithm that has a fairly efficient space time complexity, O(n log n), and is fairly trivial to implement. The algorithm is based on splitting a list into two similar sized lists (left and right), sorting each list and then merging the sorted lists back together.

Note: the function MergeOrdered simply takes two ordered lists and makes them one.
Figure: Bubble sort iterations

algorithm MergeSort(list)
  Pre: list ≠ ∅
  Post: list has been sorted into values of ascending order
  if list.Count = 1 // already sorted
    return list
  end if
  m ← list.Count / 2
  left ← List(m)
  right ← List(list.Count - m)
  for i ← 0 to left.Count - 1
    left[i] ← list[i]
  end for
  for i ← 0 to right.Count - 1
    right[i] ← list[i + m]
  end for
  left ← MergeSort(left)
  right ← MergeSort(right)
  return MergeOrdered(left, right)
end MergeSort
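A minimal C# translation of the MergeSort pseudo code above might look as follows; it assumes C# 8 range syntax for copying the two halves, and MergeOrdered is written out in full.

public static class MergeSortExample
{
    public static int[] MergeSort(int[] list)
    {
        if (list.Length <= 1) return list; // already sorted

        int m = list.Length / 2;
        int[] left = MergeSort(list[..m]);  // sort a copy of the left half
        int[] right = MergeSort(list[m..]); // sort a copy of the right half
        return MergeOrdered(left, right);
    }

    // Takes two ordered arrays and merges them into one ordered array.
    private static int[] MergeOrdered(int[] a, int[] b)
    {
        int[] result = new int[a.Length + b.Length];
        int i = 0, j = 0, k = 0;
        while (i < a.Length && j < b.Length)
            result[k++] = a[i] <= b[j] ? a[i++] : b[j++];
        while (i < a.Length) result[k++] = a[i++];
        while (j < b.Length) result[k++] = b[j++];
        return result;
    }
}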
Figure: Merge sort divide et impera (divide and merge) approach

Quick Sort

Quick sort is one of the most popular sorting algorithms, based on a divide et impera strategy, resulting in an O(n log n) complexity. The algorithm starts by picking an item, called the pivot, and moving all smaller items before it, and all greater elements after it. This is the main quick sort operation, called partition, recursively repeated on lesser and greater sub lists until their size is one or zero, in which case the list is implicitly sorted.

Choosing an appropriate pivot, as for example the median element, is fundamental for avoiding the drastically reduced performance of O(n²).
Figure: Quick sort example (pivot median strategy)

algorithm QuickSort(list)
  Pre: list ≠ ∅
  Post: list has been sorted into values of ascending order
  if list.Count = 1 // already sorted
    return list
  end if
  pivot ← MedianValue(list)
  for i ← 0 to list.Count - 1
    if list[i] = pivot
      equal.Insert(list[i])
    end if
    if list[i] < pivot
      less.Insert(list[i])
    end if
    if list[i] > pivot
      greater.Insert(list[i])
    end if
  end for
  return Concatenate(QuickSort(less), equal, QuickSort(greater))
end QuickSort
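The following is a minimal C# sketch of the QuickSort pseudo code; as a simplifying assumption of ours we pick the middle element as the pivot rather than implementing MedianValue.

using System.Collections.Generic;
using System.Linq;

public static class QuickSortExample
{
    public static List<int> QuickSort(List<int> list)
    {
        if (list.Count <= 1) return list; // already sorted

        int pivot = list[list.Count / 2]; // stand-in for MedianValue
        var less = new List<int>();
        var equal = new List<int>();
        var greater = new List<int>();

        // Partition the list around the pivot.
        foreach (int item in list)
        {
            if (item < pivot) less.Add(item);
            else if (item == pivot) equal.Add(item);
            else greater.Add(item);
        }

        return QuickSort(less).Concat(equal).Concat(greater).ToList();
    }
}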
Insertion Sort

Insertion sort is a somewhat interesting algorithm with an expensive runtime of O(n²). It can be best thought of as a sorting scheme similar to that of sorting a hand of playing cards, i.e. you take one card and then look at the rest with the intent of building up an ordered set of cards in your hand.

Figure: Insertion sort iterations

algorithm InsertionSort(list)
  Pre: list ≠ ∅
  Post: list has been sorted into values of ascending order
  unsorted ← 1
  while unsorted < list.Count
    hold ← list[unsorted]
    i ← unsorted - 1
    while i ≥ 0 and hold < list[i]
      list[i + 1] ← list[i]
      i ← i - 1
    end while
    list[i + 1] ← hold
    unsorted ← unsorted + 1
  end while
  return list
end InsertionSort
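A direct C# translation of the InsertionSort pseudo code above:

public static class InsertionSortExample
{
    public static int[] InsertionSort(int[] list)
    {
        for (int unsorted = 1; unsorted < list.Length; unsorted++)
        {
            int hold = list[unsorted];
            int i = unsorted - 1;
            // Shift larger items one position to the right.
            while (i >= 0 && hold < list[i])
            {
                list[i + 1] = list[i];
                i--;
            }
            list[i + 1] = hold; // drop the held item into its slot
        }
        return list;
    }
}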
Shell Sort

Put simply, shell sort can be thought of as a more efficient variation of insertion sort as described in §. It achieves this mainly by comparing items of varying distances apart, resulting in a run time complexity of O(n log² n).

Shell sort is fairly straight forward but may seem somewhat confusing at first as it differs from other sorting algorithms in the way it selects items to compare. The figure shows shell sort being run on an array of integers; the red coloured square is the current value we are holding.

algorithm ShellSort(list)
  Pre: list ≠ ∅
  Post: list has been sorted into values of ascending order
  increment ← list.Count / 2
  while increment ≠ 0
    current ← increment
    while current < list.Count
      hold ← list[current]
      i ← current - increment
      while i ≥ 0 and hold < list[i]
        list[i + increment] ← list[i]
        i ← i - increment
      end while
      list[i + increment] ← hold
      current ← current + 1
    end while
    increment ← increment / 2
  end while
  return list
end ShellSort

Radix Sort

Unlike the sorting algorithms described previously, radix sort uses buckets to sort items; each bucket holds items with a particular property called a key. Normally a bucket is a queue; each time radix sort is performed these buckets are emptied starting from the smallest key bucket to the largest. When looking at items within a list to sort we do so by isolating a specific key, e.g. in the example we are about to show we have a maximum of three keys for all items; that is, the highest key we need to look at is hundreds. Because we are dealing, within this example, with base 10 numbers we have at any one point 10 possible key values, each of which has its own bucket. Before we show you this first simple version of radix sort let us clarify what we mean by isolating keys. Given the number 102, if we look at the first key, the ones, then we can see we have two of them; progressing to the next key, tens, we can see that the number has zero of them; finally we can see that the number has a single hundred. The number used as an example has in total three keys:
Figure: Shell sort
1. Ones
2. Tens
3. Hundreds

For further clarification, what if we wanted to determine how many thousands the number 102 has? Clearly there are none, but often looking at a number as final like we often do it is not so obvious. So when asked the question, how many thousands does 102 have, you should simply pad the number with a zero in that location; here it is more obvious that the key value at the thousands location is zero.

The last thing to identify before we actually show you a simple implementation of radix sort, which works on only positive integers and requires you to specify the maximum key size in the list, is that we need a way to isolate a specific key at any one time. The solution is actually very simple, but it's not often you want to isolate a key in a number, so we will spell it out clearly here. A key can be accessed from any integer with the following expression: key ← (number / keyToAccess) % 10. As a simple example, to access the tens key of a number, keyToAccess is 10 and after substitution the expression becomes key ← (number / 10) % 10. The next key to look at for a number can be attained by multiplying the last key by ten, working left to right in a sequential manner. The value of key is used in the following algorithm to work out the index of an array of queues to enqueue the item into. (A C# sketch of this algorithm appears at the end of the chapter.)

algorithm Radix(list, maxKeySize)
  Pre: list ≠ ∅; maxKeySize ≥ 0 and represents the largest key size in the list
  Post: list has been sorted
  queues ← Queue[10]
  indexOfKey ← 1
  for i ← 0 to maxKeySize - 1
    foreach item in list
      queues[GetQueueIndex(item, indexOfKey)].Enqueue(item)
    end foreach
    list ← CollapseQueues(queues)
    ClearQueues(queues)
    indexOfKey ← indexOfKey * 10
  end for
  return list
end Radix

The figure shows the members of queues from the algorithm described above operating on an example list; the key we are interested in for each number is highlighted. Omitted queues in the figure mean that they contain no items.

Summary

Throughout this chapter we have seen many different algorithms for sorting lists; some are very efficient (e.g. quick sort defined in §), some are not (e.g.
Figure: Radix sort base 10 algorithm

bubble sort defined in §).

Selecting the correct sorting algorithm is usually denoted purely by efficiency, e.g. you would always choose merge sort over shell sort, and so on. There are also other factors to look at though, and these are based on the actual implementation. Some algorithms are very nicely expressed in a recursive fashion; however these algorithms ought to be pretty efficient, e.g. implementing a linear, quadratic, or slower algorithm using recursion would be a very bad idea.

If you want to learn more about why you should be very, very careful when implementing recursive algorithms see the appendix on recursive vs. iterative solutions.
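To close the chapter, here is a minimal C# sketch of the Radix algorithm described earlier, for non-negative integers; GetQueueIndex, CollapseQueues and ClearQueues are inlined, and the digit isolation uses the (number / keyToAccess) % 10 expression from the text.

using System.Collections.Generic;

public static class RadixSortExample
{
    public static List<int> Radix(List<int> list, int maxKeySize)
    {
        var queues = new Queue<int>[10];
        for (int q = 0; q < 10; q++) queues[q] = new Queue<int>();

        int indexOfKey = 1; // 1 = ones, 10 = tens, 100 = hundreds, ...
        for (int pass = 0; pass < maxKeySize; pass++)
        {
            // Distribute every item into the bucket for its current key.
            foreach (int item in list)
                queues[(item / indexOfKey) % 10].Enqueue(item);

            // Collapse the buckets back into the list, smallest key first.
            list.Clear();
            foreach (var queue in queues)
                while (queue.Count > 0) list.Add(queue.Dequeue());

            indexOfKey *= 10;
        }
        return list;
    }
}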
Numeric

Unless stated otherwise the alias n denotes a standard 32 bit integer.

Primality Test

A simple algorithm that determines whether or not a given integer is a prime number, that is an integer greater than 1 which is not the product of two smaller integers. In an attempt to slow down the inner loop the √n is used as the upper bound.

algorithm IsPrime(n)
  Post: n is determined to be a prime or not
  for i ← 2 to n do
    for j ← 1 to √n do
      if i * j = n
        return false
      end if
    end for
  end for
end IsPrime

Base Conversions

DSA contains a number of algorithms that convert a base 10 number to its equivalent binary, octal or hexadecimal form; for example every base 10 number has an equivalent base 2 representation. The table in the next section shows the algorithm trace of ToBinary for an example input.
algorithm ToBinary(n)
  Pre: n ≥ 0
  Post: n has been converted into its base 2 representation
  while n > 0
    list.Add(n % 2)
    n ← n / 2
  end while
  return Reverse(list)
end ToBinary

Table: Algorithm trace of ToBinary

Attaining the Greatest Common Denominator of Two Numbers

A fairly routine problem in mathematics is that of finding the greatest common denominator of two integers; what we are essentially after is the greatest number which divides both. One of the most elegant solutions to this problem is based on Euclid's algorithm, which has a run time complexity of O(lg n).

algorithm GreatestCommonDenominator(m, n)
  Pre: m and n are integers
  Post: the greatest common denominator of the two integers is calculated
  if n = 0
    return m
  end if
  return GreatestCommonDenominator(n, m % n)
end GreatestCommonDenominator
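Minimal C# translations of ToBinary and GreatestCommonDenominator might look as follows; List<int>.Reverse plays the role of the Reverse call in the pseudo code.

using System.Collections.Generic;

public static class NumericExamples
{
    public static List<int> ToBinary(int n)
    {
        var list = new List<int>();
        while (n > 0)
        {
            list.Add(n % 2); // the remainder is the next binary digit
            n = n / 2;
        }
        list.Reverse(); // digits were produced least significant first
        return list;
    }

    public static int GreatestCommonDenominator(int m, int n)
    {
        if (n == 0) return m;
        return GreatestCommonDenominator(n, m % n);
    }
}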
Computing the Maximum Value for a Number of a Specific Base Consisting of N Digits

This algorithm computes the maximum value of a number for a given number of digits, e.g. using the base 10 system the maximum number made up of 4 digits is 9999; similarly the maximum base 2 number of 4 digits is 1111. The expression by which we can compute this maximum value for N digits is: B^N - 1. In the previous expression B is the number base, and N is the number of digits. As an example, if we wanted to determine the maximum value for a hexadecimal number (base 16) consisting of N digits the expression would be 16^N - 1; the maximum value would be represented as N consecutive F digits.

In the following algorithm numberBase should be considered restricted to the values of 2, 8, 10, and 16. For this reason in our actual implementation numberBase has an enumeration type. The Base enumeration type is defined as:

Base = {Binary ← 2, Octal ← 8, Decimal ← 10, Hexadecimal ← 16}

The reason we provide the definition of Base is to give you an idea how this algorithm can be modelled in a more readable manner rather than using various checks to determine the correct base to use. For our implementation we cast the value of numberBase to an integer; as such we extract the value associated with the relevant option in the Base enumeration. As an example, if we were to cast the option Octal to an integer we would get the value 8. In the algorithm listed below the cast is implicit so we just use the actual argument numberBase.

algorithm MaxValue(numberBase, n)
  Pre: numberBase is the number system to use; n is the number of digits
  Post: the maximum value for numberBase consisting of n digits is computed
  return Power(numberBase, n) - 1
end MaxValue

Factorial of a Number

Attaining the factorial of a number is a primitive mathematical operation. Many implementations of the factorial algorithm are recursive, as the problem is recursive in nature; however here we present an iterative solution. The iterative solution is presented because it too is trivial to implement and doesn't suffer from the use of recursion (for more on recursion see the appendix).

The factorial of 0 and 1 is 1; the aforementioned acts as the base case that we will build upon. The factorial of 2 is 2 times the factorial of 1; similarly the factorial of 3 is 3 times the factorial of 2, and so on. We can indicate that we are after the factorial of a number using the form N!, where N is the number we wish to attain the factorial of. Our algorithm doesn't use such notation but it is handy to know.
algorithm Factorial(n)
  Pre: n ≥ 0; n is the number to compute the factorial of
  Post: the factorial of n is computed
  if n < 2
    return 1
  end if
  factorial ← 1
  for i ← 2 to n
    factorial ← factorial * i
  end for
  return factorial
end Factorial

Summary

In this chapter we have presented several numeric algorithms, most of which are simply here because they were fun to design. Perhaps the message that the reader should gain from this chapter is that algorithms can be applied to several domains to make work in that respective domain attainable. Numeric algorithms in particular drive some of the most advanced systems on the planet, computing such data as weather forecasts.
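As a closing example for this chapter, here is a minimal C# sketch of MaxValue and Factorial; the Base enumeration mirrors the one defined above.

using System;

public enum Base { Binary = 2, Octal = 8, Decimal = 10, Hexadecimal = 16 }

public static class MoreNumericExamples
{
    public static long MaxValue(Base numberBase, int n)
    {
        // B^n - 1, e.g. MaxValue(Base.Decimal, 4) = 9999.
        return (long)Math.Pow((int)numberBase, n) - 1;
    }

    public static long Factorial(int n)
    {
        if (n < 2) return 1; // base case: the factorial of 0 and 1 is 1
        long factorial = 1;
        for (int i = 2; i <= n; i++)
            factorial *= i;
        return factorial;
    }
}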
Searching

Sequential Search

A simple algorithm that searches for a specific item inside a list. It operates looping on each element, O(n), until a match occurs or the end is reached.

algorithm SequentialSearch(list, item)
  Pre: list ≠ ∅
  Post: return the index of the item if found, otherwise -1
  index ← 0
  while index < list.Count and list[index] ≠ item
    index ← index + 1
  end while
  if index < list.Count and list[index] = item
    return index
  end if
  return -1
end SequentialSearch

Probability Search

Probability search is a statistical sequential searching algorithm. In addition to searching for an item, it takes into account its frequency by swapping it with its predecessor in the list. The algorithm complexity still remains at O(n), but in a non-uniform items search the more frequent items are in the first positions, reducing list scanning time.

The figure shows the resulting state of a list after searching for two items; notice how the searched items have had their search probability increased after each search operation respectively.
Figure: a) the list after searching for one item; b) after searching for a second item

algorithm ProbabilitySearch(list, item)
  Pre: list ≠ ∅
  Post: a boolean indicating whether the item is found or not; in the former case swap the found item with its predecessor
  index ← 0
  while index < list.Count and list[index] ≠ item
    index ← index + 1
  end while
  if index ≥ list.Count or list[index] ≠ item
    return false
  end if
  if index > 0
    Swap(list[index], list[index - 1])
  end if
  return true
end ProbabilitySearch

Summary

In this chapter we have presented a few novel searching algorithms. We have presented more efficient searching algorithms earlier on, like for instance the logarithmic searching algorithm that AVL and BST trees use (defined in §). We decided not to cover a searching algorithm known as binary chop (another name for binary search; binary chop usually refers to its array counterpart), as
the reader has already seen such an algorithm in §.

Searching algorithms and their efficiency largely depend on the underlying data structure being used to store the data. For instance it is quicker to determine whether an item is in a hash table than it is an array; similarly it is quicker to search a BST than it is a linked list. If you are going to search for data fairly often then we strongly advise that you sit down and research the data structures available to you. In most cases using a list or any other primarily linear data structure is down to a lack of knowledge. Model your data and then research the data structures that best fit your scenario.
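To close this chapter, here is a minimal C# sketch of ProbabilitySearch, swap-with-predecessor behaviour included:

using System.Collections.Generic;

public static class SearchExamples
{
    public static bool ProbabilitySearch(List<int> list, int item)
    {
        int index = 0;
        while (index < list.Count && list[index] != item)
            index++;

        if (index >= list.Count) return false; // item not present

        // Promote the found item one place so later searches find it sooner.
        if (index > 0)
        {
            int temp = list[index - 1];
            list[index - 1] = list[index];
            list[index] = temp;
        }
        return true;
    }
}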
Strings

Strings have their own chapter in this text purely because string operations and transformations are incredibly frequent within programs. The algorithms presented are based on problems the authors have come across previously, or were formulated to satisfy curiosity.

Reversing the Order of Words in a Sentence

Defining algorithms for primitive string operations is simple, e.g. extracting a sub-string of a string; however some algorithms that require more inventiveness can be a little more tricky.

The algorithm presented here does not simply reverse the characters in a string; rather it reverses the order of words within a string. This algorithm works on the principle that words are all delimited by white space, and using a few markers to define where words start and end we can easily reverse them.
algorithm ReverseWords(value)
  Pre: value ≠ ∅; sb is a string buffer
  Post: the words in value have been reversed
  last ← value.Length - 1
  start ← last
  while last ≥ 0
    // skip whitespace
    while start ≥ 0 and value[start] = whitespace
      start ← start - 1
    end while
    last ← start
    // march down to the index before the beginning of the word
    while start ≥ 0 and value[start] ≠ whitespace
      start ← start - 1
    end while
    // append chars from start + 1 through last to the string buffer sb
    for i ← start + 1 to last
      sb.Append(value[i])
    end for
    // if this isn't the last word in the string add some whitespace after the word in the buffer
    if start > 0
      sb.Append(' ')
    end if
    last ← start - 1
    start ← last
  end while
  // check if we have added one too many whitespace to sb
  if sb[sb.Length - 1] = whitespace
    // cut the whitespace
    sb.Length ← sb.Length - 1
  end if
  return sb
end ReverseWords

Detecting a Palindrome

Although not a frequent algorithm that will be applied in real-life scenarios, detecting a palindrome is a fun, and as it turns out pretty trivial, algorithm to design. The algorithm that we present has O(n) run time complexity. Our algorithm uses two pointers at opposite ends of the string we are checking is a palindrome or not. These pointers march in towards each other, always checking that each character they point to is the same with respect to value. The figure shows the IsPalindrome algorithm in operation on the string "Was it Eliot's toilet I saw?". If you remove all punctuation and white space from the aforementioned string you will find that it is a valid palindrome.
Figure: left and right pointers marching in towards one another

algorithm IsPalindrome(value)
  Pre: value ≠ ∅
  Post: value is determined to be a palindrome or not
  word ← value.Strip().ToUpperCase()
  left ← 0
  right ← word.Length - 1
  while word[left] = word[right] and left < right
    left ← left + 1
    right ← right - 1
  end while
  return word[left] = word[right]
end IsPalindrome

In the IsPalindrome algorithm we call a method by the name of Strip. This algorithm discards punctuation in the string, including white space. As a result word contains a heavily compacted representation of the original string, each character of which is in its uppercase representation.

Palindromes are insensitive to white space, punctuation, and case; making these changes allows us to design a simple algorithm while making our algorithm fairly robust with respect to the palindromes it will detect.

Counting the Number of Words in a String

Counting the number of words in a string can seem pretty trivial at first; however there are a few cases that we need to be aware of:

1. tracking when we are in a string
2. updating the word count at the correct place
3. skipping white space that delimits the words

As an example consider the string "Ben ate hay". Clearly this string contains three words, each of which is distinguished via white space. All of the previously listed points can be managed by using three variables:

1. index
2. wordCount
3. inWord
Figure: String with three words
Figure: String with a varying number of white space delimiting the words

Of the previously listed, index keeps track of the current index we are at in the string, wordCount is an integer that keeps track of the number of words we have encountered, and finally inWord is a Boolean flag that denotes whether or not at the present time we are within a word. If we are not currently hitting white space we are in a word; the opposite is true if at the present index we are hitting white space.

What denotes a word? In our algorithm each word is separated by one or more occurrences of white space. We don't take into account any particular splitting symbols you may use, e.g. in .NET String.Split can take a char (or array of characters) that determines a delimiter to use to split the characters within the string into chunks of strings, resulting in an array of sub-strings.

In the first figure we present a string indexed as an array. Typically the pattern is the same for most words: delimited by a single occurrence of white space. The second figure shows the same string, with the same number of words, but with varying white space splitting them.
algorithm WordCount(value)
  Pre: value ≠ ∅
  Post: the number of words contained within value is determined
  inWord ← true
  wordCount ← 0
  index ← 0
  // skip initial white space
  while value[index] = whitespace and index < value.Length - 1
    index ← index + 1
  end while
  // was the string just whitespace?
  if index = value.Length - 1 and value[index] = whitespace
    return 0
  end if
  while index < value.Length
    if value[index] = whitespace
      // skip all whitespace
      while value[index] = whitespace and index < value.Length - 1
        index ← index + 1
      end while
      inWord ← false
      wordCount ← wordCount + 1
    else
      inWord ← true
    end if
    index ← index + 1
  end while
  // last word may have not been followed by whitespace
  if inWord
    wordCount ← wordCount + 1
  end if
  return wordCount
end WordCount

Determining the Number of Repeated Words Within a String

With the help of an unordered set, and an algorithm that can split the words within a string using a specified delimiter, this algorithm is straightforward to implement. If we split all the words using a single occurrence of white space as our delimiter we get all the words within the string back as elements of an array. Then if we iterate through these words, adding them to a set which contains only unique strings, we can attain the number of unique words from the string. All that is left to do is subtract the unique word count from the total number of strings contained in the array returned from the split operation. The split operation that we refer to is the same as that mentioned in §.
Figure: a) Undesired uniques set; b) desired uniques set

algorithm RepeatedWordCount(value)
  Pre: value ≠ ∅
  Post: the number of repeated words in value is returned
  words ← value.Split(' ')
  uniques ← Set
  foreach word in words
    uniques.Add(word.Strip())
  end foreach
  return words.Length - uniques.Count
end RepeatedWordCount

You will notice in the RepeatedWordCount algorithm that we use the Strip method we referred to earlier in §. This simply removes any punctuation from a word. The reason we perform this operation on each word is so that we can build a more accurate unique string collection, e.g. "test", and "test!" are the same word minus the punctuation. The figure shows the undesired and desired sets for the unique set respectively.

Determining the First Matching Character Between Two Strings

The algorithm to determine whether any character of a string matches any of the characters in another string is pretty trivial. Put simply, we can parse the strings considered using a double loop and check, discarding punctuation, the equality between any characters, thus returning a non-negative index that represents the location of the first character in the match; otherwise we return -1 if no match occurs. This approach exhibits a run time complexity of O(n²).
Figure: a) First step; b) second step; c) match occurred

algorithm Any(word, match)
  Pre: word, match ≠ ∅
  Post: index representing the match location if one occurred, -1 otherwise
  for i ← 0 to word.Length - 1
    while word[i] = whitespace
      i ← i + 1
    end while
    for index ← 0 to match.Length - 1
      while match[index] = whitespace
        index ← index + 1
      end while
      if match[index] = word[i]
        return index
      end if
    end for
  end for
  return -1
end Any

Summary

We hope that the reader has seen how fun algorithms on string data types are. Strings are probably the most common data type (and data structure; remember we are dealing with an array) that you will work with, so it's important that you learn to be creative with them. We for one find strings fascinating. A simple Google search on string nuances between languages and encodings will provide you with a great number of problems. Now that we have spurred you along a little with our introductory algorithms you can devise some of your own.
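To close this chapter, here is a minimal C# sketch of IsPalindrome and RepeatedWordCount; our Strip helper is a stand-in for the one referenced in the text and simply drops anything that is not a letter or digit.

using System.Collections.Generic;
using System.Linq;

public static class StringExamples
{
    // Stand-in for the Strip method referenced in the text.
    private static string Strip(string value)
    {
        return new string(value.Where(char.IsLetterOrDigit).ToArray());
    }

    public static bool IsPalindrome(string value)
    {
        string word = Strip(value).ToUpper();
        int left = 0, right = word.Length - 1;
        // March the two pointers in towards one another.
        while (left < right && word[left] == word[right])
        {
            left++;
            right--;
        }
        return left >= right; // pointers met or crossed: it is a palindrome
    }

    public static int RepeatedWordCount(string value)
    {
        string[] words = value.Split(' ');
        var uniques = new HashSet<string>();
        foreach (string word in words)
            uniques.Add(Strip(word));
        return words.Length - uniques.Count;
    }
}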
Algorithm Walkthrough

Learning how to design good algorithms can be assisted greatly by using a structured approach to tracing their behaviour. In most cases tracing an algorithm only requires a single table. In most cases tracing is not enough; you will also want to use a diagram of the data structure your algorithm operates on. This diagram will be used to visualise the problem more effectively. Seeing things visually can help you understand the problem quicker, and better.

The trace table will store information about the variables used in your algorithm. The values within this table are constantly updated when the algorithm mutates them. Such an approach allows you to attain a history of the various values each variable has held. You may also be able to infer patterns from the values each variable has contained so that you can make your algorithm more efficient.

We have found this approach both simple, and powerful. By combining a visual representation of the problem as well as having a history of past values generated by the algorithm it can make understanding, and solving problems much easier.

In this chapter we will show you how to work through both iterative, and recursive algorithms using the technique outlined.

Iterative Algorithms

We will trace the IsPalindrome algorithm (defined in §) as our example iterative walkthrough. Before we even look at the variables the algorithm uses, first we will look at the actual data structure the algorithm operates on. It should be pretty obvious that we are operating on a string, but how is this represented? A string is essentially a block of contiguous memory that consists of some char data types, one after the other. Each character in the string can be accessed via an index, much like you would do when accessing items within an array. The picture should be presenting itself: a string can be thought of as an array of characters.

For our example we will use IsPalindrome to operate on the string "Never odd or even". Now we know how the string data structure is represented, and the value of the string we will operate on, let's go ahead and draw it as shown in the figure.
Figure: Visualising the data structure we are operating on

value | word | left | right

Table: A column for each variable we wish to track

The IsPalindrome algorithm uses the following list of variables in some form throughout its execution:

1. value
2. word
3. left
4. right

Having identified the values of the variables we need to keep track of, we simply create a column for each in a table, as shown in the table above. Now, using the IsPalindrome algorithm, execute each statement, updating the variable values in the table appropriately. The table below shows the final table values for each variable used in IsPalindrome respectively.

While this approach may look a little bloated in print, on paper it is much more compact. Where we have the strings in the table you should annotate these strings with array indexes to aid the algorithm walkthrough.

There is one other point that we should clarify at this time: whether to include variables that change only a few times, or not at all, in the trace table. In the table we have included both the value, and word variables because it was convenient to do so. You may find that you want to promote these values to a larger diagram (like that in the figure) and only use the trace table for variables whose values change during the algorithm. We recommend that you promote the core data structure being operated on to a larger diagram outside of the table so that you can interrogate it more easily.

value: "Never odd or even"
word: "NEVERODDOREVEN"
left:  0  1  2  3  4  5  6  7
right: 13 12 11 10 9  8  7  6

Table: Algorithm trace for IsPalindrome
We cannot stress enough how important such traces are when designing your algorithm. You can use these trace tables to verify algorithm correctness. At the cost of a simple table, and a quick sketch of the data structure you are operating on, you can devise correct algorithms quicker. Visualising the problem domain and keeping track of changing data makes problems a lot easier to solve. Moreover you always have a point of reference which you can look back on.

Recursive Algorithms

For the most part working through recursive algorithms is as simple as walking through an iterative algorithm. One of the things that we need to keep track of though is which method call returns to whom. Most recursive algorithms are much simpler to follow when you draw out the recursive calls rather than using a table based approach. In this section we will use a recursive implementation of an algorithm that computes a number from the Fibonacci sequence. (A C# rendering appears at the end of this chapter.)

algorithm Fibonacci(n)
  Pre: n is the number in the Fibonacci sequence to compute
  Post: the Fibonacci sequence number n has been computed
  if n < 1
    return 0
  else if n < 3
    return 1
  end if
  return Fibonacci(n - 1) + Fibonacci(n - 2)
end Fibonacci

Before we jump into showing you a diagrammatic representation of the algorithm calls for the Fibonacci algorithm we will briefly talk about the cases of the algorithm. The algorithm has three cases in total:

1. n < 1
2. n < 3
3. n ≥ 3

The first two items in the preceding list are the base cases of the algorithm. Until we hit one of our base cases in our recursive method call tree we won't return anything. The third item from the list is our recursive case. With each call to the recursive case we etch ever closer to one of our base cases. The first figure shows a diagrammatic representation of the recursive call chain; in it, the order in which the methods are called is labelled. The second figure shows the call chain annotated with the return values of each method call as well as the order in which methods return to their callers; the return values are represented as annotations to the red arrows.

It is important to note that each recursive call only ever returns to its caller upon hitting one of the two base cases. When you do eventually hit a base case that branch of recursive calls ceases. Upon hitting a base case you go back to
Figure: Call chain for the Fibonacci algorithm
Figure: Return chain for the Fibonacci algorithm
the caller and continue execution of that method. Execution in the caller is continued at the next statement, or expression, after the recursive call was made.

In the Fibonacci algorithm's recursive case we make two recursive calls. When the first recursive call (Fibonacci(n - 1)) returns to the caller we then execute the second recursive call (Fibonacci(n - 2)). After both recursive calls have returned to their caller, the caller can then subsequently return to its caller, and so on.

Recursive algorithms are much easier to demonstrate diagrammatically, as the figures demonstrate. When you come across a recursive algorithm draw method call diagrams to understand how the algorithm works at a high level.

Summary

Understanding algorithms can be hard at times, particularly from an implementation perspective. In order to understand an algorithm try and work through it using trace tables. In cases where the algorithm is also recursive, sketch the recursive calls out so you can visualise the call/return chain. In the vast majority of cases implementing an algorithm is simple provided that you know how the algorithm works. Mastering how an algorithm works from a high level is key for devising a well designed solution to the problem in hand.
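As promised, here is a C# rendering of the Fibonacci algorithm traced above; drawing the call diagram for a small input such as Fibonacci(5) reproduces the kind of call and return chains shown in the figures.

public static class FibonacciExample
{
    public static int Fibonacci(int n)
    {
        if (n < 1) return 0;      // base case
        else if (n < 3) return 1; // base case
        return Fibonacci(n - 1) + Fibonacci(n - 2); // recursive case
    }
}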
Translation Walkthrough

The conversion from pseudo code to an actual imperative language is usually very straight forward; to clarify, an example is provided. In this example we will convert the IsPrime algorithm in § to the C# language.

public static bool IsPrime(int number)
{
    if (number < 2)
    {
        return false;
    }
    int innerLoopBound = (int)Math.Floor(Math.Sqrt(number));
    for (int i = 2; i < number; i++)
    {
        for (int j = 1; j <= innerLoopBound; j++)
        {
            if (i * j == number)
            {
                return false;
            }
        }
    }
    return true;
}

For the most part the conversion is a straight forward process; however you may have to inject various calls to other utility algorithms to ascertain the correct result.

A consideration to take note of is that many algorithms have fairly strict preconditions, of which there may be several. In these scenarios you will need to inject the correct code to handle such situations to preserve the correctness of the algorithm. Most of the preconditions can be suitably handled by throwing the correct exception.
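As an illustration of the point about preconditions, the following hedged sketch shows how the Pre: clause of an algorithm such as ToBinary (n ≥ 0) might be enforced in C#; the choice of exception type is ours.

using System;
using System.Collections.Generic;

public static class PreconditionExample
{
    public static List<int> ToBinary(int n)
    {
        // Enforce the algorithm's precondition: n must be non-negative.
        if (n < 0)
        {
            throw new ArgumentOutOfRangeException(nameof(n), "n must be non-negative");
        }
        var list = new List<int>();
        while (n > 0)
        {
            list.Add(n % 2);
            n /= 2;
        }
        list.Reverse();
        return list;
    }
}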
Summary

As you can see from the example used in this chapter we have tried to make the translation of our pseudo code algorithms to mainstream imperative languages as simple as possible. Whenever you encounter a keyword within our pseudo code examples that you are unfamiliar with, just browse to the appendix which describes each keyword.
Recursive vs. Iterative Solutions

One of the most succinct properties of modern programming languages like C++, C#, and Java (as well as many others) is that these languages allow you to define methods that reference themselves; such methods are said to be recursive. One of the biggest advantages recursive methods bring to the table is that they usually result in more readable, and compact solutions to problems.

A recursive method then is one that is defined in terms of itself. Generally a recursive algorithm has two main properties:

1. one or more base cases; and
2. a recursive case

For now we will briefly cover these two aspects of recursive algorithms. With each recursive call we should be making progress to our base case, otherwise we are going to run into trouble. The trouble we speak of manifests itself typically as a stack overflow; we will describe why later.

Now that we have briefly described what a recursive algorithm is and why you might want to use such an approach for your algorithms, we will now talk about iterative solutions. An iterative solution uses no recursion whatsoever. An iterative solution relies only on the use of loops (e.g. for, while, do-while, etc). The down side to iterative algorithms is that they tend not to be as clear as their recursive counterparts with respect to their operation. The major advantage of iterative solutions is speed. Most production software you will find uses little or no recursive algorithms whatsoever. The latter property can sometimes be a company's prerequisite to checking in code, e.g. upon checking in, a static analysis tool may verify that the code the developer is checking in contains no recursive algorithms. Normally it is systems level code that has this zero tolerance policy for recursive algorithms.

Using recursion should always be reserved for fast algorithms; you should avoid it for algorithms with the following run time deficiencies:

1. O(n²)
2. O(n³)
3. O(2ⁿ)

If you use recursion for algorithms with any of the above run time efficiencies you are inviting trouble. The growth rate of these algorithms is high and in most cases such algorithms will lean very heavily on techniques like divide and conquer. While constantly splitting problems into smaller problems is good practice, in these cases you are going to be spawning a lot of method calls. All this overhead (method calls don't come that cheap) will soon pile up and either cause your algorithm to run a lot slower than expected, or worse: you will run out of stack space. When you exceed the allotted stack space for a thread the process will be shutdown by the operating system. This is the case irrespective of the platform you use, e.g. .NET, or native C++, etc. You can ask for a bigger stack size, but you typically only want to do this if you have a very good reason to do so.

Activation Records

An activation record is created every time you invoke a method. Put simply an activation record is something that is put on the stack to support method invocation. Activation records take a small amount of time to create, and are pretty lightweight.

Normally an activation record for a method call is as follows (this is very general):

1. the actual parameters of the method are pushed onto the stack
2. the return address is pushed onto the stack
3. the top-of-stack index is incremented by the total amount of memory required by the local variables within the method
4. a jump is made to the method

In many recursive algorithms operating on large data structures, or algorithms that are inefficient, you will run out of stack space quickly. Consider an algorithm that when invoked given a specific value creates many recursive calls. In such a case a big chunk of the stack will be consumed. We will have to wait until the activation records start to be unwound after the nested methods in the call chain exit and return to their respective caller. When a method exits its activation record is unwound. Unwinding an activation record results in several steps:

1. the top-of-stack index is decremented by the total amount of memory consumed by the method
2. the return address is popped off the stack
3. the top-of-stack index is decremented by the total amount of memory consumed by the actual parameters
While activation records are an efficient way to support method calls they can build up very quickly. Recursive algorithms can exhaust the stack size allocated to the thread fairly fast given the chance.

Just about now we should be dusting the cobwebs off the age old example of an iterative vs. recursive solution in the form of the Fibonacci algorithm. This is a famous example as it highlights both the beauty and pitfalls of a recursive algorithm. The iterative solution is not as pretty, nor self documenting, but it does the job a lot quicker. If we were to give the recursive Fibonacci algorithm even a moderately large input we would have to wait a while to get the value back, because it has an exponential run time. The iterative version on the other hand has an O(n) run time. Don't let this put you off recursion; this example is mainly used to shock programmers into thinking about the ramifications of recursion rather than warning them off.

Some Problems Are Recursive in Nature

Something that you may come across is that some data structures and algorithms are actually recursive in nature. A perfect example of this is a tree data structure. A common tree node usually contains a value, along with two pointers to two other nodes of the same node type. As you can see a tree is recursive in its makeup, with each node possibly pointing to two other nodes.

When using recursive algorithms on trees it makes sense, as you are simply adhering to the inherent design of the data structure you are operating on. Of course it is not all good news; after all we are still bound by the limitations we have mentioned previously in this chapter.

We can also look at sorting algorithms like merge sort, and quick sort. Both of these algorithms are recursive in their design and so it makes sense to model them recursively.

Summary

Recursion is a powerful tool, and one that all programmers should know of. Often software projects will take a trade between readability, and efficiency, in which case recursion is a great tool, provided you don't go and use it to implement an algorithm with a quadratic run time or higher. Of course this is not a rule of thumb; this is just us throwing caution to the wind. Defensive coding will always prevail.

Many times recursion has a natural home in recursive data structures and algorithms which are recursive in nature. Using recursion in such scenarios is perfectly acceptable. Using recursion for something like linked list traversal is a little overkill; its iterative counterpart is probably fewer lines of code than its recursive counterpart.

Because we can only talk about the implications of using recursion from an abstract point of view you should consult your compiler and run time environment for more details. It may be the case that your compiler recognises things like tail recursion and can optimise them. This isn't unheard of; in fact most commercial compilers will do this. The amount of optimisation compilers can
do though is somewhat limited by the fact that you are still using recursion. You, as the developer, have to accept certain accountabilities for performance.
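To make the earlier Fibonacci comparison concrete, here is a minimal iterative C# version with the O(n) run time discussed above:

public static class IterativeFibonacci
{
    public static long Fibonacci(int n)
    {
        if (n < 1) return 0;
        long previous = 0, current = 1;
        // Each pass moves the pair (previous, current) one step along the sequence.
        for (int i = 2; i <= n; i++)
        {
            long next = previous + current;
            previous = current;
            current = next;
        }
        return current;
    }
}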