$$\frac{T(N)}{N} = \frac{T(N/2)}{N/2} + 1$$

This equation is valid for any $N$ that is a power of 2, so we may also write

$$\frac{T(N/2)}{N/2} = \frac{T(N/4)}{N/4} + 1$$

$$\frac{T(N/4)}{N/4} = \frac{T(N/8)}{N/8} + 1$$

$$\vdots$$

$$\frac{T(2)}{2} = \frac{T(1)}{1} + 1$$

Now add up all the equations: add all of the terms on the left-hand side and set the result equal to the sum of all of the terms on the right-hand side. Observe that the term $T(N/2)/(N/2)$ appears on both sides and thus cancels. In fact, virtually all the terms appear on both sides and cancel; this is called a telescoping sum. After everything is added, the final result is

$$\frac{T(N)}{N} = \frac{T(1)}{1} + \log N$$

because all of the other terms cancel and there are $\log N$ equations, so all the 1s at the end of these equations add up to $\log N$. Multiplying through by $N$ gives the final answer:

$$T(N) = N\log N + N = O(N\log N)$$

Notice that if we had not divided through by $N$ at the start of the solution, the sum would not telescope; this is why it was necessary to divide through by $N$.

An alternative method is to substitute the recurrence relation continually on the right-hand side. We have $T(N) = 2T(N/2) + N$. Since we can substitute $N/2$ into the main equation,

$$2T(N/2) = 2\bigl(2T(N/4) + N/2\bigr) = 4T(N/4) + N$$

we have

$$T(N) = 4T(N/4) + 2N$$

Again, by substituting $N/4$ into the main equation, we see that

$$4T(N/4) = 4\bigl(2T(N/8) + N/4\bigr) = 8T(N/8) + N$$

So we have

$$T(N) = 8T(N/8) + 3N$$

Continuing in this manner, we obtain

$$T(N) = 2^k\,T(N/2^k) + k\cdot N$$

Using $k = \log N$, we obtain

$$T(N) = N\,T(1) + N\log N = N\log N + N$$
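Both approaches give the closed form $T(N) = N\log N + N$ when $T(1) = 1$. As a quick sanity check (our own illustration, not part of the text), the short program below evaluates the recurrence $T(N) = 2T(N/2) + N$ directly for powers of 2 and compares it against the closed form; the function name mergesortRecurrence is ours.

    #include <iostream>

    // Evaluate T(N) = 2T(N/2) + N with T(1) = 1, for N a power of 2.
    long long mergesortRecurrence( long long n )
    {
        if( n == 1 )
            return 1;
        return 2 * mergesortRecurrence( n / 2 ) + n;
    }

    int main( )
    {
        for( long long n = 2, logN = 1; n <= ( 1LL << 20 ); n *= 2, ++logN )
        {
            long long closedForm = n * logN + n;                 // N log N + N
            std::cout << n << " " << mergesortRecurrence( n )
                      << " " << closedForm << '\n';              // the two values agree
        }
        return 0;
    }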
The choice of which method to use is a matter of taste. The first method tends to produce scrap work that fits better on a standard sheet of paper, leading to fewer mathematical errors, but it requires a certain amount of experience to apply. The second method is more of a brute-force approach.

Recall that we have assumed $N$ is a power of 2. The analysis can be refined to handle cases when $N$ is not a power of 2; the answer turns out to be almost identical (this is usually the case).

Although mergesort's running time is $O(N\log N)$, it has the significant problem that merging two sorted lists uses linear extra memory. The additional work involved in copying to the temporary array and back, throughout the algorithm, slows the sort considerably. This copying can be avoided by judiciously switching the roles of a and tmpArray at alternate levels of the recursion. A variant of mergesort can also be implemented nonrecursively (see the exercises).

The running time of mergesort, when compared with other $O(N\log N)$ alternatives, depends heavily on the relative costs of comparing elements and moving elements in the array (and the temporary array). These costs are language dependent. For instance, in Java, when performing a generic sort (using a Comparator), an element comparison can be expensive (comparisons might not be easily inlined, so the overhead of dynamic dispatch could slow things down), but moving elements is cheap (they are reference assignments rather than copies of large objects). Mergesort uses the lowest number of comparisons of all the popular sorting algorithms and thus is a good candidate for general-purpose sorting in Java; in fact, it is the algorithm used in the standard Java library for generic sorting.

On the other hand, in classic C++, in a generic sort, copying objects can be expensive if the objects are large, while comparing objects is often relatively cheap because of the compiler's ability to perform aggressive inline optimization. In this scenario, it might be reasonable to have an algorithm use a few more comparisons if we can also use significantly fewer data movements. Quicksort, which we discuss in the next section, achieves this trade-off and is the sorting routine that has commonly been used in C++ libraries. The move semantics introduced in C++11 may change this dynamic, and so it remains to be seen whether quicksort will continue to be the sorting algorithm used in C++ libraries.

Quicksort

As its name implies, for C++ quicksort has historically been the fastest known generic sorting algorithm in practice. Its average running time is $O(N\log N)$. It is very fast, mainly due to a very tight and highly optimized inner loop. It has $O(N^2)$ worst-case performance, but this can be made exponentially unlikely with little effort.
By combining quicksort with heapsort, we can achieve quicksort's fast running time on almost all inputs, together with heapsort's $O(N\log N)$ worst-case running time; an exercise describes this approach.

The quicksort algorithm is simple to understand and prove correct, although for many years it had the reputation of being an algorithm that could in theory be highly optimized but in practice was impossible to code correctly. Like mergesort, quicksort is a divide-and-conquer recursive algorithm.

Let us begin with the following simple sorting algorithm to sort a list. Arbitrarily choose any item, and then form three groups: those smaller than the chosen item, those equal to the chosen item, and those larger than the chosen item. Recursively sort the first and third groups, and then concatenate the three groups. The result is guaranteed by the basic principles of recursion to be a sorted arrangement of the original list. A direct implementation of this algorithm is shown in the figure below, and its performance is, generally speaking, quite respectable on most inputs.

    template <typename Comparable>
    void SORT( vector<Comparable> & items )
    {
        if( items.size( ) > 1 )
        {
            vector<Comparable> smaller;
            vector<Comparable> same;
            vector<Comparable> larger;

            auto chosenItem = items[ items.size( ) / 2 ];

            for( auto & i : items )
            {
                if( i < chosenItem )
                    smaller.push_back( std::move( i ) );
                else if( chosenItem < i )
                    larger.push_back( std::move( i ) );
                else
                    same.push_back( std::move( i ) );
            }

            SORT( smaller );     // Recursive call!
            SORT( larger );      // Recursive call!

            std::move( begin( smaller ), end( smaller ), begin( items ) );
            std::move( begin( same ), end( same ), begin( items ) + smaller.size( ) );
            std::move( begin( larger ), end( larger ), end( items ) - larger.size( ) );
        }
    }

    Figure: A simple recursive sorting algorithm
In fact, if the list contains large numbers of duplicates with relatively few distinct items, as is sometimes the case, then the performance is extremely good.

The algorithm we have described forms the basis of quicksort. However, by making the extra lists, and doing so recursively, it is hard to see how we have improved upon mergesort; in fact, so far, we really haven't. In order to do better, we must avoid using significant extra memory and have inner loops that are clean. Thus quicksort is commonly written in a manner that avoids creating the second group (the equal items), and the algorithm has numerous subtle details that affect the performance; therein lie the complications. We now describe the most common implementation of quicksort, "classic quicksort," in which the input is an array and in which no extra arrays are created by the algorithm.

The classic quicksort algorithm to sort an array S consists of the following four easy steps:

1. If the number of elements in S is 0 or 1, then return.
2. Pick any element v in S. This is called the pivot.
3. Partition $S - \{v\}$ (the remaining elements in S) into two disjoint groups: $S_1 = \{x \in S - \{v\} \mid x \le v\}$ and $S_2 = \{x \in S - \{v\} \mid x \ge v\}$.
4. Return {quicksort($S_1$) followed by v followed by quicksort($S_2$)}.

Since the partition step only ambiguously describes what to do with elements equal to the pivot, this becomes a design decision. Part of a good implementation is handling this case as efficiently as possible. Intuitively, we would hope that about half the elements that are equal to the pivot go into $S_1$ and the other half into $S_2$, much as we like binary search trees to be balanced.

The figure shows the action of quicksort on a set of numbers. The pivot is chosen by chance. The remaining elements in the set are partitioned into two smaller sets. Recursively sorting the set of smaller numbers yields those numbers in sorted order (by the rules of recursion); the set of large numbers is similarly sorted. The sorted arrangement of the entire set is then trivially obtained.

It should be clear that this algorithm works, but it is not clear why it is any faster than mergesort. Like mergesort, it recursively solves two subproblems and requires linear additional work (step 3), but, unlike mergesort, the subproblems are not guaranteed to be of equal size, which is potentially bad. The reason that quicksort is faster is that the partitioning step can actually be performed in place and very efficiently. This efficiency more than makes up for the lack of equal-sized recursive calls.

The algorithm as described so far lacks quite a few details, which we now fill in. There are many ways to implement steps 2 and 3. The method presented here is the result of extensive analysis and empirical study and represents a very efficient way to implement quicksort. Even slight deviations from this method can cause surprisingly bad results.

Picking the Pivot

Although the algorithm as described works no matter which element is chosen as the pivot, some choices are obviously better than others.
Figure: The steps of quicksort illustrated by example (select the pivot; partition; quicksort the small items; quicksort the large items).

A Wrong Way

The popular, uninformed choice is to use the first element as the pivot. This is acceptable if the input is random, but if the input is presorted or in reverse order, then the pivot provides a poor partition, because either all the elements go into $S_1$ or they all go into $S_2$. Worse, this happens consistently throughout the recursive calls. The practical effect is that if the first element is used as the pivot and the input is presorted, then quicksort will take quadratic time to do essentially nothing at all, which is quite embarrassing. Moreover, presorted input (or input with a large presorted section) is quite frequent, so using the first element as the pivot is an absolutely horrible idea and should be discarded immediately. An alternative is choosing the larger of the first two distinct elements as the pivot, but this has the same bad properties as merely choosing the first element. Do not use that pivoting strategy, either.
A Safe Maneuver

A safe course is merely to choose the pivot randomly. This strategy is generally perfectly safe, unless the random number generator has a flaw (which is not as uncommon as you might think), since it is very unlikely that a random pivot would consistently provide a poor partition. On the other hand, random number generation is generally an expensive commodity and does not reduce the average running time of the rest of the algorithm at all.

Median-of-Three Partitioning

The median of a group of $N$ numbers is the $\lceil N/2 \rceil$th largest number. The best choice of pivot would be the median of the array. Unfortunately, this is hard to calculate and would slow down quicksort considerably. A good estimate can be obtained by picking three elements randomly and using the median of these three as the pivot. The randomness turns out not to help much, so the common course is to use as the pivot the median of the left, right, and center elements. For instance, with the input used earlier, the pivot would be the median of the left element, the right element, and the center element (the one in position $\lfloor(\text{left} + \text{right})/2\rfloor$). Using median-of-three partitioning clearly eliminates the bad case for sorted input (the partitions become equal in this case) and actually reduces the number of comparisons.

Partitioning Strategy

There are several partitioning strategies used in practice, but the one described here is known to give good results. It is very easy, as we shall see, to do this wrong or inefficiently, but it is safe to use a known method. The first step is to get the pivot element out of the way by swapping it with the last element. i starts at the first element, and j starts at the next-to-last element. If the original input was the same as before, the figure shows the current situation.

For now, we will assume that all the elements are distinct. Later on, we will worry about what to do in the presence of duplicates. As a limiting case, our algorithm must do the proper thing if all of the elements are identical. It is surprising how easy it is to do the wrong thing.

What our partitioning stage wants to do is to move all the small elements to the left part of the array and all the large elements to the right part. "Small" and "large" are, of course, relative to the pivot. While i is to the left of j, we move i right, skipping over elements that are smaller than the pivot. We move j left, skipping over elements that are larger than the pivot. When i and j have stopped, i is pointing at a large element and j is pointing at a small element.
If i is to the left of j, those elements are swapped. The effect is to push a large element to the right and a small element to the left. In the example above, i would not move, and j would slide over one place. We then swap the elements pointed to by i and j and repeat the process until i and j cross. (The intermediate array snapshots — after the first swap, before the second swap, after the second swap, before the third swap, and finally after the swap with the pivot — are shown in the accompanying figures.)

At this stage, i and j have crossed, so no swap is performed. The final part of the partitioning is to swap the pivot element with the element pointed to by i. When the pivot is swapped with the element at i in the last step, we know that every element in a position p < i must be small: either position p contained a small element to start with, or the large element originally in position p was replaced during a swap.
A similar argument shows that elements in positions p > i must be large.

One important detail we must consider is how to handle elements that are equal to the pivot. The questions are whether i should stop when it sees an element equal to the pivot and whether j should stop when it sees an element equal to the pivot. Intuitively, i and j ought to do the same thing, since otherwise the partitioning step is biased. For instance, if i stops and j does not, then all elements that are equal to the pivot will wind up in $S_2$.

To get an idea of what might be good, we consider the case where all the elements in the array are identical. If both i and j stop, there will be many swaps between identical elements. Although this seems useless, the positive effect is that i and j will cross in the middle, so when the pivot is replaced, the partition creates two nearly equal subarrays. The mergesort analysis tells us that the total running time would then be $O(N\log N)$. If neither i nor j stops, and code is present to prevent them from running off the end of the array, no swaps will be performed. Although this seems good, a correct implementation would then swap the pivot into the last spot that i touched, which would be the next-to-last position (or the last, depending on the exact implementation). This would create very uneven subarrays. If all the elements are identical, the running time is $O(N^2)$. The effect is the same as using the first element as the pivot for presorted input: it takes quadratic time to do nothing! Thus, we find that it is better to do the unnecessary swaps and create even subarrays than to risk wildly uneven subarrays. Therefore, we will have both i and j stop if they encounter an element equal to the pivot. This turns out to be the only one of the four possibilities that does not take quadratic time for this input.

At first glance it may seem that worrying about an array of identical elements is silly. After all, why would anyone want to sort a large number of identical elements? However, recall that quicksort is recursive. Suppose the array contains millions of elements, of which a sizable group are identical (or, more likely, are complex elements whose sort keys are identical). Eventually, quicksort will make a recursive call on only those identical elements, and then it really will be important that they can be sorted efficiently.

Small Arrays

For very small arrays, quicksort does not perform as well as insertion sort. Furthermore, because quicksort is recursive, these cases occur frequently. A common solution is not to use quicksort recursively for small arrays but instead to use a sorting algorithm that is efficient for small arrays, such as insertion sort. Using this strategy can actually save a noticeable fraction of the running time over doing no cutoff at all. A good cutoff is a small constant number of elements, and any similar small cutoff is likely to produce comparable results. Using a cutoff also saves nasty degenerate cases, such as taking the median of three elements when there are only one or two.

Actual Quicksort Routines

The driver for quicksort is shown in the figure below.
    /**
     * Quicksort algorithm (driver).
     */
    template <typename Comparable>
    void quicksort( vector<Comparable> & a )
    {
        quicksort( a, 0, a.size( ) - 1 );
    }

    Figure: Driver for quicksort

The general form of the routines will be to pass the array and the range of the array (left and right) to be sorted. The first routine to deal with is pivot selection. The easiest way to do this is to sort a[left], a[right], and a[center] in place. This has the extra advantage that the smallest of the three winds up in a[left], which is where the partitioning step would put it anyway. The largest winds up in a[right], which is also the correct place, since it is larger than the pivot. Therefore, we can place the pivot in a[right - 1] and start i and j just inside that range in the partition phase. Yet another benefit is that because a[left] is smaller than the pivot, it will act as a sentinel for j; thus, we do not need to worry about j running past the end. Since i will stop on elements equal to the pivot, storing the pivot in a[right - 1] provides a sentinel for i.

    /**
     * Return median of left, center, and right.
     * Order these and hide the pivot.
     */
    template <typename Comparable>
    const Comparable & median3( vector<Comparable> & a, int left, int right )
    {
        int center = ( left + right ) / 2;

        if( a[ center ] < a[ left ] )
            std::swap( a[ left ], a[ center ] );
        if( a[ right ] < a[ left ] )
            std::swap( a[ left ], a[ right ] );
        if( a[ right ] < a[ center ] )
            std::swap( a[ center ], a[ right ] );

        // Place pivot at position right - 1
        std::swap( a[ center ], a[ right - 1 ] );
        return a[ right - 1 ];
    }

    Figure: Code to perform median-of-three partitioning
The code in the figure above does the median-of-three partitioning with all the side effects described. It may seem only slightly inefficient to compute the pivot by a method that does not actually sort a[left], a[center], and a[right], but, surprisingly, this produces bad results (see the exercises).

The real heart of the quicksort routine is in the figure below. It includes the partitioning and the recursive calls. There are several things worth noting in this implementation. The initialization sets i and j one position past their correct values, so that there are no special cases to consider. This initialization depends on the fact that median-of-three partitioning has some side effects; this program will not work if you try to use it without change with a simple pivoting strategy, because i and j start in the wrong place and there is no longer a sentinel for j.

    /**
     * Internal quicksort method that makes recursive calls.
     * Uses median-of-three partitioning and a cutoff of 10.
     * a is an array of Comparable items.
     * left is the left-most index of the subarray.
     * right is the right-most index of the subarray.
     */
    template <typename Comparable>
    void quicksort( vector<Comparable> & a, int left, int right )
    {
        if( left + 10 <= right )
        {
            const Comparable & pivot = median3( a, left, right );

            // Begin partitioning
            int i = left, j = right - 1;
            for( ; ; )
            {
                while( a[ ++i ] < pivot ) { }
                while( pivot < a[ --j ] ) { }
                if( i < j )
                    std::swap( a[ i ], a[ j ] );
                else
                    break;
            }

            std::swap( a[ i ], a[ right - 1 ] );   // Restore pivot

            quicksort( a, left, i - 1 );     // Sort small elements
            quicksort( a, i + 1, right );    // Sort large elements
        }
        else  // Do an insertion sort on the subarray
            insertionSort( a, left, right );
    }

    Figure: Main quicksort routine
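The routine above falls back on insertionSort for subarrays below the cutoff; that helper is presented elsewhere in the chapter. Purely as a reminder of what is being assumed, here is a minimal sketch (ours, not the book's listing) of a routine with the signature the code expects:

    // Minimal sketch: insertion sort restricted to the subrange a[left..right],
    // matching the call insertionSort( a, left, right ) made above.
    template <typename Comparable>
    void insertionSort( vector<Comparable> & a, int left, int right )
    {
        for( int p = left + 1; p <= right; ++p )
        {
            Comparable tmp = std::move( a[ p ] );
            int j;
            for( j = p; j > left && tmp < a[ j - 1 ]; --j )
                a[ j ] = std::move( a[ j - 1 ] );
            a[ j ] = std::move( tmp );
        }
    }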
The swap in the partitioning loop is sometimes written explicitly, for speed purposes. For the algorithm to be fast, it is necessary to force the compiler to compile this code inline. Many compilers will do this automatically if swap is declared using inline, but for those that do not, the difference can be significant.

Finally, the two scanning loops show why quicksort is so fast: the inner loop of the algorithm consists of an increment or decrement (by 1, which is fast), a test, and a jump. There is no extra juggling as there is in mergesort. Even so, this code is surprisingly tricky. It is tempting to replace the partitioning loop with the statements in the figure below. This does not work, because there would be an infinite loop if a[i] = a[j] = pivot.

    int i = left, j = right - 1;

    for( ; ; )
    {
        while( a[ i ] < pivot )
            i++;
        while( pivot < a[ j ] )
            j--;
        if( i < j )
            std::swap( a[ i ], a[ j ] );
        else
            break;
    }

    Figure: A small change to quicksort, which breaks the algorithm

Analysis of Quicksort

Like mergesort, quicksort is recursive; therefore, its analysis requires solving a recurrence formula. We will do the analysis for a quicksort, assuming a random pivot (no median-of-three partitioning) and no cutoff for small arrays. We will take $T(0) = T(1) = 1$, as in mergesort. The running time of quicksort is equal to the running time of the two recursive calls plus the linear time spent in the partition (the pivot selection takes only constant time). This gives the basic quicksort relation

$$T(N) = T(i) + T(N - i - 1) + cN$$

where $i = |S_1|$ is the number of elements in $S_1$. We will look at three cases.

Worst-Case Analysis

The pivot is the smallest element, all the time. Then $i = 0$, and if we ignore $T(0) = 1$, which is insignificant, the recurrence is

$$T(N) = T(N - 1) + cN, \qquad N > 1$$
We telescope, using this recurrence repeatedly. Thus,

$$T(N - 1) = T(N - 2) + c(N - 1)$$

$$T(N - 2) = T(N - 3) + c(N - 2)$$

$$\vdots$$

$$T(2) = T(1) + c(2)$$

Adding up all these equations yields

$$T(N) = T(1) + c\sum_{i=2}^{N} i = \Theta(N^2)$$

as claimed earlier. To see that this is the worst possible case, note that the total cost of all the partitions in recursive calls at any given depth is at most $N$. Since the recursion depth is at most $N$, this gives an $O(N^2)$ worst-case bound for quicksort.

Best-Case Analysis

In the best case, the pivot is in the middle. To simplify the math, we assume that the two subarrays are each exactly half the size of the original; although this gives a slight overestimate, it is acceptable because we are only interested in a Big-Oh answer:

$$T(N) = 2T(N/2) + cN$$

Divide both sides by $N$:

$$\frac{T(N)}{N} = \frac{T(N/2)}{N/2} + c$$

We will telescope using this equation:

$$\frac{T(N/2)}{N/2} = \frac{T(N/4)}{N/4} + c$$

$$\frac{T(N/4)}{N/4} = \frac{T(N/8)}{N/8} + c$$

$$\vdots$$

$$\frac{T(2)}{2} = \frac{T(1)}{1} + c$$

We add all of these equations and note that there are $\log N$ of them:

$$\frac{T(N)}{N} = \frac{T(1)}{1} + c\log N$$

which yields

$$T(N) = cN\log N + N = \Theta(N\log N)$$

Notice that this is the exact same analysis as mergesort; hence, we get the same answer. That this is the best case is implied by results in a later section.
Average-Case Analysis

This is the most difficult part. For the average case, we assume that each of the sizes for $S_1$ is equally likely, and hence has probability $1/N$. This assumption is actually valid for our pivoting and partitioning strategy, but it is not valid for some others. Partitioning strategies that do not preserve the randomness of the subarrays cannot use this analysis. Interestingly, these strategies seem to result in programs that take longer to run in practice.

With this assumption, the average value of $T(i)$, and hence of $T(N - i - 1)$, is

$$\frac{1}{N}\sum_{j=0}^{N-1} T(j)$$

The basic quicksort relation then becomes

$$T(N) = \frac{2}{N}\left[\sum_{j=0}^{N-1} T(j)\right] + cN$$

If this equation is multiplied by $N$, it becomes

$$N\,T(N) = 2\left[\sum_{j=0}^{N-1} T(j)\right] + cN^2$$

We need to remove the summation sign to simplify matters. We note that we can telescope with one more equation:

$$(N-1)\,T(N-1) = 2\left[\sum_{j=0}^{N-2} T(j)\right] + c(N-1)^2$$

If we subtract the latter equation from the former, we obtain

$$N\,T(N) - (N-1)\,T(N-1) = 2T(N-1) + 2cN - c$$

We rearrange terms and drop the insignificant $-c$ on the right, obtaining

$$N\,T(N) = (N+1)\,T(N-1) + 2cN$$

We now have a formula for $T(N)$ in terms of $T(N-1)$ only. Again the idea is to telescope, but the equation is in the wrong form. Divide it by $N(N+1)$:

$$\frac{T(N)}{N+1} = \frac{T(N-1)}{N} + \frac{2c}{N+1}$$

Now we can telescope:

$$\frac{T(N-1)}{N} = \frac{T(N-2)}{N-1} + \frac{2c}{N}$$

$$\frac{T(N-2)}{N-1} = \frac{T(N-3)}{N-2} + \frac{2c}{N-1}$$

$$\vdots$$

$$\frac{T(2)}{3} = \frac{T(1)}{2} + \frac{2c}{3}$$
Adding all of these equations yields

$$\frac{T(N)}{N+1} = \frac{T(1)}{2} + 2c\sum_{i=3}^{N+1}\frac{1}{i}$$

The sum is about $\log_e(N+1) + \gamma - \tfrac{3}{2}$, where $\gamma \approx 0.577$ is known as Euler's constant, so

$$\frac{T(N)}{N+1} = O(\log N)$$

and

$$T(N) = O(N\log N)$$

Although this analysis seems complicated, it really is not — the steps are natural once you have seen some recurrence relations. The analysis can actually be taken further. The highly optimized version that was described above has also been analyzed, and that result gets extremely difficult, involving complicated recurrences and advanced mathematics. The effect of equal elements has also been analyzed in detail, and it turns out that the code presented does the right thing.

A Linear-Expected-Time Algorithm for Selection

Quicksort can be modified to solve the selection problem, which we have seen earlier. Recall that by using a priority queue, we can find the kth largest (or smallest) element in $O(N + k\log N)$. For the special case of finding the median, this gives an $O(N\log N)$ algorithm. Since we can sort the array in $O(N\log N)$ time, one might expect to obtain a better time bound for selection. The algorithm we present to find the kth smallest element in a set S is almost identical to quicksort. In fact, the first three steps are the same. We will call this algorithm quickselect. Let $|S_i|$ denote the number of elements in $S_i$. The steps of quickselect are:

1. If $|S| = 1$, then $k = 1$, and return the element in S as the answer. If a cutoff for small arrays is being used and $|S|$ is at most the cutoff, then sort S and return the kth smallest element.
2. Pick a pivot element, $v \in S$.
3. Partition $S - \{v\}$ into $S_1$ and $S_2$, as was done with quicksort.
4. If $k \le |S_1|$, then the kth smallest element must be in $S_1$. In this case, return quickselect($S_1$, k). If $k = 1 + |S_1|$, then the pivot is the kth smallest element, and we can return it as the answer. Otherwise, the kth smallest element lies in $S_2$, and it is the $(k - |S_1| - 1)$st smallest element in $S_2$. We make a recursive call and return quickselect($S_2$, $k - |S_1| - 1$).

In contrast to quicksort, quickselect makes only one recursive call instead of two. The worst case of quickselect is identical to that of quicksort and is $O(N^2)$. Intuitively, this is because quicksort's worst case is when one of $S_1$ and $S_2$ is empty; thus, quickselect is not really saving a recursive call.
The average running time, however, is $O(N)$. The analysis is similar to quicksort's and is left as an exercise.

The implementation of quickselect is even simpler than the abstract description might imply; the code is shown in the figure below.

    /**
     * Internal selection method that makes recursive calls.
     * Uses median-of-three partitioning and a cutoff of 10.
     * Places the kth smallest item in a[k-1].
     * a is an array of Comparable items.
     * left is the left-most index of the subarray.
     * right is the right-most index of the subarray.
     * k is the desired rank (1 is minimum) in the entire array.
     */
    template <typename Comparable>
    void quickSelect( vector<Comparable> & a, int left, int right, int k )
    {
        if( left + 10 <= right )
        {
            const Comparable & pivot = median3( a, left, right );

            // Begin partitioning
            int i = left, j = right - 1;
            for( ; ; )
            {
                while( a[ ++i ] < pivot ) { }
                while( pivot < a[ --j ] ) { }
                if( i < j )
                    std::swap( a[ i ], a[ j ] );
                else
                    break;
            }

            std::swap( a[ i ], a[ right - 1 ] );   // Restore pivot

            // Recurse; only this part changes
            if( k <= i )
                quickSelect( a, left, i - 1, k );
            else if( k > i + 1 )
                quickSelect( a, i + 1, right, k );
        }
        else  // Do an insertion sort on the subarray
            insertionSort( a, left, right );
    }

    Figure: Main quickselect routine
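As a usage illustration (ours, not from the text): given the routine above together with median3 and insertionSort, the median of a nonempty vector can be obtained by selecting rank $\lceil N/2 \rceil$ over the whole array; afterward the answer sits at index $\lceil N/2 \rceil - 1$.

    // Minimal sketch: find the median of a nonempty vector using quickSelect.
    // Note that the vector is rearranged in the process.
    template <typename Comparable>
    const Comparable & findMedian( vector<Comparable> & a )
    {
        int k = ( a.size( ) + 1 ) / 2;           // rank of the median (1 is minimum)
        quickSelect( a, 0, a.size( ) - 1, k );
        return a[ k - 1 ];                        // kth smallest now lives in a[k-1]
    }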
When the algorithm terminates, the kth smallest element is in position $k - 1$ (because arrays start at index 0). This destroys the original ordering; if that is not desirable, then a copy must be made.

Using a median-of-three pivoting strategy makes the chance of the worst case occurring almost negligible. By carefully choosing the pivot, however, we can eliminate the quadratic worst case and ensure an $O(N)$ algorithm. The overhead involved in doing this is considerable, so the resulting algorithm is mostly of theoretical interest. In a later chapter, we will examine the linear-time worst-case algorithm for selection, and we shall also see an interesting technique of choosing the pivot that results in a somewhat faster selection algorithm in practice.

A General Lower Bound for Sorting

Although we have $O(N\log N)$ algorithms for sorting, it is not clear that this is as good as we can do. In this section, we prove that any algorithm for sorting that uses only comparisons requires $\Omega(N\log N)$ comparisons (and hence time) in the worst case, so that mergesort and heapsort are optimal to within a constant factor. The proof can be extended to show that $\Omega(N\log N)$ comparisons are required, even on average, for any sorting algorithm that uses only comparisons, which means that quicksort is optimal on average to within a constant factor. Specifically, we will prove the following result: Any sorting algorithm that uses only comparisons requires $\lceil\log(N!)\rceil$ comparisons in the worst case and $\log(N!)$ comparisons on average. We will assume that all N elements are distinct, since any sorting algorithm must work for this case.

Decision Trees

A decision tree is an abstraction used to prove lower bounds. In our context, a decision tree is a binary tree. Each node represents a set of possible orderings, consistent with the comparisons that have been made, among the elements. The results of the comparisons are the tree edges.

The decision tree in the figure below represents an algorithm that sorts the three elements a, b, and c. The initial state of the algorithm is at the root. (We will use the terms state and node interchangeably.) No comparisons have been done, so all orderings are legal. The first comparison that this particular algorithm performs compares a and b. The two results lead to two possible states. If a < b, then only three possibilities remain. If the algorithm reaches that state, it will then compare a and c. Other algorithms might do different things; a different algorithm would have a different decision tree. If the second comparison leaves only one ordering that is consistent, the algorithm can terminate and report that it has completed the sort. If two possible orderings remain, the algorithm cannot possibly be sure which is correct, and it will require one more comparison.

Every algorithm that sorts by using only comparisons can be represented by a decision tree. Of course, it is only feasible to draw the tree for extremely small input sizes. The number of comparisons used by the sorting algorithm is equal to the depth of the deepest leaf.
Figure: A decision tree for three-element sort.

In our case, this algorithm uses three comparisons in the worst case. The average number of comparisons used is equal to the average depth of the leaves. Since a decision tree is large, it follows that there must be some long paths. To prove the lower bounds, all that needs to be shown are some basic tree properties.

Lemma. Let T be a binary tree of depth d. Then T has at most $2^d$ leaves.

Proof. The proof is by induction. If d = 0, then there is at most one leaf, so the basis is true. Otherwise, we have a root, which cannot be a leaf, and a left and a right subtree, each of depth at most d − 1. By the induction hypothesis, they can each have at most $2^{d-1}$ leaves, giving a total of at most $2^d$ leaves. This proves the lemma.

Lemma. A binary tree with L leaves must have depth at least $\lceil\log L\rceil$.

Proof. Immediate from the preceding lemma.
Theorem. Any sorting algorithm that uses only comparisons between elements requires at least $\lceil\log(N!)\rceil$ comparisons in the worst case.

Proof. A decision tree to sort N elements must have $N!$ leaves. The theorem follows from the preceding lemma.

Theorem. Any sorting algorithm that uses only comparisons between elements requires $\Omega(N\log N)$ comparisons.

Proof. From the previous theorem, $\log(N!)$ comparisons are required:

$$\begin{aligned}
\log(N!) &= \log\bigl(N(N-1)(N-2)\cdots(2)(1)\bigr)\\
&= \log N + \log(N-1) + \log(N-2) + \cdots + \log 2 + \log 1\\
&\ge \log N + \log(N-1) + \log(N-2) + \cdots + \log(N/2)\\
&\ge \frac{N}{2}\log\frac{N}{2}\\
&\ge \frac{N}{2}\log N - \frac{N}{2}\\
&= \Omega(N\log N)
\end{aligned}$$

This type of lower-bound argument, when used to prove a worst-case result, is sometimes known as an information-theoretic lower bound. The general theorem says that if there are P different possible cases to distinguish, and the questions are of the form YES/NO, then $\lceil\log P\rceil$ questions are always required in some case by any algorithm to solve the problem. It is possible to prove a similar result for the average-case running time of any comparison-based sorting algorithm. This result is implied by the following lemma, which is left as an exercise: Any binary tree with L leaves has an average depth of at least $\log L$.
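As a cross-check on this bound (our own note, not part of the text), Stirling's approximation for the factorial gives the same growth rate directly:

$$\log(N!) = N\log N - N\log e + O(\log N) = \Theta(N\log N)$$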
Decision-Tree Lower Bounds for Selection Problems

The previous section employed a decision-tree argument to show the fundamental lower bound that any comparison-based sorting algorithm must use roughly $N\log N$ comparisons. In this section, we show additional lower bounds for selection in an N-element collection, specifically:

1. $N - 1$ comparisons are necessary to find the smallest item.
2. $N + \lceil\log N\rceil - 2$ comparisons are necessary to find the two smallest items.
3. $3N/2 - O(\log N)$ comparisons are necessary to find the median.

The lower bounds for all these problems, with the exception of finding the median, are tight: algorithms exist that use exactly the specified number of comparisons. In all our proofs, we assume all items are unique.

Lemma. If all the leaves in a decision tree are at depth d or higher, the decision tree must have at least $2^d$ leaves.

Proof. Note that all nonleaf nodes in a decision tree have two children. The proof is by induction and follows the earlier lemma.

The first lower bound, for finding the smallest item, is the easiest and most trivial to show.

Theorem. Any comparison-based algorithm to find the smallest element must use at least $N - 1$ comparisons.

Proof. Every element, x, except the smallest element, must be involved in a comparison with some other element, y, in which x is declared larger than y. Otherwise, if there were two different elements that had not been declared larger than any other elements, then either could be the smallest.

Lemma. The decision tree for finding the smallest of N elements must have at least $2^{N-1}$ leaves.

Proof. By the preceding theorem, all leaves in this decision tree are at depth $N - 1$ or higher. The lemma then follows from the previous lemma.

The bound for selection is a little more complicated and requires looking at the structure of the decision tree. It will allow us to prove the lower bounds for the second and third problems on our list.

Lemma. The decision tree for finding the kth smallest of N elements must have at least $\binom{N}{k-1}2^{N-k}$ leaves.

Proof. Observe that any algorithm that correctly identifies the kth smallest element, t, must be able to prove that all other elements are either larger than or smaller than t. Otherwise, it would be giving the same answer regardless of whether the undecided element was larger or smaller than t, and the answer cannot be the same in both circumstances. Thus each leaf in the tree, in addition to representing the kth smallest element, also represents the $k - 1$ smallest items that have been identified. Let T be the decision tree, and consider two sets: $S = \{x_1, x_2, \ldots, x_{k-1}\}$, representing the $k - 1$ smallest items, and R, which are the remaining items, including the kth smallest.
Figure: An example in which the smallest three elements form S = {a, b, c} and the largest four elements form R = {d, e, f, g}; a comparison between an element of S and an element of R for this choice of S and R can be eliminated when forming T′.

Form a new decision tree, T′, by purging any comparisons in T between an element in S and an element in R. Since any element in S is smaller than any element in R, the comparison tree node and its right subtree may be removed from T′ without any loss of information. The figure shows how nodes can be pruned. Any permutation of R that is fed into T′ follows the same path of nodes and leads to the same leaf as a corresponding sequence consisting of a permutation of S followed by the same permutation of R. Since T identifies the overall kth smallest element, and the smallest element in R is that element, it follows that T′ identifies the smallest element in R. Thus, T′ must have at least $2^{|R|-1} = 2^{N-k}$ leaves. These leaves in T′ directly correspond to $2^{N-k}$ leaves representing S. Since there are $\binom{N}{k-1}$ choices for S, there must be at least $\binom{N}{k-1}2^{N-k}$ leaves in T.

A direct application of this lemma allows us to prove the lower bounds for finding the second smallest element and the median.

Theorem. Any comparison-based algorithm to find the kth smallest element must use at least $N - k + \lceil\log\binom{N}{k-1}\rceil$ comparisons.

Proof. Immediate from the preceding lemmas.

Theorem. Any comparison-based algorithm to find the second smallest element must use at least $N + \lceil\log N\rceil - 2$ comparisons.

Proof. Applying the previous theorem with $k = 2$ yields $N - 2 + \lceil\log N\rceil$.
Theorem. Any comparison-based algorithm to find the median must use at least $3N/2 - O(\log N)$ comparisons.

Proof. Apply the previous theorem with $k = \lceil N/2\rceil$.

The lower bound for selection is not tight, nor is it the best known; see the references for details.

Adversary Lower Bounds

Although decision-tree arguments allowed us to show lower bounds for sorting and some selection problems, the bounds that result are generally not that tight, and sometimes they are trivial. For instance, consider the problem of finding the minimum item. Since there are N possible choices for the minimum, the information-theoretic lower bound produced by a decision-tree argument is only $\lceil\log N\rceil$. Earlier, we were able to show the $N - 1$ bound by using what is essentially an adversary argument. In this section, we expand on this argument and use it to prove the following lower bound: $\lceil 3N/2\rceil - 2$ comparisons are necessary to find both the smallest and largest item.

Recall our proof that any algorithm to find the smallest item requires at least $N - 1$ comparisons: Every element, x, except the smallest element, must be involved in a comparison with some other element, y, in which x is declared larger than y. Otherwise, if there were two different elements that had not been declared larger than any other elements, then either could be the smallest. This is the underlying idea of an adversary argument, which has some basic steps:

1. Establish that some basic amount of information must be obtained by any algorithm that solves a problem.
2. In each step of the algorithm, the adversary will maintain an input that is consistent with all the answers that have been provided by the algorithm thus far.
3. Argue that with insufficient steps, there are multiple consistent inputs that would provide different answers to the algorithm; hence, the algorithm has not done enough steps, because if the algorithm were to provide an answer at that point, the adversary would be able to show an input for which the answer is wrong.

To see how this works, we will re-prove the lower bound for finding the smallest element using this proof template.

Theorem (restated). Any comparison-based algorithm to find the smallest element must use at least $N - 1$ comparisons.
New Proof. Begin by marking each item as U (for unknown). When an item is declared larger than another item, we will change its marking to E (for eliminated from being the minimum). This change represents one unit of information. Initially each unknown item has a value of 0; since there have been no comparisons, this ordering is consistent with prior answers. A comparison between two items is either between two unknowns or it involves at least one item already eliminated from being the minimum. The figure shows how our adversary constructs the input values based on the questioning. If the comparison is between two unknowns, the first is declared the smaller and the second is automatically eliminated, providing one unit of information. We then assign it (irrevocably) a number larger than 0; the most convenient choice is the number of eliminated items. If a comparison is between an eliminated number and an unknown, the eliminated number (which is larger than 0, by the prior sentence) is declared larger, and there are no changes, no eliminations, and no information obtained. If two eliminated numbers are compared, then they will be different, and a consistent answer can be provided, again with no changes and no information provided. At the end, we need to obtain $N - 1$ units of information, and each comparison provides only one unit at most; hence, at least $N - 1$ comparisons are necessary.

Figure: The adversary constructs the input for finding the minimum as the algorithm runs (a comparison of two unknowns leaves the first unchanged and marks the second E, assigning it the number of eliminated items; all other comparisons are answered consistently with no changes).

Lower Bound for Finding the Minimum and Maximum

We can use this same technique to establish a lower bound for finding both the minimum and maximum item. Observe that all but one item must be eliminated from being the smallest, and all but one item must be eliminated from being the largest; thus the total information that any algorithm must acquire is $2N - 2$. However, a comparison x < y eliminates both x from being the maximum and y from being the minimum; thus, a comparison can provide two units of information. Consequently, this argument yields only the trivial $N - 1$ lower bound. Our adversary needs to do more work to ensure that it does not give out two units of information more often than it must. To achieve this, each item is initially unmarked. If it "wins" a comparison (i.e., it is declared larger than some item), it obtains a W. If it "loses" a comparison (i.e., it is declared smaller than some item), it obtains an L. At the end, all but two items will be marked WL. Our adversary will ensure that it hands out two units of information only when it is comparing two unmarked items. That can happen only $\lfloor N/2\rfloor$ times; the remaining information then has to be obtained one unit at a time, which will establish the bound.
Theorem. Any comparison-based algorithm to find the minimum and maximum must use at least $\lceil 3N/2\rceil - 2$ comparisons.

Proof. The basic idea is that if two items are unmarked, the adversary must give out two pieces of information. Otherwise, one of the items has either a W or an L (perhaps both). In that case, with reasonable care, the adversary should be able to avoid giving out two units of information. For instance, if one item, x, has a W and the other item, y, is unmarked, the adversary lets x win again by saying x > y. This gives one unit of information for y but no new information for x. It is easy to see that, in principle, there is no reason the adversary should have to give out more than one unit of information if at least one unmarked item is involved in the comparison.

It remains to show that the adversary can maintain values that are consistent with its answers. If both items are unmarked, then obviously they can be safely assigned values consistent with the comparison answer; this case yields two units of information. Otherwise, if one of the items involved in a comparison is unmarked, it can be assigned a value the first time, consistent with the other item in the comparison. This case yields one unit of information.

Figure: The adversary constructs the input for finding the maximum and minimum as the algorithm runs (a table of cases showing, for each combination of markings of x and y, the answer given, the information revealed, and how the values of x and y are adjusted).
Otherwise, both items involved in the comparison are marked. If both are WL, then we can answer consistently with the current assignment, yielding no information. Otherwise, at least one of the items has only a W or only an L. We allow that item to compare redundantly (if it is an L, then it loses again; if it is a W, then it wins again), and its value can easily be adjusted if needed, based on the other item in the comparison (an L can be lowered as needed; a W can be raised as needed). This yields at most one unit of information for the other item in the comparison, possibly zero. (Footnote: It is possible that the current assignment gives both items the same number; in such a case we can increase all items whose current value is larger than x, and then nudge x, to break the tie.)

The figure summarizes the action of the adversary, making x the primary element whose value changes in all cases. At most $\lfloor N/2\rfloor$ comparisons yield two units of information, meaning that the remaining information, namely $2N - 2 - 2\lfloor N/2\rfloor$ units, must each be obtained one comparison at a time. Thus the total number of comparisons needed is at least $\lfloor N/2\rfloor + 2N - 2 - 2\lfloor N/2\rfloor = \lceil 3N/2\rceil - 2$.

It is easy to see that this bound is achievable. Pair up the elements, and perform a comparison between each pair. Then find the maximum among the winners and the minimum among the losers.

Linear-Time Sorts: Bucket Sort and Radix Sort

Although we proved earlier that any general sorting algorithm that uses only comparisons requires $\Omega(N\log N)$ time in the worst case, recall that it is still possible to sort in linear time in some special cases. A simple example is bucket sort. For bucket sort to work, extra information must be available. The input $A_1, A_2, \ldots, A_N$ must consist of only positive integers smaller than M. (Obviously extensions to this are possible.) If this is the case, then the algorithm is simple: Keep an array called count, of size M, which is initialized to all 0s. Thus, count has M cells, or buckets, which are initially empty. When $A_i$ is read, increment count[$A_i$] by 1. After all the input is read, scan the count array, printing out a representation of the sorted list. This algorithm takes $O(M + N)$; the proof is left as an exercise. If M is $O(N)$, then the total is $O(N)$.

Although this algorithm seems to violate the lower bound, it turns out that it does not, because it uses a more powerful operation than simple comparisons. By incrementing the appropriate bucket, the algorithm essentially performs an M-way comparison in unit time. This is similar to the strategy used in extendible hashing, discussed earlier. This is clearly not in the model for which the lower bound was proven. The algorithm does, however, question the validity of the model used in proving the lower bound. The model actually is a strong model, because a general-purpose sorting algorithm cannot make assumptions about the type of input it can expect to see but must make decisions based on ordering information only.
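The text leaves the implementation of bucket sort as an exercise; the sketch below is our own minimal rendering of the algorithm just described (it overwrites the array with the sorted values rather than printing them).

    #include <iostream>
    #include <vector>
    using namespace std;

    // Bucket sort: the input consists of positive integers smaller than m.
    // count[v] records how many times the value v appears.
    void bucketSort( vector<int> & a, int m )
    {
        vector<int> count( m, 0 );          // m buckets, initially empty

        for( int x : a )                    // O(N): drop each item into its bucket
            ++count[ x ];

        int idx = 0;                        // O(M + N): scan the buckets in order
        for( int v = 0; v < m; ++v )
            for( int k = 0; k < count[ v ]; ++k )
                a[ idx++ ] = v;
    }

    int main( )
    {
        vector<int> a{ 3, 7, 1, 4, 1, 5, 9, 2, 6 };
        bucketSort( a, 10 );
        for( int x : a )
            cout << x << ' ';               // 1 1 2 3 4 5 6 7 9
        cout << '\n';
        return 0;
    }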
Naturally, if there is extra information available, we should expect to find a more efficient algorithm, since otherwise the extra information would be wasted.

Although bucket sort seems like much too trivial an algorithm to be useful, it turns out that there are many cases where the input is only small integers, so that using a method like quicksort is really overkill. One such example is radix sort.

Radix sort is sometimes known as card sort because it was used, until the advent of modern computers, to sort old-style punch cards. Suppose we have numbers in some fixed range, say N numbers in the range 0 to $b^p - 1$ for some constant p. Obviously we cannot use bucket sort; there would be too many buckets. The trick is to use several passes of bucket sort. The natural algorithm would be to bucket sort by the most significant "digit" (digit is taken to base b), then the next most significant, and so on. But a simpler idea is to perform the bucket sorts in the reverse order, starting with the least significant "digit" first. Of course, more than one number could fall into the same bucket, and unlike the original bucket sort, these numbers could be different, so we keep them in a list. Each pass is stable: items that agree in the current digit retain the ordering determined in prior passes. The trace in the figure shows the result of sorting the first ten cubes arranged randomly. After the first pass, the items are sorted on the least significant digit, and in general, after the kth pass, the items are sorted on the k least significant digits. So at the end, the items are completely sorted. To see that the algorithm works, notice that the only possible failure would occur if two numbers came out of the same bucket in the wrong order. But the previous passes ensure that when several numbers enter a bucket, they enter in sorted order according to the $k - 1$ least significant digits. The running time is $O(p(N + b))$, where p is the number of passes, N is the number of elements to sort, and b is the number of buckets.

Figure: Radix sort trace (the initial items; sorted by the least significant digit; sorted by the next digit; sorted by the most significant digit).

One application of radix sort is sorting strings. If all the strings have the same length L, then by using buckets for each character, we can implement a radix sort in $O(NL)$ time. The most straightforward way of doing this is shown in the figure below. In our code, we assume that all characters are ASCII, residing in the first 256 positions of the Unicode character set. In each pass, we add an item to the appropriate bucket, and then after all the buckets are populated, we step through the buckets, dumping everything back into the array. Notice that when a bucket is populated and emptied in the next pass, the order from the current pass is preserved.
    /*
     * Radix sort an array of Strings.
     * Assume all are all ASCII.
     * Assume all have same length.
     */
    void radixSortA( vector<string> & arr, int stringLen )
    {
        const int BUCKETS = 256;
        vector<vector<string>> buckets( BUCKETS );

        for( int pos = stringLen - 1; pos >= 0; --pos )
        {
            for( string & s : arr )
                buckets[ s[ pos ] ].push_back( std::move( s ) );

            int idx = 0;
            for( auto & thisBucket : buckets )
            {
                for( string & s : thisBucket )
                    arr[ idx++ ] = std::move( s );

                thisBucket.clear( );
            }
        }
    }

    Figure: Simple implementation of radix sort for strings, using a vector of buckets

Counting radix sort is an alternative implementation of radix sort that avoids using vectors to represent buckets. Instead, we maintain a count of how many items would go in each bucket; this information goes into an array, count, so that count[k] is the number of items that are in bucket k. Then we can use another array, offset, so that offset[k] represents the number of items whose value is strictly smaller than k. When we see a value k for the first time in the final scan, offset[k] tells us a valid array spot where it can be written (but we have to use a temporary array for the write), and after that is done, offset[k] is incremented. Counting radix sort thus avoids the need to keep lists. As a further optimization, we can avoid using offset by reusing the count array. The modification is that we initially have count[k + 1] represent the number of items that are in bucket k. Then after that information is computed, we scan the count array from the smallest to the largest index and increment count[k] by count[k − 1]. It is easy to verify that after this scan, the count array stores exactly the same information that would have been stored in offset.

The figure below shows an implementation of counting radix sort. The first loops of each pass implement this logic, assuming that the items are stored in array in and the result of a single pass is placed in array out. Initially, in represents arr and out represents the temporary array, buffer. After each pass, we switch the roles of in and out. If there is an even number of passes, then at the end, out is referencing arr, so the sort is complete. Otherwise, we have to move from the buffer back into arr.
23,626 | sorting /counting radix sort an array of strings assume all are all ascii assume all have same length *void countingradixsortvectro arrint stringlen const int buckets int arr size)vector buffern )vector *in &arrvector *out &bufferforint pos stringlen pos > --pos vector countbuckets )forint ++ ++count(*in) pos ]forint <buckets++ countb +countb ]forint ++ (*out)count(*in) pos ]+std::move(*in) )/swap in and out roles std::swapinout )/if odd number of passesin is bufferout is arrso move back ifstringlen = forint arr size)++ (*out) std::move(*in) )figure counting radix sort for fixed-length strings generallycounting radix sort is prefereable to using vectors to store bucketsbut it can suffer from poor locality (out is filled in non-sequentiallyand thussurprisinglyit is not always faster than using vector of vectors we can extend either version of radix sort to work with variable-length strings the basic algorithm is to first sort the strings by their length instead of looking at all the strings |
We can extend either version of radix sort to work with variable-length strings. The basic algorithm is to first sort the strings by their length. Instead of looking at all the strings on every pass, we can then look only at strings that we know are long enough. Since the string lengths are small numbers, the initial sort by length can be done by — bucket sort! The figure below shows this implementation, with vectors used to store the buckets. The words are first grouped into buckets by length and then placed back into the array; the subsequent character passes examine only those strings that have a character at the current position pos, by making use of the variable startingIndex. Except for that, each character pass is the same as in the fixed-length version.

    /*
     * Radix sort an array of Strings.
     * Assume all are all ASCII.
     * Assume all have length bounded by maxLen.
     */
    void radixSort( vector<string> & arr, int maxLen )
    {
        const int BUCKETS = 256;

        vector<vector<string>> wordsByLength( maxLen + 1 );
        vector<vector<string>> buckets( BUCKETS );

        for( string & s : arr )
            wordsByLength[ s.length( ) ].push_back( std::move( s ) );

        int idx = 0;
        for( auto & wordList : wordsByLength )
            for( string & s : wordList )
                arr[ idx++ ] = std::move( s );

        int startingIndex = arr.size( );
        for( int pos = maxLen - 1; pos >= 0; --pos )
        {
            startingIndex -= wordsByLength[ pos + 1 ].size( );

            for( int i = startingIndex; i < arr.size( ); ++i )
                buckets[ arr[ i ][ pos ] ].push_back( std::move( arr[ i ] ) );

            idx = startingIndex;
            for( auto & thisBucket : buckets )
            {
                for( string & s : thisBucket )
                    arr[ idx++ ] = std::move( s );

                thisBucket.clear( );
            }
        }
    }

    Figure: Radix sort for variable-length strings
The running time of this version of radix sort is linear in the total number of characters in all the strings: each character is placed into a bucket exactly once and is moved back into the array exactly once. Radix sort for strings will perform especially well when the characters in the strings are drawn from a reasonably small alphabet and when the strings are either relatively short or very similar. Because the $O(N\log N)$ comparison-based sorting algorithms will generally look at only a small number of characters in each string comparison, once the average string length starts getting large, radix sort's advantage is minimized or evaporates completely.

External Sorting

So far, all the algorithms we have examined require that the input fit into main memory. There are, however, applications where the input is much too large to fit into memory. This section discusses external sorting algorithms, which are designed to handle very large inputs.

Why We Need New Algorithms

Most of the internal sorting algorithms take advantage of the fact that memory is directly addressable. Shellsort compares elements a[i] and a[i − h_k] in one time unit. Heapsort compares elements a[i] and a[i * 2 + 1] in one time unit. Quicksort, with median-of-three partitioning, requires comparing a[left], a[center], and a[right] in a constant number of time units. If the input is on tape, then all these operations lose their efficiency, since elements on a tape can only be accessed sequentially. Even if the data are on disk, there is still a practical loss of efficiency because of the delay required to spin the disk and move the disk head. To see how slow external accesses really are, create a random file that is large, but not too big to fit in main memory. Read the file in and sort it using an efficient algorithm. The time it takes to read the input is certain to be significant compared to the time to sort the input, even though sorting is an $O(N\log N)$ operation and reading the input is only $O(N)$.

Model for External Sorting

The wide variety of mass storage devices makes external sorting much more device dependent than internal sorting. The algorithms that we will consider work on tapes, which are probably the most restrictive storage medium. Since access to an element on tape is done by winding the tape to the correct location, tapes can be efficiently accessed only in sequential order (in either direction). We will assume that we have at least three tape drives to perform the sorting. We need two drives to do an efficient sort; the third drive simplifies matters. If only one tape drive is present, then we are in trouble: any algorithm will require $\Omega(N^2)$ tape accesses.
The Simple Algorithm

The basic external sorting algorithm uses the merging algorithm from mergesort. Suppose we have four tapes, Ta1, Ta2, Tb1, Tb2, which are two input and two output tapes. Depending on the point in the algorithm, the a and b tapes are either input tapes or output tapes. Suppose the data are initially on Ta1. Suppose further that the internal memory can hold (and sort) M records at a time. A natural first step is to read M records at a time from the input tape, sort the records internally, and then write the sorted records alternately to Tb1 and Tb2. We will call each set of sorted records a run. When this is done, we rewind all the tapes. Suppose we have the same input as our example for Shellsort, and take M = 3; after the runs are constructed, the tapes will contain the data indicated in the figure (runs of three records alternating on Tb1 and Tb2, with the a tapes empty).

Now Tb1 and Tb2 contain a group of runs. We take the first run from each tape and merge them, writing the result, which is a run twice as long, onto Ta1. Recall that merging two sorted lists is simple; we need almost no memory, since the merge is performed as Tb1 and Tb2 advance. Then we take the next run from each tape, merge these, and write the result to Ta2. We continue this process, alternating between Ta1 and Ta2, until either Tb1 or Tb2 is empty. At this point either both are empty or there is one run left. In the latter case, we copy this run to the appropriate tape. We rewind all four tapes and repeat the same steps, this time using the a tapes as input and the b tapes as output. This gives runs twice as long again. We continue the process until we get one run of length N.

This algorithm will require $\lceil\log(N/M)\rceil$ passes, plus the initial run-constructing pass. For instance, if we have 10 million records of 128 bytes each and four megabytes of internal memory, then the first pass will create 320 runs; we would then need nine more passes to complete the sort. Our small example requires $\lceil\log(13/3)\rceil = 3$ more passes, which are shown in the following figures. (The figures show the contents of the four tapes after each of the three merge passes.)
Multiway Merge

If we have extra tapes, then we can expect to reduce the number of passes required to sort our input. We do this by extending the basic (two-way) merge to a k-way merge. Merging two runs is done by winding each input tape to the beginning of each run; then the smaller element is found, placed on an output tape, and the appropriate input tape is advanced. If there are k input tapes, this strategy works the same way, the only difference being that it is slightly more complicated to find the smallest of the k elements. We can find the smallest of these elements by using a priority queue. To obtain the next element to write on the output tape, we perform a deleteMin operation. The appropriate input tape is advanced, and if the run on that input tape is not yet completed, we insert the new element into the priority queue. Using the same example as before, we distribute the input runs onto the three tapes Ta1, Ta2, Ta3 and then need two more passes of three-way merging to complete the sort. (The figures show the run distribution and the two three-way merge passes.)
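The k-way merge step itself is easy to express with a priority queue. The sketch below (our own illustration, not the book's figure) merges k already-sorted in-memory runs into one output vector; with tapes or files, the vectors would be replaced by sequential read/write streams, but the deleteMin/insert pattern is the same.

    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>
    using namespace std;

    // Merge k sorted runs into a single sorted output using a priority queue.
    // Each heap entry records a value and the index of the run it came from.
    vector<int> kWayMerge( const vector<vector<int>> & runs )
    {
        using Entry = pair<int,int>;                        // { value, run index }
        priority_queue<Entry, vector<Entry>, greater<Entry>> pq;

        vector<int> nextPos( runs.size( ), 0 );             // position currently in the heap
        for( int r = 0; r < (int) runs.size( ); ++r )
            if( !runs[ r ].empty( ) )
                pq.push( { runs[ r ][ 0 ], r } );

        vector<int> output;
        while( !pq.empty( ) )
        {
            Entry e = pq.top( );                            // deleteMin
            pq.pop( );
            output.push_back( e.first );

            int r = e.second;                               // advance that run
            if( ++nextPos[ r ] < (int) runs[ r ].size( ) )
                pq.push( { runs[ r ][ nextPos[ r ] ], r } );
        }
        return output;
    }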
After the initial run construction phase, the number of passes required using k-way merging is $\lceil\log_k(N/M)\rceil$, because the runs get k times as large in each pass. For the example above, the formula is verified, since $\lceil\log_3(13/3)\rceil = 2$. If we have 10 tapes, then k = 5, and our large example from the previous section would require $\lceil\log_5 320\rceil = 4$ passes.

Polyphase Merge

The k-way merging strategy developed in the last section requires the use of 2k tapes. This could be prohibitive for some applications. It is possible to get by with only k + 1 tapes. As an example, we will show how to perform two-way merging using only three tapes.

Suppose we have three tapes, T1, T2, and T3, and an input file on T1 that will produce 34 runs. One option is to put 17 runs on each of T2 and T3. We could then merge this result onto T1, obtaining one tape with 17 runs. The problem is that since all the runs are now on one tape, we must put some of these runs on T2 to perform another merge. The logical way to do this is to copy the first eight runs from T1 onto T2 and then perform the merge. This has the effect of adding an extra half pass for every pass we do.

An alternative method is to split the original 34 runs unevenly. Suppose we put 21 runs on T2 and 13 runs on T3. We would then merge 13 runs onto T1 before T3 was empty. At this point, we could rewind T1 and T3, and merge T1, with 13 runs, and T2, which has 8 runs, onto T3. We could then merge 8 runs until T2 was empty, which would leave 5 runs on T1 and 8 runs on T3. We could then merge T1 and T3, and so on. The following table shows the number of runs on each tape after each pass:

          Run const.  After     After     After     After     After     After     After
                      T2+T3     T1+T2     T1+T3     T2+T3     T1+T2     T1+T3     T2+T3
    T1    0           13        5         0         3         1         0         1
    T2    21          8         0         5         2         0         1         0
    T3    13          0         8         3         0         2         1         0

The original distribution of runs makes a great deal of difference. For instance, if the 34 runs are split as 22 on T2 and 12 on T3, then after the first merge we obtain 12 runs on T1 and 10 runs on T2. After another merge, there are 10 runs on T3 and 2 runs on T1. At this point the going gets slow, because we can merge only two sets of runs before T1 is exhausted; then T3 has 8 runs and T2 has 2 runs. Again, we can merge only two sets of runs, obtaining T1 with 2 runs and T3 with 6 runs. After three more passes, one tape has two runs and the other tapes are empty. We must copy one run to another tape, and then we can finish the merge.

It turns out that the first distribution we gave is optimal. If the number of runs is a Fibonacci number $F_N$, then the best way to distribute them is to split them into the two Fibonacci numbers $F_{N-1}$ and $F_{N-2}$. Otherwise, it is necessary to pad the tape with dummy runs in order to get the number of runs up to a Fibonacci number. We leave the details of how to place the initial set of runs on the tapes as an exercise. We can extend this to a k-way merge, in which case we need kth-order Fibonacci numbers for the distribution, where the kth-order Fibonacci number is defined as

$$F^{(k)}(N) = F^{(k)}(N-1) + F^{(k)}(N-2) + \cdots + F^{(k)}(N-k)$$

with the appropriate initial conditions $F^{(k)}(N) = 0$ for $0 \le N \le k - 2$ and $F^{(k)}(k-1) = 1$.
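To make the two-way distribution rule concrete, here is a small sketch (ours, not the book's) that, given the number of runs the input will produce, finds the smallest Fibonacci number at least that large and reports the split into the two preceding Fibonacci numbers; runs beyond the actual count would be dummy runs.

    #include <iostream>
    using namespace std;

    // For two-way polyphase merging on three tapes: find the smallest Fibonacci
    // number >= numRuns and split it into the two preceding Fibonacci numbers.
    // The difference between the Fibonacci total and numRuns is padded with dummies.
    void polyphaseSplit( int numRuns )
    {
        int prev = 1, curr = 1;                  // consecutive Fibonacci numbers
        while( curr < numRuns )
        {
            int next = prev + curr;
            prev = curr;
            curr = next;
        }
        cout << "distribute " << prev << " runs on one tape, "
             << curr - prev << " on the other, "
             << curr - numRuns << " of them dummies\n";
    }

    int main( )
    {
        polyphaseSplit( 34 );    // 21 and 13, no dummies needed
        return 0;
    }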
23,632 | runs, and the other tapes are empty. We must copy one run to another tape, and then we can finish the merge.

It turns out that the first distribution we gave is optimal. If the number of runs is a Fibonacci number F_N, then the best way to distribute them is to split them into two Fibonacci numbers F_{N-1} and F_{N-2}. Otherwise, it is necessary to pad the tape with dummy runs in order to get the number of runs up to a Fibonacci number. We leave the details of how to place the initial set of runs on the tapes as an exercise.

We can extend this to a k-way merge, in which case we need kth-order Fibonacci numbers for the distribution, where the kth-order Fibonacci number is defined as

F^(k)(N) = F^(k)(N - 1) + F^(k)(N - 2) + · · · + F^(k)(N - k)

with the appropriate initial conditions F^(k)(N) = 0 for 0 ≤ N ≤ k - 2, and F^(k)(k - 1) = 1.

Replacement Selection

The last item we will consider is construction of the runs. The strategy we have used so far is the simplest possible: We read as many records as possible and sort them, writing the result to some tape. This seems like the best approach possible, until one realizes that as soon as the first record is written to an output tape, the memory it used becomes available for another record. If the next record on the input tape is larger than the record we have just output, then it can be included in the run.

Using this observation, we can give an algorithm for producing runs. This technique is commonly referred to as replacement selection. Initially, M records are read into memory and placed in a priority queue. We perform a deleteMin, writing the smallest (valued) record to the output tape. We then read the next record from the input tape. If it is larger than the record we have just written, we can add it to the priority queue; otherwise, it cannot go into the current run. Since the priority queue is smaller by one element, we can store this new element in the dead space of the priority queue until the run is completed and use the element for the next run. Storing an element in the dead space is similar to what is done in heapsort. We continue doing this until the size of the priority queue is zero, at which point the run is over. We start a new run by building a new priority queue, using all the elements in the dead space. The figure that follows shows the run construction for the small example we have been using; dead elements are indicated by an asterisk.

In this example, replacement selection produces only three runs, compared with the five runs obtained by sorting. Because of this, a three-way merge finishes in one pass instead of two. If the input is randomly distributed, replacement selection can be shown to produce runs of average length 2M. For our large example, we would then expect about half as many runs, so a five-way merge would still require four passes. In this case, we have not saved a pass, although we might if we get lucky and the number of runs is small enough to save a merge pass. Since external sorts take so long, every pass saved can make a significant difference in the running time.

As we have seen, it is possible for replacement selection to do no better than the standard algorithm. However, the input is frequently sorted or nearly sorted to start with, in which case replacement selection produces only a few very long runs. This kind of input is common for external sorts and makes replacement selection extremely valuable.
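Replacement selection is easy to prototype in memory before worrying about real tapes. The following is a minimal sketch, not an implementation from this chapter: it assumes the "tapes" are simply vectors of ints, uses a std::priority_queue as the min-heap, and, instead of reusing the heap's dead space as described above, parks records that cannot extend the current run in a separate buffer; the run boundaries produced are the same.

#include <cstddef>
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

// Sketch of replacement selection with the "tapes" replaced by vectors.
// Records that cannot extend the current run are parked in nextRunBuffer
// (standing in for the heap's dead space) and seed the following run.
std::vector<std::vector<int>> replacementSelection( const std::vector<int> & input, std::size_t m )
{
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap;   // min-heap of size <= m
    std::vector<int> nextRunBuffer;
    std::vector<std::vector<int>> runs;

    std::size_t next = 0;
    while( next < input.size( ) && heap.size( ) < m )      // read m records into memory
        heap.push( input[ next++ ] );

    while( !heap.empty( ) )
    {
        std::vector<int> run;
        while( !heap.empty( ) )
        {
            int smallest = heap.top( );
            heap.pop( );
            run.push_back( smallest );                     // "write" to the output tape
            if( next < input.size( ) )
            {
                int record = input[ next++ ];              // read the next input record
                if( record >= smallest )
                    heap.push( record );                   // can still join the current run
                else
                    nextRunBuffer.push_back( record );     // saved for the next run
            }
        }
        runs.push_back( std::move( run ) );                // current run is over
        for( int record : nextRunBuffer )                  // rebuild the heap for a new run
            heap.push( record );
        nextRunBuffer.clear( );
    }
    return runs;
}

int main( )
{
    std::vector<int> data{ 24, 13, 26, 1, 97, 15, 70, 3, 44, 20, 85, 2 };   // arbitrary sample data
    for( const auto & run : replacementSelection( data, 3 ) )
    {
        for( int x : run )
            std::cout << x << ' ';
        std::cout << '\n';
    }
}

Because records that arrive in increasing order keep extending the current run, a nearly sorted input yields very few, very long runs, which is exactly the behavior noted at the end of this section.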
23,633 | elements in heap array output next element read end of run rebuild heap end of run end of tape [ [ [ run run run rebuild heap figure example of run construction summary sorting is one of the oldest and most well-studied problems in computing for most general internal sorting applicationsan insertion sortshellsortmergesortor quicksort is the method of choice the decision regarding which to use depends on the size of the input and on the underlying environment insertion sort is appropriate for very small amounts of input shellsort is good choice for sorting moderate amounts of input with proper increment sequenceit gives excellent performance and uses only few lines of code mergesort has ( log nworst-case performance but requires additional space howeverthe number of comparisons that are used is nearly optimalbecause any algorithm that sorts by using only element comparisons must use at least log ( !)comparisons for some input sequence quicksort does not by itself provide this worst-case guarantee and is tricky to code howeverit has almost certain ( log nperformance and can be combined with heapsort to give an ( log nworst-case guarantee strings can be sorted in linear time using radix sortand this may be practical alternative to comparison-based sorts in some instances exercises sort the sequence using insertion sort what is the running time of insertion sort if all elements are equal |
23,634 | sorting suppose we exchange elements [iand [ + ]which were originally out of order prove that at least and at most inversions are removed show the result of running shellsort on the input using the increments { what is the running time of shellsort using the two-increment sequence { } show that for any nthere exists three-increment sequence such that shellsort runs in ( / time show that for any nthere exists six-increment sequence such that shellsort runs in ( / time prove that the running time of shellsort is ( using increments of the form cc ci for any integer prove that for these incrementsthe average running time is ( / prove that if -sorted file is then -sortedit remains -sorted prove that the running time of shellsortusing the increment sequence suggested by hibbardis ( / in the worst case (hintyou can prove the bound by considering the special case of what shellsort does when all elements are either or set [ if is expressible as linear combination of ht ht- / + and otherwise determine the running time of shellsort for sorted input reverse-ordered input do either of the following modifications to the shellsort routine coded in figure affect the worst-case running timea before line subtract one from gap if it is even before line add one to gap if it is even show how heapsort processes the input what is the running time of heapsort for presorted inputshow that there are inputs that force every percolatedown in heapsort to go all the way to leaf (hintwork backward rewrite heapsort so that it sorts only items that are in the range low to highwhich are passed as additional parameters sort using mergesort how would you implement mergesort without using recursion determine the running time of mergesort for sorted input reverse-ordered input random input in the analysis of mergesortconstants have been disregarded prove that the number of comparisons used in the worst case by mergesort is log log |
23,635 | sort using quicksort with median-of-three partitioning and cutoff of using the quicksort implementation in this determine the running time of quicksort for sorted input reverse-ordered input random input repeat exercise when the pivot is chosen as the first element the larger of the first two distinct elements random element the average of all elements in the set for the quicksort implementation in this what is the running time when all keys are equalb suppose we change the partitioning strategy so that neither nor stops when an element with the same key as the pivot is found what fixes need to be made in the code to guarantee that quicksort worksand what is the running time when all keys are equalc suppose we change the partitioning strategy so that stops at an element with the same key as the pivotbut does not stop in similar case what fixes need to be made in the code to guarantee that quicksort worksand when all keys are equalwhat is the running time of quicksort suppose we choose the element in the middle position of the array as pivot does this make it unlikely that quicksort will require quadratic timeconstruct permutation of elements that is as bad as possible for quicksort using median-of-three partitioning and cutoff of the quicksort in the text uses two recursive calls remove one of the calls as followsa rewrite the code so that the second recursive call is unconditionally the last line in quicksort do this by reversing the if/else and returning after the call to insertionsort remove the tail recursion by writing while loop and altering left continuing from exercise after part ( ) perform test so that the smaller subarray is processed by the first recursive callwhile the larger subarray is processed by the second recursive call remove the tail recursion by writing while loop and altering left or rightas necessary prove that the number of recursive calls is logarithmic in the worst case suppose the recursive quicksort receives an int parameterdepthfrom the driver that is initially approximately log modify the recursive quicksort to call heapsort on its current subarray if the level of recursion has reached depth (hintdecrement depth as you make recursive callswhen it is switch to heapsort |
23,636 | sorting prove that the worst-case running time of this algorithm is ( log nc conduct experiments to determine how often heapsort gets called implement this technique in conjunction with tail-recursion removal in exercise explain why the technique in exercise would no longer be needed when implementing quicksortif the array contains lots of duplicatesit may be better to perform three-way partition (into elements less thanequal toand greater than the pivotto make smaller recursive calls assume three-way comparisons give an algorithm that performs three-way in-place partition of an -element subarray using only three-way comparisons if there are items equal to the pivotyou may use additional comparable swapsabove and beyond the two-way partitioning algorithm (hintas and move toward each othermaintain five groups of elements as shown below)equal small unknown large equal prove that using the algorithm abovesorting an -element array that contains only different valuestakes (dntime write program to implement the selection algorithm solve the following recurrencet( ( / ) - = ( )cnt( sorting algorithm is stable if elements with equal elements are left in the same order as they occur in the input which of the sorting algorithms in this are stable and which are notwhy suppose you are given sorted list of elements followed by (nrandomly ordered elements how would you sort the entire list if (no( ) (no(log ) (non) how large can (nbe for the entire list still to be sortable in (ntime prove that any algorithm that finds an element in sorted list of elements requires (log ncomparisons using stirling' formulan( / ) pngive precise estimate for log( ! in how many ways can two sorted arrays of elements be mergedb give nontrivial lower bound on the number of comparisons required to merge two sorted lists of elements by taking the logarithm of your answer in part ( prove that merging two sorted arrays of items requires at least comparisons you must show that if two elements in the merged list are consecutive and from different liststhen they must be compared consider the following algorithm for sorting six numbersr sort the first three numbers using algorithm sort the second three numbers using algorithm merge the two sorted groups using algorithm |
23,637 | show that this algorithm is suboptimalregardless of the choices for algorithms aband write program that reads points in plane and outputs any group of four or more colinear points ( points on the same linethe obvious brute-force algorithm requires ( time howeverthere is better algorithm that makes use of sorting and runs in ( log ntime show that the two smallest elements among can be found in log comparisons the following divide-and-conquer algorithm is proposed for finding the simultaneous maximum and minimumif there is one itemit is the maximum and minimumand if there are two itemsthen compare themand in one comparison you can find the maximum and minimum otherwisesplit the input into two halvesdivided as evenly as possibly (if is oddone of the two halves will have one more element than the otherrecursively find the maximum and minimum of each halfand then in two additional comparisons produce the maximum and minimum for the entire problem suppose is power of what is the exact number of comparisons used by this algorithmb suppose is of the form what is the exact number of comparisons used by this algorithmc modify the algorithm as followswhen is evenbut not divisible by foursplit the input into sizes of / and / what is the exact number of comparisons used by this algorithmsuppose we want to partition items into equal-sized groups of size /gsuch that the smallest / items are in group the next smallest / items are in group and so on the groups themselves do not have to be sorted for simplicityyou may assume that and are powers of two give an ( log galgorithm to solve this problem prove an ( log glower bound to solve this problem using comparison-based algorithms give linear-time algorithm to sort fractionseach of whose numerators and denominators are integers between and suppose arrays and are both sorted and both contain elements give an (log nalgorithm to find the median of suppose you have an array of elements containing only two distinct keystrue and false give an (nalgorithm to rearrange the list so that all false elements precede the true elements you may use only constant extra space suppose you have an array of elementscontaining three distinct keystruefalseand maybe give an (nalgorithm to rearrange the list so that all false elements precede maybe elementswhich in turn precede true elements you may use only constant extra space prove that any comparison-based algorithm to sort elements requires comparisons give an algorithm to sort elements in comparisons |
23,638 | sorting prove that comparisons are required to sort elements using any comparisonbased algorithm give an algorithm to sort elements with comparisons write an efficient version of shellsort and compare performance when the following increment sequences are useda shell' original sequence hibbard' increments knuth' incrementshi ( hk+ and hk (with if gonnet' incrementsht sedgewick' increments implement an optimized version of quicksort and experiment with combinations of the followinga pivotfirst elementmiddle elementrandom elementmedian of threemedian of five cutoff values from to write routine that reads in two alphabetized files and merges them togetherforming thirdalphabetizedfile suppose we implement the median-of-three routine as followsfind the median of [left] [center] [right]and swap it with [rightproceed with the normal partitioning step starting at left and at right- (instead of left+ and right- suppose the input is for this inputwhat is the running time of this version of quicksortb suppose the input is in reverse order for this inputwhat is the running time of this version of quicksortprove that any comparison-based sorting algorithm requires ( log ncomparisons on average we are given an array that contains numbers we want to determine if there are two numbers whose sum equals given number for instanceif the input is and is then the answer is yes ( and number may be used twice do the followinga give an ( algorithm to solve this problem give an ( log nalgorithm to solve this problem (hintsort the items first after that is doneyou can solve the problem in linear time code both solutions and compare the running times of your algorithms repeat exercise for four numbers try to design an ( log nalgorithm (hintcompute all possible sums of two elements sort these possible sums then proceed as in exercise repeat exercise for three numbers try to design an ( algorithm consider the following strategy for percolatedownwe have hole at node the normal routine is to compare ' children and then move the child up to if it is larger (in the case of (max)heapthan the element we are trying to placethereby pushing the hole downwe stop when it is safe to place the new element in the hole the alternative strategy is to move elements up and the hole down as far as |
23,639 | possiblewithout testing whether the new cell can be inserted this would place the new cell in leaf and probably violate the heap orderto fix the heap orderpercolate the new cell up in the normal manner write routine to include this ideaand compare the running time with standard implementation of heapsort propose an algorithm to sort large file using only two tapes show that lower bound of !/ on the number of heaps is implied by the fact that buildheap uses at most comparisons use stirling' formula to expand this bound is an -by- matrix in which the entries in each rows are in increasing order and the entries in each column are in increasing order (reading top to bottomconsider the problem of determining if is in using three-way comparisons ( one comparison of with [ ][jtells you either that is less thanequal toor greater than [ ][ ] give an algorithm that uses at most comparisons prove that any algorithm must use at least comparisons there is prize hidden in boxthe value of the prize is positive integer between and nand you are given to win the prizeyou have to guess its value your goal is to do it in as few guesses as possiblehoweveramong those guessesyou may only make at most guesses that are too high the value will be specified at the start of the gameand if you make more than guesses that are too highyou lose sofor exampleif you then can win in guesses by simply guessing the sequence suppose log nwhat strategy minimizes the number of guessesb suppose show that you can always win in / guesses suppose show that any algorithm that wins the prize must use / guesses give an algorithm and matching lower bound for any constant references knuth' book [ is comprehensive reference for sorting gonnet and baeza-yates [ has some more resultsas well as huge bibliography the original paper detailing shellsort is [ the paper by hibbard [ suggested the use of the increments and tightened the code by avoiding swaps theorem is from [ pratt' lower boundwhich uses more complex method than that suggested in the textcan be found in [ improved increment sequences and upper bounds appear in [ ][ ]and [ ]matching lower bounds have been shown in [ it has been shown that no increment sequence gives an ( log nworst-case running time [ the average-case running time for shellsort is still unresolved yao [ has performed an extremely complex analysis for the three-increment case the result has yet to be extended to more incrementsbut has been slightly improved [ the paper by jiangliand vityani [ has shown an ( + / lower bound on the average-case running time of -pass shellsort experiments with various increment sequences appear in [ |
23,640 | sorting heapsort was invented by williams [ ]floyd [ provided the linear-time algorithm for heap construction theorem is from [ an exact average-case analysis of mergesort has been described in [ an algorithm to perform merging in linear time without extra space is described in [ quicksort is from hoare [ this paper analyzes the basic algorithmdescribes most of the improvementsand includes the selection algorithm detailed analysis and empirical study was the subject of sedgewick' dissertation [ many of the important results appear in the three papers [ ][ ]and [ [ provides detailed implementation with some additional improvementsand points out that older implementations of the unix qsort library routine are easily driven to quadratic behavior exercise is from [ decision trees and sorting optimality are discussed in ford and johnson [ this paper also provides an algorithm that almost meets the lower bound in terms of number of comparisons (but not other operationsthis algorithm was eventually shown to be slightly suboptimal by manacher [ the selection lower bounds obtained in theorem are from [ the lower bound for finding the maximum and minimum simultaneously is from pohl [ the current best lower bound for finding the median is slightly above comparisons due to dor and zwick [ ]they also have the best upper boundwhich is roughly comparisons [ external sorting is covered in detail in [ stable sortingdescribed in exercise has been addressed by horvath [ bentley and mcelroy"engineering sort function,software--practice and experience ( ) - dor and zwick"selecting the median,siam journal on computing ( ) dor and zwick"median selection requires ( ) comparisons,siam journal on discrete math ( ) - floyd"algorithm treesort ,communications of the acm ( ) ford and johnson" tournament problem,american mathematics monthly ( ) - fussenegger and gabow" counting approach to lower bounds for selection problems,journal of the acm ( ) - golin and sedgewick"exact analysis of mergesort,fourth siam conference on discrete mathematics gonnet and baeza-yateshandbook of algorithms and data structures ed addison-wesleyreadingmass hibbard"an empirical study of minimal storage sorting,communications of the acm ( ) - hoare"quicksort,computer journal ( ) - horvath"stable sorting in asymptotically optimal time and extra space,journal of the acm ( ) - huang and langston"practical in-place merging,communications of the acm ( ) - incerpi and sedgewick"improved upper bounds on shellsort,journal of computer and system sciences ( ) - |
23,641 | janson and knuth"shellsort with three increments,random structures and algorithms ( ) - jiangm liand vitanyi" lower bound on the average-case complexity of shellsort,journal of the acm ( ) - knuththe art of computer programming volume sorting and searching ed addison-wesleyreadingmass manacher"the ford-johnson sorting algorithm is not optimal,journal of the acm ( ) - musser"introspective sorting and selection algorithms,software--practice and experience ( ) - papernov and stasevich" method of information sorting in computer memories,problems of information transmission ( ) - plaxtonb poonenand suel"improved lower bounds for shellsort,proceedings of the thirty-third annual symposium on the foundations of computer science ( ) - pohl" sorting problem and its complexity,communications of the acm ( ) - prattshellsort and sorting networksgarland publishingnew york (originally presented as the author' ph thesisstanford university schaffer and sedgewick"the analysis of heapsort,journal of algorithms ( ) - sedgewick"quicksort with equal keys,siam journal on computing ( ) - sedgewick"the analysis of quicksort programs,acta informatica ( ) - sedgewick"implementing quicksort programs,communications of the acm ( ) - sedgewickquicksortgarland publishingnew york (originally presented as the author' ph thesisstanford university sedgewick" new upper bound for shellsort,journal of algorithms ( ) - shell" high-speed sorting procedure,communications of the acm ( ) - weiss"empirical results on the running time of shellsort,computer journal ( ) - weiss and sedgewick"more on shellsort increment sequences,information processing letters ( ) - weiss and sedgewick"tight lower bounds for shellsort,journal of algorithms ( ) - williams"algorithm heapsort,communications of the acm ( ) - yao"an analysis of (hk shellsort,journal of algorithms ( ) - |
23,643 | The Disjoint Sets Class

In this chapter we describe an efficient data structure to solve the equivalence problem. The data structure is simple to implement: Each routine requires only a few lines of code, and a simple array can be used. The implementation is also extremely fast, requiring constant average time per operation. This data structure is also very interesting from a theoretical point of view, because its analysis is extremely difficult; the functional form of the worst case is unlike any we have yet seen. For the disjoint sets data structure, we will:

- Show how it can be implemented with minimal coding effort.
- Greatly increase its speed, using just two simple observations.
- Analyze the running time of a fast implementation.
- See a simple application.

Equivalence Relations

A relation R is defined on a set S if for every pair of elements (a, b), a, b ∈ S, a R b is either true or false. If a R b is true, then we say that a is related to b.

An equivalence relation is a relation R that satisfies three properties:

- (Reflexive) a R a, for all a ∈ S.
- (Symmetric) a R b if and only if b R a.
- (Transitive) a R b and b R c implies that a R c.

We will consider several examples.

The ≤ relationship is not an equivalence relationship. Although it is reflexive, since a ≤ a, and transitive, since a ≤ b and b ≤ c implies a ≤ c, it is not symmetric, since a ≤ b does not imply b ≤ a.

Electrical connectivity, where all connections are by metal wires, is an equivalence relation. The relation is clearly reflexive, as any component is connected to itself. If a is electrically connected to b, then b must be electrically connected to a, so the relation is symmetric. Finally, if a is connected to b and b is connected to c, then a is connected to c. Thus electrical connectivity is an equivalence relation.

Two cities are related if they are in the same country. It is easily verified that this is an equivalence relation. Suppose town a is related to b if it is possible to travel from a to b by taking roads. This relation is an equivalence relation if all the roads are two-way.
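As a concrete, if naive, illustration of the three properties, here is a small sketch — not part of the disjoint sets code — that checks whether a relation stored explicitly as a boolean matrix on the elements {0, 1, ..., n-1} is an equivalence relation.

#include <cstddef>
#include <vector>

// Check the three defining properties of an equivalence relation for a
// relation given explicitly: rel[a][b] is true exactly when a R b.
bool isEquivalenceRelation( const std::vector<std::vector<bool>> & rel )
{
    const std::size_t n = rel.size( );
    for( std::size_t a = 0; a < n; ++a )
        if( !rel[ a ][ a ] )                               // reflexive: a R a
            return false;
    for( std::size_t a = 0; a < n; ++a )
        for( std::size_t b = 0; b < n; ++b )
            if( rel[ a ][ b ] != rel[ b ][ a ] )           // symmetric: a R b iff b R a
                return false;
    for( std::size_t a = 0; a < n; ++a )
        for( std::size_t b = 0; b < n; ++b )
            for( std::size_t c = 0; c < n; ++c )
                if( rel[ a ][ b ] && rel[ b ][ c ] && !rel[ a ][ c ] )   // transitive
                    return false;
    return true;
}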
23,644 | the disjoint sets class the dynamic equivalence problem given an equivalence relation ~the natural problem is to decidefor any and bif if the relation is stored as two-dimensional array of boolean variablesthenof coursethis can be done in constant time the problem is that the relation is usually not explicitlybut rather implicitlydefined as an examplesuppose the equivalence relation is defined over the five-element set { then there are pairs of elementseach of which is either related or not howeverthe information implies that all pairs are related we would like to be able to infer this quickly the equivalence class of an element is the subset of that contains all the elements that are related to notice that the equivalence classes form partition of severy member of appears in exactly one equivalence class to decide if bwe need only to check whether and are in the same equivalence class this provides our strategy to solve the equivalence problem the input is initially collection of setseach with one element this initial representation is that all relations (except reflexive relationsare false each set has different elementso that si sj this makes the sets disjoint there are two permissible operations the first is findwhich returns the name of the set (that isthe equivalence classcontaining given element the second operation adds relations if we want to add the relation bthen we first see if and are already related this is done by performing finds on both and and checking whether they are in the same equivalence class if they are notthen we apply union this operation merges the two equivalence classes containing and into new equivalence class from set point of viewthe result of is to create new set sk si sj destroying the originals and preserving the disjointness of all the sets the algorithm to do this is frequently known as the disjoint set union/find algorithm for this reason this algorithm is dynamic becauseduring the course of the algorithmthe sets can change via the union operation the algorithm must also operate onlinewhen find is performedit must give an answer before continuing another possibility would be an offline algorithm such an algorithm would be allowed to see the entire sequence of unions and finds the answer it provides for each find must still be consistent with all the unions that were performed up until the findbut the algorithm can give all its answers after it has seen all the questions the difference is similar to taking written exam (which is generally offline--you only have to give the answers before time expiresor an oral exam (which is onlinebecause you must answer the current question before proceeding to the next questionnotice that we do not perform any operations comparing the relative values of elements but merely require knowledge of their location for this reasonwe can assume that all the elements have been numbered sequentially from to and that the numbering can union is (little-usedreserved word in +we use it throughout in describing the union/find algorithmbut when we write codethe member function will be named unionsets |
23,645 | be determined easily by some hashing scheme thusinitially we have si {ifor through our second observation is that the name of the set returned by find is actually fairly arbitrary all that really matters is that find( )==find(bis true if and only if and are in the same set these operations are important in many graph theory problems and also in compilers which process equivalence (or typedeclarations we will see an application later there are two strategies to solve this problem one ensures that the find instruction can be executed in constant worst-case timeand the other ensures that the union instruction can be executed in constant worst-case time it has recently been shown that both cannot be done simultaneously in constant worst-case time we will now briefly discuss the first approach for the find operation to be fastwe could maintainin an arraythe name of the equivalence class for each element then find is just simple ( lookup suppose we want to perform union( ,bsuppose that is in equivalence class and is in equivalence class then we scan down the arraychanging all is to unfortunatelythis scan takes (nthusa sequence of unions (the maximumsince then everything is in one setwould take ( time if there are ( find operationsthis performance is finesince the total running time would then amount to ( for each union or find operation over the course of the algorithm if there are fewer findsthis bound is not acceptable one idea is to keep all the elements that are in the same equivalence class in linked list this saves time when updatingbecause we do not have to search through the entire array this by itself does not reduce the asymptotic running timebecause it is still possible to perform ( equivalence class updates over the course of the algorithm if we also keep track of the size of each equivalence classand when performing unions we change the name of the smaller equivalence class to the largerthen the total time spent for merges is ( log nthe reason for this is that each element can have its equivalence class changed at most log timessince every time its class is changedits new equivalence class is at least twice as large as its old using this strategyany sequence of finds and up to unions takes at most ( log ntime in the remainder of this we will examine solution to the union/find problem that makes unions easy but finds hard even sothe running time for any sequence of at most finds and up to unions will be only little more than ( basic data structure recall that the problem does not require that find operation return any specific namejust that finds on two elements return the same answer if and only if they are in the same set one idea might be to use tree to represent each setsince each element in tree has the same root thusthe root can be used to name the set we will represent each set by tree (recall that collection of trees is known as forest initiallyeach set contains one element the trees we will use are not necessarily binary treesbut their representation is this reflects the fact that array indices start at |
23,646 | the disjoint sets class figure eight elementsinitially in different sets easybecause the only information we will need is parent link the name of set is given by the node at the root since only the name of the parent is requiredwe can assume that this tree is stored implicitly in an arrayeach entry [iin the array represents the parent of element if is rootthen [ - in the forest in figure [ - for < as with binary heapswe will draw the trees explicitlywith the understanding that an array is being used figure shows the explicit representation we will draw the root' parent link vertically for convenience to perform union of two setswe merge the two trees by making the parent link of one tree' root link to the root node of the other tree it should be clear that this operation takes constant time figures and represent the forest after each of union( , )union( , )union( , )where we have adopted the convention that the new root after the union( ,yis the implicit representation of the last forest is shown in figure find(xon element is performed by returning the root of the tree containing the time to perform this operation is proportional to the depth of the node representing xassumingof coursethat we can find the node representing in constant time using the strategy aboveit is possible to create tree of depth so the worst-case running figure after union( , figure after union( , |
23,647 | figure after union( , - - - - - figure implicit representation of previous tree time of find is (ntypicallythe running time is computed for sequence of intermixed instructions in this casem consecutive operations could take (mntime in the worst case the code in figures through represents an implementation of the basic algorithmassuming that error checks have already been performed in our routineunions are performed on the roots of the trees sometimes the operation is performed by passing any two elements and having the union perform two finds to determine the roots in previously seen data structuresfind has always been an accessorand thus const member function section describes mutator version that is more efficient both versions can class disjsets publicexplicit disjsetsint numelements )int findint constint findint )void unionsetsint root int root )privatevector }figure disjoint sets class interface |
23,648 | the disjoint sets class /*construct the disjoint sets object numelements is the initial number of disjoint sets *disjsets::disjsetsint numelements snumelements figure disjoint sets initialization routine /*union two disjoint sets for simplicitywe assume root and root are distinct and represent set names root is the root of set root is the root of set *void disjsets::unionsetsint root int root sroot root figure union (not the best way /*perform find error checks omitted again for simplicity return the set containing *int disjsets::findint const ifsx return xelse return findsx )figure simple disjoint sets find algorithm be supported simultaneously the mutator is always calledunless the controlling object is unmodifiable the average-case analysis is quite hard to do the least of the problems is that the answer depends on how to define average (with respect to the union operationfor instancein the forest in figure we could say that since there are five treesthere are * equally likely results of the next union (as any two different trees can be unioned |
23,649 | of coursethe implication of this model is that there is only chance that the next union will involve the large tree another model might say that all unions between any two elements in different trees are equally likelyso larger tree is more likely to be involved in the chance that the large next union than smaller tree in the example abovethere is an tree is involved in the next unionsince (ignoring symmetriesthere are ways in which to merge two elements in { }and ways to merge an element in { with an element in { there are still more models and no general agreement on which is the best the average running time depends on the model( )( log )and (mnbounds have actually been shown for three different modelsalthough the latter bound is thought to be more realistic quadratic running time for sequence of operations is generally unacceptable fortunatelythere are several ways of easily ensuring that this running time does not occur smart union algorithms the unions above were performed rather arbitrarilyby making the second tree subtree of the first simple improvement is always to make the smaller tree subtree of the largerbreaking ties by any methodwe call this approach union-by-size the three unions in the preceding example were all tiesand so we can consider that they were performed by size if the next operation were union( , )then the forest in figure would form had the size heuristic not been useda deeper tree would have been formed (fig we can prove that if unions are done by sizethe depth of any node is never more than log to see thisnote that node is initially at depth when its depth increases as result of unionit is placed in tree that is at least twice as large as before thusits depth can be increased at most log times (we used this argument in the quick-find algorithm at the end of section this implies that the running time for find operation is (log )and sequence of operations takes ( log nthe tree in figure shows the worst tree possible after unions and is obtained if all unions are between equal-sized trees (the worst-case trees are binomial treesdiscussed in to implement this strategywe need to keep track of the size of each tree since we are really just using an arraywe can have the array entry of each root contain the negative of figure result of union-by-size |
23,650 | the disjoint sets class figure result of an arbitrary union figure worst-case tree for the size of its tree thusinitially the array representation of the tree is all - when union is performedcheck the sizesthe new size is the sum of the old thusunion-by-size is not at all difficult to implement and requires no extra space it is also faston average for virtually all reasonable modelsit has been shown that sequence of operations requires (maverage time if union-by-size is used this is because when random unions are performedgenerally very small (usually one-elementsets are merged with large sets throughout the algorithm an alternative implementationwhich also guarantees that all the trees will have depth at most (log )is union-by-height we keep track of the heightinstead of the sizeof each tree and perform unions by making the shallow tree subtree of the deeper tree this is an easy algorithmsince the height of tree increases only when two equally deep trees are joined (and then the height goes up by onethusunion-by-height is trivial modification of union-by-size since heights of zero would not be negativewe actually store the negative of heightminus an additional initiallyall entries are - figure shows forest and its implicit representation for both union-by-size and union-by-height the code in figure implements union-by-height |
23,651 | - - - - - - - - figure forest with implicit representation for union-by-size and union-by-height /*union two disjoint sets for simplicitywe assume root and root are distinct and represent set names root is the root of set root is the root of set *void disjsets::unionsetsint root int root ifsroot sroot /root is deeper sroot root /make root new root else ifsroot =sroot --sroot ]/update height if same sroot root /make root new root figure code for union-by-height (rank |
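The union-by-size alternative described above is just as short as the union-by-height routine. The sketch below shows one way it might look; the member function name (and its declaration in the class interface) is mine, not the book's. It assumes, as stated earlier, that every entry of s starts at -1 and that a root's entry holds the negative of its tree's size.

/**
 * Union two disjoint sets by size (a sketch; the figure above shows union-by-height).
 * Assumes each root's array entry stores the negative of its tree's size.
 * root1 and root2 are distinct and represent set names.
 */
void DisjSets::unionSetsBySize( int root1, int root2 )
{
    if( s[ root2 ] < s[ root1 ] )        // root2's tree is larger (entry is more negative)
    {
        s[ root2 ] += s[ root1 ];        // new size is the sum of the old sizes
        s[ root1 ] = root2;              // make root2 the new root
    }
    else
    {
        s[ root1 ] += s[ root2 ];        // new size is the sum of the old sizes
        s[ root2 ] = root1;              // make root1 the new root
    }
}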
23,652 | the disjoint sets class path compression the union/find algorithmas described so faris quite acceptable for most cases it is very simple and linear on average for sequence of instructions (under all modelshoweverthe worst case of ( log ncan occur fairly easily and naturally for instanceif we put all the sets on queue and repeatedly dequeue the first two sets and enqueue the unionthe worst case occurs if there are many more finds than unionsthis running time is worse than that of the quick-find algorithm moreoverit should be clear that there are probably no more improvements possible for the union algorithm this is based on the observation that any method to perform the unions will yield the same worst-case treessince it must break ties arbitrarily thereforethe only way to speed the algorithm upwithout reworking the data structure entirelyis to do something clever on the find operation the clever operation is known as path compression path compression is performed during find operation and is independent of the strategy used to perform unions suppose the operation is find(xthen the effect of path compression is that every node on the path from to the root has its parent changed to the root figure shows the effect of path compression after find( on the generic worst tree of figure the effect of path compression is that with an extra two link changesnodes and are now one position closer to the root and nodes and are now two positions closer thusthe fast future accesses on these nodes will pay (we hopefor the extra work to do the path compression as the code in figure showspath compression is trivial change to the basic find algorithm the only change to the find routine (besides the fact that it is no longer const member functionis that [xis made equal to the value returned by findthusafter the root of the set is found recursivelyx' parent link references it this occurs recursively to every node on the path to the rootso this implements path compression when unions are done arbitrarilypath compression is good ideabecause there is an abundance of deep nodes and these are brought near the root by path compression it has been proven that when path compression is done in this casea sequence of figure an example of path compression |
23,653 | /*perform find with path compression error checks omitted again for simplicity return the set containing *int disjsets::findint ifsx return xelse return sx findsx )figure code for disjoint sets find with path compression operations requires at most ( log ntime it is still an open problem to determine what the average-case behavior is in this situation path compression is perfectly compatible with union-by-sizeand thus both routines can be implemented at the same time since doing union-by-size by itself is expected to execute sequence of operations in linear timeit is not clear that the extra pass involved in path compression is worthwhile on average indeedthis problem is still open howeveras we shall see laterthe combination of path compression and smart union rule guarantees very efficient algorithm in all cases path compression is not entirely compatible with union-by-heightbecause path compression can change the heights of the trees it is not at all clear how to recompute them efficiently the answer is do notthen the heights stored for each tree become estimated heights (sometimes known as ranks)but it turns out that union-by-rank (which is what this has now becomeis just as efficient in theory as union-by-size furthermoreheights are updated less often than sizes as with union-by-sizeit is not clear whether path compression is worthwhile on average what we will show in the next section is that with either union heuristicpath compression significantly reduces the worst-case running time worst case for union-by-rank and path compression when both heuristics are usedthe algorithm is almost linear in the worst case specificallythe time required in the worst case is (ma(mn)(provided > )where (mnis an incredibly slowly growing function that for all intents and purposes is at most for any problem instance howevera(mnis not constantso the running time is not linear in the remainder of this sectionwe first look at some very slow-growing functionsand then in sections to we establish bound on the worst case for sequence of at most unions and find operations in an -element universe in which union is by rank and finds use path compression the same bound holds if union-by-rank is replaced with union-by-size |
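Before turning to the analysis, it may help to see the two heuristics working together. The following is a condensed, self-contained restatement of the chapter's routines — not a new algorithm, just union-by-rank and the path-compressing find collected into one place — together with a small driver showing how a client performs unions and connectivity queries.

#include <iostream>
#include <vector>

// Condensed restatement of the chapter's DisjSets class, combining
// union-by-rank with find using path compression.
class DisjSets
{
  public:
    explicit DisjSets( int numElements ) : s( numElements, -1 ) { }

    int find( int x )
    {
        if( s[ x ] < 0 )
            return x;
        else
            return s[ x ] = find( s[ x ] );    // path compression
    }

    void unionSets( int root1, int root2 )     // union by rank
    {
        if( s[ root2 ] < s[ root1 ] )          // root2 is deeper
            s[ root1 ] = root2;                // make root2 the new root
        else
        {
            if( s[ root1 ] == s[ root2 ] )
                --s[ root1 ];                  // update rank if same
            s[ root2 ] = root1;                // make root1 the new root
        }
    }

  private:
    std::vector<int> s;
};

int main( )
{
    DisjSets ds( 8 );
    int pairs[ ][ 2 ] = { { 4, 5 }, { 6, 7 }, { 4, 6 }, { 3, 4 } };
    for( auto & p : pairs )
    {
        int root1 = ds.find( p[ 0 ] ), root2 = ds.find( p[ 1 ] );
        if( root1 != root2 )                   // only union distinct sets
            ds.unionSets( root1, root2 );
    }
    std::cout << ( ds.find( 3 ) == ds.find( 7 ) ) << '\n';   // prints 1: connected
    std::cout << ( ds.find( 0 ) == ds.find( 5 ) ) << '\n';   // prints 0: not connected
}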
23,654 | Slowly Growing Functions

Consider the recurrence

T(N) = 0                  if N ≤ 1
T(N) = T(f(N)) + 1        if N > 1

In this equation, T(N) represents the number of times, starting at N, that we must iteratively apply f(N) until we reach 1 (or less). We assume that f(N) is a nicely defined function that reduces N. Call the solution to the equation f*(N).

We have already encountered this recurrence when analyzing binary search. There, f(N) = N/2; each step halves N. We know that this can happen at most log N times until N reaches 1; hence we have f*(N) = log N (we ignore low-order terms, etc.). Observe that in this case, f*(N) is much less than f(N).

The figure below shows the solution for f*(N) for various f(N). In our case, we are most interested in f(N) = log N. The solution f*(N) = log* N is known as the iterated logarithm. The iterated logarithm, which represents the number of times the logarithm needs to be iteratively applied until we reach one, is a very slowly growing function. Observe that log* 2 = 1, log* 4 = 2, log* 16 = 3, log* 65536 = 4, and log* 2^65536 = 5, but keep in mind that 2^65536 is a 20,000-digit number. So while log* N is a growing function, for all intents and purposes, it is at most 5. But we can still produce even more slowly growing functions. For instance, if f(N) = log* N, then f*(N) = log** N. In fact, we can add stars at will to produce functions that grow slower and slower.

An Analysis by Recursive Decomposition

We now establish a tight bound on the running time of a sequence of M = Ω(N) union/find operations. The unions and finds may occur in any order, but unions are done by rank and finds are done with path compression.

f(N)        f*(N)
N - 1       N - 1
N - 2       N / 2
N - c       N / c
N / 2       log N
N / c       log_c N
√N          log log N
log N       log* N
log* N      log** N
log** N     log*** N

Figure: Different values of the iterated function
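To get a concrete feel for how slowly the iterated logarithm grows, here is a short, self-contained sketch — not from the book — that computes log* N by repeatedly applying log2 until the value drops to 1 or below.

#include <cmath>
#include <cstdio>

// Iterated logarithm: the number of times log2 must be applied to n
// before the result is <= 1. Double precision is fine for illustrating
// the growth rate on modest inputs.
int iteratedLog( double n )
{
    int count = 0;
    while( n > 1.0 )
    {
        n = std::log2( n );
        ++count;
    }
    return count;
}

int main( )
{
    double values[ ] = { 2, 4, 16, 65536, 1e18 };
    for( double v : values )
        std::printf( "log* %.0f = %d\n", v, iteratedLog( v ) );
    // Prints 1, 2, 3, 4 for the first four values; even 10^18
    // (roughly 2^60) still gives only 5.
}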
23,655 | figure large disjoint set tree (numbers below nodes are rankswe begin by establishing two lemmas concerning the properties of the ranks figure gives visual picture of both lemmas lemma when executing sequence of union instructionsa node of rank must have at least one child of rank proof by induction the basis is clearly true when node grows from rank to rank rit obtains child of rank by the inductive hypothesisit already has children of ranks thus establishing the lemma the next lemma seems somewhat obvious but is used implicitly in the analysis lemma at any point in the union/find algorithmthe ranks of the nodes on path from the leaf to root increase monotonically proof the lemma is obvious if there is no path compression ifafter path compressionsome node is descendant of wthen clearly must have been descendant of when only unions were considered hence the rank of is less than the rank of suppose we have two algorithmsa and algorithm works and computes all the answers correctlybut algorithm does not compute correctlyor even produce useful answers supposehoweverthat every step in algorithm can be mapped to an equivalent step in algorithm then it is easy to see that the running time for algorithm describes the running time for algorithm exactly we can use this idea to analyze the running time of the disjoint sets data structure we will describe an algorithm bwhose running time is exactly the same as the disjoint sets structureand then algorithm cwhose running time is exactly the same as algorithm thus any bound for algorithm will be bound for the disjoint sets data structure |
23,656 | the disjoint sets class partial path compression algorithm is our standard sequence of union-by-rank and find with path compression operations we design an algorithm that will perform the exact same sequence of path compression operations as algorithm in algorithm bwe perform all the unions prior to any find then each find operation in algorithm is replaced by partial find operation in algorithm partial find operation specifies the search item and the node up to which the path compression is performed the node that will be used is the node that would have been the root at the time the matching find was performed in algorithm figure shows that algorithm and algorithm will get equivalent trees (forestsat the endand it is easy to see that the exact same amount of parent changes are performed by algorithm ' findscompared to algorithm ' partial finds but algorithm should be simpler to analyzesince we have removed the mixing of unions and finds from the equation the basic quantity to analyze is the number of parent changes that can occur in any sequence of partial findssince all but the top two nodes in any find with path compression will obtain new parents recursive decomposition what we would like to do next is to divide each tree into two halvesa top half and bottom half we would then like to ensure that the number of partial find operations in the top half plus the number of partial find operations in the bottom half is exactly the same as the total number of partial find operations we would then like to write formula for the total path compression cost in the tree in terms of the path compression cost in the top half plus the path compression cost in the bottom half without specifying how we decide which nodes are in the top halfand which nodes are in the bottom halfwe can look at figures and to see how most of what we want to do can work immediately in figure the partial find resides entirely in the bottom half thus one partial find in the bottom half corresponds to one original partial findand the charges can be recursively assigned to the bottom half find (cg union (bfg union (bfa partial find (cba figure sequences of union and find operations replaced with equivalent cost of union and partial find operations |
23,657 | bottom figure recursive decompositioncase partial find is entirely in bottom top bottom figure recursive decompositioncase partial find is entirely in top top bottom figure recursive decompositioncase partial find goes from bottom to top |
23,658 | the disjoint sets class in figure the partial find resides entirely in the top half thus one partial find in the top half corresponds to one original partial findand the charges can be recursively assigned to the top half howeverwe run into lots of trouble when we reach figure here is in the bottom halfand is in the top half the path compression would require that all nodes from to ' child acquire as its parent for nodes in the top halfthat is no problembut for nodes in the bottom half this is deal breakerany recursive charges to the bottom have to keep everything in the bottom so as figure showswe can perform the path compression on the topbut while some nodes in the bottom will need new parentsit is not clear what to dobecause the new parents for those bottom nodes cannot be top nodesand the new parents cannot be other bottom nodes the only option is to make loop where these nodesparents are themselves and make sure these parent changes are correctly charged in our accounting although this is new algorithm because it can no longer be used to generate an identical treewe don' need identical treeswe only need to be sure that each original partial find can be mapped into new partial find operation and that the charges are identical figure shows what the new tree will look likeand so the big remaining issue is the accounting looking at figure we see that the path compression charges from to can be split into three parts firstthere is the path compression from (the first top node on the upward pathto clearly those charges are already accounted for recursively then there is the charge from the topmost-bottom node to but that is only one unitand there can be at most one of those per partial find operation in factwe can do little betterthere can be at most one of those per partial find operation on the top half but how do we account for the parent changes on the path from to wone idea would be to argue that those changes would be exactly the same cost as if there were partial find from to but there is big problem with that argumentit converts an original partial find into partial find on the top plus partial find on the bottomwhich means the number of operationsy top bottom figure recursive decompositioncase path compression can be performed on the top nodesbut the bottom nodes must get new parentsthe parents cannot be top parentsand they cannot be other bottom nodes |
23,659 | top bottom figure recursive decompositioncase the bottom node new parents are the nodes themselves mwould no longer be the same fortunatelythere is simpler argumentsince each node on the bottom can have its parent set to itself only oncethe number of charges are limited by the number of nodes on the bottom whose parents are also in the bottom ( is excludedthere is one important detail that we must verify can we get in trouble on subsequent partial find given that our reformulation detaches the nodes between and from the path to ythe answer is no in the original partial findsuppose any of the nodes between and are involved in subsequent original partial find in that caseit will be with one of ' ancestorsand when that happensany of those nodes will be the topmost "bottom nodein our reformulation thus on the subsequent partial findthe original partial find' parent change will have corresponding one unit charge in our reformulation we can now proceed with the analysis let be the total number of original partial find operations let mt be the total number of partial find operations performed exclusively on the top halfand let mb be the total number of partial find operations performed exclusively on the bottom half let be the total number of nodes let nt be the total number of tophalf nodeslet nb be the total number of bottom-half nodesand let nnrb be the total number of non-root bottom nodes ( the number of bottom nodes whose parents are also bottom nodes prior to any partial findslemma mt mb proof in cases and each original partial find operation is replaced by partial find on the top halfand in case it is replaced by partial find on the bottom half thus each partial find is replaced by exactly one partial find operation on one of the halves our basic idea is that we are going to partition the nodes so that all nodes with rank or lower are in the bottomand the remaining nodes are in the top the choice of will be made later in the proof the next lemma shows that we can provide recursive formula |
23,660 | the disjoint sets class for the number of parent changes by splitting the charges into the top and bottom groups one of the key ideas is that recursive formula is written not only in terms of and nwhich would be obviousbut also in terms of the maximum rank in the group lemma let (mnrbe the number of parent changes for sequence of finds with path compression on items whose maximum rank is suppose we partition so that all nodes with rank at or lower are in the bottomand the remaining nodes are in the top assuming appropriate initial conditionsc(mnrc(mt nt rc(mb nb smt nnrb proof the path compression that is performed in each of the three cases is covered by (mt nt rc(mb nb snode in case is accounted for by mt finallyall the other bottom nodes on the path are non-root nodes that can have their parent set to themselves at most once in the entire sequence of compressions they are accounted for by nnrb if union-by-rank is usedthen by lemma every top node has children of ranks prior to the commencement of the partial find operations each of those children are definitely root nodes in the bottom (their parent is top nodeso for each top nodes nodes (the children plus the top node itself are definitely not included in nnrb thuswe can refomulate lemma as followslemma let (mnrbe the number of parent changes for sequence of finds with path compression on items whose maximum rank is suppose we partition so that all nodes with rank at or lower are in the bottomand the remaining nodes are in the top assuming appropriate initial conditionsc(mnrc(mt nt rc(mb nb smt ( )nt proof substitute nnrb ( )nt into lemma if we look at lemma we see that (mnris recursively defined in terms of two smaller instances our basic goal at this point is to remove one of these instances by providing bound for it what we would like to do is to remove (mt nt rwhybecauseif we do sowhat is left is (mb nb sin that casewe have recursive formula in which is reduced to if is small enoughwe can make use of variation of equation ( )namelythat the solution to <= ( ( tf(nm > is ( ( )solet' start with simple bound for (mnr) |
23,661 | theorem (mnrm log proof we start with lemma (mnrc(mt nt rc(mb nb smt ( )nt ( observe that in the top halfthere are only nodes of rank + + rand thus no node can have its parent change more than ( - - times this yields trivial bound of nt ( - - for (mt nt rthusc(mnrnt ( (mb nb smt ( )nt ( combining termsc(mnrnt ( (mb nb smt ( select / then so (mnrc(mb nb / mt ( equivalentlysince according to lemma mb +mt (the proof falls apart without this) (mnrm (mb nb / mb ( let (mnrc(mnrmthen (mnrd(mb nb / ( which implies (mnrn log this yields (mnrm log theorem any sequence of unions and finds with path compression makes at most log log parent changes during the finds proof the bound is immediate from theorem since <log an om log bound the bound in theorem is pretty goodbut with little workwe can do even better recallthat central idea of the recursive decomposition is choosing to be as small as possible but to do thisthe other terms must also be smalland as gets smallerwe would expect (mt nt rto get larger but the bound for (mt nt rused primitive estimateand theorem itself can now be used to give better estimate for this term since the (mt nt restimate will now be lowerwe will be able to use lower theorem (mnr logr |
23,662 | the disjoint sets class proof from lemma we havec(mnrc(mt nt rc(mb nb smt ( )nt ( and by theorem (mt nt rmt nt log thusc(mnrmt nt log (mb nb smt ( )nt ( rearranging and combining terms yields (mnrc(mb nb mt ( log )nt ( so choose log clearlythis choice implies that ( log and thus we obtain (mnrc(mb nb log mt ( rearranging as in theorem we obtain (mnr (mb nb log mb ( this timelet (mnrc(mnr mthen (mnrd(mb nb log ( which implies (mnrn logr this yields (mnr logr an om (mnbound not surprisinglywe can now use theorem to improve theorem theorem (mnr log* proof following the steps in the proof of theorem we have (mnrc(mt nt rc(mb nb smt ( )nt ( and by theorem (mt nt mt nt logr thusc(mnr mt nt logr (mb nb smt ( )nt ( rearranging and combining terms yields (mnrc(mb nb mt ( logr )nt ( so choose logr to obtain (mnrc(mb nb logr mt ( |
23,663 | rearranging as in theorems and we obtain (mnr (mb nb logr mb ( this timelet (mnrc(mnr mthen (mnrd(mb nb logrn ( which implies (mnrn log* this yields (mnr log* needless to saywe could continue this ad infinitim thus with bit of mathwe get progression of boundsc(mnr logr (mnr log* (mnr log** (mnr log*** (mnr log**** each of these bounds would seem to be better than the previous sinceafter allthe more the slower log** grows howeverthis ignores the fact that while log**** is smaller than log***rthe term is not smaller than the term thus what we would like to do is to optimize the number of that are used define (mnto represent the optimal number of that will be used specificallyi times ***(log <( /na(mnmin > log thenthe running time of the union/find algorithm can be bounded by (ma(mn)theorem any sequence of unions and finds with path compression makes at most times ( ) log ***(log nparent changes during the finds proof this follows from the above discussion and the fact that <log theorem any sequence of unions and finds with path compression makes at most ma(mn parent changes during the finds proof in theorem choose to be (mn)thuswe obtain bound of ( + ) + ( / )or ma(mn |
23,664 | the disjoint sets class an application an example of the use of the union/find data structure is the generation of mazessuch as the one shown in figure in figure the starting point is the top-left cornerand the ending point is the bottom-right corner we can view the maze as -by- rectangle of cells in which the top-left cell is connected to the bottom-right celland cells are separated from their neighboring cells via walls simple algorithm to generate the maze is to start with walls everywhere (except for the entrance and exitwe then continually choose wall randomlyand knock it down if the cells that the wall separates are not already connected to each other if we repeat this process until the starting and ending cells are connectedthen we have maze it is actually better to continue knocking down walls until every cell is reachable from every other cell (this generates more false leads in the mazewe illustrate the algorithm with -by- maze figure shows the initial configuration we use the union/find data structure to represent sets of cells that are connected to each other initiallywalls are everywhereand each cell is in its own equivalence class figure shows later stage of the algorithmafter few walls have been knocked down supposeat this stagethe wall that connects cells and is randomly targeted because and are already connected (they are in the same set)we would not remove the wallas it would simply trivialize the maze suppose that cells and are randomly targeted next by performing two find operationswe see that these are in different setsthus and are not already connected thereforewe knock down the wall that separates themas shown in figure notice that as result of this operationthe sets figure -by- maze |
23,665 | { { { { { { { { { { { { { { { { { { { { { { { { { figure initial stateall walls upall cells in their own set containing and are combined via union operation this is because everything that was connected to is now connected to everything that was connected to at the end of the algorithmdepicted in figure everything is connectedand we are done the running time of the algorithm is dominated by the union/find costs the size of the union/find universe is equal to the number of cells the number of find operations is proportional to the number of cellssince the number of removed walls is one less than the number of cellswhile with carewe see that there are only about twice the number of walls as cells in the first place thusif is the number of cellssince there are two finds per randomly targeted wallthis gives an estimate of between (roughly and find operations throughout the algorithm thereforethe algorithm' running time can be taken as ( logn)and this algorithm quickly generates maze { { { { { { { { { { { { { figure at some point in the algorithmseveral walls downsets have mergedif at this point the wall between and is randomly selectedthis wall is not knocked downbecause and are already connected |
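The algorithm just described translates almost directly into code. The following is a minimal sketch, assuming a DisjSets-style class with find and unionSets operations as developed earlier in this chapter; the cell numbering scheme, the function name, and the use of std::mt19937 are illustrative choices, not taken from the text.

    #include <algorithm>
    #include <random>
    #include <utility>
    #include <vector>
    #include "DisjSets.h"   // assumed: the disjoint sets class from this chapter

    // Generate a rows x cols maze: knock down random walls until every cell is
    // connected to every other cell. Each wall is identified by the pair of
    // cells it separates; the removed walls form a spanning tree of the cells.
    std::vector<std::pair<int,int>> generateMaze( int rows, int cols )
    {
        int numCells = rows * cols;
        DisjSets cells{ numCells };                 // every cell starts in its own set
        std::vector<std::pair<int,int>> removedWalls;

        // Collect every interior wall as a pair of adjacent cell numbers.
        std::vector<std::pair<int,int>> walls;
        for( int r = 0; r < rows; ++r )
            for( int c = 0; c < cols; ++c )
            {
                int cell = r * cols + c;
                if( c + 1 < cols ) walls.push_back( { cell, cell + 1 } );     // east wall
                if( r + 1 < rows ) walls.push_back( { cell, cell + cols } );  // south wall
            }

        std::mt19937 rng{ std::random_device{ }( ) };
        std::shuffle( begin( walls ), end( walls ), rng );

        for( const auto & w : walls )
        {
            int root1 = cells.find( w.first );
            int root2 = cells.find( w.second );
            if( root1 != root2 )                    // cells not already connected
            {
                cells.unionSets( root1, root2 );    // knock the wall down
                removedWalls.push_back( w );
            }
        }
        return removedWalls;                        // exactly numCells - 1 walls removed
    }

Shuffling the walls once and scanning them is just one convenient way to "choose a wall randomly"; as the text notes, there are two finds per candidate wall, so the total cost is dominated by the union/find operations.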
23,666 | the disjoint sets class figure: the wall between the two randomly selected squares is knocked down, because the squares are not already connected; their sets are merged. figure: eventually, enough walls are knocked down that all elements are in the same set. summary we have seen a very simple data structure to maintain disjoint sets. when the union operation is performed, it does not matter, as far as correctness is concerned, which set retains its name. a valuable lesson that should be learned here is that it can be very important to consider the alternatives when a particular step is not totally specified. the union step is flexible; by taking advantage of this, we are able to get a much more efficient algorithm. path compression is one of the earliest forms of self-adjustment, which we have seen elsewhere (splay trees, skew heaps). its use is extremely interesting, especially from a theoretical point of view, because it was one of the first examples of a simple algorithm with a not-so-simple worst-case analysis
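As a reminder of how little code the two ideas require, here is a compact, self-contained sketch consistent with the array representation used in this chapter (a root stores a negative value encoding its height; member names here are illustrative):

    #include <vector>

    // Minimal disjoint-sets sketch: s[x] is the parent of x, or a negative
    // value (-height - 1) if x is a root.
    struct DisjointSets
    {
        std::vector<int> s;
        explicit DisjointSets( int n ) : s( n, -1 ) { }

        int find( int x )
        {
            if( s[ x ] < 0 )
                return x;
            return s[ x ] = find( s[ x ] );          // path compression
        }

        void unionSets( int root1, int root2 )        // union-by-height
        {
            if( s[ root2 ] < s[ root1 ] )             // root2 is deeper
                s[ root1 ] = root2;
            else
            {
                if( s[ root1 ] == s[ root2 ] )
                    --s[ root1 ];                     // equal heights: root1 grows
                s[ root2 ] = root1;
            }
        }
    };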
23,667 | exercises show the result of the following sequence of instructionsunion( , )union( , )union( , )union( , )union( , )union( , )union( , )union( , )union ( , )union( , )union( , )union( , )union( , )union( , )union ( , )union( when the unions are performed arbitrarily performed by height performed by size for each of the trees in the previous exerciseperform find with path compression on the deepest node write program to determine the effects of path compression and the various unioning strategies your program should process long sequence of equivalence operations using all six of the possible strategies show that if unions are performed by heightthen the depth of any tree is (log suppose (nis nicely defined function that reduces to smaller integer what ( ( ) with appropriate initial is the solution to the recurrence (nf(nconditions show that if then the running time of union/find operations is (mb show that if log nthen the running time of union/find operations is (mc suppose ( log log nwhat is the running time of union/find operationsd suppose ( lognwhat is the running time of union/find operationstarjan' original bound for the union/find algorithm defined (mnmin{ > |( (im/ log )}where ( >= ( ( (ija( (ij ) >= ij > herea(mnis one version of the ackermann function are the two definitions of asymptotically equivalentprove that for the mazes generated by the algorithm in section the path from the starting to ending points is unique design an algorithm that generates maze that contains no path from start to finish but has the property that the removal of prespecified wall creates unique path suppose we want to add an extra operationdeunionwhich undoes the last union operation that has not been already undone show that if we do union-by-height and finds without path compressionthen deunion is easyand sequence of unionfindand deunion operations takes ( log ntime why does path compression make deunion hard |
23,668 | the disjoint sets class show how to implement all three operations so that the sequence of operations takes ( log /log log ntime suppose we want to add an extra operationremove( )which removes from its current set and places it in its own show how to modify the union/find algorithm so that the running time of sequence of unionfindand remove operations is (ma(mn)show that if all of the unions precede the findsthen the disjoint sets algorithm with path compression requires linear timeeven if the unions are done arbitrarily prove that if unions are done arbitrarilybut path compression is performed on the findsthen the worst-case running time is ( log nprove that if unions are done by size and path compression is performedthe worstcase running time is (ma(mn)the disjoint set analysis in section can be refined to provide tight bounds for small show that (mn and (mn are both show that (mn is at most let < choose and show that (mnris at most suppose we implement partial path compression on find(iby making every other node on the path from to the root link to its grandparent (where this makes sensethis is known as path halving write procedure to do this prove that if path halving is performed on the finds and either union-by-height or union-by-size is usedthe worst-case running time is (ma(mn)write program that generates mazes of arbitrary size if you are using system with windowing packagegenerate maze similar to that in figure otherwise describe textual representation of the maze (for instanceeach line of output represents square and has information about which walls are presentand have your program generate representation references various solutions to the union/find problem can be found in [ ][ ]and [ hopcroft and ullman showed an ( lognbound using nonrecursive decomposition tarjan [ obtained the bound (ma(mn))where (mnis as defined in exercise more precise (but asymptotically identicalbound for appears in [ and [ the analysis in section is due to seidel and sharir [ various other strategies for path compression and unions also achieve the same boundsee [ for details lower bound showing that under certain restrictions (ma(mn)time is required to process union/find operations was given by tarjan [ identical bounds under less restrictive conditions have been shown in [ and [ applications of the union/find data structure appear in [ and [ certain special cases of the union/find problem can be solved in (mtime [ this reduces the running time of several algorithmssuch as [ ]graph dominanceand reducibility (see references |
23,669 | in by factor of (mnotherssuch as [ and the graph connectivity problem in this are unaffected the paper lists examples tarjan has used path compression to obtain efficient algorithms for several graph problems [ average-case results for the union/find problem appear in [ ][ ][ ]and [ results bounding the running time of any single operation (as opposed to the entire sequenceappear in [ and [ exercise is solved in [ general union/find structuresupporting more operationsis given in [ ahoj hopcroftand ullman"on finding lowest common ancestors in trees,siam journal on computing ( ) - banachowski" complement to tarjan' result about the lower bound on the complexity of the set union problem,information processing letters ( ) - bollobas and simon"probabilistic analysis of disjoint set union algorithms,siam journal on computing ( ) - blum"on the single-operation worst-case time complexity of the disjoint set union problem,siam journal on computing ( ) - doyle and rivest"linear expected time of simple union find algorithm,information processing letters ( ) - fischer"efficiency of equivalence algorithms,in complexity of computer computation (eds miller and thatcher)plenum pressnew york - fredman and saks"the cell probe complexity of dynamic data structures,proceedings of the twenty-first annual symposium on theory of computing ( ) - gabow and tarjan" linear-time algorithm for special case of disjoint set union,journal of computer and system sciences ( ) - galler and fischer"an improved equivalence algorithm,communications of the acm ( ) - hopcroft and karp"an algorithm for testing the equivalence of finite automata,technical report tr- - department of computer sciencecornell universityithacan hopcroft and ullman"set merging algorithms,siam journal on computing ( ) - knuth and schonhage"the expected linearity of simple equivalence algorithm,theoretical computer science ( ) - lapoutre"new techniques for the union-find problem,proceedings of the first annual acm-siam symposium on discrete algorithms ( ) - lapoutre"lower bounds for the union-find and the split-find problem on pointer machines,proceedings of the twenty-second annual acm symposium on theory of computing ( ) - seidel and sharir"top-down analysis of path compression,siam journal on computing ( ) - tarjan"efficiency of good but not linear set union algorithm,journal of the acm ( ) - tarjan" class of algorithms which require nonlinear time to maintain disjoint sets,journal of computer and system sciences ( ) - |
23,670 | the disjoint sets class tarjan"applications of path compression on balanced trees,journal of the acm ( ) - tarjan and van leeuwen"worst-case analysis of set union algorithms,journal of the acm ( ) - van kreveld and overmars"union-copy structures and dynamic segment trees,journal of the acm ( ) - westbrook and tarjan"amortized analysis of algorithms for set union with backtracking,siam journal on computing ( ) - yao"on the average behavior of set merging algorithms,proceedings of eighth annual acm symposium on the theory of computation ( ) - |
23,671 | graph algorithms in this we discuss several common problems in graph theory not only are these algorithms useful in practicethey are also interesting because in many real-life applications they are too slow unless careful attention is paid to the choice of data structures we will show several real-life problemswhich can be converted to problems on graphs give algorithms to solve several common graph problems show how the proper choice of data structures can drastically reduce the running time of these algorithms see an important techniqueknown as depth-first searchand show how it can be used to solve several seemingly nontrivial problems in linear time definitions graph (veconsists of set of verticesvand set of edgese each edge is pair (vw)where vw edges are sometimes referred to as arcs if the pair is orderedthen the graph is directed directed graphs are sometimes referred to as digraphs vertex is adjacent to if and only if (vwe in an undirected graph with edge (vw)and hence (wv) is adjacent to and is adjacent to sometimes an edge has third componentknown as either weight or cost path in graph is sequence of vertices wn such that (wi wi+ for < the length of such path is the number of edges on the pathwhich is equal to we allow path from vertex to itselfif this path contains no edgesthen the path length is this is convenient way to define an otherwise special case if the graph contains an edge (vvfrom vertex to itselfthen the path vv is sometimes referred to as loop the graphs we will consider will generally be loopless simple path is path such that all vertices are distinctexcept that the first and last could be the same cycle in directed graph is path of length at least such that wn this cycle is simple if the path is simple for undirected graphswe require that the edges be distinct the logic of these requirements is that the path uvu in an undirected graph should not be considered cyclebecause (uvand (vuare the same edge in directed graphthese are different edgesso it makes sense to call this cycle directed graph is acyclic if it has no cycles directed acyclic graph is sometimes referred to by its abbreviationdag |
23,672 | graph algorithms an undirected graph is connected if there is path from every vertex to every other vertex directed graph with this property is called strongly connected if directed graph is not strongly connectedbut the underlying graph (without direction to the arcsis connectedthen the graph is said to be weakly connected complete graph is graph in which there is an edge between every pair of vertices an example of real-life situation that can be modeled by graph is the airport system each airport is vertexand two vertices are connected by an edge if there is nonstop flight from the airports that are represented by the vertices the edge could have weightrepresenting the timedistanceor cost of the flight it is reasonable to assume that such graph is directedsince it might take longer or cost more (depending on local taxesfor exampleto fly in different directions we would probably like to make sure that the airport system is strongly connectedso that it is always possible to fly from any airport to any other airport we might also like to quickly determine the best flight between any two airports "bestcould mean the path with the fewest number of edges or could be taken with respect to oneor allof the weight measures traffic flow can be modeled by graph each street intersection represents vertexand each street is an edge the edge costs could representamong other thingsa speed limit or capacity (number of laneswe could then ask for the shortest route or use this information to find the most likely location for bottlenecks in the remainder of this we will see several more applications of graphs many of these graphs can be quite largeso it is important that the algorithms we use be efficient representation of graphs we will consider directed graphs (undirected graphs are similarly representedsupposefor nowthat we can number the verticesstarting at the graph shown in figure represents vertices and edges figure directed graph |
23,673 | one simple way to represent graph is to use two-dimensional array this is known as an adjacency matrix representation for each edge (uv)we set [ ][vto trueotherwise the entry in the array is false if the edge has weight associated with itthen we can set [ ][vequal to the weight and use either very large or very small weight as sentinel to indicate nonexistent edges for instanceif we were looking for the cheapest airplane routewe could represent nonexistent flights with cost of if we were lookingfor some strange reasonfor the most expensive airplane routewe could use (or perhaps to represent nonexistent edges although this has the merit of extreme simplicitythe space requirement is (| | )which can be prohibitive if the graph does not have very many edges an adjacency matrix is an appropriate representation if the graph is dense| (| | in most of the applications that we shall seethis is not true for instancesuppose the graph represents street map assume manhattan-like orientationwhere almost all the streets run either north-south or east-west thereforeany intersection is attached to roughly four streetsso if the graph is directed and all streets are two-waythen | |vif there are , intersectionsthen we have , -vertex graph with , edge entrieswhich would require an array of size , , most of these entries would contain zero this is intuitively badbecause we want our data structures to represent the data that are actually there and not the data that are not present if the graph is not densein other wordsif the graph is sparsea better solution is an adjacency list representation for each vertexwe keep list of all adjacent vertices the space requirement is then (| | |)which is linear in the size of the graph the abstract representation should be clear from figure if the edges have weightsthen this additional information is also stored in the adjacency lists adjacency lists are the standard way to represent graphs undirected graphs can be similarly representedeach edge (uvappears in two listsso the space usage essentially doubles common requirement in graph algorithms is to find all vertices adjacent to some given vertex vand this can be donein time proportional to the number of such vertices foundby simple scan down the appropriate adjacency list there are several alternatives for maintaining the adjacency lists firstobserve that the lists themselves can be maintained in either vectors or lists howeverfor sparse graphswhen using vectorsthe programmer may need to initialize each vector with smaller capacity than the defaultotherwisethere could be significant wasted space because it is important to be able to quickly obtain the list of adjacent vertices for any vertexthe two basic options are to use map in which the keys are vertices and the values are adjacency listsor to maintain each adjacency list as data member of vertex class the first option is arguably simplerbut the second option can be fasterbecause it avoids repeated lookups in the map in the second scenarioif the vertex is string (for instancean airport nameor the name of street intersection)then map can be used in which the key is the vertex name and the value is vertex (typically pointer to vertex)and each vertex object keeps list of (pointers to theadjacent vertices and perhaps also the original string name when we speak of linear-time graph algorithmso(| | |is the running time we require |
23,674 | graph algorithms (empty figure an adjacency list representation of graph in most of the we present the graph algorithms using pseudocode we will do this to save space andof courseto make the presentation of the algorithms much clearer at the end of section we provide working +implementation of routine that makes underlying use of shortest-path algorithm to obtain its answers topological sort topological sort is an ordering of vertices in directed acyclic graphsuch that if there is path from vi to vj then vj appears after vi in the ordering the graph in figure represents the course prerequisite structure at state university in miami directed edge (vwindicates that course must be completed before course may be attempted topological ordering of these courses is any course sequence that does not violate the prerequisite requirement it is clear that topological ordering is not possible if the graph has cyclesince for two vertices and on the cyclev precedes and precedes furthermorethe ordering is not necessarily uniqueany legal ordering will do in the graph in figure and are both topological orderings simple algorithm to find topological ordering is first to find any vertex with no incoming edges we can then print this vertexand remove italong with its edgesfrom the graph then we apply this same strategy to the rest of the graph to formalize thiswe define the indegree of vertex as the number of edges (uvwe compute the indegrees of all vertices in the graph assuming that the indegree for each |
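Either storage choice is easy to express directly. The sketch below shows the second option described above, a map from vertex name to a Vertex object that owns its adjacency list; the class and member names are illustrative, not a fixed interface from the text.

    #include <string>
    #include <unordered_map>
    #include <vector>

    // Illustrative adjacency-list representation for a sparse, weighted digraph.
    struct Vertex;

    struct Edge
    {
        Vertex *dest;    // pointer to the adjacent vertex
        double  cost;    // edge weight
    };

    struct Vertex
    {
        std::string       name;   // e.g., an airport code or a street intersection
        std::vector<Edge> adj;    // outgoing edges
        // algorithm bookkeeping (dist, known, path, ...) can be added as needed
    };

    class Graph
    {
      public:
        // Each endpoint is looked up once, so adding an edge is O(1) expected time.
        void addEdge( const std::string & from, const std::string & to, double cost )
        {
            Vertex *v = getVertex( from );
            Vertex *w = getVertex( to );
            v->adj.push_back( Edge{ w, cost } );
        }

      private:
        std::unordered_map<std::string, Vertex> vertexMap;

        Vertex * getVertex( const std::string & name )
        {
            Vertex & v = vertexMap[ name ];   // default-constructs on first use
            v.name = name;
            return &v;                        // references into an unordered_map stay valid
        }
    };

Keeping the adjacency list inside the Vertex object means each subsequent scan of a vertex's neighbors avoids a map lookup, which is exactly the speed argument made in the text.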
23,675 | cap mad cop mac mad mad cop cop cop cop cis cop cda cop cop cda cop figure an acyclic graph representing course prerequisite structure figure an acyclic graph vertex is storedand that the graph is read into an adjacency listwe can then apply the algorithm in figure to generate topological ordering the function findnewvertexofindegreezero scans the array of vertices looking for vertex with indegree that has not already been assigned topological number it returns not_a_vertex if no such vertex existsthis indicates that the graph has cycle because findnewvertexofindegreezero is simple sequential scan of the array of verticeseach call to it takes (| |time since there are |vsuch callsthe running time of the algorithm is (| | by paying more careful attention to the data structuresit is possible to do better the cause of the poor running time is the sequential scan through the array of vertices if the |
23,676 | graph algorithms void graph::topsortforint counter counter num_verticescounter+vertex findnewvertexofindegreezero)ifv =not_a_vertex throw cyclefoundexception} topnum counterfor each vertex adjacent to indegree--figure simple topological sort pseudocode graph is sparsewe would expect that only few vertices have their indegrees updated during each iteration howeverin the search for vertex of indegree we look at (potentiallyall the verticeseven though only few have changed we can remove this inefficiency by keeping all the (unassignedvertices of indegree in special box the findnewvertexofindegreezero function then returns (and removesany vertex in the box when we decrement the indegrees of the adjacent verticeswe check each vertex and place it in the box if its indegree falls to to implement the boxwe can use either stack or queuewe will use queue firstthe indegree is computed for every vertex then all vertices of indegree are placed on an initially empty queue while the queue is not emptya vertex is removedand all vertices adjacent to have their indegrees decremented vertex is put on the queue as soon as its indegree falls to the topological ordering then is the order in which the vertices dequeue figure shows the status after each phase vertex indegree before dequeue enqueue dequeue figure result of applying topological sort to the graph in figure |
23,677 | void graph::topsortqueue qint counter makeempty)for each vertex ifv indegree = enqueuev )while! isemptyvertex dequeue) topnum ++counter/assign next number for each vertex adjacent to if-- indegree = enqueuew )ifcounter !num_vertices throw cyclefoundexception}figure pseudocode to perform topological sort pseudocode implementation of this algorithm is given in figure as beforewe will assume that the graph is already read into an adjacency list and that the indegrees are computed and stored with the vertices we also assume each vertex has named data membertopnumin which to place its topological numbering the time to perform this algorithm is (| | |if adjacency lists are used this is apparent when one realizes that the body of the for loop is executed at most once per edge computing the indegrees can be done with the following codethis same logic shows that the cost of this computation is (| | |)even though there are nested loops for each vertex indegree for each vertex for each vertex adjacent to indegree++the queue operations are done at most once per vertexand the other initialization stepsincluding the computation of indegreesalso take time proportional to the size of the graph |
23,678 | graph algorithms shortest-path algorithms in this section we examine various shortest-path problems the input is weighted graphassociated with each edge (vi vj is cost ci, to traverse the edge the cost of path vn is - = ci, + this is referred to as the weighted path length the unweighted path length is merely the number of edges on the pathnamelyn single-source shortest-path problem given as input weighted graphg (ve)and distinguished vertexsfind the shortest weighted path from to every other vertex in for examplein the graph in figure the shortest weighted path from to has cost of and goes from to to to the shortest unweighted path between these vertices is generallywhen it is not specified whether we are referring to weighted or an unweighted paththe path is weighted if the graph is notice also that in this graph there is no path from to the graph in the preceding example has no edges of negative cost the graph in figure shows the problems that negative edges can cause the path from to has figure directed graph - figure graph with negative-cost cycle |
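A complete, runnable version of the queue-based algorithm, written against a plain adjacency-list representation with vertices numbered 0 to numVertices-1; the free-function form and the use of runtime_error are illustrative simplifications of the book's Graph member function and CycleFoundException.

    #include <queue>
    #include <stdexcept>
    #include <vector>

    // Queue-based topological sort: graph[v] lists the vertices adjacent to v.
    // Returns the vertices in topological order; throws if the graph has a cycle.
    std::vector<int> topSort( const std::vector<std::vector<int>> & graph )
    {
        int numVertices = graph.size( );

        std::vector<int> indegree( numVertices, 0 );
        for( int v = 0; v < numVertices; ++v )
            for( int w : graph[ v ] )
                ++indegree[ w ];

        std::queue<int> q;
        for( int v = 0; v < numVertices; ++v )
            if( indegree[ v ] == 0 )
                q.push( v );

        std::vector<int> order;
        while( !q.empty( ) )
        {
            int v = q.front( ); q.pop( );
            order.push_back( v );
            for( int w : graph[ v ] )
                if( --indegree[ w ] == 0 )   // w has no remaining prerequisites
                    q.push( w );
        }

        if( (int) order.size( ) != numVertices )
            throw std::runtime_error{ "cycle found in graph" };
        return order;
    }

Every edge is examined once while computing indegrees and once while decrementing them, which is the O(|E| + |V|) bound derived above.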
23,679 | cost but shorter path exists by following the loop which has cost - this path is still not the shortestbecause we could stay in the loop arbitrarily long thusthe shortest path between these two points is undefined similarlythe shortest path from to is undefinedbecause we can get into the same loop this loop is known as negative-cost cyclewhen one is present in the graphthe shortest paths are not defined negative-cost edges are not necessarily badas the cycles arebut their presence seems to make the problem harder for conveniencein the absence of negative-cost cyclethe shortest path from to is zero there are many examples where we might want to solve the shortest-path problem if the vertices represent computersthe edges represent link between computersand the costs represent communication costs (phone bill per megabyte of data)delay costs (number of seconds required to transmit megabyte)or combination of these and other factorsthen we can use the shortest-path algorithm to find the cheapest way to send electronic news from one computer to set of other computers we can model airplane or other mass transit routes by graphs and use shortestpath algorithm to compute the best route between two points in this and many practical applicationswe might want to find the shortest path from one vertexsto only one other vertext currently there are no algorithms in which finding the path from to one vertex is any faster (by more than constant factorthan finding the path from to all vertices we will examine algorithms to solve four versions of this problem firstwe will consider the unweighted shortest-path problem and show how to solve it in (| |+| |nextwe will show how to solve the weighted shortest-path problem if we assume that there are no negative edges the running time for this algorithm is (|elog | |when implemented with reasonable data structures if the graph has negative edgeswe will provide simple solutionwhich unfortunately has poor time bound of (| | |finallywe will solve the weighted problem for the special case of acyclic graphs in linear time unweighted shortest paths figure shows an unweighted graphg using some vertexswhich is an input parameterwe would like to find the shortest path from to all other vertices we are only interested in the number of edges contained on the pathso there are no weights on the edges this is clearly special case of the weighted shortest-path problemsince we could assign all edges weight of for nowsuppose we are interested only in the length of the shortest pathsnot in the actual paths themselves keeping track of the actual paths will turn out to be matter of simple bookkeeping suppose we choose to be immediatelywe can tell that the shortest path from to is then path of length we can mark this informationobtaining the graph in figure now we can start looking for all vertices that are distance away from these can be found by looking at the vertices that are adjacent to if we do thiswe see that and are one edge from this is shown in figure we can now find vertices whose shortest path from is exactly by finding all the vertices adjacent to and (the vertices at distance )whose shortest paths are not |
23,680 | figure an unweighted directed graph figure graph after marking the start node as reachable in zero edges figure graph after finding all vertices whose path length from is |
23,681 | figure graph after finding all vertices whose shortest path is already known this search tells us that the shortest path to and is figure shows the progress that has been made so far finally we can findby examining vertices adjacent to the recently evaluated and that and have shortest path of three edges all vertices have now been calculatedand so figure shows the final result of the algorithm this strategy for searching graph is known as breadth-first search it operates by processing vertices in layersthe vertices closest to the start are evaluated firstand the most distant vertices are evaluated last this is much the same as level-order traversal for trees given this strategywe must translate it into code figure shows the initial configuration of the table that our algorithm will use to keep track of its progress for each vertexwe will keep track of three pieces of information firstwe will keep its distance from in the entry dv initially all vertices are unreachable except for swhose path length is the entry in pv is the bookkeeping variablewhich will allow us to print the actual paths the entry known is set to true after vertex is processed initiallyall entries are not knownincluding the start vertex when vertex is marked knownwe have figure final shortest paths |
23,682 | graph algorithms known dv pv figure initial configuration of table used in unweighted shortest-path computation guarantee that no cheaper path will ever be foundand so processing for that vertex is essentially complete the basic algorithm can be described in figure the algorithm in figure mimics the diagrams by declaring as known the vertices at distance then then and so onand setting all the adjacent vertices that still have dw to distance dw void graph::unweightedvertex for each vertex dist infinityv known falses dist forint currdist currdist num_verticescurrdist+for each vertex if! known & dist =currdist known truefor each vertex adjacent to ifw dist =infinity dist currdist path vfigure pseudocode for unweighted shortest-path algorithm |
23,683 | figure bad case for unweighted shortest-path algorithm using figure by tracing back through the pv variablethe actual path can be printed we will see how when we discuss the weighted case the running time of the algorithm is (| | )because of the doubly nested for loops an obvious inefficiency is that the outside loop continues until num_vertices- even if all the vertices become known much earlier although an extra test could be made to avoid thisit does not affect the worst-case running timeas can be seen by generalizing what happens when the input is the graph in figure with start vertex we can remove the inefficiency in much the same way as was done for topological sort at any point in timethere are only two types of unknown vertices that have dv some have dv currdistand the rest have dv currdist because of this extra structureit is very wasteful to search through the entire table to find proper vertex very simple but abstract solution is to keep two boxes box # will have the unknown vertices with dv currdistand box # will have dv currdist the test to find an appropriate vertex can be replaced by finding any vertex in box # after updating (inside the innermost if block)we can add to box # after the outermost for loop terminatesbox # is emptyand box # can be transferred to box # for the next pass of the for loop we can refine this idea even further by using just one queue at the start of the passthe queue contains only vertices of distance currdist when we add adjacent vertices of distance currdist since they enqueue at the rearwe are guaranteed that they will not be processed until after all the vertices of distance currdist have been processed after the last vertex at distance currdist dequeues and is processedthe queue only contains vertices of distance currdist so this process perpetuates we merely need to begin the process by placing the start node on the queue by itself the refined algorithm is shown in figure in the pseudocodewe have assumed that the start vertexsis passed as parameter alsoit is possible that the queue might empty prematurelyif some vertices are unreachable from the start node in this casea distance of infinity will be reported for these nodeswhich is perfectly reasonable finallythe known data member is not usedonce vertex is processed it can never enter the queue againso the fact that it need not be reprocessed is implicitly marked thusthe known data member can be discarded figure shows how the values on the graph we have been using are changed during the algorithm (it includes the changes that would occur to known if we had kept itusing the same analysis as was performed for topological sortwe see that the running time is (| | |)as long as adjacency lists are used dijkstra' algorithm if the graph is weightedthe problem (apparentlybecomes harderbut we can still use the ideas from the unweighted case |
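Before turning to the weighted case, here is a compact, runnable rendering of the refined queue-based unweighted algorithm just described, again with vertices numbered 0 to numVertices-1; the free-function form is an illustrative simplification of the book's Graph member function.

    #include <climits>
    #include <queue>
    #include <vector>

    // Unweighted single-source shortest paths by breadth-first search.
    // dist[v] is the number of edges on a shortest path from s to v
    // (INT_MAX if v is unreachable); path[v] is the previous vertex on that path.
    void unweighted( const std::vector<std::vector<int>> & graph, int s,
                     std::vector<int> & dist, std::vector<int> & path )
    {
        int numVertices = graph.size( );
        dist.assign( numVertices, INT_MAX );
        path.assign( numVertices, -1 );

        std::queue<int> q;
        dist[ s ] = 0;
        q.push( s );

        while( !q.empty( ) )
        {
            int v = q.front( ); q.pop( );
            for( int w : graph[ v ] )
                if( dist[ w ] == INT_MAX )     // first time w is reached
                {
                    dist[ w ] = dist[ v ] + 1;
                    path[ w ] = v;
                    q.push( w );
                }
        }
    }

As noted above, no explicit known flag is needed: a vertex enters the queue only once, the first time it is reached.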
23,684 | graph algorithms void graph::unweightedvertex queue qfor each vertex dist infinitys dist enqueues )while! isemptyvertex dequeue)for each vertex adjacent to ifw dist =infinity dist dist path vq enqueuew )figure psuedocode for unweighted shortest-path algorithm we keep all of the same information as before thuseach vertex is marked as either known or unknown tentative distance dv is kept for each vertexas before this distance turns out to be the shortest path length from to using only known vertices as intermediates as beforewe record pv which is the last vertex to cause change to dv the general method to solve the single-source shortest-path problem is known as dijkstra' algorithm this thirty-year-old solution is prime example of greedy algorithm greedy algorithms generally solve problem in stages by doing what appears to be the best thing at each stage for exampleto make change in currencymost people count out the quarters firstthen the dimesnickelsand pennies this greedy algorithm gives change using the minimum number of coins the main problem with greedy algorithms is that they do not always work the addition of -cent piece breaks the coin-changing algorithm for returning centsbecause the answer it gives (one -cent piece and three penniesis not optimal (one dime and one nickeldijkstra' algorithm proceeds in stagesjust like the unweighted shortest-path algorithm at each stagedijkstra' algorithm selects vertexvwhich has the smallest dv among all the unknown vertices and declares that the shortest path from to is known the remainder of stage consists of updating the values of dw in the unweighted casewe set dw dv if dw thuswe essentially lowered the value of dw if vertex offered shorter path if we apply the same logic to the weighted |
23,685 | initial state dequeued dequeued dequeued known dv pv known dv pv known dv pv known dv pv qv dequeued dequeued dequeued dequeued known dv pv known dv pv known dv pv known dv pv qv empty figure how the data change during the unweighted shortest-path algorithm casethen we should set dw dv cv, if this new value for dw would be an improvement put simplythe algorithm decides whether or not it is good idea to use on the path to the original costdw is the cost without using vthe cost calculated above is the cheapest path using (and only known verticesthe graph in figure is our example figure represents the initial configurationassuming that the start nodesis the first vertex selected is with path length this vertex is marked known now that is knownsome entries need to be adjusted the vertices adjacent to are and both these vertices get their entries adjustedas indicated in figure nextv is selected and marked known vertices and are adjacentand it turns out that all require adjustingas shown in figure nextv is selected is adjacent but already knownso no work is performed on it is adjacent but not adjustedbecause the cost of going through is and path of length is already known figure shows the table after these vertices are selected |
23,686 | graph algorithms figure the directed graph (againv known dv pv figure initial configuration of table used in dijkstra' algorithm known dv pv figure after is declared known the next vertex selected is at cost is the only adjacent vertexbut it is not adjustedbecause then is selectedand the distance for is adjusted down to the resulting table is depicted in figure nextv is selectedv gets updated down to the resulting table is figure |
23,687 | known dv pv figure after is declared known known dv pv figure after is declared known known dv pv figure after and then are declared known finallyv is selected the final table is shown in figure figure graphically shows how edges are marked known and vertices updated during dijkstra' algorithm to print out the actual path from start vertex to some vertex vwe can write recursive routine to follow the trail left in the variables we now give pseudocode to implement dijkstra' algorithm each vertex stores various data members that are used in the algorithm this is shown in figure |
23,688 | graph algorithms known dv pv figure after is declared known known dv pv figure after is declared known and algorithm terminates the path can be printed out using the recursive routine in figure the routine recursively prints the path all the way up to the vertex before on the pathand then just prints this works because the path is simple figure shows the main algorithmwhich is just for loop to fill up the table using the greedy selection rule proof by contradiction will show that this algorithm always works as long as no edge has negative cost if any edge has negative costthe algorithm could produce the wrong answer (see exercise ( )the running time depends on how the vertices are manipulatedwhich we have yet to consider if we use the obvious algorithm of sequentially scanning the vertices to find the minimum dv each phase will take (| |time to find the minimumand thus (| | time will be spent finding the minimum over the course of the algorithm the time for updating dw is constant per updateand there is at most one update per edge for total of (| |thusthe total running time is (| | | (| | if the graph is densewith | (| | )this algorithm is not only simple but also essentially optimalsince it runs in time linear in the number of edges if the graph is sparsewith | (| |)this algorithm is too slow in this casethe distances would need to be kept in priority queue there are actually two ways to do thisboth are similar |
23,689 | figure stages of dijkstra' algorithm selection of the vertex is deletemin operationsince once the unknown minimum vertex is foundit is no longer unknown and must be removed from future consideration the update of ' distance can be implemented two ways one way treats the update as decreasekey operation the time to find the minimum is then (log | |)as is the time to perform updateswhich amount to decreasekey operations this gives running time of (|elog | |vlog | | (|elog | |)an improvement |
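The second way, described below, is particularly easy to code with std::priority_queue: each time a vertex's distance improves, push a fresh (distance, vertex) entry and simply discard stale entries when they are dequeued. The following is a minimal sketch of that scheme, assuming vertices numbered 0 to numVertices-1 and an adjacency list of (neighbor, cost) pairs; the free-function form and names are illustrative, not the book's Graph class.

    #include <functional>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    // Dijkstra's algorithm with "lazy deletion": graph[v] holds (w, cost) pairs.
    // Requires nonnegative edge costs.
    void dijkstra( const std::vector<std::vector<std::pair<int,int>>> & graph, int s,
                   std::vector<long long> & dist, std::vector<int> & path )
    {
        const long long INF = std::numeric_limits<long long>::max( );
        int numVertices = graph.size( );
        dist.assign( numVertices, INF );
        path.assign( numVertices, -1 );

        typedef std::pair<long long,int> Entry;            // (distance, vertex)
        std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;

        dist[ s ] = 0;
        pq.push( Entry{ 0, s } );

        while( !pq.empty( ) )
        {
            Entry e = pq.top( ); pq.pop( );
            int v = e.second;
            if( e.first > dist[ v ] )          // stale entry: v is already known
                continue;
            for( const auto & edge : graph[ v ] )
            {
                int w = edge.first, cost = edge.second;
                if( dist[ v ] + cost < dist[ w ] )          // d_w = min(d_w, d_v + c_vw)
                {
                    dist[ w ] = dist[ v ] + cost;
                    path[ w ] = v;
                    pq.push( Entry{ dist[ w ], w } );
                }
            }
        }
    }

Each edge causes at most one push, so the queue never holds more than |E| entries and the running time is O(|E| log |E|) = O(|E| log |V|), matching the discussion of the alternate method below.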
23,690 | graph algorithms /*pseudocode sketch of the vertex structure in real ++path would be of type vertex *and many of the code fragments that we describe require either dereferencing or use the -operator instead of the operator needless to saythis obscures the basic algorithmic ideas *struct vertex list adj/adjacency list bool knowndisttype dist/disttype is probably int vertex path/probably vertex *as mentioned above /other data and member functions as needed }figure vertex class for dijkstra' algorithm (pseudocode/*print shortest path to after dijkstra has run assume that the path exists *void graph::printpathvertex ifv path !not_a_vertex printpathv path )cout <to "cout <vfigure routine to print the actual shortest path over the previous bound for sparse graphs since priority queues do not efficiently support the find operationthe location in the priority queue of each value of di will need to be maintained and updated whenever di changes in the priority queue if the priority queue is implemented by binary heapthis will be messy if pairing heap (is usedthe code is not too bad an alternate method is to insert and the new value dw into the priority queue every time ' distance changes thusthere may be more than one representative for each vertex in the priority queue when the deletemin operation removes the smallest vertex from the priority queueit must be checked to make sure that it is not already known andif |
23,691 | void graph::dijkstravertex for each vertex dist infinityv known falses dist whilethere is an unknown distance vertex vertex smallest unknown distance vertexv known truefor each vertex adjacent to if! known disttype cvw cost of edge from to wifv dist cvw dist /update decreasew dist to dist cvw ) path vfigure pseudocode for dijkstra' algorithm it isit is simply ignored and another deletemin is performed although this method is superior from software point of viewand is certainly much easier to codethe size of the priority queue could get to be as large as |ethis does not affect the asymptotic time boundssince | <| | implies that log | < log |vthuswe still get an (|elog | |algorithm howeverthe space requirement does increaseand this could be important in some applications moreoverbecause this method requires |edeletemins instead of only | |it is likely to be slower in practice notice that for the typical problemssuch as computer mail and mass transit commutesthe graphs are typically very sparse because most vertices have only couple of edgesso it is important in many applications to use priority queue to solve this problem there are better time bounds possible using dijkstra' algorithm if different data structures are used in we will see another priority queue data structure called the |
23,692 | graph algorithms fibonacci heap when this is usedthe running time is (| |+|vlog | |fibonacci heaps have good theoretical time bounds but fair amount of overheadso it is not clear whether using fibonacci heaps is actually better in practice than dijkstra' algorithm with binary heaps to datethere are no meaningful average-case results for this problem graphs with negative edge costs if the graph has negative edge coststhen dijkstra' algorithm does not work the problem is that once vertexuis declared knownit is possible that from some other unknown vertexvthere is path back to that is very negative in such casetaking path from to back to is better than going from to without using exercise (aasks you to construct an explicit example tempting solution is to add constant to each edge costthus removing negative edgescalculate shortest path on the new graphand then use that result on the original the naive implementation of this strategy does not work because paths with many edges become more weighty than paths with few edges combination of the weighted and unweighted algorithms will solve the problembut at the cost of drastic increase in running time we forget about the concept of known verticessince our algorithm needs to be able to change its mind we begin by placing on queue thenat each stagewe dequeue vertex we find all vertices adjacent to such that dw dv cv, we update dw and pw and place on queue if it is not already there bit can be set for each vertex to indicate presence in the queue we repeat the process until the queue is empty figure (almostimplements this algorithm although the algorithm works if there are no negative-cost cyclesit is no longer true that the code in the inner for loop is executed once per edge each vertex can dequeue at most |vtimesso the running time is (| | |if adjacency lists are used (exercise ( )this is quite an increase from dijkstra' algorithmso it is fortunate thatin practiceedge costs are nonnegative if negative-cost cycles are presentthen the algorithm as written will loop indefinitely by stopping the algorithm after any vertex has dequeued | timeswe can guarantee termination acyclic graphs if the graph is known to be acyclicwe can improve dijkstra' algorithm by changing the order in which vertices are declared knownotherwise known as the vertex selection rule the new rule is to select vertices in topological order the algorithm can be done in one passsince the selections and updates can take place as the topological sort is being performed this selection rule works because when vertex is selectedits distancedv can no longer be loweredsince by the topological ordering rule it has no incoming edges emanating from unknown nodes there is no need for priority queue with this selection rulethe running time is (| | |)since the selection takes constant time an acyclic graph could model some downhill skiing problem--we want to get from point to bbut can only go downhillso clearly there are no cycles another possible |
23,693 | void graph::weightednegativevertex queue qfor each vertex dist infinitys dist enqueues )while! isemptyvertex dequeue)for each vertex adjacent to ifv dist cvw dist /update dist dist cvww path vifw is not already in enqueuew )figure pseudocode for weighted shortest-path algorithm with negative edge costs application might be the modeling of (nonreversiblechemical reactions we could have each vertex represent particular state of an experiment edges would represent transition from one state to anotherand the edge weights might represent the energy released if only transitions from higher energy state to lower are allowedthe graph is acyclic more important use of acyclic graphs is critical path analysis the graph in figure will serve as our example each node represents an activity that must be performedalong with the time it takes to complete the activity this graph is thus known as an activity-node graph the edges represent precedence relationshipsan edge (vwmeans that activity must be completed before activity may begin of coursethis implies that the graph must be acyclic we assume that any activities that do not depend (either directly or indirectlyon each other can be performed in parallel by different servers this type of graph could be (and frequently isused to model construction projects in this casethere are several important questions which would be of interest to answer firstwhat is the earliest completion time for the projectwe can see from the graph that time units are required along the path acfh another important question is to determine which activities can be delayedand by how longwithout affecting the minimum completion time for instancedelaying any of acfor would push the completion |
23,694 | graph algorithms ( ( ( start ( ( ( finish ( ( ( figure activity-node graph time past units on the other handactivity is less critical and can be delayed up to two time units without affecting the final completion time to perform these calculationswe convert the activity-node graph to an event-node graph each event corresponds to the completion of an activity and all its dependent activities events reachable from node in the event-node graph may not commence until after the event is completed this graph can be constructed automatically or by hand dummy edges and nodes may need to be inserted in the case where an activity depends on several others this is necessary in order to avoid introducing false dependencies (or false lack of dependenciesthe event-node graph corresponding to the graph in figure is shown in figure to find the earliest completion time of the projectwe merely need to find the length of the longest path from the first event to the last event for general graphsthe longest-path problem generally does not make sensebecause of the possibility of positive-cost cycles these are the equivalent of negative-cost cycles in shortest-path problems if positive-cost cycles are presentwe could ask for the longest simple pathbut no satisfactory solution is known for this problem since the event-node graph is acyclicwe need not worry about cycles in this caseit is easy to adapt the shortest-path algorithm to compute the earliest / / / / / figure event-node graph / / / / |
23,695 | completion time for all nodes in the graph if eci is the earliest completion time for node ithen the applicable rules are ec ecw max (ecv cv, ( , ) figure shows the earliest completion time for each event in our example event-node graph we can also compute the latest timelci that each event can finish without affecting the final completion time the formulas to do this are lcn ecn lcv min (lcw cv, ( , ) these values can be computed in linear time by maintainingfor each vertexa list of all adjacent and preceding vertices the earliest completion times are computed for vertices by their topological orderand the latest completion times are computed by reverse topological order the latest completion times are shown in figure the slack time for each edge in the event-node graph represents the amount of time that the completion of the corresponding activity can be delayed without delaying the overall completion it is easy to see that slack( ,wlcw ecv cv, / / / / / / / / / figure earliest completion times / / / / / figure latest completion times / / / / |
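Collecting the three rules in one place, in cleaner notation (this is only a restatement of the formulas above):

\begin{align*}
EC_1 &= 0, & EC_w &= \max_{(v,w)\in E}\,(EC_v + c_{v,w}),\\
LC_n &= EC_n, & LC_v &= \min_{(v,w)\in E}\,(LC_w - c_{v,w}),\\
\text{Slack}_{(v,w)} &= LC_w - EC_v - c_{v,w}.
\end{align*}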
23,696 | graph algorithms / / / / / / / / / / / / / / / / / / figure earliest completion timelatest completion timeand slack figure shows the slack (as the third entryfor each activity in the event-node graph for each nodethe top number is the earliest completion time and the bottom entry is the latest completion time some activities have zero slack these are critical activitieswhich must finish on schedule there is at least one path consisting entirely of zero-slack edgessuch path is critical path all-pairs shortest path sometimes it is important to find the shortest paths between all pairs of vertices in the graph although we could just run the appropriate single-source algorithm |vtimeswe might expect somewhat faster solutionespecially on dense graphif we compute all the information at once in we will see an (| | algorithm to solve this problem for weighted graphs althoughfor dense graphsthis is the same bound as running simple (nonpriority queuedijkstra' algorithm |vtimesthe loops are so tight that the specialized all-pairs algorithm is likely to be faster in practice on sparse graphsof courseit is faster to run |vdijkstra' algorithms coded with priority queues shortest path example in this section we write some +routines to compute word ladders in word ladder each word is formed by changing one character in the ladder' previous word for instancewe can convert zero to five by sequence of one-character substitutions as followszero hero here hire fire five this is an unweighted shortest problem in which each word is vertexand two vertices have edges (in both directionsbetween them if they can be converted to each other with one-character substitution in section we described and wrote +routine that would create map in which the keys are wordsand the values are vectors containing the words that can result from one-character transformation as suchthis map represents the graphin adjacency list formatand we only need to write one routine to run the single-source unweighted shortest-path algorithm and second routine to output the sequence of wordsafter the |
23,697 | // Runs the shortest path calculation from the adjacency map, returning a map
    // that records, for each word reachable from first, the previous word on a
    // shortest ladder starting at first.
    // (Assumes #include <algorithm>, <queue>, <string>, <unordered_map>, <vector>
    //  and using namespace std;)
    unordered_map<string,string> findChain( const unordered_map<string,vector<string>> & adjacentWords,
                                            const string & first, const string & second )
    {
        unordered_map<string,string> previousWord;
        queue<string> q;
        q.push( first );

        while( !q.empty( ) )
        {
            string current = q.front( ); q.pop( );
            auto itr = adjacentWords.find( current );

            const vector<string> & adj = itr->second;
            for( const string & str : adj )
                if( previousWord[ str ] == "" )
                {
                    previousWord[ str ] = current;
                    q.push( str );
                }
        }
        previousWord[ first ] = "";     // repair the entry assigned to first inside the loop

        return previousWord;
    }

    // After the shortest path calculation has run, computes the vector that
    // contains the sequence of word changes to get from first to second.
    vector<string> getChainFromPreviousMap( const unordered_map<string,string> & previous,
                                            const string & second )
    {
        vector<string> result;
        auto & prev = const_cast<unordered_map<string,string> &>( previous );

        for( string current = second; current != ""; current = prev[ current ] )
            result.push_back( current );

        reverse( begin( result ), end( result ) );
        return result;
    }

figure c++ code to find word ladders
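A short illustration of how the two routines fit together; the driver function below is made up for this example, and adjacentWords is assumed to have been built by the adjacency-map routine described in Section 4.8.

    // Illustrative driver: assumes the figure above is in scope, along with
    // #include <iostream> and using namespace std;
    void printLadder( const unordered_map<string,vector<string>> & adjacentWords,
                      const string & first, const string & second )
    {
        unordered_map<string,string> previous = findChain( adjacentWords, first, second );
        vector<string> ladder = getChainFromPreviousMap( previous, second );

        for( const string & word : ladder )
            cout << word << " ";
        cout << endl;
        // for first = "zero" and second = "five", this prints:
        // zero hero here hire fire five
    }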
23,698 | graph algorithms single-source shortest-path algorithm has completed these two routines are both shown in figure the first routine is findchainwhich takes the map representing the adjacency lists and the two words to be connected and returns map in which the keys are wordsand the corresponding value is the word prior to the key on the shortest ladder starting at first in other wordsin the example aboveif the starting word is zerothe value for key five is firethe value for key fire is hirethe value for key hire is hereand so on clearly this provides enough information for the second routinegetchainfrompreviousmapwhich can work its way backward findchain is direct implementation of the pseudocode in figure and for simplicityit assumes that first is key in adjacentwords (this is easily tested prior to the callor we can add extra code at line that throws an exception if this condition is not satisfiedthe basic loop incorrectly assigns previous entry for first (when the initial word adjacent to first is processedso at line that entry is repaired getchainfromprevmap uses the prev map and secondwhich presumably is key in the map and returns the words used to form the word ladder by working its way backward through prev this generates the words backwardso the stl reverse algorithm is used to fix the problem the cast at line is needed because operator[cannot be applied on an immutable map it is possible to generalize this problem to allow single-character substitutions that include the deletion of character or the addition of character to compute the adjacency list requires only little more effortin the last algorithm in section every time representative for word in group is computedwe check if the representative is word in group if it isthen the representative is adjacent to (it is single-character deletion)and is adjacent to the representative (it is single-character additionit is also possible to assign cost to character deletion or insertion (that is higher than simple substitution)and this yields weighted shortest-path problem that can be solved with dijkstra' algorithm network flow problems suppose we are given directed graph (vewith edge capacities cv, these capacities could represent the amount of water that could flow through pipe or the amount of traffic that could flow on street between two intersections we have two verticesswhich we call the sourceand twhich is the sink through any edge(vw)at most cv, units of "flowmay pass at any vertexvthat is not either or tthe total flow coming in must equal the total flow going out the maximum-flow problem is to determine the maximum amount of flow that can pass from to as an examplefor the graph in figure on the left the maximum flow is as indicated by the graph on the right although this example graph is acyclicthis is not requirementour (eventualalgorithm will work even if the graph has cycle as required by the problem statementno edge carries more flow than its capacity vertex has three units of flow coming inwhich it distributes to and vertex takes three units of flow from and and combines thissending the result to vertex can |
23,699 | figure graph (leftand its maximum flow combine and distribute flow in any manner that it likesas long as edge capacities are not violated and as long as flow conservation is maintained (what goes in must come outlooking at the graphwe see that has edges of capacities and leaving itand has edges of capacities and entering it so perhaps the maximum flow could be instead of howeverfigure shows how we can prove that the maximum flow is we cut the graph into two partsone part contains and some other verticesthe other part contains since flow must cross through the cutthe total capacity of all edges (uvwhere is in ' partition and is in ' partition is bound on the maximum flow these edges are (acand (dt)with total capacity so the maximum flow cannot exceed any graph has large number of cutsthe cut with minimum total capacity provides bound on the maximum flowand as it turns out (but it is not immediately obvious)the minimum cut capacity is exactly equal to the maximum flow figure cut in graph partitions the vertices with and in different groups the total edge cost across the cut is proving that flow of is maximum |
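Stated as constraints, this is just a restatement of the problem description above: a feasible flow f assigns a value to each edge so that no capacity is exceeded and flow is conserved at every vertex other than s and t, and the objective is to maximize the total flow leaving s:

\begin{align*}
& 0 \le f_{v,w} \le c_{v,w} && \text{for every edge } (v,w) \in E,\\
& \sum_{(u,v) \in E} f_{u,v} = \sum_{(v,w) \in E} f_{v,w} && \text{for every vertex } v \ne s, t,\\
& \text{maximize } \sum_{(s,w) \in E} f_{s,w}.
\end{align*}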
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.