In [[computer science]], '''divide and conquer''' ('''D&C''') is an important [[algorithm design]] [[paradigm]] based on multi-branched [[recursion]]. A divide and conquer [[algorithm]] works by recursively breaking down a problem into two or more sub-problems of the same (or related) type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.
This technique is the basis of efficient algorithms for many kinds of problems, such as [[sorting algorithm|sorting]] (e.g., [[quicksort]], [[merge sort]]), [[multiplication algorithm|multiplying large numbers]] (e.g., the [[Karatsuba algorithm]]), [[syntactic analysis]] (e.g., [[top-down parser]]s), and computing the [[discrete Fourier transform]] ([[fast Fourier transform|FFT]]s).
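The three steps – divide, conquer (recursively), and combine – can be seen in a minimal [[merge sort]] sketch (Python; an illustration, not a tuned library routine):

<syntaxhighlight lang="python">
def merge_sort(a):
    # Base case: lists of 0 or 1 elements are already sorted.
    if len(a) <= 1:
        return a
    # Divide: split the input into two halves, and conquer each recursively.
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
</syntaxhighlight>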
On the other hand, the ability to understand and design D&C algorithms is a skill that takes time to master. As when proving a [[theorem]] by [[mathematical induction|induction]], it is often necessary to replace the original problem with a more general or complicated one in order to initialize the recursion, and there is no systematic method for finding the proper generalization. Such complications arise, for example, when optimizing the calculation of a [[Fibonacci number#Matrix form|Fibonacci number with efficient double recursion]].
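A sketch of that optimization: the matrix form yields the "fast doubling" identities <math>F(2k) = F(k)\,(2F(k+1) - F(k))</math> and <math>F(2k+1) = F(k)^2 + F(k+1)^2</math>, so <math>F(n)</math> can be computed with <math>O(\log n)</math> recursive steps instead of the exponentially many calls of the naive double recursion (Python; the function name is illustrative):

<syntaxhighlight lang="python">
def fib_pair(n):
    """Return (F(n), F(n+1)) using the fast-doubling identities."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)      # a = F(k), b = F(k+1), where k = n // 2
    c = a * (2 * b - a)          # F(2k)
    d = a * a + b * b            # F(2k+1)
    return (d, c + d) if n % 2 else (c, d)

print(fib_pair(10)[0])  # 55
</syntaxhighlight>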
The correctness of a divide and conquer algorithm is usually proved by [[mathematical induction]], and its computational cost is often determined by solving [[recurrence relation]]s.
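For instance, merge sort splits its input in half and merges the sorted halves in linear time, so its running time <math>T(n)</math> satisfies the recurrence
:<math>T(n) = 2\,T(n/2) + O(n),</math>
whose solution is <math>T(n) = O(n \log n)</math>.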
== Decrease and conquer ==

The name "divide and conquer" is sometimes applied also to algorithms that reduce each problem to only one sub-problem, such as the [[binary search]] algorithm for finding a record in a sorted list (or its analog in [[numerical algorithm|numerical computing]], the [[bisection method|bisection algorithm]] for [[root-finding algorithm|root finding]]).<ref name=CLR>Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest, ''Introduction to Algorithms'' (MIT Press, 2000).</ref> These algorithms can be implemented more efficiently than general divide-and-conquer algorithms; in particular, if they use [[tail recursion]], they can be converted into simple [[loop (computing)|loop]]s. Under this broad definition, however, every algorithm that uses recursion or loops could be regarded as a "divide and conquer algorithm". Therefore, some authors consider that the name "divide and conquer" should be used only when each problem may generate two or more subproblems.<ref>Brassard, G. and Bratley, P. ''Fundamentals of Algorithmics'', Prentice-Hall, 1996.</ref> The name '''decrease and conquer''' has been proposed instead for the single-subproblem class.<ref>Anany V. Levitin, ''Introduction to the Design and Analysis of Algorithms'' (Addison Wesley, 2002).</ref>
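For example, binary search reduces each problem to a single half-sized sub-problem, and because the recursive call is in tail position it is usually written as a loop (Python sketch):

<syntaxhighlight lang="python">
def binary_search(a, target):
    # Decrease and conquer: each iteration discards half of the remaining
    # range; the tail recursion has been converted into a simple loop.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target is not present

print(binary_search([1, 3, 4, 6, 8, 9, 11], 6))  # 3
</syntaxhighlight>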
An important application of decrease and conquer is in optimization, where if the search space is reduced ("pruned") by a constant factor at each step, the overall algorithm has the same asymptotic complexity as the pruning step, with the constant depending on the pruning factor (by summing the [[geometric series]]); this is known as [[prune and search]].
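For instance, if each pruning step takes time proportional to the current instance size and discards half of the search space, the total work on an instance of size <math>n</math> is bounded by the geometric series
:<math>n + \tfrac{n}{2} + \tfrac{n}{4} + \cdots \le 2n = O(n),</math>
the same asymptotic cost as the first pruning step alone.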
== Early historical examples ==

Early examples of these algorithms are primarily decrease and conquer – the original problem is successively broken down into ''single'' subproblems, and can indeed be solved iteratively.
Binary search, a decrease and conquer algorithm where the subproblems are of roughly half the original size, has a long history. While a clear description of the algorithm on computers appeared in 1946 in an article by [[John Mauchly]], the idea of using a sorted list of items to facilitate searching dates back at least as far as [[Babylonia]] in 200 BC.<ref name=Knuth3/> Another ancient decrease and conquer algorithm is the [[Euclidean algorithm]] to compute the [[greatest common divisor]] of two numbers (by reducing the numbers to smaller and smaller equivalent subproblems), which dates to several centuries BC.
An early example of a divide-and-conquer algorithm with multiple subproblems is [[Carl Friedrich Gauss|Gauss]]'s 1805 description of what is now called the [[Cooley-Tukey FFT algorithm|Cooley-Tukey fast Fourier transform]] (FFT) algorithm,<ref name=Heideman84>Heideman, M. T., D. H. Johnson, and C. S. Burrus, "Gauss and the history of the fast Fourier transform," ''IEEE ASSP Magazine'', 1, (4), 14–21 (1984).</ref> although he did not analyze its [[algorithmic complexity|operation count]] quantitatively, and FFTs did not become widespread until they were rediscovered over a century later.
An early two-subproblem D&C algorithm that was specifically developed for computers and properly analyzed is the [[merge sort]] algorithm, invented by [[John von Neumann]] in 1945.<ref>{{ cite book | last=Knuth | first=Donald | authorlink=Donald Knuth | year=1998 | title=The Art of Computer Programming: Volume 3 Sorting and Searching | page=159 | isbn=0-201-89685-0 }}</ref>
Another notable example is the [[Karatsuba algorithm|algorithm]] invented by [[Anatolii Alexeevitch Karatsuba|Anatolii A. Karatsuba]] in 1960<ref>{{cite journal| last=Karatsuba | first=Anatolii A. | authorlink=Anatolii Alexeevitch Karatsuba | coauthors=[[Yuri Petrovich Ofman|Yuri P. Ofman]] | year=1962 | title=Умножение многозначных чисел на автоматах | trans_title=Multiplication of multidigit numbers on automata | journal=[[Doklady Akademii Nauk SSSR]] | volume=146 | pages=293–294}} Translated in {{cite journal| journal=Physics-Doklady | volume=7 | year=1963 | pages=595–596}}</ref> that could multiply two ''n''-digit numbers in <math>O(n^{\log_2 3})</math> operations (in [[Big O notation]]; <math>\log_2 3 \approx 1.585</math>). This algorithm disproved [[Andrey Kolmogorov]]'s 1956 conjecture that <math>\Omega(n^2)</math> operations would be required for that task.
As another example of a divide and conquer algorithm that did not originally involve computers, [[Donald Knuth|Knuth]] gives the method a [[post office]] typically uses to route mail: letters are sorted into separate bags for different geographical areas, each of these bags is itself sorted into batches for smaller sub-regions, and so on until they are delivered.<ref name=Knuth3>Donald E. Knuth, ''The Art of Computer Programming: Volume 3, Sorting and Searching'', second edition (Addison-Wesley, 1998).</ref> This is related to a [[radix sort]], described for [[IBM 80 series Card Sorters|punch-card sorting]] machines as early as 1929.<ref name=Knuth3/>
== Advantages ==

=== Solving difficult problems ===

Divide and conquer is a powerful tool for solving conceptually difficult problems: all it requires is a way of breaking the problem into sub-problems, of solving the trivial cases, and of combining the sub-solutions into a solution to the original problem. Similarly, decrease and conquer only requires reducing the problem to a single smaller problem, such as the classic [[Tower of Hanoi]] puzzle, which reduces moving a tower of height ''n'' to moving a tower of height ''n'' − 1.
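A minimal sketch of that reduction (Python; note that the height ''n'' − 1 sub-problem is the single smaller problem, and it is solved twice per step):

<syntaxhighlight lang="python">
def hanoi(n, source, target, spare):
    # Moving n disks reduces to moving n - 1 disks, plus one move of
    # the largest disk; the n - 1 sub-problem is solved twice.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)    # clear the way
    print(f"move disk {n}: {source} -> {target}")
    hanoi(n - 1, spare, target, source)    # re-stack on top

hanoi(3, "A", "C", "B")  # prints the 7 moves for three disks
</syntaxhighlight>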
=== Algorithm efficiency ===

The divide-and-conquer paradigm often helps in the discovery of efficient algorithms. It was the key, for example, to Karatsuba's fast multiplication method, the quicksort and mergesort algorithms, the [[Strassen algorithm]] for matrix multiplication, and fast Fourier transforms.
In all these examples, the D&C approach led to an improvement in the [[asymptotic complexity|asymptotic cost]] of the solution. For example, if the [[Recursion (computer science)|base cases]] have constant-bounded size, the work of splitting the problem and combining the partial solutions is proportional to the problem's size ''n'', and there is a bounded number ''p'' of subproblems of size approximately ''n''/''p'' at each stage, then the cost of the divide-and-conquer algorithm will be O(''n'' log ''n'').
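In symbols, such an algorithm satisfies the recurrence
:<math>T(n) = p\,T(n/p) + O(n);</math>
the recursion has <math>O(\log n)</math> levels and the total work per level is <math>O(n)</math>, which gives <math>T(n) = O(n \log n)</math> (a special case of the [[master theorem]]).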
=== Parallelism ===

Divide and conquer algorithms are naturally adapted for execution on multi-processor machines, especially shared-memory systems where the communication of data between processors does not need to be planned in advance, because distinct sub-problems can be executed on different processors.
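A minimal illustration of this independence, summing the two halves of a list in separate processes with Python's standard library (the two-way split and the function names are illustrative choices, not a tuned parallel algorithm):

<syntaxhighlight lang="python">
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

def parallel_sum(data):
    # The two sub-problems share no state, so each can run on its own
    # processor; coordination is needed only at the combine step.
    mid = len(data) // 2
    with ProcessPoolExecutor(max_workers=2) as pool:
        left, right = pool.map(partial_sum, [data[:mid], data[mid:]])
    return left + right  # combine

if __name__ == "__main__":
    print(parallel_sum(list(range(1, 101))))  # 5050
</syntaxhighlight>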
=== Memory access ===

Divide-and-conquer algorithms naturally tend to make efficient use of [[memory cache]]s. The reason is that once a sub-problem is small enough, it and all its sub-problems can, in principle, be solved within the cache, without accessing the slower main memory. An algorithm designed to exploit the cache in this way is called ''[[cache-oblivious algorithm|cache-oblivious]]'', because it does not contain the cache size(s) as an explicit parameter.<ref name="cahob">{{cite journal | author = M. Frigo | coauthors = C. E. Leiserson, H. Prokop | title = Cache-oblivious algorithms | journal = Proc. 40th Symp. on the Foundations of Computer Science | year = 1999}}</ref> Moreover, D&C algorithms can be designed for important problems (e.g., sorting, FFTs, and matrix multiplication) to be ''optimal'' cache-oblivious algorithms – they use the cache in a provably optimal way, in an asymptotic sense, regardless of the cache size. In contrast, the traditional approach to exploiting the cache is ''blocking'', as in [[loop nest optimization]], where the problem is explicitly divided into chunks of the appropriate size – this can also use the cache optimally, but only when the algorithm is tuned for the specific cache size(s) of a particular machine.
The same advantage exists with regard to other hierarchical storage systems, such as [[Non-Uniform Memory Access|NUMA]] or [[virtual memory]], as well as for multiple levels of cache: once a sub-problem is small enough, it can be solved within a given level of the hierarchy, without accessing the higher (slower) levels.
=== Roundoff control ===

In computations with rounded arithmetic, e.g. with [[floating point]] numbers, a divide-and-conquer algorithm may yield more accurate results than a superficially equivalent iterative method. For example, one can add ''N'' numbers either by a simple loop that adds each datum to a single variable, or by a D&C algorithm called [[pairwise summation]] that breaks the data set into two halves, recursively computes the sum of each half, and then adds the two sums. While the second method performs the same number of additions as the first, and pays the overhead of the recursive calls, it is usually more accurate.<ref>Nicholas J. Higham, "The accuracy of floating point summation", ''SIAM J. Scientific Computing'' '''14''' (4), 783–799 (1993).</ref>
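A minimal pairwise-summation sketch (Python; real implementations switch to a plain loop below some block size rather than recursing all the way down to pairs):

<syntaxhighlight lang="python">
def pairwise_sum(x):
    # Divide and conquer: the worst-case rounding error grows like
    # O(log n), versus O(n) for a naive left-to-right loop.
    if len(x) <= 2:
        return sum(x)  # tiny base case, summed directly
    mid = len(x) // 2
    return pairwise_sum(x[:mid]) + pairwise_sum(x[mid:])

print(pairwise_sum([0.1] * 8))  # 0.8 up to rounding
</syntaxhighlight>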
== Implementation issues ==

=== Recursion ===

Divide-and-conquer algorithms are naturally implemented as [[subroutine|recursive procedures]] – procedures defined in terms of themselves. In that case, the partial sub-problems leading to the one currently being solved are automatically stored in the [[call stack|procedure call stack]].
=== Explicit stack ===

Divide and conquer algorithms can also be implemented by a non-recursive program that stores the partial sub-problems in some explicit data structure, such as a [[stack (data structure)|stack]], [[queue (data structure)|queue]], or [[priority queue]]. This approach allows more freedom in the choice of the sub-problem to be solved next, a feature that is important in some applications – e.g., in [[breadth-first search|breadth-first recursion]] and the [[branch and bound]] method for function optimization. This approach is also the standard solution in programming languages that do not provide support for recursive procedures.
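For instance, quicksort can be written without recursion by keeping the pending sub-ranges on an explicit stack (Python sketch using Lomuto partitioning; using a queue instead would process the sub-problems breadth-first):

<syntaxhighlight lang="python">
def quicksort_iterative(a):
    stack = [(0, len(a) - 1)]        # explicit stack of pending sub-ranges
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue                 # ranges of size <= 1 are already sorted
        pivot, i = a[hi], lo         # Lomuto partition around a[hi]
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]    # pivot moves to its final position
        stack.append((lo, i - 1))    # the two sub-problems replace
        stack.append((i + 1, hi))    # the two recursive calls

data = [5, 2, 9, 1, 5, 6]
quicksort_iterative(data)
print(data)  # [1, 2, 5, 5, 6, 9]
</syntaxhighlight>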
=== Stack size ===

In recursive implementations of D&C algorithms, one must make sure that there is sufficient memory allocated for the recursion stack, otherwise the execution may fail because of [[stack overflow]]. Fortunately, D&C algorithms that are time-efficient often have relatively small recursion depth. For example, the quicksort algorithm can be implemented so that it never requires more than <math>\log_2 n</math> nested recursive calls to sort <math>n</math> items.
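One way to guarantee that bound is to recurse only into the smaller partition and handle the larger one by iteration, so that each nested call works on at most half of its parent's range (Python sketch):

<syntaxhighlight lang="python">
def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        pivot, i = a[hi], lo         # Lomuto partition around a[hi]
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        # Recurse into the smaller side only, and loop on the larger side:
        # the nesting depth is therefore at most log2(n).
        if i - lo < hi - i:
            quicksort(a, lo, i - 1)
            lo = i + 1
        else:
            quicksort(a, i + 1, hi)
            hi = i - 1

data = [3, 7, 1, 9, 4, 2, 8]
quicksort(data)
print(data)  # [1, 2, 3, 4, 7, 8, 9]
</syntaxhighlight>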
Stack overflow may be difficult to avoid when using recursive procedures, since many compilers assume that the recursion stack is a contiguous area of memory, and some allocate a fixed amount of space for it. Compilers may also save more information on the recursion stack than is strictly necessary, such as the return address, unchanging parameters, and the internal variables of the procedure. Thus, the risk of stack overflow can be reduced by minimizing the parameters and internal variables of the recursive procedure, or by using an explicit stack structure.
=== Choosing the base cases ===

In any recursive algorithm, there is considerable freedom in the choice of the ''base cases'', the small subproblems that are solved directly in order to terminate the recursion.
Choosing the smallest or simplest possible base cases is more elegant and usually leads to simpler programs, because there are fewer cases to consider and they are easier to solve. For example, an FFT algorithm could stop the recursion when the input is a single sample, and the quicksort list-sorting algorithm could stop when the input is the empty list; in both examples there is only one base case to consider, and it requires no processing.
On the other hand, efficiency often improves if the recursion is stopped at relatively large base cases that are solved non-recursively, resulting in a [[hybrid algorithm]]. This strategy avoids the overhead of recursive calls that do little or no work, and may also allow the use of specialized non-recursive algorithms that, for those base cases, are more efficient than explicit recursion. A general technique for a simple hybrid recursive algorithm is ''short-circuiting the base case'', also known as ''[[arm's-length recursion]]'': the test for whether the next step would reach a base case is performed before the recursive call, so an unnecessary call is never made. For example, in an algorithm on a binary tree, rather than recursing to a child node and then checking whether it is null, one checks for null before recursing; this avoids half the function calls in some algorithms. Since a D&C algorithm eventually reduces each problem or sub-problem instance to a large number of base instances, these often dominate the overall cost of the algorithm, especially when the splitting/joining overhead is low. Note that these considerations do not depend on whether recursion is implemented by the compiler or by an explicit stack.
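A sketch of short-circuiting the base case on a binary tree (Python; the <code>Node</code> class and function names are illustrative). The trade-off is that the caller must guarantee that the root itself is not null:

<syntaxhighlight lang="python">
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def tree_sum(node):
    # Arm's-length recursion: test each child for None *before* the
    # call, so no call is ever made only to hit the base case.
    total = node.value
    if node.left is not None:
        total += tree_sum(node.left)
    if node.right is not None:
        total += tree_sum(node.right)
    return total

tree = Node(1, Node(2, Node(4)), Node(3))
print(tree_sum(tree))  # 10
</syntaxhighlight>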
Thus, for example, many library implementations of quicksort will switch to a simple loop-based [[insertion sort]] (or similar) algorithm once the number of items to be sorted is sufficiently small. Note that if the empty list were the only base case, sorting a list with ''n'' entries would entail ''n'' + 1 quicksort calls that do nothing but return immediately. Increasing the base cases to lists of size 2 or less eliminates most of those do-nothing calls, and more generally a base case larger than 2 is typically used to reduce the fraction of time spent on function-call overhead or stack manipulation.
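A hybrid quicksort in this style might look as follows (Python; the cutoff of 16 is an illustrative value, which real libraries tune empirically):

<syntaxhighlight lang="python">
CUTOFF = 16  # illustrative threshold for switching algorithms

def insertion_sort(a, lo, hi):
    for k in range(lo + 1, hi + 1):
        key, j = a[k], k - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= CUTOFF:
        insertion_sort(a, lo, hi)    # large base case: no further recursion
        return
    pivot, i = a[hi], lo             # Lomuto partition around a[hi]
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    hybrid_quicksort(a, lo, i - 1)
    hybrid_quicksort(a, i + 1, hi)

data = list(range(30, 0, -1))
hybrid_quicksort(data)
print(data == sorted(data))  # True
</syntaxhighlight>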
Alternatively, one can employ large base cases that still use a divide-and-conquer algorithm, but implement the algorithm for a predetermined set of fixed sizes where it can be completely [[loop unwinding|unrolled]] into code that has no recursion, loops, or [[Conditional (programming)|conditionals]] (related to the technique of [[partial evaluation]]). For example, this approach is used in some efficient FFT implementations, where the base cases are unrolled implementations of divide-and-conquer FFT algorithms for a set of fixed sizes.<ref name="fftw">{{cite journal | author = Frigo, M. | coauthors = Johnson, S. G. | url = http://www.fftw.org/fftw-paper-ieee.pdf | title = The design and implementation of FFTW3 | journal = Proceedings of the IEEE | volume = 93 | issue = 2 |date=February 2005 | pages = 216–231 | doi = 10.1109/JPROC.2004.840301}}</ref> [[Source code generation]] methods may be used to produce the large number of separate base cases desirable to implement this strategy efficiently.<ref name="fftw"/>
The generalized version of this idea is known as recursion "unrolling" or "coarsening", and various techniques have been proposed for automating the procedure of enlarging the base case.<ref>Radu Rugina and Martin Rinard, "[http://people.csail.mit.edu/rinard/paper/lcpc00.pdf Recursion unrolling for divide and conquer programs]," in ''Languages and Compilers for Parallel Computing'', chapter 3, pp. 34–48. ''Lecture Notes in Computer Science'' vol. 2017 (Berlin: Springer, 2001).</ref>
=== Sharing repeated subproblems ===

For some problems, the branched recursion may end up evaluating the same sub-problem many times over. In such cases it may be worth identifying and saving the solutions to these overlapping subproblems, a technique commonly known as [[memoization]]. Followed to the limit, it leads to [[bottom-up design|bottom-up]] divide-and-conquer algorithms such as [[dynamic programming]] and [[chart parsing]].
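A minimal sketch using Python's built-in memoization decorator on the naive doubly recursive Fibonacci function, which would otherwise re-solve the same subproblems exponentially many times:

<syntaxhighlight lang="python">
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each distinct subproblem is solved once and then reused, reducing
    # the exponential call tree to O(n) distinct calls.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075
</syntaxhighlight>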
=== Outline ===

# An instance of the problem to be solved is divided into a number of smaller instances of the same problem, generally of roughly equal size. Any sub-instance may be further divided into its own sub-instances. A stage is reached when either a direct solution of a sub-instance is available or it cannot be divided further; in the latter case, a direct solution for the sub-instance is attempted.
# The smaller instances are solved.
# The solutions of the smaller instances are combined to obtain the solution of the original instance (see the generic sketch after this list).
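The outline corresponds to the following generic skeleton (Python; the problem-specific operations are passed in as functions, and the list-maximum example is an illustrative use):

<syntaxhighlight lang="python">
def divide_and_conquer(instance, is_base, solve_base, divide, combine):
    # Stage 1: stop at directly solvable (base) sub-instances.
    if is_base(instance):
        return solve_base(instance)
    # Stages 1-2: divide into smaller instances and solve each recursively.
    solutions = [divide_and_conquer(sub, is_base, solve_base, divide, combine)
                 for sub in divide(instance)]
    # Stage 3: combine the sub-solutions.
    return combine(solutions)

# Example: maximum of a list by two-way splitting.
print(divide_and_conquer(
    [3, 1, 4, 1, 5, 9, 2, 6],
    is_base=lambda xs: len(xs) == 1,
    solve_base=lambda xs: xs[0],
    divide=lambda xs: (xs[:len(xs) // 2], xs[len(xs) // 2:]),
    combine=max,
))  # prints 9
</syntaxhighlight>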
== See also ==

{{Portal|Computer Science}}
* [[Akra–Bazzi method]]
* [[Master theorem]]
* [[Mathematical induction]]
* [[MapReduce]]
== References ==

<references/>
{{DEFAULTSORT:Divide And Conquer Algorithm}}
[[Category:Algorithms]]
[[Category:Operations research]]
[[Category:Optimization algorithms and methods]]