In computer science, divide and conquer is a very important algorithm design strategy. As the name says, the idea is to "divide and conquer": split a complex problem into two or more identical or similar sub-problems, then split those sub-problems into still smaller sub-problems, until the final sub-problems can be solved simply and directly; the solution of the original problem is then the combination of the solutions of the sub-problems. The computation time required for any problem a computer can solve is related to its scale: the smaller the problem, the easier it is to solve directly and the less computation it takes. For example, for sorting n elements, when n = 1 no computation is needed; when n = 2 a single comparison suffices; when n = 3 at most three comparisons are required. When n is large, however, the problem is not so easy to handle, and directly solving a large-scale problem can be quite difficult.
For a problem of scale n, if it can be solved easily (for example, when n is small), solve it directly; otherwise decompose it into k smaller sub-problems that are independent of one another and have the same form as the original problem, solve these sub-problems recursively, and then merge their solutions to obtain the solution of the original problem. This algorithm design strategy is called divide and conquer. If the original problem can be divided into k sub-problems, 1 < k ≤ n, these sub-problems are all solvable, and their solutions can be used to construct the solution of the original problem, then the divide-and-conquer method is feasible. The sub-problems produced by divide and conquer are often smaller instances of the original problem, which makes recursion a natural technique: repeatedly applying divide and conquer keeps the sub-problems of the same type as the original problem while their scale keeps shrinking, until they become small enough to solve directly. This naturally leads to a recursive process. Divide and conquer and recursion are like twin brothers, often used together in algorithm design, and many efficient algorithms come from them.
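As a minimal, self-contained sketch of this strategy (the problem and function name here are chosen purely for illustration and are not from the original text), the following toy routine finds the maximum of a non-empty list by splitting it in half, recursing on each half, and merging the two partial answers:

def max_divide_conquer(L):
    # Small enough to solve directly: a one-element list is its own maximum.
    if len(L) == 1:
        return L[0]
    # Divide: split into two sub-problems of the same form.
    mid = len(L) // 2
    left = max_divide_conquer(L[:mid])
    right = max_divide_conquer(L[mid:])
    # Combine: build the solution of the original problem from the sub-solutions.
    return left if left > right else right

print(max_divide_conquer([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 9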
The problems that the divide and conquer method can solve generally have the following characteristics: 1) the problem can be solved easily once its scale is reduced to a certain degree; 2) the problem can be decomposed into several smaller sub-problems of the same form, that is, it has the optimal substructure property; 3) the solutions of the sub-problems can be merged into the solution of the original problem; 4) the sub-problems are independent of each other, that is, they do not share common sub-sub-problems.
The first feature is satisfied by most problems, because the computational complexity of a problem generally grows with its size. The second feature is the prerequisite for applying the divide and conquer method; it too is satisfied by most problems and reflects the use of recursive thinking. The third feature is the key: whether divide and conquer can be used depends entirely on whether the problem has this feature; if a problem has the first and second features but not the third, a greedy or dynamic programming approach can be considered instead. The fourth feature concerns the efficiency of divide and conquer: if the sub-problems are not independent, divide and conquer does a lot of unnecessary work solving common sub-problems repeatedly; in that case divide and conquer can still be used, but dynamic programming is generally better.
At each level of the recursion, the divide and conquer method has three steps: 1) divide: decompose the original problem into several smaller, independent sub-problems with the same form as the original problem; 2) conquer: if a sub-problem is small enough to be solved easily, solve it directly, otherwise solve it recursively; 3) combine: merge the solutions of the sub-problems into the solution of the original problem.
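Merge sort, one of the classic examples listed later, maps directly onto these three steps; the sketch below (my own illustrative code, not from the original text) labels each of them:

def merge_sort(L):
    # A list of length 0 or 1 is already sorted: solve directly.
    if len(L) <= 1:
        return L
    # Divide: split the list into two halves of the same form.
    mid = len(L) // 2
    # Conquer: sort each half recursively.
    left = merge_sort(L[:mid])
    right = merge_sort(L[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # prints [1, 2, 5, 7, 9]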
(1) Palindrome check. A palindrome here is a string that reads the same from beginning to end as from end to beginning. For example, doggod reads the same from left to right as from right to left.
def isPal(s):
    # A string of length 0 or 1 is a palindrome; otherwise the first and last
    # characters must match and the middle must itself be a palindrome.
    if len(s) <= 1:
        return True
    else:
        return s[0] == s[-1] and isPal(s[1:-1])

s = 'doggod'
result = isPal(s)
print(result)  # True
The idea of binary search is relatively simple: 1) choose an index i that divides the sorted collection into two sub-collections; 2) check whether the element L(i) equals the value des being searched for, and if so return immediately; 3) otherwise, compare L(i) with des; 4) based on the comparison, decide whether to continue searching in the left or the right sub-collection; 5) repeat the above steps recursively.
def binarySearch(L, e, low, high):
    # Search for e in the sorted sequence L between indices low and high.
    if high == low:
        return L[low] == e
    mid = (low + high) // 2
    if L[mid] == e:
        return True
    elif L[mid] > e:
        if low == mid:  # nothing left to search on the left side
            return False
        else:
            return binarySearch(L, e, low, mid - 1)
    else:
        return binarySearch(L, e, mid + 1, high)

def search(L, e):
    result = binarySearch(L, e, 0, len(L) - 1)
    print(result)

L = list(range(10))
e = 7
search(L, e)  # True
(2) Large integer multiplication. Each large integer is split, from its high-order digits to its low-order digits, into two parts. Suppose the high part of integer 1 is A and its low part is B, and the high part of integer 2 is C and its low part is D; then:
integer 1 = A × 10^(number of digits in B) + B,  integer 2 = C × 10^(number of digits in D) + D
If the length (number of digits) of each large integer is taken to be n, then:

integer 1 = A × 10^(n/2) + B,  integer 2 = C × 10^(n/2) + D
Therefore, the product of integer 1 and integer 2 can be written in the following form:
integer 1 × integer 2 = (A × 10^(n/2) + B) × (C × 10^(n/2) + D) = AC × 10^n + (AD + BC) × 10^(n/2) + BD
In this way, the original **1** multiplication of two large integers of length **n** is transformed into **4** multiplications (AC, AD, BC, BD) of large integers of length **n/2**.
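A sketch of this 4-product scheme in Python follows (illustrative only: Python's built-in * is already arbitrary precision, and the digit splitting with divmod and the base-case threshold are my own choices, not from the original text):

def multiply(x, y):
    # Multiply two non-negative integers using the 4-product decomposition.
    if x < 10 or y < 10:
        # Small enough: multiply directly.
        return x * y
    n = max(len(str(x)), len(str(y)))   # digit length of the larger operand
    half = n // 2
    A, B = divmod(x, 10 ** half)        # x = A * 10^half + B
    C, D = divmod(y, 10 ** half)        # y = C * 10^half + D
    AC = multiply(A, C)
    AD = multiply(A, D)
    BC = multiply(B, C)
    BD = multiply(B, D)
    # x * y = AC * 10^(2*half) + (AD + BC) * 10^half + BD
    return AC * 10 ** (2 * half) + (AD + BC) * 10 ** half + BD

print(multiply(123456, 654321))  # prints 80779853376, same as 123456 * 654321

Each call makes 4 recursive products on operands of roughly half the length, plus additions and shifts that take time linear in the number of digits, so the running time satisfies the recurrence T(n) = 4T(n/2) + O(n).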
Other classic problems solved with divide and conquer include: (3) Strassen matrix multiplication; (4) checkerboard coverage; (5) merge sort; (6) quick sort; (7) linear-time selection; (8) the closest pair of points problem; (9) round-robin tournament scheduling; (10) the Tower of Hanoi.
The master theorem provides asymptotic time complexity bounds for many recurrence relations that arise from the divide and conquer method. Let the constants satisfy a ≥ 1 and b > 1. If the running time of an algorithm satisfies the recurrence T(n) = a T(n/b) + f(n), then the following rules hold:
1) If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2) If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3) If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and a · f(n/b) ≤ c · f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
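For example, merge sort satisfies T(n) = 2T(n/2) + Θ(n); here a = 2, b = 2, n^(log_b a) = n, and f(n) = Θ(n) falls under the second rule, so T(n) = Θ(n log n). Binary search satisfies T(n) = T(n/2) + Θ(1); here n^(log_b a) = n^0 = 1, so the second rule gives T(n) = Θ(log n). The 4-product large integer multiplication above satisfies T(n) = 4T(n/2) + Θ(n); here n^(log_b a) = n^2 dominates f(n) = Θ(n), so the first rule gives T(n) = Θ(n^2).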