
Competitive Programming in Python: 128 Algorithms to Develop your Coding Skills

Published by Willington Island, 2021-07-22 07:33:45

Description: Want to kill it at your job interview in the tech industry? Want to win that coding competition? Learn all the algorithmic techniques and programming skills you need from two experienced coaches, problem setters, and jurors for coding competitions. The authors highlight the versatility of each algorithm by considering a variety of problems and show how to implement algorithms in simple and efficient code. Readers can expect to master 128 algorithms in Python and discover the right way to tackle a problem and quickly implement a solution of low complexity. Classic problems like Dijkstra's shortest path algorithm and Knuth-Morris-Pratt's string matching algorithm are featured alongside lesser known data structures like Fenwick trees and Knuth's dancing links.


168 Matchings and Flows

An antichain is a set S of vertices such that no pair u, v ∈ S with (u, v) ∈ A exists. The width of the partial order is defined as the size of the largest antichain. The problem is to compute this width.

An important relation between chains and antichains is given by Dilworth's theorem (1950): the size of the largest antichain equals the size of the smallest chain decomposition.

Showing that the size of the largest antichain is at most the size of the smallest chain decomposition is straightforward. Assume that there is an antichain larger than a smallest decomposition into chains. Then, by the pigeonhole principle, at least two elements of the antichain must lie in the same chain, which is a contradiction.

The other direction can be proved either by induction on the size of the graph, or by a reduction to Kőnig's theorem (see Section 9.1 on page 142): we construct a bipartite graph H(V, V′, E) such that V′ is a copy of V and, for each arc (u, v) ∈ A in the original graph, there is an edge (u, v′) ∈ E, where v′ is the copy of v. Let M be a maximum matching in the new graph H. By Kőnig's theorem, a set S of vertices exists that touches every edge of H, with |M| = |S|. Every matching in H corresponds to a decomposition into chains: following the edges of the matching, we can recover chains in G. Every unmatched node in V′ corresponds to an endpoint of a chain. Hence, the size of the decomposition is the number of endpoints |V| − |M|, which is minimum because M is maximum.

We now want to build a largest antichain from S. If we select the nodes v such that neither v nor its copy v′ is in S, then we have at least |V| − |S| elements. If there were an arc (u, v) between any pair of these elements, then the edge (u, v′) of H would force either u or v′ to be in S, which is a contradiction. Thus the selected elements form an antichain, and we have proved Dilworth's theorem.

Algorithm in O(|V| · |E|)

This problem can be reduced to a maximum matching problem, see Figure 9.12.
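Before turning to the reduction, Dilworth's equality itself can be sanity-checked by brute force on a tiny example. This check is not from the book; the diamond-shaped order and all names below are illustrative:

```python
from itertools import combinations, product

# Brute-force check of Dilworth's theorem on a diamond-shaped order
# over 4 elements, with arcs 0->1, 0->2, 1->3, 2->3.
n = 4
arcs = {(0, 1), (0, 2), (1, 3), (2, 3)}

# transitive closure: less[u][v] iff u strictly precedes v
less = [[(u, v) in arcs for v in range(n)] for u in range(n)]
for k in range(n):
    for u in range(n):
        for v in range(n):
            less[u][v] = less[u][v] or (less[u][k] and less[k][v])

def incomparable(u, v):
    return not less[u][v] and not less[v][u]

# largest antichain: a pairwise incomparable set of maximum size
width = max(len(s) for r in range(n + 1) for s in combinations(range(n), r)
            if all(incomparable(u, v) for u, v in combinations(s, 2)))

def is_chain(s):
    return all(not incomparable(u, v) for u, v in combinations(s, 2))

# smallest chain decomposition: try every assignment of vertices to chains
best = min(len(set(assign)) for assign in product(range(n), repeat=n)
           if all(is_chain([v for v in range(n) if assign[v] == c])
                  for c in set(assign)))

assert width == best == 2   # both sides of Dilworth's equality agree
```

Here the largest antichain is {1, 2}, and a smallest decomposition consists of the two chains 0 → 1 → 3 and 2.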
• Construct a bipartite graph H(V, V′, E), where V′ is a copy of V and (u, v′) ∈ E if and only if (u, v) ∈ A.
• Compute a maximum bipartite matching M in H.
• Count the number of unmatched vertices in V′. This is the size of the largest antichain in G, hence the width of the partial order G.
• Consider the set of arcs C in G such that (u, v) ∈ C if and only if (u, v′) ∈ M. Then C is a decomposition of V into a minimal number of chains.

Application: Reservations of Taxis [kattis:taxicab]

Imagine the following situation. A taxi company has gathered the reservations for tomorrow's trips. During the night, it must assign the taxis of its fleet to the reservations and minimise the number of taxis employed. Each reservation starts from

9.12 Width of a Partial Order—Dilworth

Figure 9.12 From left to right: a partial order G; the associated bipartite graph H, with a covering of the vertices S marked in grey; a maximum matching in H; the decomposition into a minimum number of chains in G, with an antichain marked in grey.

a precise location in the city at a certain time, in order to reach a given destination. We suppose that all the source–destination travel times are known. We can thus decide whether the same taxi can honour reservation j after having completed reservation i. In this case, we write i ≺ j, and this relation is a partial order on the day's reservations. The solution to the problem is simply the width of this partial order.

Generalisation

If the graph is weighted, we can look for a minimal decomposition into chains which also minimises the total weight of the arcs. This problem can be solved in time O(|V|³) by reduction to the problem of minimum-cost perfect matching in a bipartite graph.

Implementation Details

Our implementation returns an array part encoding an optimal partition, where part[u] is the index of the chain containing the vertex u, the chains being numbered from 0. The input is a square matrix M indicating, for each pair of vertices u, v, the cost of the arc (u, v), or None if there is no such arc. The implementation supposes that the vertices are given in topological order; otherwise a topological sort must be done first, see Section 6.7. Hence, if the arc (u, v) exists, then u < v.
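The comparability test behind the relation i ≺ j can be sketched directly. The reservation format and the names can_chain and travel below are illustrative assumptions, not the book's interface:

```python
def can_chain(res_i, res_j, travel):
    """True if one taxi can honour reservation res_j after res_i.
    Assumed format: a reservation is (start_time, source, destination),
    and travel[u][v] is the known driving time between locations u, v."""
    start_i, src_i, dst_i = res_i
    start_j, src_j, _dst_j = res_j
    finish_i = start_i + travel[src_i][dst_i]          # drop-off time of i
    return finish_i + travel[dst_i][src_j] <= start_j  # reach j's pickup in time

# two locations, 10 minutes apart
travel = [[0, 10], [10, 0]]
early = (0, 0, 1)    # picks up at location 0 at t=0, drops off at t=10
late = (30, 1, 0)    # picks up at location 1 at t=30: reachable after `early`
tight = (15, 0, 1)   # picks up at location 0 at t=15: too soon after `early`
```

Adding an arc (i, j) whenever can_chain(i, j, travel) holds yields the partial order whose width, as argued above, is the minimum number of taxis.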

def dilworth(graph):
    n = len(graph)
    match = max_bipartite_matching(graph)   # maximum matching
    part = [None] * n                       # partition into chains
    nb_chains = 0
    for v in range(n - 1, -1, -1):          # in inverse topological order
        if part[v] is None:                 # start of chain
            u = v
            while u is not None:            # follow the chain
                part[u] = nb_chains         # mark
                u = match[u]
            nb_chains += 1
    return part

Problems

Stock Charts [gcj:2009round2C]
Taxi Cab Scheme [icpcarchive:3126] [poj:2060]
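The helper max_bipartite_matching is not shown in this excerpt. A minimal augmenting-path version, together with an end-to-end check on a diamond-shaped DAG, could look as follows; this sketch takes adjacency lists and returns, for each right-side vertex v′, its matched left vertex or None (the book's own implementation may differ):

```python
def max_bipartite_matching(bigraph):
    """For each right-side vertex v', the matched left vertex, or None.
    Classic augmenting-path algorithm in O(|V| * |E|)."""
    matchv = [None] * len(bigraph)

    def augment(u, visited):
        for v in bigraph[u]:
            if v not in visited:
                visited.add(v)
                if matchv[v] is None or augment(matchv[v], visited):
                    matchv[v] = u
                    return True
        return False

    for u in range(len(bigraph)):
        augment(u, set())
    return matchv

def dilworth(graph):
    """Chain decomposition as in the listing above, repeated so the
    sketch runs standalone."""
    n = len(graph)
    match = max_bipartite_matching(graph)
    part = [None] * n
    nb_chains = 0
    for v in range(n - 1, -1, -1):
        if part[v] is None:
            u = v
            while u is not None:
                part[u] = nb_chains
                u = match[u]
            nb_chains += 1
    return part

# diamond DAG 0 -> {1, 2} -> 3, vertices already in topological order
part = dilworth([[1, 2], [3], [3], []])
```

On this graph the two incomparable vertices 1 and 2 land in different chains, and the number of chains equals the width, 2.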

10 Trees

Rooted trees are combinatorial structures which appear naturally when considering objects with a structure of recursive decomposition. This is the case, for example, for classifications, hierarchies or genealogies. The recursive nature of trees is an invitation to recursive algorithms, and the key to an algorithm is to find the appropriate method of exploration. In this section, we review certain classic problems concerning trees.

Formally, a tree is a connected acyclic graph. One of its vertices can be designated as the root, and in this case we generally speak of a rooted tree. This root provides the tree with an orientation from parent to child. Starting from a vertex and climbing up the links, we arrive at the root. The vertices without children are called leaf nodes.

A tree with n vertices contains exactly n − 1 edges. For proof, it suffices to observe that if we repeatedly remove a leaf from the tree, along with its incident edge, we end up with an isolated vertex, i.e. a tree with 1 vertex and 0 edges.

The vertices of a rooted tree are partitioned into levels. The root is the unique vertex at level 0, its child nodes are at level 1, and so on. The largest non-empty level number defines the depth of the tree, which is also the maximum distance from the root to a leaf. The subtree rooted at vertex v consists of all vertices and edges reachable from the root of the original tree only through vertex v. A disjoint union of trees is called a forest, as the reader might have guessed.

There are numerous dynamic data structures based on trees, such as red-black binary search trees or interval trees. These structures invoke rebalancing operations on the trees in order to guarantee logarithmic-time insert, delete and query operations. However, in programming competitions, the inputs are given only once, and thus it is often possible to skip the insert/delete operations by directly constructing balanced structures.
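The levels and the depth defined above can be computed with a breadth-first traversal of the adjacency-list representation. A minimal sketch; the name tree_levels and the sample tree are illustrative, not from the book:

```python
from collections import deque

def tree_levels(graph, root=0):
    """Level (distance from the root) of every vertex of a tree given
    as an adjacency list; the depth of the tree is the maximum level."""
    level = [None] * len(graph)
    level[root] = 0
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if level[v] is None:           # not yet visited
                level[v] = level[u] + 1
                queue.append(v)
    return level

# tree on 4 vertices with edges 0-1, 0-2, 1-3, rooted at 0
graph = [[1, 2], [0, 3], [0], [1]]
level = tree_levels(graph)
```

Note that the sample tree has n − 1 = 3 edges, in line with the proof above.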
A tree can be represented essentially by one of two methods. The first is the classic representation of a graph by an adjacency list, which does not distinguish any particular vertex as the root of the tree. The other representation, commonly used, is an array of predecessors: for a rooted tree (often with vertex 0 as the root), each vertex other than the root has a unique predecessor, encoded in the array. Depending on the context, one or the other representation will be better suited, and translating between them can easily be done in linear time.

def tree_prec_to_adj(prec, root=0):
    n = len(prec)
    graph = [[prec[u]] for u in range(n)]   # add predecessors
    graph[root] = []
    for u in range(n):                      # add successors
        if u != root:
            graph[prec[u]].append(u)
    return graph

def tree_adj_to_prec(graph, root=0):
    prec = [None] * len(graph)
    prec[root] = root                # mark to visit the root only once
    to_visit = [root]
    while to_visit:                  # DFS
        node = to_visit.pop()
        for neighbor in graph[node]:
            if prec[neighbor] is None:
                prec[neighbor] = node
                to_visit.append(neighbor)
    prec[root] = None                # put back the standard mark for the root
    return prec

10.1 Huffman Coding

Definition

Let Σ be a finite alphabet. We denote by Σ* the set of all words on Σ. Let S ⊆ Σ* be a finite set of words. A binary code for an alphabet Σ of n characters is a function c : Σ → {0,1}* such that no word c(a) of the code is a prefix of another word c(b), for a, b ∈ Σ. This code, applied letter by letter, transforms a word from S into a word in {0,1}*, and the prefix property allows an unambiguous decoding of the original. In general, we would like the encoding to be as short as possible, hence the more frequent letters should be encoded by shorter words. Formally, given a frequency function f : Σ → R+, we seek a code minimising the cost

    ∑_{a ∈ Σ} f(a) · |c(a)|.

Algorithm in O(n log n)

A Huffman code can be seen as a binary tree, the leaves of which are the letters of the alphabet, and each node of which is labelled with the sum of the frequencies of the letters at the leaves of the subtree rooted at this node. Every inner node is connected to exactly two child nodes, with edges labelled, respectively, by 0 and 1. The concatenation of the labels along the path from the root to a leaf constitutes the code word c of the corresponding letter. As such, the cost of c can be expressed as the sum, over a ∈ Σ, of f(a) multiplied by the depth of the leaf a in the tree. To construct the tree,
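The paragraph above breaks off at the construction; the classic completion repeatedly merges the two subtrees of smallest total frequency using a priority queue. A self-contained sketch (the frequency-dict input and the nested-tuple tree are representation assumptions, not the book's interface):

```python
import heapq

def huffman(freq):
    """Prefix code {letter: bit string} for a frequency dict with at
    least two letters. Classic O(n log n) construction: repeatedly
    merge the two subtrees of smallest total frequency."""
    # entries are (frequency, tie-breaker, subtree); the unique
    # tie-breaker prevents Python from ever comparing two subtrees
    heap = [(f, i, a) for i, (a, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):        # inner node: edges 0 and 1
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                              # leaf: a letter
            code[tree] = prefix
    walk(heap[0][2], "")
    return code

code = huffman({"a": 5, "b": 2, "c": 1, "d": 1})
```

The most frequent letter gets the shortest code word: here "a" receives a 1-bit code while "c" and "d" receive 3-bit codes, and no code word is a prefix of another.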