## CS 574, Spring 2021

### Notes on HW1

• Problem 1: For (c), the best value for the base is actually near 0.5448 (by choosing alpha_3 = 6^{1/3}e/3, and alpha_2 = (1/2)alpha_3, and alpha_1 = (1/3)alpha_3).

The main result in (d), with the base approaching 1/e, was from Alt, Guibas, Mehlhorn, Karp, and Wigderson (1996) (Theorem 3), who also obtained other results in related settings.

• Problem 2: Someone noticed a different, simple proof without randomization: just pick an independent set of size Omega(n/D) (for example, greedily), fix an ordering of the remaining vertices, then place the vertices of the independent set at positions that make them good.
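For concreteness, here is a minimal sketch of that derandomized argument in Python (the function names are mine, and I am assuming the reading of the problem in which a vertex is "good" when it appears before all of its neighbors in the ordering; under that reading, putting the independent set first suffices):

```python
# Sketch: a greedy independent set in a graph of maximum degree D has
# size >= n/(D+1); placing those vertices first in the ordering makes
# each of them "good" (assumed here to mean: appearing before all of
# its neighbors), since all their neighbors lie outside the set.

def greedy_independent_set(n, adj):
    """adj[v] = set of neighbors of v. Each chosen vertex eliminates
    itself plus at most D neighbors, so the set has size >= n/(D+1)."""
    alive = set(range(n))
    indep = []
    while alive:
        v = min(alive)          # pick any remaining vertex
        indep.append(v)
        alive.discard(v)
        alive -= adj[v]         # its neighbors can no longer be chosen
    return indep

def good_ordering(n, adj):
    indep = greedy_independent_set(n, adj)
    rest = [v for v in range(n) if v not in set(indep)]
    return indep + rest         # independent-set vertices come first

# Example: the 6-cycle (D = 2), so the set has size >= 2.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
order = good_ordering(6, adj)
```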

• Problem 3: This technique is quite general (with many applications in computational geometry), and was taken from a paper of mine (1999).

(For specifically the min-area triangle problem, Prof. X is probably wrong, since the problem is "3SUM-hard" and is conjectured not to be solvable in n^{2-delta} time for any positive constant delta.)

• Problem 4: Omega(n^2) is actually known (and was mentioned in Yao's original paper on his minimax principle, though a proof was not given). This paper by Dietmar (1992) proved Omega(n^2) randomized lower bounds for a class of graph problems (a major open question on monotone graph properties is still unresolved).

### Notes on HW2

• Problem 1: This problem is equivalent to bounding the "(strong) independence number" in a sparse "c-uniform hypergraph". Part (b) was first proved by Spencer (1972) (generalizing a classical theorem of Turan for graphs). Recently, Alman and Vassilevska-Williams (2021) used roughly the same observation, for the c=3 case, to obtain their latest record-breaking algorithm for matrix multiplication (see their Theorem 3.1, although the notation may look different and Spencer is not explicitly cited).
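Here is a minimal sketch of the standard "sample and delete" argument behind such bounds, written for the strong notion of independence (at most one vertex kept from each edge — my assumption about the intended definition); the choice of the sampling probability p would be optimized as in part (b):

```python
import random

# Sample-and-delete: keep each vertex independently with probability p,
# then, for every edge, delete kept vertices until at most one remains.
# The result contains at most one vertex per edge (strongly independent),
# and its expected size is p*n minus the expected number of deletions,
# which the right choice of p makes Omega of the desired bound.

def sample_and_delete(n, edges, p, seed=0):
    rng = random.Random(seed)
    kept = {v for v in range(n) if rng.random() < p}
    for e in edges:
        surviving = [v for v in e if v in kept]
        for v in surviving[1:]:     # keep at most one vertex per edge
            kept.discard(v)         # (later deletions only shrink kept,
    return kept                     # so earlier edges stay satisfied)

# Example: a small 3-uniform hypergraph on 9 vertices.
edges = [(0, 1, 2), (2, 3, 4), (4, 5, 6), (6, 7, 8), (1, 4, 7)]
ind = sample_and_delete(9, edges, p=0.5)
```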

• Problem 2(b): Deterministic algorithms were known for this and many related problems; see the comprehensive paper by Frederickson and Johnson (1984) (and as some of you have found, this post by Chao Xu). A recent paper of mine described a randomized O(n+k) algorithm for a more general class of matrices.
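For contrast with the O(n + k) bounds mentioned above, here is a simple heap-based baseline (not Frederickson and Johnson's algorithm) that selects the k-th smallest entry of a matrix with sorted rows and columns in O(k log k) time:

```python
import heapq

# Dijkstra-style frontier search over the grid: starting from the
# top-left corner, repeatedly pop the smallest frontier entry and push
# its right and down neighbors. Entries are popped in nondecreasing
# order, so the k-th pop is the k-th smallest entry.

def kth_smallest_sorted_matrix(A, k):
    """A has sorted rows and sorted columns; 1 <= k <= len(A)*len(A[0])."""
    heap = [(A[0][0], 0, 0)]
    seen = {(0, 0)}
    for _ in range(k - 1):
        _, i, j = heapq.heappop(heap)
        for (x, y) in ((i + 1, j), (i, j + 1)):
            if x < len(A) and y < len(A[0]) and (x, y) not in seen:
                seen.add((x, y))
                heapq.heappush(heap, (A[x][y], x, y))
    return heap[0][0]

A = [[1, 3, 5],
     [2, 4, 8],
     [6, 7, 9]]
```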

• Problem 3: The best known algorithm, from this paper, has running time close to O(n + k^2) (ignoring n^epsilon factors).

• Problem 4: Regarding (a), a known result by Clarkson and Shor (1989) showed that the sum of |S_Delta| over all Delta in VD(R) is O(n) without the extra log factor. So, the approach in (c) can actually achieve O(n log n) expected time (just like the randomized incremental algorithm from class).

### Notes on HW3

• Problem 1: For more on this problem, see Clifford and Starikovskaya (2016) and Chan et al. (2020).

• Problem 2: This alternative algorithm can be found in the original paper by Karp, Luby, and Madras (1989).
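A minimal sketch of the Karp-Luby-Madras estimator for the union-of-sets version of the problem (the sets below are explicit Python sets purely for illustration; in the DNF application one would instead sample satisfying assignments of each term):

```python
import random

# Given sets S_1..S_m with known sizes, uniform sampling from each S_i,
# and membership tests: sample an index i with probability |S_i|/sum|S_j|
# and a uniform x in S_i, and count a "hit" when i is the FIRST index
# whose set contains x. Each element of the union is hit with the same
# probability, so E[hit] = |union| / sum|S_j|.

def klm_union_estimate(sets, trials, seed=0):
    rng = random.Random(seed)
    sizes = [len(s) for s in sets]
    total = sum(sizes)
    hits = 0
    for _ in range(trials):
        i = rng.choices(range(len(sets)), weights=sizes)[0]
        x = rng.choice(sorted(sets[i]))     # uniform element of S_i
        if min(j for j, s in enumerate(sets) if x in s) == i:
            hits += 1
    return total * hits / trials

sets = [{1, 2, 3, 4}, {3, 4, 5}, {5, 6}]
est = klm_union_estimate(sets, trials=20000)   # true union size is 6
```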

• Problem 3(b): The best alpha is 1/2...

• Problem 4: A straightforward application of the method of bounded differences gives e^{-Theta(t^2/(k^2n))}, but this can be slightly improved to e^{-Theta(t^2/(kn))} by first dividing the input into n/k blocks of length k before applying the method of bounded differences...
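To spell out the calculation (assuming that changing a single input symbol shifts the function by at most O(k), so that changing an entire length-k block also shifts it by only O(k)):

```latex
\begin{align*}
\text{direct: } & \sum_{i=1}^{n} c_i^2 = n \cdot O(k)^2 = O(k^2 n)
  &&\Longrightarrow\; \Pr[|f - \mathbf{E} f| \ge t] \le 2 e^{-\Theta(t^2/(k^2 n))},\\
\text{blocked: } & \sum_{i=1}^{n/k} c_i^2 = \frac{n}{k} \cdot O(k)^2 = O(k n)
  &&\Longrightarrow\; \Pr[|f - \mathbf{E} f| \ge t] \le 2 e^{-\Theta(t^2/(k n))}.
\end{align*}
```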

### Notes on HW4

• Problem 3: The algebraic result mentioned in the problem statement on computing determinants of matrices of univariate polynomials was obtained by Storjohann (2003). (This result was used to solve the max-weight bipartite matching problem by Sankowski (2009).)