
The Chinese State Administration of Foreign Experts Affairs has a FAQ page that is worth a read. Check out “Why do the Chinese people always stare at foreigners?”.

In June, I wrote about my work on using spectral partitioning to approximate Max Cut.

I have now posted a revised paper with a couple of new things.

One is an improved analysis, due to Moses Charikar, of the algorithm described in the June paper. Moses shows that if one starts from a graph in which the optimum cuts a $1-\epsilon$ fraction of edges, then the algorithm cuts at least a $1-4\sqrt{\epsilon} + 8\epsilon$ fraction of edges (and also at least half of the edges). Cutting more than a $1-\frac{2}{\pi} \sqrt{\epsilon} + o_\epsilon(1)$ fraction of edges is Unique-Games-hard. Optimizing over $\epsilon$ the ratio

$\displaystyle \frac{ \max \left\{ \ \frac{1}{2} \ , \ 1-4\sqrt{\epsilon} + 8\epsilon \ \right\} }{1-\epsilon}$

we see that the approximation ratio of the algorithm is always at least 0.531.
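As a quick sanity check on that constant, one can minimize the ratio numerically. Here is a minimal Python sketch (my own, not from the paper; just a plain grid search over $\epsilon$):

```python
import math

def ratio(eps):
    # fraction of edges cut by the algorithm (the larger of the two
    # guarantees above), divided by the optimum fraction 1 - eps
    cut = max(0.5, 1 - 4 * math.sqrt(eps) + 8 * eps)
    return cut / (1 - eps)

# grid search over epsilon in (0, 1/2)
best = min(ratio(e / 10**6) for e in range(1, 500000))
print(best)  # about 0.5311, consistent with the 0.531 bound
```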

The second new result answers a question raised by an anonymous commenter in June: what about Max Cut Gain?

I am spending a month in Beijing. This week the China Theory Week is under way, the second in a series of annual events bringing graduate students from all over the world to Beijing for a week of talking about theory.

In October, there will be the China Symposium on Theoretical Computer Science, which is like the Theory Week but with older people.

Yesterday I spoke about my work on spectral methods for Max Cut, on which there have been a couple of advances since I posted about it here: (i) Moses Charikar has improved my analysis of the recursive algorithm, and (ii) I am now able to handle (in a complicated way and with very poor bounds) the “Max Cut Gain” problem.
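For readers who have not seen the spectral approach, here is a minimal numpy sketch of the basic one-shot idea: cut along the signs of the eigenvector of the smallest eigenvalue of the normalized adjacency matrix. This is only an illustration of the underlying principle, not the algorithm from the paper, which uses a threshold/sweep step, may leave vertices unassigned, and recurses on them; all names below are mine.

```python
import numpy as np

def spectral_cut(adj):
    # Normalized adjacency matrix D^{-1/2} A D^{-1/2}
    # (assumes no isolated vertices, so all degrees are positive).
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt
    # Eigenvector of the smallest eigenvalue (eigh sorts eigenvalues ascending).
    eigvals, eigvecs = np.linalg.eigh(norm_adj)
    x = eigvecs[:, 0]
    side = x >= 0               # cut along the signs of the entries
    cut_edges = adj[np.ix_(side, ~side)].sum()
    return side, cut_edges

# A 4-cycle is bipartite, so the sign cut should cut all 4 edges.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
side, cut = spectral_cut(adj)
print(side, cut)   # e.g. [ True False  True False] 4.0
```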

I will write about it here, but first I will have to find out how to do it. WordPress is blocked, I have not been following all the explanations about “tunneling” and “proxy servers,” and so far the only way I have found to post here (opening a remote desktop on a Berkeley computer) is excruciatingly slow and not conducive to writing math (which needs previews, etc.). Meanwhile, a revised paper is available online.

As mentioned before, the MSRI is devoting this semester to connections between additive combinatorics and ergodic theory.

One of the generally exciting things about additive combinatorics is the confluence of techniques from analysis, combinatorics, and ergodic theory. One of the specifically exciting things to me has been the applicability of the analytic and the combinatorial techniques to computer science. It makes sense that we should make some effort to understand the ergodic-theoretic techniques too, just in case.

So what is ergodic theory?

Terry Tao and Gil Kalai have written on the mathematical work of Oded Schramm. Tao suggests a survey paper by Lawler as a source to learn more about the stochastic Loewner equation and its applications. Schramm was a plenary speaker at the 2006 ICM in Madrid, and his talk can be seen by going to http://www.icm2006.org/video/ and clicking on “11th Session.” Unfortunately the site is very badly designed and, on my computer, it works with neither Firefox nor Chrome. (It does work with IE.)

Oded has also worked in computer science, and here I would like to recall a lovely recent paper of his on property testing.

In graph property testing, we are interested in a graph property such as, say, 2-colorability, and we would like to develop very efficient algorithms, at the cost of delivering an approximate answer. The approximation takes the following form: if the graph is 2-colorable, then we want the algorithm to accept with high probability, and if the algorithm accepts with noticeable probability, then it must be the case that the graph is at least “$\epsilon$-close” to 2-colorable. (A similar definition can be given for any other graph property, such as connectivity, planarity, expansion, 3-colorability, and so on.)

The notion of “closeness” can be defined in various ways, which affect the complexity of the problem.

If we deal with dense graphs, we usually define two $n$-vertex graphs to be “$\epsilon$-close” if one can be converted into the other by adding or removing at most $\epsilon n^2$ edges. In this “dense graph model” of property testing, the notion of closeness is loose enough that essentially every natural graph property has very efficient algorithms, whose running time depends only on $\epsilon$.
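To make the definition concrete, here is a Python sketch of the sample-and-check approach behind such a tester for 2-colorability in the dense graph model: sample a small set of vertices and accept if and only if the induced subgraph is 2-colorable. The adjacency oracle `has_edge` and the sample size are my own illustrative choices; the constant is not meant to match the best known bounds.

```python
import random

def induced_is_bipartite(sample, has_edge):
    # Try to 2-color the induced subgraph by depth-first search;
    # fail iff the sample contains an odd cycle.
    color = {}
    for start in sample:
        if start in color:
            continue
        color[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v in sample:
                if v != u and has_edge(u, v):
                    if v not in color:
                        color[v] = 1 - color[u]
                        stack.append(v)
                    elif color[v] == color[u]:
                        return False
    return True

def test_2colorability(n, has_edge, eps):
    # One-sided error: a 2-colorable graph is always accepted, since every
    # induced subgraph of a bipartite graph is bipartite; a graph that is
    # eps-far from 2-colorable puts an odd cycle into the sample with
    # high probability, and is then rejected.
    k = min(n, int(100 / eps**2))   # sample size chosen for illustration
    sample = random.sample(range(n), k)
    return induced_is_bipartite(sample, has_edge)
```

Note that the number of queries to the adjacency oracle is about $k^2$, which depends only on $\epsilon$ and not on $n$, exactly the kind of complexity the dense model allows.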

If we deal with sparse graphs, such as graphs whose maximum degree is $d$ (think of $d$ as a constant, while the number $n$ of vertices goes to infinity), then we would say that two graphs are $\epsilon$-close if one can be converted into the other by adding or removing up to $\epsilon d n$ edges. In this “bounded degree graph model” of property testing, some properties become much harder to test. While 2-colorability and, in fact, $k$-colorability for every $k$ have testers of complexity $poly(1/\epsilon)$ in the dense graph model, in the bounded degree model 2-colorability has complexity $\Omega(\sqrt{n})$ and $O(\sqrt{n} \cdot polylog(n))$, and 3-colorability has complexity $\Omega(n)$.

Also, some properties are trivial in the dense graph model and non-trivial in the bounded-degree one. Consider planarity: a planar graph has at most $3n-6 = o(n^2)$ edges, so in the dense graph model a graph is $\epsilon$-close to planar if and only if it is $(\epsilon + o(1))$-close to the empty graph; testing planarity is thus equivalent to testing emptiness, and trivial. In the bounded-degree model, it had been an open question to find a sub-linear time tester for planarity.

In STOC 2008, Benjamini, Schramm and Shapira prove that planarity is testable in the bounded-degree model with complexity that depends only on $\epsilon$ and is independent of the size of the graph. The same result applies to any graph property defined in terms of excluded minors. Previously, very few properties were known to be testable with complexity dependent only on $\epsilon$ in this model.

The proof applies more broadly to properties that are “hyper-finite,” meaning that graphs with the property can be broken up into finite-size components after removing a small fraction of the edges. The result relies on earlier work of Schramm studying “limits” of sequences of hyper-finite graphs, using the kind of compactness techniques described by Terry Tao in his recent MSRI lectures.

I received today the terrible news that Oded Schramm, the outstanding probabilist, died two days ago in a hiking accident.

Oded was best known for his rigorous proofs of results for which only heuristic arguments, via statistical physics methods, had previously existed. The theory he developed with Gregory Lawler and Wendelin Werner has been spectacularly successful, and it was recognized by many awards, most notably Werner’s Fields Medal.

Important problems in computer science, especially the existence of a satisfiability threshold in random 3SAT, have non-rigorous solutions via statistical physics methods, of a type that is not amenable to the Lawler-Schramm-Werner rigorous approach. The search for rigorous methods will have to continue without the leading figure of the field.

