The spectral norm of the infinite {d}-regular tree is {2 \sqrt{d-1}}. We will see what this means and how to prove it.

When talking about the expansion of random graphs, about the construction of Ramanujan expanders, as well as about sparsifiers, community detection, and several other problems, the number {2 \sqrt{d-1}} comes up often, where {d} is the degree of the graph, for reasons that tend to be related to properties of the infinite {d}-regular tree.
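To get a feel for where this number comes from, here is a minimal numerical sketch (mine, not part of the post; the construction and the names are made up for illustration): the largest adjacency eigenvalue of the {d}-regular tree truncated at a finite depth is strictly smaller than {2 \sqrt{d-1}}, and it approaches {2 \sqrt{d-1}} as the depth grows.

    import numpy as np

    def truncated_tree_adjacency(d, depth):
        # Adjacency matrix of the d-regular tree truncated at the given
        # depth: the root has d children, every other internal node has
        # d-1 children, and the last level consists of leaves.
        edges, frontier, next_id = [], [0], 1
        for level in range(depth):
            new_frontier = []
            for v in frontier:
                for _ in range(d if level == 0 else d - 1):
                    edges.append((v, next_id))
                    new_frontier.append(next_id)
                    next_id += 1
            frontier = new_frontier
        A = np.zeros((next_id, next_id))
        for u, v in edges:
            A[u, v] = A[v, u] = 1.0
        return A

    d = 3
    for depth in (2, 4, 6, 8):
        # Largest eigenvalue; it increases toward 2*sqrt(d-1) = 2.8284...
        print(depth, np.linalg.eigvalsh(truncated_tree_adjacency(d, depth))[-1])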


A regular connected graph is Ramanujan if and only if its Ihara zeta function satisfies a Riemann hypothesis.

The purpose of this post is to explain all the words in the previous sentence, and to show the proof, except for the major step of proving a certain identity.

There are at least a couple of reasons why more computer scientists should know about this result. One is that it is nice to see a connection, even if just at a syntactic level, between analytic facts that imply that the primes are pseudorandom and analytic facts that imply that good expanders are pseudorandom (the connection is deeper in the case of the Ramanujan Cayley graphs constructed by Lubotzky, Phillips and Sarnak). The other is that the argument looks at the eigenvalues of the adjacency matrix of a graph as the roots of a characteristic polynomial, a view that is usually not very helpful in achieving quantitative results, with the important exception of the work of Marcus, Spielman and Srivastava on interlacing polynomials.
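For reference, the identity in question is, if I remember correctly, Bass's formula: if {G} is a {d}-regular graph with {n} vertices, {m} edges, adjacency matrix {A}, and Ihara zeta function {\zeta_G}, then

{\displaystyle \zeta_G(u)^{-1} = (1-u^2)^{m-n} \cdot \det \left( I - u A + (d-1)\, u^2 I \right) }

so each eigenvalue {\lambda} of {A} contributes a quadratic factor {1 - \lambda u + (d-1) u^2}, and both roots of this factor have absolute value {1/\sqrt{d-1}} precisely when {|\lambda| \le 2 \sqrt{d-1}}. This is how the Ramanujan condition, which bounds the nontrivial eigenvalues by {2 \sqrt{d-1}} in absolute value, becomes a statement about the location of the poles of {\zeta_G}.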


In preparation for the special program on spectral graph theory at the Simons Institute, which starts in a week, I have been reading up on the topics in the theory that I don’t know much about: the spectrum of random graphs, properties of highly expanding graphs, spectral sparsification, and so on.

I have been writing some notes for myself, and here is something that bothers me: what do you call the second largest, in absolute value, eigenvalue of the adjacency matrix of a graph, without resorting to the sentence I just wrote? And how do you denote it?

I have noticed that the typical answer to the first question is “second eigenvalue,” but this is a problem when it creates confusion with the actual second largest eigenvalue of the adjacency matrix, which could be a very different quantity. The answer to the second question seems to be either a noncommittal “{\lambda}” or a rather problematic “{\lambda_2}.”

For my own use, I have started to use the notation {\lambda_{2abs}}, which can certainly use some improvement, but I am still at a loss concerning terminology.

Perhaps one should start from where this number comes from. Its important property is that, if the graph is {d}-regular, has {n} vertices, and has adjacency matrix {A}, then this number is the spectral norm of {A - \frac{d}{n} J} (where {J} is the all-ones matrix), so it measures the distance of {A} from the “perfect {d}-regular expander” in a norm that is useful for reasoning about cuts and that is also tractable to compute.
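Here is the quantity in code, as a small sanity check (my own sketch; it assumes the graph really is regular, and the example is the 5-cycle):

    import numpy as np

    def lambda_2abs(A):
        # Spectral norm of A - (d/n) J, where A is the adjacency matrix
        # of a d-regular graph on n vertices and J is the all-ones matrix.
        n = A.shape[0]
        d = int(A[0].sum())
        return np.linalg.norm(A - (d / n) * np.ones((n, n)), 2)

    # The 5-cycle is 2-regular with eigenvalues 2 cos(2 pi k / 5); the
    # largest in absolute value, after the trivial eigenvalue 2, is
    # |2 cos(4 pi / 5)| = 1.6180...
    P = np.roll(np.eye(5), 1, axis=1)   # cyclic shift: the directed 5-cycle
    print(lambda_2abs(P + P.T))         # prints 1.6180...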

So, since it is the spectral norm of a modification of the adjacency matrix, how about calling it the <adjective> spectral norm? I would vote for shifted spectral norm, because I would think of subtracting {\frac{d}{n} J} as a sort of shift.

Please, do better in the comments!

This year, perhaps because of a mistake, the winners of the Fields Medals and the Nevanlinna prize were made public before the opening ceremony of the ICM.

Congratulations to my former colleague Maryam Mirzakhani for being the first Fields Medal winner from Iran, a nation that can certainly use some good news, and a nation that has always done well in identifying and nurturing talent in mathematics and related fields. She is also the first woman to receive this award in its 78-year history.

And congratulations to Subhash Khot for a very well deserved Nevanlinna prize; one can read about his work in his own words, in my words, and about the latest impact of his work in the words of Barak and Steurer.

The Simons Foundation has excellent articles up about their work and about the work of Artur Avila, Manjul Bhargava, and Martin Hairer, the other Fields Medal recipients. An unusual thing about Manjul Bhargava’s work is that one can actually understand the statements of some of his results.

The New York Times has a fascinating article according to which the Fields Medal got its current status because of Steve Smale and Cold War paranoia. I don’t know if they are overstating their case, but it is a great story.

Shafi Goldwasser sends a reminder that the deadline to submit to the next Innovations in Theoretical Computer Science (ITCS) conference is next Friday. The conference will take place in January 2015 at the Weizmann Institute, with contingency plans to hold it in Boston in case the need for a relocation arises.

A few weeks ago, the Proceedings of the National Academy of Sciences published an article on a study conducted by a group of Cornell researchers at Facebook. They picked about 600,000 users and then, for a week, a subset of them saw fewer “negative” posts (up to 90% were filtered) than they would otherwise see, a subset saw fewer “positive” posts (same), and a control group got a random subset.

After the week, the users in the “negative” group posted fewer, and more negative, posts, and those in the “positive” group posted more, and more positive, posts.

Posts were classified according to an algorithm called LIWC2007.

The study runs contrary to the conventional wisdom that people find it depressing to see on Facebook good things happening to their friends.

The paper has caused considerable controversy for being a study with human subjects conducted without explicit consent. Every university, including of course Cornell, requires experiments involving people to be approved by a special committee, and participants must sign informed consent forms. Facebook maintains that the study is consistent with its terms of service. The highly respected privacy organization EPIC has filed a complaint with the FTC. (And they have been concerned with Facebook’s terms of service for a long time.)

Here I would like to explore a different angle: almost everybody thinks that observational studies about human behavior can be done without informed consent. This means that if the Cornell scientists had run an analysis on old Facebook data, with no manipulation of the feed generation algorithm, there would not have been such a concern.

At the same time, the number of posts that are fit for the feed of a typical user vastly exceeds what can fit on one screen, and so there are algorithms that pick a rather small subset of posts that are evaluated to be of higher relevance, according to some scoring function. Now suppose that, if N posts fit on the screen, the algorithm picks the 2N highest scoring posts, and then randomly picks half of them. This seems rather reasonable, because the scoring function is going to be an approximation of relevance anyway.
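(Written out as code, the hypothetical rule above would be something like the following; the names are made up.)

    import random

    def select_posts(candidate_posts, score, N):
        # Keep the 2N highest-scoring candidate posts, then show a
        # uniformly random half of them.
        top = sorted(candidate_posts, key=score, reverse=True)[:2 * N]
        return random.sample(top, N)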

The United States has roughly 130 million Facebook subscribers. Suppose that the typical user looks, in a week, at 200 posts, which seems reasonable (in our case, those would be a random subset of roughly 400 posts). According to the PNAS study, roughly 50% of the posts are positive and 25% are negative, so of the initial 400, roughly 200 are positive and 100 are negative. Let’s look at the 100,000 users for which the random sampling picked the fewest positive posts: we would be expecting roughly 3 standard deviations below the mean, so about 80 positive posts instead of the expected 100; the 100,000 users with the fewest negative posts would get about 35 instead of the expected 50.
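(These numbers can be checked with a few lines of code, taking the assumptions above as given: 400 candidate posts per user, each shown independently with probability 1/2, of which 50% are positive and 25% negative.)

    from math import sqrt
    from statistics import NormalDist

    users, extreme = 130_000_000, 100_000
    # The 100,000 most extreme users out of 130 million sit at roughly
    # the 0.08% quantile, about 3.2 standard deviations below the mean.
    z = NormalDist().inv_cdf(extreme / users)
    for label, candidates in (("positive", 200), ("negative", 100)):
        mean = candidates / 2        # binomial mean, p = 1/2
        sd = sqrt(candidates) / 2    # binomial standard deviation
        print(label, round(mean + z * sd))  # about 78 positive, 34 negative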

This is much less variance than in the PNAS study, where they would have got, respectively, only 10 positive and only 5 negative, but it may have been enough to pick up a signal.

Apart from the calculations, which I probably got wrong anyway, the difference is that in the PNAS study they picked a subset of people and then varied the distribution of posts, while in the second case you pick random posts for everybody and then select the users whose random samples deviated the most.

If you could arrange things so that the distributions of posts seen by each user are the same, would it really be correct to view one study as experimental and one as observational? If the PNAS study had filtered 20% instead of 90% of the positive/negative posts, would it have been ethical? Does it matter what the intention is when designing the randomized algorithm that selects posts? If Facebook were to introduce randomness in the scoring algorithm with the goal of later running observational studies, would it be ethical? Would they need to let people opt out? I genuinely don’t know the answers to these questions, but I haven’t seen them discussed elsewhere.

As of today, I am again an employee of the University of California, this time as senior scientist at the Simons Institute, as well as professor of EECS.

As anybody who has spent time there can confirm, the administrative staff of the Simons Institute is exceptionally good and proactive. Not only do they take care of the things you ask them to, but they take care of the things that you did not know you should have asked them to. In fact, at Berkeley the quality of the administration tracks pretty well the level at which it takes place. At the level of departments and of smaller units, everything usually works pretty well, and then things get worse as you go up.

Which brings me to the office of the Chancellor, which runs U.C. Berkeley, and from which I received my official job offer: that office could not even get right, on its own letterhead, the name of the university that it runs.

Also, my address was spelled wrong, and the letter offered me the wrong position. I can’t believe they managed to put on the correct postage stamp. I was then instructed by the EECS department chair to respond by saying “I accept your offer of [correct terms],” which sounded passive-aggressive, but that’s what I did.

When Twitter started to become popular, I remember thinking that the premise of its service, that its distinguishing feature was its limitation, was ridiculous. (Remember never to ask me for investment advice.)

At the time, I thought that it would be really fun to create a parody site where you could only post one bit messages. Clearly, the site would be called bitter, and when you log in the prompt would ask “Are you bitter?” and if you answered yes your post would be a frowny face, while if you answered no your post would be a smiley face. I went as far as checking that this didn’t seem too hard to pull off in Drupal, to make sure no such parody site existed already, and to see if bittr.com or bittr.net were available. (Of course they weren’t!)

Anyways, I was mistaken in thinking that two possible messages, and hence one bit of information, was the end of the road. Indeed, it is possible to have only one possible message, and this is the insight pursued by yo, which, apparently, is not a parody and has received one million dollars in funding.

“I may have the genetic coding that I’m inclined to be an alcoholic, but I have the desire not to do that – and I look at the homosexual issue the same way”

(Rick Perry, Governor of Texas)

So, if I understand Perry’s point, he may have a genetic inclination to be gay but he forces himself “not to do that”?

Judge Rolf Treu has ruled that teachers’ tenure violates California students’ constitutional right to an education.

The California Constitution, which I am now reading for the first time, has an entire section on education (also, an entire section on water), it specifically mandates state funding for the University of California, and it states, right at the beginning, a fundamental right to privacy. (Using the word “privacy,” which does not occur in the U.S. Constitution.) Yay California Constitution!

Back to Judge Treu: he has found that the right to an education implies a right to good teachers, and that tenure causes students to have bad teachers. Hence tenure is unconstitutional. Also, the bad teachers disproportionately end up in districts with poor and minority students, so tenure is unconstitutional on equal protection grounds as well. I am now looking forward to conservative California judges striking down other laws and regulations that affect the educational opportunities of poor and minority students, including Prop 209.
