Ran Canetti has written a post on the Berkeley Simons Institute blog concerning new assumptions used in recent cryptographic work on problems such as obfuscation, and concerning how the theory community should view such work.
From the majority opinions of two rulings of the Supreme Court of the United States.
Bowers v. Hardwick (1986):
The Constitution does not confer a fundamental right upon homosexuals to engage in sodomy. […] to claim that a right to engage in such conduct is “deeply rooted in this Nation’s history and tradition” or “implicit in the concept of ordered liberty” is, at best, facetious.
Obergefell v. Hodges (2015)
[Same-sex couples] ask for equal dignity in the eyes of the law. The Constitution grants them that right. The Constitution […] does not permit the State to bar same-sex couples from marriage on the same terms as accorded to couples of the opposite sex.
This is very clever, from beginning to end:
(via The Bold Italic)
This year, the chair of ICALP decided to play an April Fool’s prank three weeks early, and I received the following message:
“Dear author, we regret to inform you that the margins in your submission are too small, and hence we are rejecting it without review”
I was almost fooled. In my defense, the second time I applied for a position in Italy, the hiring committee judged all my publications to be non-existent, because the (multiple) copies I had sent them had not been authenticated by a notary. So I am trained not to find it too strange that a paper could be evaluated based on the width of its margins (or the stamps on its pages) rather than on the content of its theorems.
[Update 10/24/14: there was a bug in the code I wrote yesterday night, apologies to the colleagues at Rutgers!]
[Update 10/24/14: a reaction to the authoritative study of MIT and the University of Maryland. Also, coincidentally, today Scott Adams comes down against reputation-based rankings]
Saeed Seddighin and MohammadTaghi Hajiaghayi have proposed a ranking methodology for theory groups based on the following desiderata: (1) the ranking should be objective, and based only on quantitative information and (2) the ranking should be transparent, and the methodology openly revealed.
Inspired by their work, I propose an alternative methodology that meets both criteria, but has some additional advantages, including having an easier implementation. Based on the same Brown University dataset, I count, for each theory group, the total number of letters in the name of each faculty member.
Here are the results (apologies for the poor formatting):
1 ( 201 ) Massachusetts Institute of Technology
2 ( 179 ) Georgia Institute of Technology
3 ( 146 ) Rutgers – State University of New Jersey – New Brunswick
4 ( 142 ) University of Illinois at Urbana-Champaign
5 ( 141 ) Princeton University
6 ( 139 ) Duke University
7 ( 128 ) Carnegie Mellon University
8 ( 126 ) University of Texas – Austin
9 ( 115 ) University of Maryland – College Park
10 ( 114 ) Texas A&M University
11 ( 111 ) Northwestern University
12 ( 110 ) Stanford University
13 ( 108 ) Columbia University
14 ( 106 ) University of Wisconsin – Madison
15 ( 105 ) University of Massachusetts – Amherst
16 ( 105 ) University of California – San Diego
17 ( 98 ) University of California – Irvine
18 ( 94 ) New York University
19 ( 94 ) State University of New York – Stony Brook
20 ( 93 ) University of Chicago
21 ( 91 ) Harvard University
22 ( 91 ) Cornell University
23 ( 87 ) University of Southern California
24 ( 87 ) University of Michigan
25 ( 85 ) University of Pennsylvania
26 ( 84 ) University of California – Los Angeles
27 ( 81 ) University of California – Berkeley
28 ( 78 ) Dartmouth College
29 ( 76 ) Purdue University
30 ( 71 ) California Institute of Technology
31 ( 67 ) Ohio State University
32 ( 63 ) Brown University
33 ( 61 ) Yale University
34 ( 54 ) University of Rochester
35 ( 53 ) University of California – Santa Barbara
36 ( 53 ) Johns Hopkins University
37 ( 52 ) University of Minnesota – Twin Cities
38 ( 49 ) Virginia Polytechnic Institute and State University
39 ( 48 ) North Carolina State University
40 ( 47 ) University of Florida
41 ( 45 ) Rensselaer Polytechnic Institute
42 ( 44 ) University of Washington
43 ( 44 ) University of California – Davis
44 ( 44 ) Pennsylvania State University
45 ( 40 ) University of Colorado Boulder
46 ( 38 ) University of Utah
47 ( 36 ) University of North Carolina – Chapel Hill
48 ( 33 ) Boston University
49 ( 31 ) University of Arizona
50 ( 30 ) Rice University
51 ( 14 ) University of Virginia
52 ( 12 ) Arizona State University
53 ( 12 ) University of Pittsburgh
I should acknowledge a couple of limitations of this methodology: (1) the Brown dataset is not current, but I believe that the results would not be substantially different even with current data; (2) it might be reasonable to count only the letters in the last name, or to weight the letters in the last name by 1 and the letters in the first name by 1/2. If there is sufficient interest, I will post rankings according to these other methodologies.
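For what it's worth, the counting itself is a one-liner. Here is a minimal sketch; the faculty lists below are made-up placeholders for illustration, not the actual Brown dataset:

```python
# Rank theory groups by the total number of letters in faculty names.
# The data below is a made-up placeholder, not the Brown dataset.
def rank_groups(faculty):
    scores = {
        school: sum(1 for name in names for ch in name if ch.isalpha())
        for school, names in faculty.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

faculty = {
    "Example Institute of Technology": ["Ada Lovelace", "Alan Turing"],
    "Sample State University": ["Kurt Goedel"],
}
for rank, (school, total) in enumerate(rank_groups(faculty), 1):
    print(rank, "(", total, ")", school)
```

The alternative methodologies in point (2) above would only change the `sum(...)` expression (restrict to the last name, or weight first-name letters by 1/2).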
When he was 14, Joshua Wong cofounded Scholarism, the Hong Kong student movement that successfully protested the introduction of a “patriotic” curriculum. Now he is one of the student leaders of the Hong Kong pro-democracy movement.
Today Joshua Wong turns 18, and he gains the right to vote. May he be able to use this right freely!
[Photo by Anthony Kwan, video by the New York Times]
When Hong Kong was “handed over” to China on July 1st, 1997, the plan was that the city, now a Special Administrative Region, would retain its independent laws and institutions for 50 years, and it would have elections with universal suffrage (one person one vote). In 2007, it was decided that universal suffrage would start in 2017.
Discussion on how to regulate the 2017 elections has been going on for the last several months. A coalition of pro-democracy groups ran an informal referendum on the preferred system of election, gathering about 800,000 votes, or a fifth of the registered electorate. All the options in the referendum assumed no vetting process for candidates, contrary to Beijing's stance that any system for the 2017 election would only allow candidates pre-approved by the mainland government.
Afterwards (this happened during the summer), the pro-democracy groups organized an enormous rally, which had hundreds of thousands of participants, and announced plans to “occupy Central with love and peace” (Central contains the financial district) on October 1st if the Hong Kong legislature passed an election law in which candidates could run only with Beijing’s approval.
This was followed by an anti-democracy rally, partly attended by people bused in from across the border, which is a rather surreal notion; it’s like people are saying “we want our voices heard about the fact that we do not want our voices heard.”
A few days in advance of October 1st, a group of university students, some of them associated with the group Scholarism, started a sit-in at a government building. Scholarism made news three years ago, when it successfully fought the proposal to introduce a "patriotic education" curriculum in grade school.
People have been facing the police with umbrellas and goggles to protect themselves from pepper spray.
The plaza in front of the government building, where the sit-in started, has been cleared, but for the whole weekend both Central and the neighboring district of Admiralty have been filled by thousands of protesters, day and night.
There is a petition at whitehouse.gov that has already exceeded the threshold required to receive a response, but that people might still want to sign.
Considering how the Chinese government feels about students rallying for democracy, there is reason to be worried.
[photos taken from Facebook, credits unknown]
The Italian prime minister is in the United States for the UN general assembly meeting, and he was in the San Francisco bay area on Sunday and Monday. (No, this is not the one who paid money to an underage prostitute and had her released from custody by falsely claiming she was the niece of Mubarak; it's the new one.)
Those two days were busy: he met Italian start-up founders in the area, attended a dinner at Stanford hosted by John Hennessy, presided over the inauguration of a new Italian-language school, visited Twitter, Google, Facebook and Yahoo, and met the local Italian research community.
For the last event, a few colleagues and I were asked to give a short presentation. Not being sure what to say to a prime minister, I asked a colleague who chairs an Italian computer science department for some data on state funding of university research in computer science, and whether there was a way to turn this data into a recommendation; his response could be summarized as "we cannot be saved, there is no hope." This might have made a good theme for a presentation, but instead I talked about the importance of fundamental research, and of working on ideas for which the technology is not ready yet, so that when the technology is ready the ideas are mature. Politicians are good at feigning attention when their mind is elsewhere, and he feigned it well.
Yesterday I was "interviewed" as part of the process to naturalize as an American citizen. Part of the interview is a test of the knowledge that an American is supposed to have. I liked to think that the officer would bring up a map of the world and ask me to point to France, then I would point to Brazil, and he would say, "great, now you are ready to be an American!" (Instead he asked how many US senators there are, when the constitution was written, and things like that.) The vibe was very different from any other interaction I have had with the American immigration system before; it's no longer "who are you, why are you stealing our jobs, and how do we know you are not a terrorist," but rather "yay, you are going to be one of us."
A few weeks ago, the Proceedings of the National Academy of Sciences published an article on a study conducted by a group of Cornell researchers at Facebook. They picked about 600,000 users and then, for a week, a subset of them saw fewer "negative" posts (up to 90% were filtered) than they would otherwise have seen, a subset saw fewer "positive" posts (same), and a control group got a random subset.
After the week, the users in the “negative” group posted fewer, and more negative, posts, and those in the “positive” group posted more, and more positive, posts.
Posts were classified according to an algorithm called LIWC2007.
The study ran contrary to the conventional wisdom that people find it depressing to see on Facebook good things happening to their friends.
The paper has caused considerable controversy for being a study on human subjects conducted without explicit consent. Every university, including of course Cornell, requires experiments involving people to be approved by a special committee, and participants must sign informed consent forms. Facebook maintains that the study is consistent with its terms of service. The highly respected privacy organization EPIC has filed a complaint with the FTC. (And it has been concerned with Facebook's terms of service for a long time.)
Here I would like to explore a different angle: almost everybody thinks that observational studies about human behavior can be done without informed consent. This means that if the Cornell scientists had run an analysis on old Facebook data, with no manipulation of the feed generation algorithm, there would not have been such a concern.
At the same time, the number of posts that are fit for the feed of a typical user vastly exceeds what can fit on one screen, and so there are algorithms that pick a rather small subset of posts that are evaluated to be of higher relevance, according to some scoring function. Now suppose that, if N posts fit on the screen, the algorithm picks the 2N highest-scoring posts, and then randomly picks half of them. This seems rather reasonable, because the scoring function is only an approximation of relevance anyway.
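Such a selection rule would be a couple of lines; here is a sketch (the function and its parameters are my illustration of the hypothetical rule above, not Facebook's actual algorithm):

```python
import heapq
import random

# Hypothetical selection rule: keep the 2N highest-scoring candidate
# posts, then show a uniformly random half of them.
def select_posts(candidates, score, n):
    top_2n = heapq.nlargest(2 * n, candidates, key=score)
    return random.sample(top_2n, n)  # random half of the top 2N
```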
The United States has roughly 130 million Facebook subscribers. Suppose that a typical user looks, in a week, at 200 posts, which seems reasonable (in our case, those would be a random subset of roughly 400 posts). According to the PNAS study, roughly 50% of posts are positive and 25% are negative, so of the initial 400, roughly 200 are positive and 100 are negative. Now look at the 100,000 users for whom the random sampling picked the fewest positive posts: we would expect them to be roughly 3 standard deviations below the mean, so about 80 positive posts instead of the expected 100; the 100,000 users with the fewest negative posts would get about 35 instead of the expected 50.
This is much less variance than in the PNAS study, where they would have got, respectively, only 10 positive and only 5 negative, but it may have been enough to pick up a signal.
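The mean and standard deviation in this back-of-the-envelope calculation can be sanity-checked with a quick simulation, under the same assumptions as above (400 candidate posts per user, 200 of them positive, a uniformly random half shown):

```python
import random
import statistics

# Simulate showing a random 200 of 400 candidate posts, 200 of which
# are "positive", and count the positives each simulated user sees.
# The 400/200 figures are the post's assumptions, not real parameters.
def positives_shown(trials=10000, pool=400, positive=200, shown=200):
    posts = [1] * positive + [0] * (pool - positive)
    return [sum(random.sample(posts, shown)) for _ in range(trials)]

counts = positives_shown()
print(statistics.mean(counts))   # close to 100
print(statistics.stdev(counts))  # close to 5
```

(The exact standard deviation is hypergeometric, about 5, so 3 standard deviations below the mean is in the mid-80s; but as noted below, the exact constants are not the point.)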
Apart from the calculations, which I probably got wrong anyway, the difference is that in the PNAS study they picked a subset of people and then varied the distribution of posts, while in the second case you pick random posts for everybody and then select the users with the most extreme samples.
If you could arrange distributions so that the distributions of posts seen by each user are the same, would it really be correct to view one study as experimental and one as observational? If the PNAS study had filtered 20% instead of 90% of the positive/negative posts, would it have been ethical? Does it matter what the intention was when designing the randomized algorithm that selects posts? If Facebook were to introduce randomness in the scoring algorithm with the goal of later running observational studies, would it be ethical? Would they need to let people opt out? I genuinely don't know the answers to these questions, but I haven't seen them discussed elsewhere.