I figured I'd follow Michael Mitzenmacher's SIGCOMM PC post-analysis with a few ruminations of my own. Michael noted that the quality of submissions seemed down, and I'm one of the other PC members who agreed. I'm also on the SOSP 2009 program committee, and the gap in quality is surprising, even compared to previous years. Here's how the papers I reviewed for SIGCOMM 2008 and 2009 scored, where the scale is roughly logarithmic, "1" means bottom 50%, and "5" means "accept or I will scream":
    Score      2008         2009
      1:      7 (28%)      5 (31%)
      2:     11 (44%)      7 (44%)
      3:      4 (16%)      3 (18%)
      4:      3 (12%)      1 (6%)
The total number of submissions was down slightly (288 in 2008, ~275 in 2009).
There's a small bias here: I reviewed fewer later-round papers than in the previous year (round 2 omitted some of the lowest-ranked papers from round 1, which were "quick rejected"). For comparison, in my first round of SOSP reviews, I've already assigned one 5-equivalent.
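For the curious, here's a quick sanity check of the table above -- just a minimal sketch that recomputes the percentages from the raw counts, which should (and do) total the per-reviewer loads mentioned below, 25 papers in 2008 and 16 in 2009:

    # Recompute the score-distribution percentages from the raw counts.
    # The table above rounds to whole percentages; this prints one decimal.
    counts = {
        2008: {1: 7, 2: 11, 3: 4, 4: 3},
        2009: {1: 5, 2: 7, 3: 3, 4: 1},
    }
    for year, dist in counts.items():
        total = sum(dist.values())
        shares = ", ".join(f"{s}: {100 * n / total:.1f}%" for s, n in dist.items())
        print(f"{year}: {total} papers; {shares}")
    # 2008: 25 papers; 1: 28.0%, 2: 44.0%, 3: 16.0%, 4: 12.0%
    # 2009: 16 papers; 1: 31.2%, 2: 43.8%, 3: 18.8%, 4: 6.2%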
Every PC is an experiment in how to run a PC, and this year's SIGCOMM was no exception. For background on this ongoing dialogue, it's worth peeking at some of the papers from last year's Workshop on Organizing Workshops, Conferences, and Symposia for Computer Systems (WOWCS) -- the community continues to debate how to organize reviewing, double- vs. single- (or non-) blind review, how to accept papers, and so on. Like last year, the PC was split into "light" and "heavy" halves, where only the "heavy" members attended the PC meeting. The 2009 PC was larger (25 heavy, 35 light) than the 2008 PC (22 heavy, 24 light), and the per-reviewer workload was lighter (16 vs. 25 papers). This is a useful experiment -- one common complaint about serving on PCs is the time commitment -- but I don't think it worked out: my impression was that the room was a bit too full, and individual PC members hadn't read enough of the papers to get a good sense of the overall ordering. It seems some good comes from a painful reviewing load, namely a better feel for the paper pool. I do like the way this and last year's PCs structured things with heavy and light members: those with the heavier workload read a greater proportion of good papers, because the bottom 25% of submissions were rejected after the first round. This is a great way to preserve the sanity of your program committee.
There was (apparently; I didn't hear much of it) some grumbling last year about the number of PC papers accepted. As a result, out-of-band papers (those with PC or chair conflicts) were handled on a separate website, along with some number of randomly selected additional papers. While this is a pretty sensible thing to try, it turned out to be really awkward: I couldn't log in to the OOB website for weeks, and my guess is that the "randomly selected" papers took a hit by being reviewed separately from the main pool. What we really need is for ACM (or some third party) to run a hosted instance of the HotCRP conference management software in which the sysadmin and the conference chairs are separate people. The reviews would stay integrated, but the chairs couldn't see who is reviewing their own papers -- perhaps with some small tweaks to HotCRP so that those reviews can be assigned by an "OOB chair". I note that USENIX has already started heading down this path.
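To make the visibility rule I have in mind concrete, here's a toy sketch. To be clear, this is not HotCRP's actual data model or API -- the names (Paper, can_assign, visible_reviewers, the oob_chair role) are all hypothetical, just to pin down who should assign and see what:

    # Hypothetical sketch of the conflict-handling rules described above;
    # none of this is real HotCRP code.
    from dataclasses import dataclass, field

    @dataclass
    class Paper:
        title: str
        conflicts: set = field(default_factory=set)  # users conflicted with this paper
        reviewers: set = field(default_factory=set)  # assigned reviewer identities

    def can_assign(user, paper, pc_chairs, oob_chair):
        # Nobody assigns reviews to a paper they're conflicted with.
        if user in paper.conflicts:
            return False
        # Papers conflicted with a program chair (e.g., the chairs' own
        # submissions) are routed to a separate "OOB chair" for assignment.
        if paper.conflicts & set(pc_chairs):
            return user == oob_chair
        return user in pc_chairs

    def visible_reviewers(user, paper):
        # A conflicted user (including a chair, for their own paper) sees
        # that the paper exists, but not who is reviewing it.
        return set() if user in paper.conflicts else set(paper.reviewers)

    p = Paper("A chair's own submission", conflicts={"chair"}, reviewers={"alice", "bob"})
    assert not can_assign("chair", p, pc_chairs={"chair"}, oob_chair="oob")
    assert can_assign("oob", p, pc_chairs={"chair"}, oob_chair="oob")
    assert visible_reviewers("chair", p) == set()

The point is simply that a single hosted instance could enforce this split in software, keeping all the reviews integrated instead of scattering them across a second website.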
Finally, I strongly encourage the next local host of the SIGCOMM PC meeting to follow Brad Karp's lead and park an expert barista with a good espresso machine outside the meeting room. Simply awesome!