I would like to share a few insights from sitting on the Information Processing in Sensor Networks (IPSN) 2015 Shadow Technical Program Committee. I wouldn't consider these to be words of "wisdom", per se; I'm still a young researcher. But hopefully someone can get something out of reading this.
The responsibility of a technical program committee (TPC) for a conference is to select papers that will form a strong conference program. The IPSN program chair, Dr. Omprakash Gnawali (Om) of the University of Houston, invited students to form a Shadow TPC as an educational experience and networking opportunity. With students from ETH Zurich, University of Virginia, Carnegie Mellon University, Texas A&M, University of Houston, and myself from Rice University, we had a geographically diverse set of students at the committee table. We were also joined remotely by students from Clemson and Utah.
Together, our loosely-defined goal was to select papers that: (1) reveal new technical insights, (2) stimulate lively discussion, and/or (3) provide useful tools to the community. Underlying all of this is that papers implicitly had to: (0a) be technically sound and (0b) be written clearly. We shadowed the task of the real IPSN TPC: Of the 40 papers that opted into the IPSN Shadow TPC evaluation, we were to select 10-12 papers for acceptance into the conference program.
As the TPC chair, Om guided the committee through a structured selection process. He began by distributing papers amongst the committee members. We formulated our reviews through the HotCRP (pronounced "Hot Crap") system, granting each paper a score from 1-5 (Reject, Weak Reject, Weak Accept, Accept, Strong Accept), and assigning our expertise on a paper from 1-4 (No familiarity, Some familiarity, Familiar, Expert). Each paper had three to five reviewers; each reviewer had ten to eleven papers.
On the day of the committee meeting, Om assigned each committee member three papers for which we would lead the discussion -- we only led papers we had previously reviewed. After summarizing the paper and briefly covering the reviewers' written positive and negative points, the leader would open the table up for discussion. Others who had reviewed the paper would often chime in with their opinions. In order to be accepted, a paper needed a "champion": a reviewer who would stand up for and defend the paper's acceptance. Other members could choose to play devil's advocate, detracting from the paper's merits. Eventually, the entire committee would come to a consensus and we would mark the paper for acceptance or rejection. Occasionally, we would defer a decision by asking for an additional review or simply punting the discussion to later in the afternoon. Through this process, we selected the papers of sufficient quality to get into the conference.
I went first, summarizing the first paper's reviews. I proceeded to champion the paper, giving my personal opinion on why the paper should be accepted. This particular paper showcased a cleverly designed tool that had the potential to advance the research community and provided insights into its usability design. I opened the table for discussion, but it was fairly clear this paper was strong -- reviewers had given it 4,5,5 -- and the table easily came to consensus for acceptance.
Some other papers were even easier to reject. Those with little academic merit had no chance of getting into the conference. Those with two 1's were invariably rejected after a brief discussion.
But the fun came in discussing the papers in between. For many papers, we discussed the completeness of the evaluation, the academic positioning of the work, and the significance of the result. Reviewers had to convince other committee members of the reasoning behind the arguments leading to their acceptance/rejection recommendation. Before accepting any paper, we discussed whether its quality was sufficient for the conference. It was challenging to stake a position on some papers, especially those that fell outside the scope of our expertise. But we made rational decisions based on our limited knowledge and defended our positions until we reached consensus. Occasionally we had to vote to determine a paper's fate.
At the end of the day, I realized I wasn't terribly impressed with the intellectual contributions of the papers that went through. The papers we had selected were complete in form, and interesting to a degree, but only incrementally advanced the field. There were few groundbreaking, or even surprising, contributions. (It's possible that the highest quality IPSN papers opted not to be evaluated by the Shadow TPC.) But the papers were nonetheless of decent quality and would form a good academic conference program.
It was exhausting to go through the Shadow TPC, and I can only imagine how much more intimidating it is to go through an actual TPC, where you must defend your opinions against technical experts in your research community. But from this experience, I learned a few key lessons.
The best preparation for the meeting was to write a good review. The review became the basis of the discussions and helped the reviewer form their opinions more clearly. A good review tended to incorporate highlights and suggestions regarding intellectual depth, technical correctness, positioning in the literature, and writing style and structure, ultimately building a case for acceptance or rejection. The committee appreciated reviewers who thoroughly studied the paper, examining its true contributions to the field and identifying potential technical pitfalls, especially with regard to faulty assumptions, technical inaccuracies, and unrealistic evaluations. With such a review in hand, it was easy for reviewers to defend their positions on whether or not a paper should be accepted.
It quickly became obvious that the optimal program committee strategy was to evaluate a paper not from the author's perspective, but from the reader's. That is, no effort could be wasted feeling bad that the author had done a lot of work. Instead, the work was best evaluated purely on what the research community could get out of the paper: technical designs, novel insights, intellectual investigations, and real-world studies, to name a few. Defending a paper from the author's perspective came across as poorly reasoned, so we mostly refrained from taking that position.
From sitting on the shadow program committee, it became especially clear that authors should take the mindset of the program committee to maximize the chances of their paper's acceptance.
This can be summarized in one sentence: Make it easy for others to defend your work.
Here are some tips derived from such thinking:
This is much easier said than done. I struggle with the paper writing process a lot. But I think taking on the persona of the research community will help set a researcher up for success. To that end, I strongly recommend that students take advantage of Shadow TPC opportunities to understand the nature of the beast that is academic paper selection. It was a most educational -- albeit exhausting -- experience.
For more about what makes a good systems paper, I suggest the following two links:
How (and How Not) to Write a Good Systems Paper (Roy Levin, David Redell)
How to get a paper rejected from MobiSys (David Kotz)