Peer Review

  • I don't consider myself an expert in computer networks research, but I still have 10 accepted (and ~20 rejected) papers with ~100 citations in journals, conferences and workshops from my 3 years working in R&D. Hence my comments below derive from my non-academic and more development-oriented background.

    Peer review is completely broken nowadays for the following reasons:

    - Reviewers might not have relevant expertise to judge whether a paper is good or not. I am not talking about top-notch conferences, mainly lower-tier ones and workshops. I have seen a lot of professors pushing paper reviews to their PhDs, or even students. I was also invited to review a paper as soon as I submitted my first one (and it was rejected, btw)! In another workshop I was invited to review papers just because I was a colleague of one of the organisers.

    - Old-school academia. This might not apply to all fields or all of academia, but I have had good papers rejected because they didn't have a lot of math or simulations! My paper examined an actual implementation of an SDN (software-defined networking) protocol or strategy, using platforms and orchestrators that take weeks to set up and integrate (Opendaylight, Openstack, OpenMANO, etc.), with actual experiments on real users' video activity, yet it was rejected because I didn't provide any simulation. Jeez, novelty does not come only from theory; somebody has to implement and test these things on actual machines.

    - Politics. I won't say much on this aspect, other than that a colleague of mine racked up 1000 citations in the same time span just because there was a "mafia" (his word, not mine) of professors, lecturers and PhDs reviewing and accepting all of the group's papers, padding them with each other's references and inviting one another to each other's conferences and workshops.

  • Rodney Brooks's subsumption architecture revolutionized robotics. Back in the 90s you could replicate his work with Attila and Genghis and recreate hexapod robots for under $1000, which was a pittance for a robot.

    He was at MIT when he submitted those papers. The churlishness of those reviews is staggering. What if he had given up and gone on to something else?

  • Peer review is broken. More than once I've learned that papers get through peer review more easily if you cite a reviewer's paper (of course you don't know in advance who reviews it, so it's basically luck). I often got comments about missing related work "from relevant authors".

  • There are two types of peer review.

    1. Pre-publication peer review: reviewers examine the work before it's presented to everyone else.

    2. Post-publication peer review: the paper is posted on an online forum where others can read, comment on, and cite it. Papers can be accepted into journals and go through peer review later.

    Post-publication peer review does not mean that authors can't use pre-publication peer review. Scientists can (and should be encouraged to) ask their colleagues to review papers before posting them online, to spot errors.

    Some fields have a long tradition of publishing and circulating working papers, manuscripts, drafts, posters, etc. years before the final paper is submitted for publication and pre-publication peer review. That works as well.

  • I think peer review is quite important.

    Note the word “peer,” though.

    I would not peer review papers on robotics or AI, as those are not my forte. I could review papers on some generalizations of software engineering or the Swift programming language.

    Editing is a different matter. My mother was a scientific editor, and absolutely brutal. She edited a book I wrote on writing software for a certain specialty of Web site, and it was quite humbling. She knew nothing at all about the technology, yet provided an immensely valuable service.

    The book is long past its “sell-by” date, yet stands as maybe the only “perfect” artifact of my life, thanks to her.

    Then there are comment threads in venues like this one, which some might equate to “peer review.”

    There was an old Dilbert comic on peer review. I don’t think it can be found online, but it was in “Build a Better Life by Stealing Office Supplies.”

    It was entitled “The Joy of Feedback,” and probably applies to comment threads.

  • Peer review is flawed, but I don't think it's broken. For context, I'm a late-year PhD student in machine learning theory, and I submit almost entirely to conferences, which is the norm in my field.

    To me, the big problem is that there is little incentive for experts to review papers. I write a few papers every year, and I review 5x that number. Slightly more senior people -- postdocs and junior faculty -- may review more like 30-40 papers. Most papers I review are not good papers. This is especially true at the big machine learning conferences like NeurIPS, ICML, and ICLR, where a disconcertingly large fraction (1/3?) of submissions are outright bad. Most of them are at best "meh". So there's not much benefit to the reviewer beyond some notion of community service.

    I think my ~colleagues (PhD students, postdocs, young faculty) have similar opinions. We've certainly discussed these issues in the past. One positive step NeurIPS has taken is giving "best reviewer" awards that come with a free registration. That's a nice benefit, even if it won't on its own incentivize good reviews. Some people have suggested outright paying people for reviews at some quasi-competitive rate, maybe $100 per review (note that this is not great -- a good review takes at least several hours if the paper is not terrible). NeurIPS also added a requirement this year that all submitting authors must review, which is an interesting experiment.

    My impractical wish is that we also had some kind of "worst reviewer" sticker for all the reviewers who just paraphrase the abstract, add some milquetoast feelings on novelty, and weak reject. Less facetiously, some kind of Stack Overflow-style reputation system might be useful. As it is, there's so little external signal about being a good or bad reviewer that the incentives are pretty wrong. Some kind of points system might help.
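
    To make the points idea a bit more concrete, here is a minimal sketch (in TypeScript) of what such a system could track; the event names and weights are entirely made up, purely to illustrate the shape of it:

      // Hypothetical reputation events for a reviewer. Names and weights are
      // invented only to illustrate a Stack Overflow-style points system.
      type ReviewEvent =
        | "review_submitted_on_time"
        | "review_rated_helpful_by_authors"
        | "review_rated_helpful_by_area_chair"
        | "review_flagged_as_superficial"; // e.g. paraphrases the abstract and weak-rejects

      const WEIGHTS: Record<ReviewEvent, number> = {
        review_submitted_on_time: 1,
        review_rated_helpful_by_authors: 5,
        review_rated_helpful_by_area_chair: 10,
        review_flagged_as_superficial: -15,
      };

      // A reviewer's reputation is just the weighted sum of their review history.
      function reputation(history: ReviewEvent[]): number {
        return history.reduce((score, event) => score + WEIGHTS[event], 0);
      }

      // Example history: 1 + 5 - 15 = -9
      console.log(reputation([
        "review_submitted_on_time",
        "review_rated_helpful_by_authors",
        "review_flagged_as_superficial",
      ]));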

    But my overall point might be: reviewers by and large aren't malicious, jealous people who are only rejecting your work because they want to protect their own turf. They are more likely good faith actors who are trying to balance time spent on a pretty thankless but societally useful job (reviewing) with time spent on much more personally enjoyable and rewarding activities (research, family, hobbies, etc). There aren't really obvious solutions to tilt the balance toward better reviews that can still handle the volume of papers.

  • That's peer review from 3+ decades ago. Things have changed substantially. PIs don't have enough time to review all that stuff, a lot of it is delegated to students, and reviews can be hasty, with reviewers who may latch on to a specific detail and let the rest pass through. It's not even that they are not trying hard enough - peer review is simply not doable in 2020. If you honestly want to scrutinize and improve every detail of a manuscript, you're better off posting it on Reddit or Twitter.

    > clamor for double blind anonymous review

    That's not even possible in most fields. You can easily tell which lab a paper comes from (you can even often guess who the reviewer is).

  • Any ideas for how the system might work differently? Could peer review learn anything from the open source community?

  • This topic comes up again and again. (Here are some references from a quick search: https://news.ycombinator.com/item?id=10531374 https://news.ycombinator.com/item?id=18523847 https://news.ycombinator.com/item?id=8731271 https://news.ycombinator.com/item?id=18595074 https://news.ycombinator.com/item?id=22251079)

    But there are not many good solutions.

    Most seem to agree that we should move over to a more open system, like OpenReview, and also that all publications should be open for everyone afterwards, not behind some paywall. But these are only two aspects of the system, and they say little about the peer reviewing itself.

    The community already has a trend of publishing more and more directly on arXiv, in addition to submitting to a conference. If the conference peer review filters a paper out for some unlucky reason, it's still open for everyone to see. That is already an improvement over what we had before, but it's probably not the optimum yet. What is the optimum?

  • I love this topic of peer review. It feels like arguing about methodologies, software QA & testing, and learning organizations back in the 90s.

    Peer review and the reproducibility crisis are the absolute bleeding edge of the human experience. Science, governance, transparency, accountability, progress, group decision making. All of it.

    I encourage all the critics and thought leaders to also consider how to apply these lessons elsewhere.

    Everything said about peer review also applies to policy making, product development, news reporting, and investigative journalism.

    Peter Drucker, way back when, stated that management is a technology. Same applies to peer review and reproducibility.

    For whatever reason, this post prompted me to think "sociology of scientific progress". I encourage critics to also factor in the collective human psychology. It's covered by so many books, like the classics The Structure of Scientific Revolutions and Diffusion of Innovations. (Sorry, none of the more contemporary titles, a la Predictably Irrational, are jumping into my head at the moment.)

  • Brooks's initial papers like "Intelligence Without Representation" and "Elephants Don't Play Chess", and robots like Genghis and Attila, changed the robotics landscape. It was a leap akin to going from mainframes to an Apple II.

    Brooks was extraordinarily well placed to lead this switch in robotics. He was at MIT, a place with very talented engineers - mechanical, electrical, etc. - who could work on the robots. MIT also had limitless funding and a culture of disruptive research with long payoffs. This might not have been possible at other places like CMU, where robotics has more industrial applications. I remember watching the movies of Attila and Genghis over and over again, downloaded over an FTP connection!

    https://www.youtube.com/watch?v=-6piNZBFdfY

  • How is it that peer review hasn't changed in 30 years? With the number of people affected, has nobody had the time and/or influence to improve the process?

    There are no doubt lots of smart people concerned by this...

  • Science did well before radical ideas could be voted down by peer review.

    Would Einstein and Darwin have made it through peer review? I doubt it.

    Which current-day people like that are being held up by their "peers"? I fear we'll never know.

  • Peer review is something I've been thinking about a lot lately as an outsider. I've never gone through a formal peer review process or worked in academia, but it's pretty obvious that the process has significant problems.

    However, the problem that peer reviews are trying to solve (conferring legitimacy on a paper by the scientific community) is so important that we can't give up on it. Sure, pre-print servers mean science can move faster, but it also means anything can be "science." If we normalize publishing papers as preprints, there's very little stopping conspiracy theorists from publishing a paper and having it look plausible.

    The sudden rush of media coverage for COVID research has really exacerbated the problem. Journalists should cover preprint studies responsibly, but frankly most don't, and who can blame them when even the scientific community is itself struggling to review the papers.

    I volunteer with a group[0] that's trying to address this problem in a small way by creating visual summaries/reviews of popular preprints for mainstream readers. This is a side project, so the review process is just a shared Google doc for each paper, but here's what I'd like to see (with a rough data-model sketch after the lists below):

    As a reviewer:

      * View diffs between paper revisions
      * Comment on papers inline, like a pull request
      * +1/-1 a submission, or just leave questions without a decision
      * Gain "karma" for good reviews to build my reputation in the community (like Stack Overflow)
    
    As a paper author:

      * Reply to comments to understand and explain context
      * Post revisions
    
    As a reader:

      * See visual summaries of papers, with inlined reviews (like Research Explained)
      * Vote on both studies and reviews to surface the good ones
      * Browse profiles of authors and reviewers to understand their qualifications/contributions
    
    I do think https://distill.pub is on the right track. Their papers are significantly more visual/readable than most, and they require contributors to open a pull request against the distill repo. However, their process isn't approachable for non-technical fields.
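
    To make the wish list above more concrete, here is a rough sketch of the data model those features seem to imply. Every interface and field name is my own invention for illustration, not anything Research Explained or distill.pub actually uses:

      // Hypothetical data model for the review workflow described in the lists above.
      // All names are invented for illustration.

      interface Revision {
        revisionId: string;
        paperId: string;
        createdAt: Date;
        pdfUrl: string;           // diffs would be computed between consecutive revisions
      }

      interface ReviewComment {
        commentId: string;
        revisionId: string;
        reviewerId: string;
        anchor?: { page: number; line: number };  // inline, like a pull-request comment
        body: string;
        decision?: "+1" | "-1";   // optional: a comment can just ask a question
        replies: ReviewComment[]; // authors reply to explain context
      }

      interface Vote {
        voterId: string;
        targetId: string;         // a paper revision or a review
        value: 1 | -1;            // surfaces good studies and good reviews
      }

      interface Profile {
        userId: string;
        roles: ("author" | "reviewer" | "reader")[];
        karma: number;            // earned from upvoted reviews, Stack Overflow style
      }

    Nothing in the sketch is meant to be definitive; it just shows that the reviewer, author, and reader features above can all hang off the same small set of objects.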

    [0]: https://www.researchexplained.org