Hacker News

Some of the things that peer reviewers do, in my experience, in biology:

- question whether or not the conclusions you are making are supported by the data you are presenting

- ask for additional experiments

- evaluate whether or not your research is sufficiently novel and properly contextualized

- spot obvious red flags - you seem to discount this, but it's quite valuable

In my experience, the process of peer review has been onerous, sometimes taking years of work and many experiments, and it has by and large led to a better end product. There are less-than-great aspects of peer review, but it's definitely not the joke you characterize it as.

I'll add that in biology and adjacent fields, it makes no sense to discount peer review because the reviewers do not repeat your experiment - doing so is simply not practical, and you don't have to stretch your imagination very far to understand why.



I also work in biological sciences research, but I'm more skeptical of peer review than you appear to be. My main criticism is that peer review is an n=2 process. Why not publish an unreviewed pre-print in bioRxiv and explicitly solicit constructive, public feedback directly on the pre-print on bioRxiv? I envision something similar to GitHub where users can open issues and have nuanced discussions about the work. The authors can address these issues by replying to users and updating the data and/or manuscript while bioRxiv logs the change history. Journals can then select sufficiently mature manuscripts on bioRxiv and invite the authors to publish.

This would massively increase the number of people that review a manuscript while also shortening the feedback cycle. The papers I've published have typically been in the peer review process for months to years with just a handful of feedback cycles of sometimes dubious utility. This can be improved!
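To make the proposal concrete, here is a minimal sketch (hypothetical names throughout; nothing here reflects bioRxiv's actual systems) of the GitHub-style model described above: readers open issues against a preprint, authors reply and revise, and the server logs the change history.

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    body: str

@dataclass
class Issue:
    # A GitHub-style issue opened against a preprint
    title: str
    comments: list = field(default_factory=list)
    open: bool = True

@dataclass
class Preprint:
    title: str
    revisions: list = field(default_factory=list)  # logged change history
    issues: list = field(default_factory=list)

    def revise(self, text, note):
        # Authors update the manuscript; every revision is retained
        self.revisions.append((text, note))

    def open_issue(self, title):
        issue = Issue(title)
        self.issues.append(issue)
        return issue

# A reader opens an issue, the authors respond and revise, the issue closes
p = Preprint("Example study")
p.revise("v1 text", "initial submission")
issue = p.open_issue("Figure 2 statistics unclear")
issue.comments.append(Comment("reader", "Which test was used?"))
p.revise("v2 text", "clarified statistics in Figure 2")
issue.open = False
```

The point of the sketch is that feedback and revisions are first-class, linked objects with history, rather than a one-shot exchange with two anonymous reviewers.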

Edit: I forgot to mention the issue of politics in peer review! If you're in a relatively small field, most of the big researchers all know each other, so peer review isn't truly blinded in practice. Junior researchers are also pressured into acquiescing to the peer reviewers rather than having an actual scientific debate (speaking from experience).


As it happens, I'm building "Github for Journals".

I pivoted away from attempting a crowd-sourced review approach with a reputation system toward supporting journals in going Diamond Open Access.

But the platform I've built supports co-author collaboration, preprints and preprint review, journal publishing flows, and post publication review - all in a continuous flow that utilizes an interface drawing from Github PRs and Google Docs.

You can submit a paper, collect feedback from co-authors, then submit it as a preprint and collect preprint feedback, then submit to a journal and run the journal review process, then collect feedback on the final published paper. And you can manage multiple versions of the paper, collecting review rounds on each version, through that whole process.
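The continuous flow described above can be sketched as a simple state machine (a hypothetical illustration of the described design, not the platform's actual code): a paper advances through stages while review comments attach to whichever version is current at each stage.

```python
# Stages of the continuous flow: draft -> preprint -> journal review -> published
STAGES = ["draft", "preprint", "under_review", "published"]

class Paper:
    def __init__(self, title):
        self.title = title
        self.stage = "draft"
        self.versions = []  # manuscript versions, in order
        self.reviews = {}   # version index -> [(stage, comment), ...]

    def submit_version(self, text):
        # Each new version starts its own round of review
        self.versions.append(text)
        self.reviews[len(self.versions) - 1] = []

    def add_review(self, comment):
        # Feedback attaches to the current version, at any stage
        self.reviews[len(self.versions) - 1].append((self.stage, comment))

    def advance(self):
        self.stage = STAGES[STAGES.index(self.stage) + 1]

# One paper moving through the whole flow
p = Paper("Example")
p.submit_version("v1")
p.add_review("co-author feedback")
p.advance()                 # preprint
p.add_review("preprint feedback")
p.advance()                 # under_review
p.submit_version("v2")      # revision during journal review
p.add_review("journal review round 1")
p.advance()                 # published
p.add_review("post-publication comment")
```

The design choice this illustrates: because every form of review is the same operation on the same object, co-author, preprint, journal, and post-publication feedback naturally live alongside each other.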

It's in alpha, I'm pushing really hard with a short runway to get the journal flows to usable beta while trying to raise seed funding... the catch being I feel very strongly that it needs to be non-profit, so seed funding here is grants and donations.

I'm looking for journal editors who want to participate in UX research. I'm also interested in talking to folks who run preprint servers to see if they'd have any interest in using the platform. If you (being any reader) know any, or have leads for funding, reach out: dbingham@theroadgoeson.com


When you say "submit to a journal" does that mean you are not a journal? Why operate as a preprint server, but not offer to publish with peer-review? (Perhaps I'm misinterpreting your comment).


It doesn't sound like that poster operates as a journal, and that makes sense. Academic researchers need to publish papers in long-standing and highly respected journals in order to be promoted and eventually gain tenure. Journals do not add value by simply providing space for researchers to publish their work—they add value by existing as a reputable brand that can endow select researchers with academic and social credit.


As mentioned in my other comment, crappy peer-review is a big problem for most journals, so a solution to that needs to be found.


Yeah, before I pivoted to trying to flip journals, I spent a year exploring crowd-sourcing with an eye on improving peer review. After building a beta and collecting a bunch of user feedback, my conclusion is that academics on the whole aren't ready to crowd-source. Journal editors are still necessary facilitators and community organizers. So that led to exploring flips.

However, I think there's a lot that software can do to nudge towards better peer review. And once we have journals using a platform we can build lots of experimental features and make them easy to use and adopt to work towards improving it.

I've kept crowd-sourced preprint review in the platform - though I removed the reputation system since UX research suggested it was an active deterrent to people using the platform - to enable continued experimentation with it. And the platform makes it easy for preprint review to flow naturally into journal review and for the two to live comfortably alongside each other. The idea being that this should help people experiment with preprint review without having to take a risk by giving up journal publishing.

And the platform has crowdsourced post-publication review as well.

My thought is that if we can get the journals using the platform, that will bring authors and reviewers onto it. Since preprint and post-publication review are really easy to do in the platform, that should drastically increase the usage of both forms of review. Then folks can do metascience on all of the above and compare the three forms to see which is most effective. Hopefully that can then spur movement toward better review.

I also want to do work to ensure all the artifacts (data, supplementary material, etc.) of the paper live alongside it and are easily accessed during review. And work to better encourage, reward, and recognize replications. I think there's a lot we can explore once we have a large portion of the scholarly community using a single platform.

The trick is getting there.


The platform is intended to host many journals in the same way Github hosts many open source projects. And to facilitate interactions, conversation, and collaboration among authors, editors, and reviewers across them.


I think the key is that peer review is a promise of an n=2 process.

There's no promise that an unreviewed pre-print is going to get two constructive readers. It's also wildly subject to bias - being on a pre-print with a junior, female researcher was eye opening as to the merits of double blind review.


You could blind the pre-print process, too?


I've not seen a major attempt to blind pre-prints, and given you have to remove some identifying information for blinding, I think that would be a tall order.


Why would that be a tall order? Seems fairly simple and straightforward, doesn't it?

You'd set up a server where people have accounts, but publishing pre-prints is anonymous by default, and identities can be revealed later.

In the current peer review system, people already have to produce papers with those identifiers removed. They can do exactly the same in the pre-print world, can't they?


A great many papers in my field contain contextual details about the settings the studies were conducted in that would effectively deblind them.

That sort of betrays the idea of a pre-print, in my opinion, because they should not depend on "Someday we'll come back and fix this".


How does conventional peer review work for those papers?


> Junior researchers are also pressured into acquiescing to the peer reviewers rather than having an actual scientific debate

Yes. When I was teaching at the graduate school level, doctoral students sometimes came to me for advice about how they should respond to peer reviewer comments. Those comments were usually constructive and worthwhile, but sometimes they seemed to indicate either a misunderstanding or an ideological bias on the part of the reviewer. (This was in the social sciences, where ideology comes with the territory.) But even in those latter cases, the junior researchers just wanted to know how they could best placate the reviewer and get their paper published. None had the nerve, time, or desire for an actual scholarly debate.


As both a grad student and a postdoc I wrote successful appeals to peer-review rejections.


Yes, you can certainly do that, but I wonder how long the appeal and approval process took? I'd bet it's measured in months.


It was considerably faster than a wholesale resubmission to a new journal, and landed the paper in a better home than it would otherwise have found.


Exactly. The quality of peer review is generally pretty poor. There are a lot of really terrible studies and reviews being published in high-quality journals, from people at places like the Mayo Clinic, that make you wonder how they passed peer review.

And then on the other hand, if you ever actually have to submit a paper to peer review, you'll see how clueless a lot of the reviewers actually are. About half give useful critiques and comments, but the other half seem to have weird beliefs about the subject in question, and they pan your paper for not sharing said weird beliefs.


I agree with your suggestion and would 100% welcome that process - though I don't think the two are necessarily mutually exclusive. As I see it, the main difference between the status quo and the more open process you suggest is that, in theory, reviewers hand-picked by the editor are more likely to have directly relevant experience, ideally translating to a better and potentially more efficient review. Of course, that also comes with the drawbacks you mentioned: the reviewers are easily de-anonymized, and they may be biased against your research since they're essentially competitors. I've had the good fortune of not being negatively affected by this, but I have many colleagues who have not been so lucky.

Edit: Also, to comment more on my own experience, I was lucky to be working in a well-established lab with a PI whose name carried a lot of weight and who had a lot of experience getting papers through the review process. We also had the resources to address requests that might've been too much for a less well-funded lab. I'm aware that this colours my views, and I didn't mean to suggest that peer review, or the publication process, is perfect. The main reason I wanted to provide my perspective is that on HN there's often an undercurrent of criticism levied against the state of scientific research that isn't entirely fair, in ways that may not be obvious to readers who haven't experienced it first-hand.



