By: Janet Wilson
Publishing is the “be all end all” for researchers – publications fund their labs, pay their research teams, build their reputations, and allow them to contribute to their field. Once an innovative research question has been developed, after the infinite hours spent in the lab and the countless hours spent analyzing and synthesizing data and writing reports, a scientist must cast what they believe to be a polished product – one that is both meaningful and relevant to their field – into the abyss that is the peer review process. This process is not without its downfalls; it is highly criticized and should be revised in order to benefit science as a whole.
Here at MSURJ, we also use a peer review process to assess the validity of submissions to our journal. All submissions are reviewed by three peer reviewers who are experts in the field of the article. This allows for critical analysis of both the quality and validity of the research, with the ultimate goal of determining whether the article is worthy of publication. Nonetheless, our peer review process differs from that of journals like Nature, Cell, and the like because we, the editors, ultimately want every eligible article to be published. Resources aren’t scarce and it doesn’t cost us to publish undergraduate research, so there isn’t a limit on the number of articles that can appear in each edition of the journal. The more articles that are deemed eligible according to our criteria, the more can be published.
By contrast, the major journals publishing science today have extremely low acceptance rates. For example, that of Science is about 7% and declining every year. The peer review process is inherently rigorous – and so it should be – in order to ensure that only the highest quality science is published. It is usually a blinded process, meaning the reviewers’ comments are anonymous; in double-blind review, the authors’ identities are also hidden from the reviewers.
Some argue that the peer review process keeps science as we know it locked in a slow, expensive cycle that is prone to error. It relies on the opinions of researchers in the field to deem an article worthy or, the more frequent outcome, unworthy of publication. This is inherently subjective and can frustrate authors when, for example, two peer reviewers deem an article acceptable while a third does not. The low number of peer reviewers allotted to each paper perpetuates this problem. Additionally, many reviewers turn down requests to review articles because the work is so time consuming, or review them in a rushed, superficial manner.
So how can the peer review process be improved, you ask? First off, it would be best if reviewers could be recognized in some way for their contribution to the article. Although this would eliminate the anonymity of the process, it would reward reviewers for their contributions and lead to increased interest in reviewing positions.
Additionally, the process could be improved by adopting a rating system that scores reviewers based on their involvement in the editing process. Reviewers could be ranked by their scores, and those with the highest scores could be rewarded by the journal for their contributions. This would encourage greater commitment from reviewers to improving the paper in question.
Another, more radical approach would be to eliminate the peer review process altogether. Articles could be published without prior verification of their validity; as researchers in the field read them, each article could be up-voted or down-voted based on its validity. This process would be similar to the manner in which citations of published articles are currently tracked. Over time, articles of the highest quality would acquire the most up-votes, and articles with many down-votes could be recognized by their authors as needing revision.
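The voting scheme described above is simple enough to sketch. The following is a minimal, hypothetical illustration (the class name, threshold, and method names are all assumptions, not part of any real system): each reader casts a +1 or −1 vote, an article’s score is the running sum, and a strongly negative score signals that revision may be needed.

```python
from collections import defaultdict

class VotingLedger:
    """Hypothetical ledger for post-publication up-/down-voting of articles."""

    def __init__(self, revision_threshold=-3):
        # Each article starts at a score of 0; votes accumulate over time.
        self.scores = defaultdict(int)
        self.threshold = revision_threshold

    def vote(self, article_id, value):
        # A vote is +1 (the reader finds the article valid) or -1 (questionable).
        if value not in (+1, -1):
            raise ValueError("vote must be +1 or -1")
        self.scores[article_id] += value

    def score(self, article_id):
        return self.scores[article_id]

    def needs_revision(self, article_id):
        # A strongly negative score flags the article for its authors.
        return self.scores[article_id] <= self.threshold
```

In practice, such a system would need vote weighting, reviewer verification, and safeguards against abuse; this sketch only captures the basic tallying idea.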
This method would also present challenges, such as ensuring that only specialists in a particular subject area are allowed to vote. Nevertheless, because far more individuals could weigh in on each article than the handful of reviewers in the traditional process, it would, statistically speaking, provide a more reliable assessment of an article’s validity.