What’s wrong with the peer review process and how can it be improved?

By: Janet Wilson

Publishing is the “be all and end all” for researchers: publications pay their lab bills, build their reputations, and allow them to contribute to their field. Once an innovative research question has been developed, after the infinite hours spent in the lab and the countless hours analyzing and synthesizing data and writing reports, a scientist must cast what they believe to be a polished product, one that is both meaningful and relevant to their field, into the abyss that is the peer review process. This process is not without its downfalls; it is highly criticized and should be revised in order to benefit the field of science.

Here at MSURJ, we also use a peer review process to assess the validity of submissions to our journal. All submissions are reviewed by three peer reviewers who are experts in the article’s field. This allows for critical analysis of both the quality and validity of the research, with the ultimate goal of determining whether the article is worthy of publication. Nonetheless, our peer review process differs from that of journals like Nature and Cell because we, the editors, ultimately want every eligible article to be published. Resources aren’t scarce and it doesn’t cost us to publish undergraduate research, so there is no limit on the number of articles that can appear in each edition of the journal. The more articles that are deemed eligible according to our criteria, the more can be published.

In contrast, the major journals publishing science today have extremely low acceptance rates; Science’s, for example, is around 7% and declining every year. The peer review process is inherently rigorous – and so it should be – in order to ensure that only the highest-quality science is published. It is usually a blinded process, meaning that the authors’ identities are not revealed to the peer reviewers, and the peer review comments are fully anonymous.

Some argue that the peer review process keeps science as we know it locked in a slow, expensive process that is prone to error. It relies on the opinions of researchers in the field to deem an article worthy or, more frequently, unworthy of publication. This is inherently subjective and can frustrate authors when, for example, two peer reviewers deem an article acceptable while a third does not. The small number of peer reviewers allotted to each paper perpetuates this problem. Additionally, many reviewers turn down requests to review articles because the work is so time-consuming, or review them in a rushed manner.

So how can the peer review process be improved, you ask? First off, it would be best if reviewers were recognized in some way for their contribution to the article. Although this would eliminate the anonymity of the process, it would reward reviewers for their contributions and lead to increased interest in reviewing positions.

Additionally, the process could be improved by adopting a rating system that scores reviewers based on their involvement in the editing process. Reviewers could be ranked by their scores, and those with the highest scores could be rewarded by the journal for their contributions. This would lead to greater commitment from reviewers to improving the paper in question.

Another, more radical approach would be to eliminate the peer review process altogether. Articles could be published without prior verification of their validity; as researchers in the field read an article, they could up-vote or down-vote it based on its validity. This would be similar to the way citations of published articles are currently tracked. Over time, articles of the highest quality would acquire the most up-votes, and articles with many down-votes would signal to their authors that revision is needed.

This method would present its own challenges, such as ensuring that only specialists in a particular subject area are allowed to vote. Nevertheless, statistically speaking, it would be a more reliable method of assessing an article’s validity, because far more individuals would vote on each article than do in the peer review process.

The Neuroscience of Eating Disorders

Written By: Laura Meng

During the 19th century, Sir William Gull formally proposed the clinical term Anorexia nervosa (AN) to encompass a set of homogeneous and aberrant thought processes and behaviours: a salient pursuit of weight loss despite low body weight, fear of weight gain, substantial value attributed to thinness, and specific physiological impacts, including amenorrhea and emaciation.(1) Since then, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) has additionally included Bulimia nervosa (BN), Binge eating disorder (BED), and not-otherwise-specified subcategories of eating disorders (EDs).(2) BN comprises alternating episodes of binging—consumption of food beyond satiation—and compensatory behaviours, including purging, abuse of laxatives, and excessive exercise.(1) EDs are often temporally comorbid with affective psychiatric disorders, including anxiety and depression. They are among the psychiatric disorders with the highest morbidity, and exhibit high rates of suicide and relapse.(3)


A unifying neuropsychological dimension, an idée fixe (the “domination of mental life” by thoughts of food consumption and an inability to inhibit those thoughts), is present across EDs.(2) It is proposed that maladaptive habit formation, neuromodulator dysfunction, and stress contribute significantly to their course of development.(3)


Both clinical trials and rodent models suggest that an imbalance between goal-directed behaviours (GDB) and habitual behaviours can result in compulsivity: a repeated inability to inhibit inappropriate responses despite adverse consequences.(4) GDB, or action-outcome learning, involves a cognitive link between an action and its desired reward. GDB are responsive to the magnitude of the reward, and a decline in action performance is expected if the value of the reward decreases; thus, they are sensitive to reward devaluation. Amygdalar, ventral striatal, dorsomedial striatal (DMS), and orbitofrontal cortical (OFC) activity are observed during GDB.(3) As a behaviour is repeated, habitual behaviour, or stimulus-response learning, emerges. Habitual behaviours are relatively insensitive to both devaluation and the action’s outcome. Instead, they are specific responses elicited by specific environmental cues. The anatomical areas active during habitual learning include the dorsolateral striatum (DLS) and the dorsolateral prefrontal cortex.(4) In patients with EDs, habitual behaviours are resistant to change.(2)


A trans-diagnostic model of EDs proposes that compulsivity is present in AN, BN, and BED, and can be treated through targeted psychotherapy.(6) In clinical paradigms, a deficit in DMS and OFC activity was present in patients with AN compared to controls, while in rodent models of AN, higher activity of the DLS was observed. These findings suggest that a deficit in GDB and/or excessive habit formation contributes to the compulsivity of AN.(4) Dopamine is pivotal to the cortico-striatal systems that underlie GDB and habit formation, modulating intracellular signaling cascades. Among its other functions, serotonin modulates affective states, and selective serotonin reuptake inhibitors are often used as an adjunct in current treatments of eating disorders.(5) Both of these neuromodulators deviate from expected functioning in individuals with eating disorders, though the specific mechanisms remain ambiguous. Furthermore, rodent models and patient self-reports delineate the significance of stress in shaping behaviour: stress often precedes the deleterious compensatory behaviours, including binging and purging, observed across EDs.(6)


Continued developments in psychiatry and neuroscience reiterate the importance of approaching EDs from multiple angles.(5) A randomized trial comparing the efficacy of non-specific psychotherapy versus psychotherapy targeting Regulating Emotions and Changing Habits (REaCH) aimed to improve AN treatment by refining an existing therapeutic technique.(6) Neurobiological models, including the development of a neurocognitive endophenotype of compulsion, rodent models using subneuronal knockouts, and continued efforts to elucidate the genetic biomarkers common to EDs, illustrate an integrative approach to understanding EDs. Sincere efforts from students, researchers, patients, and clinicians alike continue to further our understanding and the development of future treatments for individuals with eating disorders.



1. Gruber, R. (2016). Biological Psychiatry: Eating Disorders [PowerPoint presentation].

2. https://ajp.psychiatryonline.org/doi/pdf/10.1176/ajp.134.9.974

3. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4095887/

4. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4894125/

5. https://ac.els-cdn.com/S1364661311002427/1-s2.0-S1364661311002427-main.pdf?_tid=597e8564-e83e-4421-ba57-07815a8321be&acdnat=1542036617_cacba44270f312db6841da44e1d6ee15

6. https://www.cambridge.org/core/journals/psychological-medicine/article/targeting-habits-in-anorexia-nervosa-a-proofofconcept-randomized-trial/08F8A6A197890B65BE24CBA46634D401/core-reader

Pythagoras: Triangles and Triads

Written By: Yingke Liang

Have you ever listened to Western stringed music and enjoyed it? If so, you owe it all to Pythagoras (yes, the triangle guy).

Pythagoras, using his knowledge of numbers and of playing the lyre, studied the ratios of string lengths and the sounds produced at different ratios. He found that plucking a string stopped at exactly half its original length (a 1:2 ratio) produced a note an octave higher. Similarly, a string stopped at two-thirds of its length (a 2:3 ratio) produced a perfect fifth, and a 3:4 ratio produced a perfect fourth, another harmonious interval. These ratios involve the first four counting numbers, which also happen to add up to 10, something Pythagoras found deeply profound; he considered such ratios the “Music of the Spheres”. Pythagorean tuning centres on the 2:3 ratio, so tuning focuses on making the perfect fifths in tune. However, scales in Pythagorean tuning will always have one “fifth” that sounds about a quarter of a semitone flat, named the “wolf interval”: a strange creature that almost sounds like a fifth, but doesn’t. The third, sixth, and seventh notes in an ascending major scale are also slightly sharp, though not jarringly so. Despite these hiccups, Pythagorean tuning is still employed by string orchestras today, because it is described as sounding more “natural” than other tuning schemes. Pythagorean tuning was adopted during the medieval Ars Nova, a time when all formal music was religious, so the “Music of the Spheres” wasn’t just a Pythagoras thing. (As a side note, the equal-tempered system, the one keyboards are tuned in today, was developed to circumvent the wolf interval. The trade-off is that its fifths just don’t sound as sweet, and music doesn’t sound as natural as in Pythagorean tuning. I guess we can’t have it all.)
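For the curious, the wolf interval falls straight out of the arithmetic. A short sketch (not from the original article) builds a twelve-note scale by stacking 3:2 fifths and folding each pitch back into one octave; twelve stacked fifths overshoot seven octaves by a small error, the Pythagorean comma, which is exactly the flatness the wolf fifth absorbs:

```python
from fractions import Fraction

# Build a 12-note Pythagorean scale by stacking perfect fifths (ratio 3:2)
# and folding every pitch back into a single octave [1, 2).
pitch = Fraction(1)
scale = []
for _ in range(12):
    scale.append(pitch)
    pitch *= Fraction(3, 2)
    while pitch >= 2:
        pitch /= 2
scale.sort()

# Twelve stacked fifths overshoot seven octaves by the "Pythagorean comma",
# the small error that surfaces as the flat-sounding wolf interval.
comma = Fraction(3, 2) ** 12 / Fraction(2) ** 7
print(comma, float(comma))
```

Since one equal-tempered semitone is a ratio of about 1.0595, the comma of roughly 1.0136 is indeed close to a quarter of a semitone.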

Pythagoras wasn’t just lazing around developing musical theory, however. He lived during a tumultuous period of Greek history, when people needed something to believe in, from the abstract to miscellaneous objects such as bronzed pets. Unsurprisingly, the cult of Pythagoras worshipped numbers, specifically the positive integers; zero and the negative integers were considered to belong to another realm and were not objects of worship. Beyond this rather banal object of worship, the Pythagoreans had the usual cult-like rules, such as extremely strict vegetarianism, encouraged silence, the wearing of pure linen clothing, and a vendetta against beans, because Pythagoras thought that each time a person farted they lost a portion of their soul. Despite the strange hijinks, the Pythagoreans were led by a talented Renaissance man, and from this cult arose a system of musical harmonics that birthed Western art music.




What’s in a name?

Written By: Katharine Kocik

In terms of classifying organisms, names usually reveal a great deal about a species. The familiar binomial nomenclature system involves two levels of taxa: the genus name and the species name. The first word, the genus, reflects the recent phylogeny of a species, and the second separates species within the same genus. This system was first used consistently by Carl Linnaeus in the mid-18th century and remains the universal method of identifying species today. Despite its success so far, one must wonder at its implications as more and more species are discovered.

Current estimates say that only 1.5 million of roughly 8.7 million species have been discovered, leaving about 83% of the world’s biodiversity unnamed. The staggering volume of names that must be unique and universal, not even counting the majority of species that remain nameless, raises the concern of maintaining the system’s integrity. The International Code of Zoological Nomenclature, the International Code of Nomenclature of Bacteria, and the International Code of Nomenclature for algae, fungi, and plants address this through extensive sets of rules and exceptions.

The three Codes are very similar, so here the Code of Zoological Nomenclature (animals) will represent the rules for naming all organisms. These regulations generally allow the discoverer freedom in choosing a name, provided it is presented in a particular format and is not already in use – although a plant species and an animal species may share the same name.

The most significant criterion for a species name, aside from its binomial form, is its Latin form. Most names are derived from Greek or Latin words describing the species, such as the species name of the blue jay, cristata, which means “crested”.

If the name is not of Latin origin, a Latinizing suffix may be added. For example, a twirler moth species with unique yellow-white scales on its head was discovered in Southern California and Northern Mexico in 2017. Scientist Vazrick Nazari named it Neopalpa donaldtrumpi (adding -i to the surname), for its resemblance to Donald Trump’s hair and to call attention to the destruction of fragile habitats in the US.

Other memorable species names include La cucaracha, a pyralid moth, and Dissup irae, a fossil fly reported as difficult to see. As these names reflect, the Code “recommends” that even if a species has an unconventional name or is named after a person, there should be an association between the organism and the name. Upon the discovery of a new dinosaur species in China, for instance, director Steven Spielberg suggested the species name nedegoapeferima, formed by combining the last names of actors who starred in Jurassic Park. David Attenborough, the famous nature documentary presenter, has several species named after him, including multiple plant species and a dinosaur. Often scientists simply name new species after people they admire: four damselfly species in the genus Heteragrion take their names from the four members of the band Queen – Heteragrion brianmayi, Heteragrion freddiemercuryi, Heteragrion johndeaconi, and Heteragrion rogertaylori.

For now, the creativity of biologists will suffice to keep species names unique (the scarcity of Latin training in the 21st century notwithstanding) as scientists continue to work on the remaining 83% of species.


I’m sorry, what?

Written by: Howard Li

Two weeks ago, I met up with a friend whom I hadn’t seen in a long time. We met in first year and both studied life science, albeit in different departments. Facing the onslaught of tedious assignments, ruthless exams, and frankly ridiculous lab reports, we drifted apart in our second and third years. So, after an awkward exchange of pleasantries, I asked him how his research was going, hoping to spark a casual conversation about the fun times in a research lab as an undergraduate student. Little did I know, he would launch into a passionate verbal barrage of technical terms describing his work, one that took me back to my childhood years of trying to parse English from Chinese and not understanding the majority of either language.

“I’m sorry, what?” was all I could pathetically mutter when he finished. I watched his face change into a smirk as he realized, undoubtedly from my blank expression, that I must not have understood a lick of what he said (to my credit, I understood that he was doing something with viruses). He asked me what I thought of his project and I hit him with another cheeky “I’m sorry, what?” to confirm that I indeed had no idea what he was saying. With the awkwardness broken, we went on to catch up about each other’s lives, and I even got a confused expression out of him when I tried to describe my own project at the lab.

Now, I think we’re both pretty on top of our classes and generally pretty savvy at keeping up with science (i.e., we’re both pretty big nerds). So I was surprised that we had such a hard time describing our undergraduate-level research projects to each other. Granted, mine is a biochemistry project while his is more focused on immunology. But with both fields falling under life science, I thought it was reasonable to assume they were related enough that two newbies to science could freely converse with each other. And both of us have taken biochemistry and immunology classes, too! However, it seems that nowadays science is so specialized that beyond the introductory ideas, the knowledge bases and mindsets of people from different fields are completely different.

If two undergraduate students studying closely related sciences have such a hard time talking to each other, imagine the breakdown in communication when a problem requires experts from vastly different fields of science and engineering to work together. And while we all know how badly the media butchers and misrepresents scientific findings, can we really fault them? If scientists have trouble understanding other scientists, how can we expect the general audience to understand with anything short of a universal translator? To most people, science may as well be an alien language.

While I joke, I think scientific communication is of vast importance to our future. More and more problems now absolutely require input from experts across science and engineering. And with issues such as climate change urgently knocking on our doorstep, science needs to play a key role in informing the public, politicians, and policy makers. The only way to do this is to learn to overcome this language barrier and to communicate science in a simple but accurate way – to the public, and to other scientists.

Right now, we are students. Our job is to learn so that in the future we can contribute to society. Part of that learning needs to be in effective scientific communication. When we graduate (hopefully), we will become engineers, researchers, professors, and doctors, and we will need to work with people from a wide variety of disciplines to face the challenges that await us. So, at the very least, we should all be speaking the same language, so that “I’m sorry, what?” ceases to be a response.

An Investigation into our love for blackboards

By: Mathilde Papillon

The blackboard. This archaic teaching tool is in practically every class a science student takes. It also furnishes most math departments and shows up in theoretical labs everywhere. Why is that? How is it that scientists and mathematicians working on the finest, newest technologies still bother with messy chalk? Today, there are so many other presentation tools available to us, and yet this 1801 invention remains a favourite. As it turns out, there are a few reasons justifying this choice.


Many scientists and mathematicians explain that this love is rooted in the tool’s sheer simplicity. As Harvard math professor Oliver Knill puts it, no other method of communication allows for such freedom in the expression of thought. No reliance on batteries or projectors, no paper or erasable marker ink: just plain old chalk with a wooden eraser. The audience’s attention is funnelled towards this “point of focus,” as author Lewis Buzbee describes it in an article for Slate.

This Californian author also points out the blackboard’s contribution to how we teach. This tool allows for a true, authentic development of an idea, whether that be solving an equation or stating a proof. The speaker exerts full control over the lesson’s progress and has the liberty of emphasizing any aspect with a simple dab of the chalk. As the subject at hand unfurls itself onto the boards, the drawings, equations, and definitions appear as an ensemble to the student, facilitating otherwise abstract connections.

Not too surprisingly, the blackboard presents many advantages for the audience as well. First, as Knill points out, the blackboard forces the speaker to slow down, allowing students to better process the material at hand. As mathematician and historian Donald MacKenzie notes in his essay Dusty Discipline, the large gestures involved with blackboards, like sliding boards around or erasing, allow for structured pauses and break the material into smaller bites. Furthermore, most students will agree that chalk is simply easier to read than marker ink, which often leads to messy, smudged writing. Knill actually illustrates this with images from the movie Arrival, in which a whiteboard renders rather simple equations messy and difficult to decipher.

On this note, it would appear that many of the future’s brightest innovations will continue to be developed (and then explained) on this timeless tool, whose simplicity and structure keep scientists enamoured.

The Experience of Research as an Undergrad

Ah, research! Cutting-edge technology, exciting chemicals, pushing the limits of knowledge with your own two hands! But is that all there is to it?

The reality is that pushing the limits of knowledge requires a lot of inspiration, and takes a really freaking long time. Doing research is not your run-of-the-mill undergraduate lab. There, you do one experiment, say, synthesizing aspirin, which has already been well characterized and performed numerous times by a vast number of people. You then write about your specific attempt, and all is well and good. In research, you don’t have the luxury of previous renditions of the same experiment; if it had already been done, what would be the point?

Instead, you need to find a new topic to study so that you can appease: 1) your supervisor, 2) your advisory panel, and 3) a funding agency, if you get that far. Each of these requires a more original and “exciting” experiment, and those are often quite hard to find. In fact, losing your research topic because someone else has already studied it is a real possibility. Usually, people find new topics by looking into similar topics and tweaking them slightly, or by delving deeper into a topic that has only been covered generally. This involves reading A LOT of papers so that you become an expert on the current status of the research area you’re interested in. Also, keep in mind that while you will have help along the way, ultimately you must decide on the topic on your own, because it is YOUR project, not your supervisor’s; otherwise, what’s the point?

Finally, after digging through troves of papers, you have solidified your research topic and are pretty confident it will be exciting enough to earn you a degree (let’s not get ahead of ourselves to the grant stage yet). However, since your topic is so new and exciting, you have no idea how you’re going to do it or whether it will even work. You can ask the people around you for help, but odds are they are not familiar enough with your topic. After all, you chose this topic specifically because it is new and nobody has really researched it yet. So how do you proceed? By, guess what, reading more papers! In this case, reading papers is like an extension of asking the people around you. You won’t get the exact answer you’re looking for, but you can get an approximation of what to do to get results. Also, thanks to modern technology, there are now internet resources such as ResearchGate to help you in addition to reading a ton of papers, so all is not lost.

After much scrounging around, you are finally ready to plunge into research, exciting! Time to collect data!

…but collecting significant data also takes a long time, and along the way you will inevitably have experiments that fail, reagents that degrade, parts of your project that get scooped, and so on. Eventually, you will succeed in enough experiments to get the data to write a thesis and earn your graduate degree, but if you plan on pursuing academia, look forward to doing this all again for your PhD! And then your post-doc! And maybe another post-doc! And then, if a university accepts you into its faculty, your assistant professorship, which is like a more intense post-doc! And at the very end of the road, when you are finally offered tenure, you’ll realize that the things you have been doing this whole time are the same things you’ll be doing from now on as well: reading papers, writing experiment proposals, reading more papers, doing experiments, and so on. This is also why professors always seem so old – getting to that stage takes a long time.

All this may sound really daunting and maybe even discouraging, but this is just a run-through of the drier parts of research. When you’re knee-deep in some sprawling experiment (and they always become sprawling), it’ll seem like there is never enough time. When your experiment fails and you have to read more papers, you’ll learn cool things you didn’t know even from all your previous education. You’ll meet people who will be experts about things you’ve never even heard about. You’ll get to use cutting-edge technologies and exciting chemicals just like you thought you would. And, at the end, when your experiments do succeed and you have collected enough data on top of all the knowledge you have amassed during the process, you will really have discovered something that nobody has ever seen before and nobody yet knows about, until YOU tell them about it! Now tell me that isn’t the coolest thing ever. (You really can’t.) The long path of research definitely has many downer moments and dry patches, but it is equally full of excitement and discovery. As long as you have patience and are undaunted by occasional failures, you truly will be on the frontline of pushing the boundaries of human knowledge.

Evidence that New Doctors Cause an Increase in Mortality Rate in the UK

In England, there is a commonly held belief that it is unsafe to be admitted to hospital on “Black Wednesday”, the first Wednesday of August. Each year, this is the day when newly certified doctors begin working in National Health Service (NHS) hospitals. One study compared the likelihood of death for patients admitted on the last Wednesday of July with that for patients admitted on the first Wednesday of August, and found a 6% higher mortality rate for patients admitted on Black Wednesday.

There are 1,600 hospitals and specialist care centres operating under the NHS, and each routinely collects administrative data when admitting patients. One group conducted a retrospective study using archived hospital admissions data from 2000 to 2008; over 14 million records are collected each year. Two cohorts of patients were tracked: patients admitted as emergency (unplanned, non-elective) cases on the last Wednesday of July, and patients admitted as emergency cases on the first Wednesday of August. Transferred patients were accounted for to avoid double counting.

Each cohort was then tracked for one week. If a patient had not died by the following Tuesday, they were considered alive; if they had, it was counted as a death. The study tracked patients for only one week because this was deemed the best way to “capture errors caused by failure of training or inadequate supervision” on the part of the junior doctors. The short window also avoided possible biases from seasonal effects that would complicate the analyses.

The study analyzed only emergency admissions to ensure randomness in the data, avoiding bias that could result from differences in planned admissions due to administrative pressures.

Across both cohorts, the study analyzed 299,741 hospital admissions: 151,844 in the last week of July and 147,897 in the first week of August. There were 4,409 deaths in total: 2,182 among the late-July admissions and 2,227 among the early-August admissions.

The study found a small, non-significant difference in the crude odds ratio of death between the two cohorts. However, after adjusting for year, gender, age group, socio-economic deprivation, and co-morbidity, patients admitted on Black Wednesday had 6% higher odds of death. The 95% confidence interval ranged from 1.00 to 1.15, and the p-value was 0.05.
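As a back-of-the-envelope check (not part of the original study’s analysis), the crude odds ratio can be recomputed from the reported counts; the August death count here is derived as the total deaths minus the July deaths, since the article states only the total and the July figure:

```python
import math

# Deaths within one week of emergency admission, per cohort.
deaths_jul, admitted_jul = 2182, 151844   # last Wednesday of July cohort
deaths_aug, admitted_aug = 2227, 147897   # first Wednesday of August cohort (derived)

# Crude odds ratio: odds of death in the August cohort over the July cohort.
odds_jul = deaths_jul / (admitted_jul - deaths_jul)
odds_aug = deaths_aug / (admitted_aug - deaths_aug)
or_crude = odds_aug / odds_jul

# Approximate 95% CI via the standard error of the log odds ratio.
se = math.sqrt(1 / deaths_aug + 1 / (admitted_aug - deaths_aug)
               + 1 / deaths_jul + 1 / (admitted_jul - deaths_jul))
lo, hi = (math.exp(math.log(or_crude) - 1.96 * se),
          math.exp(math.log(or_crude) + 1.96 * se))
print(f"crude OR = {or_crude:.3f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The crude odds ratio comes out a little under 1.05 with a confidence interval straddling 1, consistent with the study’s description of a small, non-significant crude difference that only reached significance after adjustment.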

In short, for hospitals in the NHS from 2000 to 2008, it was found that there was a small, but still statistically significant, increase in the risk of death for patients who were admitted on Black Wednesday, over patients who were admitted the week prior.


Source: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0007103

Sustainable Farming (feat. Rocks!)

Climate change is one of this generation’s most persistent and pressing problems. It affects not only sea levels, habitats, and wildlife, but also resources vital to human survival. One of these resources is food: as we deplete fertile land, waste fresh water, and cause severe weather changes, we put global food security at risk.

The rapid growth of the human population means that food security will soon become a concern for developing and developed countries alike. To address this issue, Dr. David J. Beerling and his colleagues at the University of Sheffield are researching agricultural practices that not only preserve the environment but also act to undo human pollution. In a paper published in Nature on 17 January 2018, the team put forth a farming practice that uses silicate rocks to remove carbon dioxide from the atmosphere.

The process involves regularly adding small pieces of calcium- and magnesium-bearing silicate rock to the soil. As the rocks weather, they react with carbon dioxide to form stable, alkaline forms of carbon (namely bicarbonate and carbonate), which are then carried with the soil runoff into the ocean. The process therefore helps reduce carbon dioxide in the atmosphere, a major driver of Earth’s changing climate.
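For illustration (this reaction is not written out in the article), the weathering of a calcium silicate mineral such as wollastonite ties up carbon dioxide roughly as follows, with the dissolved bicarbonate eventually washing out to the ocean:

```
CaSiO3 + 2 CO2 + 3 H2O  ->  Ca^2+ + 2 HCO3^- + H4SiO4
```

Each unit of silicate mineral thus converts two molecules of carbon dioxide into dissolved bicarbonate, which is why the amount of rock required scales directly with the carbon to be captured.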



Dr. Beerling’s research also indicates that the process improves crop performance and can act as a substitute for fertilizers. The silicate rocks can also improve crops’ protection against pests and disease. Dr. Beerling hopes these benefits will create an incentive for farmers to adopt the practice.

Of course, financial and practical issues stand in the way of adopting this novel process. For instance, a substantial amount of silicate rock is required to accomplish the carbon sequestration (the removal of carbon dioxide from the atmosphere): capturing 10 to 30 tonnes of carbon dioxide per hectare of crop per year would require on the order of 9-27 petagrams of silicate rock. Moreover, no cost-effective way to obtain these rocks currently exists; our present rock mining, grinding, and spreading technologies would likely yield carbon emissions equivalent to 10-30% of the carbon sequestered. The paper consequently emphasizes the need for industrial innovation in sustainable rock mining practices.
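Those two ranges can be combined into a rough net-benefit estimate. The sketch below simply applies the quoted 10-30% processing-emissions overhead to the quoted 10-30 t CO2/ha/yr capture range; it uses no figures beyond those in the paragraph above:

```python
# Back-of-the-envelope net sequestration using the ranges quoted above:
# 10-30 t CO2 captured per hectare per year, with mining/grinding/spreading
# emitting the equivalent of 10-30% of the CO2 sequestered.
gross_low, gross_high = 10.0, 30.0        # t CO2 / ha / yr captured
overhead_low, overhead_high = 0.10, 0.30  # emissions as a fraction of capture

net_best = gross_high * (1 - overhead_low)    # best case: high capture, low overhead
net_worst = gross_low * (1 - overhead_high)   # worst case: low capture, high overhead
print(f"net sequestration: {net_worst:.0f} to {net_best:.0f} t CO2/ha/yr")
```

Even in the worst case the process remains a net carbon sink, which is why the paper frames the mining overhead as an efficiency problem rather than a dealbreaker.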

Finally, because this idea is so novel, further research and greater public acceptance is needed for it to become common practice. If effective, however, silicate rocks have the potential to reshape sustainable agricultural practices.



Alcohol and Potential DNA Damage

A recent study completed by the Medical Research Council (MRC) Laboratory of Molecular Biology in Cambridge suggests a novel reason why alcohol consumption increases the risk of cancer. In a study published in Nature on 3 January 2018, the Cancer Research UK-funded team found that alcohol consumption causes DNA damage in stem cells. In particular, the DNA of haematopoietic stem cells (blood stem cells) is adversely affected by alcohol consumption.

Previous studies investigating the carcinogenic effects of alcohol used cell cultures. The MRC laboratory adopted a novel approach and exposed live mice, rather than cultures, to ethanol. After chromosome analysis and DNA sequencing of the mice’s genomes, the team observed permanent chromosome alterations in the blood stem cells. In particular, acetaldehyde, produced by the body upon consuming alcohol, breaks double-stranded DNA and causes chromosome rearrangements. These mutations increase the risk of cancer because the stem cells become faulty.

The MRC laboratory experiment also examined the role of the enzyme aldehyde dehydrogenase (ALDH) in the body’s response to alcohol. Mice lacking a functioning ALDH enzyme had four times as much DNA damage as those with one. This confirms our understanding that ALDH is one way the body mitigates the effects of alcohol: ALDH converts acetaldehyde into acetate, which the body uses for energy.

The insight into ALDH’s function in the body complements our current understanding of the enzyme. For example, a large portion of South East Asians, who on average have lower alcohol tolerance, lack functional versions of ALDH enzymes. The study may also suggest that, based on one’s inherited ability to produce ALDH enzymes, some individuals are more prone to the carcinogenic effects of alcohol than others.

Lastly, the study did recognize that cells have DNA repair systems. However, not everyone carries a flawless DNA repair system, as these can be lost through chance mutations. Further, with substantial enough alcohol exposure, such systems may fail (as they did in the mice) and result in DNA damage.

The study did not conclude whether such DNA damage was hereditary, as the lab only looked at blood stem cells. Nevertheless, Cancer Research UK has publicized this study as a compelling reason to control alcohol intake and consume in moderation.