You are Sitting on a Deadly Weapon

By Mathilde Papillon

What if I told you that you are currently sitting on a very dangerous weapon? That may seem dramatic at first, but bear with me for a moment, and by the end of this, I think you will agree.

Let us begin by investigating the toll the chair takes on the human body after only a few minutes of use. One example is the weakening of the circulatory system. As Dr. Richard Klabunde explains on his educational website, veins rely on muscular contractions and relaxations for proper blood flow; in other words, the muscles surrounding veins drive circulation as they activate and relax. When one stops moving, the veins must rely on very little muscle movement to keep all 5 litres of blood in the human body circulating. As a result, less blood flows, and the brain receives less oxygen. After as little as half an hour, your mental focus deteriorates. This might explain a few late nights at Schulich… Your circulatory system is also weakened when the enzymes in blood capillaries responsible for breaking down fats stop working. Eventually, lipids build up in the veins and reduce blood flow (Hamilton et al.).

Another immediate response to the chair manifests itself in musculoskeletal degeneration. During prolonged sitting, certain disks between your spine’s vertebrae that normally allow for free movement are squeezed, while others are pulled taut with tension. This causes them to “lose sponginess” (Berkowitz and Clark, 1), as described in a Washington Post article on the dangers of sitting. Similarly, the National Institute of Health for Osteoporosis links prolonged sedentary time with a weakening of the bones, an early form of osteoporosis. We can likely all agree that fragile bones sound more inconvenient than a quick standing break.

A final example is the degeneration of your abdominal muscles. A study comparing people who sat at work with people who did not found that “sedentary office workers […] had lower musculoskeletal fitness than healthy, age-matched controls, with the main difference found in endurance of the trunk muscles” (del Pozo-Cruz et al.). Underused abdominals also “form a posture-wrecking alliance that can exaggerate the spine’s natural arch, a condition called hyperlordosis” (Berkowitz and Clark).

Clearly, every extra second spent in the chair has an immediate and tangible impact on your body, including reduced blood flow and musculoskeletal degeneration. Unfortunately, there is even more bad news. The chair isn’t just bad for you in the short term; there is also overwhelming evidence of its negative effects in the long term. The list of chronic diseases that can follow a lifetime of sedentariness is extensive, including cancer, diabetes, and cardiovascular disease, among others.

Since that claim is easy to dismiss without much thought, let us really consider its implications. We begin with a list of the cancers that men can develop from the chair after a few decades of consistent sitting.

  • One study presented in Brigid M. Lynch’s “Sedentary Behavior and Cancer: A Systematic Review of the Literature and Proposed Biological Mechanisms” found that men who watched more than 9 hours of television per day were more prone to colorectal cancer than those who watched under 3 hours a day.
  • Another study analyzed in the report identified an increased risk of colon and rectal cancer in men who work desk jobs (Lynch).
  • In a colorectal cancer-specific study, it was “found that participants who spent the most time in sedentary work had a risk of distal colon cancer that was 2 times higher [than the control]” (Boyle et al.).
  • A prostate cancer-specific study concluded that “four or more hours of moderate/vigorous intensity physical activity […] provided a 35% lower risk of prostate cancer” (Lynch et al.).

And now for the women!

  • A study determined that increased time spent sitting during recreational activities is associated with an increased risk of endometrial cancer (Lynch).
  • Another research paper concluded that those sitting over 6 hours a day are subject to an increased risk of ovarian cancer (Lynch).
  • Sitting for over 6 hours a day is further linked to an augmented risk of cancer mortality (Lynch, 2698), meaning that a woman who sits more often in her life is more likely to die from any cancer.

Clearly, more chair time is more cancer time for everyone.

Moving on to an equally exciting disease: diabetes is another well-known consequence of the sedentary lifestyle. Researcher Emma Wilmot, MD, explains the mechanism behind this correlation to the online health magazine Prevention: “When we sit for long periods of time, enzyme changes occur in our muscles that can lead to increased blood sugar levels. The effects of sitting on glucose happen very quickly, which is why regular exercise won’t fully protect you.”

The medical evidence is there to support this:

  • An American study observed that in the case of women, every extra 2 hours of television watched per day increases risk for diabetes by 14% (Hu et al.).
  • The same study determined that every extra 2 hours of sitting at work per day increases the risk by another 7% (Hu et al.).
  • An Australian study concluded that “males reporting higher amounts of sitting (6 to <8 hours and ≥8 hours) are significantly more likely to report ever having diabetes” (George, Rozencraz, and Kolt).

The list of side effects does not end here; yet another chronic illness your chair threatens you with is cardiovascular disease (CVD), or heart disease.

As a matter of fact, studies dating back to the mid-twentieth century conclude that sedentary behaviour is directly associated with CVD complications. This is the case with a 1958 paper in the British Medical Journal reporting that “men in physically active jobs have less coronary heart disease during middle-age, what disease they have is less severe, and they develop it later than men in physically inactive jobs” (Morris and Crawford).

A more recent Canadian study observing 17,013 people over 12 years found that “the risk for CVD mortality in function of time spent sitting increases among both men and women” (Ford and Casperson). Another study that specifically followed men found that subjects sitting over 10 hours per week in the car and over 23 hours per week in total were 82% and 64% more likely, respectively, to die from CVD (Warren et al.). In short, heart disease and sedentariness hold an intimate, life-threatening relationship.

All of these findings and statistics ultimately show just how lethal a weapon the chair really is. You are sitting on an instigator of cancer, diabetes, and heart disease, which, at this very moment, is slowing down your blood flow, weakening your bones, and squeezing your spine.

Whether you are reading this at the library, at the office, or from the comfort of your home, take a stand against this weapon even if only for a minute. You might just be doing yourself a serious favour.

References

Berkowitz, Bonnie, and Patterson Clark. “The health hazards of sitting.” Washington Post 20 Jan. 2014: n.p. Web. 2 Nov. 2015.

Boyle, Terry, et al. “Long-Term Sedentary Work and the Risk of Subsite-specific Colorectal Cancer.” American Journal of Epidemiology 173.10 (2011): 1183-91. Oxford Journals. Web. 1 Nov. 2015.

del Pozo-Cruz, B., et al. “Musculoskeletal fitness and health-related quality of life characteristics among sedentary office workers affected by sub-acute, non-specific low back pain: a cross-sectional study.” Physiotherapy Journal 99.3 (2013): 194-200. American Association for Cancer Research. Web. 28 Oct. 2015.

Ford, Earl S., and Carl J. Casperson. “Sedentary behaviour and cardiovascular disease: a review of prospective studies.” International Journal of Epidemiology 41.5 (2015): 1338-53. United States National Library of Medicine. Web. 3 Nov. 2015.

George, Emma S., Richard R. Rozencraz, and Gregory S. Kolt. “Chronic disease and sitting time in middle-aged Australian males: findings from the 45 and Up Study.” International Journal of Behavioural Nutrition and Physical Activity 10.20 (2013): 1-8. United States National Library of Medicine. Web. 3 Nov. 2015.

Hu, Frank B., et al. “Television Watching and Other Sedentary Behaviors in Relation to Risk of Obesity and Type 2 Diabetes Mellitus in Women.” The Journal of the American Medical Association 289.14 (2008): 1785-91. United States National Library of Medicine. Web. 3 Nov. 2015.

“Increase Physical Activity.” Alliance for a Healthier Generation. American Heart Association, 2015. Web. 30 Nov. 2015.

Klabunde, Richard. “Factors Promoting Venous Return.” Cardiovascular Physiology Concepts. n.p., 2013. Web. 4 Nov. 2015.

Lynch, Brigid M. “Sedentary Behavior and Cancer: A Systematic Review of the Literature and Proposed Biological Mechanisms.” Cancer Epidemiology, Biomarkers & Prevention 19.11 (2010): 2691-709. American Association for Cancer Research. Web. 24 Oct. 2015.

Lynch, Brigid M., et al. “Sedentary Behavior and Prostate Cancer Risk in the NIH–AARP Diet and Health Study.” Cancer Epidemiology, Biomarkers & Prevention 23.5 (2014): 882-9. United States National Library of Medicine. Web. 3 Nov. 2015.

Morris, J.N., and Margaret D. Crawford. “Coronary Heart Disease and Physical Activity of Work.” British Medical Journal 2.5111 (1958): 1485-96. United States National Library of Medicine. Web. 3 Nov. 2015.

Warren, Tatiana S., et al. “Sedentary Behaviors Increase Risk of Cardiovascular Disease Mortality in Men.” Medicine & Science in Sports & Exercise 42.5 (2010): 879-85. United States National Library of Medicine. Web. 4 Nov. 2015.

“What Is Osteoporosis? Fast Facts: An Easy-to-Read Series of Publications for the Public.” National Institute of Health for Osteoporosis and Related Bone Diseases, Government of the United States of America. Nov. 2014. Web. 3 Nov. 2015.


The Neuroscience of Eating Disorders

Written By: Laura Meng

During the 19th century, Sir William Gull formally proposed the clinical term Anorexia nervosa (AN) to encompass a set of homogeneous and aberrant thought processes and behaviours: a salient pursuit of weight loss despite low body weight, fear of weight gain, substantial value attributed to thinness, and specific physiological impacts, including amenorrhea and emaciation.(1) Since then, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) has additionally included Bulimia nervosa (BN), Binge eating disorder (BED), and not-otherwise-specified subcategories of eating disorders (EDs).(2) BN comprises alternating episodes of binging (consumption of food beyond satiation) and compensatory behaviours, including purging, abuse of laxatives, and excessive exercise.(1) EDs are often comorbid with affective psychiatric disorders, including anxiety and depression. They are among the psychiatric disorders with the highest morbidity, and exhibit high rates of suicide and relapse.(3)


A unifying neuropsychological dimension of EDs is an idée fixe: a “domination of mental life” by thoughts of food consumption and an inability to inhibit these thoughts.(2) Maladaptive habit formation, neuromodulator dysfunction, and stress are proposed to contribute significantly to the disorders’ course of development.(3)


Both clinical trials and rodent models suggest that an imbalance between goal-directed behaviours (GDB) and habitual behaviours can result in compulsivity: a repeated inability to inhibit inappropriate responses despite adverse consequences.(4) GDB, or action-outcome learning, involves a cognitive link between an action and its desired reward. GDB are responsive to the magnitude of the reward, and a decline in action performance is expected if the value of the reward decreases; thus, they are sensitive to reward devaluation. Amygdalar, ventral striatal, dorsomedial striatal (DMS), and orbitofrontal cortical (OFC) activity is observed during GDB.(3) As a behaviour is repeated, habitual behaviour, or stimulus-response learning, takes over. Habitual behaviours are relatively insensitive to both devaluation and the action’s outcome. Instead, they are specific responses elicited by specific environmental cues. The anatomical areas active during habitual learning include the dorsolateral striatum (DLS) and the dorsolateral prefrontal cortex.(4) In patients with EDs, habitual behaviours are resistant to change.(2)


A trans-diagnostic model of EDs proposes that compulsivity is present in AN, BN, and BED, and can be treated through targeted psychotherapy.(6) In clinical paradigms, a deficit in DMS and OFC activity was present in patients with AN compared to controls, while in rodent models of AN, higher activity of the DLS was observed. These findings suggest that a deficit in GDB and/or excessive habit formation contributes to the compulsivity of AN.(4) Dopamine is pivotal to the cortico-striatal systems that underlie GDB and habit formation, modulating intracellular signaling cascades. Among its other functions, serotonin modulates affective states, and selective serotonin reuptake inhibitors are often used as an adjunct in current treatments of eating disorders.(5) Both of these neuromodulators deviate from expected functioning in individuals with eating disorders, though the specific mechanisms remain ambiguous. Furthermore, rodent models and patient self-reports delineate the significance of stress in shaping behaviour: stress often precedes the deleterious compensatory behaviours, including binging and purging, observed across EDs.(6)


Continued developments in psychiatry and neuroscience reiterate the importance of approaching EDs from multiple angles.(5) A randomized trial comparing the efficacy of non-specific psychotherapy versus psychotherapy targeting Regulating Emotions and Changing Habits (REaCH) aimed to improve AN treatment by refining an existing therapeutic technique.(6) Neurobiological models, including the development of a neurocognitive endophenotype of compulsion, rodent models using subneuronal knockouts, and continued efforts to elucidate the genetic biomarkers common to EDs, illustrate an integrative approach to understanding EDs. Sincere efforts from students, researchers, patients, and clinicians alike continue to further our understanding and the development of future treatments for individuals with eating disorders.


References:

1.) Gruber, R. (2016). Biological Psychiatry: Eating Disorders [Powerpoint Presentation].

2.) https://ajp.psychiatryonline.org/doi/pdf/10.1176/ajp.134.9.974

3.) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4095887/

4.)  https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4894125/

5.) https://ac.els-cdn.com/S1364661311002427/1-s2.0-S1364661311002427-main.pdf?_tid=597e8564-e83e-4421-ba57-07815a8321be&acdnat=1542036617_cacba44270f312db6841da44e1d6ee15

6.) https://www.cambridge.org/core/journals/psychological-medicine/article/targeting-habits-in-anorexia-nervosa-a-proofofconcept-randomized-trial/08F8A6A197890B65BE24CBA46634D401/core-reader

Pythagoras: Triangles and Triads

Written By: Yingke Liang

Have you ever listened to western stringed music and enjoyed it? If you have, you owe it all to Pythagoras (yes, the triangle guy).

Pythagoras, drawing on his knowledge of numbers and of the lyre, studied the ratios of string lengths and the sounds produced at different ratios. He found that plucking a string stopped at exactly half its original length (a 1:2 ratio) produced a note an octave higher. Similarly, a string stopped at two-thirds of its length (a 2:3 ratio) produced a perfect fifth, and one stopped at three-quarters (a 3:4 ratio) produced a perfect fourth. These ratios involve only the first four counting numbers, which also happen to add up to 10, something Pythagoras found deeply profound; he considered such ratios the “Music of the Spheres.” Pythagorean tuning centres on the 2:3 ratio, so tuning focuses on making the perfect fifths pure. However, a scale tuned this way will always contain one “fifth” that sounds about a quarter of a semitone flat, named the “wolf interval”: a strange creature that almost sounds like a fifth, but doesn’t quite. The third, sixth, and seventh notes of an ascending major scale also come out slightly sharp, though not jarringly so. Despite these hiccups, Pythagorean tuning is still employed by string orchestras today, because it is described as sounding more “natural” than other tuning schemes. Pythagorean tuning was adopted during the medieval Ars Nova, a time when all formal music was religious, so the “Music of the Spheres” wasn’t just a Pythagoras thing. (As a side note, the equal-tempered system was developed to circumvent the wolf interval, and it is the system keyboards are tuned in today. The trade-off is that the fifths just don’t sound as sweet, and music doesn’t sound as natural as in Pythagorean tuning. I guess we can’t have it all.)
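To see the arithmetic in action, here is a short Python sketch (my own illustration; nothing below comes from the sources cited) that builds a twelve-note scale by stacking pure fifths (frequency ratio 3:2, i.e. the 2:3 string-length ratio) and folding each one back into a single octave, then computes the Pythagorean comma, the small surplus responsible for the wolf interval.

```python
# Build a Pythagorean chromatic scale by stacking perfect fifths (frequency ratio 3:2)
# and folding every ratio back into a single octave.
def pythagorean_scale(num_fifths: int = 12) -> list:
    ratios = []
    r = 1.0                      # start at the tonic
    for _ in range(num_fifths):
        ratios.append(r)
        r *= 3 / 2               # go up a perfect fifth
        while r >= 2:            # fold back into the octave [1, 2)
            r /= 2
    return sorted(ratios)

print([round(x, 4) for x in pythagorean_scale()])

# Twelve pure fifths should land exactly seven octaves up, but they overshoot slightly.
# The surplus is the Pythagorean comma (~1.0136, roughly a quarter of a semitone),
# and one "wolf" fifth in the circle has to absorb it.
comma = (3 / 2) ** 12 / 2 ** 7
print(f"Pythagorean comma: {comma:.5f}")
```

Running this prints the familiar Pythagorean ratios (9/8 for the whole tone, 81/64 for the major third, 3/2 for the fifth, and so on) followed by a comma of about 1.01364.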

Pythagoras wasn’t just lazing around developing this musical theory, however. Pythagoras lived during a tumultuous period of Greek history where people needed something to believe in, from the abstract to miscellaneous objects such as bronzed pets. Unsurprisingly, the cult of Pythagoras worshiped numbers, specifically the positive integers. Zero and the negative integers were considered in another realm and not an object of worship. Other than the rather banal object of worship, the Pythagoreans had normal cult-like rules, such as extremely strict vegetarianism, encouraging silence and the wearing of pure linen clothing, and a vendetta against beans because Pythagoras thought that each time a person farted they lost a portion of their soul. Despite the strange hijinks, the Pythagoreans were led by a talented Renaissance man and from this cult arose a system of musical harmonics that birthed western art music.

Sources:

https://www.washingtonpost.com/archive/1996/03/13/pythagoras-the-cult-of-personality-and-the-mystical-power-of-numbers/92ef23a9-fad2-4c12-8089-ddd0aaf8c4a7/?utm_term=.d4c719b8d2da

Medenhall, Margaret. “The Music of the Spheres: Musical Theory and Alchemical Image.” Mythological Studies Journal, vol. 4, 2013.

Hawkins, William. “Pythagoras, The Music of the Spheres, and the Wolf Interval.” Cleveland: Philosophical Club of Cleveland, 2012.

What’s in a name?

Written By: Katharine Kocik

When it comes to classifying organisms, a name usually reveals a great deal about a species. The familiar binomial nomenclature system involves two levels of taxa: the genus name and the species name. The first word in the name, the genus, reflects the recent phylogeny of a species, and the second separates species within the same genus. This system was first used consistently by Carl Linnaeus in the mid-18th century and remains the universal method of identifying species today. Despite its success so far, one must wonder at its implications as more and more species are discovered.

Current estimates say that only 1.5 million out of roughly 8.7 million species have been discovered, leaving about 87% of the world’s biodiversity unnamed. The staggering volume of names that must be unique and universal, to say nothing of the majority of species that remain nameless, raises the concern of maintaining the system’s integrity. The International Codes for Zoological Nomenclature, Nomenclature for Bacteria, and Nomenclature for algae, fungi, and plants address this through extensive sets of rules and exceptions.

The three Codes are very similar, so in this instance the Code for Zoological Nomenclature (animals) will represent the rules for naming all organisms. These regulations generally allow the discoverer freedom in choosing a name, provided it is presented in a particular format and not already in use (although a plant species and an animal species may share the same name).

The most significant criterion for a species name, aside from following the binomial format, is its Latinized form. Most names are derived from Greek or Latin to describe the species, such as the blue jay’s species name, cristata, which means “crested”.

If the name is not of Latin origin, a suffix may be added. For example, a twirler moth species in Southern California and Northern Mexico was discovered in 2017 with unique yellow-white scales on its head. Scientist Vazrick Nazari named it Neopalpa donaldtrumpi (adding -i to the end), for its resemblance to Donald Trump’s hair and to call attention to the destruction of fragile habitats in the US.

Other memorable species names include La cucaracha, a pyralid moth, and Dissup irae, a fossil fly that was reported as difficult to see. As these names reflect, the Code “recommends” that even when a species has an unconventional name or is named after a person, there should still be an association between the organism and its name. Upon the discovery of a new dinosaur species in China, for instance, director Steven Spielberg recommended the species name nedegoapeferima, made up by combining the last names of actors who starred in Jurassic Park. David Attenborough, the famous nature documentary filmmaker, has several species named after him, including multiple plant species and a dinosaur. Often scientists simply name new species after people they admire: four damselfly species in the genus Heteragrion take their names from the four members of the band Queen: Heteragrion brianmayi, Heteragrion freddiemercuryi, Heteragrion johndeaconi, and Heteragrion rogertaylori.

For now, the creativity of biologists will suffice to keep species names unique, even as Latin fades from everyday use in the 21st century, while scientists continue to work on the 87% of species that remain.


I’m sorry, what?

Written by: Howard Li

Two weeks ago, I met up with a friend whom I hadn’t seen in a long time. We met in first year and both studied life science, albeit in different departments. Facing the onslaught of tedious assignments, ruthless exams, and frankly ridiculous lab reports, we drifted apart in our second and third years. So, after an awkward exchange of pleasantries, I asked him how his research was going, hoping to spark a casual conversation about the fun times an undergraduate student has in a research lab. Little did I know, he would launch into a passionate verbal barrage of technical terms to describe his work, one that took me back to my childhood years of trying to parse English from Chinese and not understanding the majority of either language.

“I’m sorry, what?” was all that I could pathetically mutter when he finished. I watched his face change into a smirk as he realized, undoubtedly from my blank expression, that I must not have understood a lick of what he said (to my credit, I understood that he was doing something with viruses). He asked me what I thought of his project and I hit him with another cheeky “I’m sorry, what?” to confirm that I indeed had no idea what he was saying. With the awkwardness broken, we went on to catch up about each other’s lives, and I was even able to get a confused expression out of him when I tried to describe my project at the lab.

Now, I think that we’re both fairly on top of our classes and generally pretty savvy about keeping up with science (i.e., we’re both pretty big nerds). So I was surprised that we had such a hard time describing our undergraduate-level research projects to each other. Granted, mine is a biochemistry project while his is more focused on immunology. But with both fields falling under the umbrella of life science, I thought it was reasonable to assume they were related enough that two newbies to science could freely converse with each other. And both of us have taken biochemistry and immunology classes! It seems, however, that science is now so specialized that beyond the very introductory ideas, the knowledge base and mindset of people from different fields are completely different.

If two undergraduate students studying closely related sciences had such a hard time talking to each other, then imagine the breakdown in communication when a problem requires experts from vastly different fields of science and engineering to work together. And while we all know how badly the media butchers and misrepresents scientific findings, can we really fault them? If scientists have trouble understanding other scientists, then how can we expect a general audience to understand with anything short of a universal translator? To most people, science may as well be an entirely alien language.

While I joke, I think that scientific communication is of vast importance to our future. More and more problems now absolutely require the input of experts from across science and engineering. And with issues such as climate change urgently knocking on our doorstep, science needs to play a key role in informing the public, politicians, and policy-makers. The only way to do this is for us to learn to overcome this language barrier and to communicate science in a simple but accurate way. To the public, and to other scientists.

Right now, we are students. Our job is to learn so that in the future, we can contribute to society. Part of that learning needs to be in effective scientific communication. When we graduate (hopefully), we will become engineers, researchers, professors, and doctors; and we will need to work with people from all varieties of disciplines in order to face the challenges that await us. So at the very least, we should all be speaking the same language so that “I’m sorry, what?” ceases to be a response.

An Investigation into our love for blackboards

By: Mathilde Papillon

The blackboard. This archaic teaching tool hangs in practically every class a science student takes. It also furnishes most math departments and shows up in theory labs everywhere. Why is that? How is it that scientists and mathematicians working on the finest, newest technologies still bother with messy chalk? Today there are so many other presentation tools available to us, and yet this 1801 invention remains a favourite. As it turns out, there are a few good reasons behind this choice.


Many scientists and mathematicians explain that this love is rooted in the tool’s sheer simplicity. As Harvard math professor Oliver Knill puts it, no other method of communication allows for such freedom in the expression of thought. There is no reliance on batteries or projectors, nor on paper or erasable marker ink; just plain old chalk with a wooden eraser. The audience’s attention is funnelled towards this “point of focus,” as author Lewis Buzbee describes it in an article for Slate.

This Californian author also points out the blackboard’s contribution to how we teach. The tool allows for a true, authentic development of an idea, whether that be solving an equation or laying out a proof. The speaker exerts full control over the lesson’s progress and has the liberty of emphasizing any aspect with a simple dab of the chalk. As the subject at hand unfurls itself onto the boards, the drawings, equations, and definitions appear to the student as an ensemble, facilitating otherwise abstract connections.

Not too surprisingly, the blackboard presents many advantages for the audience as well. First off, as Knill points out, the blackboard forces the speaker to slow down, allowing students to better process the material at hand. As Donald MacKenzie points out in the essay Dusty Discipline, the large gestures involved with blackboards, like sliding boards around or erasing, provide structured pauses and break the material into smaller bites. Furthermore, most students will agree that chalk is simply easier to read than ink, which often leads to messy, smudged writing. Knill actually illustrates this with images from the movie Arrival, in which a whiteboard renders rather simple equations messy and difficult to decipher.

On this note, it would appear that many of the future’s brightest innovations will continue to be developed (and then explained) on this timeless tool, whose simplicity and structure keep its users enamoured.

Reference List:

http://mbarany.com/DustyDisciplineBWM15.pdf

http://www.math.harvard.edu/~knill/pedagogy/blackboards/index.html

https://slate.com/human-interest/2014/10/a-history-of-the-blackboard-how-the-blackboard-became-an-effective-and-ubiquitous-teaching-tool.html

On the Horizon in Machine Learning: Identifying Natural Selection At Work in the Human Genome

Written by: Janet Wilson

Machine learning (ML) is a rapidly evolving branch of artificial intelligence in which a program is developed that improves on its own by learning from data and experience. It can be used to make predictions, classify items, estimate probabilities, and more. For example, ML is used in Apple’s face recognition, Google’s search-ranking algorithms, and the fraud detection systems used by credit card companies.

Most recently, ML systems have been trained to identify the evolutionary pressures the human genome is under and how natural selection is shaping it over time.

Because the human genome comprises more than 20,000 genes and more than 3 billion base pairs, ML has the potential to outperform humans by a large margin. The error-prone, tedious data analysis and DNA sequence searching and comparison that humans would otherwise have to perform would take ages and wouldn’t be nearly as accurate as computer-based methods have the potential to be.

Usually, ML involves teaching a program how to perform a task using a method called supervised learning. The programmer provides the machine with the expected output for each input, and from this the machine determines how it should generate that output. This is called the training phase. It is challenging in the case of genome analysis because the expected result of the computation we want the program to perform simply isn’t known.
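To make that concrete, here is a minimal sketch of a supervised training phase. It is purely illustrative rather than anything used by the researchers, and it assumes the scikit-learn library; the feature values and labels are invented for the example.

```python
# A toy supervised-learning "training phase" (illustrative only; assumes scikit-learn).
from sklearn.ensemble import RandomForestClassifier

# Each row is one example; each label is the expected output supplied by the programmer.
X_train = [[5.1, 3.5], [4.9, 3.0], [6.2, 2.9], [6.7, 3.1]]  # made-up feature vectors
y_train = [0, 0, 1, 1]                                       # made-up class labels

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)  # training: the model learns how features map to labels

# Once trained, the model can produce outputs for inputs it has never seen.
print(model.predict([[5.0, 3.4]]))
```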

Currently researchers are trying to train ML systems to identify evolution based on simulated examples of natural selection, allowing the machine to create and internalize a definition of what natural selection looks like from its own statistical, computerized point of view.

The second phase of ML involves testing the program on data other than what it encountered during training. ML algorithms tested on real genome data have successfully identified the evolution of the lactase gene in European populations: a clear, known example of evolution in the human genome, in which a lactase-persistence variant allows individuals to keep digesting cow’s milk into adulthood.
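The sketch below ties the two phases together. It is again only an illustration of the general recipe, not the researchers’ actual pipeline: the labelled examples come from a toy “simulation” of a selective sweep (reduced diversity and an excess of rare variants), a classifier is trained on most of them, and accuracy is measured only on held-out examples the model never saw during training. The feature definitions, the numbers, and the use of NumPy and scikit-learn are all assumptions made for the example.

```python
# Illustrative only: train a classifier on simulated examples of natural selection,
# then test it on simulated data held out from training.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def simulate_window(under_selection: bool) -> list:
    """Toy summary statistics for one genomic window. A selective sweep tends to
    reduce diversity and leave an excess of rare variants near the selected site."""
    diversity = rng.normal(0.4 if under_selection else 1.0, 0.15)
    rare_variant_excess = rng.normal(1.6 if under_selection else 1.0, 0.2)
    return [diversity, rare_variant_excess]

# Labels come from the simulation itself, sidestepping the problem that the true
# evolutionary history of real genomes is unknown.
X = [simulate_window(under_selection=(i % 2 == 0)) for i in range(2000)]
y = [int(i % 2 == 0) for i in range(2000)]

# Training phase on 75% of the simulated windows...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# ...testing phase on the 25% the model has never seen.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In real studies the labelled examples come from population-genetic simulators and the features are genuine summary statistics computed from sequence data, but the structure, training on simulations and testing on data withheld from training, is the same.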

Based on massive human genome sequencing efforts, over 20,000 mutations have been identified that researchers want to understand further through ML. The task now is to keep training and refining these programs so they can identify and visualize the evolution of this massive, and still growing, number of mutations.

Hopefully, these machines will soon be able to trace the evolutionary roots and propagation of the mutations found in the human population. Their refined skills will help us understand our evolutionary history and perhaps even predict the future evolutionary patterns of the human race. Although this is yet another example of computers outperforming humans, the potential applications of ML are vast and exciting. Beyond genetic analysis and evolution modelling, ML methods are being employed in many other fields and have the potential to drastically improve medicine, technology, marketing, and much, much more. So we will have to sit back and see where ML leads!