You Have Allergies Because You’re Too Clean

Emma Hazelwood

An allergy is an overreaction of the body's immune system to a normally harmless
substance. In most people, this substance would have no effect, but in those with an
allergy, the body treats it as a threat and mounts an abnormal response to its presence.

Allergies are becoming more and more common in the Western world. The number
of children with a food allergy has doubled in recent years, and the World Allergy
Organisation recently revealed that the global prevalence of asthma (a common
symptom of allergies) has increased by 50 per cent every decade for the past 40
years. Fifty years ago, one in 5000 people was allergic to wheat; the figure is now
closer to one in 130. Allergies are becoming a huge problem, with half of children in
the UK having an allergy, and 20,000 people admitted to hospital each year with a
dangerous (and potentially life-threatening) allergic reaction.

Scientists disagree about the explanations for this increase. Theories include genetic
factors, changes in diet, and something called the 'hygiene hypothesis'. The idea
behind this is that allergies are so common now because we are kept too clean as
children, while our immune systems are developing. The immune system is therefore
not exposed to as many pathogens, and so cannot regulate itself as well.

The basic theory was proposed in 1989 by Strachan, who suggested that young children
exposed to infectious diseases would be less likely to suffer from allergies. Since then,
it has been developed and is now also known as the "old friends hypothesis". It is
believed that, as well as colds, measles and other common childhood infections
(which have only evolved in the last 10,000 years), it is exposure to the ancient
microbes present during human evolution that can prevent allergies; we have
"lost" our "old friends", whom our immune systems need in order to develop properly.

Although the hygiene hypothesis has not been definitively proven, a good deal of
evidence supports it. Allergy prevalence is consistently lower among children with
greater microbial exposure, for example through early day care attendance, rural
living, contact with animals, older siblings, large family size, and infection by
common childhood diseases.

[Image: allergy skin testing]

Image credit: Wikimedia Commons

It makes sense that a child in day care will have increased exposure to infections. In
fact, many parents send their child to day care partly so they will become immune to
diseases such as chicken pox, which can be dangerous if caught later in life. It has
been found that children who attended a large day care with many other children have
a reduced likelihood of developing an allergy.

Several studies report a reduced incidence of hay fever and asthma in the children of
farmers. The factors to thank for this appear to be contact with animals as a child,
exposure to stables under the age of one, and consumption of farm milk (presumably
raw or unpasteurised). As farm animals can be considered 'dirty', this suggests that
exposure to common farmyard microbes may influence vulnerability to allergies.

One of the most significant links with allergy prevalence is family size, presumably
because being in a larger family, with more children, means more microbes and
infections are brought into the home. Hay fever and eczema are less common in
larger families, and a study on asthma showed that being from a small family
increases the chances of a child being diagnosed. Having several older siblings (at
least three) seems to have a particularly protective effect against allergies. Sharing a
bedroom as a child, which is more likely in large families, also had a protective effect.
This all agrees with the hygiene hypothesis, as each of these situations provides more
opportunity for exposure to microbes or infection.

The increase in allergy prevalence has been far more dramatic in the industrialised
world than in developing countries. This could be for genetic reasons, but there is
evidence that this too is down to different levels of exposure to microbes and disease.
Firstly, the average Eastern family is larger than the average Western family, which,
as we have seen, decreases the likelihood of developing an allergy. Furthermore,
immigrants from developing countries have been found to develop more autoimmune
disorders the longer they have lived in the industrialised country. Studies in Ghana
demonstrate an increase in immunological disorders as the country grew more
affluent and, presumably, cleaner.

Allergies are on the rise, and it seems that the increased hygiene in the Western
world may be the cause.

Savant Syndrome

Ellie Marshall

Can you think of any talents you possess? Perhaps you're a great runner or are skilled at
playing an instrument? Now imagine that you didn't have to work for those talents at all, and that they were beyond all normal human capabilities. This is what it is like to have Savant syndrome.

Savant syndrome is a rare phenomenon in which a person possesses unexplained and
remarkable talents despite mental or physical disabilities. Almost all congenital savants have some form of brain damage, usually to the left hemisphere, and around 50% of savants have autism; the remaining 50% have some other form of damage to, or disease of, the central nervous system. Because of this link, some people can acquire savant-like abilities later in life after a head injury, dementia, concussion, epilepsy or other brain disturbance.

Exceptionally deep but narrow memory is common to all savants, and it allows them to excel at certain activities. For example, one boy could recite the route and timetable of every bus in the city of Milwaukee, Wisconsin.

Such talents can be placed into five categories: music, usually piano performance with perfect pitch, though some savants compose or play multiple instruments (up to 22 in some cases); art, usually painting, drawing or sculpting; lightning calculation, including the ability to calculate prime numbers; calendar calculation; and visual-spatial ability, including the capacity to precisely measure distances without the use of instruments, to construct complex models with painstaking accuracy, and to make maps. Skills are usually singular, although some savants possess multiple skills. The most common savants are 'human calendars', able to rapidly calculate the day of the week of any given date, or to recall personal memories from that particular date, as the sketch below illustrates.
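
For comparison, here is how a computer can perform the trick that 'human calendars' do in their heads. This is an illustrative sketch (my addition, using the well-known Zeller's congruence formula), not an account of how savants actually compute it:

```python
# Zeller's congruence: find the day of the week for any Gregorian date,
# the calculation 'human calendar' savants perform mentally in seconds.
def day_of_week(year, month, day):
    if month < 3:          # Zeller treats Jan/Feb as months 13/14
        month += 12        # of the previous year
        year -= 1
    k, j = year % 100, year // 100   # year within century, century
    h = (day + 13 * (month + 1) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

print(day_of_week(2000, 1, 1))   # -> "Saturday"
```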

[Image: Derek Amato]

Image credit: Derek Amato

One of the most famous savants is the late Kim Peek, who inspired the character Raymond Babbitt in the 1988 film 'Rain Man'. Kim was born with a developmental disability but memorised over 6000 books and had an encyclopaedic knowledge of history, sports, geography, music, literature and nine other areas of expertise. He could name all the US area codes and major city zip codes. He also memorised the maps found in the front of telephone books and could tell you exactly how to get from one city to another, and then how to travel around that city street by street. One of his most remarkable qualities was his ability to read books at lightning speed by simultaneously scanning one page with the left eye and the other with the right eye. MRI scans showed that he lacked a corpus callosum (the part of the brain that transfers information between hemispheres) and had other central nervous system damage. Despite his brilliant mind, Kim had an IQ of 87, markedly lower than average, and struggled to follow certain directions.

Contrastingly, Derek Amato was born without any brain dysfunction. However, aged 39 he suffered a head injury in a pool that left him with headaches, memory loss and 35% hearing loss in one ear. Several weeks later something dramatic happened. Whilst at a friend's house, he spotted a cheap electric keyboard and, without thinking, sat down at it. He had never played the piano, nor had any previous inclination to, but his fingers found the keys by instinct and, to his amazement, rippled across them. He started with his right hand, playing arpeggios and climbing in lyrical chains of triads. His left hand followed, laying down bass and picking out harmonies. Amato sped up, slowed down, varied the volume and was soon playing chords as if he had been playing for years. When he finally stopped and looked up, his friend was in tears. Amato found he had an overwhelming compulsion to play and would shut himself away for as long as two to three days exploring his new skill.

So, what is the mechanism behind this? There are many theories as to why savant syndrome occurs, but the most widely accepted is as follows: when the left hemisphere and higher-level memory circuits of the brain become damaged, undamaged parts of the brain are recruited to compensate, along with lower-level memory capacities. This is known as cross-modal neuroplasticity. It has been established that some savants operate by accessing low-level, less processed information that exists in all human brains but is not usually available to conscious awareness. For example, instead of seeing a whole tree, they would see every individual leaf and branch. However, some argue that this 'recruitment' of new areas of the brain to replace damaged areas and develop new skills is really a 'release' of pre-existing abilities, previously masked by more dominant areas of the brain.

Savantism occurs in males more often than females, in a ratio of 6:1, probably because males are more likely to develop disorders involving damage to the left hemisphere, such as autism, dyslexia and delayed speech. The left hemisphere develops more slowly than the right, making it more susceptible to pre-natal influences. Testosterone has a neurotoxic effect and can slow the growth of the left hemisphere, allowing the right hemisphere to become bigger and more dominant in compensation. The right hemisphere of the brain is responsible for art awareness, creativity, imagination, intuition, insight, music awareness and holistic thought.

We cannot fully model brain function until we can account for and incorporate savant
syndrome. Understanding this condition has wide implications regarding the buried potential in some, if not all, of us. If such potential could lie dormant in Amato, who knows what spectacular abilities lie dormant in us?

Contact sports and public health

Hugh McCloskey

Imagine our society lauding people of all ages for activities that were potentially harming their long-term mental and physical health. Surely not? Our society has come on leaps and bounds in health promotion: education surrounding alcohol consumption, numerous initiatives to help people stop smoking. All these things clearly demonstrate our conscious commitment to public health as a nation.

But what if I told you that we as a society were actively promoting activities in our young people that could lead, in later life, to conditions with symptoms similar to those caused by long-term abuse of tobacco and alcohol? And that we were promoting these activities as beneficial?

The activities I am talking about are of course contact sports: anything that involves players sustaining direct impacts to the head, such as rugby, boxing, MMA and kickboxing. There is a growing body of evidence to suggest that impacts to the head sustained by sports players can lead to chronic traumatic encephalopathy, or CTE.

[Image: ice hockey goal]

Image credit: Wikimedia Commons

CTE has also been identified as a potential environmental factor in many neurodegenerative diseases, such as motor neurone disease (ALS), Alzheimer's disease and dementia. Although there has been a recent stir in the sporting community around concussion, with programs such as the IRFU's "recognise and remove" campaign (1) being implemented in rugby, this focuses on large traumatic impacts that elicit a loss of consciousness or disorientation.

Perhaps the most insidious thing about CTE and the resulting neurodegenerative diseases is that they don't require multiple large concussions involving loss of consciousness before they are incurred (though those would certainly help them on their way). In fact, they can be brought on by multiple sub-concussive blows sustained over time, causing what is known as mild traumatic brain injury, or MTBI.

There is, however, no doubt that extended careers in extreme pugilistic sports can bring about neurodegenerative disease, with 97% of NFL players studied post-mortem showing evidence of neurodegeneration (2). Anecdotally, this month a New England Patriots star committed suicide in jail after a conviction for murder (3). When his brain was examined, it was found to show one of the most advanced stages of chronic traumatic encephalopathy possible (4). Negative changes to personality and a tendency towards paranoia and violence are well-documented symptoms of CTE (5). The fact is that we still have no exact values to quantify how many impacts, and of what magnitude, a person can sustain during a sporting career before they increase their risk of neurodegenerative disease.

[Image: lacrosse goalie]

Image credit: Wikimedia Commons

This begs the question: could a casual career in weekend or school rugby lead to an increased risk of depression, paranoia, violent behaviour and suicidality in the short term, and neurodegeneration in later life?

We live in a society where sporting culture is becoming ever more performance orientated. Athletes are becoming larger, faster and stronger at younger ages, meaning the forces they generate and impart during training are greater than ever before. They are also training and competing more regularly, which again means more impacts in total.

However, society seems to see no problem with this, because what little is entering the collective consciousness about brain injury in sport, through films like 'Concussion', tells us that these injuries only happen to professional NFL athletes in other countries. But what if it's happening to our young men and women competing in boxing, rugby and American football right here? The problem is that we don't know.

I am not advocating a blanket ban on these sports; I myself gained a tremendous amount of discipline and self-respect from training and competing in MMA and rugby. However, I believe these sports should at least implement a system of education around the risks faced by players until further research can inform their practice. Unfortunately, there is no way to avoid impacts to the head in certain sports, and in these cases athletes should be educated about the long-term risk of brain injury to which they may well be exposing themselves.

Finally, to return briefly to our attitudes on public health as a nation, permit me to use some shocking but, I believe, justified imagery to highlight a paradox. What if there were a competitive children's smoking league where, despite the risk of cancer, heart disease and stroke, we allowed children to go head to head in a ring, smoking as many cigarettes as possible, while we cheered from the side-lines?

Sounds ridiculous, right? And yet we allow children as young as 10 to begin careers in boxing and rugby. With evidence that sporting trauma in childhood can bring on neurodegeneration (6), we can only ignore these issues for so long.

1. http://www.irishrugby.ie/downloads/IRFU-Guide-to-Concussion%282%29.pdf
2. Gardner, R.C. and Yaffe, K., 2015. Epidemiology of mild traumatic brain injury and neurodegenerative disease. Molecular and Cellular Neuroscience, 66, pp.75-80.
3. https://www.washingtonpost.com/news/early-lead/wp/2017/04/19/aaron-hernandez-found-dead-in-prison-cell/
4. http://www.freep.com/story/sports/nfl/patriots/2017/09/21/aaron-hernandez-cte-suicide-murder-new-england-patriots/690651001/
5. http://www.alz.org/dementia/chronic-traumatic-encephalopathy-cte-symptoms.asp
6. Keightley, M.L., Sinopoli, K.J., Davis, K.D., Mikulis, D.J., Wennberg, R., Tartaglia, M.C., Chen, J.K. and Tator, C.H., 2014. Is there evidence for neurodegenerative change following traumatic brain injury in children and youth? A scoping review. Frontiers in Human Neuroscience.

Is Mindfulness Meditation worth it?

Emma Pallen

In the past, the word meditation was associated with Tibetan monks chanting on isolated
mountaintops. But nowadays, it seems that everyone and their cat are espousing the benefits of
the mindfulness-based practice. However, instead of aiming to achieve spiritual enlightenment,
modern meditation is far more concerned with the health benefits, both mental and physical.
With claims such as decreased anxiety and depression, boosted immune systems and even
being linked to a longer life span, it all sounds too good to be true. Is this all just pseudo-science
mumbo-jumbo, or have the Tibetan monks really been sitting on a panacea for human health
problems all this time?

Mindfulness meditation is the practice of focusing on the present moment, instead of
deliberating over past failures, or worrying about future problems. It makes sense that
something like this could improve mental health, especially in modern Western society, where
there are so many competing calls for our attention. Numerous studies have found that this
process of stopping and refocusing your attention on the present, whether that’s through
breathing, focusing on bodily sensations or simply by mindfully enjoying the food you’re eating,
leads to decreased rumination and worry. This in turn is linked to decreased anxiety and
depression.

[Image: Buddhist monk in Khao Luang, Sukhothai]

Image credit: Wikimedia Commons

As well as being beneficial for our mental wellbeing, mindfulness has been shown to have
numerous physical health benefits as well. Recently, researchers at Coventry University
investigated the effects of mind-body interventions on gene activity. Remarkably, they found that
in participants who practiced mind-body interventions such as mindfulness, the activity of genes
related to inflammation was reduced. This is the opposite of the effect of chronic stress. Not only
does this reinforce the notion that mindfulness reduces stress, it also suggests that practicing
mind-body interventions may even reduce the risk of physical inflammation-related disorders
such as arthritis and asthma.

Practicing mindfulness meditation may not only lead you to a happier and healthier life; it may
also lead you toward a longer one. Researchers at the University of California showed that
participants who had attended a three-month meditation retreat had greater levels of telomerase,
the enzyme that rebuilds telomeres, than a control group. Telomeres are regions of DNA at the end of
chromosomes that get shorter every time a cell divides. The length of telomeres is related to
ageing and longevity, so it appears that mindfulness could be linked to a longer life span.

Clearly, meditation has its benefits. But, like many things that seem too good to be true, it may
also have a dark side. For some people, instead of leading to peace and enlightenment,
mindfulness meditation can lead to panic, depression or even psychosis. According to a study
conducted by David Shapiro at the University of California, 7% of people who have tried
mindfulness meditation reported anxiety, depression, pain, or panic. There is little published
research on these potential negative effects of mindfulness, perhaps because of its ‘trending’
status at the moment, publication bias towards studies with positive results, or simply because
those who experience these negative effects simply stop practising and don't report them.

However, there are some potential explanations as to why some people have such negative
experiences. Meditation involves sitting with and accepting your own thoughts and feelings,
positive or negative. This can sometimes be difficult even for mentally healthy people, so for
people who are already suffering with poor mental health or negative feelings, this could
potentially make things worse. Similarly, for patients with post-traumatic stress disorder (PTSD),
mindfulness can be difficult as traumatic memories can rise to the surface.

Nonetheless, the potential negative effects of mindfulness need not put us off. It may simply be
a case of weighing up the risks versus the rewards. Speaking to the Guardian in 2016, Florian
Ruths, a mindfulness researcher and a practicing psychologist, compared the cost-benefit
calculations of meditation to how we think about exercise. “If we exercise, we live longer, we’re
slimmer, we’ve got less risk of dementia, we’re happier and less anxious,” he said. “People don’t
talk about the fact that when you exercise, you are at a natural risk of injuring yourself.” And as
with exercise, some people may be unable to take part due to a pre-existing condition, or may
face a higher risk of harm.

[Image: Bangalore monument]

Image credit: Wikimedia Commons

Another potential explanation as to why some people have negative experiences meditating is
due to poor practice, whether that's down to a lack of information on the correct ways to
meditate or due to a poor teacher. Indeed, unlike other forms of therapy such as cognitive
behavioral therapy (CBT), there is no professionally accredited training for mindfulness
teachers, and anyone can call themselves a mindfulness coach. This may have led to the
‘pseudo-science’ perception of mindfulness. Additionally, many studies that have found positive
effects of mindfulness only compared the effects of mindfulness to ‘treatment as usual’ (TAU),
such as seeing a GP, or to waiting list controls. This makes it unclear as to whether the positive
effects of mindfulness are simply due to placebo, spending more time with a therapist and
becoming more aware of emotions, or whether there is indeed an ‘active component’ of
mindfulness that specifically causes the observed benefits.

So, while it seems like mindfulness meditation does have positive effects, a lot more research
needs to be done. It is still unclear as to how long lasting the effects of mindfulness are, and
clearly, not everyone will benefit. It is also unclear as to the mechanism of action of mindfulness
and how it works in comparison to other forms of therapy such as CBT or talking therapy.
What mindfulness clearly does have in its favour is that it is quick and cheap, and can be done
by anyone at any time. Also, unlike other forms of therapy, which require a diagnosis before
they can be accessed on the NHS, mindfulness can be practiced simply to 'maintain' mental
health, hopefully avoiding the need for other mental health services.

Apple’s newest innovation – Facial Recognition

Laura Bowles

“Pay with your face.” As threatening and sinister as this may sound, it isn't a line from a new episode of the dystopian series Black Mirror. It's a tagline for Apple's newest technological innovation – the £999 iPhone X and its 'Face ID' feature. Apple's approach to marketing seems to focus heavily on their development in facial recognition software. I'm quite attached to my face, in more ways than one, so this set off some alarm bells in the tin-hat conspiracy theorist deep inside me. Despite this, the scientist in me is more prominent, so I decided to give Apple the benefit of the doubt and get some questions answered. How does the technology work? If it doesn't work as Apple promises, what will this mean for user security? Should we be worried about what information organisations – legal or criminal – may be able to glean with this software? Is it even worth it?

It's clear why Apple felt the need to give facial recognition a serious update. So far, the technology has been notoriously easy to trick. Nguyen Minh Duc, manager of the application security department at Hanoi University of Technology, succeeded in fooling the face recognition on Lenovo, Asus and Toshiba laptops with a photograph of the user. Alibaba ('China's answer to Amazon') attempted to solve this problem when developing a service that allows customers to verify purchases by looking into their phone camera: the payment would only be accepted if the software could detect the user blinking. However, the system could be deceived simply by playing a video of the user blinking instead of presenting a photo.

So, how does Apple believe it has achieved its "revolution in recognition"? The company released a document on Face ID security in September 2017. When you want to unlock your phone, instead of comparing what the camera detects with a normal colour image, the iPhone camera projects infrared dots to create a sequence of 3D depth maps and 2D infrared images (think heat-sensing photography). Because the technology uses light outside the visible spectrum, Face ID even works when the user is wearing sunglasses or in darkness. The phone then randomizes this data and creates a pattern that is specific to each device. This is transformed into a string of code that allows your face to be recognised over a variety of expressions and poses, supposedly without being fooled by photos, videos or even 3D replicas of your face.

[Image: facial recognition]

Image credit: Wikimedia Commons

This can all be done using a piece of computer software called an 'artificial neural network' – 'neural' because its design is inspired by biological brains. In a similar way to how the human mind might develop, a neural network 'learns' from examples, using a complex system of interconnected computing cells that gradually adjust to get closer to a desired result. Apple took infrared images and depth maps of thousands of people of different genders, ages and backgrounds, so their neural network would work for a diverse range of customers.
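
To give a flavour of what 'learning from examples' means, here is a minimal sketch of a single artificial neuron – the building block of such networks – being trained to tell two made-up clusters of 'measurements' apart. The data and numbers are invented for illustration and bear no relation to Apple's actual Face ID model:

```python
# A toy illustration of a neural network "learning from examples":
# one artificial neuron, trained to separate two invented clusters.
import numpy as np

rng = np.random.default_rng(0)

# Fake 2D "face measurements": the owner (label 1) vs strangers (label 0).
owner = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(100, 2))
stranger = rng.normal(loc=[-1.0, -1.0], scale=0.3, size=(100, 2))
X = np.vstack([owner, stranger])
y = np.array([1.0] * 100 + [0.0] * 100)

w = np.zeros(2)   # connection weights, adjusted as the neuron "learns"
b = 0.0           # bias term

def predict(inputs):
    """Squash a weighted sum into a 0-1 'is this the owner?' score."""
    return 1 / (1 + np.exp(-(inputs @ w + b)))

# Learning: repeatedly show the examples and nudge the weights so the
# predictions creep closer to the desired labels (gradient descent).
for _ in range(500):
    error = predict(X) - y
    w -= 0.1 * (X.T @ error) / len(y)
    b -= 0.1 * error.mean()

print(predict(np.array([1.0, 1.2])))    # near 1: recognised as the owner
print(predict(np.array([-1.0, -0.8])))  # near 0: rejected as a stranger
```

Each pass over the examples nudges the weights a little; after many passes the neuron reliably separates the two groups. The same principle, scaled up to millions of weights and real infrared images, underlies a face-recognition network.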

This all sounds very convincing, but if the saved data is such a close representation of my appearance, I would want to be certain that only the right people have access to it. In 2013, Apple changed the way its iPhones were kept secure, introducing a processor chip called the Secure Enclave – a dedicated physical home for your biometric data. It's not interwoven with the software you use every day, which is more vulnerable to infiltration. The string of code that allows Face ID to recognise your face is kept in this chip and isn't sent to an external server for Apple or anyone else to access. Not only is the chip well encrypted (protected), but the images initially taken of your face are cropped, minimizing the amount of background information that is stored. This means that strangers won't be able to find out where you live by seeing your road name in the corner of an image, and you won't get targeted advertising from the stack of Domino's boxes in the corner of your room. Even if someone got hold of your phone, hacking the chip itself would be difficult thanks to Apple's level of encryption.

Face ID isn't the only feature to have proved controversial. The iPhone X is the first iPhone without a home button, meaning that Face ID will effectively replace Touch ID (which uses a fingerprint instead of facial recognition). In terms of security, Face ID comes out on top: the chance of someone else unlocking your phone with Touch ID is one in 50,000, but with Face ID it's one in a million. But is this level of security even necessary, especially at the expense of convenience? Apple claims that it makes using their products a more natural experience, but the iPhone X requires the user to fully look at and engage with their device, whereas most of the time a quick tap of a finger to check the time would be sufficient. Considering the price tag and the resources involved, Face ID doesn't seem justifiable for some animated emojis.
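
To put those odds in perspective, here is a quick back-of-envelope calculation using the two figures quoted above; the 'random strangers trying your phone' framing is my own hypothetical, not Apple's:

```python
# The article's quoted false-match odds: roughly 1 in 50,000 for Touch ID
# and 1 in 1,000,000 for Face ID. Chance that at least one of n random
# strangers gets a false match on your phone:
def chance_of_false_match(rate, n):
    return 1 - (1 - rate) ** n

for n in (1, 1_000, 50_000):
    touch = chance_of_false_match(1 / 50_000, n)
    face = chance_of_false_match(1 / 1_000_000, n)
    print(f"{n:>6} strangers: Touch ID {touch:.2%}, Face ID {face:.4%}")
```

If 50,000 random strangers each had one try, the quoted Touch ID rate would give roughly a 63% chance of at least one false match, against about 5% for Face ID.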

Face ID definitely isn't a major security threat at the moment. However, there may be a few things to keep an eye on in the future. Apple will allow third parties to use the software for their own apps, so always check app permissions, even if you think you're in the know. One day, this biometric, infrared face recognition may be used immorally, but that comes with the territory when developing any new piece of technology. Although Apple may be known for manipulating consumers into a cult following, they are also known for their thorough approach to security. So, no real-life Black Mirror just yet.

What happens if you drink bleach?

James Vines

Before we begin: if you suspect someone has ingested a large quantity of bleach, this article does not contain the medical advice you need to help them. It does, however, contain a list of unfortunate consequences that will befall them should you not seek said medical advice immediately. A good place to find the lifesaving information they so desperately need is the emergency services, so phone them first.

With that out of the way, let's talk about bleach. For most of us, bleach is a household cleaner, used for disinfecting toilets, drains and other smelly areas thanks to its antimicrobial properties. Most household bleaches are chlorine based and contain sodium hypochlorite, which in water forms the active compound hypochlorous acid. A 2008 study showed that hypochlorous acid can cause proteins to unfold and clump together. Unfolding and clumping of proteins within a microbe causes a loss of protein functionality, and consequently the microbe stops growing. Hypochlorous acid is also a chemical oxidiser, which strips electrons from anything it touches. This property is what makes bleach corrosive to organic substances, and further aids its antimicrobial activity.
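
For the chemically inclined, this is where the hypochlorous acid comes from: sodium hypochlorite dissociates in water, and the hypochlorite ion is partly converted to hypochlorous acid (a textbook equilibrium, added here for illustration):

$$\mathrm{NaOCl \;\rightarrow\; Na^{+} + OCl^{-}} \qquad \mathrm{OCl^{-} + H_2O \;\rightleftharpoons\; HOCl + OH^{-}}$$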

[Image: kitchen bleach]

Image credit: Wikimedia Commons

Other than cleaning, bleach has another use: bleaching things! Bleaches can be used to remove pigments from fabrics, hair, and… other places. Chlorine-based bleaches do this by breaking chemical bonds in chromophores (the parts of a molecule that give it colour), so that they no longer absorb visible light.

A final place you may have encountered bleach is on the internet. Here, you may have seen the phrase "drink bleach" bandied around by certain nefarious individuals. According to KnowYourMeme.com, the phrase originates from way back in 2001. However, it was not until recently that it gained 'popularity', due to its spread among certain YouTube prank videos.

Memes aside, if you did happen to drink a large quantity of bleach, you'd be in a bit of a mess. Upon ingestion, feelings of pain, irritation and nausea are highly likely. The pain is the consequence of burns to the linings of the stomach and oesophagus. The longer the bleach sits in your stomach, the worse the burns will get, as the active compound has more time to oxidise its way through your gut.

Additionally, toxic chlorine gas can form in the stomach, because hypochlorite reacts with stomach acid. Chlorine gas is an irritant that attacks the body's mucous membranes and causes burns in its own right; breathing it in can be fatal. Furthermore, your body will be faced with a sudden rise in sodium levels, due to the high sodium content of the bleach. This can cause a condition called hypernatremia, which can in turn lead to circulatory and neuronal problems. After a relatively short time, permanent and debilitating damage to the gut will have occurred, and without medical treatment this will lead to a slow and painful death.
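
The gas-forming step is the classic reason you should never mix bleach with acid; with the hydrochloric acid found in the stomach, the overall reaction is:

$$\mathrm{NaOCl + 2\,HCl \;\rightarrow\; NaCl + Cl_2\!\uparrow + H_2O}$$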

If you have ingested bleach and you seek medical advice, you will probably be told to drink a large amount of water to dilute the bleach while it sits in your stomach. Inducing vomiting is not recommended, however, as this would re-expose the oesophagus, throat and mouth to the bleach. If you make it as far as a hospital, you will most likely have your stomach pumped. Depending on the damage to your digestive system, you may need an oesophagectomy and colon interposition, which involves attaching part of your colon to your throat. Unfortunately, this bypasses your chronically damaged stomach; consequently, you'll have to eat through a straw for the rest of your life. On a slightly more positive note, these steps are normally effective, and most people admitted to hospital because of bleach poisoning will survive.

Although chlorine-based bleaches make up the majority of those commercially available, other kinds do exist. Peroxide bleaches are commonly used to bleach hair; their activity comes from an oxygen-oxygen single bond, which can break to yield a highly reactive oxygen species. Peracetic acid and ozone are also used as bleaches in paper manufacturing, and bromates are used to bleach flour and other food products.

Wait, food products?! Yes, despite all the gruesome warnings described above, low concentrations of bleach can actually be used to protect us, not harm us. In fact, if a natural disaster occurs and clean drinking water cannot be found, bleach can be used to sterilise it. In these situations, the Centers for Disease Control and Prevention recommend adding 0.75 ml of common household bleach to every gallon of water. As long as the concentration is low enough, bleach will kill microbes but not us. In fact, most experts agree that drinking a small cup of bleach straight from the container would only give you a bad stomach upset and probably wouldn't kill you, because household bleach contains only around 3% sodium hypochlorite. Overall, I wouldn't recommend following the advice of internet trolls and asking for a pint of bleach next time you're down the pub. However, if you've got a dirty toilet, some hair that's too dark, or some water that seems a bit sketchy, bleach might be your answer! Just make sure you get it to the right concentration!
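
As a sanity check on that figure, here is a quick calculation using the article's numbers (3% sodium hypochlorite, and taking one US gallon as roughly 3785 ml):

```python
# The article's figures: 0.75 ml of household bleach (~3% sodium
# hypochlorite) added to one US gallon (~3785 ml) of water.
bleach_ml = 0.75
bleach_strength = 0.03   # 3% sodium hypochlorite
gallon_ml = 3785         # 1 US gallon in millilitres (approx.)

hypochlorite_ml = bleach_ml * bleach_strength
ppm = hypochlorite_ml / (gallon_ml + bleach_ml) * 1_000_000
print(f"{ppm:.1f} ppm sodium hypochlorite")  # about 5.9 ppm
```

That works out at around 6 parts per million of sodium hypochlorite – thousands of times more dilute than neat bleach, which is why it can kill microbes without harming us.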

Why do we have different skin tones?

Emily Farrell

Around six million years ago, human-like apes started walking on two legs. Hot under the
African sun, they lost their hairy coats, but their newly exposed skin began to burn. Only
those with darker pigmentation could continue to roam comfortably in the midday sun.

An excess of UV can not only burn the skin and cause melanomas, but will also strip the body
of folic acid. Folic acid is essential for foetal development, and this susceptibility could have
been a main factor driving natural selection towards darker skin.

Six million years later, humans living along the equator, where the sun is strongest, retain
this dark layer of protection against the sun's UV. But the skin doesn't block it all; it needs to
absorb a certain amount of UV to make vitamin D, which is used in processing calcium for
bone growth and maintenance. A lack of vitamin D can cause disorders such as rickets.

When humans migrated north, the sun all but disappeared. As well as life being cold, rainy
and sad, it was harder to absorb the amount of UV needed when well-evolved protection
stood in the way. Lower amounts of melanin in the skin spread through the sun-deprived
population, and UV was absorbed more easily. The further north people lived, the less sun
was available, the less pigmentation they needed and the lighter the average skin tone
became, culminating in Northern Europe, where the palest skin is found.

A new agricultural diet rich in cereals, which are low in vitamin D, as opposed to a diet
largely consisting of hunted meat (common in sub-Saharan Africa during the Palaeolithic),
further exacerbated this pressure.

One exception to this is the communities of Northern Canada and Alaska. While they live in
very weak sunshine for most of the year, they retain darker skin because of the food they
consume: a diet high in seal and other marine sources contains all the vitamin D they need.
They do not need to absorb UV, so their melanin level does not matter. Instead, evolution
has focused on maintaining a protective barrier against the sun's harmful effects.

Interestingly, women are often paler than their male counterparts. Women need more
vitamin D for pregnancy and lactation, and are more at risk of osteoporosis in old age than
men. For them, the dangers of UV are outweighed by the need to produce milk while
retaining enough nutrients to support their own bodies.

Albinism results in no pigmentation at all, including in the hair and eyes. It is caused by a
recessive allele and creates an "all or nothing" response, as opposed to the sliding scale of
skin pigmentation usually seen. It affects about 1 in 5000 people in sub-Saharan Africa and
1 in 20,000 in Europe and North America, and the rate varies between other regions too.

However, globalisation means people are no longer confined to the areas their ancestors
lived in, and short-term precautions can render thousands of years of adaptation redundant.
Sunscreen, properly applied, can protect almost as well as extra melanin. Vitamin D is not
naturally found in many common food items, but it is now artificially added to cereals, soy
milk and other products. There is little practical need for different skin tones in the modern
world; they remain, above all, an expression of our heritage.