Denying the evidence – Why do people stick to their beliefs in the face of so much evidence? Emma Hazelwood

It has been accepted in the scientific community for almost twenty years that climate change is a result of human activity. However, a 2016 study found that fewer than half of U.S. adults believed that global climate change is due to human activity. In 2012, Trump tweeted that “The concept of global warming was created by and for the Chinese in order to make U.S. manufacturing non-competitive”. In a world with overwhelming evidence to the contrary, how can people continue to believe that global warming doesn’t exist?

Once people believe an argument, it is very hard to persuade them otherwise, even if they are told that the information their opinion was based on is incorrect. In a study conducted at Stanford University, two groups of students were given information about a firefighter named Frank. One group was told that Frank was a good firefighter; the other that Frank was a poor firefighter. Participants were then told that the information they’d been given was fake. Afterwards, they were asked to give their own opinion on how Frank would respond to a high-risk situation. Those who had initially been told that Frank was a good firefighter thought that he would stay away from risks, but those who had been told that he was a poor firefighter thought that he would take risks. Even though participants knew the information was fabricated, it still shaped their opinions.

Confirmation bias is the tendency to believe facts which support an opinion we already hold, rather than evidence to the contrary. A 1979 study at Stanford involved two groups of students: one for capital punishment, the other against. Both groups were shown two fabricated articles, one containing data that supported capital punishment, the other data that opposed it (the statistics were designed to be equally strong in each article). Both groups stated that the source which supported their argument was more reliable. Furthermore, when asked to express their opinions on capital punishment after the study, both groups supported their standpoint even more strongly than before. This demonstrates the human tendency to selectively believe what we want to be true.

It is believed that humans act this way because it was beneficial in early hunter-gatherer societies. Confirmation bias encouraged humans to collaborate, and being considered correct was important for social status. One theory for why seemingly rational humans continue to think irrationally is that we get a rush of dopamine when we see evidence that validates our opinion.

However, early human societies were not teeming with “fake news” and fabricated studies as we are now. It is increasingly clear how having a public swayed by confirmation bias can be dangerous to modern society.

We live in an illusion of knowledge, thinking we know more than we actually do. In one study, participants were told about the (fictitious) discovery of a rock that glowed. Those told that the scientists who discovered it understood why it glowed claimed to know more about the rock than those told the scientists did not understand it – even though neither group was given any information on why the rock glowed. This phenomenon of people thinking they understand more than they do is common, and has actually been advantageous for scientific progress: as scientists, we do not need to understand every scientific discovery there has ever been – we rely on the knowledge of our ancestors and those around us.

Humans are programmed to remain influenced by information even after being told it is fake, and to treat sources which support their pre-existing opinion as more reliable than those which question it. This can be dangerous in areas such as politics. For example, if people around an individual claim to know why Brexit would be economically beneficial to the country, then even when presented with evidence to the contrary the individual is less likely to believe it. Likewise, a person who believes that global warming is a conspiracy is more likely to believe Trump when he says it was created by the Chinese than ecologists who say we are pushing our planet to critical levels. In a world where we are bombarded with clickbait and fake news, it is more important than ever to think rationally and critically about every piece of information.

The Teenage Brain – Charlie Delilkan

We’ve all been there. “I’m leaving home and I’m never coming back!” “It’s not just a phase, Mum.” Slammed doors. Smashed plates. My Chemical Romance t-shirts and “bold” eyeliner. If you haven’t guessed already, I’m referring to those golden teenage years. Whilst we may have given our parents a hard time, we may not be completely responsible for that increased phone bill.

When we’re born, our brains aren’t fully formed, so the first few years of our existence involve an expansion of connections – synapses – between cells. By the time you are six years old, each of the hundred billion brain cells you were born with has made approximately 10,000 different connections!

But during our teenage years, these numerous connections are trimmed down: the brain decides which connections are important enough to keep, and which can be let go, depending on how frequently each neural link is used. This process is called synaptic pruning. It actually continues well after we stop calling people “teenagers” – some researchers believe it only ceases in our mid-twenties, sometimes later! Sometimes, though, the process can go wrong, and important connections are lost, which may contribute to psychiatric disorders such as schizophrenia.

The synapses that are kept are then subjected to a process called myelination, in which each connection is given a sheath that helps it transmit signals more quickly. That is why the teenage years are so critical to your future development: skills and habits laid down at this point are likely to stay in the long run.

Interestingly, the prefrontal cortex is the last part of the brain to fully mature (or finish pruning). However, this is the part that allows us to be an adult – it controls our emotions and helps us to empathise with others. Therefore, if your prefrontal cortex isn’t functioning fully, you tend to be impulsive and insensitive to other people’s feelings. Sound familiar? Don’t worry though – as teenagers mature, the prefrontal cortex is used a lot more when making decisions, showing that they start to consider others when making choices.

What about that stereotype that teenagers are “hormonal”? Well, stereotypes usually come from some truth! Teenagers are hypersensitive to pleasure: release of the reward neurotransmitter dopamine is at its peak during adolescence. Any action that causes dopamine release is positively reinforced, and the actions that cause the most dopamine release are usually those associated with a stereotypical teenager – reckless driving, drug taking, and/or risk taking. Or in my case, 7 hours of Dungeons and Dragons on a Friday night – please don’t judge. This reward system also works closely with the brain’s social network, which uses oxytocin, a neurotransmitter that strengthens bonding between mammals. This causes teenagers to strongly associate social interactions with happiness, and so constantly seek out social situations. It explains the dynamic shift we usually see from kids being close to their parents to teenagers having friends as their emotional centres.

So the next time the teenager in your life is threatening to throw a chair at you, just remember that parts of their brain are literally being destroyed. Cut them some slack, bro.

What Causes Alzheimer’s? – Emma Pallen

Alzheimer’s disease is a chronic neurodegenerative disorder with a wide range of emotional, behavioural, and cognitive symptoms. It is the most common cause of dementia, accounting for around 60–70% of cases, and is primarily associated with older age: around 6% of the global population over 65 is affected, with risk increasing with age. This is especially concerning considering our ageing population; by 2040, it is expected that there will be 81.1 million people living with Alzheimer’s worldwide. It is also one of the costliest conditions to society, costing the US $259 billion in 2017.

Symptoms of Alzheimer’s can be grouped into three categories. Perhaps the most recognisable is cognitive dysfunction, which includes symptoms such as memory loss, difficulties with language, and executive dysfunction. Another category is known as disruption to activities of daily living (ADLs): initially this can mean difficulty performing complex tasks such as driving and shopping, later developing into needing assistance with basic tasks such as dressing oneself and eating. A third category is related to emotional and behavioural disturbances, which can range from depression and agitation in earlier stages of the disease to hallucinations and delusions as the disease progresses.

What causes Alzheimer’s Disease?

We know that the symptoms of Alzheimer’s are caused by a gross loss of brain volume, also known as atrophy, in a number of regions, and that this loss progresses as the disease develops. As brain tissue is lost, symptoms associated with the function of the lost area emerge – for example, personality changes develop as tissue is lost in the prefrontal cortex.

We also know that this brain atrophy is caused by a loss of neurons and synapses in the brain. However, what we don’t know is exactly why this neuronal loss occurs. One way to attempt to answer this question is to compare the brains of Alzheimer’s patients to normally ageing brains. This has led to the observation that the brains of Alzheimer’s patients have two distinct biochemical markers: amyloid plaques and neurofibrillary tangles, both abnormal bundles of proteins. While these features are often present to some degree in normal ageing and are not always observed in Alzheimer’s, they are more strongly associated with specific brain regions, such as the temporal lobe, in Alzheimer’s than in regular ageing. There are a number of theories as to how these biochemical markers may be linked to neuronal and synaptic loss; however, none is fully conclusive.

One such theory is the amyloid cascade hypothesis. This suggests that amyloid plaques, which are made up of a protein known as amyloid beta, are the primary cause of the disease, and that all other pathological features of Alzheimer’s follow as a consequence. The accumulation of amyloid beta into plaques is proposed to disrupt calcium homeostasis in cells, which can lead to excitotoxicity and ultimately cell death. Evidence in support of this theory comes from Down’s Syndrome, a condition in which almost all sufferers display some degree of Alzheimer’s pathology by age 40: Down’s Syndrome is caused by an extra copy of chromosome 21, which carries the gene coding for Amyloid Precursor Protein (APP), the precursor that is cleaved to form amyloid beta.

However, if the buildup of amyloid plaques is the cause of cell death in Alzheimer’s disease, it stands to reason that removing these plaques should at the very least stop the progression of the disease, which has not been found to be the case. Furthermore, whilst APP-producing transgenic mice do end up with more amyloid beta and amyloid plaques, this does not lead to other features of the disease such as neurofibrillary tangles and, most importantly, produces no neuronal loss. This suggests that there may be some other cause for the neuronal loss seen in Alzheimer’s.

Another theory about the cause of neuronal loss in Alzheimer’s focuses on hyperphosphorylated tau, a protein that is the main component of neurofibrillary tangles. The tau hypothesis suggests that hyperphosphorylation of tau leads to the formation of these tangles, which can impair axonal transport, a potential cause of cell death. This idea is supported by the fact that the number of neurofibrillary tangles is linked to the degree of observed cognitive impairment. Additionally, the progression of where tangles are found mirrors the known progression of atrophy in Alzheimer’s. Dysfunction of tau is also known to be linked to another type of dementia, frontotemporal dementia, so it seems plausible that similar mechanisms may be at work in Alzheimer’s.

Whilst these are two of the most prominent explanations for neuronal death in Alzheimer’s, there are a multitude of other potential explanations, and it is likely that no single one will capture all facets of the disease. Rather, there is probably a complex interplay of biochemical reactions along multiple pathways that leads to the clinical features we see in Alzheimer’s disease. These are likely affected by many other risk factors, such as genetics, or environmental factors such as smoking or head trauma.

A, T, C, G… and more? Adding Letters to Life’s Genetic Code – Alex Marks

Scientists have created bacteria that carry two extra synthetic ‘letters’ of the genetic code.

The genetic code is made from four bases, more commonly known as the ‘letters’ A, T, C and G. It is the order of these ‘letters’ that creates the genetic blueprint for all life: DNA. Scientists have modified the bacterium E. coli so that it can carry two unnatural ‘letters’ in its DNA.

By adding the extra two ‘letters’, named X and Y, scientists have increased the number of combinations that the ‘letters’ can make. These additional combinations could potentially increase the number of biological functions this bacterium can perform. The international team of scientists hopes that this will lead to the creation of new classes of drugs to treat diseases.

In a standard cell, the four ‘letters’ of the genetic code tell the cell how to make proteins. Proteins are responsible for almost every function and structure within a cell. They repair and maintain the cell; they transport atoms and small molecules; and they make up an important part of your immune system.

By expanding the genetic alphabet from four to six ‘letters’ the potential number of proteins that could be synthesised dramatically increases, allowing for semisynthetic organisms that have new qualities not found anywhere in nature.
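The jump in coding capacity is simple combinatorics: the genetic code is read three ‘letters’ at a time, so the number of possible three-letter codons grows with the cube of the alphabet size. A quick sketch:

```python
def codon_count(n_bases: int, codon_length: int = 3) -> int:
    """Number of possible codons for an alphabet of n_bases 'letters'."""
    return n_bases ** codon_length

# Four natural bases (A, T, C, G) give 4^3 = 64 possible codons;
# adding X and Y (six bases) gives 6^3 = 216.
print(codon_count(4))  # 64
print(codon_count(6))  # 216
```

Going from 64 to 216 codons more than triples the theoretical coding space available for specifying new amino acids, and hence new proteins.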

It had already been shown that semisynthetic organisms could be created. However, the ones that had been made were slow to replicate and regularly lost their unnatural ‘letters’. The new study has “made this semisynthetic organism more life-like,” according to Prof Romesberg, senior author of the study.

By modifying the existing version of the genetic ‘letter’ Y, the team created a semisynthetic organism that could hold on to the unnatural ‘letters’ X and Y for 60 generations. The scientists believe that the bacterium will keep the letters indefinitely, showing that the DNA remains stable even with the extra ‘letters’ in it.

“Your genome isn’t just stable for a day,” said Prof Romesberg. “Your genome has to be stable for the scale of your lifetime. If the semisynthetic organism is going to really be an organism, it has to be able to stably maintain that information.”

They managed to keep the DNA stable by destroying the bacteria that lost the unnatural ‘letters’. Using the CRISPR-Cas9 genome-editing tool, the scientists could check whether the bacteria had retained X and Y. This tool can read specific parts of the DNA and can also add tags. If a bacterium had not kept X and Y, CRISPR-Cas9 marked it for destruction.

By destroying the unstable bacteria, only the stable bacteria could go on to replicate. In this way, the scientists increased the chance that the replicating bacteria were stable.

“This science suggests that all of life’s processes can be subject to manipulation,” said Prof Romesberg.

Being able to manipulate processes within cells will help us understand these processes and might be able to help cure diseases.

Natural Cycles – Rhiannon Lyon

Contraception can be a pain. From the long list of side-effects associated with hormonal pills, to the painful and invasive nature of implants and IUDs, women put up with a lot to avoid getting pregnant. And with the search for a male contraceptive pill that lacks undesirable side-effects (the type that women have put up with for decades) still unfruitful, things look set to stay this way for a while.

Or do they? As the first and only app to become certified as a contraceptive in Europe, Natural Cycles promises a hormone-free, non-invasive alternative to traditional forms of birth control.

Natural Cycles was developed by physicist Dr Elina Berglund, who works at CERN and was part of the team responsible for confirming the existence of the Higgs boson. The app started out as an algorithm Berglund developed after deciding to stop taking hormonal contraceptives. Looking into the biology of the menstrual cycle, she found that ovulation can be accurately predicted from small changes in body temperature, and that this data can be used to calculate when an individual is and is not fertile. Berglund began to monitor her own cycle using the algorithm, along with some of her colleagues at CERN. This worked so well that Berglund and her husband decided to develop the algorithm into an app, so that more people could benefit from it. The latest study shows that the app is 99% effective when used perfectly, or 93% effective with typical use (for comparison, the pill is 91% effective with typical use).

So how does a simple fertility awareness method manage to have such success in preventing pregnancy? To answer this, we first need to understand a bit of the biology of the menstrual cycle.

[Diagram: hormone levels across the stages of the menstrual cycle]

The menstrual cycle can be roughly divided into three stages: the follicular (pre-ovulatory) phase, ovulation, and the luteal (post-ovulatory) phase. The levels of the hormones oestrogen, progesterone and LH vary over these stages, as shown in the diagram above, and basal body temperature (temperature at rest) changes as a result of these different levels. This is how Natural Cycles detects where the user is in their menstrual cycle: from a temperature reading taken each morning with a two-decimal-place thermometer.

During the follicular phase oestrogen levels are high, and progesterone levels low, leading to a lower body temperature. At the end of the follicular phase is the fertile window. This is approximately six days long – starting five days before ovulation occurs. This is because sperm can survive in the uterus and fallopian tubes for up to five days waiting for an egg to fertilise.

At ovulation an egg is released by one of the ovaries, and travels through the fallopian tube, where it can be fertilised if it encounters a sperm (which could have been hanging around in the tube for several days).

After ovulation the luteal phase starts. Progesterone levels increase in order to support development if fertilisation has occurred. The rise in progesterone causes the basal body temperature to go up by an average of 0.3°C. If fertilisation has not occurred, progesterone levels fall again and the uterine wall begins to shed with the beginning of menstruation, which starts a new cycle.

From this we can see that there is actually only a window of around six days each cycle in which fertilisation could occur; on all the other days of the cycle, intercourse will not result in a pregnancy. The Natural Cycles app uses this logic to assign ‘red’ and ‘green’ days – those on which you do and do not need to use protection, respectively. Of course, an app that accurately tracks fertility can also be used to increase the chances of pregnancy, and around 20% of Natural Cycles users are in fact using it to aid in becoming pregnant.
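As an illustration only (Natural Cycles’ real algorithm is proprietary and far more sophisticated, learning from each user’s temperature history), a naive version of the red/green logic might look like this, where the predicted ovulation day comes from the temperature shift seen in previous cycles and a safety margin is added because cycles vary:

```python
def classify_days(cycle_length: int, ovulation_day: int) -> list[str]:
    """Naive red/green classification for one menstrual cycle.

    'red' = protection needed. The fertile window is the five days
    before ovulation plus ovulation day itself (sperm can survive up
    to five days); an illustrative two-day margin is added either side
    to allow for cycle-to-cycle variation. Days are numbered from 1
    (first day of menstruation).
    """
    margin = 2  # hypothetical safety buffer, not Natural Cycles' value
    fertile_start = ovulation_day - 5 - margin
    fertile_end = ovulation_day + margin
    return ["red" if fertile_start <= day <= fertile_end else "green"
            for day in range(1, cycle_length + 1)]

days = classify_days(cycle_length=28, ovulation_day=14)
print(days.count("red"))  # 10 red days in this toy model
```

Widening the margin trades convenience for safety, which is one reason users with irregular cycles see many more red days.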

However, the app may not be for everyone. Success depends on users strictly abstaining or using barrier protection such as condoms on red days, and making sure to take their temperature each morning, having had a decent amount of sleep (as sleep deprivation can cause fluctuations in the basal body temperature). Those who have irregular menstrual cycles, such as people with PCOS (polycystic ovarian syndrome), which affects around 10% of women, may not benefit so much from Natural Cycles, as the algorithm is likely to give them many more red days per cycle. A subscription to the app also costs around £40 per year, which is pretty pricey considering that all other birth control is free on the NHS (although you do get a thermometer thrown in). Whether that is value for money for a side-effect-free form of contraception is down to the individual.




The Northern Lights – Naomi Brown

At the beginning of November, residents of Scotland and Northern England were able to view a dazzling light show in the sky: the Northern Lights. But what causes them, and how can we predict when it will happen again?

The Northern Lights are a natural phenomenon in which bright, coloured lights are seen across the night sky in the appearance of sheets or bands. They are generally seen close to the magnetic poles, in an area called the ‘auroral zone’. The best time to spot the auroras is when the Earth’s magnetic pole is between the sun and the location of the observer. This is called magnetic midnight.

The Northern Lights are caused by gaseous particles in the Earth’s atmosphere colliding with charged particles released from the sun’s atmosphere. The charged particles are carried towards Earth by solar winds. Most are deflected by the Earth’s magnetic field; however, at the poles the field is weaker, allowing a few particles to enter the atmosphere. This is why auroras are more likely to be seen close to the magnetic poles, making Iceland and Northern Scandinavia common destinations for travellers searching for the Northern Lights.

The colours of the Northern Lights depend on the type of gas molecule involved in the collisions. Green, one of the most common colours, is produced by collisions with oxygen molecules, whereas blue or purple auroras are caused by nitrogen molecules.

Why can the Northern Lights sometimes be seen in places further from the Earth’s poles, such as the UK? The answer is the spread of the auroral oval due to a geomagnetic storm. Geomagnetic storms are more common after the maximum in the solar cycle, a repeating 11-year cycle. The most recent solar maximum was in 2013.

The Northern Lights are notoriously unpredictable. There are many forecast apps available, such as “My Aurora Forecast”. One of the best websites to check when the auroras will be visible from where you are is the Aurora Service ( forecast/). The site gives the Kp value predicted for the next hour using solar activity data obtained from a NASA spacecraft, ACE, which orbits 1.5 million kilometres from Earth: the prime position to view the solar winds.

A common way to represent geomagnetic activity is the Kp index. Magnetic observatories located all over the world use instruments to measure the largest magnetic change every three hours. The recorded data from all these observatories is averaged to generate Kp values, which range from 0 to 9. The larger the value, the more active the Earth’s magnetic field is due to geomagnetic storms, and the further the auroral oval spreads. A Kp value above 4 indicates storm-level geomagnetic activity. These Kp values are useful in predicting when auroras will be visible: to see the aurora from the UK, the Kp value would have to be at least 6.
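The thresholds above can be turned into a trivial lookup. The cut-offs used here are just the ones quoted in this article (storm level above Kp 4, UK visibility from Kp 6); treat them as a rough guide rather than an official forecast rule:

```python
def aurora_outlook(kp: int) -> str:
    """Rough aurora outlook from a Kp index value (0-9)."""
    if not 0 <= kp <= 9:
        raise ValueError("Kp index runs from 0 to 9")
    if kp >= 6:
        return "strong storm: aurora possibly visible as far south as the UK"
    if kp > 4:
        return "storm-level activity: aurora visible at high latitudes"
    return "quiet: aurora confined to the auroral zone"

print(aurora_outlook(6))  # strong storm: aurora possibly visible as far south as the UK
```

A real forecast would also fold in cloud cover and moonlight, which the closing paragraph notes matter just as much as geomagnetic activity.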

To get a great show, the conditions are important. Clear nights with no clouds are best. It is also worth checking the moon cycle: the brightness of a full moon drowns out the light of the aurora.

Apple’s newest innovation – Facial Recognition

Laura Bowles

“Pay with your face.” As threatening and sinister as this may sound, this isn’t a line from the new chapter of dystopian series Black Mirror. It’s a tagline for Apple’s newest technological innovation – the £999 iPhone X and its ‘Face ID’ feature. Apple’s approach to marketing seems to focus heavily on their development in facial recognition software. I’m quite attached to my face, in more ways than one, so this set off some alarm bells in the tinfoil-hat conspiracy theorist deep inside me. Despite this, the scientist in me is more prominent, so I decided to give Apple the benefit of the doubt and get some questions answered. How does the technology work? If it doesn’t work as Apple promises, what will this mean for user security? Should we be worried about what information organisations – legal or criminal – may be able to glean with this software? Is it even worth it?


It’s clear why Apple felt the need to give facial recognition a serious update. So far, it has been notoriously easy to trick. Nguyen Minh Duc, manager of the application security department at Hanoi University of Technology, succeeded in tricking Lenovo, Asus and Toshiba laptops with a photograph of the user. Alibaba (‘China’s answer to Amazon’) attempted to solve this problem when developing a service that allows customers to verify purchases by looking into their phone camera. The payment would only be accepted if the software could detect the user blinking. However, the average person could simply use a video of themselves blinking instead of a photo and manage to successfully deceive the system.


So, how does Apple believe it has achieved its “revolution in recognition”? It released a document on Face ID security in September 2017 to inform consumers. When you want to unlock your phone, instead of comparing what the camera detects with a normal colour image, the iPhone camera uses infrared dots to create a sequence of 3D depth maps and 2D infrared images (think heat-sensing photography). Because the technology uses light outside the visible spectrum, Face ID works even when the user is wearing sunglasses or is in darkness. The camera then randomizes this data and creates a pattern that is specific to each device. This is transformed into a string of code that allows your face to be recognised over a variety of expressions and poses, supposedly without being fooled by photos, videos or even 3D face replicas.



This can all be done using a piece of computer software called an ‘artificial neural network’ – so called because its design is inspired by biological brains. In a similar way to how the human mind might develop, a neural network ‘learns’ from examples, using a complex system of computer cells to get closer and closer to a desired result. Apple took infrared images and depth maps of thousands of people of different genders, ages and backgrounds, so their neural network would function for a diverse range of customers.
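To give a flavour of the matching step (a generic illustration, not Apple’s actual implementation – the ‘face codes’ and threshold below are made-up numbers), a trained network maps each face capture to a numeric vector, and two captures count as the same person when their vectors are sufficiently similar:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity of two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 4-number 'face codes'; real embeddings have many more dimensions.
enrolled = [0.9, 0.1, 0.4, 0.7]      # stored at enrolment
new_scan = [0.88, 0.12, 0.41, 0.69]  # same face, slightly different pose
stranger = [0.1, 0.9, 0.8, 0.2]

THRESHOLD = 0.99  # illustrative: tuned to trade false accepts vs rejects
print(cosine_similarity(enrolled, new_scan) > THRESHOLD)  # True
print(cosine_similarity(enrolled, stranger) > THRESHOLD)  # False
```

This tolerance to small variations is what lets the system cope with changing expressions and poses while still rejecting other faces.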


This all sounds very convincing, but if the saved data is such a close representation of my appearance, I would want to be certain that only the right people have access to it. In 2013, Apple changed the way its iPhones were kept secure, introducing a processor chip called the Secure Enclave – a dedicated physical piece of hardware for your biometric data. It is not interwoven with the software you use every day, which is more vulnerable to infiltration. The string of code that allows Face ID to recognise your face is kept in this chip and isn’t sent to an external server for Apple or anyone else to access. The chip is not only well encrypted (protected), but the images initially taken of your face are cropped, minimizing the amount of background information that is stored. This means that strangers won’t be able to find out where you live by seeing your road name in the corner of an image, and you won’t get targeted advertising from the stack of Domino’s boxes in the corner of your room. Even if someone gets hold of your phone, they are unlikely to be able to extract this data, given Apple’s level of encryption.


Face ID isn’t the only feature that has proved controversial. The iPhone X is the first iPhone without a home button, meaning that Face ID effectively replaces Touch ID (which uses a fingerprint instead of facial recognition). In terms of security, Face ID comes out on top: the chances of someone else unlocking your phone with Touch ID are one in 50,000, but with Face ID they are one in a million. But is this level of security even necessary, especially at the expense of convenience? Apple claims that it makes using their products a more natural experience, but the iPhone X requires the user to fully look at and engage with their device, whereas most of the time a quick tap of a finger to check the time would be sufficient. Considering the price tag and the resources, Face ID doesn’t seem justifiable for some animated emojis.


Face ID definitely isn’t a major security threat at the moment. However, there may be a few things to keep an eye on for the future. Apple will allow third parties to use the software for their own apps, so always check app permissions, even if you think you’re in the know. In the future, this biometric, infrared face recognition may be used immorally, but that comes with the territory when developing any new piece of technology. Although Apple may be known for manipulating consumers into a cult following, they are also known for their thorough approach to security. So, no real life Black Mirror just yet.