Controversial Science Studies

Richard Kaskiewicz

The history of science is littered with debates and controversial studies undertaken in the name of scientific progress. Take the theory of evolution, or the heliocentric model of the solar system, as two well-known examples. These theories challenged societal norms and had stark implications for the nature of humanity.

In more modern times, the controversy is less a crisis of faith and more a question of ethics and morality. Until recently, such questions went largely unchecked, and the decision about what was right and what was wrong rested solely with the researchers and the funding bodies. Only once the boundaries were pushed too far did science gain the ethical framework within which we work today. Despite this, mankind has gained crucial knowledge and insight from these experiments, which have guided many aspects of science and how we view the world.

(and if this article leaves you wanting to know more about ethics in science, there will be a link to a recent episode of Eureka!, our science podcast, at the end, for you to learn more!)

The Stanford Prison Experiment

Perhaps none is quite as famous as the Stanford Prison Experiment. Conducted by Philip Zimbardo and his team in 1971, this experiment took a group of students at Stanford University and randomly assigned them to two groups: one group became the prisoners, and the other became the guards. The basement of one of the university’s buildings was set up to mimic the conditions of a maximum-security prison.

The guards were given a set of menial tasks that they had to ask the prisoners to perform; if the prisoners did not complete a task, refused to partake, or disobeyed a direct order, the guards were instructed to give out an appropriate punishment. I might add, the use of force was not authorised.

Soon into the experiment, the prisoners refused to clean their plates after a meal and were instructed to do 50 press-ups, which many of them refused to do. This led the guards to hand out harsher punishments, such as withholding meals. The prisoners continued to challenge the guards’ authority, and several days into the two-week experiment a heavily abusive environment had developed, in which the guards used physical and emotional coercion (such as sleep deprivation and harassment) to gain compliance. The situation escalated and the conditions became truly abhorrent, yet it was still days before Zimbardo, who had assumed the role of warden, aborted the experiment – leaving many of the participants at least temporarily traumatised.

The interpretation was that, in certain situations, even people who are ordinarily morally good will inevitably turn, strengthening the notion that absolute power corrupts. This has been used to explain some of the atrocities and abuse witnessed in many prisons and internment camps.

A plaque marking the location of the Stanford Prison Experiment.

Image Credit: Wikimedia Commons

The Milgram Shock Experiment

The next most famous is probably the Milgram Shock Experiment, conducted by Stanley Milgram in the 1960s. Before the experiment, the participants were left in a room and told to socialise with the others there, essentially making friends. They were then split into two groups, teachers and learners, and taken to separate rooms. The teachers gave the learners a set of memory tasks, administering an increasingly painful electric shock for every task the learners failed. Unbeknownst to the teachers, however, the learners were a group of actors who had pre-recorded cries of agony and pleas begging the teacher to stop the test. No one was really being shocked, but the point was that the teachers thought they were.

After several failed tasks, the dial indicating the shock level began to enter the danger zone and the actors would begin to scream and bang on the walls, imploring the teacher to stop and insisting that they couldn’t take it anymore. Despite this, a man stood behind the teachers wearing a lab coat and holding a clipboard, conveying a sense of authority. These individuals would tell the teacher to continue, and all participants did, despite revealing how distressed it was making them and how they did not wish to harm the learner, with whom they had made friends beforehand.

Eventually the level of shock surpassed what would have been a lethal dose, and no teacher had outright abandoned their role. In due course, the screaming and cries from the learner stopped and all that could be heard from the other room was silence. Several teachers stopped at this point, but the majority continued on, encouraged by the authority figure, even though they believed they had killed or seriously injured their learner – someone who, but for the luck of the draw, could have been them.

This experiment showed that people will obey authority figures, even when the task they are performing strongly conflicts with their own code of ethics. This reasoning echoes the defence offered by many Nazi war criminals at the Nuremberg trials, who, as the saying commonly goes, were “just following orders”.

There are many more ethically dubious experiments that have been conducted, which, if carried out today, would land the researchers with a hefty prison sentence. However, if it were not for these experiments, much of what we know to be true today, especially in the field of psychology, may never have been revealed.

 

Link to Eureka: https://www.mixcloud.com/EurekaOnForge/eureka-ethics-episode/

Is There a Right Answer in the Dieting Craze?

Fern Wilkinson

Looking for a diet to shift that Easter chocolate? From standard calorie-cutting diets like Weight Watchers and carbohydrate-restricted diets such as Atkins, to more outlandish claims such as the cabbage soup and juicing diets, there’s no shortage to choose from. Despite being bombarded with choice and information telling us how each diet works and why each is best, the proportion of overweight and obese people in the UK continues to rise. How can it be, with so many seemingly successful and miraculous diets available, that we as a nation do not seem to be losing any weight?

People follow diets for any number of reasons, including medical recommendation, ethical or environmental concerns, and religious factors. However, most people who choose to “diet” in the typical sense do so for weight loss –  and for good reason. A 2011 study found that diet was a greater indicator of an individual’s weight than exercise alone. So what happens when you start to diet?

When a person begins a calorie-restricted diet, their body begins to run at a calorie deficit. Once the body has burned through its available glucose, around six hours after eating, it turns to glycogen reserves. These are converted into sugars and burned. However, once these are used up, the body begins to break down fatty acids into smaller molecules called ketone bodies, in a process known as ketosis, which depletes the body’s fat stores.

Consider, though, that many diets don’t just restrict calories. Many restrict other components such as carbohydrate intake, as is the case with the Atkins diet, which replaces carbs with fat, or the Paleo diet, which swaps them for protein. In both cases, the aim is to feel fuller for longer, since proteins and fats take more energy to digest than carbs, reducing overall calorie intake. If not carefully managed, a lack of fibre and carbohydrates in these diets can lead to some pretty awful side effects, including bloating, lack of energy and constipation!


Image Credit: Tumblr

It is not a simple matter of moderating food intake, however. The hypothalamus in the brain has a “set point” – actually a weight range of around 4.5-6.5 kg – that it attempts to keep the body within by adjusting hunger and metabolic rate. If weight falls below this range, for example because of a sudden calorie deficit, the body slows its metabolic rate, burning between 200 and 400 kcal less per 10% of body weight lost. Evolutionarily, this makes a lot of sense: in the event of a food shortage, conserving energy helps you survive, and regaining lost weight later ensures you have enough reserves to live through the next shortage.
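To make those numbers concrete, here is a rough, back-of-envelope sketch in Python of what that slowdown could look like for a hypothetical dieter. The starting weight and the 300 kcal midpoint are illustrative assumptions, not values from the studies mentioned above.

```python
# Rough illustration of the "set point" slowdown described above.
# All figures are illustrative assumptions, not measured values.

def estimated_kcal_reduction(start_weight_kg, current_weight_kg,
                             kcal_per_10_percent=300):
    """Estimate how many fewer kcal the body burns after weight loss,
    assuming roughly 200-400 kcal (here 300) less per 10% of body weight lost."""
    percent_lost = (start_weight_kg - current_weight_kg) / start_weight_kg * 100
    return (percent_lost / 10) * kcal_per_10_percent

# Example: an 80 kg dieter who has lost 8 kg (10% of their body weight)
print(estimated_kcal_reduction(80, 72))  # 300.0 - roughly 300 kcal less burned
```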


Image Credit: YourHormones

Extreme dieting can have negative long-term effects by increasing the brain’s “set point”, which may impact upon an individual’s ability to lose weight in the future. Studies have shown that 97% of diets fail, and after five years most people gain back more weight than they lost the first time around. So, if fad-dieting and calorie-counting isn’t working, what is the right answer?

Psychologists believe that people’s eating habits can be divided into two categories. First are the “control eaters”, who may consciously override their bodies by carefully monitoring what they eat. This category is typically where dieters lie. Research indicates that those exercising such tight control over their diet are more likely to binge or overeat, negating any previous benefits.

On the other side are the “intuitive eaters”, who eat when hungry and stop when full, and allow their body’s signals to govern their food habits. Psychologists suggest that learning to listen to your own body when it comes to food, and interpreting those signals accurately, may be the secret to better diet and weight management. Combine this with small, manageable changes towards a healthier diet, throw in some exercise and you have a recipe for success!

Chocoholics Anonymous

Vanessa Kam

Oh, sugar!  Did I just single-handedly finish that 12 pack of creme eggs for lunch?

Perhaps you were disciplined enough to give chocolate up for Lent, or maybe you ate all your Easter eggs in one day. Either way, it raises the question: is chocolate really addictive? Can we really become hooked on certain foods?

How do we define food addiction?

Until recently, there was no recognised psychometric tool to identify food addiction. Scales existed for binge eating, emotional overeating, eating disorders and alcohol consumption, but none to explore the behavioural indicators of food addiction.

In 2009, researchers from Yale University developed the Yale Food Addiction Scale to fill this gap, updating it to version 2.0 in 2016. This is a set of 35 questions on eating behaviour derived from criteria for substance use disorders, drawing parallels between drug addiction and food addiction. Each question is scored by frequency, from ‘never’ through various monthly and weekly frequencies up to daily.

The scale shows how eating habits may fall within the boundaries of substance abuse.


Source: Yale University

With a diagnostic tool for food addiction established, what actually happens in the brain when we crave another bar of chocolate, or snap another chunk off and let it melt in our mouths?

The biological basis of chocolate addiction

The simplest explanation for chocolate addiction is its activation of dopamine and opioid systems in the body, providing a sense of reward.  A study assessed the psychological response of subjects to chocolate using drug-effect questions, normally used to judge well-being, euphoria, and other sensations after taking drugs like morphine.

They found that chocolate consumption caused an increase in drug-like ‘psychoactive’ effects. Psychoactive substances change brain function and alter mental processing. The effects were proportional to the chocolate’s sugar and cocoa content and associated with a desire to consume more.

In another study, MRI scans of brain activity in young female subjects showed that those who scored higher on the Yale Food Addiction Scale had higher activation in brain regions regulating rewards and cravings when anticipating a chocolate milkshake.

These females also displayed lower activity in inhibitory brain regions while consuming the milkshake. This suggests that these individuals had less inhibitory control or a reduced feeling of fullness while eating palatable foods. These responses are strikingly similar to brain scans of drug users when presented with their substance of abuse.

Chocoholic or social animal?

Despite the evidence of the psychoactive effects of certain foods, many researchers remain reluctant to recognise ‘food addiction’ per se as it implies substance-based dependence, where specific nutrients evoke addiction.

Chocolate, for example, does contain pharmacologically active substances like caffeine, theobromine and phenylethylamine. However, studies have shown that consumption of white chocolate, which does not contain cocoa solids and therefore lacks the above substances, provides similar relief from cravings. This suggests that our love for chocolate is less due to its pharmacological constituents and more to do with the combined sensory experience of aroma and texture from fat and sweetness from sugar.

Rogers and Smit offer an intriguing alternative perspective, citing the psychosocial factors behind why we might be self-proclaimed chocoholics.  They argue that chocolate is labelled ‘nice but naughty’ by society, a sugary, fatty but extremely tasty snack which ought to be eaten with restraint.  Attempts to restrict desires for chocolate only exacerbate them, so we become more conscious of our consumption, accompanied by feelings of guilt and a lack of self-control.

In a study comparing the attributes of 50 food items with their frequency of consumption, chocolate scored highest on ‘difficult to resist’ but ranked only 17th in consumption. In contrast, tea and coffee ranked 18th and 25th respectively for difficulty to resist, but were the most frequently consumed. This is because tea and coffee are more socially acceptable sources of pharmacological stimulation, whereas the hedonistic effect of eating chocolate, an unhealthy treat, is negatively perceived as overindulgence.

In essence, Rogers and Smit claim that chocolate is most frequently pointed towards as an addictive substance because it is the one food most people try to resist; there is not enough evidence to demonstrate it has the same potent neuroadaptive effects as drug addictions.

The next time you find yourself proclaiming chocoholic status, ask yourself this: am I really addicted to chocolate, in the sense of cravings, tolerance, withdrawal, inhibited control, and impaired lifestyle, or am I simply responding to environmental or emotional cues, desiring the pleasure of some good old chocolate?

 

Do Video Games Really Cause Aggression?

Helen Alford

Over the years, there has been much controversy over whether video games are linked to aggression and violence in the younger population. Usually, the games discussed are first-person shooters or action-adventure games where the player has the option to use weapons. This type of game has often been cited as a potential influence in the behaviour of young offenders committing violent crimes, such as school shootings in the USA.

Might there be any truth to this kind of speculation?

A quick Google search for ‘video games and aggression’ will bring up as many articles in favour of a link as those against it. Two articles appear next to each other, published less than three weeks apart, titled “Study Reveals Players Don’t Become More Aggressive Playing Violent Video Games” and “Study Finds Violent Video Games Increase Aggression”. There appears to be a great deal of research for each side of the debate, but no consensus.

The fact is the research is murky at best. Scientists have been looking into violent video games for over 20 years but there are still no conclusive results – as Google shows us.

In 2015 the American Psychological Association (APA) published the results of a study investigating the proposed link. The study looked at over 100 pieces of research dating from 2005-2013 and ultimately concluded that video games do contribute to aggressive behaviour. However, they were quick to note that “It is the accumulation of risk factors that tends to lead to aggressive or violent behaviour. The research reviewed here demonstrates that violent video game use is one such risk factor”.


Image Credit: Max Pixel

While the report made headline news in many newspapers, articles questioning its methodology and findings immediately popped up too. Over 200 academics signed a letter critiquing the research and labelling it as ‘controversial’. Some of these researchers agreed that the report highlighted important areas for further research, but argued that its conclusions ultimately didn’t tally with a near-global reduction in youth violence. On the other hand, video games really could be a factor in isolated cases of extreme violence.

Dr Vic Strasburger is a retired professor of paediatrics at the University of New Mexico’s School of Medicine. He has dealt with several ‘school-shooter’ youths and theorised that playing violent video games is one of four factors that drive these individuals to commit horrifying acts. The other factors were abuse or bullying, social isolation and mental illness. As with the APA report, he makes it clear that video games are just one factor contributing to such behaviour, and that the relationship is not a simple correlational one.

The Oxford Internet Institute has explored the topic from a different angle. They investigated whether the mechanism of a game contributed to feelings of frustration, rather than the actual content of the game itself. Interestingly, they found that if players were unable to understand controls or gameplay, they felt aggressive. Dr Andrew Przybylski said that “This need to master the game was far more significant than whether the game contained violent material”.

Interestingly, a few months after the APA report was released, researchers from Columbia University published a study looking at the positive aspects of playing video games. In many cases, children who often play video games are more likely to do well at school and experience better social integration. This is certainly a stark contrast to the ‘aggressive loner’ stereotype of gamers we have all come to recognise.

It seems that video games can actually have a plethora of positive effects, including improved motor skills, improved vision and improved decision-making. The hand-eye coordination of regular gamers tends to be better than that of those who rarely play or don’t play at all. There is also research suggesting that playing video games enhances attention span, the ability to multitask and working memory. Plus, for many of us, they’re a good way to beat stress.

Ultimately, youth crime is falling while the accessibility of video games is increasing. While there may be a tentative link between playing video games and aggressive behaviour, other factors have a much greater influence. At worst, it seems that video games have a negligible effect on gamers – and there are many positives to benefit from. So, ready, player one?

Are Aliens Out to Get Us?

Jonathan Cooke

For a species that so often looks up at the stars and wonders ‘are we alone?’, we tend to populate our fictional universes with less-than-benevolent alien species. Look at some of the more popular science fiction films and stories released in the last century: War of the Worlds, Alien and even the recently released Life all approach the question of extraterrestrial life the same way – it’s out there, and it’s out to get us.

Since this is such a speculative field, there is virtually no consensus on how we might react upon first contact, simply because we don’t know what sort of aliens will turn up. The developing view is that, if there is other life in the universe, it’s likely to be microbial in nature. If there is anything the much-lauded tardigrades have taught us, it is that microbial life will find a way to survive. Therefore, most space-based programmes are focused on the detection of this so-called primitive life (is it fair to call it primitive when it can do some pretty amazing things?).

Most missions have focused on our closest sister, Mars, whose dry riverbeds provide some tantalising hints that all might not be dead on the red planet. Methane levels are unusually high in the Martian atmosphere. Methane is highly reactive and tends to disappear without regular top-ups, so its persistence indicates that something is replenishing it. Methane in our own atmosphere is typically produced from biotic sources, meaning that, from our experience, traces of methane might be indicative of life.

Of course, alternative explanations exist for the presence of methane, including geological sources of the gas. But what if our rovers were to discover bacteria living on the surface of the red planet? What would we do with it? Well, it wouldn’t be coming to our planet anytime soon – none of the rovers currently on the planet are equipped for that sort of mission. Even then, the samples would have to be tested and tested again to ensure that they aren’t just contaminants from Earth, and under strict contamination procedures they would be unlikely to still be alive by the time they reached Earth. So don’t worry: no Martian plague will be giving you the sniffles.

Alien View From The Moon Earth

Image Credit: Max Pixel

Anywhere else we are currently scouting for life would face similar contamination issues. Europa, for instance, one of Jupiter’s larger moons, is being targeted as our next life-seeking venture. With an ocean thought to be buried underneath its permanent casing of ice, many scientists believe that its waters may just be warm enough to support the development of life, even if, again, only simple life.

So that basically covers what’s known in our solar system – at the very least, there won’t be any tripods bursting out of the ground anytime soon to exterminate us and Tom Cruise! But what about farther afield? Well, many radio telescopes are turned to the farthest reaches of our galaxy, and news publishers love a good story of astronomers finding ‘habitable’ exoplanets. If you pay attention to the Drake equation, there should be somewhere between 1,000 and 100,000 intelligent civilizations in our galaxy. So why haven’t we heard from our cosmic neighbours?
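The Drake equation behind that range is just a chain of multiplied estimates, which is why the answers vary so wildly. A minimal sketch of the calculation in Python is below; the parameter values are illustrative guesses, not agreed figures, and changing any one of them by a factor of ten changes the answer by the same factor.

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L
# All parameter values below are illustrative guesses; plausible estimates
# for each factor vary by orders of magnitude.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

n = drake(
    r_star=1.5,            # new stars formed per year in the Milky Way
    f_p=1.0,               # fraction of stars with planets
    n_e=0.2,               # habitable planets per star that has planets
    f_l=0.5,               # fraction of habitable planets where life appears
    f_i=0.1,               # fraction of those that develop intelligence
    f_c=0.1,               # fraction of those that emit detectable signals
    lifetime_years=100000, # how long such a civilisation keeps transmitting
)
print(f"Estimated detectable civilisations: {n:.0f}")  # ~150 with these guesses
```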

There are many reasons why we might not have heard from them, and many reasons we should be thankful for that. If we’ve learnt anything from our own behaviour on Earth, it is that the less technologically advanced society rarely survives first contact with a more advanced one. The most glaring example of this is the fate of the Native Americans in the wake of Europeans discovering the New World.

This is the cautionary tale that Stephen Hawking told in 2010 when questioned about our first meeting with E.T. On the other hand, many scientists question the validity of Hawking’s reasoning. As mentioned above, many are more worried about what the aliens bring with them accidentally rather than deliberately. As illustrated in H. G. Wells’ famous novel The War of the Worlds, contact with a previously unencountered pathogen can be devastating to any organism. From the Mayans, devastated by typhoid and influenza, to African swine fever in the American pork industry, foreign pathogens tend to wipe out whole communities before any resistance can develop. Just ask the abandoned Mayan cities of Central America.

Of course, other questions arise which are a bit harder to answer. What if the alien civilization is warlike? What if their system of ethics is not comparable to ours? What if, and this has been considered, we are the ‘life, but not as we know it’ variety in the universe? Many astrobiologists have postulated that silicon-based lifeforms may exist (we are carbon-based), so what if we’re just too alien for them to visit?

An even sadder alternative is that we are truly alone, that alien life is non-existent (considered highly unlikely), or that we are one of the first intelligent civilizations to evolve in the galaxy. Perhaps intelligent life is the exception rather than the rule. One resolution of the Fermi paradox is that our own path to survival was extremely unlikely, and that most creatures on that road are snuffed out by natural selection before they ever reach the point of intelligence.

In any case, what keeps many scientists up at night is not thoughts of alien invasions, but thoughts of alien illnesses. Perhaps what we should be preparing for, and indeed looking for, is what makes little green men feel ill.

TRAPPIST-1: Could This Newfound Star System Hold Alien Life?

Josh Bason

On February 20th 2017, NASA announced a press conference to discuss a “discovery beyond our solar system”. Two days later they revealed the TRAPPIST-1 system; a series of seven earth-size planets orbiting a star 39 light years from Earth. The announcement of this discovery and the discussion that followed has circled one tantalising question – could the TRAPPIST-1 star system harbour extraterrestrial life?

The scientists at NASA certainly seem excited about the concept. Their search for exoplanets – that is, planets orbiting stars other than our own sun – had until this point yielded only a handful of potentially habitable worlds. Among TRAPPIST-1’s seven planets, however, no less than three have shown this potential, setting a record for the most known earth-like planets orbiting one star.

These worlds were highlighted by the scientists primarily for their location in the so-called ‘habitable zone’. This describes the range of orbits where, depending on the sizes of the planet and the star, conditions are neither too hot nor too cold to sustain life.


Artist’s impression of the surface of TRAPPIST-1f, the fifth planet in the TRAPPIST-1 system. (Credit: NASA/JPL-Caltech/R. Hurt, T. Pyle)

Further investigations by NASA scientists have also yielded promising results. Using precise measurements of the size and mass of the planets, the researchers were able to calculate estimates for the density of each of TRAPPIST-1’s worlds. These density measurements are key to understanding exoplanets as they give us our first insight into their composition.
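The density calculation itself is straightforward once mass and radius are in hand; the difficulty lies entirely in measuring them. As a sketch, the Python below works out the density of a hypothetical planet with roughly Earth’s mass and radius – placeholder values, not the published TRAPPIST-1 figures.

```python
import math

# Density of a planet from its mass and radius.
# Values below are for a hypothetical, roughly Earth-like planet,
# not the published TRAPPIST-1 measurements.
mass_kg = 5.97e24        # ~1 Earth mass
radius_m = 6.37e6        # ~1 Earth radius

volume_m3 = (4 / 3) * math.pi * radius_m ** 3
density = mass_kg / volume_m3

print(f"Density: {density:.0f} kg/m^3")  # ~5500 kg/m^3, typical of a rocky planet
# For comparison: rock ~3000-5500 kg/m^3, water ice ~900, gas giants ~700-1600
```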

Of the seven planets in the newly discovered system, six have been described as ‘rocky’ – that is, more comparable to solid planets like Earth and Mars than to gas giants like Jupiter and Saturn. The seventh planet, which has the widest orbit and an undetermined mass, has been provisionally described as ‘snowball-like’.

Despite these hopeful indications, there is also a body of evidence which is significantly less inspiring. Firstly, while it’s tempting to imagine TRAPPIST-1 as a distant copy of our own solar system, the absence of two planets is not where the dissimilarities end. The most striking difference between this newly-discovered system and our own is the star which lies at the centre.

The star is classified as an “ultra-cool dwarf”, meaning it is both ten times smaller than our sun and less than half its temperature. While this doesn’t sound like a recipe for warm earth-like planets, the small size of the star is counteracted by the proximity of the planets which orbit it.


NASA’s illustration of the size of TRAPPIST-1. (Credit: NASA/JPL-Caltech/R. Hurt, T. Pyle)

The seven worlds of the TRAPPIST-1 system all orbit between one and five million miles from their star. This means that all seven planets could fit comfortably in the space between the sun and Mercury, with its 58-million-mile orbital distance. While the size of the TRAPPIST-1 system isn’t necessarily a barrier to the formation of life, it certainly represents a significant divergence from the only solar system where we’ve ever observed it.

It’s also important to bear in mind how little is known about the planets of the TRAPPIST-1 system. Despite the array of concept images released by NASA in the wake of the announcement, we don’t, in fact, have any idea what the planets look like. The planets were found, or more accurately their existence was inferred, by observing the light emitted by the star they orbit.

This process, known as transit photometry, involves watching the star’s brightness over time and looking for dips in luminosity as planets pass in front of it. From this information, NASA scientists extrapolated a range of properties, such as the planets’ sizes, masses and orbital distances. What this technique doesn’t reveal, however, are other key factors that determine the habitability of a planet.
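As a toy illustration of the idea, the Python sketch below scans a made-up brightness series for dips below the star’s normal level. Real transit photometry fits detailed models to far noisier data, so treat this purely as a sketch of the principle, with invented numbers.

```python
# Toy version of transit photometry: flag times where the star's measured
# brightness dips noticeably below its usual level. The data are made up.
brightness = [1.00, 1.00, 0.99, 1.00, 0.92, 0.91, 1.00, 1.00, 0.93, 1.00]

baseline = sorted(brightness)[len(brightness) // 2]  # median as the "normal" level
threshold = 0.98 * baseline                          # flag dips deeper than 2%

transits = [t for t, flux in enumerate(brightness) if flux < threshold]
depth = 1 - min(brightness) / baseline

print("Possible transits at time steps:", transits)      # [4, 5, 8]
print(f"Deepest dip: {depth:.1%} of starlight blocked")   # bigger planet => deeper dip
```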

Because of this, we still do not know whether any of the TRAPPIST-1 planets have atmospheres, which are vital for life, or magnetic fields, which can protect life from deadly solar wind. NASA is also not discounting the possibility that some or all of the planets may be ‘tidally locked’, meaning that one side permanently faces the star while the other half perpetually faces away. Conventional wisdom suggests life would be impossible on such a planet, as one half would be too hot for life and the other too cold. More recent evidence, however, has suggested otherwise.


NASA’s idyllic concept art is based almost entirely on speculation. (Credit: NASA/JPL-Caltech/R. Hurt, T. Pyle)

Furthermore, since the announcement of NASA’s discovery, two pieces of research have poured cold water on hopes of life in the TRAPPIST-1 system. The first, published on March 30th, detected frequent flares emitted from the system’s star. Considering the small orbital distances of the nearby planets, the authors feared that these huge releases of energy may disrupt the planets’ atmospheres and that, without the protection of strong magnetic fields, life in the system may be impossible.

If that wasn’t disheartening enough, research published on April 6th revealed a new climate model to assess the habitability of exoplanets. The study concluded that only one of the planets, TRAPPIST-1e, was likely to support liquid water. If this planet does not possess a substantial enough magnetic field to weather the flares from its nearby star (something scientists feel is unlikely), all hope for life in TRAPPIST-1 may be lost.

Despite this disappointing news, research into TRAPPIST-1 continues. NASA has announced plans to use its new James Webb Space Telescope, launching in 2018, to search for key atmospheric components such as oxygen and water in the system.

The increased sensitivity of the Webb telescope will also allow the surface temperature and pressure of the planets to be measured, answering yet more questions about the habitability of the system. Until then, however, the hospitability of the TRAPPIST-1 system remains very much in question.

Accidental Genius: Science in Serendipity

Alice Whitehead

Many of the biggest discoveries in science have been born out of scientists’ accidents or the development of unrelated technologies. There have been countless examples over the past few centuries. Here we recount, arguably, the ten most important serendipitous discoveries:

10. Corn Flakes


 

Image Credit: https://goo.gl/images/ZQiwuV

One day in 1895, Will Keith Kellogg was experimenting, attempting to perfect some cereal recipes, when he forgot about some boiled wheat that had been left on the side. The wheat had gone flaky, but, not wanting to be wasteful, Kellogg and his brother cooked it nonetheless. The result was crunchy and flaky, and it went on to become one of the biggest and most popular breakfast cereals: Corn Flakes.

9. Viagra


Image Credit: Online Doctor

Viagra must be in the running for the most inadvertent drug side effect ever. In the early 1990s, Simon Campbell and David Roberts developed a drug designed to treat high blood pressure and angina, with no idea at the time of the popularity their creation would have. Originally called UK-92480, the compound revealed its powerful side effect when patients reported it during human trials. The pair had accidentally invented a drug to treat erectile dysfunction, and the little blue pill was subsequently named Viagra.

8. Teflon


Image Credit: Makeaheadmealforbusymoms

Teflon, or ‘polytetrafluoroethylene’, is the slippery non-stick coating used in cookware. It was stumbled upon in 1938 by Roy Plunkett while he was trying to develop a safer refrigerant to make refrigerators home-friendly. Plunkett found that the resin was resistant to extreme heat and chemicals, but it wasn’t until the 1960s that Teflon was employed for the non-stick cookware we know today.

7. Vaseline


Image Credit: Liftable

In 1859, Robert Chesebrough was a 22-year-old British chemist visiting a small town in Pennsylvania where petroleum had recently been discovered. He became intrigued by a natural by-product of the oil drilling process. This product, petroleum jelly, appeared to be remarkably useful for healing skin cuts and burns. In 1865, after purification and patenting, Vaseline was complete.

6. Dynamite

Image Credit: The Specialists Ltd

Alfred Nobel, a Swedish chemist and engineer, was transporting the highly unstable explosive liquid nitroglycerin when he realised one of the container cans had broken and was leaking. However, by chance the material in which the cans were packed – a rock mixture called ‘kieselguhr’ – was able to absorb and stabilise the liquid nitroglycerin. The product was patented in 1867 and Nobel named it dynamite.

5. Anaesthetic


Image Credit: Kinja-img

Without this hysterical accidental discovery, medicine would not be the same today. Ether and nitrous oxide were extensively used for recreation in the early 1800s. Gatherings called ‘laughing parties’ – where groups of people would inhale either of the gases – became increasingly popular. Coincidentally, it was found that those under the influence of these compounds didn’t feel any pain.

4. Super Glue


Image Credit: Geek.com

During World War II, Dr Harry Coover mistakenly came across an extremely quick-setting and strong adhesive. Initially considered for clear plastic gun sights for Allied soldiers, the product appeared to have great commercial potential. However, it wasn’t until 1951 that the product was rediscovered, and it was eventually rebranded as ‘Super Glue’ in 1958.

3. Microwave


Image Credit: Ytimg

In 1945, Percy Spencer stumbled upon the cooking abilities of microwave radiation while working with radar equipment, again during World War II. Fiddling with an active radar set, he noticed that the chocolate bar in his pocket had melted. Originally almost 1.8 metres tall and weighing 340 kg, the microwave oven was first sold in 1946 under the name ‘Radarange’.

2. Renewable energy


Image Credit: Pop.h-cdn

In October last year, scientists at the Oak Ridge National Laboratory in Tennessee were attempting to make methanol from carbon dioxide when they realised their catalyst had, in fact, turned the carbon dioxide directly into liquid ethanol. The reaction uses tiny spikes of carbon and copper to, in effect, reverse the combustion process. Unexpectedly, they had stumbled upon a way to convert a potent greenhouse gas into a sustainable source of renewable fuel – a well-deserved second place.

1. Penicillin


Image Credit: Emory Magazine

But, of course, the title of number one accidental genius has to go to Sir Alexander Fleming. In 1928, Fleming was a Professor of Bacteriology at St Mary’s Hospital in London. While experimenting with the influenza virus, Fleming had colonies of Staphylococcus bacteria growing in Petri dishes around his lab. During this time, he decided to take a two-week holiday.

When he returned, he found that, oddly, the bacteria he was growing in cultures had been contaminated. Not only this, but the bacteria appeared unable to grow on or even near this contaminant! Fleming went on to culture this contaminant and discovered it was Penicillium mould. And so, the first ever antibiotic, Penicillin, was found.

 

 

There’s an App For That

Alys Dunn

Since the Apple App Store opened in 2008, app developers have made $60 billion. On New Year’s Day this year alone, $240 million was spent on apps. With 2.2 million apps to choose from on the Apple store alone, we have come a long way from playing Snake on our Nokias.

So isn’t it fitting that something as revolutionary as the smartphone app is now being put to an equally innovative use?


Image Credit: Pixabay

Over fifty years ago the world was transformed by the advent of The Pill. Since its approval for contraceptive use in the 1960s, over 200 million women have taken it. It put women in control of their own lives, enabling them to choose if and when they wanted children. For the first time in the history of mankind, women were able to take responsibility for a method of preventing pregnancy.

Swedish particle physicist Dr Elina Berglund has now made an app for that. Natural Cycles is an app that helps prevent pregnancy by tracking measurements you plug into it – measurements taken with a thermometer you pop into your mouth every morning. Depending on these measurements, the app’s algorithm will tell you whether it’s safe to have unprotected sex, termed a ‘green day’, or whether you should use other forms of protection, a ‘red day’. The basic scientific fact behind this is that a woman can only get pregnant on around six days of her monthly cycle. So, by using an algorithm that takes into account temperature and many other factors, like sperm survival, temperature fluctuations and cycle irregularities, the app is able to predict the likelihood of becoming pregnant that day. Other factors can also be added into the app to reduce your red days and increase your green ones, such as logging when you’re having intercourse or taking LH tests (luteinising hormone tests, which can detect ovulation through urine).
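The actual Natural Cycles algorithm is proprietary and far more sophisticated, but the general idea behind temperature-based fertility awareness can be sketched roughly: basal body temperature typically rises by a few tenths of a degree after ovulation, so a clear, sustained rise above the recent baseline suggests the fertile window has passed. The Python below is a deliberately simplified illustration of that idea only – an assumption-laden toy, not the app’s method, and certainly not something to rely on for contraception.

```python
# Highly simplified illustration of temperature-based fertility awareness.
# NOT the Natural Cycles algorithm and NOT suitable for actual contraception.

def classify_day(recent_temps, today_temp, rise_threshold=0.2):
    """Return 'green' only when today's basal temperature has clearly risen
    above the recent baseline (suggesting ovulation has already happened);
    otherwise err on the side of caution and return 'red'."""
    if len(recent_temps) < 6:
        return "red"                        # not enough history yet
    baseline = sum(recent_temps[-6:]) / 6
    return "green" if today_temp >= baseline + rise_threshold else "red"

temps = [36.4, 36.5, 36.4, 36.5, 36.4, 36.5]  # pre-ovulation readings (degrees C)
print(classify_day(temps, 36.5))  # red   - no clear rise yet
print(classify_day(temps, 36.8))  # green - a rise would suggest post-ovulation
```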

The Pearl Index is a way of measuring how effective contraceptives are: the closer to zero, the better the contraceptive. Seven women in every 100 using Natural Cycles as their main contraceptive for a year got pregnant. This equates to a Pearl Index of seven: very high protection. Compare this to the pill, which has a Pearl Index of nine – also very high protection, yet less effective than the app. Plus, Natural Cycles has none of the added hormones that cause the side effects (like certain cancers and deep vein thrombosis) associated with the contraceptive pill.
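The Pearl Index quoted above is simply the number of pregnancies per 100 women per year of use. A minimal sketch of the calculation in Python, using the figures from the paragraph above:

```python
# Pearl Index: pregnancies per 100 woman-years of using a contraceptive method.
def pearl_index(pregnancies, women, months_of_use):
    woman_years = women * months_of_use / 12
    return pregnancies / woman_years * 100

# Figures from the article: 7 pregnancies among 100 women over one year of use
print(pearl_index(pregnancies=7, women=100, months_of_use=12))  # 7.0
```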

The app is still not considered as effective as other contraceptive methods, such as the Nexplanon contraceptive implant, which has a Pearl Index between 0.00 and 0.4. However, it’s fascinating that advances in computer technology and an algorithm written by a particle physicist mean we are now able to control and understand our own bodies better via a mobile app, without harmful side effects.

Why Do So Many Drugs Fail?

Jonathan James

Major pharmaceutical companies like AstraZeneca and GlaxoSmithKline (GSK) invested a staggering 140 billion US dollars in drug research and development between 1997 and 2011. The cost to the consumer also varies widely, with the drug Copaxone (used to treat multiple sclerosis) costing nearly $4,000 a dose in the US, compared to just $862 in the UK. Even so-called ‘affordable’ drugs, like Nexium (used to treat stomach acid), cost several hundred dollars per dose. Why are these drugs so expensive?

In part it is due to the competitive and profit-driven nature of pharmaceuticals, but the major reason is that so many drugs fail during development. As a result, in order to continue researching and producing new drugs, pharmaceutical companies have to charge enormous amounts to recoup the money lost on failed drugs. AstraZeneca, which spent nearly 60 billion dollars between 1997 and 2011, had only five drugs approved in that time. To put that in perspective, it cost them approximately 12 billion dollars to produce one usable drug! Clearly it’s in the best interests of both the consumer and the drug companies to see more drugs successfully make it to market. With that in mind, what are the major reasons why so many drugs fail, and can we do anything about it?
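That ‘cost per usable drug’ figure is just the total R&D spend divided by the number of approvals in the same window – a quick back-of-envelope check in Python, using the AstraZeneca figures quoted above:

```python
# Back-of-envelope cost per approved drug, using the figures quoted above.
rnd_spend_usd = 60e9   # ~60 billion dollars spent on R&D, 1997-2011
drugs_approved = 5     # approvals over the same period

cost_per_drug = rnd_spend_usd / drugs_approved
print(f"${cost_per_drug / 1e9:.0f} billion per approved drug")  # ~$12 billion
```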

medical-pills public domain.jpg

Image Credit: Public Domain Pictures

The drug discovery process is tightly regulated by different bodies depending on what country the company is operating in, but, as most major pharmaceutical companies are multinational, they effectively all follow a similar set of rules. Drug discovery begins by identifying a particular target – be it a protein to inactivate, a bacterium to kill, or a tumour marker to attack. From this, scientists can spend anywhere between three and 20 years working on new compounds. A lot of the time, they won’t even find anything useful!

Let’s assume that the company has found a useful compound it thinks could be a drug. The next step involves a series of trials. These begin with pre-clinical trials in non-human subjects such as mice and rats, and may progress to dogs, cats and primates. While controversial, drug companies are required by law to carry out animal testing – it’s not something they can simply ignore. The thalidomide tragedy, which resulted in severe birth defects in thousands of children, came about in part due to a lack of testing in model organisms.

Once the drug passes pre-clinical trials – and only about 10% make it this far – it then passes into three phases of clinical trials. This involves testing the drug on progressively larger groups of both healthy and affected patients to check safety, dosage and side effects. Only when they are satisfied it’s an effective drug can a pharmaceutical company apply for a licence to market it – and these are not cheap either! By the end of the process, a company may have spent billions on a potential drug, yet only about five per cent of candidates reach the end point.

Now that we know what’s happening, we have to ask why. Why do so many drugs fail clinical trials? There are a number of reasons. Firstly, model organisms such as rats and mice have different metabolic pathways to humans. This means that the way a drug interacts in one animal may be different from how it behaves in another. This doesn’t mean that animal testing is useless – it just means that more is needed to understand the differences.

Another major reason is that the theory behind a disease is wrong. At the beginning of the drug development process, we might understand very little about the disease. By the time the drug enters clinical trials (maybe 10-15 years later) we might have a much better understanding, only to realise that this means our original drug won’t work. Or we might not understand the disease at all (and this is surprisingly common). This is very true of Alzheimer’s, and is a major reason behind the lack of effective drugs. Whilst we might have a basic understanding of the factors at play, we still don’t know enough to make a drug that will actually work.

Side effects must also be considered. A drug might be very effective at treating an illness, only for it to have devastating side effects that make it a no-no for human use. For example, cancer researchers must try to limit the effects of chemotherapy agents (a tough job) in order to try and give the patient the best quality of life.

With all that in mind, you might be forgiven for thinking it’s a losing battle. Don’t despair. Our knowledge of diseases is progressing at an ever-increasing rate, and with it comes the hope of wonderful new ways of treating them. In the future, researchers hope to be able to better utilise specific cell cultures, taken from a patient, to better understand their unique disease profile and develop personalised medicines. Other technologies, such as gene editing and nanotech, may also offer hope to millions of people suffering from disease.