Learning from our ancestors – how early humans worked together to survive a changing climate – Emily Farrell

If Yellowstone were to erupt tomorrow, America might not make it through the night. Yellowstone is a supervolcano that erupts roughly every 650,000 years; the last eruption was 640,000 years ago. So while it is not “overdue” for an eruption, as some conspiracy theorists claim, there is one on the way. This could spell disaster for the continent. Last time, 1,000 cubic km of rock, dust and volcanic ash were blown into the sky, blocking sunlight from reaching plants, catastrophically polluting the air and massively changing the climate. It spelt disaster for the animals living there at the time, and could do so again.

But would this mean the end for humankind? Not if we follow in the footsteps of our ancestors.

Around 40,000 years ago, southern Italy had its own super-eruption, in the volcanic Phlegraean Fields, and archaeologists have been studying a site in Liguria to see how we were affected by it. Humans had only been in this area for about 1,000 years before the event occurred. It would have changed their climate, and possibly other aspects of life such as the food available and air and water quality.

Researchers believe that this change in climate is what drove the Neanderthals out of this area. Current theories suggest that they were not especially capable of adapting and would not have survived well in a suddenly new environment.

But regardless of how well Neanderthals coped, it seems some humans survived and even flourished in these conditions. Their tactic, it appears, was to maintain links between groups. The evidence comes from the Italian site: when tools, ornaments and human remains from an ancient rock shelter were analysed, it was found that some of the flint in use had come from hundreds of kilometres away. Having this network would mean that knowledge of how to cope in different situations and habitats was shared between the groups. When the climate did change, whether due to a super-eruption or other conditions, the information on how to survive in an unfamiliar environment would already be available.

We can apply this theory to our communities in the modern day. By learning from each other, we can share the knowledge of how to cope with changes in our climate. Globalisation has increased our capacity for this: instead of hundreds of kilometres, we can draw on networks that span the world. We can learn how to build houses on the water from the Pacific Islands; we can learn how to make the most of a limited water supply from Singapore. Why spend time creating novel solutions when the perfect one may already be in place somewhere else on the globe?

If a super-eruption occurs and dramatically changes our climate, or even if we continue to change the climate ourselves, we will need to adapt to make our lives sustainable and to endure the changes. By networking, and by sharing our knowledge, we can follow the example of our ancestors and survive whatever this world throws at us.

237 Million Medication Errors Occur in NHS England Annually – an Interview with Researcher Fiona Campbell by Emma Hazelwood

A recent report revealed that 237 million medication errors occur in NHS England annually. Not only did the study reveal that these mistakes cause 712 deaths and could be a contributory factor to thousands more, it also estimated that they cost the NHS £98.5 million a year.

Fiona Campbell, a research fellow at the University of Sheffield, was involved with the study. She met up with pH7 writer, Emma Hazelwood, to provide some more information on the report.

How did the project come about?

The team, which included Marrissa Martyn-St James and Eva Kaltenthaler, at the School of Health and Related Research were asked by the Department of Health to look at how prevalent medication errors in the NHS are, and to estimate the cost of these errors. The project was a collaboration between researchers in Sheffield, Manchester and York, with the team at Sheffield identifying and synthesising relevant literature.

How were the figures calculated?

There are many different ways that studies have measured medication errors; some examples are looking at past prescribing practices or at adverse drug reactions (ADRs). The threshold for counting an error is very low – the figure of 237 million includes any small error at all.

What were the limitations of the study?

As with any study, there were some limitations. First, there was a strict time limit, set by the Department of Health – the team at Sheffield had about six weeks to analyse a mammoth amount of data. Secondly, calculations for medication errors are complicated. This is for several reasons – there are different definitions of an error across studies, and sometimes no one realises an error has been made so it may go unrecorded. There are also ethical implications of studying medication errors – if a researcher spots an error, they may feel that they have a moral obligation to stop it before it results in harm to a patient. Therefore, it is difficult to calculate what the impact of these mistakes would have been. In this study, some data goes back as far as ten years. Our healthcare system may have changed since then.

Is it a serious problem?

Considering that there are only about 50 million people in England, the figure of 237 million medication errors per year seems shocking. However, what is lost in this figure is that there are billions of prescriptions issued every year. Furthermore, the threshold for an error is very low – even if one is noticed by healthcare professionals and stopped before it reaches the patient, it is still included in these calculations. Of course, there are catastrophic errors which result in severe patient harm, or even death, but not all – in fact, three out of four result in no harm. Having 237 million medication errors does not mean that people have taken the wrong medication 237 million times. Although it is estimated that these errors are a contributory factor in 1,700–22,303 deaths a year, the true figure is most likely at the lower end of this range. Again, the threshold is very low – if someone dies and there was a medication error, even if it is unlikely that it was related to their death, it must be recorded as a potential contributory factor.

Although the errors result in hundreds of deaths, and cost the NHS £98.5 million per year, it seems that we are no worse than anybody else. In many countries, errors are not even recorded, and, when they are, rates are similar to those in this study. The fact that the team was able to undertake this project could be seen as a commitment to transparency within the NHS, and of the determination to reduce these errors.

What are the possible improvements for the NHS?

In order to stop these errors, we must continue to be vigilant in recording them. We rely on healthcare professionals to record their own mistakes, so it is vital that there is not an environment of guilt and shame. There are currently trials seeking to reduce error rates, in particular researching where errors occur and testing new systems for flagging them up. There are already several layers of checks within the NHS, and, for an error to reach the patient, it has to slip through every one of them. The report supports more funding for research into what we can do to reduce medication errors.

What was the impact of the study?

This study attracted a great deal of media attention, from BBC News to Radio 4. Studies such as this highlight the role scientists have in discussing research and making it accessible to the public, without allowing it to be used as a political football.
Overall it’s clear that medication errors are prevalent in our healthcare system. On occasion they have devastating effects, and this quantification of the errors is shocking. That said, we can see that our system has a good rate of preventing these from reaching patients, and the fact that studies and trials are taking place demonstrates that the problem could be improved dramatically over the coming years.


Have we really found Amelia Earhart’s bones? Fatima Sheriff

Amelia Earhart is one of the most famous aviators of her time, or indeed of any time: she broke record after record, blazed a trail for female pilots, and then, in July 1937, disappeared over the Pacific Ocean, never to be seen again. But this may not be the end of her story; a new study in Forensic Anthropology claims that a set of bones found in 1940, within the vicinity of her disappearance, are in fact her remains.

Born at the turn of the 20th century, Earhart found herself within a society ingrained with “age-old customs”, where women were “bred to timidity”. However, after her first time in a plane in December 1920, she found her place in the world, starting lessons six days later and, within a year, passing the test for her National Aeronautics Association licence (only the 16th woman ever to do so). She quickly rose to fame, becoming the first woman to fly solo above 14,000 feet in 1922.

As a passenger in the first flight across the Atlantic in 1928, whereas others were paid thousands of dollars, she was paid in… “experience and opportunity”. Confident in her own ability, she followed suit in 1932, becoming the first woman to fly solo across the Atlantic. Along the way she had to deal with leaking fuel, flames in the engine and ice on the wings of her plane with “only tomato juice to keep her own energy levels up”. Despite all these challenges, her piloting skills and quick problem solving meant she landed safely. She was awarded the Distinguished Flying Cross and continued to add to her astonishing list of achievements with the mentality that “women must try to do things that men have tried. When they fail, their failure must be but a challenge to others”.  She inspired other women to join her, founding the Ninety Nines, an organisation for the advancement of licensed female pilots.

Her spirit of adventure led her to plan the next ambitious trip: a round the world flight of 29,000 miles over 40 days with 20 stops. Leaving on the 1st of June 1937 from Oakland, California, she flew east with her navigator, Fred Noonan. After many successful stops and only 7,000 miles to go before they reached Oakland again, on the 2nd July they went off course and were never found.

The stop they were heading for was Howland Island, only a square mile in size and therefore difficult to find. Radio messages from the pair stated: “we must be on you but we cannot see you, fuel is running low, been unable to reach you by radio, flying at 1000 ft”. Their last ‘frantic’ communication was “on the line 157, 337”. Searches were conducted, covering 250,000 square miles, but on the 19th July the plane was officially declared lost at sea.


Various conspiracy theories have gripped the world in the decades since, but renewed efforts to find the plane have turned up nothing. One theory that gained a lot of momentum was that she had been stranded southwest along the line 157,337 on an island near Howland Island: Gardner Island, now called Nikumaroro. Although planes passing over it on the 9th July 1937 reported no visible activity on the island, items recovered later included a woman’s shoe and the box for a sextant (a navigation device that could have been Earhart’s). Most importantly, 13 bones were recovered and examined by a D. W. Hoodless in Fiji. He determined the bones to be those of a ‘short, stocky European man’ and, in shocking scientific practice (some may say suspiciously so), he discarded them.

Many people have doubted his identification. Using his records, in 1998 TIGHAR (The International Group for Historic Aircraft Recovery) re-estimated the bones to be those of a European woman between 5 ft 5 in and 5 ft 9 in tall, fitting Amelia’s biological profile. She was several inches taller than the average woman of the time, potentially accounting for the misidentification of the skeleton’s sex as male. Richard Jantz released a paper in March this year further disputing Hoodless’ original assertion. He compared the lengths of the humerus, radius and tibia to photos of Amelia, information on her pilot’s licence and historic seamstress measurements. His conclusion was that “Earhart is more similar to the Nikumaroro bones than 99% of individuals in a large reference sample.” The reason only 13 bones were found could be down to the native coconut crabs, the largest living land arthropods (and, frankly, terrifying), which could have carried off the rest of the remains.

However, the lack of a complete skeleton remains an obstacle to definitive identification. For instance, a pelvis bone could clear up any ambiguity about the sex of the skeleton. Without the original remains it is also impossible to conduct DNA analysis to confirm identity, something that soil analysis and bone-sniffing dogs have so far failed to do. This is a compelling argument, then, but not one approaching indisputable certainty. Whatever you choose to believe, Amelia left a legacy that won’t be forgotten…

“Adventure is worthwhile in itself” – Amelia Earhart (1897-1937)

Biohacking: an upgrade to “wearable tech”, or turning ourselves into cyborgs? Ellie Marshall

Anyone who’s watched the futuristic Netflix show ‘Black Mirror’ will know how emerging technology, and our reliance on it, can have unanticipated consequences. If you have not seen it, I highly recommend giving it a watch!

Yet, we might be closer to the futuristic world of Black Mirror than you think. Around the world, people are pushing the gruesome boundaries of how far we integrate tech with our lives, through a series of implants and body modifications. This is a branch of biohacking – a blanket term used to describe a whole spectrum of ways that people modify or improve their bodies. People who hack themselves with electronic hardware to extend and improve human capacities are known as Grinders or Transhumanists.

Common procedures

A common procedure is to implant a strong magnet beneath the surface of a person’s skin, often in the tip of the ring finger. Nerves in the fingertips then grow around the magnet. This allows nearby magnetic and electrical fields along with their strength and shape to become detectable to the user, thanks to the subtle currents they provoke. For a party trick, the person can also pick up metal objects or make other magnets move around.

Calling this a procedure, though, gives rather the wrong impression. Biohacking is not a field of medicine. Instead it is carried out either at home with DIY kits purchased online or in piercing shops, but without an anaesthetic (which you need a licence for). If you think this sounds painful, you are correct. With no corporate help, the only way grinders can accomplish their goals is by learning from other grinders, mainly through online forums such as biohack.me.

Britain is the birthplace of grinders: in 1998 Kevin Warwick, professor of cybernetics at the University of Reading, had a simple radio-frequency identification (RFID) transmitter implanted in his upper left arm, in an experiment that he called Project Cyborg. The chip didn’t do much – it mainly just tracked him around the university and turned on the lights to his lab when he walked in. Still, Warwick was thrilled, and the media were enchanted, declaring him the world’s first cyborg.

RFID implants are now common among grinders and allow users to unlock physical and electronic barriers. Similar technology is already widely used in contactless card payment systems and clothing tags, and Motorola is developing an RFID-activated ‘password pill’ that a user can swallow to access their devices without the hassle of remembering passwords.

Other examples of biohacking

Circadia, developed by the biohack.me offshoot company Grindhouse Wetware, is another implantable device, one that constantly gathers the user’s biometric data, for example transmitting temperature readings via Bluetooth. The medical potential for this device is vast, and of all these projects it has the most immediately practical benefits.

Additionally, the first internal compass, dubbed the ‘Southpaw’, has been invented. It works by sealing a miniature compass inside a silicone coat, within a rounded titanium shell, to be implanted under the skin. An ultra-thin whisker juts out, which is activated when the user faces north, lightly brushing an alert against the underside of the skin.

Rich Lee, a star of the biohack.me forum, has magnets embedded in each ear so that he can listen to music through them, via a wire coil worn around his neck that converts sound into electromagnetic fields: the first ‘internal headphones’. The implants can also be fed by other sensors, so he can ‘hear’ heat from a distance and detect magnetic fields and Wi-Fi signals too! There is a practical purpose to Lee’s experiments: he suffers from deteriorating eyesight and hopes to improve his orientation through greater sensory awareness.

A damaging concept to users and society?

The question we must ask ourselves is: at what point does the incorporation of all this technology make us a different species, and what are the ethics behind that?

The bluntest argument against biohacking is that it’s unnatural. Very few people fail to recognise the benefits of technological progress, or of medical advancements like pacemakers and cochlear implants; but to most people, adding RFID chips or magnets to the body appears to have little value. Grinding, however, is often not recognised as an advancement.

Another argument against human augmentation mirrors the worries that commonly surround genetic engineering. A thought provoking possibility is that those who have access to (and can afford) augmentation procedures and devices will gain unfair advantages over those who do not. Over generations, this could create a large rift between the augmented and the unaugmented. Luckily, the grinder movement provides a solution to this problem as part of its central ethos: open source hardware and the free access of information.

A benefit to the individual and society?

To some, implanted technology represents the next stage in mankind’s evolution that may bring many medical advancements. And, indeed, the idea is not outlandish. Brain stimulation from implanted electrodes is already a routine treatment for Parkinson’s and other diseases, and there are prototypes that promise to let paralysed people control computers, wheelchairs and robotic limbs.

The Wellcome Trust has begun a trial with Alzheimer’s patients carrying a silicon chip on the brain itself, able to predict dangerous episodes and to stimulate weakened neurons. The US military research agency DARPA is also experimenting with chip implants in humans to help control mental trauma suffered by soldiers.

There is potential to help visually and hearing impaired people by using a chip that translates words and distances into sound, which could mean the end of Braille and white canes. Neil Harbisson is the founder of the non-profit Cyborg Foundation in Barcelona and was born with achromatopsia, the inability to see colours. Since 2004, Harbisson has worn a device he calls the eyeborg, a head-mounted camera that translates colours into soundwaves and pipes them into his head via bone conduction. Today Harbisson “hears” colours, including some beyond the visible spectrum.

These experimental grinders are certainly laying the groundwork for more powerful and pervasive human enhancements in the future, but for now, a Fitbit is more than enough for me.

 


The Meat Industry: friend or foe? Keerthana Balamurugan

Meat has been, and still is, a universal ingredient in numerous societies, not to mention a major part of many traditions, but recent studies have found that the consumption of meat is slowly decreasing. Those who have turned vegan, or who simply eat meat in moderation, praise the countless health benefits of doing so. Eating less meat has also proved its value to the environment, as the problems created by the meat industry diminish as the industry recedes. Counter-claims have also arisen, declaring that the trend is damaging the multi-billion-dollar meat industry and the economy. Where should we stand between the two sides?

Slowly replacing meat products with healthier options such as vegetables, whole grains and even seafood can improve your health immensely. The World Health Organization (WHO) released a report last year linking the consumption of red meat with certain types of cancer, stating that eating as little as 100 grams of meat daily can increase cancer risk by up to 20%. This statistic jolted people into awareness of the drawbacks. In certain countries, the vegan diet has become the new trend, with seemingly everyone raving about it on social media, as people caught wind of how replacing meat with healthier alternatives can aid weight loss. Currently there are more people suffering from obesity than from starvation, and nutritionists cite meat as one of the causes. From this perspective, consuming less meat would do us all a favour.

Even with such statistics backing up the claimed positives of eating less meat, there are those who question them. If we remove meat from our diets, what happens to our bodies with the decreased protein and iron intake? One of the most common disadvantages of not eating enough meat is iron deficiency, which can drastically affect our immune systems and the speed at which our bodies function. It cannot be denied that meat supplies us with a dense source of protein, but studies from Harvard Medical School suggest that a healthy diet of leafy greens, mushrooms, legumes and other iron-rich plant foods can easily compensate for the nutrients meat provides. It is simply a balancing act.

It comes as no surprise that the multi-billion-dollar meat industry is damaging our ecosystem, tearing down acres upon acres of woodland and driving up carbon emissions. Agricultural emissions alone account for around 30% of global emissions. Producing just 1 kg of beef requires around 15,000 litres of water and releases up to 30 kg of carbon dioxide, a greenhouse gas. Now imagine this multiplied across the thousands upon thousands of kilograms of meat produced. Livestock production is humankind’s number one use of land, making it the largest contributor to deforestation on the planet. In Brazil, large-scale commercial beef farming is the cause of around 70% of cleared forest in the Amazon. Wasted water, destroyed ecosystems and worsening climate change are all effects of this unsustainable industry. Many would agree to consuming less meat in order to lessen the harm being done to the planet.
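To get a feel for the scale of those per-kilogram figures, here is a quick back-of-the-envelope sketch in Python. The per-kilogram numbers are the ones quoted above; the one-tonne example quantity is purely hypothetical, chosen for illustration:

```python
# Per-kilogram footprint of beef, using the figures quoted in the text.
WATER_L_PER_KG = 15_000   # litres of water per kg of beef
CO2_KG_PER_KG = 30        # kg of CO2 per kg of beef

def beef_footprint(kg_beef):
    """Scale the per-kg water and CO2 figures up to a given amount of beef."""
    return kg_beef * WATER_L_PER_KG, kg_beef * CO2_KG_PER_KG

# Hypothetical example: one tonne (1,000 kg) of beef.
water_l, co2_kg = beef_footprint(1_000)
print(f"{water_l:,} litres of water, {co2_kg:,} kg of CO2")
# 15,000,000 litres of water, 30,000 kg of CO2
```

Even a single tonne, a tiny fraction of global output, corresponds to fifteen million litres of water and thirty tonnes of CO2 on these figures.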

In the US alone, the meat industry is worth more than 800 billion dollars annually and provides over 6 million jobs. Huge numbers of people see the colossal benefit of cutting down on meat, but what would that mean for the economy, and for the millions of people who rely on it for their wages? It is true that a sudden shift towards consuming less would affect a country’s gross domestic product as well as its employment rate, but only in the short term. Many protest against cutting down on meat for these reasons; but is avoiding a short-term shock worth the long-term risk? Meanwhile, a whole new industry has been booming in the market: vegan alternatives. This relatively new category of food products has brought a whole new economy to the table, providing more jobs, with higher wages and less gruelling working conditions.

Consuming less meat has more benefits than drawbacks, leading to a much healthier lifestyle and a cleaner environment for our planet and its inhabitants. If everyone on the planet were to eat meat in moderation, we would have lower rates of obesity and of certain types of cancer, not to mention less severe climate change. We live in a day and age where there are so many options available to replace meat in our diets, and with just a change in mindset and perspective, many more people can get on board with the change, realising the environmental and health benefits of eating less meat.

What is Earth Day, and do we really need it? Emma Hazelwood

Happy Earth Day!

Earth Day is the world’s largest environmental movement, celebrated by more than a billion people every year. It is a day dedicated to raising awareness of various environmental issues worldwide.

Earth Day was started in 1970. Its founder, Gaylord Nelson, had witnessed the appalling consequences of a massive oil spill in California in 1969. Inspired by the student anti-war movement in the US, he wanted a similar campaign for environmental protection, in the hope that politicians would have no choice but to start taking the conservation of our planet seriously.

It is believed that on the first Earth Day, 22nd April 1970, twenty million Americans took part in rallies across the country. It united different groups of activists, as well as people from all walks of life, and is often credited with launching the modern environmental movement. The demonstration led to the creation of the Environmental Protection Agency, and the passing of the Clean Air, Clean Water, and Endangered Species Acts.

Since then, the campaign has grown beyond what anyone could have predicted. An effort to make it go global in 1990 paved the way for the United Nations Earth Summit in 1992. With the 50th anniversary of Earth Day on the horizon, the campaign is now aiming to reignite the flame of environmental activism, in a bid to fight the rising atmosphere of cynicism and distrust surrounding climate change.

The Earth Day 2018 campaign is based on ending plastic pollution. Plastic doesn’t biodegrade, so it piles up in the environment, poisoning marine life and our water systems. More than eight million tonnes of plastic are dumped into our oceans every year, and it is predicted that by 2050 the oceans will contain more plastic than fish by weight. Plastic build-up results in animal entanglement, ingestion, and habitat disruption. This not only depletes fish stocks, but can result in the build-up of toxins in our food, leading to higher rates of cancer, birth defects, impaired immunity and many more health issues for humans. As is often the case with rising environmental issues, the communities which suffer the most from plastic pollution are often already vulnerable. With the rise of zero waste shops (and David Attenborough’s calls for no more plastic straws), plastic-free life is becoming more and more achievable.


Plastic washed ashore on a beach in San Francisco.

However, plastic pollution is just one of the issues facing our planet. The recent death of the last male Northern white rhino has rendered the species functionally extinct (though there are hopes that IVF may be able to bring the species back from the brink). Unfortunately, this is not an isolated issue, with 5,583 animal species considered to be critically endangered. Habitat loss due to human expansion and hunting are often major causes.

Another area for concern is the bleaching of coral reefs. We have already lost half of the world’s coral reefs, and it is predicted that 90% will be gone by 2050. The death of the organisms that inhabit reefs is linked to a rise in ocean temperature as a result of climate change. Bleaching happens when coral get so stressed by extreme temperature that they expel the tiny algae, known as zooxanthellae, which provide the coral with their food. A higher concentration of CO2 in the atmosphere also means that more is being dissolved into the ocean, causing ocean acidification. This, along with overfishing, is another problem for coral reefs, as it makes it harder for vital reef organisms to build their exoskeletons. Coral reefs are often described as “underwater rainforests” because of the vast numbers of species that rely on them. Their death is not only a tragedy for biodiversity, but could actually make global warming even worse, as they also produce oxygen. They are also important for tourism, bringing in billions of dollars in revenue in some places.


A bleached coral reef, which was once teeming with life and full of vibrant colours.

This Earth Day, there is a drive to cut down on plastic use. You can calculate your plastic pollution here: https://www.earthday.org/plastic-calculator/, and there is a guide to living plastic free here: https://myplasticfreelife.com/plasticfreeguide/. Hopefully Earth Day 2018 will convince both governments and individuals to start making meaningful and much needed changes to cut down on plastic pollution. However, with so many ecosystems being threatened by human activity, it is vital that we start to consider the wider effects of our lifestyle every day.

More information on Earth Day’s End Plastic Pollution Campaign: https://www.earthday.org/campaigns/plastics-campaign/

For more information on plastic build up in oceans: https://plasticoceans.org/the-facts/

More information on endangered species: http://www.bbc.co.uk/news/science-environment-43475872

More information on coral reef bleaching: https://www.independent.co.uk/environment/environment-90-percent-coral-reefs-die-2050-climate-change-bleaching-pollution-a7626911.html

A few zero waste shops will soon be opening in Sheffield – one in our very own SU https://www.facebook.com/OurZeroWasteShop/ and one in Crookes https://www.unwrappedshop.co.uk/


Shoot for the Moon: Would the USA’s Cold War plan to blow it out of our night sky really work? Fiona McBride

In 1958 – 60 years ago, and just a year after the Soviet Union’s Sputnik became the first object launched into space by humankind – the government of the USA began work on a secret plan to assert its dominance on the stage of world power: by blowing up the moon. Known covertly as “Project A119”, the intention was to make the military might of the USA abundantly clear to everyone on earth.

Of course, the first question this raises is: would such a show of force actually be possible? Though it may look small from down here, and is supposedly made of green cheese, the moon is actually a seventy-trillion-megaton rock located four hundred thousand kilometres away. That’s quite a big thing to blow up, and a significant distance to send explosives. The explosion would have to have enough energy not only to break the moon into pieces, but also to send them far enough away from one another that their gravitational fields – the attractive forces that act between all objects – couldn’t pull them back together. Otherwise, the single lump of geological matter we call our moon would simply be replaced by a pile of lunar rubble. It is estimated that such an explosion would be equivalent to the detonation of thirty trillion megatons of TNT; given that the Tsar Bomba – the most powerful nuclear bomb ever built – had an explosive power of fifty megatons, blowing up the moon would require six hundred billion of these. Humanity has neither the uranium supplies to build such a bomb, nor the rocket technology to get it there.
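Those figures can be sanity-checked with the textbook formula for the gravitational binding energy of a uniform sphere, U = 3GM²/5R. A quick sketch in Python, using standard values for the moon’s mass and radius (these constants are not from the article itself):

```python
# Gravitational binding energy of a uniform sphere: U = 3 G M^2 / (5 R).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.35e22     # mass of the moon, kg (~seventy trillion megatons)
R_MOON = 1.737e6     # radius of the moon, m

U = 3 * G * M_MOON**2 / (5 * R_MOON)        # ~1.2e29 joules

MEGATON_TNT = 4.184e15                      # joules per megaton of TNT
TSAR_BOMBA = 50                             # Tsar Bomba yield, megatons

yield_megatons = U / MEGATON_TNT            # ~3e13: thirty trillion megatons
bombs_needed = yield_megatons / TSAR_BOMBA  # ~6e11: six hundred billion bombs
```

The result lands almost exactly on the article’s figures: roughly 3 × 10¹³ megatons of TNT, or about six hundred billion Tsar Bombas.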

Other options include creating a “moon quake” to split apart the internal structure of the rock; this would need to be equivalent to a 16.5 on the Richter scale. The most violent earthquake ever recorded read just 9.5 on the Richter scale, so it’s unlikely that such a quake could be artificially produced on the moon. Alternatively, the moon could be zapped with a giant laser; however, this would need to deliver, instantaneously, the same amount of energy as the sun outputs every six minutes. Humans simply don’t have the resources to power such a thing.
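Both of those comparisons can be checked with two standard facts: each whole step on the Richter scale corresponds to roughly 10^1.5 ≈ 32 times more energy, and the sun’s total power output is about 3.8 × 10²⁶ watts. A sketch (the magnitudes 16.5 and 9.5 are the article’s; the solar luminosity is a textbook value):

```python
# Each whole Richter magnitude corresponds to ~10^1.5 times more energy,
# so the gap between the largest recorded quake (9.5) and a moon-splitting
# quake (16.5) is enormous.
energy_ratio = 10 ** (1.5 * (16.5 - 9.5))   # ~3e10: tens of billions of times
                                            # the energy of the 9.5 quake

SUN_POWER = 3.828e26                        # solar luminosity, watts
laser_energy = SUN_POWER * 6 * 60           # ~1.4e29 J in six minutes,
                                            # comparable to the moon's binding
                                            # energy of ~1.2e29 J
```

Six minutes of total solar output does indeed come out around 10²⁹ joules, the same order as the moon’s gravitational binding energy, which is why the laser idea is just as hopeless as the bomb.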

It seems, therefore, that blowing up the moon to assert dominance over the space and nuclear spheres wasn’t really an option for the USA in 1958 – or even sixty years later – due to a lack of both technology and resources. However, a lesser idea was also considered: blowing a large crater in the moon, producing a giant explosion to demonstrate the might of the USA to the world, and leaving behind a crater visible from earth to remind everyone of it forevermore. This, too, was dismissed in 1959; the reasons are not clear, but perhaps those in charge of the project realised how utterly ridiculous their own idea sounded.

But let’s just take a step back for a moment and imagine that exploding the moon were possible: what would the consequences be here on earth? Would lumps of moonrock kill us all? What would life be like on a moonless planet?

So the moon has exploded. The first thing most humans notice is a big, bright cloud spreading out through the sky where the moon used to be: the light from the explosion illuminating the moon debris. Dust then covers the sky for a while, dimming daylight and making air travel impossible for a few months. Our seas and lakes are still tidal – the sun’s gravitational pull on the earth also contributes to the tides – but with no moon to alternately reinforce and oppose that pull, there will be no spring or neap tides: the water will rise to about one-quarter of the height of a spring tide and return to the same lower level each day. Fragments of moon start to fall to earth; some burn up as they enter our atmosphere; others hit the ground and wreak havoc where they land, though it is unlikely that this would be catastrophic for humanity, as they would be moving slowly in comparison with other astronomical objects that fall to earth, such as asteroids.

Once the dust clouds have cleared, the next noticeable thing is a lot more stars. The moon is by far the brightest object in the night sky, so with it out of the way, nighttime will be darker and the stars much brighter by comparison. One – or more – smaller ‘moon replacements’ may also appear in the sky, if the explosion leaves some larger chunks of rock as well as debris and dust. Of course, this debris and dust continues to rain down on the earth whenever a piece falls out of orbit.

Only after the majority of this debris has cleared – in perhaps a few thousand years – is the next major effect noticeable by humans: the earth will tip over. Gravitational interaction between the earth and the moon is what currently prevents this; without it, the earth will tip on its axis, causing the poles to melt and an ice age to occur every few thousand years on whichever part of the planet is furthest from the sun at that point.

So, although exploding the moon isn’t really possible – and certainly wasn’t in the 1950s – it wouldn’t have utterly catastrophic consequences for the earth, just significant change. However, as a show of force, it still seems somewhat excessive.