The Northern Lights – Naomi Brown

At the beginning of November, residents of Scotland and Northern England were
able to view a dazzling light show in the sky: the Northern Lights. But what
causes them, and how can we predict when they will appear again?
The Northern Lights are a natural phenomenon in which bright, coloured lights appear across the night sky in sheets or bands. They are generally seen close to the magnetic poles, in an area called the ‘auroral zone’. The best time to spot the aurora is when the Earth’s magnetic pole is between the sun and the observer’s location. This is called magnetic midnight.
The Northern Lights are caused by charged particles released from the sun’s atmosphere colliding with gaseous particles in the Earth’s atmosphere. The charged particles are carried towards Earth by the solar wind, and most are deflected by the Earth’s magnetic field. At the poles, however, the field is weaker, allowing a few particles to enter the atmosphere. This is why auroras are more likely to be seen close to the magnetic poles, making Iceland and Northern Scandinavia common destinations for travellers in search of the Northern Lights.
The colours of the Northern Lights depend on the type of gas molecule involved in the collisions. Green, one of the most common colours, is caused by collisions with oxygen molecules, whereas blue or purple auroras are caused by nitrogen molecules.
Why can the Northern Lights sometimes be seen in places further from the Earth’s poles, such as the UK? The answer is the spread of the auroral oval during a geomagnetic storm. Geomagnetic storms are more common after the maximum in the solar cycle, a repeating 11-year cycle. The most recent solar maximum was in 2013.
The Northern Lights are notoriously unpredictable. There are many forecast apps available, such as “My Aurora Forecast”. One of the best websites for checking when the auroras will be visible from where you are is the Aurora Service. The site gives the Kp value predicted for the next hour, using solar-activity data obtained from a NASA spacecraft, ACE. ACE orbits 1.5 million kilometres from Earth: the prime position from which to monitor the solar wind.
A common way to represent geomagnetic activity is the Kp index. Magnetic
observatories located all over the world use instruments to measure the largest
magnetic change every three hours. The recorded data from all these
observatories is averaged to generate Kp values, which range from 0 to 9. The larger the value, the more active the Earth’s magnetic field is due to geomagnetic storms, and the further the auroral oval spreads. A Kp value above 4 indicates storm-level geomagnetic activity. These Kp values are useful in predicting when auroras will be visible: to see the aurora from the UK, the Kp value would have to be at least 6.
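Those thresholds are simple enough to apply in code. The sketch below is a minimal, hypothetical helper (the function name and wording are illustrative, not part of any official scale) that classifies a forecast Kp value using the figures quoted above: above 4 means storm-level activity, and at least 6 is needed for the UK.

```python
def aurora_outlook(kp: int) -> str:
    """Classify a forecast Kp value (0-9) using the thresholds quoted above."""
    if not 0 <= kp <= 9:
        raise ValueError("the Kp index runs from 0 to 9")
    if kp >= 6:
        return "storm: aurora potentially visible from the UK"
    if kp > 4:
        return "storm-level geomagnetic activity"
    return "quiet: aurora confined to the auroral zone"

print(aurora_outlook(6))  # storm: aurora potentially visible from the UK
print(aurora_outlook(3))  # quiet: aurora confined to the auroral zone
```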

For a great show, conditions are important. Clear, cloudless nights are best. It is also worth checking the moon’s cycle: the brightness of a full moon drowns out the light of the aurora.

Sheffield’s Giant Battery


Kirsty Broughton

A major step towards greener energy in the UK was taken last month with the opening of an industrial-scale ‘mega-battery’ site owned by E.ON in Sheffield.

The Sheffield site, located in Blackburn Meadows, is being hailed as the first of its kind in the UK. It can store or release energy at a rate of 10 MW – the equivalent of half a million phone batteries – and is contained in four 40-foot-long shipping containers. The batteries come from the next generation of battery energy storage, and can respond in less than a second to changes in energy output – ten times faster than previous models.

Such promising technology has naturally led to further investment, and the Sheffield site will soon be dwarfed by significantly larger plants. Centrica (the owner of British Gas) and EDF Energy are both in the process of creating 49 MW facilities in Cumbria and Nottinghamshire respectively.

When more energy is being put out into the national grid than is being used by consumers, the batteries will take in the excess power and store it. Then, during periods when consumers are using more energy than the grid can provide, the batteries can release this excess energy into the grid, to ensure that everyone has access to power.

This is especially important given that the UK energy mix contains an ever-increasing proportion of intermittent sources, such as wind and solar power. In June this year, 70% of electricity was produced from nuclear, wind and solar sources. For the government to hit legally binding carbon-cutting targets this needs to become the standard for electricity production, but storage is likely to be necessary to balance the intermittency of renewable supplies.

To meet these targets the government introduced a ‘capacity market’ – a subsidy scheme integral to the shake-up of the electricity market. It is designed to ensure energy security, particularly during times of high demand such as the winter months. The scheme has a pot of £65.9 million, which it will divide between energy suppliers that can guarantee a constant supply. It may sound surprising that, in the age of austerity, a government ever interested in penny-pinching wants to hand out money. However, it is estimated that the Sheffield site alone could save £200 million over the next four years by increasing energy efficiency. That makes the £3.89 million awarded to E.ON a worthy investment.

E.ON has seen its share price in Germany fall dramatically as it is undercut by abundant, cheaper renewable energy from other suppliers. Germany is often hailed as a world leader in renewable energy production: during one weekend in May this year, 85% of its energy production came from renewable sources. E.ON in the UK was following the same path, as in recent years UK profits stagnated and trade fell by up to 9%. Only in March this year did profits begin to pick up again, thanks to the company’s shift away from fossil fuels and towards green energy production. The battery site in Sheffield is an excellent next step in this major shift.

Black Holes and Gravitational Waves


Alexander Marks

On 14th August 2017, the fourth set of gravitational waves was detected. Although the first waves were recorded in 2015 by LIGO (the Laser Interferometer Gravitational-Wave Observatory) and announced in early 2016, this time three different observatories detected the gravitational waves. A pair of black holes caused these waves by violently merging together.

Three scientists behind LIGO – Rainer Weiss, Kip Thorne and Barry Barish – have just been awarded the Nobel Prize in Physics for the first detection of gravitational waves. It was these three who designed and ran the two LIGO observatories, situated in Washington and Louisiana. In the most recent detection, a new observatory in Italy called Virgo also measured the same waves.

Why are three detections better than two? Three detections allow scientists to pinpoint the origin of the signals about 20 times more precisely than two, which is key for follow-up observations. They also provide more information about the objects that made the waves, such as the angle at which they are tilted relative to Earth.

Gravitational waves were first predicted by Einstein’s theory of general relativity back in 1916. This ground-breaking theory combines space and time to form the space-time continuum. It states that any object with mass warps the space-time continuum: the more massive the object, the bigger the warp. It is these warps in space-time that cause gravity.

The famous equations of general relativity are incredibly hard to solve, and finding solutions requires supercomputers. One of the solutions predicts gravitational waves.

Gravitational waves are produced by all objects as they move through the space-time continuum – even a tiny snail moving through the grass produces them. They ripple through space-time much like the ripples caused by throwing a stone into a still pond. Gravitational waves were the last part of Einstein’s theory to be proven.

The equations predict that gravitational waves travel at the speed of light and carry information about the objects causing them. But most gravitational waves are far too weak to be measured; it takes a very massive object to create ripples in space-time large enough to detect.

Enter black holes and neutron stars. Black holes are among the densest objects in the known universe: their gravity is so strong that not even light can escape. When two black holes orbit each other very quickly and eventually merge, they create immense distortions in space-time that can be measured on Earth.

By measuring the gravitational waves and applying Einstein’s theory of relativity, scientists can learn a lot about the darkest parts of our universe, such as the mass and rotation of the objects involved and how powerful the event was.

Neutron stars – the remains of stars that have collapsed in on themselves – are also very massive and could in theory be detected as well. No gravitational waves from neutron stars have yet been observed, but there is promise that they soon will be.

Even the largest ripples in space-time are very difficult to measure, and LIGO and Virgo are carefully designed to detect them. Each observatory is shaped like an L, and each arm of the L is a long, vacuum-sealed tunnel. At the end of each tunnel there is a mirror, with a beam splitter where the two arms meet. (A beam splitter divides laser light in two and sends it in different directions.)

Laser beams are sent down both tunnels at the same time; without gravitational waves, both would return simultaneously. When a gravitational wave passes, space-time is warped in such a way that one mirror moves closer while the other moves further away. The beams then return at different times, allowing scientists to measure how far the mirrors moved. This displacement is tiny – about 1,000 times smaller than a proton.

This means the bigger the gravitational waves, the larger the time gap between the returning beams. As the time gaps are so small, only very massive objects can produce waves big enough to be detected.
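The scale of that displacement can be checked with a back-of-envelope calculation. Assuming LIGO’s 4 km arm length and a proton radius of roughly 0.84 femtometres (standard values, not given in the article), the quoted factor of 1,000 implies a fractional change in arm length, or strain, of around 10⁻²²:

```python
# Back-of-envelope scale check for the mirror displacement described above.
proton_radius_m = 0.84e-15               # approximate proton charge radius
mirror_shift_m = proton_radius_m / 1000  # "1,000 times smaller than a proton"
arm_length_m = 4_000.0                   # each LIGO arm is 4 km long

strain = mirror_shift_m / arm_length_m   # fractional change in arm length
print(f"mirror displacement ~ {mirror_shift_m:.1e} m")  # ~8.4e-19 m
print(f"strain ~ {strain:.1e}")                          # ~2.1e-22
```

A strain of this order is consistent with the sensitivities published for the LIGO detections.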


The black holes that created the most recently detected gravitational waves had masses of 25 and 31 times that of our sun. They were orbiting each other 1.8 billion light years away and merged into a black hole of 53 solar masses – heavier than stellar-mass black holes were expected to be.


This is the third black hole found to be bigger than expected. Black holes of this size appear to be more common than originally thought, and the rate at which such mergers occur should soon be pinned down.


The observatories are currently being upgraded to become even more sensitive. Scientists hope that when they are switched back on in autumn 2018 they will detect up to ten of these events each year, and perhaps gravitational waves from neutron stars as well.


With further observatories planned in Japan and India, we can expect to find new phenomena in the universe – perhaps some once thought impossible.

One ticket for the Enterprise please! Has China successfully created a sustainable EM Drive?


Shannon Greaves

Space is awesome. So awesome that it has had the global powers locked in a space race since before America took that one giant leap for mankind. Everyone is eager to explore new planets and solar systems, and to travel faster than light (FTL) in their very own warp-drive Enterprise. Well, to all of us astronauts at heart, that day may be coming sooner rather than later: China has released a video claiming not only to have a working EM drive, but to have one already in space on its space laboratory ‘Tiangong-2’! Prior to this news, China was reported to have only been studying the EM drive, with no reports of a working device. Both the UK and NASA have also been working on EM drives, with a mixture of breakthroughs and problems. But before we get into the thick of things, let us quickly review some important information about the EM Drive.

The electromagnetic drive (EM Drive), scientifically known as a radio-frequency resonant cavity thruster, bounces microwaves around inside an asymmetrically shaped cavity, which supposedly produces thrust – much as if a box you were inside began to accelerate when pushed from within. Put simply, an EM drive would create thrust without the need for a propellant. Sadly, an EM Drive is not a warp drive as seen in Star Trek. Unlike the EM Drive, which creates thrust, a warp drive would enable FTL travel by warping the fabric of space and time around a ship, allowing it to travel less distance.

Still, a working EM Drive would mean a whole bunch of good things for us, including much faster travel through space (just maybe not FTL level). A fully functional EM drive would remove the need for heavy propellants such as rocket fuel on board, and could cut a trip to Mars to between 70 and 72 days, compared with the average of 270 days it takes today. Even more impressively, according to NASA, an EM drive could take us to our nearest star system in just 92 years! On top of faster space travel, the EM drive could bring cheaper space travel, solar power stations with cheap solar-harvesting satellites beaming power back to Earth, and generally a greener, more convenient energy source for travel.

So, what are we waiting for!? Well, before you go and buy your spacesuit and tickets to China, there is much debate over whether China’s claims and experiments with the EM Drive are true. So far, all China has given us is a press-conference announcement and a government-sponsored Chinese newspaper report (and China doesn’t have the best track record for trustworthy research). In the press conference they also said further experiments were needed to increase the amount of thrust produced. What we need is a peer-reviewed paper, which would not only provide conclusive evidence for their EM Drive results but also confirm the reliability of their claim to have tested it in space. China’s claims do have some support, however: China says it has produced results similar to those of NASA’s EM Drive experiments. NASA has been working equally hard on the EM Drive, has produced several models that generate thrust, and has finally managed to publish a peer-reviewed paper on an EM Drive producing small amounts of thrust in a vacuum. This lends a little backup to China’s claim of an EM Drive in space.

Arguably, the EM Drive’s biggest contribution to science is also its biggest problem, and the reason many experts contest it. The very physics of an EM Drive not requiring a propellant violates Newton’s third law of motion: “for every action there is an equal and opposite reaction”. So, while a working EM Drive would change the basis of how we understand physics, it also means that no one can explain how it works. Without such an explanation, the consensus is that we can’t possibly use and sustain the EM Drive.

So, what happens now? We will have to wait and see whether China releases that peer-reviewed paper, but even without it we have made a lot of progress towards our goal of space travel. The combined efforts of China, NASA and other national institutions have brought the EM drive out of the theoretical and closer to the possible. Some theories have even been proposed to explain how the EM Drive works, with “quantised inertia” suggested as responsible for creating the thrust. If true, this would mean the EM Drive does not completely violate the conservation of momentum, but adapts it. If you’re interested in the application of “quantised inertia” to the EM Drive, look up the work of Dr Mike McCulloch. And for those of you wanting that FTL warp drive, there is some hope! NASA engineers have reported on forums that when they fired lasers into the EM Drive’s resonance chamber, some of the beams appeared to travel faster than the speed of light. This suggests the EM Drive might be able to produce the “warp bubbles” needed for a warp drive! NASA has even been designing a warp-drive ship, if you want to check that out too! Now, I’m off to watch some Star Trek, but keep an eye out for the announcement of a reality-TV version!

Moore’s Law: Will it stop?


Harpreet Thandi

In 1965, Gordon E. Moore, an American electrical engineer, wrote an article in Electronics magazine. It suggested that the number of transistors on a chip would double every two years. His prediction was later restated as processing power doubling every two years, and is now known as Moore’s Law. He went on to co-found Intel, one of the biggest makers of the microprocessors that determine the speed of laptops and PCs.
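A doubling every two years compounds quickly. This small sketch (illustrative, not from the article) computes the growth factor the law predicts over a given span of years:

```python
def moore_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Growth factor predicted by one doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Over the 50 years since the original 1965 article: 2**25 doublings' worth.
print(f"{moore_factor(50):,.0f}x denser")  # 33,554,432x denser
```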


This law has wider implications than simple processing power. Devices have become smaller and smaller: we went from large mainframes to smartphones and embedded processors. Shrinking chips further has, however, become an ever more expensive process.

In the larger scheme of things, this two-year cadence is the underlying model for technology. It has given us better phones, more lifelike computer games and the quicker computers we use every day. Maybe this came from goal-setting – we must make processing power double every two years – or maybe it was just natural progression. Either way, Brian Krzanich, chief executive of Intel, has suggested this growth could be coming to an end, though he still supports the target: “we’ll always strive to get back to two years”. The firm still disputes the death of Moore’s Law, even though future processors won’t be made so quickly. Technology users might notice that their new phone or laptop is only slightly better than the older model. There is a drastic need for Moore’s Law to be met again, as this speed of development leads to more efficient processors and saves a great deal of money.

To keep up with Moore’s Law there have been some major compromises. We are now at a crossroads: microprocessors keep getting smaller, but they are reaching a fundamental limit, below which unwanted quantum effects take over. “The number of transistors you can get into one space is limited, because you’re getting down to things that are approaching the size of an atom.”

A problem that started in the early 2000s is overheating. As devices have shrunk, the electrons are more constricted and resistance in the circuits rises dramatically, creating the heating problem in phones and laptops. To counteract this, ‘clock rates’ – the speed at which microprocessors run – have not increased since 2004. The second issue is that we are reaching a limit on the size of a single chip. The solution is to have multiple processors instead of one, which means rewriting various programs and software to accommodate the change. As components get smaller they must also become much more robust.

Four and eight cores are now standard in our laptops. “You can have the same output with four cores going at 250 megahertz as one going at 1 gigahertz,” said Paolo Gargini, chair of the road-mapping organisation. Lowering the clock speed of the processors in this way tackles both problems at once. More innovations are being pursued, but many are simply too expensive to be effective.
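Gargini’s equivalence rests on a simple assumption: for a perfectly parallel workload, total throughput scales with cores times clock speed. The one-liner below is a sketch of that idealisation (real speed-ups are limited by how parallel the software actually is):

```python
def throughput_ghz(cores: int, clock_ghz: float) -> float:
    """Idealised aggregate throughput: cores x clock, assuming perfect parallelism."""
    return cores * clock_ghz

# Four cores at 250 MHz match one core at 1 GHz:
print(throughput_ghz(4, 0.25) == throughput_ghz(1, 1.0))  # True
```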

According to the International Technology Roadmap for Semiconductors (ITRS), which has predicted the future of computing since 1993, transistors will stop getting smaller by 2021. After the hype around graphene and carbon nanotubes in 2011, the ITRS suggested it would take 10-15 years before these could be combined with logic devices and chips; germanium and III-V semiconductors are 5-10 years away. The new reality is that transistors will stop shrinking, moving the industry away from Moore’s Law.

Intel is struggling to make new breakthroughs; if these problems are not resolved, it will fall off the two-year doubling target, and there will be strong competition from rivals. IBM has already started challenging Intel with a processor seven nanometres wide, holding 20 billion transistors and four times today’s power, due to be available in 2017. “It’s a bit like oil exploration: we’ve had all the stuff that’s easy to get at, and now it’s getting harder, … we may find that there’s some discovery in the next decade that takes us in a completely different direction,” said Andrew Herbert, who is leading a reconstruction of early British computers at the National Museum of Computing.

There is also a possible future in quantum computing. This works with qubits – quantum bits that, thanks to the nature of quantum mechanics, can occupy multiple states at once. A quantum computer could work on multiple possibilities simultaneously and produce solutions in days to problems that would traditionally take millions of years.

In May 2015 Moore spoke in San Francisco at an event celebrating the 50th anniversary of his article. He said, “the original prediction was to look at 10 years…The fact that something similar is going on for 50 years is truly amazing…someday it has to stop. No exponential like this goes on forever.” At the time of the original article, it was completely unknown whether the number of transistors in a computer chip really would keep doubling. It has continued far longer than expected and is now part of popular culture: Moore’s Law has become the underlying standard for the future that society has lived up to and driven itself to meet.


Understanding the Four Forces

Harpreet Thandi

We want to understand the world around us. There are four fundamental forces in our universe: the weak nuclear force, the strong force, gravity, and the electromagnetic force. They all act very differently around us.


The weak force is responsible for processes such as radioactive decay, and governs particles such as muons and other short-lived leptons. It is the third-strongest force, stronger only than gravity, and counteracts the strong force. Its range is just 10⁻¹⁸ m, smaller than a nucleus (10⁻¹⁵ m). It is carried by massive bosons, which exist only fleetingly. This seems like a problem; however, thanks to Heisenberg’s uncertainty principle, it is possible to borrow a large amount of energy for a short time.

One way to picture this is with numbers that must multiply to a fixed value, say 9 (for the uncertainty principle, the fixed value is ℏ/2 or higher). We can of course do 3 × 3; but if one of the numbers is much bigger, say 3,000,000, then the other must be 0.000003 to compensate: 3,000,000 × 0.000003 = 9, as before. In the same way, a large energy is allowed only for a very short time.


The strong force binds the nucleus together. It has the second-shortest range, 10⁻¹⁵ m, and acts equally on the quarks inside protons and neutrons, “gluing” them together. The neutrons help stabilise the atom, and when nucleons get too close this force keeps them apart – like a sad romance. An analogy often given involves sellotape: you feel nothing until you get close, and then it sticks (although at the very shortest range the strong force actually repels). These two forces act inside the atom. Their outcome can be seen on the periodic table: because the range of the strong force is the size of a nucleus, it stops atoms from getting too big, and the larger atoms decay via the weak force.

Gravitation binds the universe together, keeps the planets in orbit and people grounded (well, some of us!!), and acts on anything that has mass, like Newton’s apple. In Einstein’s theory of general relativity, gravity is a distortion of space and time. It is the weakest of the forces, but has an infinite range and is thought to be carried by gravitons – which, sadly, have never been observed.

Magnetism and electricity were once thought of as separate phenomena; however, observation and mathematical reasoning showed them to be a single force. Famously, in 1820 Hans Christian Ørsted saw a compass needle deflected by a battery cable, and James Clerk Maxwell later showed that light consists of electric and magnetic waves oscillating perpendicular to each other.

Electromagnetism binds atoms together and acts on anything in the universe that has charge, e.g. protons, electrons and muons. It is the second-strongest force and has an infinite range, being carried by photons. A fridge magnet gives a sense of its strength: it is many orders of magnitude stronger than gravity – something to think about. These two forces act outside of the atom.

For the last 30 years of his life, Einstein tried without success to unify gravitation and electromagnetism. Unification seems plausible given their similarities: both have infinite range, and both are the most visible to mankind. The pursuit was driven by a need to join together things which exist together; in a 1923 lecture he stated, “The intellect seeking after an integrated theory cannot rest content with the assumption that there exist two distinct fields totally independent of each other by their nature”. Back in the early 1900s only protons, electrons and these two forces were known. Einstein rejected the new quantum mechanics, declaring that “God does not play dice”, and over time became an outsider to mainstream physics. Rather than using the physical intuition and “thought experiments” that birthed most of his great works, he became absorbed in purely mathematical understanding. Michio Kaku, professor of theoretical physics at the City College of New York, considers Einstein to have been thinking far ahead of his time: most of the physics he would have needed as a foundation had not yet been discovered.

Physicists today have taken up this unification challenge. The leading idea is string theory, a largely mathematical quest requiring 10 dimensions to describe the physics, extending the five-dimensional unification Einstein explored. It is hard to test experimentally, but researchers are constantly working on translating it into something observable. It presents a very different, hard-to-imagine view of our universe, and we must hope its mathematical predictions can be translated into the real world.


Alan Turing – The Father of Computer Science

*Image reproduced with the permission of James Evans Illustration.

Sintija Jurkevica

Could a computer ever be able to enjoy strawberries and cream? Could a computer ever make a human fall in love with it? These are the types of questions Alan Turing (1912-1954) might whole-heartedly ask one at a dinner party, unfolding the eccentricity of the genius himself. By profession, Turing was a distinguished British mathematician, logician and philosopher who pioneered the field of computer science, whilst his persona has been characterised as petulant and reserved, concealing a world of innocence and a passion for nature and truth.

To celebrate the 50-year milestone since the Sexual Offences Act 1967, this article highlights some of Turing’s most influential achievements, followed by a short account of his personal life as a man who found himself attracted to other men at a time when same-sex attraction was illegal.

    #1: The Universal Turing Machine

Imagine a world in which computation, crudely defined as mathematical calculation, is carried out only by humans. This almost begs the seemingly obvious question: could a physical machine be engineered to carry out simple calculations? And yet, at the technological limits of the early 20th century, this was not so obvious. Turing was fascinated by the possibility of building such a machine, and in 1936 he conceptualised a mathematical model of a computer, named the Turing Machine.

The Turing Machine was conceived as an infinitely long paper tape divided into squares, with erasable symbols written on it acting as storable memory. The symbols on the tape would be recognised, printed or erased by a read/write head. Given an instruction as simple as calculating 2 + 2, the machine would read the symbols one at a time and alter them according to a fixed set of rules until the calculation was finished – re-reading the tape until it arrives at the answer 4.

Whilst each Turing Machine can only follow a single set of rules – a single program – a Universal Turing Machine can in principle compute any program once its set of instructions has been changed, or re-programmed. This concept of a universal, programmable computer laid the foundation of the modern theory of computation, in which a single machine can interpret and obey any program, just as a standard digital computer does in essence. Only nine years later did electronic technology evolve far enough to turn Turing’s mathematical concepts and logical ideas into practical engineering, demonstrating the feasibility and usefulness of such a device.
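The read-alter-move cycle described above is short enough to simulate directly. The sketch below is a hypothetical illustration, not Turing’s own notation: a tiny machine whose rule table flips every bit on the tape and then halts at the first blank. The same `run_turing_machine` loop would execute any other rule table, which is the universality idea in miniature.

```python
def run_turing_machine(tape, rules, state="start", blank=" "):
    """Execute a rule table of the form (state, symbol) -> (write, move, next_state)."""
    cells = list(tape)
    head = 0
    while state != "halt":
        if head == len(cells):          # extend the "infinite" tape on demand
            cells.append(blank)
        write, move, state = rules[(state, cells[head])]
        cells[head] = write             # alter the current square
        head += 1 if move == "R" else -1  # move the read/write head
    return "".join(cells).strip()

# Example program: flip every bit, halt at the first blank square.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
print(run_turing_machine("0110", flip_bits))  # 1001
```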

Upon closer philosophical enquiry, one realises that Turing’s argument for building the UTM connects logical instruction, something regarded as cognitive, with the materiality of a physical machine; this is arguably Turing’s most significant legacy to the generations after him. Throughout his lifetime, Turing related his mathematical work to the functioning of the mind: he regarded the building of a UTM as “building a brain”, and wrote an influential philosophical paper, Computing Machinery and Intelligence, that has inspired the field of Artificial Intelligence.

    #2: Cracking the Unbreakable Enigma

During the Second World War, Turing worked at Bletchley Park, the British cryptanalytic headquarters. There he designed and helped to build a functioning decryption system, the Turing-Welchman “Bombe”, which initially read the signals of the German Luftwaffe air force. The codes generated by the German “Enigma” machine used in naval communications had been deemed impossible to decrypt, yet Turing cracked them in 1939. His section, ‘Hut 8’, deciphered naval and U-boat messages on an industrial scale, and its work has been argued to have contributed to the Allied victory over the Axis.

   #3: Work on Non-Linear Dynamic Theory

During his childhood, Turing was fascinated by nature and showed a curious philosophical bent, exercising his ability to connect seemingly unrelated concepts. At school he made degree-level notes on the theory of relativity, and during his undergraduate years at Cambridge he pondered whether quantum-mechanical theory could explain the relationship between mind and physical matter.

In his later years, working at Manchester University, Turing used the computers developed there to explain universal patterns in nature through mathematics, publishing another classic paper, ‘The Chemical Basis of Morphogenesis’, in 1952. His theory of growth and form in biology explains how so-called Turing patterns, such as leopard spots and the spirals of snail shells, emerge from an initially uniform mass of matter.

   Turing’s Relationships

It was during his years at a boarding school in Dorset that he found himself drawn to another able student, Christopher Morcom, who inspired young Alan to communicate more and pursue an academic path. Their intellectual companionship left a significant imprint on Turing after Morcom’s sudden death from tuberculosis, inspiring him to examine the problem of mind and matter throughout his lifetime.

It was around his undergraduate years at Cambridge that Turing realised his attraction to men was a significant part of his identity, as he sought intimacy with an occasional lover, James Atkins, then a fellow mathematician. Only over the years did he become more outspoken about his sexuality, leaving sexual conformity behind him. Curiously, while working at Bletchley Park, Turing proposed to one of his female colleagues, Joan Clarke, who accepted the arrangement; however, Turing retracted the proposal after informing her of his true feelings.

On 31st March 1952, Turing was arrested and tried for sexual indecency after police learnt of his intimacy with a young man from Manchester. As a man who honoured the truth, Turing would not deny his “illegal acts”, but admitted to no wrongdoing. As a severe consequence, Turing chose to undergo a year-long hormonal treatment (in essence, chemical castration) over a prison sentence. In the light of Turing’s “indecency”, his security clearance was revoked, ending his ongoing work with the government and leaving him, a man holding highly classified information, to endure intrusive police scrutiny.

Turing was found dead of cyanide poisoning in 1954, apparently administered via an apple. The coroner’s verdict was suicide.

Throughout his life, not only did Turing display exceptionally profound mathematical and logical reasoning, but his curiosity about nature allowed him to establish links between seemingly unrelated topics and lay the first solid foundations of computer science. Without Turing’s contributions, it would have taken another prodigy, and an unknowable amount of time, to pioneer the age of computing on which today’s smart devices depend.

Alan Turing was a man who transformed, and continues to transform, the world, regardless of his sexual preference.