Sheffield’s Giant Battery


Kirsty Broughton

A major step towards greener energy in the UK was taken last month with the opening of an industrial-scale ‘mega-battery’ site owned by E.ON in Sheffield.

The Sheffield site, located at Blackburn Meadows, is being hailed as the first of its kind in the UK. It can store or release energy at a rate of 10 MW – roughly the equivalent of half a million phone batteries – and is housed in four 40-foot-long shipping containers. The batteries belong to the next generation of battery energy storage and can respond to changes in energy output in less than a second – ten times faster than previous models.
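The phone-battery comparison is easiest to sanity-check with a quick calculation. The figures below are assumptions for illustration (the article quotes the 10 MW power rating, not a storage duration): a typical phone battery of around 10 Wh, and a discharge duration of half an hour.

```python
# Back-of-envelope check of the "half a million phone batteries" figure.
# Assumed values (not from E.ON): ~0.5 h discharge at full power, and a
# typical phone battery of ~10 Wh.
site_power_w = 10e6          # 10 MW rating of the Blackburn Meadows site
assumed_duration_h = 0.5     # assumed time the site can sustain that rate
phone_battery_wh = 10.0      # assumed capacity of one phone battery

site_energy_wh = site_power_w * assumed_duration_h
print(site_energy_wh / phone_battery_wh)  # -> 500000.0 phone batteries
```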

Such promising technology has naturally led to further investment, and the Sheffield site will soon be dwarfed by significantly larger plants: Centrica (the owner of British Gas) and EDF Energy are both in the process of building 49 MW facilities, in Cumbria and Nottinghamshire respectively.

When more energy is being fed into the national grid than consumers are using, the batteries will absorb and store the excess power. Then, during periods when consumers demand more energy than the grid can provide, the batteries can release that stored energy back into the grid, ensuring that everyone has access to power.
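As a rough illustration of that charge/discharge logic, the sketch below steps a battery's stored energy up or down according to the surplus on the grid. All names and numbers are illustrative; a real plant like Blackburn Meadows responds to grid frequency within a second rather than to raw generation figures.

```python
def balance_step(generation_mw: float, demand_mw: float,
                 stored_mwh: float, capacity_mwh: float,
                 step_h: float = 0.25) -> float:
    """One control step: charge on surplus, discharge on deficit."""
    surplus_mwh = (generation_mw - demand_mw) * step_h
    if surplus_mwh > 0:
        # More power is flowing in than consumers are drawing: absorb it.
        return min(capacity_mwh, stored_mwh + surplus_mwh)
    # Demand exceeds generation: release stored energy to cover the gap.
    return max(0.0, stored_mwh + surplus_mwh)

store = 2.0  # MWh currently stored (illustrative)
for gen, dem in [(60, 55), (50, 58), (52, 52)]:
    store = balance_step(gen, dem, store, capacity_mwh=5.0)
    print(f"stored: {store:.2f} MWh")
```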

This matters all the more because the UK energy mix contains an ever-increasing proportion of intermittent sources such as wind and solar power. This June saw 70% of electricity produced from nuclear, wind and solar sources. For the government to hit its legally binding carbon-cutting targets this needs to become the standard for electricity production, and storage is likely to be necessary to balance the intermittency of renewable supplies.

To meet these targets the government introduced a ‘capacity market’ – a subsidy scheme integral to the shake-up of the electricity market. It is designed to ensure energy security, particularly during times of high demand such as the winter months. The scheme has a pot of £65.9 million, which it divides between energy suppliers that can guarantee a constant supply. It may sound surprising that, in the age of austerity, a government so interested in penny-pinching wants to hand out money. However, it is estimated that the Sheffield site alone could save £200 million over the next four years by increasing energy efficiency, which makes the £3.89 million awarded to E.ON a worthy investment.

E.ON has seen its share price in Germany fall dramatically as it is undercut by abundant, cheaper renewable energy from other suppliers. Germany is often hailed as a world leader in renewable energy production; during one weekend in May this year, 85% of its energy came from renewable sources. E.ON in the UK was heading down the same path: in recent years UK profits stagnated and trade fell by up to 9%. Only in March this year did profits begin to pick up again, as the company shifted away from fossil fuels and towards green energy production. The battery site in Sheffield is an excellent next step in that shift.

Moore’s Law: Will It Stop?


Harpreet Thandi

In 1965, Gordon E. Moore, an American electrical engineer, wrote an article in Electronics magazine suggesting that the number of transistors on a chip would double every year; a decade later he revised this to a doubling every two years. The prediction is now known as Moore’s Law. Moore went on to co-found Intel, one of the biggest makers of the microprocessors that determine the speed of laptops and PCs.
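The law is just compound doubling, which makes its consequences easy to compute. A minimal sketch, using the 1971 Intel 4004’s 2,300 transistors as a convenient real starting point:

```python
# Moore's Law as compound growth: a doubling every two years multiplies
# the transistor count by 2**(years / 2).
def transistor_count(start: int, years: float, doubling_years: float = 2.0) -> float:
    return start * 2 ** (years / doubling_years)

# Starting from the Intel 4004 (2,300 transistors, 1971), forty years of
# two-year doublings predict a chip in the billions -- the right order of
# magnitude for processors around 2011.
print(f"{transistor_count(2300, 40):,.0f}")  # -> 2,411,724,800
```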


This law has wider implications than raw processing power. Devices have become smaller and smaller: we went from room-sized mainframes to smartphones and embedded processors. Shrinking chips has made each transistor cheaper, even as the fabrication process itself has become ever more expensive.

In the larger scheme of things, this two-year cycle has become the underlying model for technology. It has given us better phones, more lifelike computer games and the ever-quicker computers we use every day. Perhaps the effect came from goal-setting – the industry decided processing power must double every two years – or perhaps it was just natural progression. Either way, Brian Krzanich, chief executive of Intel, has suggested this growth could be coming to an end, while still backing the target: “we’ll always strive to get back to two years”. The firm nonetheless disputes that Moore’s Law is dead, even though future processors will not be produced so quickly. Users might notice that a new phone or laptop is only slightly better than the older model. There is a real need for Moore’s Law to be met again, because this pace of development leads to more effective processors and saves a great deal of money through efficiency.

Keeping up with Moore’s Law has demanded some major compromises, and we are now at a crossroads: microprocessors keep getting smaller, but they are approaching a fundamental limit set by their size. Below a certain transistor size, quantum effects begin to take hold. “The number of transistors you can get into one space is limited, because you’re getting down to things that are approaching the size of an atom.”

A problem that emerged in the early 2000s is overheating. As devices have shrunk, the electrons are more constricted and resistance in the circuits rises dramatically, creating the heating problems familiar from phones and laptops. To counteract this, ‘clock rates’ – the speed at which microprocessors run – have not increased since 2004. The second issue is that we are reaching a limit on the size of a single chip. The solution is to use multiple processors instead of one, which means rewriting various programs and software to accommodate the change. As components get smaller they must also become much more robust.

Four and eight cores are now the standard quantities in laptop processors. “You can have the same output with four cores going at 250 megahertz as one going at 1 gigahertz,” said Paolo Gargini, chair of the road-mapping organisation. Using more cores lowers the clock speed of each processor, tackling both problems at once. More innovations are being pursued, but many of them are simply too expensive to be effective.
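Gargini’s point is plain arithmetic: for a perfectly parallel workload, throughput scales with cores times clock speed. The sketch below shows the trade, with the usual caveat (not in the article) that Amdahl’s law limits the gain for any task that is only partly parallel.

```python
# Aggregate throughput ~ cores x clock, for fully parallel work.
def aggregate_ghz(cores: int, clock_ghz: float) -> float:
    return cores * clock_ghz

print(aggregate_ghz(4, 0.25))  # four cores at 250 MHz -> 1.0 "GHz" of work
print(aggregate_ghz(1, 1.00))  # one core at 1 GHz     -> 1.0, the same output
# The slower cores run cooler, easing the overheating problem, and the work
# is spread across several processors, easing the single-chip size limit.
```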

According to the International Technology Roadmap for Semiconductors (ITRS), which has predicted the future of computing since 1993, transistors will stop getting smaller by 2021. After the hype around graphene and carbon nanotubes in 2011, the ITRS suggested it would take 10-15 years before these materials could be combined with logic devices and chips; germanium and III-V semiconductors are 5-10 years away. The new issue is that transistors will stop shrinking, and the industry will move away from Moore’s Law.

Intel is struggling to make new breakthroughs, and if these problems are not resolved it will fall off the two-year doubling target – all while facing strong competition from its rivals. IBM has already begun challenging Intel with a processor built on a seven-nanometre process, packing 20 billion transistors and four times today’s power, due to be available in 2017. “It’s a bit like oil exploration: we’ve had all the stuff that’s easy to get at, and now it’s getting harder, … we may find that there’s some discovery in the next decade that takes us in a completely different direction,” said Andrew Herbert, who is leading a reconstruction of early British computers at the National Museum of Computing.

There is also a possible future in quantum computing. Quantum computers work with qubits – quantum bits which, unlike ordinary bits, can exist in superpositions of 0 and 1. Because a quantum system can occupy multiple states at once, a quantum computer could work on multiple problems simultaneously and come up with solutions in days that would traditionally take millions of years.
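In the standard notation (which the article does not use), a qubit is not simply 0 or 1 but a weighted combination of both, and a register of n qubits spans an exponentially large state space:

```latex
% A single qubit is a superposition of the two basis states:
\[ \lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle,
   \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1. \]
% n qubits live in a 2^n-dimensional space, which is where the
% "many problems at once" intuition comes from:
\[ \lvert\Psi\rangle = \sum_{x \in \{0,1\}^{n}} c_{x}\,\lvert x\rangle. \]
```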

In May 2015 Moore spoke in San Francisco at an event celebrating the 50th anniversary of his article. He said: “the original prediction was to look at 10 years…The fact that something similar is going on for 50 years is truly amazing…someday it has to stop. No exponential like this goes on forever.” When he made that prediction, it was completely unknown whether the number of transistors in a computer chip really would double every year. The trend has continued far longer than expected and is now part of popular culture: Moore’s Law has become the underlying standard for the future, one that society has lived up to and driven itself to meet.


Alan Turing – The Father of Computer Science

Image reproduced with the permission of James Evans Illustration.

Sintija Jurkevica

Could a computer ever be able to enjoy strawberries and cream? Could a computer ever make a human fall in love with it? These are the kinds of questions Alan Turing (1912-1954) might have asked whole-heartedly at a dinner party, revealing the eccentricity of the genius himself. By profession, Turing was a distinguished British mathematician, logician and philosopher who pioneered the field of computer science; in persona he has been characterised as petulant and reserved, concealing a world of innocence and a passion for nature and truth.

To celebrate the 50-year milestone since the passage of the Sexual Offences Act 1967, this article highlights some of Turing’s most influential achievements, followed by a short account of his personal life as a man who found himself attracted to other men at a time when same-sex attraction was illegal.

    #1: The Universal Turing Machine

Suppose a world in which computation, crudely defined as mathematical calculation, is carried out only by humans. This almost begs the seemingly obvious question: could a physical machine be engineered to carry out simple calculations? And yet, at the technological limits of the early 20th century, the answer was far from obvious. Turing was fascinated by the possibility of building such a machine, and in 1936 he conceptualised a mathematical model of a computer, named the Turing Machine.

The Turing Machine was conceived as an infinitely long paper tape divided into squares, each carrying an erasable digit; together these act as the machine’s storable memory. The digits on the tape are recognised, printed or erased by a read/write head. Given an instruction as simple as calculating 2 + 2, the machine reads the digits one at a time and alters them according to a fixed set of rules until the calculation is finished – re-reading and rewriting the tape until, in this example, it arrives at the answer 4.
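The idea is concrete enough to sketch in a few lines of code. The simulator below is an illustration, not Turing’s own notation: its rule table adds two numbers written in unary, in the spirit of the article’s 2 + 2 example, and swapping in a different rule table “reprograms” the machine – exactly the step the Universal Turing Machine generalises.

```python
# A minimal Turing machine: a tape of symbols, a head position, a state,
# and a rule table mapping (state, symbol) -> (write, move, next state).
BLANK = " "
RULES = {
    ("scan", "1"):   ("1", +1, "scan"),   # walk right over the first number
    ("scan", "+"):   ("1", +1, "trim"),   # replace '+' with '1'...
    ("trim", "1"):   ("1", +1, "trim"),   # ...walk to the end of the tape...
    ("trim", BLANK): (BLANK, -1, "del"),
    ("del",  "1"):   (BLANK, 0, "halt"),  # ...and erase one surplus '1'
}

def run(tape_str: str) -> str:
    tape = dict(enumerate(tape_str))
    head, state = 0, "scan"
    while state != "halt":
        write, move, state = RULES[(state, tape.get(head, BLANK))]
        tape[head] = write
        head += move
    return "".join(tape.get(i, BLANK) for i in sorted(tape)).strip()

print(run("11+11"))  # -> "1111": 2 + 2 = 4 in unary
```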

Whilst each Turing Machine can only follow a single set of rules – a single program – a Universal Turing Machine can hypothetically compute any number of programs once its set of instructions has been changed, or re-programmed. This concept of a universal, programmable computer laid the foundation of the modern theory of computation, in which a single machine can interpret and obey any program, just as, in essence, a standard digital computer does. It took another nine years for electronic technology to evolve far enough to turn Turing’s mathematical concepts and logical ideas into practical engineering and demonstrate the feasibility and usefulness of such a device.

Upon closer philosophical enquiry, one realises that Turing’s argument for building the UTM connects logical instruction, something regarded as cognitive, with the materiality of a physical machine; this is arguably Turing’s most significant legacy, one that will influence many generations after him. Throughout his lifetime, Turing related his mathematical work to the functioning of the mind. For example, he regarded the building of the UTM as “building a brain”, and wrote an influential philosophical paper, Computing Machinery and Intelligence, which has inspired the field of Artificial Intelligence.

    #2: Cracking the Unbreakable Enigma

During the Second World War, Turing worked at Bletchley Park, the British cryptanalytic headquarters. There he designed and helped to build a functioning decryption system, the Turing-Welchman “Bombe”, which initially read the air force signals of the German Luftwaffe. From 1939 Turing also attacked the codes generated by the “Enigma” machine used in German naval communications, codes which had been deemed impossible to decrypt. His section, ‘Hut 8’, went on to decipher naval and U-boat messages on an industrial scale, and its influence has been argued to have contributed to the Allied victory over the Axis.

   #3: Work on Non-Linear Dynamic Theory

During his childhood, Turing was fascinated by nature and showed a curious philosophical bent, exercising his ability to make connections between seemingly unrelated concepts. At school he made degree-level notes on the theory of relativity, and during his undergraduate years at Cambridge he pondered whether quantum-mechanical theory could explain the relationship between mind and physical matter.

In his later years, working at Manchester University, Turing used the computers developed there to explain universal patterns in nature through mathematics, publishing another classic paper, ‘The Chemical Basis of Morphogenesis’, in 1952. His theory of growth and form in biology explains how so-called Turing patterns, such as leopard spots and the spirals of snail shells, emerge from an initially uniform mass of matter.
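Such patterns can be reproduced numerically. The sketch below uses the Gray-Scott reaction-diffusion model, a popular modern stand-in (Turing’s 1952 paper analysed a related linearised chemical system, not this exact one); after a few thousand steps the array u develops spots and stripes from an almost uniform start.

```python
import numpy as np

n, steps = 128, 5000
Du, Dv, f, k = 0.16, 0.08, 0.035, 0.060      # diffusion, feed and kill rates
u = np.ones((n, n))                          # chemical U, present everywhere
v = np.zeros((n, n))                         # chemical V, initially absent
u[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50       # perturb a small central square
v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

def laplacian(a):
    # Diffusion on a wrap-around grid via the 4-neighbour stencil.
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for _ in range(steps):
    uvv = u * v * v                          # the local 'reaction' term
    u += Du * laplacian(u) - uvv + f * (1 - u)
    v += Dv * laplacian(v) + uvv - (f + k) * v
# u now holds an organic-looking pattern of spots, as on an animal coat.
```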

   Turing’s Relationships

It was during his years at a boarding school in Dorset that he found himself attracted to another able student, Christopher Morcom, who inspired young Alan to communicate more and pursue an academic path. Their intellectual companionship left a significant imprint on Turing after Morcom’s sudden death from tuberculosis, inspiring him to examine the problem of mind and matter throughout his lifetime.

And it was around his undergraduate years at Cambridge that Turing realised his attraction to men was a significant part of his identity, as he sought intimacy with an occasional lover, James Atkins, at the time a fellow mathematician. Only with the years did he become more outspoken about his sexuality, leaving sexual conformity behind him. Curiously, while working at Bletchley Park, Turing proposed to one of his female colleagues, Joan Clarke, who accepted the arrangement; however, Turing retracted the proposal when he informed her of his true feelings.

On 31st March 1952, Turing was arrested and tried for sexual indecency after police learnt of his intimacy with a young man from Manchester. As a man who honoured the truth, Turing would not deny his “illegal acts”, but he admitted to no wrongdoing. As a severe consequence, Turing chose to undergo a year-long hormonal treatment – in essence, chemical castration – over a prison sentence. In the light of his “indecency”, Turing’s security clearance was revoked, ending his ongoing work with the government and leaving him a man with highly classified knowledge who had to endure intrusive police searches.

Turing was found dead of cyanide poisoning in 1954, believed to have been administered from an apple found beside him. The coroner’s verdict was suicide.

Throughout his life, Turing not only displayed exceptionally profound mathematical and logical reasoning; his curiosity about nature also allowed him to link seemingly unrelated topics and lay the first solid foundations of computer science. Without Turing’s contributions, it would have taken another prodigy, and an unknowable amount of time, to pioneer the age of computing on which today’s strong reliance on smart devices is built.

Alan Turing was a man who has transformed, and continues to transform, the world – regardless of his sexual preference.

Do Video Games Really Cause Aggression?

Helen Alford

Over the years, there has been much controversy over whether video games are linked to aggression and violence in the younger population. Usually, the games discussed are first-person shooters or action-adventure games where the player has the option to use weapons. This type of game has often been cited as a potential influence on the behaviour of young offenders committing violent crimes, such as school shootings in the USA.

Might there be any truth to this kind of speculation?

A quick Google search for ‘video games and aggression’ will bring up as many articles in favour of a link as those against it. Two articles appear next to each other, published less than three weeks apart, titled “Study Reveals Players Don’t Become More Aggressive Playing Violent Video Games” and “Study Finds Violent Video Games Increase Aggression”. There appears to be a great deal of research for each side of the debate, but no consensus.

The fact is the research is murky at best. Scientists have been looking into violent video games for over 20 years but there are still no conclusive results – as Google shows us.

In 2015 the American Psychological Association (APA) published the results of a study investigating the proposed link. The study looked at over 100 pieces of research dating from 2005 to 2013 and ultimately concluded that video games do contribute to aggressive behaviour. However, the authors were quick to note that “It is the accumulation of risk factors that tends to lead to aggressive or violent behaviour. The research reviewed here demonstrates that violent video game use is one such risk factor”.


Image Credit: Max Pixel

While the report made headline news in many newspapers, articles questioning its methodology and findings immediately popped up too. Over 200 academics signed a letter critiquing the research and labelling it ‘controversial’. Some of these researchers agreed that the report highlighted important areas for further research, but argued its conclusion did not tally with a near-global reduction in youth violence. Even so, video games really could be a factor in isolated cases of extreme violence.

Dr Vic Strasburger is a retired professor of paediatrics at the University of New Mexico’s School of Medicine. He has dealt with several ‘school-shooter’ youths and theorized that playing violent video games is one of four factors that drive these individuals to commit horrifying acts; the others are abuse or bullying, social isolation and mental illness. As with the APA report, he makes it clear that video games are just one factor contributing to such behaviour, not a simple causal relationship.

The Oxford Internet Institute has explored the topic from a different angle. They investigated whether the mechanism of a game contributed to feelings of frustration, rather than the actual content of the game itself. Interestingly, they found that if players were unable to understand controls or gameplay, they felt aggressive. Dr Andrew Przybylski said that “This need to master the game was far more significant than whether the game contained violent material”.

Interestingly, a few months after the APA report was released, researchers from Columbia University published a study looking at the positive aspects of playing video games. In many cases, children who often play video games are more likely to do well at school and experience better social integration. This is certainly a stark contrast to the ‘aggressive loner’ stereotype of gamers we have all come to recognize.

It seems that video games can actually have a plethora of positive effects, including improved motor skills, vision and decision-making. The hand-eye coordination of regular gamers tends to be better than that of people who rarely or never play. There is also research suggesting that playing video games enhances attention span, the ability to multitask and working memory. Plus, for many of us, they’re a good way to beat stress.

Ultimately, youth crime is falling while the accessibility of video games is increasing. While there may be a tentative link between playing video games and aggressive behaviour, other factors have a much greater influence. At best it seems that video games have a negligible effect on gamers, and that there are many positives to benefit from. So, ready, player one?

Leading the Blind

Andy Thompson

“There is something so totally purging about blindness, that one is either destroyed or renewed. Your consciousness is evacuated. Your past memories, your interests, your perception of time. Place itself. The world itself. One must recreate one’s life.”

These are the remarks of theologian John Hull who, having been visually impaired since childhood, lost his sight completely in 1983 at the age of 48. Over the next three years he documented his experience of adjusting to blindness through a series of audio diaries, which last year inspired the Bafta-nominated film Notes on Blindness, a biopic that charts Hull’s adjustment to life without sight.

39 million people worldwide share Hull’s experience of complete blindness, whilst a further 246 million are visually impaired in some way. But for these people the “purging” effects of blindness that Hull described are increasingly being curtailed.


Image Credit: Flickr

Rapid developments in technology have resulted in a proliferation of equipment designed to improve the way that the visually impaired experience and interact with the world. These technologies are hugely varied, ranging from voice-controlled home devices to advanced braille printers. This expansion of resources is only increasing, and 2017 looks set to see the release of some of the most ground-breaking products yet.

April this year will see the launch of the first braille smartwatch, the ‘Dot’. The watch has 24 magnetically controlled touch sensors on its face, which can be made to rise and fall individually to spell out any word in Braille. The Dot works by connecting to a smartphone via Bluetooth and conveying information from the phone to its wearer through these Braille messages. That information could be almost anything: a text message, the name of someone calling the phone, or directions from Google Maps.
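Under the hood, a display like this needs one well-defined step: turning each character into the raised/lowered states of a six-dot Braille cell. The letter patterns in the sketch below are standard Braille; everything about the device itself (pin layout, four-cell window) is an assumption based on the 24-dot description.

```python
# Standard 6-dot Braille patterns for a few letters (dot numbers 1-6).
BRAILLE = {"a": (1,), "b": (1, 2), "c": (1, 4), "l": (1, 2, 3)}

def to_cells(word: str):
    """Map each character to the raised (True) / lowered (False) state
    of the six pins in one Braille cell."""
    return [[d in BRAILLE.get(ch, ()) for d in range(1, 7)]
            for ch in word.lower()]

# 24 pins = four 6-dot cells, so the watch shows four characters at a time:
for cell in to_cells("call")[:4]:
    print(cell)
```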

The South Korean firm behind the Dot says the device has the potential to revolutionise how the visually impaired use smartphones, and claims to have over 140,000 pre-orders, including one from Stevie Wonder. If the device proves a success the firm plans to expand its range, and is already scheduled to release a tablet version of the Dot in collaboration with Google in 2018.

2017 may also see the release of the long-awaited ‘Smart Specs’, a pair of smart glasses being developed by the OxSight team at the University of Oxford. Smart Specs contain a complex camera system and a tiny computer, which together can improve a person’s ability to recognise faces, help them avoid collisions, and even let them see in the dark. Whilst the glasses are still in development, a successful nationwide trial was carried out in 2016, and OxSight’s founder Dr Stephen Hicks is optimistic that Smart Specs will be finalised before the end of the year.

Another piece of forthcoming wearable technology is a device called ‘HandSight’, which aims to improve how visually impaired people can read. HandSight is a ring with a tiny built-in camera which, as the finger wearing it moves along a line of text, records footage of the text and transmits it to a nearby computer, which then reads it aloud. Whilst other technology to help visually impaired people read already exists, the creators of HandSight hope their device will let blind people read larger amounts of text at greater speed. As with the Smart Specs, the device is still in development but has proved highly successful in trials.
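The pipeline this describes – camera frame to recognised text to speech – can be approximated with off-the-shelf parts. The sketch below is purely illustrative: HandSight’s actual software is not public, and pytesseract plus pyttsx3 are stand-in libraries chosen for the demonstration, with a hypothetical frame filename.

```python
from PIL import Image
import pytesseract   # OCR wrapper around the Tesseract engine
import pyttsx3       # offline text-to-speech

def read_frame_aloud(frame_path: str) -> None:
    """Recognise any text in a captured frame and speak it."""
    text = pytesseract.image_to_string(Image.open(frame_path))
    if text.strip():                       # speak only if text was found
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()

read_frame_aloud("finger_camera_frame.png")   # hypothetical captured frame
```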

All these and numerous other developments have huge potential to transform the lives of the visually impaired, but will sadly come too late for John Hull, who died in 2015. When Hull lost his sight in 1983 he adapted to his new life with little advanced technology, navigating with a traditional white cane and enlisting numerous family members and friends to record his theology books onto cassettes. He described blindness as having the power to either destroy or renew, and whilst he eventually came to embrace life without sight, the work of these companies and others could soon mean that no one need be destroyed by it.