It’s all fun and games until someone loses a planet
In 2006, the International Astronomical Union decided that Pluto was no longer a planet and was instead to be referred to as a “dwarf planet”. Outcry ensued and eleven years later it has not abated.
The physicist Sean Carroll writes in one of his recent books “Pluto is the ninth planet and it’s my book so I’ll call it what I like”, while Neil deGrasse Tyson writes in one of his own “Pluto isn’t a planet, get over it.” There’s even an episode of Rick and Morty where Jerry delivers a speech to the Plutonians, declaring that Earth’s scientists were mistaken in reclassifying it.
The man largely responsible for the momentous decision, Mike Brown, uses the Twitter handle @plutokiller and has the Death Star destroying Alderaan as his banner picture. So perhaps it's all a matter of whimsy and tongue-in-cheek sport. Pluto is, after all, the furthest planet/dwarf from the Sun. Does it really matter what we call it?
I am going to argue that it does, not because astronomical terminology is crucial to our lives but because this debate reflects something important about how Science operates. So hold onto your preconceptions folks! Well, actually don't. Let go of your preconceptions. But hang onto something.
I’m a Believer
I remember hearing the Pluto news on the radio and thinking it was pedantic nonsense. You can’t just change what Pluto is because someone decides to tweak a definition! I had images of pencil-pushing smart-alecs, smarming away to themselves at how clever they were, with no concern for public opinion.
Don’t misunderstand me here, public opinion does not dictate truth and reality is not flexible. But the definitions of words are, and the accepted meaning of a word should reflect its common usage. If everyone agrees on a particular definition, an organisation would be foolish to redefine it.
I also remember thinking the whole thing was bad for Science PR, because organisations like the IAU should serve the public, not dictate to them. If we use the word "planet" to refer to something which Pluto clearly is, that's enough reason to preserve its status. But here's the thing: Pluto doesn't match the public definition of a planet. That's why the IAU changed it.
What I was getting wrong eleven years ago was that the IAU genuinely was taking public opinion into account. The reclassification of Pluto was done out of respect for the lay public, not in spite of them.
The First Planets
Every ancient culture monitored the skies, charting the mysterious lights which roam above our heads, and every single one of them made the same discovery. The majority of the twinkling dots follow a clear pattern, changing position on a predictable 365-day cycle...but five of them do not.
Five of the bright sky-things move on bizarre trajectories, weaving and wheeling without rhythm or logic. The Greeks called these five objects "wanderers" (planetes in Greek) because they appeared to wander as if conscious beings. They were assumed to be Gods and were identified as Hermes, Aphrodite, Ares, Zeus and Cronus, later re-named for their Roman counterparts Mercury, Venus, Mars, Jupiter and Saturn.
The first definition of “planet” was therefore extremely simple. A planet was one of the bright lights which moved in non-predictable ways.
But thanks to the work of people like Eratosthenes, Ptolemy, Buridan, Copernicus, Brahe, Kepler, Galileo and Newton, we figured out that the planets were following a pattern, albeit a complex one.
The Sun sat at the centre, with the planets orbiting it at different speeds on a shared flat plane, one of those planets being the Earth we stand on. Sometimes Earth would fall behind another planet and sometimes it would overtake one, giving the impression of the other planet zig-zagging across the sky - what astronomers call retrograde motion.
To further complicate things, it turned out this view was only about 90% accurate. Firstly, planets move in ellipses rather than circles and secondly, they aren't strictly going around the Sun at all. The planets and the Sun are all orbiting their common centre of mass; it's just that the Sun is so much heavier that its own movements are tiny. If you assume the Sun is stationary with planets moving around it (what you were probably taught in primary school) you will get slightly wrong answers when trying to account for planetary motion.
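If you want to see just how small the Sun's wobble is, here's a back-of-envelope sketch in Python. The numbers are my own illustration (standard textbook values for the Sun and Jupiter, its heaviest dance partner), not figures from the article:

```python
# How far does the Sun sit from the Sun-Jupiter centre of mass?
# Two bodies orbit their shared barycentre; the heavier one barely moves.

M_sun = 1.989e30      # kg, mass of the Sun
M_jupiter = 1.898e27  # kg, mass of Jupiter (the heaviest planet)
a = 7.785e11          # m, average Sun-Jupiter separation

# The barycentre divides the separation in inverse proportion to mass
r_sun = a * M_jupiter / (M_sun + M_jupiter)

R_sun = 6.957e8       # m, the Sun's radius

print(f"The Sun circles a point {r_sun/1000:,.0f} km from its own centre")
print(f"That's {r_sun/R_sun:.2f} solar radii - just outside its surface")
```

Run it and you'll find the wobble is barely bigger than the Sun itself, which is exactly why it took careful measurement to notice that the Sun isn't truly stationary.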
Nature does complicated things so we have to accept equally complicated explanations, even if they contravene what we learned when we were young.
Six and Beyond
By the 18th Century, the definition of a planet had evolved to “something which shares a common centre of mass with the Sun and has a fixed elliptical orbit”. In fairness, that definition is a mouthful so “things which orbit the Sun” will do in a pinch. And there were six planets rather than five, because Earth was one of them.
Then in 1781, the astronomer William Herschel discovered that one of the dimmest stars visible to the naked eye does the whole retrograde-motion thing. By carefully measuring its position with a telescope, Herschel realised that this object wasn’t a star at all, it was orbiting our Sun. This made Herschel the first person in modern history to discover a planet, yielding Mercury, Venus, Earth, Mars, Jupiter, Saturn and George.
The name George didn’t catch on in France however, where King George was despised, so it was eventually renamed after the God of the sky: Uranus. One of the most majestic and powerful figures in classical mythology. Today, it has come to mean something else...well...strictly speaking it should be pronounced “yor-ann-us” but the other way is definitely more fun. As a physics teacher I’m pretty sure I’ve heard every permutation of this joke but I have to be honest, I still find Uranus hilarious.
Then in 1801, Giuseppe Piazzi discovered the eighth planet, Ceres, lurking between Mars and Jupiter. Ceres was the smallest planet discovered to date, about a quarter of the Moon's diameter, but it orbited the Sun just like the others, so Jupiter was bumped down the list to become the sixth planet, Saturn the seventh and so on. Inconvenient, but as a scientist you change your view when the data forces you to.
A few months later Heinrich Olbers discovered another planet at the same distance from the Sun, which he named Pallas. Then in 1804 Karl Harding discovered Juno. In 1807 Olbers discovered Vesta and in 1845 Karl Hencke discovered Astraea.
The thirteenth planet was a little different though. This one was discovered by equation rather than telescope. In 1821, Alexis Bouvard was taking precise measurements of Uranus (hur hur hur) and found that it didn’t move in a standard ellipse. Instead, it seemed to be pulled to the side as if there were another object attracting it and in 1846 Johann Galle finally observed it with a telescope, giving us Neptune.
Then Karl Hencke discovered the planet Hebe in 1847 along the same Mars/Jupiter orbit as most of the others. The fifteenth, Iris, was discovered the same year by John Russell Hind, the sixteenth, Metis, in 1848 by Andrew Graham and the seventeenth, Hygiea, in 1849 by Annibale de Gasparis. Hold on a moment...
Back up, back up
Any textbook on astronomy in the 1850s would have listed our solar system as boasting seventeen planets. But as our telescopes got better, we discovered more and more objects floating between Mars and Jupiter and by the 1860s there were over a hundred of them, which led to a problem.
When people heard the word "planet" they imagined great big round things with their own orbits, not scraggly space-debris circling the Sun like a moat around a castle. Either we could keep the definition of planet as "thing which goes round the Sun" or we could start using it the way the general public used it, even though that would disqualify the rocks between Mars and Jupiter. After much deliberation we went with the second option.
Although never formally defined, astronomers started using the word planet to refer to what the general public thought the word meant. This meant we needed a new word for the thousands of rocky clumps swimming between Mars and Jupiter and the term “asteroid” was coined.
Really, the problem arose because language evolves slower than Scientific knowledge. We get a word like planet in our vocabulary and it hangs around for hundreds of years, colouring our perceptions. If we discover that reality has nuances to it, we either keep using the old terminology or we invent a new word to describe the stuff we didn’t originally know was there.
The goofy story about Pluto
In 1906, the astronomer (and millionaire) Percival Lowell decided it was time we discovered a ninth planet. He had good reason to suspect there might be something there - minor disturbances in Neptune’s orbit - but mostly he was motivated by the passionate desire to look beyond the edge of what was known. He poured a lot of money and resources into searching for “Planet X” and hired some of the world’s best astronomers to work at his observatory.
Sadly, Lowell died in 1916 before Planet X was discovered, but the mission continued in his absence. Under the direction of Vesto Slipher (who also discovered the redshift effect) Clyde Tombaugh was set the task of searching the sky beyond Neptune and on February 18th 1930, he captured images of what Lowell had hoped for - a ninth planet, roughly the size of the Earth.
Planet X-fever gripped the world and international headlines proclaimed the discovery of the first proper planet since Neptune. A competition was held to decide what we were going to call it and over a thousand names were suggested. The name Pluto was proposed by eleven-year-old Venetia Burney, and ultimately won by popular vote.
By 1948 however, more precise measurements were made of Pluto's mass and it turned out we had been a little premature in declaring it the equal of the Earth. It was actually about a tenth as heavy. Never mind though, it was still bigger than Mercury.
Except it wasn't. By 1978 we learned that Pluto was actually about a six-hundredth the mass of the Earth, smaller than Mercury and even our own Moon, making it the smallest planet in the solar system. But it still satisfied the main criteria for being a "planet": it orbited the Sun, it was big enough to be round and it occupied a unique orbit. Except it didn't.
The Second Belt
In 1992, the astronomers David Jewitt and Jane Luu discovered a second object floating near Pluto's orbit, which they nicknamed Smiley but which was given the official designation 1992 QB1. Then in 2003, the astronomer Mike Brown discovered another icy body even further out, which he called Sedna. He went on to discover Haumea and Orcus in 2004, and then Makemake in 2005. But then, most disconcertingly, Brown discovered Eris, which turned out to be about 25% heavier than Pluto.
We can argue that Pluto is a planet on the grounds of it being round, and we can dismiss all the small rocks nearby as asteroids. But when we discover objects heavier or bigger than Pluto on the same orbit, it’s time to rethink things.
Turns out there are over 2,000 objects orbiting past Neptune and Pluto is only one of them. Our solar system doesn't have one asteroid belt, it has two! This second one has been called the Kuiper belt (pronounced Kie-pur) and its asteroids are very different from the ones we're familiar with. A lot of them are huge chunks of ice and rock, often many times bigger than planetary moons. Pluto, it turned out, was Ceres all over again - the first object discovered in an asteroid belt and accidentally labelled as a planet.
So what do we do? If we keep calling Pluto a planet then we're misleading people. It’s not very big and it’s not a lone body, it’s just a fat asteroid which happened to get noticed first. But if we want to keep calling Pluto a planet, we need to redefine what that word actually means.
Eventually the IAU decided to repeat what was done in the 1860s. The definition of planet was fixed in people’s minds, so we left it and came up with a new word to fit the new thing: “dwarf planet”.
The definition of a planet is the same as it always has been. Something which a) goes round the Sun, b) is roughly spherical due to gravity and c) has cleared its orbital path so it's the only dog in town. A dwarf planet is something which hasn't done the third one...it's big enough to be interesting, but it's part of an asteroid belt. This means our solar system really has six dwarf planets: Ceres (reclassified from asteroid), Pluto, Eris, Haumea, Makemake and 2007 OR10. And there's a good chance more will be discovered in the Kuiper belt with time.
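Those three criteria are simple enough to write down as a little decision rule. Here's a toy Python sketch of the idea (my own illustration, not an official IAU tool - in practice "cleared its orbit" takes real measurement and real argument):

```python
def classify(orbits_sun, is_round, cleared_orbit):
    """Toy version of the IAU's 2006 classification rules."""
    if not orbits_sun:
        return "not a planet"             # moons, stars, teapots...
    if not is_round:
        return "small solar-system body"  # e.g. a typical asteroid
    if not cleared_orbit:
        return "dwarf planet"             # round, but shares its orbit
    return "planet"                       # all three boxes ticked

print(classify(True, True, True))    # Earth
print(classify(True, True, False))   # Pluto, Ceres, Eris...
print(classify(True, False, False))  # a lumpy asteroid
```

Pluto passes the first two tests easily; it's the third one, sharing its orbit with two thousand Kuiper-belt neighbours, that reclassifies it.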
Personally I think the IAU made the right call. They were faced with either inventing a new word or changing the meaning of an old one. And the former option is usually the better idea. You can't force people to change the words they've always used, but you can introduce new ones.
When I was a young warthog...
People get annoyed about the whole thing because Pluto, it would appear, has been unfairly demoted. But the thing is, it hasn’t at all. Pluto hasn’t been changed into a different thing - we just discovered what it was all along, like taking the mask off a Scooby-Doo villain.
Imagine you had nine spoons of sugar in front of you. You're told by everyone that each one definitely holds sugar and you believe that for a long time. If you eventually discover the end one really holds salt, what you'd say is "oh, I guess we made a mistake". It would be bizarre to say "I've always been taught there are nine spoons of sugar and I still believe that's true. I'm going to redefine what I mean by sugar as 'any white powder'."
You're welcome to do that of course, but in doing so you're bending the definition away from what everyone means. You've also redefined the word to include things like sherbet and powdered glass. Unless you're extremely stubborn (in which case can I watch you eat your powdered-glass cake?) you know what the sensible thing to do is, even if you don't like it. The intellectually honest approach is to accept that you were taught a mistake. It wasn't anyone's fault and nobody lied to you, but you got told something incorrect.
So why do people object to learning the truth? Why do people get upset when a faulty fact is corrected? Shouldn’t that be a good thing?
In the process of writing this blog I consulted with my father, a passionate astronomer (the guy has a five-foot Russian-built telescope with a motor to compensate for Earth’s rotation in his garden shed) and he made a very important point: for a lot of people, this kind of thing can be more about emotion than intellect. If you grow up learning something, it can feel like the rug being pulled out from under you if it turns out to be wrong.
This is a fair point. When I tell the Pluto story to my younger students they are fine with it. I explain that there was a large asteroid which got mistaken for a planet and as soon as we realised the mistake we corrected it. There is no objection to this because "it was mistakenly identified as a planet" is part of the fact they learn.
It’s only when we are victims of the mistake that it can be a human instinct to fight back. Intellectually we might accept Pluto’s status, but emotionally we are irritated because we are creatures of habit and familiarity. The same way people objected to Ceres and Pallas being reclassified in the 1860s, people in the 2000s objected to Pluto going the same way. And, just like Ceres and Pallas, people growing up after that decision are fine with Pluto being a dwarf planet. Finding out as an adult that one of your childhood facts was wrong can feel like a piece of your childhood has been knocked away. Nobody likes having their childhood messed with.
Why it Matters
Science offers us insight and knowledge, but it comes at a price - we have to be prepared to let go of familiar beliefs if they turn out to be wrong. This is one of the hardest parts of Science but it’s also one of the most important. It’s the reason we no longer believe the Earth is the centre of the Universe. It’s the reason we no longer believe the planets are Olympian Gods. It’s the reason we make progress in the first place.
And it doesn’t have to be a bad thing. Alright, we lost a planet. That sucks. But technically we gained six dwarf planets as well, so if you want a solar system full of planets, the 2006 ruling gave you exactly that. And, most importantly, we gained a deeper understanding of how complicated the solar system really is.
There are eight planets, hundreds of moons, thousands of asteroids in two different belts (as well as two clumps of asteroids called the Greeks and Trojans orbiting near Jupiter) and probably dozens of dwarf planets. Not to mention comets from the Oort cloud.
We had to abandon our simple view of reality to get to this astonishing point, and it's very probable some of what we currently "know" will turn out to be wrong ten years from now. When people are young, they learn a simple view of reality, just as our entire species did. Science is the thing which allows us to move beyond that and gain a more sophisticated and beautiful view of the Universe. It can be painful letting go, but it can be eye-opening and wonderful as well.
Right, now let's deal with this whole "conventional current" malarkey...
In 45 BCE, Julius Caesar decided to make January the “first” month of the year. The reason was that Janus, the god after whom the month is named, was the god of doorways and new starts so it seemed an appropriate place to begin our cycles. The Earth isn’t in a particularly special place, but we designate the December/January switchover as a festival to take stock of the past and consider the future.
2017 has of course been full of negative "political" news stories - just like every other year - but I'm happy to report that - just like every other year - Science provided a candle of optimism in the perpetual darkness of parochial human affairs! The most important Science story was obviously that Lemmy, the late, great frontman of Motörhead, had a dinosaur-alligator named in his honour: Lemmysuchus. Some other things happened too. Here are my favourite picks of awesome Science stories from the last 365 days.
Not Today Tsunami
Tsunamis occur when an earthquake at sea sends water outward in all directions, devastating coastal towns and cities. Up until now there has been no way to stop them, but Usama Kadri, a mathematician at Cardiff University, has proposed a solution. By creating enormous sound-blasts underwater, an acoustic shockwave can be aimed at the oncoming tsunami like a deflector shield. When the kinetic energy of the water heading toward land meets the kinetic energy of the soundwave moving away from it, the energy of the water particles spreads out, dissipating as heat and sapping the tsunami. Kadri's idea is the first of its kind and has already been tested in small, artificial settings with great success. All that remains is to scale it up and choose what sound we use to blast tsunamis apart.
New Continent Discovered
It sounds made up but it's completely true. New Zealand, which everyone previously assumed to be an island group on its own, appears to be the highest point of a unique continental plate, separate from all the countries around it. This continent, named Zealandia on February 9th, lies 94% below the surface of the ocean but really is there, making its discoverer, Maria Seton, the first person to discover a continent in over three centuries.
Mental Illness is Normal
A study conducted by J.D. Schaeffer in the newly continented New Zealand found that between the ages of 11 and 38 only 17% of people experience no mental health problems. Everyone else experiences at least one bout of depression or anxiety and 41% experience it for more than a year. It turns out that being mentally ill at some point during one's life puts you in the majority. Perhaps this might not seem like an uplifting news story, but I think it's encouraging. If you suffer from mental illness yourself or know somebody who does, don't feel ashamed about it or stigmatised. We can now say categorically that it's a standard part of being human.
Goldilocks and the seven planets
On February 21st, NASA's Spitzer telescope discovered seven planets orbiting the star TRAPPIST-1, three of them in the Circumstellar Habitable Zone aka the "Goldilocks zone". That's the area around a star where the temperatures aren't too hot or too cold, making things just right for liquid water to flow and complex organic reactions to take place. TRAPPIST-1 is about 378 trillion kilometers away sadly, but the evidence is undeniable. Our solar system only has one planet in the CHZ for sure (Mars is up for debate), but apparently there are places in the Universe far more amenable to life. If it was able to arise in this barren cosmic wasteland, chances are it could have done so elsewhere.
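To put that 378 trillion kilometres in perspective, a quick Python conversion (standard light-year figure, my own arithmetic rather than anything from the announcement):

```python
LIGHT_YEAR_KM = 9.461e12   # kilometres light travels in one year

distance_km = 378e12       # TRAPPIST-1's distance, as quoted above
years = distance_km / LIGHT_YEAR_KM

print(f"Light from TRAPPIST-1 takes about {years:.0f} years to reach us")
```

In other words, any signal we beamed at TRAPPIST-1 today would take roughly four decades to arrive, and the light Spitzer caught left the star around the late 1970s.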
Life Started in Canada
The oldest fossils of living things have long been assumed to be the samples found in Pilbara, Australia, dating to aboot 3.5 billion years old, but on March 1st Matthew Dodd published results that put new microfossils discovered in Quebec, Canada at 4.28 billion. That would be astonishing given that the Earth is probably no more than 4.5 billion years old itself. The results are disputed of course, but exciting...eh?
One of the most abundant resources we have on the planet is saltwater. Unfortunately it’s unpalatable to humans, making it approximately useless. But on the 3rd of April, Rahul Nair discovered a solution (pun intended) to the problem. Graphene, made from sheets of carbon atoms arranged like a chickenwire fence, has billions of tiny holes which water molecules fit through but salt particles do not. Graphene works like a sieve, purifying the water and leaving salt on the other side. By using Nair’s method we could turn the oceans into fresh drinking-water for millions.
We Shall Not Be Moved
Perhaps the biggest story from April was a story about scientists themselves. After Donald Trump and many in his cabinet made comments denying climate change, asserting that vaccines caused autism or that the big bang was "a fairy tale", the scientific community worried that governments were no longer going to be making decisions based on Science (aka reality). In response, an estimated 1.07 million people in over a hundred cities around the world took to the streets on the 22nd of April to march in protest of science-denialism. The March for Science was the biggest pro-Science public demonstration in history.
On the 25th, doctor Emily Partridge and her team published a paper in which they reported keeping six prematurely born lambs alive inside artificially created wombs. The lambs were removed from their mothers via caesarean section at the equivalent of 23 weeks' gestation. Partridge and her team were able to keep the lambs alive and developing normally for weeks, after which they were delivered successfully. If we can replicate this in humans it could dramatically cut the number of deaths among premature babies.
The world’s first nano-grand prix took place on the 28th. Cars no more than a billionth of a meter across, invisible even to an optical microscope, were raced on a track for the first time, demonstrating the versatility and applicability of nanotechnology. The Swiss team won with their mini-hovercraft “Nano-Dragster” although unfortunately the victory was undermined by the shape of the car itself...
Forget Shark-Nado, Meet Crystal-Nado
It sounds like a joke but it isn’t. Kathleen Benson reported, on May 1st, that occasionally amid the Andes mountains of Chile, whirlwinds of air can pick up thousands of crystals and transport them across distances of over 5 km, before showering them in a sparkly display of magicness. This isn’t a world-changing or far-reaching discovery, but it’s objectively awesome.
Transparent Frogs Exist
Juan Manuel Guayasamin and his team discovered a species of frog which is completely see-through; you can actually see their organs working from the outside, a bit like that scene in Hollow Man where we see Kevin Bacon's innards through the skin. Only this time, no uncomfortable Kevin Bacon nudity! They are called Hyalinobatrachium yaku and are proof that sometimes nature does things for the hell of it.
Enceladus has food
In October 2015, Cassini (which plunged into Saturn on September 15th of this year) flew through the hydrothermal plumes of the moon Enceladus. As it shot through the jets, it collected a vast amount of data which was analysed over the next two years, and on April 14th one of the most startling results was published: Enceladus' sub-surface ocean has a lot of molecular hydrogen, the most likely source being hydrothermal reactions between hot rock and water. Molecular hydrogen is often used as a food source by primitive microbes, meaning the ocean may contain both the chemistry and the fuel for life. It used to be Mars which was considered our best bet for finding extra-terrestrial life; now it looks like Enceladus is going to take the top spot for astrobiological research.
Out of Eden
The earliest fossils of human-like creatures come from a site called Omo Kibish in Ethiopia and date to around 200,000 years old. The assumption has always been that this is where humans first evolved - the Shangri-La or Garden of Eden described in so many mythologies. Turns out that's not true. On June 8th, Professor Jean-Jacques Hublin announced that a site in Morocco called Jebel Irhoud has human-like fossils dating back to around 300,000 years. What's more, sites similar to Jebel Irhoud have been found all over Africa. The assumption has always been that these sites were later ones, representing our spread from the cradle of humanity in Ethiopia. But it looks like we had it backwards. If the Jebel Irhoud site has been dated accurately, the various human species were spread across Africa simultaneously rather than originating in one single place. This changes our understanding not only of human evolution, but of how species evolve in general.
Here Comes the Sun
A novel but surprisingly simple idea to fight skin-cancer was published on the 13th of July by Nisma Mujahid: a sun-cream which boosts melanin production in human skin. Melanin is the pigment which makes skin darker, meaning people with darker skin tend to be less at risk from skin cancer caused by UV rays. While most sun-creams merely cover the skin of white people like me in dark brown ink, this one actually causes melanin to be produced under the skin's surface, providing secure coverage. It has been tested successfully on mice and isolated human skin to great effect. All that remains is human trials.
2016 saw the discovery of gravitational waves; ripples in spacetime caused by leviathan cosmic events. The ones detected by LIGO back then were generated by the merger of two black holes, and this year we got a second big discovery; the collision of two neutron stars. Neutron stars are the cores of dead suns - essentially atomic nuclei the size of Manhattan - some spinning hundreds of times per second. When neutron stars fall into each other's gravitational attraction, the resulting collision is so powerful that it generates gravitational waves, along with heavy elements like gold that get scattered into the universe and wind up as globe-shaped prizes for people like Kevin Bacon.
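The "atomic nuclei the size of Manhattan" line isn't much of an exaggeration. Here's a rough density check in Python, using typical textbook values of my own choosing:

```python
import math

M = 1.4 * 1.989e30   # kg, a typical neutron-star mass (1.4 Suns)
R = 10e3             # m, a typical radius - roughly the size of a city

volume = (4 / 3) * math.pi * R**3
density = M / volume

print(f"Average density: {density:.1e} kg per cubic metre")
# An atomic nucleus is around 2e17 kg/m^3, so a neutron star really is
# nuclear matter packed into a Manhattan-sized ball.
```

That works out to hundreds of millions of tonnes per teaspoon, which is why two of them colliding can shake spacetime hard enough for us to notice from over a hundred million light years away.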
Gene Editing Achieved
On the 20th of September, research was published by Kathy Niakan and her team, who successfully edited genes in human embryos. Using the revolutionary CRISPR technique, Niakan switched off a single gene to show that it is essential for an embryo to form a blastocyst in the womb. Bearing in mind that roughly one in six women experience miscarriage at some point in their adult lives, the ability to edit human embryos could change the game completely. It would also allow us to remove diseases and illnesses from unborn children, giving them a better chance of life. People have speculated about the possibility of altering human genes for decades. Now, we have taken our first steps toward doing so. Maybe one day everyone can be edited to look like Kevin Bacon.
Part of our Universe has been found
It's no secret that most of our Universe is missing. Simply put, the Universe behaves in a way that suggests it should be heavier, but we've not been able to find where most of the missing mass is coming from. There are three suspected substances to blame. The first is Dark Energy, the second is Dark Matter and the third is Missing Baryons. And, on October 9th, the Baryon puzzle was solved. Independently, two teams led by Hideki Tanimura and Anna de Graaff discovered threads of hot gas trillions of kilometers long, linking up galaxies across the cosmos. Although it looks like galaxies are lone specks of light floating amid darkness, it turns out they are connected by unimaginably long filaments like highways connecting towns, accounting for much of the ordinary matter we'd been unable to find. Dark Matter and Dark Energy are still mysteries, but that's one down, two to go. Next mystery: why is Kevin Bacon doing the EE commercials?
It's Pronounced "Oh-Moo-ah-Moo-ah"
On the 19th of October, the Hawaii-based astronomer Robert Weryk discovered a 230 x 35 meter cigar-shaped object floating through our solar system. What was bizarre about Oumuamua (as it was later named) was that its trajectory could not be explained as having originated from either of the asteroid belts in our solar system, making it the very first interstellar object to approach our sun. That we know of at least. Sadly, it turned out not to be an alien probe, but most likely a hunk of rock which approached from the direction of the star Vega, knocked onto its current path approximately 600,000 years ago. The same day Kevin Bacon was born.
Photons Behaving Badly
This one is seriously weird. On November 9th a team led by Ado Jorio was able to observe a bizarre interaction between particles of light (photons). By firing a laser beam into water, the team detected pairs of photons "talking" to each other by exchanging temporary vibrations in the medium they were moving through. Electrons are known to do this in superconducting materials but seeing photons do it is baffling. Apparently, light particles can share information and energy with other light particles. There's not really a whole lot else that can be said about this one because it's such a shock. Watch this space.
In 2015 a six-year-old boy was admitted to hospital with a very rare genetic condition called Junctional Epidermolysis Bullosa. The condition is lethal in children, causing the skin to fall off and leaving you without your primary defence system. It's genetic and there was no known cure. Well…there wasn't a known cure. In what sounds like the plot of a movie, with the boy down to 20% of his skin remaining, a group of scientists led by Michele De Luca decided to try a never-attempted treatment in a last-ditch effort to save him. By taking a small sample of his remaining skin and infecting it with a virus designed to replace the faulty gene with a healthy copy, the team were able to create new healthy skin cells which they grew and grafted onto the boy. After eight months, the boy was finally given healthy skin and discharged from the hospital. Technically this story happened in 2016 but De Luca's results were not published until November 8th and it's too good not to mention. The young boy in question has returned to school and De Luca has genuinely found a treatment for a formerly untreatable disease. I would like to say "this sort of thing doesn't happen very often" but actually…in Science…it genuinely does.
Trump to the Moon
Say what you like about Donald Trump, he does seem to really like space. Whatever his motivations, I happen to agree with his ideology. Weird, right? The space program is crucial to our species’ survival (that’s not hyperbole, it’s just true) so if he’s serious about investing in it I’m all in favour. On the 11th of December, Trump announced that he wanted America to return to the moon with a mind for using it as a base to launch missions toward Mars and explore the rest of the solar system. He hasn’t given any specific deadlines for NASA, nor has he announced any additional funding he will be supplying for the target, but the sentiment is apparently there. At this point, I’ll take anything I can get.
Science provides hope even in hopeless places...
By all the stars!
A few days ago I was talking with someone who claimed her horoscope was always extremely accurate. Horoscopes claim that fusion reactions taking place trillions of kilometers away can influence the personality traits and lives of humans here on Earth. That’s an astonishing claim if it’s true. And millions of people seem to think it is...so maybe there’s something going on there.
I suggested to her that we carry out a series of simple tests to see if her horoscopes were genuinely as good as she thought. She agreed and we set about devising experiments to put them to the test. The outcome was sadly the same as every other test into the accuracy of astrology ever conducted: resoundingly negative. We couldn’t find any evidence that her horoscopes were trustworthy.
It's a real shame. Had the test yielded a positive result I would have been exhilarated. Imagine being the first person in history to confirm the existence of a link between star positions and human behaviour. I would have loved to find evidence in favour of horoscopes. Sadly however, that wasn’t what we found.
At this point, my friend became rather unhappy because we had ruined something. The result was easy for me to process because I went from “I’ve seen no evidence to trust horoscopes” to “I’ve seen no evidence to trust horoscopes”. Her journey was different however; she had to abandon a belief.
Admittedly, the more you get used to Science the easier this becomes, because you learn to be proven wrong regularly...but the first few times it happens it can sting like a nettle down the neck.
Intellectually, we should be just as satisfied with a negative result as a positive one because we still learn from it, but we are emotional beings as well as intellectual ones and having our cherished views dissolved can be horrible.
Scientists - ruining everyone’s fun forever
Perhaps unsurprisingly, people who believe in the supernatural often see Scientists as enemies. We are accused of trying to destroy belief systems or (far more often) of being “closed-minded”. Scientists want to subject everything to tests, often sucking the beauty and mystery out of the world, and if we can’t find it in a test-tube we decide it isn’t real.
I know why we come across like that. Science has a long history of debunking and discrediting supernatural claims, so it’s no wonder people think Science is anti-supernatural. But this simply isn’t true. Scientists are very open-minded. We are literally prepared to believe anything, no matter how ridiculous, if there is evidence for it. You want me to believe in Unicorns? Bring me a unicorn and I’m sold.
We’re not trying to ruin everybody’s fun at all. Scientists just go looking for answers by investigating. If someone claims there are Gods living on Mount Olympus we’re the ones who decide to climb the mountain and look. If we fail to find evidence for something you believe in, that’s not because we were trying to destroy it, it’s because the evidence was undetectable and that’s nature’s fault not ours.
The war on magic
There are all sorts of supernatural claims Science has investigated over the years and found zero evidence for. Mediumship, telekinesis, mind-reading, sympathetic magic, prophecy, ghosts, crystal-healing, ouija boards, homeopathy, dowsing, reiki and a whole buffet of others. They’ve all been scrutinised by Scientists who were trying to prove them right, and came up empty-handed each time.
These things could absolutely be real, but investigations have found nothing to support them, so we are left with a simple choice. Either we say “I don’t know if it’s true” or “I’m going to believe it anyway”. If you decide to pick that second option and believe in something without evidence, you have to answer the following question: what is your belief based on if not evidence?
The idea of magical forces watching over our destinies is exciting for sure, but to be a Scientist is to commit yourself to either evidence or ignorance. Nothing in between.
I’ve met people who have countered this line of argument by pointing out that even though the evidence is lacking for a claim, it could still be true. I agree of course, but believing something because it could be true is a dangerous position. Magnetism could be caused by invisible fairies who speak Welsh. Penguins might secretly be red and they put on the black outfits when they see us coming. Ewan McGregor could be Britney Spears...have you ever seen them together?
The problem with believing something because it could be true is that it’s a slippery slope to infinity. There are so many possible things out there it would be impossible to believe them all. And many of them would contradict. Scientists stick to what we’re confident probably is rather than what could be. It doesn’t mean we close our eyes to the possibilities, we just reserve judgement until we know more.
When demons walked
So, why can’t Scientists just leave things alone and let people believe what they want to? Why does it matter if someone has a few unsubstantiated ideas in their head? The problem is that a person’s beliefs determine their actions, so if their beliefs are crooked their behaviour will be too.
The Ku Klux Klan act on the belief that black people are inferior to white people. Nathalie Rippeberger’s parents caused her death by refusing to take her to see a doctor because they didn’t believe in medicine. We used to burn women alive at the stake for witchcraft and we believed we were right to do so. Saying “everybody is entitled to their beliefs” only works if people decide what to believe based on reason. That’s why Scientists want to get things right, even if that means abandoning a supernatural explanation. A world where everyone believes what they want is an abhorrent and primitive one.
There was a time when people didn’t know about bacterial or viral infections. If you got sick it was the will of the spirits. People who heard voices weren’t treated for schizophrenia or epilepsy, they were possessed of demons or communing with angels. Science has made us abandon these supernatural explanations and it has replaced them with life expectancy and good mental health. That’s a fair trade, I think.
After all, there was a time when we didn’t even know what air was and doors slamming would have been the result of poltergeists rather than differences in air pressure. I mean, imagine growing up in a civilization where people didn’t know where the rain came from, why food spoiled, or where the Sun went at night.
The pre-scientific world was one of ghosts and goblins. It was a place where humans were diseased and helpless. Then along came the radical notion that you could learn what reality was like by investigating and testing it. Once we realised this elegant truth we began looking for answers rather than guessing at them. And the answer to every mystery so far has been predictable cause-effect relationships between testable laws and particles. The answer has never turned out to be magic or mysticism.
That doesn’t mean magic isn’t real. But if you want to use magical explanations to account for your world, you must also recognise that magic is an ever-receding pocket of ignorance which has been shrinking like a shadow before a candle. You’re welcome to choose magic, but I cannot help but wonder: why would you choose ignorance and darkness?
Beyond our understanding
There are certainly deep and confounding mysteries which fill our Universe from edge to edge. What happens inside a black hole? Why is spacetime expanding? Does quantum gravity exist? Or, perhaps the greatest mystery of all time, when you listen to the song Doctor Jones by Aqua the guitar intro for the first 14 seconds is a beautiful piece of music, while the rest of the song sounds like Doctor Jones by Aqua. How is this possible?
Some things in the Universe are so strange that it can be hard getting our heads around them. But is it possible there are things which can’t be found in a laboratory? Things which don't conform to logical laws and textbook explanations? The answer is again, yes. There could be things which transcend natural law. But if such things really do exist, nobody would know about them...including the people making the claim in the first place.
Science is all about investigating the world through experimentation. Anything which can’t be tested for is supernatural. But if you claim to have knowledge of supernatural things, you are claiming that they are detectable because you yourself have detected them. And since the part of you which detected these things obeys natural laws (your brain) natural laws can clearly be used to search for them.
The laws which give you awareness are the same laws which underpin equipment in a laboratory. It’s not a matter of Scientists applying the wrong approach. Scientists are using the same approaches as supernaturalists, we’re just being cautious about it because we know how easily nature can play tricks on our senses. Salt looks like sugar and clouds look like cotton. We have to be better than that.
Kill the Myth
In a 2014 interview with Bill Moyers, Neil deGrasse Tyson explained that when people are wishing on stars they are more than likely wishing on planets. Moyers asks Tyson “Don’t you sometimes feel sad about breaking all these myths apart?” Tyson responds quickly: “No, because some myths deserve to be broken apart out of respect for the human intellect.”
This sentiment is close to the heart of Scientific motivation. We aren’t trying to ruin people’s fantasies, we just think people deserve to know the truth. We think people are smart enough to handle the facts, even if it means giving up a comforting superstition. We don’t think people should be patronised with fairy tales and spook-stories. We think grown-ups should have the right to grasp reality by the horns.
This doesn’t mean you have to abandon a world of mystery and wonder though. Supernatural beliefs offer you magical and fanciful ideas, but Science can beat them all.
Every atom in your body was formed in the core of a star and the atoms of your left hand came from a different star to the ones in your right. There are species of plant and jellyfish which are immortal. The sky is actually purple. Sugar glows in the dark when you crush it. On Venus it snows metal and on Neptune it rains diamonds. Some of the particles in your body can teleport to the moon. Time slows down or speeds up depending on where you’re standing. There are gases which can set fire to water. Whales used to walk on land. Diamonds are vomited to the Earth’s surface by volcanoes but we’ve learned to make them out of peanut butter. You can make frogs and oxygen levitate in magnetic fields. Last year we carried out real-life thought transplants. We have helped paraplegics walk and brought hearing to the deaf.
The world is full of strange stuff and there’s enough genuine magic out there for the entire species. To be quite frank, Science doesn’t oppose the supernatural, it just finds it a bit limited and boring.
Crying your pardon
Firstly, an apology. I’ve been completely inactive on my website for the past month. This is partly because I was preparing for the Institute of Physics annual public Science lecture (which I delivered on November 22nd to a gracious and patient audience).
Last year I was able to transcribe and summarise the lecture in a couple of blogs but this year I’m afraid that won’t be possible. The main topic covered was the Standard Model of Particle Physics and that's not easy to describe in an essay. Personally, I foam at the mouth with excitement when the whole topic of particle behaviour is discussed, but apparently some plebs don’t share my excitement. As a result, there was a lot of stuff I had to edit out of my lecture...stuff I’m now going to subject you to.
Full disclosure: this will be a self-indulgent blog that will bore many of my readers. I’m doing it anyway, because it’s my website and I freaking love this stuff. How dost thou like them apples?
Seventeen is a Magic Number...Apparently
I once recorded a barbershop-quartet song I wrote about the standard model of particle physics (cos I’m just that awesome) but if you’ve not come across it before, the standard model is usually depicted like the grid above or sometimes in a wheel like this:
These are the seventeen fundamental particles which can’t be broken down into anything smaller. As strange as it sounds, you can’t chop these particles in half because there literally is no half for them to be chopped into.
What’s more, we’re fairly confident this really is the bottom rung of the ladder. We have lots of reasons to suspect that these particles are the true building blocks of the Universe, which means every object or process you can think of is the result of interactions between the particles listed above. With the exception of gravity (which doesn’t play nicely) what you see is the alphabet of reality. Well...almost.
In truth, the seventeen particles of the standard model are not the whole story. Nature is rarely so considerate or simple. In fact, she seems to have a complete disregard for what humans will find intuitive and tends to prefer intricate complexity wherever possible. It’s almost like our brains evolved for the purposes of hunting and breeding rather than conceptualising the quantum-mechanical nature of reality.
Divide and Describe
Asking how many types of particle there are is like asking how many types of human there are. If you speak to a contact-lens designer they might say there are five: brown-eyed, blue-eyed, grey-eyed, green-eyed and hazel-eyed humans. That’s not wrong of course but it would be useless information to a haematologist. They might say there are eight types of human based on the blood groups A+, A-, B+, B-, AB+, AB-, O+ and O-.
To give a full picture that includes both properties, we might therefore say there are really forty types of human: blue-eyed people for each of the eight blood groups, brown-eyed people for each of the eight blood groups and so on. But we could always subdivide again based on something like hair colour for instance – blonde, brunette and ginger – to yield 120 types of human.
We could categorise and cross-categorise the human population according to gender, sex, sexuality, skin-colour, language, dietary habits or whatever else we felt like. The sheer number of possible “human particles” is staggering because there are so many different properties available. A similar complication arises when we want to describe the particles of the Universe, which is a much better way to spend our time. After all, putting humans into categories is sort of frowned upon these days.
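Because each property is independent of the others, the tally is just multiplication of category sizes. A quick sketch of the counting (the categories are the illustrative ones from the analogy above, nothing more):

```python
from itertools import product

# Illustrative categories from the "types of human" analogy
eye_colours = ["brown", "blue", "grey", "green", "hazel"]          # 5
blood_groups = ["A+", "A-", "B+", "B-", "AB+", "AB-", "O+", "O-"]  # 8
hair_colours = ["blonde", "brunette", "ginger"]                    # 3

# Each extra independent property multiplies the number of "human particles"
print(len(list(product(eye_colours, blood_groups))))                # 5 x 8  = 40
print(len(list(product(eye_colours, blood_groups, hair_colours))))  # 40 x 3 = 120
```

The same multiplication rule is what we'll use to count particle types below.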
Let’s Just Say There Are Four
For the sake of clarity I’m going to say there are four properties a particle can have. This isn’t strictly true, but a lot of particle properties aren't independent of each other, so we can count them as one.
For example, a tiger has the property of orange stripes and also the property of black stripes. Those are two distinct properties but logically they must occur together. We can group both properties into one and say that tigers have the property of being "stripey".
In the same way, you might have heard of many properties particles can have, but I'm going to ignore a lot of them because they make no difference. If you don’t like it, see above comment about apples. Here are the four properties...
Mass: This property means roughly the same thing as it does in our everyday life: it's a measure of how heavy a particle is, or how reluctant it is to change trajectory. For a particle it can take any value from zero up to about 0.02 milligrams (the Planck mass - anything above that is thought to be impossible).
Charge: This property describes how willing a particle is to be around other particles with the same property. It comes in two varieties called positive and negative. Particles with opposite charges will attract, while particles with identical charges will repel. Particles that have zero charge are unaffected. This one’s also easy to visualise because it’s similar to our notion of magnetism, with like poles repelling and opposite poles attracting.
Colour: This one is a little harder to visualise because it doesn’t compare to anything in our everyday world. The name is also misleading because it doesn’t refer to the appearance of a particle (it’s just a word we use); it actually refers to whether a particle can be separated from particles with corresponding properties. Particles with zero "colour" are able to move around on their own but particles with "colour" must clump together in specific arrangements. The interactions and types of "colour" available are quite complicated so I won’t go into them here, but my recent Instagram post (@timjamesScience) explains the basics if you're curious.
Helicity: This property is by far the strangest. Whole books have been written about it, so if the explanation I give here seems a little incomplete, that’s because it is.
Particles have a property called "spin"; a name as misleading as "colour". It doesn’t refer to whether a particle is literally rotating in space, but the mathematics of rotating objects happen to match the behaviour of this property, so we use the same vocabulary.
Spin values can either be whole numbers or half-numbers. Particles with whole-number spin are able to occupy the same physical location as each other without interacting (think of two beams of light overlapping) and we call these particles bosons. Those with half-number spin will stack against each other (like your body and the chair you are sitting on) and we call these particles fermions.
As well as coming in different numerical values, spin can also come in two varieties which we usually call up-spin and down-spin, but I’m going to break with convention and use the words clockwise and anti-clockwise to describe the two types. My reason is that "spin", just like literally spinning objects, can appear different depending on your frame of reference.
A particle moving toward you can be spinning clockwise, but when it passes and you start watching it move away the same particle can suddenly appear to be spinning anti-clockwise. "Spin" behaves in a similar way. The spin of a particle can be measured differently depending on what is around it and how you look at it.
This makes spin a confusing thing to talk about, but there is another aspect to spin - a complication which actually ends up making things easier to manage. The spin of a particle can "point" in a specific direction as if particles were like spinning tops with pointy ends. If the pointy end is facing the direction in which the particle is moving then we call it “right-handed” but if it points in the opposite direction it is “left-handed”. It is also possible for the spin to be pointing at right-angles to the direction of travel, giving us the possibility of “neither-handedness”.
This property of being left, right or neither-handed is called helicity. So although it is really to do with the particle’s spin, the handedness of particles is what allows us to distinguish them. Clockwise and anti-clockwise particles are practically the same, but left and right-handed particles are not.
From these four properties we can describe all the different types of particle and the varieties which arise among them. On the plus side, there aren't many different properties to worry about. The problem of course is that it's hard to visualise what some of these properties actually are. At least when we were talking about human particles we could always get our heads around them. Human characteristics and behaviour are never beyond explanation.
Bosons – whole-number spin
Photons – The simplest particles. They have no mass, no charge, no colour and come in only two helicities - left and right-handed - each with a spin of 1.
Z’s – Z particles have mass, but no colour or charge. They can have any of the three helicities (left, right or neither) all with a spin of 1, giving us three possible types.
W’s – W’s are similar to Z’s. They have mass, spin of 1 and three helicities, but they also come in two different charge varieties, +1 or -1, meaning there are six W particles.
Gluons – Gluons have no mass or charge but they do have colour in eight versions (see my Instagram post) and helicity coming in left or right, giving us a total of sixteen gluons.
Higgs – The Higgs boson particle has mass but no colour or charge. It has a spin of 0, meaning there is only one helicity possible (neither-handedness) and therefore only one type of Higgs exists.
Fermions – spin 1/2
Quarks – These particles have all four properties. There are six different quark masses, each with a different name: up, down, charm, strange, top and bottom. Up, charm and top quarks have a charge of +2/3 whereas down, strange and bottom quarks have a charge of -1/3.
Quarks also possess one of three colours named red, green and blue, giving 18 types so far. Quarks can also come in charge/colour-reversed versions called anti-quarks. Anti-up, anti-charm and anti-top quarks have charges of -2/3 while anti-down, anti-strange and anti-bottom have charges of +1/3 (the reverse of the “ordinary” quarks). Anti-quarks also possess the colours: anti-red, anti-green and anti-blue. This gives us 36 types of quark.
Then, because quarks have spin, they come in either left or right-handed helicity, giving us a grand total of 72 possible quarks.
Leptons – These fermions possess no colour. There are three with mass, called the electron, the muon and the tau. They also have a charge of either -1 or +1 (the anti-versions). Then there are left and right handed helicities, giving 12 massive leptons.
The remaining leptons have no charge and possibly no mass (they sort of do have mass, but for now let’s say they don’t) and they are called neutrinos. Neutrinos come in three flavours: electron-neutrinos, muon-neutrinos and tau-neutrinos. There are also anti-neutrinos for each, but because neutrinos have a charge of zero you can almost think of anti-neutrinos as having a charge of anti-zero. I mean, you probably shouldn't think of it like that...I'm just saying you could.
What’s really strange (apart from the fact that they sort of have mass and sort of have anti-zero charge) is that all neutrinos ever observed are left-handed and all anti-neutrinos are right-handed. This gives us six types of neutrino in total, making for 18 leptons.
The Grand Total
Photons x 2
Z's x 3
W's x 6
Gluons x 16
Higgs's x 1
Quarks x 72
Leptons x 18
118 different types of particle
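If you want to check the arithmetic, each family's count is just the product of its independent property choices (the numbers follow the descriptions above; this is bookkeeping, not physics software):

```python
# Distinct particle types = product of independent property choices per family
photons  = 2              # 2 helicities (left, right)
z_bosons = 3              # 3 helicities (left, right, neither)
w_bosons = 3 * 2          # 3 helicities x 2 charges (+1, -1)
gluons   = 8 * 2          # 8 colour versions x 2 helicities
higgs    = 1              # spin 0, so only neither-handedness
quarks   = 6 * 3 * 2 * 2  # 6 flavours x 3 colours x quark/anti x 2 helicities = 72
leptons  = 3 * 2 * 2      # massive: 3 flavours x 2 charges x 2 helicities    = 12
leptons += 3 * 2          # neutrinos: 3 flavours x neutrino/anti-neutrino    = 6

total = photons + z_bosons + w_bosons + gluons + higgs + quarks + leptons
print(total)  # 118
```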
Is That It?
In all honesty we don’t know. These are the 118 types of particle which definitely exist and which can’t be broken down, but there is absolutely the possibility, in fact the likelihood, of there being more. For instance, right-handed neutrinos and left-handed anti-neutrinos have never been discovered but it seems reasonable to suggest they exist somewhere.
There are lots of other hypothesised particles which many physicists believe may be real but haven’t been discovered. Things like the graviton (the particle responsible for causing gravity), the inflaton (the particle which played a key role in the early expansion of the Universe) or the Majorana fermion (which does things).
Then there are the so-called quasi-particles, some of which don’t exist independently but merge with existing particles (like Goldstone bosons), some of which aren’t exactly self-contained units but act as if they are (like phonons), and some of which don’t exist at all but we act as if they do to make the equations neater (like Faddeev-Popov ghosts). In short, the Universe is awfully big and awfully complicated. The standard model we have now is probably only a glimpse of what nature has in store.
Should we blame the government, or blame society...or should we blame the images on TV?
In August of 2011, riots broke out across London as thousands of people took to the streets and engaged in fighting, looting and wanton damage of property. Within days, the unbridled aggression had somehow spread to other cities across England and soon the entire nation was gripped in a frenzy of cosmopolitan outbursts.
The initial trigger had been the police shooting of London drug dealer Mark Duggan, but within 24 hours it had devolved into city-wide pandemonium...and then country-wide. Five people were killed, hundreds were injured and repairs to the city of London totalled over £200 million.
Why were so many people getting involved? This wasn’t a student protest which got out of hand, nor did all these rioters know Mark Duggan. It was as though everyone was engaging in mania for the sheer blood-soaked hell of it.
At the time, numerous "social experts" were interviewed on national news and started blaming it on something they were calling mass hysteria – the idea that humans will uncontrollably copy each other in large groups, even to the point of going against their normal behaviours. I was skeptical of this explanation. It seemed more likely that it was just people exercising their sadism and exorcising their emotional demons.
However, as a Scientist, I have to be willing to forego gut-instinct and look at the evidence in detail. Is there any reason to believe that mass hysteria is a genuine phenomenon? Were the 2011 riots truly a form of extreme group hypnosis or was it individuals making conscious choices to be aggressive under the protection of crowd-anonymity?
Welcome to the Twilight Zone
To begin with, let's look at one of the strangest crime waves in recorded history. On November 19th 1938, in the peaceful English town of Halifax, two women named Gertie Watts and Mary Gledhill arrived at a police station and reported being viciously attacked and cut across the face by a man wielding a razor. Two days later a woman named Mary Sutcliffe stumbled in with a similar story, this time decorated with deep slashes on her arms.
By November 29th, six other women had been attacked in similar fashion and a manhunt began. Knife-crime experts were brought in from Scotland Yard and a reward of no less than £10 was offered to whoever caught the man the papers began calling “The Halifax Slasher.”
It was then, while interviewing the nine young women involved, that Detective Chief Inspector William Salisbury uncovered an unprecedented twist. The Halifax Slasher never existed. Each woman had fabricated the attacks and self-inflicted the wounds. Independently.
After the first report of an attack, local newspapers had warned the public to be cautious of a knife-wielding monster and several women all decided to slice themselves in order to imitate the real victims, none of them realising there were no real victims.
Humans can obviously do very strange things in order to feel part of a group – even a group of attack survivors. Although nobody wants to be the victim of violence, it would appear that some people want to be part of a community so badly they will engage in self-harm to achieve it.
Peculiar for sure, but I don't think I'd class it as mass hysteria. These acts of self-harm could easily be the result of loneliness, mental illness or a prurient desire to take part in social drama. They were also acts which took place in private rather than as part of a hysterical group. Fascinating for sure, but not mass hysteria. The moral of the story is: when you investigate spooky things around Halifax, the monster probably isn't real.
The High School Terror
Now let's consider another epidemic - one I myself witnessed in 2005. On the morning in question, as I approached my high school (the one I attended, not the one I currently teach at) I saw an ambulance parked outside with a girl being carted into the back, oxygen-mask in place. Things got a bit strange when another ambulance arrived an hour later for a different girl, and then they got downright frightening when a third and fourth arrived that afternoon.
Over the following week, eight or nine girls were hospitalised in similar fashion and people were beginning to suspect something like a chemical leak in the Science department. What was the origin of this mysterious illness?
By doing a bit of our own investigating, my friends and I were able to get to the bottom of the whole thing, and we discovered that every pupil returned to school the following day with an identical diagnosis: they had each had an anxiety attack.
To be absolutely clear, anxiety attacks are a genuine ailment and should always be taken seriously. It's not just people getting worked-up (as I've heard it described). They are unpleasant and traumatic experiences for the sufferer and it's really no wonder ambulances were being called. Hyperventilation, chest pains, dizziness, fainting and sometimes even vomiting were symptoms all the girls displayed. And what made them particularly intriguing was their timeline.
Each sufferer had been present at the attack of the previous victim. The first girl - patient zero - had suffered an attack for some unknown reason and then, seeing the disturbing effects, the second girl became anxious herself. The third girl suffered a similar fate, as did the fourth and so on.
This story is relevant, although sadly anecdotal (you’ll have to trust me that it happened), and it still doesn’t quite prove mass hysteria. Anxiety attacks can obviously be triggered by stressful situations and watching your friend get loaded into an ambulance is clearly a stressful situation. So, while it was happening to a mass, there may have been nothing hysterical going on. Who wouldn't get a little anxious after seeing a close friend suffering? And who wouldn't get even more worked up when other people started showing the same signs of illness? This could just have been friends sympathising with each other in that telepathic way they often seem to do.
The cheerful part of the blog
In November 1978 a community of socialist idealists living in Guyana, under the leadership of the reverend Jim Jones, apparently committed group suicide by drinking grape Flavor Aid laced with cyanide. Over 900 people drank from the poisoned chalice, including large numbers of children, and all died in under an hour. Today this event is referred to as "The Jonestown Massacre".
This does sound like a genuine case of mass hysteria at first, and although it’s certainly very weird, I’m still not sure it counts. For starters, Jonestown was a radical political settlement populated by people who had fled their ordinary lives to live in huts as part of a socialist order they believed was inspired by God. It seems reasonable to suggest that there may have been a high proportion of extremist/unstable people in the community to begin with.
Furthermore, Jim Jones made a tape recording of the entire process and it’s clear that huge numbers of people either objected to what was happening but were violently coerced, or simply didn’t realise they were about to die. Jim Jones would run pretend-apocalypse drills regularly, so a lot of the victims probably thought it was an act and played along.
On top of that, Jones had just announced to the entire village that capitalist soldiers would soon be parachuting into their community to kill or kidnap everyone, including the defenceless children. He suggested it would be better to die free, as a sign of protest, than to live as a prisoner. It's possible that a lot of people in Jonestown were killing themselves out of political and religious ideology.
Grim and extreme as it sounds, lots of people are prepared to die or even kill for their principles, and many parents would rather let their children go peacefully if the alternative is imprisonment and torture at the hands of a totalitarian government.
Jonestown wasn’t a group of perfectly stable people all suddenly doing something hysterical because everyone else was. This was a village of strong-willed, politicised people with religious convictions of salvation, committing a powerful act of defiance, or simply being tricked, threatened and murdered. In other news, I’ll be writing a children’s self-help book about magical bunny rabbits over the Summer.
The bit where I am proven wrong...
There are numerous documented cases throughout history of fainting epidemics, outbreaks of dizziness, fevers, seizures, headaches and vomiting, although as we've already said, many of these episodes could be the result of anxiety or genuine contagious disease.
In order to confirm whether mass hysteria truly occurs we need examples of humans doing utterly uncharacteristic things for no political, religious or social reason other than “everyone else was doing it”. And, to my great surprise, it turns out there really are a few incidents which fit the bill.
The physician J. F. C. Hecker recorded, in 1844, an outbreak of “meowing” which took place in a medieval French convent. The nuns in question apparently began making cat noises uncontrollably one evening and were unable to stop for several hours.
Then, there was the dancing epidemic of July 1518 in which over 400 people began dancing in the streets of Strasbourg, including the sick and the elderly. Many died from exhaustion in that one, so I guess you could call that...dance fever! Look, if you don't like my jokes then go back and read the depressing section on Jonestown again. Stop judging me.
Speaking of inappropriate laughter, consider the giggling epidemic of 1962 in which students from Tanganyika began laughing at school and were unable to stop themselves. That particular epidemic went on for weeks and spread to over a thousand students and teachers at fourteen different schools.
The sheer number of people involved in these instances makes mental illness an unlikely explanation. It’s also not an example of “unleashing the beast”, unless that beast is a cat who likes dancing and giggling a lot. Nor were these protests or acts of political and religious conviction. There is no reason to engage in these activities other than simple imitation, so I hereby change my mind. It would appear that mass hysteria may be a genuine, although rare, phenomenon.
So, what causes it? This is gonna get uncomfortable...
In Science you always follow the evidence wherever it leads even if it takes you to an uncomfortable place. Having decided that I was wrong about mass hysteria, what I really wanted to do was try and find some potential explanation for what causes it and, as I looked into all the recorded historical accounts, I did notice a rather inconvenient theme. You're probably not going to like this, but trust me, neither do I.
It turns out that when mass hysteria occurs, the people engaged in the weird behaviour are more likely to be female than male.
This is a really inconvenient thing to have noticed because it will no doubt give fuel to people who are going to say things like "women are more hysterical" or some such nonsense. Please just bear with me on this. I'm not about to mansplain why women are naturally more emotionally fragile or something like that. I think there is something interesting going on here, but it's quite subtle. Give me a chance.
Also, please don't get angry at me for something nature has chosen to do. I'm just reporting what appears to be biologically true. If there is any misogyny here then it's to be found in the architecture of the human brain, not in my describing it.
This better be good...
The meowing epidemic took place in a convent. The giggling epidemic affected girls’ schools and mostly female teachers. The dancing fever reportedly affected women more than men, and the pseudo-mass-hysteria cases, like the Halifax Slasher or the anxiety epidemic from my own school, again centred on young women. Why though?
Here's something which I think may potentially be to blame.
Human beings, like other primates, come equipped with a group of neurons in their frontal cortex called the mirror neuron system (MNS). These cells begin firing when you watch someone else perform an action...and they make you want to imitate it.
Suppose you’re watching a person who’s fairly similar to you in appearance or personality. Your brain recognises them as a kind of mirror image, so when they do something you imagine yourself doing it too. If that person twitches their left arm, your MNS sees the movement and immediately wants to copy it.
If you’ve ever found yourself yawning because you’ve seen someone else doing it, the reason (proposed by a 2013 study by Helene Haker) may be the MNS. The same mechanism might also explain why you’re more likely to laugh at a joke when you are in a crowd of people laughing together, than when you are on your own.
It’s even been suggested that these neurons may form the basis of empathy itself. The MNS in monkeys will trigger a pain-response when they see another monkey being hurt. This “sympathy pain” felt by the observer monkey looks identical on a brain scan to when the monkey itself is the victim.
There’s a clear reason why the brain evolved such a mechanism – imitation is crucial to learning. A brain which repeats what it sees is a brain which picks up skills faster. We just need to make sure we can override the MNS when it’s not being helpful. And if you’re wondering how the MNS differs between men and women, the answer is probably what you already suspect.
Yawei Cheng carried out a study in 2008 which showed people footage of moving objects. The MNS response turned out to be stronger when the object was a human hand and, more significantly, women showed a much greater response than men, particularly when the hand was female.
It might not be as simple as women having more mirror neurons but it may be the case that women activate the MNS more readily than men, particularly when observing other women. It sounds like stereotyping but there could be a genuine neurological basis for the belief that women empathise better than men do, particularly with each other.
Or consider the creepy 2008 story of identical twin sisters Ursula and Sabina Eriksson, who were kicked off their cross-country coach (following unusual behaviour) and left stranded by a motorway. After disrupting traffic and eventually being stopped by police, Ursula ran out into the path of a lorry in an apparent escape attempt/suicide. Sabina then did the same thing: after seeing her sister get hit, she ran into the traffic herself, suffering near-identical injuries for no apparent reason. And if identical twins aren’t going to have a strong MNS response to each other, I don’t know who would.
A Cautious Hypothesis
Suppose there was a group of women living together, or spending time with each other for an extended period, developing a strong MNS response. Most of the time the conscious brain would be able to override the copycat instinct, but if the environment became stressful or exhausting, that override would weaken, making it harder to suppress the urge.
If one woman began laughing uncontrollably, another might join in. Two could become three, three could become four and pretty soon everyone in the room is howling in unison. The mirror neurons don’t realise anything strange is happening, so they just force you to keep going, holding you hostage to your own behaviour.
It’s possible that mass hysteria may simply be an exaggerated by-product of women’s superior empathy skills, which in turn could be a result of superior MNS activity. Put a lot of humans together in a setting which will encourage stress and emotional tiredness and things are going to get weird.
So, if you’re female and under a lot of stress, you really might be able to blame your actions partly on mass-hysteria. It’s possible you didn’t have complete control over what you were doing at the time. If, on the other hand, you’re a man smashing a shop window to steal a television as part of a riot then there’s a simpler scientific explanation: you’re a jackass.
Science is evil...obviously
Last night I engaged in my favourite hobby - stealing things from blind nuns and laughing at their suffering. After all, I'm a scientist and we're morally bankrupt. We invented the atom bomb, chemical warfare and (as some conspiracists would have you believe) the Ebola and Zika viruses. Scientists are the heartless people in lab coats, electrocuting defenceless chimpanzees and cackling as they do so. In fact, when you pledge allegiance to the Head of Science, you have to kill a bunny and bathe in its blood. That's why I became a Scientist - I love human suffering.
I'm exaggerating for comic effect of course (not much though) but there really are people who see Science in this light. Some people seem to carry the notion in their heads that because Scientists want to understand how everything works, that means we are detached from the moral trappings of decency.
I was once asked whether Science had any moral compass or whether investigating the universe had to be done in a cold, moral vacuum. I originally gave a cursory answer in my "Q&A" section, but it's a brilliant philosophical question which deserves more thought. While there have been evil Scientists like Josef Mengele, Harold Hodge and Harry Harlow, is it true that all Scientists are destined to become purveyors of cruelty and sadism? Does Science make people evil?
Right and Wrong
Everybody carries ideas in their heads about right and wrong actions. To some people it's wrong to eat animals, to others it's fine. Some people think it's wrong to dance with members of the opposite sex, while others think it's wrong to even suggest there are such things as "sexes". Some cultures on Earth readily engage in cannibalism while others see it as one of the ultimate taboos. So how do you agree on morality when everybody disagrees?
Suppose a child slaps another child. An adult might disagree with their action and, ironically, slap the aggressive child themselves (I've seen it happen). Should we assume the adult's moral code is correct because they have experienced more of life? If we decided that adults know what they're doing and children don't, you'd have to explain how Malala Yousafzai began publicly defying the Taliban at the age of 11, going on to win a Nobel Peace Prize for it.
Even things we assume are obviously wrong are far from universal. Telling lies is often considered immoral yet millions of parents tell their children about Santa Claus and the Tooth Fairy. Or (to paraphrase Immanuel Kant) suppose a mad axe-murderer came to your door looking for someone you knew was hiding there. If it's morally wrong to lie, shouldn't you say "Yup, they're hiding in the closet, right this way!" It's the axe murderer who then finds the victim and kills them, not you. You are morally clean in that scenario because you didn't lie.
Or perhaps we could argue the mad axe murderer is not accountable for their actions because they are mad...perhaps they did nothing wrong other than obeying natural drives? And what do we make of all the killings which take place during a war? If a soldier shoots a terrorist, that's still committing a murder, any way you look at it.
Human morality is inevitably subjective i.e. it depends on a person's opinion. You might think it's wrong to slap a granny but that's just it...it's what you personally think. If someone else says it's fine to go out granny-slapping, then it's a difference of opinion not fact. But can it be resolved with Science? After all, Science has a long history of settling debates by discovering "objective" truths i.e. facts independent of beliefs or values. Could Science discover such a thing as an objective morality?
The notion of objective morality would be a moral code which could not be disagreed with. Such principles would be an inherent part of the Universe, like gravity pulling objects together or heat moving from high temperature to low. Could we use Science to discover moral principles which are fundamental to reality, which transcend human opinion and desire?
I'll be honest, I think the answer is no, and the reason is simple: Science is concerned with what is not what ought to be.
Let's take murder as an example. Imagine I wanted to shoot someone in the face. Science can tell me that pulling the trigger will kill the person involved. I would still want to murder them and could ask the question "why should I not kill them?" Science can then demonstrate that the man would no longer be able to enjoy life. I could respond with "why should he be able to enjoy life?" Science could point out that his death will cause suffering to friends and family. But again, I could ask the question "why should I not cause suffering to others?"
Science could show that I would not like it if someone made me suffer and I could agree, but still respond "Why should I treat other people the way I want them to treat me?" The answer could be "Because it would be unfair to do otherwise?" and I would respond with "Why should the world be fair?"
Science could even argue that a violent species is at risk of wiping itself out and that by committing violent acts constantly we could destroy the human race. But still, the murderer could respond with "Why shouldn't we destroy the human race? The Universe will carry on the same," and we could go on like that forever, never resolving anything.
No matter what we said to a murderer, we could not argue that the Universe requires them to not kill. The Universe doesn't permit objects to travel faster than light through spacetime but it does allow murder to take place. Clearly there is no fundamental law stopping it from happening, so a murderer has nothing preventing them from doing so other than the belief it would be better if they didn't.
If a person liked the idea of everybody being miserable, everybody suffering, and the human race going extinct, how could I show them they were incorrect for wanting that? How can a desire be incorrect?
Science can definitely show us that things like murder, theft, cruelty etc. make other people suffer and we can even show that their suffering is identical to ours. But "the decision to not cause suffering" cannot be shown to be something nature prefers. The Universe doesn't want or demand anything since it is not conscious and consciousness is required for morality.
Where would it come from?
If there is such a thing as objective morality, it must come from a supernatural source e.g. a God (a conscious entity not subject to natural laws). This doesn't mean atheists are horrible people however. I've seen many religious apologists say something like "so atheists don't believe in objective morality?" to which the atheist has to logically say "yes that's correct"...at which point the apologist springs their apparent trap: "Aha, so you think there is nothing objectively wrong with murdering people!" This tactic is a little underhanded, and I feel it gives apologetics a bad name.
Atheists can still think murder is evil and will still condemn those who do it, it's just that they think this belief comes from their personal desire to end suffering, not from a God. Atheists do not believe murder is objectively wrong but they also don't believe it is objectively purple. They just think words like right and wrong don't apply in the context of desires and values.
Atheists would also ask the question: if morality comes from God, who holds God accountable? In the Christian Bible, for example, the God of the Israelites threatens to make people cannibalise their own children (Leviticus 26:29 and Jeremiah 19:9), sends bears to maul 42 teenagers (2 Kings 2:24), and seems to encourage the murder of babies and enslavement of virgin women (Numbers 31).
What do we make of something like that? What if we aren't comfortable with the idea of killing children and enslaving women? Are humans allowed to disagree with such a moral command if it has come from God?
This, incidentally, is why many atheists reject the notion of morality from God, since God is sometimes willing to enforce suffering and death. Many religious people find these questions difficult to answer, although for a fascinating defence of God's morality in the Old Testament, I recommend the book Is God a Moral Monster? by Paul Copan.
Putting it to the Test
The reason we would struggle to use Science as a measure of morality is also down to how Science works. In Science, if you want to know the truth about something, you ask questions and carry out experiments. That's the only way to do it.
But the moral question is as follows: should we do evil? There is no experiment which can answer such a query because the answer always depends on a human judgement. Electrons don't lose charge when you tell lies and black holes don't appear when you say mean things. There is no "moralon" particle which influences other particles to prevent suffering. Asking whether you should or shouldn't do something isn't a falsifiable question, and Science only deals in falsifiable questions.
To be abundantly clear, I don't like the idea of the human race being wiped out or people suffering needlessly but that's just it. It is something I don't like. It's a feeling based on my personal tastes.
If you wanted to prove morality does exist external to human opinion, you would have to find an example of a moral act being somehow wrong...without there being a human mind involved. And I am not sure such an experiment even makes logical sense. The Universe seems to behave in a way which has no desire to appease or offend human sensibilities. Gravity works because it works, not because humans feel it ought to.
So...Scientists are Immoral after all then?
It would appear that using Science we cannot uncover an objective morality, which means any belief you have about right and wrong is either your opinion or coming from a supernatural source which Science cannot discover. Does this mean Scientists are immoral? Well, the answer is no. Scientists are not immoral, but they are "amoral", which means something different.
Immoral means knowing the difference between right and wrong, but doing the wrong thing anyway. Amoral means not being aware/not accepting there are such things as right and wrong. Satan would be considered immoral because he knows what right and wrong are, and chooses wrong. But a fox killing a rabbit is amoral because it isn't aware of morality. And in this sense, the Scientific worldview is an amoral one simply because there is no evidence morals actually exist, but that doesn't mean Scientists are evil people. Absolutely not.
Science cannot prove the existence of morals but it also cannot prove that Batman is better than Iron Man. It's a matter of opinion. Scientists are still able to have tastes and opinions about the world, they just can't prove their tastes and opinions are objective...which puts them in the same league as everyone else. Nobody can prove their tastes and opinions are objective, that's sort of what makes them tastes and opinions (unless you're Batman, in which case everything you do is morally right). So the answer is no, Science cannot help with morality, but I would like to make the case that it can help with something equally important: ethics.
Morals and Ethics
Although the words are used synonymously, ethics are not the same as morals. Morals are a person's individual decisions about what they consider to be good and bad acts. Ethics are laws a society collectively agrees on to make the world better for people. For example, morality might tell you not to cremate a corpse (there are many people who believe cremation is bad). That's fine because it's your opinion and you're entitled to it. Ethics takes a different approach. Ethics starts from the idea that we should try and make the world pleasant and minimise suffering wherever possible.
Cremation doesn't cause suffering to the deceased (they're dead), and it might actually solve the problem of overcrowding in cemeteries. Ethics looks at what the facts are and then makes a decision based on the notion that suffering is to be avoided. If the deceased's family would be greatly upset at their loved one being cremated, ethics could still decide cremation was wrong, but if the family had no objection, or actually wanted the cremation, ethics says go for it.
Ethics are still based on the human opinion that we should do well as a species and end suffering, but it never claims to be objectively correct. It's interested in learning the facts and then making a decision as a result. And this is where Science does operate.
Some of the most controversial ethical/moral issues we face today are things like abortion, euthanasia, animal testing, vegetarianism, capital punishment and what to do with psychopaths. Morally, everyone might have personal opinions about each of these issues but that won't get the debate settled.
In order to answer these tricky questions we have to rely on ethics, which means Science is relevant. Not in telling us what decisions to make of course, but in giving us the tools to make sure our decisions are well informed.
If we decide that causing others to suffer unnecessarily is something we want to avoid, then we can use Science to find out what causes suffering and how much is avoidable...but that initial decision still has to come from us. And I think this is where we have reason to be hopeful, because one thing Science has definitely shown is that humans have the capacity for empathy, sympathy, altruism and compassion. Just because the Universe is indifferent, doesn't mean we have to be :)
Right, I'm off to kick some orphans.
Everyone is Special
Talking about intelligence can rile people up, the same way talking about money or beauty can. It gets uncomfortable because sooner or later you have to address the fact that some people have more than others.
To combat this discomfort, educational movements have often tried to avoid the problem by deciding there is either no such thing as intelligence or that everybody has it.
It began in 1969 when the Canadian psychologist Nathaniel Branden published his landmark book The Psychology of Self-Esteem. Branden argued that self-esteem was a need like food or water, and that if it wasn’t met the person suffered. It was seized upon by witless educational theorists and the result was “The Self Esteem Movement”.
The idea was that telling children they were all highly intelligent would lead to more productive lives and greater happiness. It’s a well-meaning sentiment but it backfired for a pretty obvious reason any teacher could have told you. Praise is valuable, but if it’s given constantly and free-of-charge then it inflates egos, causes laziness, and eventually loses meaning.
The sociologist Kay Hymowitz conducted a meta-analysis of 15,000 studies on the effectiveness of The Self Esteem Movement and concluded that “Many children who are convinced they are little geniuses tend not to put much effort into their work.” Funny that.
It’s a shame, because Nathaniel Branden’s ideas were important and self-esteem is necessary, but cheapening it to “tell every kid they’re brilliant” is not how you generate happiness. It's how you generate narcissists.
Intelligent in your own way
Another popular idea, proposed in 1983 by the American psychologist Howard Gardner, is that of multiple intelligences. Gardner decided (pretty much off the top of his head) that there was no such thing as a single, general intelligence. Rather, there were several different types, with little correlation between them.
Consider the footballing skills of former England captain David Beckham. During his peak, Beckham could be in the corner of a large field, with 21 players running in different directions, and calculate exactly where the ball should go in order to give his team a tactical advantage. Not only that, he’d figure out how to move his muscles to apply the correct force at the correct angle to achieve his desired trajectory and he could do it in a matter of seconds...in his head. It would probably take me hours to calculate the same thing and I’d need a calculator and data table.
Gardner would argue, quite reasonably, that Beckham was using his brain to achieve specific outcomes, the same way Einstein did - just different types of outcome. Beckham’s intelligence resided in the physical realm while Einstein’s was in the logical and mathematical.
On the basis of this argument Gardner proposed several different types of intelligence which people could possess: musical, visual, linguistic, logical, physical, interpersonal, intrapersonal and many more.
It was a popular idea in schools – I remember being given the multiple intelligence test myself - but there are many problems with it. The most obvious being that it repurposes the word "intelligence" to the point of confusion. If we’re going to define intelligence in such a broad way, everything a human does is intelligent, because everything involves using your brain to achieve an outcome.
If you know how to walk we could say you have “perambulatory intelligence”, if you know how to cook we could say you have “culinary intelligence”. If you are a fan of the movie Transformers 5, we could say you have “no intelligence” and so on.
What Gardner’s model does is redefine intelligence to mean ability. But when you redefine a word, the thing you originally needed it for still exists. If we decided to repurpose the word carrot to mean “any kind of vegetable”, those orange things sticking out of the ground would still be there...so we’d have to invent a new word for them and the whole thing would repeat.
We use the word intelligent because it describes something we all seem to agree is real and distinct from other abilities. You wouldn't describe a good sandwich as intelligent, because it's not an appropriate compliment. Likewise, if someone has significant sporting prowess we can describe them as "athletic", "fit", "sporty" etc. but intelligence is referring to a different thing. That's not saying intelligence and sport-skills are mutually exclusive, it's just saying they aren't concomitant.
David Beckham could be a very intelligent man, but his footballing skills aren't a sign of intelligence, they're a sign of athletic ability. They are separate features and it's unwise to pretend they're the same thing. Ultimately, the problem with Gardner's approach is that the word intelligent is describing a very specific quality, not a generic one.
So what IS Intelligence?
Consider the following sentence: there are a few bananas in the bowl. From context you understand that “few” probably means more than two but less than ten. If I said “a few members of parliament voted against the bill”, then the word “few” suddenly means twenty or thirty people.
Words change meaning depending on context and pinning them down to a single definition can sometimes be detrimental. Often it’s the vague boundaries around a word which give it utility. As a Scientist I want to subject everything to clear and rigorous definitions, but I recognise this isn’t the way we use language. This makes defining a nuanced word like intelligence quite tricky.
I was once observing a lesson where a teacher said to a student “you’re very artistically intelligent!” The student looked puzzled and said “yeah, but being artistically intelligent isn’t real intelligence.”
I spoke to her afterwards and asked what she meant. It took her a while to articulate but eventually she hit on a nugget of brilliant insight. “Intelligence is when you’re good at things which go on inside your head,” she said. I think she might be onto something.
The ability to play an instrument is a function of the brain but it is expressed through the fingers. Being a talented singer is a function of the brain but it is expressed as movement of the vocal cords. The same is true of painters, sportsmen, dancers etc. Their abilities are based on brain activity but the outcome is manifested physically. The word for these things might be “talents”. But when we refer to intelligence we seem to mean abilities which do not translate so obviously into a physical mode.
A person can use their vocal cords with skill and intonation to deliver a speech. We might describe them as a skilled raconteur or actor, but the person who wrote the speech, who actually thought of the words to use, is the person we describe as intelligent.
Let’s take an even more obvious example which I think may prove the student’s point perfectly: Professor Stephen Hawking.
Nobody’s going to object if I call Stephen Hawking an intelligent man. Fair enough, a lot of his fame may be due to his struggle with physical disability, but let’s be clear: his reputation as one of the world’s leading theoretical physicists is well deserved. Even without his inspiring life story, Hawking would still be regarded as one of the greatest living minds. And yet there is absolutely no physical manifestation. That’s probably why Hawking’s story is so moving in the first place. He cannot express his brilliance physically, it is entirely within his head.
I would therefore argue that intelligence is whatever we agree Professor Stephen Hawking has. He can’t sing, play the tuba or tap-dance, but the inner workings of his brain, which cannot be demonstrated physically, is what we mean by “intelligence”.
Knowledge is Power, but it’s not Intelligence
What’s so special about Hawking’s brain then? Well, the guy definitely knows a ton about physics. But there’s clearly more to it. I know a lot about physics, but I’m not going to claim I’m as clever as Hawking. Not by a long chalk.
Intelligence isn’t the same as knowing things because anyone can memorise facts. I could tell a room of people “fermions are defined by their adherence to the Pauli exclusion principle, a function of their half-integer spin”...but does everyone in the room suddenly become smarter, even if they don’t understand what that particular fact means?
Probably the most workable definition of intelligence I can think of is as follows: answering questions you know the answer to is knowledge, figuring out answers to a question you don’t know the answer to is intelligence.
I think this definition, although loose, is probably as good as we can get. Intelligence is how well we process unfamiliar information; how well we use things we do know to grasp things we don’t. It’s vague, but I’m hoping that makes it better.
The IQ Test
The most famous assessments of intelligence are of course IQ tests. And I’m not talking about those 15 minute online things which always give a mysteriously high score, as if they’re wanting to flatter you into returning to their website...Hmmmm
I was made to take a proper IQ test once, and it’s a very extensive procedure. It took about five hours and was carried out by an examiner with a stopwatch. There were bits of paper, little puzzles to complete, the whole works. And I’m afraid I’m not going to tell you what my IQ is. Sorry. I’ve told two people in my life. Ever.
The reason is not because I have an embarrassingly low score, it’s because I don’t put much faith in the tests and don’t want people getting hung up on it. IQ tests tell us something, but it’s not intelligence. I know there's an old joke which goes "the only people who object to IQ tests are people who do badly on them". But that's not true. For the record I actually scored highly. I just don't think the number I got tells you much.
A Brief History of IQ
IQ tests were invented in 1904 by the French psychologist Alfred Binet. The ministry of education in France was trying to identify students who were likely to struggle in school and Binet provided the answer. Every student was given a series of simple common-sense questions and if they answered poorly, they were given extra support in class.
The questions included things like identifying the names of certain foods, lifting objects and deciding which was heavier, and even looking at faces of women and deciding which was the prettiest. Your score was then calculated as a fraction compared to other people (a quotient) and that was the end of it. Binet was very clear that his test was not calculating a single measure of “general intelligence”. It was just giving a sense of how you stood at basic tasks compared to other people your own age.
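The “quotient” part of Binet’s scoring is usually summarised with the later ratio formula (popularised by William Stern and then Terman): mental age divided by chronological age, times 100. A minimal sketch of that arithmetic, assuming that formulation (the function name is mine):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Classic ratio IQ: how a child's test performance ('mental age')
    compares to their actual age, scaled so that average = 100."""
    return 100 * mental_age / chronological_age

# A 10-year-old performing like a typical 12-year-old:
print(ratio_iq(12, 10))  # 120.0

# A 10-year-old performing like a typical 8-year-old:
print(ratio_iq(8, 10))   # 80.0
```

Note why this only ever worked for children: an adult’s “mental age” stops growing, so the ratio becomes meaningless with age, which is partly why later tests abandoned it.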
About ten years after Binet introduced his test, the American military were looking to find a method of assessing which soldiers should be given officer training in preparation for the first world war. They asked the Stanford psychologist Lewis Terman to design a test and he turned to Binet’s, adapting it slightly for adults.
Over 1.5 million soldiers took Terman’s test and were given a ranking of A – E, with only the A-grade soldiers getting officer training. Terman also introduced the familiar numbering we still use, where 100 is considered average intelligence for an age group and 140 is arbitrarily termed “genius”.
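The “quotient” in Binet’s day was literally a ratio: your mental age (how old a typical person performing like you would be) divided by your chronological age, times 100. A minimal sketch of that old formula (the function name is mine):

```python
def ratio_iq(mental_age, chronological_age):
    """Early ratio IQ: mental age over chronological age, times 100."""
    return round(100 * mental_age / chronological_age)

# A 10-year-old performing like a typical 10-year-old scores 100 (average);
# one performing like a typical 14-year-old scores 140 ("genius" on Terman's scale).
average = ratio_iq(10, 10)
genius = ratio_iq(14, 10)
```

Modern tests instead place you on a bell curve centred on 100, but the ratio is where the word “quotient” comes from.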
Sadly, Terman later argued that only smart people should be allowed to breed in order to better the human race and he made one or two teeth-clenching comments about the link between intelligence and race, so that gives you some idea what he wanted his test to be used for. Here's a quick example of a straightforward IQ test. How many Indiana Jones movies are shown below?
The Feynman Problem
The example I always use when illustrating the fallibility of IQ tests is what I call "The Feynman Problem". Richard Feynman had an IQ of 125. That’s not bad of course, but it would only indicate him to be “reasonably smart”. Yet Richard Feynman was inarguably one of the most intelligent people to walk the Earth in the last hundred years.
He won a Nobel prize for working out the mathematics of quantum electrodynamics, the two main biographies written about him are called Genius and No Ordinary Genius, he taught at Princeton, MIT, Cornell and Caltech, and was described by Robert Oppenheimer on the Los Alamos project (the greatest Scientific minds living in one town) as “the most brilliant physicist here”.
He was a freak of intelligence but based on IQ score you wouldn't think he was anything special. Hell, James Franco has a higher IQ than Feynman. James Franco!!! Even I have a higher IQ than Richard Feynman and I am NOT smarter than he.
While IQ tests might be telling us something, I don’t think we should put too much stock in the numbers. It would be like measuring a person’s fingers to see whether they would be good at playing piano. There may be a moderate correlation but it’s far from the whole story.
If you’ve got a high IQ then you’re probably quite bright, but being any more specific is going beyond what we can actually know. A person with an IQ of 120 may not be any more intelligent than someone with a score of 110 – they might just be better at doing the IQ test.
I think the ultimate problem I have with IQ tests is that because intelligence is a loosely defined word, we need a loosely defined way of measuring it. Trying to measure it with a number is like trying to nail a cloud to a piece of wood. It’s not the correct approach.
The Barmaid Test
There’s a famous Einstein quotation which goes: “if you can’t explain it simply, you don’t understand it well enough”. It’s a good phrase but it’s not real. It’s actually a mixture of his genuine quotation “the truth should be stated as simply as possible, but no simpler” and a quotation from Richard Feynman “if you can’t explain it to a freshman, that means you don’t understand it.”
Ernest Rutherford, another Nobel prize-winner, once said something with a very similar sentiment: “an alleged scientific discovery has no merit unless it can be explained to a barmaid.”
I feel this is a little unfair on barmaids but his point is valid i.e. if an idea is worth knowing, you should be able to explain it to someone who isn’t an expert in the field. The idea of the barmaid test is, at its core, that to understand an idea you should be able to state it simply. This, claim the great minds, is the best way of seeing if someone really understands something...get them to explain it in straightforward terms.
So I think Rutherford's Barmaid Test is probably a better measure of intelligence than IQ scores. If you really want to see how clever someone is, ask them to explain the clever-sounding thing they just said. If they can't, they're probably not as smart as they think they are.
Am I therefore saying that teachers are the smartest people on the planet? Yes. Yes I am.
Good luck in the new term everyone!
Done and Dusted
Thursday saw the release of GCSE exam results, marking the end of UK exam season. We now have a single week of breathing-space before it all starts again with the new cohorts in September. Bring it on.
Results days are some of the most emotionally charged days in the academic calendar, but they’re always mixed with commentary from politicians and pundits talking about the state of the nation’s education and, usually, the need for reform. It’s only a matter of time before someone says those mortifying words: "Exams are getting too easy, they were tougher in my day!" I’ve heard politicians say it, people on buses, parents of students and so on. Everyone seems to think their exams were the most difficult to have ever existed, but is that fair?
It seems like an insult to the hard-working students who have bled themselves dry in order to do well, but I guess it makes you feel special if you truly believe your life has been a tougher struggle than anyone else’s.
But how are today’s exams different to those of the past? As someone on the front line of modern education (well, it’s really the students who are on the front line, I’m more like the drill sergeant who trains them and sends them off to war) I thought I’d share my thoughts on the topic.
How Grades Work – UK vs USA
Education is a tricky thing to get right and I don’t think any country has it figured out (although I’d take a glance in Canada and Scandinavia’s direction). Most of the web traffic I get comes from the UK and the US, so let’s take a look at how these two systems address the problem of getting an entire population educated.
In the USA nothing is standardised. Every pupil attends classes and their teacher is responsible for their overall grade. How that grade is reached varies between schools, subjects and teachers themselves. Typically between 40 and 50% of the grade is based on a final exam, written and marked by the school faculty, while the remaining 50 – 60% is based on things like coursework, class participation, attendance and behaviour.
At the end of the year, the teacher adds up your scores from these different streams and the grade boundaries are pretty straightforward. Score 90% and you get an A. 80% gets you a B, 70% a C, 60% is a D. Anything below that and you get an F - a “Fail”. You can re-take the year however, so if you don’t get enough good grades you get another shot. And that’s the end of that.
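The US scheme described above can be sketched in a few lines (the 45/55 weighting is just one hypothetical split within the range mentioned; real schools vary):

```python
def us_letter_grade(percent):
    """Fixed US-style boundaries: 90+ is an A, 80+ a B, 70+ a C, 60+ a D, else F."""
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if percent >= cutoff:
            return letter
    return "F"

def final_percent(exam, coursework, exam_weight=0.45):
    """Combine an exam score with coursework/participation using a weighted average."""
    return exam_weight * exam + (1 - exam_weight) * coursework

# A student with 80% on the final and 90% across coursework:
grade = us_letter_grade(final_percent(80, 90))  # 85.5% overall, a B
```

Note that the boundaries here never move, which is the key contrast with the UK system described next.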
In the UK, exams are written by privately run Exam-boards. Exam boards make money in two ways: entry fees (schools pay to enter a student for an exam) and things like text-books and online resources. This second one means an exam board tends to make more money if they change the content of the course every few years since schools have to buy new books and equipment to keep up with them.
The exams are sat nationally at the same time up and down the country, before they get collected and distributed to markers (usually teachers earning a smidge of extra cash). That’s why it takes several months between the exams being sat in May/June and results day in August. Typically 90 – 100% of your score is determined by the final exam with things like coursework, homework and behaviour being irrelevant.
At GCSE level (age 16) the grades go from A* - G with a “U” grade being a fail. Except starting next year we’re switching to a numerical system where the grades go from 9 – 1 (9 being the highest). Then at A-level (age 18) the grades go from A* - E, with U being a fail.
The grade boundaries are moderated every year by a team of exam officers (slightly different for each board) so the score required to achieve a particular grade changes. Re-sitting is a complicated and expensive option so once you’ve done your exams that’s pretty much it unless you can afford the re-sit fees.
There are clear strengths and weaknesses with both systems. The UK model is obviously intended to be standardised so that an A from one school means the same as an A from another (although the fact that there are about five different exam boards sort of undermines that).
It does also prevent manipulation so a teacher can’t mark a student they don’t like harshly, or give extra credit to a student who’s good on the football team and the local community wants to see them going to college etc.
The US system has the clear advantage that the student has a chance to demonstrate skill over a long period of time, rather than being scrutinised on three years’ worth of work in a single exam. I’ve known students who have suffered a personal tragedy a few days before their exam so obviously didn't do their best. In the American system I’d be able to give them the grade I felt they deserved, but in the UK if you’re ill on the day – too bad. Until we learn how to digitally upload information to the human brain, it's unlikely anyone will solve the problem perfectly.
Lies, Damned Lies and Statistics
Let me demonstrate something which I think is important. I wanted to look at the figures surrounding GCSE and A-level grades but it turns out getting hold of these statistics is surprisingly difficult. The UK government website doesn’t offer any publicly available information so you really have to go hunting to find what you want.
I am particularly grateful to Brian Stubbs from the University of Bath, who I contacted to help write this blog. If you’re interested, I strongly encourage you to check out his website: http://www.bstubbs.co.uk where he has collated decades of historical exam information. So what does the data show?
Well, in 1989 approximately 77,700 A grades were awarded to A-level students in the UK. This year, around 150,000 were awarded. So exams are twice as easy because the number of A grade students has doubled?
Let’s take another look. In 1989 an “A” grade was the highest grade you could get, but in 2017 it’s the second highest. The highest grade in 2017 is an A*...of which only 69,000 were issued. In other words the “top grade awarded” went down significantly, so exams are obviously getting much, much harder, right?
Not necessarily. In 1989 only around 600,000 students nationally even attempted A-levels whereas this year it was around 830,000. So if we take the top grade as a percentage we see the number of top grades awarded has gone from 11% to 8%, so exams have gotten harder but only by a small amount.
Now let’s look at GCSE grades. In 1988, 12.8% of students were getting the second-highest grade. In 2007 that number was 13.1%. So actually the difficulty level of exams hasn’t changed at all - the same number of students are getting the same kind of grades.
But a really interesting pattern emerges if we look at the years a new “top grade” is introduced. In 2011, 7.8% of GCSE students achieved a grade of A*. Compare that with 1994, the year the A* grade was introduced – that year only 2.8% of students got it. So the exams are getting easier?
Well no, this year they introduced the grade 9 and only 3% of students got it. So if you compare like with like, i.e. compare 2017 with 1994, then you get 3% of students achieving the top grade, so there has been no change. Exams are staying about the same.
This year there has been a 0.4% drop in grade 9s/8s compared to last year’s A* grades for English GCSE. That’s the headline most newspapers are worrying over. Except what’s not being mentioned is that this grade-dip is for English language and literature combined. If you look at English literature (all students sit two English GCSEs) we’ve actually gone up by over 2%.
The point I’m making should be obvious. Depending on which years I pick and which grades I choose to look at, I can spin any story I want. If I were in the government I might want to make it look like grades were going up under my party. Or perhaps I might want to make it look like grades went down under the opposition. If I were the head of an exam board I might want to make it look like grades are staying level and that everything is nice and fair. We have to be very careful what we’re looking at.
The statistics are complicated. However, it is reasonably accurate to say there has been a slight increase in the percentage of “top grades” being awarded over the past twenty years. Grade inflation is a real thing, albeit a very subtle one. But that doesn’t necessarily mean exams are getting easier. In Science you don’t just look at the data and immediately decide the explanation. You consider alternative explanations and see if they account for the data better.
If exams were getting easier then we wouldn’t see sudden dips when a new grading system is introduced, like we did this year and in 1994. Actually, the most sensible conclusion to draw would be that grades increase as a function of familiarity. Change how familiar the exam is and you’ll see a dip in grades. What you might really be seeing in those figures is that people do better each year, provided it’s the same style of exam.
Teachers get used to the types of questions, pupils have access to more past-year’s papers, examiners have more trustworthy mark schemes, exam-writers have done it before so they can give more training to teachers on what to expect etc.
Actually, a very steady increase in grades is precisely what you would expect if the exams were staying more or less the same. The grade-inflation data we see is very small, implying that it’s more about adaptation rather than exams getting easier.
Today, partly thanks to the fact that schools are shifting to online data storage, we can keep past-papers from previous years and give them to our students. In fact, at my school I have done video-recordings of myself answering previous years’ Physics papers. Students can log on to the physics network and watch me as I attempt a question, describing my method as I go. This is very specific coaching which gives them a slight edge. And that’s a good thing.
The downside is that we spend a lot of time “teaching to the test” rather than teaching a subject for the fun of it and we put waaaaay too much emphasis on answering exam-questions. It has to be said that I have been able to train some students to jump through hoops and over obstacles and squeeze them over the boundary of an A grade, when really they don’t understand the Science any better than a student who gets a B.
Maybe I’m actually causing problems for them further down the line by doing that. I have occasionally coached a student to get an A grade, and they’ve gone to University only to find they don’t really understand the subject as well as they thought and have dropped out. Perhaps I should just let students do a bit worse and not train them in the art of the exam? Hmmm that's a tricky one.
Ultimately, once teachers get to know how an exam system works they can train the students to do better at it, so we see an increase in grades. The problem is that this puts teachers in a difficult position. The government tells schools to raise their standards. If the grades don’t go up then we’ve failed to do it. If the grades do go up then it’s because the exams are easier. It’s a no-win scenario which is not something anyone wants to face.
Besides, I’m not sure “more A grades” necessarily equates to a higher standard of education. At the moment more A grades just means more students better trained to pass exams. Is there a risk that some of the A-grade students aren’t really comparable to A-grade students of yesteryear because they’ve been coached to pass an exam rather than having a deep understanding of the subject? I’m not sure what the solution is (like I said, education’s a tricky thing to get right) so I tend to keep things as simple as I can: if a student asks me for help...I give it to them.
What Are Exams Like Now?
A report commissioned by Ofqual (The Office of Qualifications and Examinations Regulation) in 2012 really irked me. It decided that looking at grade boundaries wasn’t a good way of deciding if exams were getting easier. So far I agree. In order to solve the problem, they did a detailed analysis of exam papers from 2005 and compared them with exam papers from 2008...in two subjects (Biology and Geography). That would be like looking at the weather in two cities a week apart and drawing conclusions about climate change. That’s far too narrow a data set.
The report then claimed that yes, exams really were getting easier. Most of this conclusion came from two factors. Let’s look at the first one.
Ofqual noted that older exams had more essay-questions while modern exams had more multiple-choice questions. Therefore modern exams are easier. The assumption seems to be that essays are hard and multiple-choice is easy. Let’s break that nonsense down.
Working as an exam-marker isn't exactly a soul-fulfilling job. You get paid for every exam script you mark (not very much) so the aim is obviously to get as many done as quickly as possible. After a 10-hour day in school you go home, log on and spend another five hours staring at the same question over and over again, clicking buttons on a screen.
Do you think every line of every essay is closely scrutinised? Or do you think some markers just skim read it and decide the mark based on a general impression? I’m not saying that’s what should happen...but what do you think does happen?
Personally I know a lot of students who feel very confident writing essays. Use the right keywords, keep your grammar up to scratch, drop in some phrases you know the examiner is looking for and you can bluff your way to a high grade. In multiple-choice there’s a clear right or wrong answer and you can’t argue the point. An essay gives you room for manoeuvre and interpretation. A ticked box does not.
You might argue that in a multiple-choice question at least you have the correct answer written somewhere in front of you. But if you know the answer to the question, having it as a multiple-choice makes no difference...you’d have written the correct answer anyway. If you don’t know the right answer then you’re still at no advantage. Yes the right answer is written in front of you, but so are four incorrect ones. If you make a guess you’re 80% likely to get it wrong. Does that make multiple-choice sound easier?
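A quick sanity-check of the guessing odds with five options (one right, four wrong), both analytically and by simulation:

```python
import random

OPTIONS = 5  # one correct answer among five choices, as in the example above

def guess_success_probability(options=OPTIONS):
    """Probability that a blind guess lands on the correct option."""
    return 1 / options

# Simulate 100,000 blind guesses (seeded for reproducibility).
random.seed(0)
trials = 100_000
hits = sum(random.randrange(OPTIONS) == 0 for _ in range(trials))
simulated = hits / trials  # hovers around 0.2, i.e. 80% of guesses are wrong
```

So with five options, a pure guesser fails four times out of five, which is the 80% figure above.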
The second issue the Ofqual report highlighted was that some of the Biology papers had less emphasis on scientific content and more on softer things like context. That has been true, but it actually makes answering the question harder.
Here’s an example. When I worked as an examiner there was a question on an exam I marked which said “explain why graphite is used in pencils.” I saw one student who gave the following answer “graphite is composed of layers of hexagonally arranged carbon atoms in a 2D lattice. These layers have weak van der Waals interactions between them meaning they will slide off each other, allowing the graphite to be scraped as pencil lead.”
That answer is scientifically perfect. It’s a “hard science” answer. But guess what, that student got zero marks. The mark scheme wanted you to say “graphite is dark and brittle.” And there is the problem.
That’s a soft answer. It’s what a five-year-old child would say...but that doesn’t make the question easier to answer. It actually makes it harder because you’ve got no idea what the examiner wants you to say if they’re not looking for the specific Science.
I actually complained about that question because it was punishing students who had better scientific understanding and favouring those who answered like children. I wrote to the exam board and explained why I thought the mark-scheme should be changed. They ignored me, so I quit. They asked me to mark again for them the following year and I refused.
What the Ofqual report seemed to miss is that asking straightforward science questions is easier for a well-prepared student to answer because they know what’s expected. So I disagree with Ofqual vociferously. Exams are not getting easier unless you’re naïve enough to assume that certain types of question are inherently “easy” rather than acknowledging different students have different strengths and weaknesses.
Today’s GCSE physics students have to memorise 26 equations for their exam, whereas previous years were given a data-sheet to consult. I’m not sure I even know 26 equations off the top of my head. When I need to know an equation I do what every single scientist in the real world does...I look it up.
In English, students are no longer allowed to take their books into the exam to reference certain passages of text. In Chemistry A level, students are expected to know over 40 reaction pathways...most of which won’t get asked about. And the same is true across any subject. Exams are hard regardless of which year you’re looking at. But even comparing difficulty like this is a bit pointless because the grade boundaries are constantly changing.
How do Grade Boundaries get Decided?
Because the exam is different every year, grade boundaries change with it. At University the grade boundaries for your final exams don’t fluctuate, so if you happen to sit your paper during a tougher year, that’s just tough luck. University departments always have internal moderation panels to try and make sure the exam questions are fair, but it’s never perfect obviously.
The idea of moderating grade boundaries is to get around this problem. If the exam is harder, the boundaries are lower so you don’t get everyone failing. If the exam was really easy the grade boundaries are higher so you don’t get everyone passing who doesn’t deserve it.
But we’re faced with the same problem: how do we actually do this moderation? Do we assume the top 10% of students will be the best, so we give them all A* grades no matter how well they did? Then we just go down by 10% for the A grades and so on?
There’s an obvious reason not to do that. It makes the assumption that every year the abilities of students will be in the same proportion. There are going to be fluctuations each year so chopping things off every ten percent doesn’t quite seem fair. And what if there are more students one particular year? That means you get more students with the top grades, but are they comparable to the students who got the top grade the previous year?
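That hypothetical fixed-quota scheme would look something like this (the function name and the grade fractions are mine, purely for illustration of why it feels unfair):

```python
def boundaries_by_quota(scores, fractions=(0.10, 0.20, 0.30)):
    """Fixed-quota moderation: the top 10% of scripts get the top grade, the
    next 10% the grade below, and so on, regardless of absolute performance."""
    ordered = sorted(scores, reverse=True)
    cutoffs = []
    for frac in fractions:
        k = max(1, round(frac * len(ordered)))
        cutoffs.append(ordered[k - 1])  # lowest mark still inside the quota
    return cutoffs

# With a cohort of 100 scripts scoring 100 down to 1, the cutoffs fall at
# exactly the 10th, 20th and 30th best marks - however well the cohort did.
example = boundaries_by_quota(list(range(100, 0, -1)))
```

Notice that a stronger cohort simply pushes the cutoffs up: your grade depends on who you sat the exam with, not just on what you know.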
Usually around 5.5 million students are entered for GCSEs but in 2017 it was 3.6 million. A sociologist could spend years analysing this sudden dip in numbers, or we could just recognise that populations go up and down with time. That’s not a trend, that’s random noise.
Either we keep the grade boundaries the same each year and make an effort to keep the exams of comparable difficulty, or we go through all sorts of committee procedures to moderate the grades after the fact. And this is Britain...so it’s the latter option we go for.
The exact process by which grade boundaries are decided isn’t made clear unless you’re one of the senior examiners, but if you’re curious here’s the website of the Exam board OCR explaining how two students who both score 61/80 end up with different grades: http://www.ocr.org.uk/ocr-for/learners-and-parents/getting-your-results/calculating-your-grade/
That sounds potentially ludicrous. Part of the justification is that one of the students attempted more complicated questions. But more complicated according to whom? We’ve already seen that Ofqual considers essay questions harder than multiple-choice with very little justification, so deciding that one question is harder than another can vary from person to person.
Either the two students sat different papers (undermining the whole point of standardised testing) or they attempted trickier questions on the same paper. That’s like saying doing a single 2-mark question is more valuable than two 1-mark questions. Is it? Says who?
It turns out that grade boundaries are down to examiner opinion. If some examiners think a particular question is trickier or easier this can affect how well the student does after they have already sat the exam and there’s nothing the student can do about it.
The key message is that an A grade one year is not necessarily equivalent to an A grade the year after. You might immediately say “yes but the previous year’s paper was harder, so you can’t compare how they did one year with how they would have done the previous year.” Which is absolutely 100% exactly and entirely my point. You can’t compare two years because the exams are different. So there’s no point speculating on which was easier or harder. It’s too subjective.
I’m alright with you saying that a student who gets an A grade has done well, better than a student who gets a D...obviously that’s true. But that kind of broad statement is all we can honestly say. If we try and get more specific, analysing how students have gone up or down, we're extracting more information than is really there.
Likewise we can make general statements about exam difficulty. Calculus is harder than multiplying fractions. Balancing equations is tougher than counting electrons on a diagram, but again, being more specific is uncalled for. Is calculus harder than trigonometry? Is a pH calculation harder than an NMR spectrum analysis? It depends on the student and the examiner. It's too hard to call.
The problem is that when we try to compare exams between the present and the past we're getting too specific. We can't make accurate statements. Otherwise we're looking for patterns which are only there by coincidence.
Comparisons are Deadly
On the government GCSE-results website you can find the following quotation: “It is always difficult to compare in a meaningful way grade boundaries between old and new qualifications”. That’s actually a very fair thing to say. Well done government!
It’s just a shame they undermine their own message on the very same web-page with the phrase “Overall results are stable comparing outcomes last summer with outcomes this summer” (I’ve paraphrased it because the original sentence is three times as long and adds nothing).
Using the word stable seems like a mistake to me. Stability implies that something isn’t going to fall in the future, or hasn’t fallen compared to the past. But if we’ve already agreed we can’t compare present, past and future, what do we mean by saying the grades are “stable”? It’s almost like “stable” was just a positive-sounding buzzword which doesn’t actually convey much meaning.
The thing is, in the UK, exam criteria change every six years roughly. Each school picks a different exam board and as Ofqual’s own report found, there was a different style of exam even three years apart. How can we possibly hope to extract any meaningful data looking thirty years apart?
The Chemistry A level exam at the end of 2014 was fairly reasonable, but the one at the end of 2015 literally made the news because it was so difficult (I saw dozens of students coming out of the exam hall in tears that day). Two exams in the same topic one year apart can be wildly different.
With past papers available and teachers teaching to the test, exam boards have to constantly write tougher questions to keep it a challenge. It’s an arms race between students’ preparation and an exam board’s desire to actually test them. It gets to the point where if a student writes that a chemical is “blue/green” they get the point, but if they write “turquoise” they don’t (I’ve seen that happen too).
The fact is that it’s not possible to make a meaningful or detailed statement about the quality of exams by simply looking at the grades. If you think exams used to be easier, try teaching a class of students. Or better yet, try sitting your children’s exams yourself and see how well you do. Here's a maths question from an Edexcel GCSE paper a few years ago. Remember this is testing "General" maths education for 16 year olds.
Personally, I think things are Harder...but not because of the exams
As someone who sat A-level exams just over ten years ago, I’ve seen a decade’s worth of exam material and it looks about the same. Some bits were harder, some bits were easier.
I mean that’s just my personal opinion...but the exam boards are using that approach to measure grade boundaries, so I don’t see a problem. From what I can tell, the quality of questions is “stable”. There are fluctuations year on year but the exams today’s students are sitting are no harder, nor easier than the exams their parents sat.
However, there is something else which I think has to be factored in which you can’t measure or quantify. This makes it rather hard to write about in a Science blog, so I’ll make it clear: at this point I’m going into anecdote and speculation. What I have noticed is that students today are under more pressure than their parents were. A lot more.
I have seen students vomiting in exams from stress. I’ve seen them pass out. I’ve seen scores of students having intense anxiety attacks and I’ve even seen one or two wetting themselves. Yes, this isn’t pretty. Horrible to read about right? Imagine you’re a teacher who cares about these kids. Or imagine you’re the actual student themselves.
Imagine you’ve been studying something for three years (in the case of GCSEs) or two years (in the case of A-levels) and now you have to prove yourself in the space of two hours and it’s your ONLY chance. Imagine knowing you’re in competition with 5 million other students and the grade boundaries are in free-fall based on the whims of the examiners. Imagine having to study 10 subjects (only 3 of which you actually chose to do). And imagine being told that your entire future depends on them.
Students are given benchmark grades in year 7. They’re given mock exams in year 10 and then twice in year 11. There are catch-up sessions, warm-up sessions, workshops, after-school extra lessons and students are constantly tested (every three weeks roughly). Not only this, but they are repeatedly warned about the risks of doing badly in exams and how their life will be over if they don’t get the right grades.
Imagine being in a frightening, results-driven environment which is compulsory, you don’t get paid for it and you’re judged as a person based on a few hours worth of work. School in the UK is stressful for kids. Give them a break.
Yes, of course exams should involve stress. I remember working myself silly when I studied for my A-levels, but it was nothing like what I’m seeing today. I don’t really know what the cause is (I have a few guesses but this blog is already too long) but something isn’t right with this picture. When you have dozens of students crying their eyes out before sitting a mock exam...something has gone wrong somewhere.
Better Late Than Inaccurate
I don’t often write about current affairs in Science for two reasons. The first is that when a "news-worthy" Science story breaks, it gets splashed everywhere in the media so there’s no need for me to report it too. The other reason is that I like to take my time with things. When you hear a Scientific claim, the best thing to do is check it carefully, do some research, find original sources, learn the background etc. Unfortunately, the media machine moves very fast so by the time I know what’s actually happened I’m usually behind the curve.
And to be honest I like it that way. I’d much rather be cautious when I hear a news story than comment within the hour. Particularly if it’s complicated. So, despite many people trying to persuade me to write more up-to-date stuff, I’m going to be stubborn. Personally I value accuracy over expediency.
One of the exceptions to these rules is when the "hype" over a story has gotten out of proportion, or when people are misunderstanding what actually happened. In that case, I do feel more of an urge to try and put my thoughts out there to try and ground things a little. And this story of Artificial Intelligence (AI) gone haywire is a prime example.
You might have come across it a few days ago (31st July was when it broke). I ran across it on scaremongering Instagram feeds and ignored it, perhaps foolishly. When it kept coming back, I decided to look into it and see what the truth was. It’s taken me a few days to get to grips with it as I’m not an expert on AI technology, but I’m pretty confident I can report the story with reasonable insight.
What Got Reported/Is Being Reported
According to the headlines, facebook was doing research on AIs and successfully created two robots which possessed the ability to communicate with each other. The robots struck up a conversation, but very quickly decided to abandon English and invented their own language which the programmers could no longer decrypt.
The robots conversed in their secret “robot-ese” with increasing speed, hiding their conversation from us, learning as they went. Panicked and frightened, the facebook programmers immediately shut down the software before it got too smart. This is apparently the first instance of computers creating their own secret code system and attempting to outwit their human creators.
What Actually Happened...
Facebook, like many other companies that develop computer software, spends a lot of time researching chatbots. Chatbots are programs designed to mimic human speech, useful for all sorts of things like voice-recognition software, operating systems that talk back etc.
The way they work is by picking up on certain words, applying the basic rules of grammar and syntax, interpreting the message and outputting a logical response. There’s a debate around whether this constitutes “speaking” a language, but a lot of chatbot software can be quite sophisticated.
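To make that concrete, here’s a toy sketch in Python of the keyword-matching idea. The rules and replies are entirely made up for illustration; real chatbots like facebook’s use far more sophisticated machine-learning techniques, but the basic shape (match words, interpret, output a plausible reply) is the same:

```python
# A toy rule-based "chatbot": scan the input for known keywords,
# then emit a canned response. All rules here are invented for
# illustration -- this is the crudest possible version of the idea.

RULES = {
    "weather": "Lovely day, isn't it?",
    "hello": "Hello there!",
    "name": "I'm a chatbot. I don't really understand you.",
}

def reply(message: str) -> str:
    """Return the response for the first keyword found in the message."""
    words = message.lower().split()
    for keyword, response in RULES.items():
        if keyword in words:
            return response
    return "Tell me more."  # fallback when nothing matches

print(reply("Hello robot"))                    # -> Hello there!
print(reply("What about the weather today?"))  # -> Lovely day, isn't it?
```

Put two of these in a loop talking to each other and you can see how quickly the output would degenerate into repetitive nonsense.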
And chatbots aren't anything new. In fact, there’s an annual competition called the Loebner Prize, running since 1991, in which chatbots compete to try and convince a panel of judges they are human. These tests (where someone is talking to a screen and isn’t sure if it’s a person or a robot) are called “Turing tests” and there are lots of chatbots which have reasonable success-rates at passing them. Specific and detailed conversations are still impossible, but simple chats about the weather etc. can be simulated easily.
One of the things programmers particularly like to do in order to road-test their chatbots is therefore to put two of them into conversation with/against each other. Depending on your perspective this is either ingenious or hilarious. The result is that the two chatbots communicate and try to understand each other’s usage of a language.
Obviously, when two chatbots talk they end up exchanging complete gibberish because they don’t really understand English (that’s kind of the whole point of the research, to see how close a simulation can get). And that’s what these two chatbots ended up doing, the only difference was that their gibberish had a vague structure to it. The language they were using was still English, just a slightly distorted version which made little sense to any human reader.
If you’re curious what their conversation looked like, here’s a short section of it. In the spirit of the AI takeover, you should probably listen to the theme music from Terminator 2: Judgment Day (which I've helpfully included below) while you read the extract:
Extract from the Chatbot Conversation:
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Quaking in your boots right?
Not so much "The Terminator" as "Johnny Five suffering a really bad Tourette's outburst". The chatbots started producing conversations like that and as a result, the programmers switched them off. Not because they were being intelligent, but because they were being stupid. And here's the thing: this sort of thing happens all the time. Chatbots often start talking in garbled forms - the only reason this made the headlines was because it was facebook doing it and it's front page news when Mark Zuckerberg blows his nose.
The malfunction occurred because when the programmers wrote the chatbot they forgot to specify that the language had to stick to certain grammatical rules. If we agree that a language has to have certain properties e.g. finite words, infinite sentences, recursivity, generativity, then there are 24 grammatical possibilities a language can take (linguistic logicians like Frederick Newmeyer have actually worked this out). Of the 6,000 languages on Earth, only 15 of the possible grammar structures are actually used, with most languages sticking to one of 4.
In other words, almost every human language on Earth conforms to one of 4 types, but there are 20 largely unused ones out there. It’s no surprise that a mathematically-minded chatbot might select one of the others, which could well be far more efficient than ours. Really, it’s no surprise that a computer would butcher our language...our language is a mess.
So technically, the chatbots did start using their own language, but it wasn’t an invented one. It was just one of the other possible grammars, and they were still using English words. There was nothing sinister going on, and the plug was pulled because the bots were doing a bad job of simulating our language. It wasn't so much a case of "Oh God the robots are sentient, quick pull the plug!" as "Oh damnit...Steve, the stupid bots are talking like idiots again, can you hit the stop button, I can't reach it over my coffee!"
So no, we don't have anything to fear from facebook AIs getting too smart. The worst we could say is that a robot who spoke like Alice or Bob would be extremely irritating. This blog was brought to you by Skynet.
That's a Good Question
It's something I get asked by students of all ages. The Universe is expanding, but the Universe is (by definition) everything...so what the devil is it expanding into? Admittedly, most of my students don't phrase it like that because they're not 19th Century businessmen but you get the idea.
I usually do my best to give an answer on the fly but it's a surprisingly tricky thing to deal with because there are lots of misconceptions and variables we have to take into account. I'm afraid the answer isn't something simple like "your mum's face". It gets very strange, very fast.
The most straightforward response to the question is technically "we don't know" but NOT because we have no explanation - actually we have three - we just don't know which one is correct. So I decided it was time to do justice to the question and go through what we do and don't know. This topic is a bit of a head-pickler though, so if you find yourself getting confused don't worry, that means you probably understand it. It's the people who claim they understand the Universe you need to worry about. So, let's get down to business...
How do we know it's expanding?
If you look at the stars, everything seems simple. They follow predictable patterns and, the occasional comet or meteor aside, nothing seems to be moving around very much. For the longest time we assumed our Universe was completely static, but in 1912 we discovered something very unusual.
Imagine asking someone to do an impression of a car going past on a motorway. Pretty much everyone will make the same noise: Niieeeeeaaaaaoowwwww! It's hard to write it but you can imagine the sound I'm trying to describe. It starts off as a high pitched whine and then gets lower as it shoots past you. You may have also noticed the same effect when an ambulance goes past your house. The blaring of the siren seems to gradually droop as it moves away from you. This phenomenon is called Doppler shift and the diagram below shows where it comes from.
The sound waves are depicted as ripples. If you imagine standing in front of the car as it approaches, the sound waves are being squashed since the car is moving toward its own wavefront. The result is that your eardrums pick up lots of compressions per second aka a high frequency of sound. By contrast, if you are standing behind the car the pulses are stretched out because the car is moving away from you and your ear will detect a low frequency sound. High frequency sound is what our brains perceive as higher pitch, while lower frequency sounds correspond to the lower notes. This is why the car's sound appears to go from high to low as it shoots past you. The waves are going from compressed to rarefied, creating a pitch differential. But it's not just sound waves that do this; any type of ripple exhibits the Doppler phenomenon.
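If you fancy putting numbers on it, here’s a quick sketch of the standard Doppler formula for a moving source and a stationary listener. The siren pitch and car speed are just illustrative values I’ve picked:

```python
# Doppler shift for a moving sound source heard by a stationary listener.
# Approaching: wavefronts bunch up -> higher frequency (higher pitch).
# Receding: wavefronts stretch out -> lower frequency (lower pitch).

SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly 20 degrees C

def observed_frequency(source_freq, source_speed, approaching):
    """Frequency a stationary observer hears (source slower than sound)."""
    if approaching:
        return source_freq * SPEED_OF_SOUND / (SPEED_OF_SOUND - source_speed)
    return source_freq * SPEED_OF_SOUND / (SPEED_OF_SOUND + source_speed)

siren = 700.0      # Hz, an illustrative siren tone
car_speed = 30.0   # m/s, roughly motorway speed

print(observed_frequency(siren, car_speed, approaching=True))   # higher than 700 Hz
print(observed_frequency(siren, car_speed, approaching=False))  # lower than 700 Hz
```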
As you probably remember from high-school physics, beams of light exhibit a wave-like property. The nature of light is complicated but we can think of it as a ripple in an invisible field. This means a beam of light can appear stretched if it's moving away from you and vice versa.
If I were to throw a torch at your head, the beam of light would be slightly compressed before it hits you. And if you threw it back to me, the beam would be stretched as it moves away. This means beams of light can be shifted to higher or lower frequencies too, except instead of giving the wave a different note, the shift gives it a different colour. A high-frequency beam of light is what we perceive as blue/violet, while a low-frequency beam is what we perceive as red/orange.
Although it sounds hard to believe, an object moving toward you appears slightly blue and an object moving away will appear slightly red. This is an imperceptible effect however, partly because light waves are tiny and partly because your eye isn't sensitive enough to pick up on it, but it is there and you can detect it with the right equipment.
It was in 1912 that a man whose name (amazingly) was Vesto Slipher discovered that light from other galaxies was red-shifted. If you want to go into detail then technically what he discovered was that the Fraunhofer lines were redshifted (feel free to look that term up) but the result is the same. Distant galaxies give off light which is being stretched.
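The amount of stretching is usually quoted as a "redshift" number, z, which compares the wavelength we receive with the wavelength the galaxy emitted. A quick sketch (the observed wavelengths below are made-up round numbers, anchored to the real hydrogen-alpha spectral line at about 656 nm):

```python
# Redshift: z > 0 means the light has been stretched (source receding),
# z < 0 means it has been compressed (source approaching).

def redshift(observed_nm, emitted_nm):
    """z = (observed - emitted) / emitted, wavelengths in nanometres."""
    return (observed_nm - emitted_nm) / emitted_nm

H_ALPHA = 656.3  # nm, the laboratory (emitted) wavelength of hydrogen-alpha

print(redshift(660.0, H_ALPHA))  # positive: a receding galaxy
print(redshift(655.0, H_ALPHA))  # negative: an approaching one (like Andromeda)
```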
By 1917, the astronomer James Edward Keeler had made a careful measurement of all the known galaxies and discovered that on average everything was redshifted and therefore moving away from us. The Universe, it would appear, is expanding in all directions. There are a couple of exceptions e.g. the Andromeda galaxy is blue-shifted meaning it's headed straight for us, but the average picture is clear. Everything in the Universe is moving away, which means the whole thing is expanding. Here are some photographs of Slipher and Keeler - they don't help with the explanation but I had to include them for the look on Slipher's face.
Where's the Centre?
The first gut-reaction everyone has to this discovery is to be spooked by it - is the Earth truly the centre of the Universe? Why is everything moving away from us? This is where most of the misconceptions stem from so let's get detailed. The idea that all galaxies are flying away from us is wrong. They aren't.
In 1929 Edwin Hubble discovered that the further out you looked, the faster things appeared to be going. Imagine you were looking at a particular galaxy, call it "A", and measured its speed as 100 m/s. Then say there was another galaxy twice as far away, call it "B", and suppose B was also moving away from you at 100 m/s.
So far so good, both galaxies would be flying away from us at 100 m/s. But now imagine standing on galaxy A and looking at B. B would be matching your velocity, so you wouldn't see it moving at all. It would appear to sit at a constant distance from you, and it would be planet Earth which appeared to be moving away.
What Hubble actually discovered is that galaxy B is moving at 200 m/s from our perspective. This means an observer in galaxy A would look at B and say "galaxy B is moving away from me at 100 m/s, just like the Earth is."
In other words, an observer in galaxy A would also see everything moving away from themselves. People living in galaxy A would think they were at the centre of the Universe. What Hubble showed was that because things further out move faster relative to us, there is no "stationary point" which everything is flying away from. Actually, everything is moving away from everything else. There is no "centre of the Universe". Every point could equally be described as the centre, which starts to make things hard to visualise. The upshot of Hubble's discovery is that nobody can claim to be the centre of the Universe, no matter how much they might want it to be true.
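You can check this "everyone thinks they're the centre" logic with a few lines of arithmetic. Here's a sketch using toy numbers (a made-up Hubble-style constant and galaxies strung along a single line):

```python
# Hubble-style expansion on a line of galaxies: recession speed is
# proportional to distance (v = H * d). Shift to any galaxy's point
# of view and the SAME law holds, so no one can claim to be the centre.

H = 1.0  # toy "Hubble constant": (m/s) of speed per metre of distance

positions = [0.0, 100.0, 200.0, 300.0]   # Earth at 0, then galaxies A, B, C
velocities = [H * d for d in positions]  # 0, 100, 200, 300 m/s from Earth

# Now view everything from galaxy A (position 100, velocity 100):
rel_positions = [p - 100.0 for p in positions]
rel_velocities = [v - 100.0 for v in velocities]

# From A's perspective, speed is STILL proportional to distance:
for d, v in zip(rel_positions, rel_velocities):
    assert v == H * d

print(rel_velocities)  # [-100.0, 0.0, 100.0, 200.0] -- A sees the same law
```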
Actually, things aren't moving at all
The fact that the Universe is expanding in all directions without a centre doesn't seem to make sense. How can all the galaxies be flying away but not be flying away from a specific point? The answer to this question had actually been worked out back in 1922...before we even knew it needed solving. Sometimes Science is like that.
The Russian physicist Aleksander Friedmann had been playing around with Einstein's theory of general relativity (1916) and discovered that if you assumed the fabric of space itself was somehow stretching, the equations still worked. I don't want to get caught up in general relativity, but the basic premise is that Einstein's equations can be solved in different ways, each solution corresponding to a different possible Universe.
Friedmann was, largely for fun, seeing if it was possible to create a theoretical universe in which the fabric of space was stretching, and it turned out to be perfectly legitimate. It sounds wrong to imagine empty space having any kind of property, but it was just equations on a piece of paper; a mathematical curiosity which described a possible Universe, not necessarily the actual one.
Once Hubble had discovered the Universe was expanding in all directions, however, people began taking Friedmann's ideas seriously and realised they actually matched what we observed. In a Friedmann universe, it's not the objects which are all flying away from each other, but the background of empty space which is stretching, creating the illusion of objects moving. Was it possible that Friedmann's theoretical universe was accidentally the real one?
Pretty soon Friedmann's equations were turned into a testable prediction: if the expansion is caused by "space-stretching" rather than "object-movement", it should be detectable in the form of a microwave signal in deep space. The reasons why the equations predict this are laborious and mathematical so I'll skip over them...the outcome is simple: if it's space which is expanding, we should discover a microwave-hum to the entire Universe, caused by beams of light from the early expansion getting stretched out. Sure enough, in 1964, such a signal was discovered by Arno Penzias and Robert Wilson and it's unmistakable.
As crazy as it sounds, the galaxies of our Universe are not actually flying away from each other like an explosion. They are actually standing still and it's the empty space between them getting bigger. And here's a photograph of Friedmann, again, just for the look on his face.
The Balloon Analogy
The most common way of illustrating the expansion of the Universe is with an analogy I have mixed feelings about. The idea is that you draw a bunch of dots on an uninflated balloon and then gradually blow into it. As you do so the elastic stretches and the dots (representing galaxies) give the illusion of moving away from each other. I've used it myself in class but there are a lot of potential misconceptions which can arise. Here's what it looks like...
The problem with this analogy is twofold. Firstly, the balloon clearly has a centre...the point in the middle of the balloon's interior where the air is collecting. It also shows the balloon expanding into the room you're doing the demonstration in. What we have to be clear about is that the interior of the balloon and the exterior of the balloon are NOT part of the analogy.
Essentially, you have to ignore the fact that you know the balloon is being inflated because we're pumping air in and ignore the fact that there is a room surrounding you. You have to focus on the surface of the balloon only. This two-dimensional surface is what the analogy is really about. If you imagine you're some kind of microscopic bug living on the surface of the balloon, as you look around you'll see galaxies moving away and space expanding. We can't easily demonstrate the 3D process but we can simplify it by compressing the third dimension into just two.
The second problem is that the rubber is still made of particles which are being spread as the balloon expands. The reality is that empty space is not made of particles which are rearranging and spreading. It's the fabric of empty space which is expanding and it doesn't have any finer structure we're aware of.
But, if you can bypass those two problems, the balloon analogy is pretty good. It shows that it's the space between the dots which is expanding, it shows the overall volume of the territory getting bigger, and it shows that the dots themselves aren't expanding i.e. the galaxies aren't slowly getting bigger, just the regions between them. Technically, because empty space is stretching everywhere, you might expect the distance between two stars in the same galaxy to creep up over time, but gravity holds bound structures like galaxies together, so in practice the stretching only wins out over the vast, mostly empty gaps between galaxies.
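For the numerically-minded, here's the balloon idea as a sketch: the galaxies keep fixed coordinates and a single "scale factor" stretches every gap between them. The coordinates and the factor of 2 are invented for illustration:

```python
# The balloon/stretching idea in numbers: galaxies keep fixed
# "comoving" coordinates (they don't move), while a scale factor
# multiplies every separation by the same ratio.

comoving = [0.0, 1.0, 2.0, 5.0]  # fixed galaxy coordinates, toy units

def separations(scale):
    """Physical distance between neighbours = scale * comoving gap."""
    return [scale * (b - a) for a, b in zip(comoving, comoving[1:])]

before = separations(1.0)  # gaps: [1.0, 1.0, 3.0]
after = separations(2.0)   # space has stretched to twice the size

# Every gap doubled, even though no galaxy changed coordinates --
# and the biggest gaps grew the most, which is Hubble's law in miniature.
assert all(b == 2 * a for a, b in zip(before, after))
print(before, after)
```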
It's also very useful in showing that the Universe has no centre. If you imagine asking the little bug to find the central point of the balloon's surface, it wouldn't be able to. For the same reason a circle has no start and stop point, the surface of the balloon has no centre. You could pick any point on the surface equally. The "centre" of the balloon exists in a higher dimension than the bug can perceive.
I've also heard a pretty good analogy which is to imagine the Universe as a blob of dough with chocolate chips in it. As the dough bakes it expands and the chocolate chips end up further away from each other, even though none of them is moving through the dough. Although even the word "expands" could be misleading. Stretching is really what Friedmann had in mind.
OK...but seriously, what's it stretching into?
Now that we've covered what the expanding universe theory actually says, we can address the question properly. Even though the balloon analogy isn't perfect, it shows that the volume enclosing all the galaxies is increasing. In the dough analogy you could eventually get to the edges of the dough and ask yourself what was beyond, and in the balloon analogy you could measure the thickness of the balloon's elastic and notice that this is gradually getting thinner. So the question is still there: what is the background that we measure our Universe against?
The question can be phrased in an even simpler form: what is outside the Universe? And this is where things get interesting. There are, at present, three likely contenders for dealing with the question. And here they are:
1. The Universe is Finite
This one is the simplest to visualise. The idea is that our Universe really does have a limit 14 billion light years away from us, separating it from whatever lies outside. This "outside" could have all sorts of properties, but it could also just be a complete vacuum. Perhaps the emptiness outside our Universe is like some kind of soup and our Universe has an edge made of big-bang material, or perhaps it's just sheer empty space which our "space" is moving into.
This boundary to our Universe constitutes an event horizon i.e. a surface which separates two regions and makes it impossible for them to communicate with each other. This doesn't mean it's a physical surface (although it might be); it could just be that once you get to the edge of empty space, you find...even emptier space. This edge of the Universe is sometimes called the Cosmic Event Horizon and it really marks the point of perfect ignorance: by definition we cannot know what is outside of it.
This does of course mean it's entirely possible there are lots of Universes out there which are all occupying this mysterious void and they are gradually expanding into it together. This isn't to be confused with the many-worlds interpretation of quantum mechanics, but it has a lot of the same outcomes: there are a huge number of Universes, possibly infinite, possibly not, and they are all occupying some kind of mega-space. Each Universe could have totally different laws of physics and different historical timelines, so anything could be possible, provided you pick the right pocket Universe.
2. The Universe is Infinite
The previous idea is a strange one but it's not intractably strange. We can just about imagine it. But this next one is a whole other kettle of carrots. It's possible the Universe simply is everything so the question of there being an outside is meaningless. It's like asking what is North of the North pole? Or what's more right-angled than 90 degrees? By definition there is nothing beyond, the universe has no edge, it is just everywhere. This isn't easy to swallow because as humans we aren't very good at picturing infinity. But here's a stab.
Consider the following numbers: 1 2 3 4 5 6 7 8 9. We can imagine the number line going on forever i.e. it is infinite. But now consider this number line: 1 1.5 2 2.5 3 3.5 4 4.5 5 5.5 6 6.5 7 7.5 8 8.5 9. That line looks like it should be a bigger infinity, since I've included extra points halfway between each integer. Here's the counter-intuitive bit though: it isn't. You can pair the two lists off one-to-one (match 1 with 1, 2 with 1.5, 3 with 2 and so on) with nothing left over, so the two infinities are exactly the same size.
Genuinely bigger infinities do exist though. The mathematician Georg Cantor proved that the full set of decimal numbers between 0 and 1 can NOT be paired off with the whole numbers no matter how cleverly you try, making it a strictly larger infinity, and in fact there are an infinite number of different-sized infinities. In(finite)ception! But the weird thing about infinities is that if an infinite thing expands (or stretches) then it doesn't have to be stretching into anything...it just is.
So it's possible the answer is that there is nothing outside the Universe, it is literally everything, so it's expanding...and that's all there is to it. That one isn't exactly easy to digest, and personally I'm doubtful of it (I won't bore you as to why). But it is a possibility. And that's just the nature of infinity. It goes on forever.
3. The Universe is Looped
Have you ever played the really old mobile-phone game "snake"? If you've not, the idea is simple. You control a pixellated snake which has to move around the phone screen eating other black pixels. I dunno, scones or something, whatever snakes eat.
What made the game really interesting was that if you went off the right-hand edge of the screen you just reappeared on the left-hand edge. If you went upwards you just appeared at the bottom and so on. The snake-game Universe was infinite as far as the digital snake could tell. If it went in one direction forever it just kept coming back to where it started. The snake was a 2D creature who thought its Universe had no edge, but we as higher-dimensional beings (3D creatures) could see the entire size of the snake's Universe.
The real answer to how this would be possible is that the snake's Universe was actually curved through the third dimension (our Universe). It looped back on itself, so the 2D Universe the snake perceived was really the surface of a doughnut shape (what mathematicians call a torus). A 2D creature living on a looped surface like that could travel forever in any direction without ever hitting an edge, because the surface curls around on itself in a higher dimension. Remember it's not the objects on the screen flying away from each other, it's the screen background itself stretching. Now all we have to do is go back and add one extra dimension.
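The snake's wrap-around universe is surprisingly easy to simulate: you just take positions modulo the screen size. A quick sketch, with a made-up screen:

```python
# The snake-screen wrap-around in one line of arithmetic: positions
# are taken modulo the screen size, so walking off one edge brings
# you back on the opposite edge. Screen dimensions are invented.

WIDTH, HEIGHT = 16, 9  # a tiny hypothetical phone screen

def step(x, y, dx, dy):
    """Move one cell, wrapping at the edges like the Snake game."""
    return (x + dx) % WIDTH, (y + dy) % HEIGHT

# Walk right WIDTH times: the snake circles its whole universe
# and ends up exactly where it began, never meeting an edge.
pos = (3, 4)
for _ in range(WIDTH):
    pos = step(*pos, 1, 0)

print(pos)  # (3, 4) -- back to the start
```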
We 3D creatures may find that if we travel in a straight line we end up back where we started. It would seem strange to us, but to a 4D being looking "down" they would see our Universe was curved back on itself.
So in a way the Universe is simultaneously finite and infinite depending on your perspective. It might be infinite in the 3rd dimension but finite in the 4th. It could have an edge as far as a higher-D creature can tell but to us we'd never see it because we're trapped in our 3D world. So what our Universe is expanding into could actually be a higher dimension. That's why I enjoyed playing Snake anyway.
Will we ever know?
The answer to the expansion question hinges on a lot of unknowns. There are sub-theories of the ones I've mentioned above and there are subtle details I've missed out, but it looks very likely that one of these three explanations is correct. Conclusively answering it is going to prove difficult, however, because there's a limit to how far out into space we can actually see.
The further out something is, the older its light is by the time it reaches us, which means that when we look at objects far away we're also looking back in time. The furthest objects we can see today are galaxies which formed a few hundred million years after the big bang expansion started. We can literally take photographs of the early Universe and figure out how it evolved. But that presents several difficulties.
As far as we can tell, our Universe took its current form 14 billion years ago. This means the farthest out we could ever hope to see would be 14 billion light years. Beyond is also "before" and asking what happened "before the start of time" gets sticky and possibly meaningless.
There's also the fact that the very early Universe was opaque and glowy, meaning we won't really be able to see past the early wall of light to what came before it. I'm afraid going out to the edge of the Universe and looking to see what's there is probably not feasible...not with today's understanding of Physics at least. So it's going to have to be elsewhere that we need to look.
If there are higher dimensions then maybe we can detect them. If there are other pocket Universes then maybe they influence ours in some measurable way. At the moment, we just don't know so this question remains speculative. The Universe is expanding, that much is clear. The fabric of space is what causes it, but beyond that we are still piecing the puzzle together.
And there you have it. The Universe is either expanding into a multiverse, it is infinite so isn't expanding into anything, or it's expanding into itself via some hyperspace curvature. I'm afraid these questions always lead to weird territory, but that's because we're dealing with the fundamentals of reality; it would be a surprise if it didn't bake our brains. Not to mention a disappointment. Personally I'd rather live in a Universe which takes effort to explain.
Expanding Universe: internapcdn
Captain Li Shang: animatedheroes
Sound Waves: meet
Vesto Slipher: newspaperslibrary
James Edward Keeler: Britannica
Miley Cyrus: celebuzz
Aleksander Friedmann: Wikipedia
Balloon Stretch: Astronomer
Stretch pose: pinimg
Multiverse: space cheetah
Infinity Movie: Wikimedia
Snake game: Mothership
I love science, let me tell you why.