The other day I went to see Downsizing from writer/director Alexander Payne. It's set in a world where humanity is on the brink of mass extinction (like real life). Carbon emissions are edging us toward climate catastrophe, there isn’t enough energy for a growing population and we don’t have the resources to sustain our economy (like real life). The root cause of all our problems is determined to be overpopulation (I’d have put my money on Barbra Streisand, but oh well). There are simply too many people for a planet this size and a solution must be found. Either we cut the population down or we minimise its impact.
As the movie starts, a group of Norwegian scientists make a game-changing discovery which could solve our problems and turn the tide on impending armageddon: shrink humans down to a fraction of their original size. Smaller people don’t eat as much, they don’t need as much electricity, they take up less space, require less raw materials and so on. If everyone shrinks, so does their impact on the environment.
Furthermore, anyone who underwent this procedure would immediately become wealthier. The price of fuel, medicine and food would remain the same, but you’d only need a small amount so your money would count for more. You could run a miniature car for a thimble of petrol and you could live in a mansion because it’s no more than a dollhouse. It seems like miniaturisation would be the solution not only to environmental problems but to those of social inequality as well.
Once these preliminaries are established, the film tells the story of Paul Safranek (Matt Damon), a failed medic who decides to abandon his regular-sized world and regular-sized friends in order to miniaturise and relocate to a tiny city. From there, the film shows the ups and downs of what life would be like for the very small...at least, it tries to.
The film's message is noble but, if I’m honest, the actual story becomes very boring very quickly. There are a few funny and poignant moments but it’s a meandering affair, structured like a collection of short movies rather than a feature film. There's not much of a narrative and every time you think something's going to happen, it doesn't. I want to make some witty joke about how they needed to downsize the script but it's not a passionate enough movie to be worthy of such a pun. The whole thing is a wasted opportunity that feels like sitting through a laborious train journey while your uncle Derek talks you through his wristwatch collection. You smile out of politeness but you want the whole thing to be over as soon as possible.
Bring On The Science
The idea of human miniaturisation is fascinating and lots of writers have toyed with it. Probably the first example was the isle of Lilliput in Jonathan Swift’s novel Gulliver’s Travels, although it’s made clear that the tiny Lilliputians are a different species altogether, rather than shrunken humans.
The same is true in The Borrowers novels by Mary Norton, in which a family of miniature people live inside a London family house, stealing things and not contributing to the rent. In The Borrowers Afloat they go down a river on a toy boat, in The Borrowers Aloft, they get in a miniature hot air balloon and in The Borrowers Discover Vacuum Cleaners things don’t go so well.
My personal favourite book in the "tiny human" subgenre is The Shrinking Man by Richard Matheson in which a radioactive fog alters the molecular structure of the protagonist Scott Carey. He descends into the realm of the microscopic, losing all ties with his wife before fighting a spider in his basement and eventually reconciling with a new personal philosophy. My question is: could it really happen?
There are obvious problems to consider from a Biological perspective. Smaller animals lose heat faster, need to be hydrated more regularly and their eyes aren't as good, but let's say we decided not to worry about such things and just go for it. Would it be possible?
Well, if we take a look at how living things on Earth are made, we find that everything is built from the same basic stuff. At the smallest level we get the fundamental particles; things like electrons and quarks. These are arranged into stable configurations called atoms and molecules, which meet each other in chemical reactions. The reactions take place inside cells (also made of atoms and molecules) and cells are stacked up to make a living thing.
Tiny creatures obviously exist in nature, and since they are made from the same ingredients list, it certainly makes the whole endeavour tantalising. So let’s consider what our options might be.
1) Shrink the Cells
This is the approach used in Downsizing. A cell is a membrane-bag of the chemicals needed to perform certain functions. If we just made the bag smaller, the resulting person would be smaller as well, right? Unfortunately it turns out this wouldn't work and the reason is simple: cells are always the same size.
Cells are chemical reaction factories and for reactions to take place in the correct way, you need the right concentrations. If the cell is smaller, you’re essentially cramming all your finely-balanced reactants together and reactions start happening which shouldn't. Not to mention the fact that the membranes are no longer absorbing and releasing the correct amounts of carbon dioxide and oxygen for their relative size.
If we corrected for this by lowering the concentrations of chemicals inside, there just wouldn’t be enough of each chemical to actually perform the necessary jobs. Cells are jam-packed already and they have to be. Lower concentration means removing the necessary ingredients for the cell to live.
Animals come in all different sizes, but smaller animals don’t have smaller cells, they just have fewer of them. Nature has found the optimum size for cells and uses it for everything. This option isn't going to work.
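As a quick sanity check on the "same-sized cells, just fewer of them" idea, here's a rough back-of-envelope sketch. The cell counts are estimates quoted elsewhere in this post; the body masses are my own round-number assumptions, not figures from any source:

```python
# If cells are a fixed size, cell count should scale with body mass.
# Cell counts: estimates quoted in the text. Body masses (70 kg human,
# 20 g mouse) are rough assumptions of mine.
human_cells, human_mass_kg = 37e12, 70.0
mouse_cells, mouse_mass_kg = 12e9, 0.020

human_density = human_cells / human_mass_kg  # ~5.3e11 cells per kg
mouse_density = mouse_cells / mouse_mass_kg  # ~6.0e11 cells per kg

print(f"human: {human_density:.1e} cells/kg")
print(f"mouse: {mouse_density:.1e} cells/kg")
```

The two densities agree to within about 15% despite a roughly 3,500-fold difference in body mass, which is exactly what you'd expect if cell size stays fixed and only the count changes.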
2) Use Fewer Cells
If we can’t make the cells smaller, can we just remove 90% of them instead? A fully grown human has an estimated 37 trillion cells in their body while a mouse has closer to 12 billion. The mouse seems to function just fine, so what if we kept all our body parts in the right proportion - just used less material to make them? This could actually work from a chemical and biological perspective. There's nothing stopping us from carving tiny bones or building miniature hearts. The only problem, however, is that if we’re removing cells from every part of the body, that would include the brain.
The average human brain has a volume of just over a litre and it needs to be that big in order to house 86 billion neurons, each long enough to connect to 10,000 others around it. Shrinking the brain means either making each neuron shorter (fewer neural links possible) or using fewer neurons full stop. Shrinking down to mouse size by deleting a lot of the cells would be feasible, but we would lose our minds in the process. Literally.
A mouse’s brain can fit around 75 million neurons. That's still a remarkably complex structure, more advanced than our best supercomputers, but it's roughly a thousand times less circuitry than we are used to using.
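That "thousand times" figure is easy to verify, using only the numbers already quoted above (a quick sketch, nothing more):

```python
# Neuron counts quoted in the text above.
human_neurons = 86e9  # ~86 billion
mouse_neurons = 75e6  # ~75 million

ratio = human_neurons / mouse_neurons
print(f"A human brain has ~{ratio:,.0f} times the neurons of a mouse brain")

# Connections widen the gap further: at ~10,000 links per neuron
# (the figure quoted earlier), a human brain holds on the order of
# a thousand trillion synapses.
human_synapses = human_neurons * 10_000
print(f"~{human_synapses:.1e} synapses")
```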
To be clear, the size of your brain doesn’t automatically correlate with intelligence but there is a clear link. Bigger animals need bigger brains because they’ve got more body to control (that’s why whales and elephants have the biggest brains on Earth), so really it's brain-to-body ratio we need to consider, but it’s also true that having more parts in a machine means it can do more things.
The smartest animals on the planet are humans, chimpanzees, dolphins, whales, elephants, pigs etc. and they all have big brains. Some small-brained creatures are smart for their size e.g. magpies and rats, but they don't have enough room in their skulls for higher-order thinking. A five-inch human would be one of the smartest animals on the planet for sure, but it would be utterly stupid compared to regular-sized humans.
3) Shrink the Atoms
We need to preserve the number of cells but we also need to keep the number of molecules inside those cells the same. So what if we just shrunk the atoms? Smaller atoms would mean smaller molecules which would mean smaller cells and so forth.
You've probably come across pictures of atoms showing electrons orbiting a nucleus with empty space in between. For the purposes of Chemistry this is a reasonable approximation to make (I make it myself in my upcoming book) so you might think we can shrink atoms by pushing the electrons toward the nuclei, but in reality it’s not so simple. There are a lot of complicated reasons why it doesn't work, but I'll stick to one which is easy to conceptualise.
The space between the nucleus and the electrons is not really empty at all. Actually it’s a heaving soup of energetic particles frothing into and out of existence like a bubbling cauldron. The energy of this “particle soup” does all sorts of weird things to the electrons around an atom, including telling them where they can and can’t go. You can think of it like an outward pressure, maintaining the atom's size. Electrons can be squeezed toward the nucleus (where the density of the particle soup increases) but they're reluctant to do so.
That’s not even taking into account the fact that electrons repel each other unless they are at extreme temperatures. An atom, just like a cell, is already at the optimum size. But while squishing a cell would be possible, squishing an atom is going against nature's preferences. Nature will fight back. Technically, with enough pressure pointed inwards you could just about do it, but it would probably turn the matter into a black hole. Atoms don't shrink.
4) Shrink the Particles
We can't squash the atoms because particles don't like being close to each other, but could we maybe shrink the particles themselves? If the particles in the centre of the atom weren’t so big, the surrounding particle soup wouldn’t take up as much space (roughly speaking) and the electrons (which would also have to shrink in order to repel each other less) could get closer to one another. Would this help us make everything smaller?
No. Not at all. This is even less feasible than squishing the atoms. To change the size of fundamental particles is to change the fabric of the Universe itself. You can’t change a fundamental particle because it’s fundamentally the way it is. Hence the name! Fundamental particles have specifically defined behaviours and energies which don't seem to be programmable. Once you get down to the quantum level, there's nothing you can do to keep control.
There is always a possibility, of course, that we’re wrong about these particles being truly the smallest things, but we’re pretty confident. We’ve got good reason to believe things like quarks and electrons are genuinely the bottom rung of the ladder. Squashing them would be like trying to make gravity run backwards. It's just not the way things go.
Not only that, but when you get right down to the quantum level, it’s not exactly obvious what size even means. Particles aren’t little nuggets floating around in a vacuum, they are fluctuating packets of energy and they don’t have clear dimensions. We sometimes talk casually about the amount of space a particle occupies but that isn’t really its size. It's an old-fashioned view for something which defies human intuition. Particles are the way they are and to change them is to change reality. Norwegian scientists are great, but they’re not gods.
It would seem, sadly, that there isn’t an obvious way to get around the problem of human size. We're stuck like this and we're stuck with all the problems it causes. So if we really want to change how our species affects the planet, we can’t just change what we are. We need to change the way we act. We need to stop seeing the planet as our personal playground and more as our responsibility. Ultimately it's not our shape we need to shrink, it's our ego.
I find flat-Earthers fascinating. I don’t agree with their picture of reality, but if I surrounded myself with people who agreed with me all the time, things would get boring. It’s often a good idea to “fraternise with the enemy” because you get exposed to fascinating and even surprising perspectives.
People outside the flat-Earth community tend to assume flat-Earthers are backward, torch-bearing yokels too busy marrying their cousins and doping up to understand how the world works. But I think flat-Earthers have a healthier outlook than people give them credit for. After all, flat-Earthers:
1) Are skeptical of accepted theories
2) Refuse to accept facts on authority
3) Want to do research
4) Follow evidence wherever it leads, even if that means public ridicule
Many flat-Earthers I’ve spoken to emphasise reproducible experimentation and the importance of a posteriori reasoning. They admit ignorance about questions the flat-Earth model presents e.g. what’s underneath it, what stops the moon falling, how do eclipses happen, what generates the magnetic field, how does retrograde motion occur etc. and this is a very honest, even refreshing, approach.
Let me be clear: I’m confident the world is an oblate spheroid orbiting the Sun elliptically at a mean distance of 150 million kilometers, but I don’t think flat-Earthers are idiots for disagreeing with that. In fact, I think flat-Earthers have a sensible-ish approach to analysing the world, just the wrong conclusions.
Bully for the Globe
The sad truth is that most people are taught "the Earth is round" as a brute fact. They are rarely given evidence for how we arrived at such a strange conclusion and when they get older they begin to ask questions...which is what we want them to do! We want a generation of people who aren't afraid to challenge convention. Being skeptical is half of what Science is about. It’s just that the other half is about how we arrive at good answers and that’s where flat-Earthers have been misled. But I don't think it's their fault.
Honestly, some Flat-Earth arguments sound pretty good at first. I came across one guy pointing out that gyroscopes spin vertically no matter what surface they rest on; if the Earth is round then why don’t we see gyroscopes in airplanes tilting as they curve over the surface of the planet? Or what about the fact we can see Mercury or Venus at night, despite them being inner planets and at night we are facing away from the Sun. There are simple and obvious explanations to these questions of course, but they do initially make one go "hmmmm, that's curious."
Point is, the flat-Earth movement has questions and none of them should be answered with “you’re a dumbass”. The motto of the Royal Society is Nullius in Verba, which translates to “take nobody’s word for it”. That should include our word too.
Science is built on the idea that every claim is open to question and we’re setting a bad example by mocking flat-Earthers. It’s essentially saying “be skeptical and question everything...but not that bit, don’t ask about that!” Some flat-Earthers can’t be reasoned with, sure, but those who are prepared to listen to globists have the right to answers rather than abuse...which is what they usually get.
One of the flat-Earthers I know told me that not only does he get regular harassment from people online, he receives death threats aimed at his daughter. Now come on guys, think that through. If you want people to listen to what you have to say, don’t threaten to murder their children.
Another flat-Earther told me a story from when he was in school. He asked his Science teacher a reasonable question: if the Earth is round why don’t Australians fall off the bottom? The response he got was laughter. Twenty years later, he’s Flat-Earth and proud.
If you want to open a dialogue with people who don’t share your view, you need to approach them with dignity. By all means explain the flaws with their arguments (as I’m about to do) and if they’re open-minded they will take you seriously, but don’t be mean about it. Nobody gets bullied into the truth.
Who is this blog for?
What I’m not about to do is debunk the flat-Earth claims currently looping around the internet. That’s been done by far better writers than me. What I am going to do is discuss some common themes and mistakes which crop up again and again when discussing the whole issue.
I’ll be frank: if you are a proud flat-Earther, my blog is unlikely to change your mind. You probably spent months coming to your conclusion and I’m not arrogant enough to think I’ll change that in five minutes. My best hope is that I might give you one or two moments of “fair enough, that’s an interesting point”.
Really, this blog is for people who aren’t flat-Earthers, but are interested in it. People who’ve just come across the growing movement and are beginning to wonder if there might be something worthy of consideration. What I’m going to show, before your mind is made up, is why you need to be cautious of flat-Earth arguments and what the common traps to watch out for are.
Mistake One - Thinking Scientists Trust Each Other
One flat-Earther I debated for a long time (and enjoyed a reasonable friendship with) had a very bizarre view of Science. He would talk about how Scientists revere people like Einstein and Newton as if they were kings of knowledge, whose theories were trusted as law. I tried to explain to him that we never trusted them, just the experiments which proved their ideas correct, but he was having none of it.
A lot of people get this wrong in fact. They seem to think Scientists automatically trust whoever the most prestigious Scientist is. But there are no heads of Science, no organisations in charge of deciding facts and no Scientific authorities; only experts. There is never an official decision to promote a hypothesis to fact, it just happens gradually by consensus. And we certainly don’t listen to each other and agree blindly. Scientists actually spend half their time trying to disprove other Scientists... especially if they're famous.
Newton's gravitational law turned out to work extremely well, but he also claimed you could turn lead into gold using magic. We accepted his first claim but not his second because the first one agreed with experiment and the second didn't. Nobody trusted Newton "because he was Newton".
I mean, just pointing out the obvious here, at the time Newton suggested gravitation, nobody even knew who he was. He was an obscure, antisocial weirdo who kept to himself and barely left his bedroom. The most famous Scientist at the time was actually Robert Hooke, but his ideas couldn't cut the cosmic mustard so they were abandoned.
You need to be very wary of a flat-Earth argument which talks about Scientists believing claims of other Scientists. Scientists trust evidence and they criticise something which doesn’t sound right. Just like flat-Earthers today. In fact, flat-Earthers are part of the system which keeps Science honest. They ask questions to see if Scientists can answer them. Well...they certainly used to...
Mistake Two - Forgetting Scientists were flat-Earthers once
As I keep saying, it’s good when people ask questions of Scientists, but I’ve yet to come across a flat-Earth argument which is new. Scientists have heard all these arguments before because we invented most of them.
There were plenty of skeptical thinkers around when Eratosthenes suggested the globe and there were plenty more during the renaissance when the heliocentric model was revisited. The scientific community in both time periods consisted of flat-Earthers or geocentrists. They attempted to debunk the globe hypothesis, failed miserably, and so they switched sides. Flat-Earthers today aren’t some new breed of thinker destined to take down the global tyranny. They are re-hashing questions we have already dealt with.
I think this might be why scientists sometimes get frustrated. It’s not because we think flat-Earthers are dumb (well, I don’t), it’s because we feel like a teacher who answers a question and then another student asks it thirty seconds later. Modern flat-Earth arguments sound like the kid who wasn’t listening the first time. Really, Johnny? I just explained this!
If you want to question a claim Scientists are making then question something like dark matter or dark energy or psychiatric medication vs placebos. Question the really weird stuff like quantum entanglement or information-loss in black holes. Those are debates happening right now and they’re awesome. Sure, flat-Earth arguments were exciting at the time, but the dust has settled on them now and the answer was pretty clear.
Mistake Three - Asking Questions = Puncturing the Theory
I accept that the burden of proof lies with globe-Earthers - we are the ones trying to prove the remarkable claim - but flat-Earthers have to set their bar reasonably. The mistake they often make is assuming that noticing an apparent puzzle or contradiction with a theory means they’ve undermined the entire thing. But it doesn't mean that. Not at all.
For example, if someone says water boils at 100 degrees Celsius, you could point out that puddles evaporate on cold days when the ground isn’t that hot. It’s a good query, but it hasn’t debunked the existence of boiling water (fun fact: if Earth was truly flat, water wouldn't boil at 100 degrees C in the first place). Or when you learn the theory of sexual reproduction and you notice porcupines are spiky, you could ask how they have sex at all. Again, it's an excellent question but it doesn't destroy the theory of sex or the existence of porcupines.
It's the same with a huge number of flat-Earth “proofs”. Most of them aren’t proofs at all, they’re just intriguing “how comes?” or “what abouts?”. Interesting and worth discussing for sure, but questioning a theory is not the same as putting a theory in checkmate.
Mistake Four - Flat-Earth is Easy to Understand
There is a well-known principle in scientific thinking called Occam’s razor which says that if you have two explanations which account for all the data, the simpler one is more likely to be correct.
For example if you see footprints in your house, there are a few explanations available: it’s possible someone was walking here earlier, but it’s also possible it was a dog on its hind legs wearing shoes. Both explanations account for the data but it’s more likely to be the human than the dog.
The problem is that Occam’s razor sometimes gets perverted into: the simplest explanation is correct. And that’s obviously not true. Dogs can be trained to walk on their hind legs wearing shoes, so you have to be prepared to accept that explanation even though it wouldn’t be your first guess.
The simple human explanation is clearly more likely, but suppose you discovered paw marks on the door-handles and somebody had opened the cupboard and poured out Kibbles’n’bits. At this point the two explanations are no longer equal. The human story doesn’t explain everything anymore, so you have to consider the counter-intuitive and more complicated hypothesis.
It’s permissible to say “I don’t know the truth” of course...that’s always allowed in Science. If you find the dog-shoes idea a bit of a stretch then you don’t have to accept it. But you’re not allowed to return to the inadequate hypothesis. It no longer accounts for all the data, so there is no reason to use it any more. But this approach is common in flat-Earth arguments. They are so damn easy to understand that they hardly explain anything!
Consider gravity. Flat-Earthers don’t believe in gravity because gravity would have pulled the Earth into a ball by now. So they put forward a much simpler explanation for why apples fall from trees: dense objects fall through air and sparse objects like Helium balloons rise due to air’s buoyancy. Sounds good, but there’s a big problem.
The “things fall because they’re dense” explanation is easy to understand but misses pretty much everything else. It doesn’t explain why masses hung on strings tilt toward mountains (because they do). It doesn’t explain why objects get faster as they fall rather than dropping at a steady rate. It doesn’t explain why airplanes (denser than air) are able to float by accelerating into it. It doesn’t explain why comets occur with regularity or what keeps the moon at the same distance from us.
In fact, the more you look into it the more you realise the density explanation hardly covers anything. And, as it happens, Newton already knew about the density/buoyancy principle (as did everyone else in 1687). What he was trying to explain wasn’t why things fall. He was trying to explain how everything moved the way it did. Some flat-Earthers don’t seem to be aware of this however and stick to overly simplified explanations of overly simplified problems.
A simple explanation for a simple phenomenon is fine. But a simple explanation for a complicated range of phenomena is a sign somebody hasn’t done enough research.
Mistake Five - Rejecting Equations
This flaw with many flat-Earth arguments is similar to the last one, but quite specific. It’s an unfortunate fact that many heliocentric proofs happen to look like an intimidating wall of equations. This makes things awkward because the explanations for globe-theory are often so complicated they can look fantastical. Sadly, I’ve seen a lot of flat-Earthers get around this by deciding that equations are just a bunch of made up symbols which scientists use to blind people with...well, science.
Flat-Earthers do have a valid point in that equations don’t mean anything by themselves. Particles and fields don’t know to behave a certain way because someone wrote some symbols on paper. But what equations can do is track things which are too bizarre for our heads. The equation doesn’t mean anything but it describes something which does. So when Scientists present you with mathematical proofs they aren’t hoping to blind you. It’s because equations are sometimes the most accurate way of describing the world.
Take the following experiment: if you get into a room with 23 other people and ask everyone what their birthday is, two people in the room will often have the same one. But that doesn’t sound right! It feels like you should need more people. There are 365 days in a year so shouldn’t you need 366 people to get a birthday match? Nope. Once you get to 24 there's around a 50% chance it'll work. Try it for yourself and prepare to be spooked.
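You don't even need to round up two dozen volunteers to check this. A short simulation against the exact product-form probability makes the point (a sketch assuming 365 equally likely birthdays, leap years ignored):

```python
import random

def exact_prob(n):
    """Exact probability that at least two of n people share a birthday."""
    p_no_match = 1.0
    for i in range(n):
        p_no_match *= (365 - i) / 365  # each new person dodges all previous birthdays
    return 1.0 - p_no_match

def simulated_prob(n, trials=20_000):
    """Monte Carlo estimate of the same probability."""
    hits = sum(
        len(set(random.randrange(365) for _ in range(n))) < n  # a duplicate shrinks the set
        for _ in range(trials)
    )
    return hits / trials

print(f"23 people: {exact_prob(23):.3f}")  # already just over 0.5
print(f"24 people: {exact_prob(24):.3f}")
```

The reason your intuition fails is that the number of *pairs* of people grows much faster than the number of people: 24 people make 276 possible pairs, and any one of those pairs can produce the match.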
What this shows is that the human mind isn’t very good at guessing how things really work, especially when it comes to patterns and numbers. If you want to explain how something like this happens, you have to accept that nature is beyond what your mind can comfortably grasp. If you’re really committed to understanding a complex universe you have to accept complex explanations, including mathematical ones.
If you don’t feel confident with equations that’s absolutely fine, but you can't reject them because you don't understand them. That's like rejecting a German person’s opinion because you don’t speak German.
We invented mathematical techniques not to make things confusing but because the world is confusing and unless we invoke maths we get the wrong answers. The complicated equipment and calculations Scientists use are far more reliable than your eyes and ears. Although most flat-Earthers seem to be misled about this point too.
Mistake Six - Trusting Your Senses
Leonardo da Vinci once said “Experience is a truer guide than the words of others”. It’s a great quotation but we have to be careful. By "experience" he is referring to testing things for yourself. He doesn’t mean “trust your senses”. Well, maybe he did mean that, I didn’t know Da Vinci that well. But if he did think human senses were trustworthy, he was wrong. Come at me Da Vinci.
A huge number of flat-Earth “proofs” rely on you making simple observations with your senses.
I’ve heard flat-Earthers talk about how you can’t feel the Earth spinning beneath us, how we don't see clouds or rocket trails blown backwards across the sky or how the North star and Ursa Major appear fixed in place throughout the year. All of these are casual observations which appear to give the impression of a flat, stationary world. But guess what? They’re all wrong.
Your senses are not very good at interpreting their surroundings. This can be an uncomfortable thing to accept if you believe your senses are engineered to be trustworthy, but it’s unfortunately true. Your senses will mislead you at every opportunity. That’s one of the reasons we invented Science in the first place - to check what nature is really trying to say.
If you think your senses are giving you an honest picture then check out the image below. It is not a spiral, it’s a series of concentric circles. But even knowing that fact, your eyes and brain will trick you and tell you it’s a spiral.
Or if you want a really pertinent example, here is an optical illusion where your eyes mistake curvature for flatness. The wavy lines below are consistently curvy, but the patches in grey look flat and angular. They aren’t.
Your senses, even when your logical brain knows you are being tricked, will still fool you. You can look at something which is absolutely curved but be tricked into seeing something straight. It works the other way round too. I’ve heard flat-Earthers claiming satellites are a myth because nobody has ever seen them with the naked eye from Earth. I’m afraid this one is just embarrassing. You can see satellites with the naked eye! If you’ve not seen them in your city suburb it’s because your eyes aren’t good enough at picking out faint lights. Go out in the countryside some night and look up.
Mistake Seven - Cynicism
This is the saddest mistake every flat-Earth argument falls foul of and it’s a real tragedy. Most of the other mistakes I’ve listed are intellectual curiosities, worthy of debate. But this one just makes me unhappy and it’s the hardest to undo.
If you want to believe a flat-Earth argument you have to reject not only all the evidence for the globe Earth...but all the people presenting it. This doesn’t just mean millions of professional Scientists and astronomers. It doesn’t just mean every member of NASA, the ESA and every other space agency. It also means all the commercial pilots and air-traffic controllers. It means all the military navigators, mobile-phone engineers, sailors and meteorologists. It means every amateur backyard-astronomer. It means every kid with a telescope. We are talking tens of millions of people whose job or hobby involves taking the shape of the Earth into account.
All these people agree the Earth is round and they have evidence to back it. Flat-Earth arguments must logically claim these people are lying. To believe the Earth is truly flat is to believe there is a conspiracy keeping the flat-Earth truth suppressed, with not a single honest whistleblower among them. Forgive me for saying, but that’s an intellectually dishonest approach to take, not to mention a mean-spirited one.
To question an accepted belief is skepticism and that’s great. But making the assumption (and it is an assumption) that every teacher, populariser or user of science is trying to trick everyone is not skepticism...it’s cynicism. It’s pre-deciding that Scientists are corrupt. It’s making a judgement without evidence and it’s not how we do things as adults.
I understand people criticising me and my fellow Scientists for being awkward or difficult. I accept that we sometimes preach facts and don’t respond well to questioning like we should. I accept that some Scientists can be arrogant. I even accept that some enjoy feeling superior to the lay-public. But the accusation that we are part of a giant conspiracy to mislead everybody? That’s not putting forward a decent argument, it’s just slander and it debases everyone on both sides.
There are evil scientists yes, but there are good ones too. Scientists who are trying to cure diseases, introduce clean water to poor countries, provide heating, lighting and shelter to millions, and to inspire kids who want to learn. Being a flat-Earther means you have to not only reject a lot of cool and amazing ideas but the cool and amazing people behind them. You have to accept a narrower, simpler, crueller view of who Scientists are and why we do what we do. You have to believe we are trying to deceive you. And that’s not a healthy outlook. I am a teacher because I want to open minds, not trick them. It's simply unkind to assume otherwise. What evidence have you got that I'm evil?
I agree that you should always seek evidence for a claim rather than taking it on faith. Having faith in facts is never a good idea. But having faith in people? That's not so bad.
Alessandro Manzoni: kym-cdn
Flat Earth Map: tfes.org
Admiral Akbar: telegraph
Newton magic: gnosticwarrior
Frustrated teacher: shutterstock
Dog shoes: Daily Mail
Spiral Illusion: croexpress
Curves and lines illusion: Sciencealert
Satellite timelapse: Quora
Trust People: humanengineers
It’s all fun and games until someone loses a planet
In 2006, the International Astronomical Union decided that Pluto was no longer a planet and was instead to be referred to as a “dwarf planet”. Outcry ensued and eleven years later it has not abated.
The physicist Sean Carroll writes in one of his recent books “Pluto is the ninth planet and it’s my book so I’ll call it what I like”, while Neil deGrasse Tyson writes in one of his own “Pluto isn’t a planet, get over it.” There’s even an episode of Rick and Morty where Jerry delivers a speech to the Plutonians, declaring that Earth’s scientists were mistaken in reclassifying it.
The man largely responsible for the momentous decision, Mike Brown, uses the Twitter handle @plutokiller and has the Death Star destroying Alderaan for his banner picture. So perhaps it’s all a matter of whimsy and tongue-in-cheek sport. Pluto is, after all, the furthest planet/dwarf from the Sun. Does it really matter what we call it?
I am going to argue that it does, not because astronomical terminology is crucial to our lives but because this debate reflects something important about how Science operates. So hold onto your preconceptions folks! Well, actually don’t. Let go of your preconceptions. But hang onto something.
I’m a Believer
I remember hearing the Pluto news on the radio and thinking it was pedantic nonsense. You can’t just change what Pluto is because someone decides to tweak a definition! I had images of pencil-pushing smart-alecs smarming away to themselves at how clever they were, with no concern for public opinion.
Don’t misunderstand me here, public opinion does not dictate truth and reality is not flexible. But the definitions of words are, and the accepted meaning of a word should reflect its common usage. If everyone agrees on a particular definition, an organisation would be foolish to redefine it.
I also remember thinking the whole thing was bad for Science PR because organisations like the IAU should serve the public not dictate to them. If we use the word “planet” to refer to something which Pluto clearly is, that’s enough reason to preserve its status. But here’s the thing: Pluto doesn’t match the public definition of a planet. That’s why the IAU changed it.
What I was getting wrong eleven years ago was that the IAU genuinely was taking public opinion into account. The reclassification of Pluto was done out of respect for the lay public, not in spite of them.
The First Planets
Every ancient culture monitored the skies, charting the mysterious lights which roam above our heads, and every single one of them made the same discovery. The majority of the twinkling dots follow a clear pattern, changing position on a predictable 365-day cycle...but five of them do not.
Five of the bright sky-things move on bizarre trajectories, weaving and wheeling without rhythm or logic. The Greeks called these five objects “wanderers” (planetes in Greek) because they appeared to wander as if conscious beings. They were assumed to be Gods and were identified as Hermes, Aphrodite, Ares, Zeus and Cronus, later re-named for their Roman counterparts Mercury, Venus, Mars, Jupiter and Saturn.
The first definition of “planet” was therefore extremely simple. A planet was one of the bright lights which moved in non-predictable ways.
But thanks to the work of people like Eratosthenes, Ptolemy, Newton, Buridan, Copernicus, Kepler, Brahe and Galileo, we figured out that the planets were following a pattern, albeit a complex one.
The Sun sat at the centre of a flat, circular plane with the planets orbiting it at different speeds, one of which was the Earth we stood on. Sometimes Earth would be behind another planet and sometimes it would overtake it, giving the impression of the other planet zig-zagging across the sky - what astronomers call retrograde motion.
To further complicate things, it turned out this view was only about 90% accurate. Firstly, planets move in ellipses rather than circles and secondly, they aren’t going around the Sun at all. Planets and the Sun are actually orbiting each other; it’s just that the Sun is so much more massive that its own movements are tiny. If you assume the Sun is stationary with planets moving around it (what you were probably taught in primary school) you will get the wrong answers when trying to account for planetary motion.
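To put a rough number on that mutual orbiting, here's a quick back-of-envelope sketch in Python, using rounded textbook values (my own figures, not precise astronomical data), working out where the Sun-Jupiter centre of mass actually sits:

```python
# Rough sketch: how far the Sun "wobbles" because of Jupiter.
# Both bodies orbit their common centre of mass (the barycentre).
# All values are rounded approximations.

M_SUN = 1.989e30            # kg
M_JUPITER = 1.898e27        # kg
SUN_JUPITER_DIST = 7.785e8  # km, average separation
SUN_RADIUS = 6.96e5         # km

# Distance from the Sun's centre to the Sun-Jupiter barycentre
barycentre = SUN_JUPITER_DIST * M_JUPITER / (M_SUN + M_JUPITER)

print(f"Barycentre sits {barycentre:.0f} km from the Sun's centre")
print("Outside the Sun itself?", barycentre > SUN_RADIUS)
```

The striking part is that this point lies just outside the Sun's surface, so the Sun really does trace a small loop of its own rather than sitting still.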
Nature does complicated things so we have to accept equally complicated explanations, even if they contravene what we learned when we were young.
Six and Beyond
By the 18th Century, the definition of a planet had evolved to “something which shares a common centre of mass with the Sun and has a fixed elliptical orbit”. In fairness, that definition is a mouthful so “things which orbit the Sun” will do in a pinch. And there were six planets rather than five, because Earth was one of them.
Then in 1781, the astronomer William Herschel discovered that one of the dimmest stars visible to the naked eye does the retrograde-motion thing. By carefully measuring its position with a telescope, Herschel realised this object wasn’t a star at all, it was orbiting our Sun. This made Herschel the first person in modern history to discover a planet, yielding Mercury, Venus, Earth, Mars, Jupiter, Saturn and George.
The name George didn’t catch on in France however, where King George was despised, so it was eventually renamed after the God of the sky: Uranus. One of the most majestic and powerful figures in classical mythology. Today, it has come to mean something else...well...strictly speaking it should be pronounced “yor-ann-us” but the other way is definitely more fun. As a physics teacher I’m pretty sure I’ve heard every permutation of this joke but I have to be honest, I still find Uranus hilarious.
Then in 1801, Giuseppe Piazzi discovered the eighth planet, Ceres, lurking between Jupiter and Mars. Ceres was the smallest planet discovered to date, far smaller than our Moon, but it orbited the Sun just like the others, so Jupiter was bumped down the list to become the sixth planet, Saturn the seventh and so on. Inconvenient, but as a scientist you change your view when the data forces you to.
A few months later Heinrich Olbers discovered another planet at the same distance from the Sun, which he named Pallas. Then in 1804 Karl Harding discovered Juno. In 1807 Olbers discovered Vesta and in 1845 Karl Hencke discovered Astraea.
The thirteenth planet was a little different though. This one was discovered by equation rather than telescope. In 1821, Alexis Bouvard was taking precise measurements of Uranus (hur hur hur) and found that it didn’t move in a standard ellipse. Instead, it seemed to be pulled to the side as if there were another object attracting it and in 1846 Johann Galle finally observed it with a telescope, giving us Neptune.
Then Karl Hencke discovered the planet Hebe in 1847 along the same Mars/Jupiter orbit as most of the others. The fifteenth, Iris, was discovered the same year by John Russell Hind, the sixteenth, Metis, in 1848 by Andrew Graham and the seventeenth, Hygiea, in 1849 by Annibale de Gasparis. Hold on a moment...
Back up, back up
Any textbook on astronomy in the 1850s would have listed our solar system as boasting seventeen planets. But as our telescopes got better we discovered more and more objects floating between Mars and Jupiter and by the 1860s there were over a hundred of them, which led to a problem.
When people heard the word “planet” they imagined great big round things with their own orbits, not scraggly space-debris circling the Sun like a moat around a castle. Either we could keep the definition of planet to mean “thing which goes round the Sun” or we could start using it the way the general public used it, even though that would disqualify the rocks between Mars and Jupiter. After much deliberation we went with the second option.
Although never formally defined, astronomers started using the word planet to refer to what the general public thought the word meant. This meant we needed a new word for the thousands of rocky clumps swimming between Mars and Jupiter and the term “asteroid” was coined.
Really, the problem arose because language evolves slower than Scientific knowledge. We get a word like planet in our vocabulary and it hangs around for hundreds of years, colouring our perceptions. If we discover that reality has nuances to it, we either keep using the old terminology or we invent a new word to describe the stuff we didn’t originally know was there.
The goofy story about Pluto
In 1906, the astronomer (and millionaire) Percival Lowell decided it was time we discovered a ninth planet. He had good reason to suspect there might be something there - minor disturbances in Neptune’s orbit - but mostly he was motivated by the passionate desire to look beyond the edge of what was known. He poured a lot of money and resources into searching for “Planet X” and hired some of the world’s best astronomers to work at his observatory.
Sadly, Lowell died in 1916 before Planet X was discovered, but the mission continued in his absence. Under the direction of Vesto Slipher (who also discovered the redshift effect) Clyde Tombaugh was set the task of searching the sky beyond Neptune and on February 18th 1930, he captured images of what Lowell had hoped for - a ninth planet, roughly the size of the Earth.
Planet X-fever gripped the world and international headlines proclaimed the discovery of the first proper planet since Neptune. A competition was held to decide what we were going to call it and over a thousand names were suggested. The name Pluto was proposed by eleven-year-old Venetia Burney, and ultimately won by popular vote.
By 1948 however, precise measurements were taken of Pluto’s size and it turned out we had been a little premature in declaring it the same mass as the Earth. It was actually about a tenth as heavy. Never mind though, it was still bigger than Mercury.
Except it wasn’t. By 1978 we learned that Pluto was actually about a six hundredth the mass of the Earth, smaller than Mercury and even our own Moon, making it the smallest planet in the solar system. But it still satisfied the main criteria for being a “planet”. It was orbiting the Sun, it was big enough to be round and it occupied a unique orbit. Except it didn’t.
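For a sense of scale, here's a quick sketch of my own using approximate modern mass figures (rounded, in Earth-masses) behind that comparison:

```python
# Approximate masses in Earth-masses (rounded modern values,
# my own figures for illustration - not from the original post).
masses = {
    "Mercury": 0.055,
    "Moon": 0.0123,
    "Pluto": 0.0022,
}

# Pluto comes out lighter than Mercury and even our own Moon
smallest = min(masses, key=masses.get)
print(smallest)  # -> Pluto
```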
The Second Belt
In 1992, the astronomers David Jewitt and Jane Luu discovered a second object floating out in Pluto’s region of space, which they nicknamed Smiley but which was given the official designation 1992-QB1. Then in 2003, the astronomer Mike Brown discovered another distant body, even further out, which he called Sedna. He went on to discover Haumea and Orcus in 2004, and then Makemake in 2005. But then, most disconcertingly, Brown discovered Eris, which turned out to be 25% heavier than Pluto.
We can argue that Pluto is a planet on the grounds of it being round, and we can dismiss all the small rocks nearby as asteroids. But when we discover objects heavier or bigger than Pluto on the same orbit, it’s time to rethink things.
Turns out there are over 2,000 objects orbiting past Neptune and Pluto is only one of them. Our solar system doesn’t have one asteroid belt, it has two! This second one has been called the Kuiper belt (pronounced Kie-pur) and its asteroids are very different from the ones we’re familiar with. A lot of them are huge chunks of ice and rock, often many times bigger than planetary moons. Pluto, it turned out, was Ceres all over again - the first object discovered in an asteroid belt and accidentally labeled as a planet.
So what do we do? If we keep calling Pluto a planet then we're misleading people. It’s not very big and it’s not a lone body, it’s just a fat asteroid which happened to get noticed first. But if we want to keep calling Pluto a planet, we need to redefine what that word actually means.
Eventually the IAU decided to repeat what was done in the 1860s. The definition of planet was fixed in people’s minds, so we left it and came up with a new word to fit the new thing: “dwarf planet”.
The definition of a planet is the same as it always has been. Something which a) goes round the Sun, b) is roughly spherical due to gravity and c) has cleared its orbit path so it’s the only dog in town. A dwarf planet is something which hasn’t done the third one...it’s big enough to be interesting, but it’s part of an asteroid belt. This means our solar system really has six dwarf planets: Ceres (reclassified from asteroid), Pluto, 2007-OR10, Eris, Haumea and Makemake. And there’s a good chance more will be discovered in the Kuiper belt with time.
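Those three criteria can be sketched as a toy decision procedure. This is my own simplification of the IAU's wording, purely for illustration:

```python
# Toy sketch of the IAU's 2006 categories (my own simplification).
def classify(orbits_sun, is_round, cleared_orbit):
    """Return an IAU-style label for a solar-system body."""
    if orbits_sun and is_round and cleared_orbit:
        return "planet"
    if orbits_sun and is_round:
        return "dwarf planet"  # round, but shares its orbit with a belt
    if orbits_sun:
        return "small solar-system body"  # asteroids, comets, etc.
    return "not in a solar orbit"  # moons, for example

print(classify(True, True, True))    # Earth -> planet
print(classify(True, True, False))   # Pluto -> dwarf planet
print(classify(True, False, False))  # a typical asteroid
```

The whole Pluto saga boils down to that third argument: once we found the Kuiper belt, Pluto's `cleared_orbit` flipped from True to False.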
I think the IAU made the right call. They were faced with either inventing a new word or changing the meaning of an old one. And the former option is usually the better idea. You can’t force people to change the words they’ve always used, but you can introduce new ones.
When I was a young warthog...
People get annoyed about the whole thing because Pluto, it would appear, has been unfairly demoted. But the thing is, it hasn’t at all. Pluto hasn’t been changed into a different thing - we just discovered what it was all along, like taking the mask off a Scooby-Doo villain.
Imagine you had nine spoons of sugar in front of you. You’re told by everyone that it’s definitely sugar in each one and you believe that for a long time. If you eventually discover the end one is really salt, what you’d say is “oh, I guess we made a mistake”. It would be bizarre to say “I’ve always been taught there are nine spoons of sugar and I still believe that’s true. I’m going to redefine what I mean by sugar as ‘any white powder’.”
You’re welcome to do that of course, but in doing so you’re bending the definition away from what everyone means. You’ve also redefined the word to include things like sherbet and powdered glass. Unless you’re extremely stubborn (in which case can I watch you eat your powdered glass cake?) you know what the sensible thing to do is, even if you don’t like it. The intellectually honest approach is to accept that you were taught a mistake. It wasn’t anyone’s fault and nobody lied to you, but you got told something incorrect.
So why do people object to learning the truth? Why do people get upset when a faulty fact is corrected? Shouldn’t that be a good thing?
In the process of writing this blog I consulted with my father, a passionate astronomer (the guy has a five-foot Russian-built telescope with a motor to compensate for Earth’s rotation in his garden shed) and he made a very important point: for a lot of people, this kind of thing can be more about emotion than intellect. If you grow up learning something, it can feel like the rug being pulled out from under you if it turns out to be wrong.
This is a fair point. When I tell the Pluto story to my younger students they are fine with it. I explain that there was a large asteroid which got mistaken for a planet and as soon as we realised the mistake we corrected it. There is no objection to this because “it was mistakenly identified as a planet” is part of the fact they learn.
It’s only when we are victims of the mistake that it can be a human instinct to fight back. Intellectually we might accept Pluto’s status, but emotionally we are irritated because we are creatures of habit and familiarity.
The same way people objected to Ceres and Pallas being reclassified in the 1860s, people in the 2000s objected to Pluto going the same way. And, just like Ceres and Pallas, people growing up after that decision are fine with Pluto being a dwarf planet. Finding out as an adult that one of your childhood facts was wrong can feel like a piece of your childhood has been knocked away. Nobody likes having their childhood messed with.
Why it Matters
Science offers us insight and knowledge, but it comes at a price - we have to be prepared to let go of familiar beliefs if they turn out to be wrong. This is one of the hardest parts of Science but it’s also one of the most important. It’s the reason we no longer believe the Earth is the centre of the Universe. It’s the reason we no longer believe the planets are Olympian Gods. It’s the reason we make progress in the first place.
And it doesn’t have to be a bad thing. Alright, we lost a planet. That sucks. But technically we gained six dwarf planets as well, so if you want a solar system full of planets, the 2006 ruling gave you exactly that. And, most importantly, we gained a deeper understanding of how complicated the solar system really is.
There are eight planets, hundreds of moons, thousands of asteroids in two different belts (as well as two clumps of asteroids called the Greeks and Trojans orbiting near Jupiter) and probably dozens of dwarf-planets. Not to mention comets from the Oort cloud.
We had to abandon our simple view of reality to get to this astonishing point, and it’s very probable some of what we currently “know” will turn out to be wrong ten years from now. When people are young, they learn a simple view of reality, just as our entire species did. Science is the thing which allows us to move beyond that and gain a more sophisticated and beautiful view of the Universe. It can be painful letting go, but it can be eye-opening and wonderful as well.
Right, now let's deal with this whole "conventional current" malarkey...
Mr Arnold from Jurassic Park: blogspot
Arrogant IAU Member: ehowcdn
King Leonidas: huffingtonpost
Fred Durst: impericon
Pluto and Goofy: urdogs
Double belt: blogspot
The Last Jedi: Wallpapersite
Orbit animations: exploremars
In 45 BCE, Julius Caesar decided to make January the “first” month of the year. The reason was that Janus, the god after whom the month is named, was the god of doorways and new starts, so it seemed an appropriate place to begin our cycle. The Earth isn’t in a particularly special place, but we designate the December/January switchover as a festival to take stock of the past and consider the future.
2017 has of course been full of negative “political” news stories - just like every other year - but I’m happy to report that - just like every other year - Science provided a candle of optimism in the perpetual darkness of parochial human affairs! The most important Science story was obviously that Lemmy, the late, great frontman of Motörhead, had a dinosaur-alligator named in his honour called Lemmysuchus. Some other things happened too. Here are my favourite picks of awesome Science stories from the last 365 days.
Not Today Tsunami
Tsunamis occur when an earthquake at sea sends water outward in all directions, devastating coastal towns and cities. Up until now, there has been no way to stop them, but Usama Kadri, doctor of mathematics at Cardiff University, may have hit on a solution. By creating enormous sound-blasts underwater, the acoustic shockwave can be pointed at the oncoming tsunami like a deflector shield. When the kinetic energy of the water heading toward land meets the kinetic energy of the soundwave moving away from it, the net energy of the water particles spreads out, raising the temperature and killing the tsunami. Kadri’s idea is the first of its kind and has already been tested in small, artificial settings with great success. All we need to do is scale it up and choose which sound to blast the tsunamis apart with.
New Continent Discovered
It sounds made up but it’s completely true. New Zealand, which everyone previously assumed to be an island on its own, appears to be the highest point of a unique continental plate, separate from all the countries around it. This continent, named Zealandia on February 9th, lies 94% below the surface of the ocean but really is there, making its discoverer, Maria Seton, the first person to discover a continent in over three centuries.
Mental Illness is Normal
A study conducted by J.D. Schaeffer in the newly continented New Zealand found that between the ages of 11 and 38 only 17% of people experience no mental health problems. Everyone else experiences at least one bout of depression or anxiety and 41% experience it for more than a year. It turns out that being mentally ill puts you in the majority. Perhaps this might not seem like an uplifting news story, but I think it’s encouraging. If you suffer from mental illness or know somebody who does, don’t feel ashamed or stigmatised. We can now say categorically that it’s a standard part of being human.
Goldilocks and the seven planets
On February 21st, the Spitzer telescope at NASA discovered seven planets orbiting the star TRAPPIST-1, three of them in the Circumstellar Habitable Zone aka the "Goldilocks zone”. That’s the area around a star where the temperatures aren't too hot or too cold, making things just right for liquid water to flow and complex organic reactions to take place. TRAPPIST-1 is about 378 trillion kilometers away sadly, but the evidence is undeniable. Our solar system only has one planet in the CHZ for sure (Mars is up for debate), but apparently there are places in the Universe far more amenable to life. If life was able to arise in this barren cosmic wasteland, chances are it could have done so elsewhere.
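If you want to sanity-check that "378 trillion kilometers" figure yourself, TRAPPIST-1 is roughly 40 light-years away and the conversion is simple arithmetic (rounded values, my own sketch):

```python
# Back-of-envelope conversion: light-years to kilometres.
# TRAPPIST-1 is roughly 40 light-years away (approximate value).
KM_PER_LIGHT_YEAR = 9.461e12  # km travelled by light in one year

distance_ly = 40
distance_km = distance_ly * KM_PER_LIGHT_YEAR

print(f"{distance_km / 1e12:.0f} trillion km")  # -> 378 trillion km
```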
Life Started in Canada
The oldest fossils of living things have long been assumed to be the samples found in Pilbara Australia, dating to aboot 3.5 billion years old, but on March 1st Matthew Dodd published results that put new microfossils discovered in Quebec, Canada at 4.28 billion years. That would be astonishing given that Earth is probably no more than 4.5 billion years old itself. The results are disputed of course, but exciting...eh?
One of the most abundant resources we have on the planet is saltwater. Unfortunately it’s unpalatable to humans making it approximately useless. But on the 3rd of April, Rahul Nair discovered a solution (pun intended) to the problem. Graphene, made from sheets of carbon atoms arranged like a chickenwire fence, has billions of tiny holes which water molecules fit through, but salt particles do not. Graphene works like a sieve, purifying the water and leaving salt on the other side. By using Nair’s method we could turn the oceans into fresh drinking-water for millions.
We Shall Not Be Moved
Perhaps the biggest story from April was a story about scientists themselves. After Donald Trump and many in his cabinet made comments denying climate change, asserting that vaccines caused autism or that the big bang was “a fairy tale”, the scientific community was worried that governments were no longer going to be making decisions based on Science (aka reality). In response, an estimated 1.07 million people in over a hundred cities around the world took to the streets on the 22nd of April to march in protest of science-denialism. The March for Science was the biggest pro-Science public demonstration in history.
On the 25th, doctor Emily Partridge and her team published a paper in which they reported keeping six prematurely born lambs alive inside artificial wombs. The lambs were removed from their mothers via caesarean section and delivered at the equivalent of 23 weeks' gestation. Partridge and her team were able to keep the lambs alive until they were fully grown, after which they were birthed successfully. If we can replicate this in humans, it could mean the end of deaths from premature birth.
The world’s first nano-grand prix took place on the 28th. Cars no more than a billionth of a meter across, invisible even to an optical microscope, were raced on a track for the first time, demonstrating the versatility and applicability of nanotechnology. The Swiss team won with their mini-hovercraft “Nano-Dragster” although unfortunately the victory was undermined by the shape of the car itself...
Forget Shark-Nado, Meet Crystal-Nado
It sounds like a joke but it isn’t. Kathleen Benson reported, on May 1st, that occasionally amid the Andes mountains of Chile, whirlwinds of air can pick up thousands of crystals and transport them across distances of over 5 km, before showering them in a sparkly display of magicness. This isn’t a world-changing or far-reaching discovery, but it’s objectively awesome.
Transparent Frogs Exist
Juan Manuel Guayasamin and his team discovered a species of frog which is completely see-through; you can actually see their organs working from the outside, a bit like that scene in Hollow Man where we see Kevin Bacon's innards through the skin. Only this time, no uncomfortable Kevin Bacon nudity! They are called Hyalinobatrachium yaku and are proof that sometimes nature does things for the hell of it.
Enceladus has food
In October 2015, Cassini (which plunged into Saturn on September 15th of this year) flew through the hydrothermal plumes of the moon Enceladus. As it shot through the jets, it collected a vast amount of data which was analysed over the next two years, and on April 14th one of the most startling results was published: Enceladus' sub-surface ocean has a lot of molecular hydrogen, most likely produced by hydrothermal reactions between water and rock. Not only could such an environment provide the conditions for life, molecular hydrogen is often used as a food source for primitive microbes. It used to be Mars which was considered our best bet for finding extra-terrestrial life; now it looks like Enceladus is going to take the top spot for astrobiological research.
Out of Eden
The earliest fossils of human-like creatures come from a site called Omo Kibish in Ethiopia and date to around 200,000 years old. The assumption has always been that this is where humans first evolved - the Shangri-La or Garden of Eden described in so many mythologies. Turns out that’s not true. On June 8th, Professor Jean-Jacques Hublin announced that a site in Morocco called Jebel Irhoud has human-like fossils dating back to around 300,000 years. What's more, sites similar to Jebel Irhoud have been found all over Africa. We had assumed these sites were later ones, representing our spread from the cradle of life in Ethiopia. But it looks like we had it backwards. If the Jebel Irhoud site has been dated accurately, that would mean the various human species were covering Africa simultaneously rather than originating in one single place. This changes our understanding of not only human evolution, but how evolution itself works.
Here Comes the Sun
A novel but surprisingly simple idea to fight skin-cancer was published on the 13th of July by Nisma Mujahid: a sun-cream which boosts melanin production in human skin. Melanin is the pigment which makes skin darker, meaning people with darker skin tend to be less at risk from skin cancer caused by UV rays. While most sun-creams merely cover the skin of white people like me in dark brown ink, this one actually causes melanin to be produced under the skin’s surface, providing secure coverage. It’s been tested successfully on rats and isolated human skin. All that remains are human trials.
2016 saw the discovery of gravitational waves: ripples in spacetime caused by leviathan cosmic events. The ones discovered by LIGO back then were generated by the merger of two black holes, and this year we got a second big discovery: the collision of two neutron stars. Essentially atomic nuclei the size of Manhattan, neutron stars are the cores of dead suns, some spinning hundreds of times per second. When neutron stars fall into each other’s gravitational attraction, the resulting collision is so powerful that it generates gravitational waves, along with heavy elements like gold that get scattered into the universe and wind up as globe-shaped prizes for people like Kevin Bacon.
Gene Editing Achieved
On the 20th of September, research was published by Kathy Niakan and her team who managed to successfully edit a human embryo for the first time. Using the revolutionary CRISPR technique, Niakan was able to alter an embryo to give it a greater chance of forming a blastocyst in the womb. Bearing in mind that roughly one in six women experience miscarriage at some point in their adult lives, the ability to edit human embryos would change the game completely. It would also allow us to remove diseases and illnesses from unborn children, giving them a better chance of life. People have speculated about the possibility of altering human genes for decades. Now, we have taken our first step toward doing so. Maybe one day everyone can be edited to look like Kevin Bacon.
Part of our Universe has been found
It’s no secret that most of our Universe is missing. Simply put, the Universe behaves in a way that suggests it should be heavier, but we’ve not been able to find where most of the missing mass is coming from. There are three sub-categories. The first is Dark Energy, the second is Dark Matter and the third is Missing Baryons. And, on October 9th, the Baryon puzzle was solved. Independently, two teams led by Hideki Tanamura and Anna de Graaf discovered threads of particles trillions of kilometers long, linking up every galaxy in the cosmos. Although it looks like galaxies are lone specks of light floating amid darkness, it turns out they are linked by unimaginably long clouds like highways connecting towns, accounting for 50% of the normal matter that’s out there. Dark matter and Dark energy are still mysteries, but that's one down, two to go. Next mystery: why is Kevin Bacon doing the EE commercials?
It's Pronounced "Oh-Moo-er-Moo-er"
On the 19th of October, the astronomer Robert Weryk, working in Hawaii, discovered a 230 x 35 meter cigar-shaped object floating through our solar system. What was bizarre about Oumuamua (as it was later named) was that its trajectory could not be explained as having originated from either of the asteroid belts in our solar system, making it the very first interstellar object to approach our sun. That we know of, at least. Sadly, it turned out not to be an alien probe, but most likely a hunk of rock from a system around the star Vega which got knocked onto its current path approximately 600,000 years ago. The same day Kevin Bacon was born.
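Out of curiosity, we can check what average speed that journey time implies, assuming a rough distance of 25 light-years to Vega (my assumption for illustration, not a figure from the original study):

```python
# Back-of-envelope check on the Vega travel-time claim.
# Assumed round numbers: Vega ~25 light-years away, and the
# quoted journey time of ~600,000 years.
KM_PER_LIGHT_YEAR = 9.461e12
SECONDS_PER_YEAR = 3.156e7

distance_km = 25 * KM_PER_LIGHT_YEAR
travel_s = 600_000 * SECONDS_PER_YEAR
speed_km_s = distance_km / travel_s

print(f"Implied average speed: {speed_km_s:.1f} km/s")
```

That comes out at roughly a dozen kilometres per second, which is at least in the right ballpark for an object drifting between star systems.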
Photons Behaving Badly
This one is seriously weird. On November 9th a team led by Ado Jorio was able to observe a bizarre interaction between particles of light (photons). By slamming a laser beam onto the surface of water, the team were able to emit pairs of photons which were able to “talk” to each other by sending temporary vibrations through the medium they were moving through. Electrons are known to do this in superconducting materials but seeing photons do it is baffling. Apparently, light particles can communicate information and energy with other light particles. There’s not really a whole lot else that can be said about this one because it's such a shock. Watch this space.
In 2015 a six-year-old boy was admitted to hospital with a very rare genetic condition called Junctional Epidermolysis Bullosa. The condition is lethal in children, causing the skin to fall off and leaving you without your primary defence system. There is no known cure. Well…there wasn’t a known cure. In what sounds like the plot of a movie, as the boy was down to 20% of his skin remaining, a group of scientists led by Michele De Luca decided to try a never-attempted treatment in a last-ditch effort to save him. By taking a small sample of his remaining skin and infecting it with a virus designed to replace the faulty JEB gene with a healthy one, the team were able to create new healthy skin cells which they grew and grafted onto the boy. After eight months, the boy was finally given healthy skin and discharged from the hospital. Technically this story happened in 2016 but De Luca’s results were not published until November 8th and it’s too good not to mention. The young boy in question has returned to school and De Luca has genuinely found a cure for a formerly untreatable disease. I would like to say “this sort of thing doesn’t happen very often” but actually…in Science…it genuinely does.
Trump to the Moon
Say what you like about Donald Trump, he does seem to really like space. Whatever his motivations, I happen to agree with his ideology. Weird, right? The space program is crucial to our species’ survival (that’s not hyperbole, it’s just true) so if he’s serious about investing in it I’m all in favour. On the 11th of December, Trump announced that he wanted America to return to the moon with a mind for using it as a base to launch missions toward Mars and explore the rest of the solar system. He hasn’t given any specific deadlines for NASA, nor has he announced any additional funding he will be supplying, but the sentiment is apparently there. At this point, I’ll take anything I can get.
Science provides hope even in hopeless places...
By all the stars!
A few days ago I was talking with someone who claimed her horoscope was always extremely accurate. Horoscopes claim that fusion reactions taking place trillions of kilometres away can influence the personality traits and lives of humans here on Earth. That’s an astonishing claim if it’s true. And millions of people seem to think it is, so maybe there’s something going on there.
I suggested to her we carry out a series of simple tests to see if her horoscopes were genuinely as good as she thought. She agreed and we set about devising experiments to put them to the test. The outcome was sadly the same as every other test into the accuracy of astrology ever conducted: resoundingly negative. We couldn’t find any evidence that her horoscopes were trustworthy.
It's a real shame. Had the test yielded a positive result I would have been exhilarated. Imagine being the first person in history to confirm the existence of a link between star positions and human behaviour. I would have loved to find evidence in favour of horoscopes. Sadly however, that wasn’t what we found.
At this point, my friend became rather unhappy because we had ruined something. The result was easy for me to process because I went from “I’ve seen no evidence to trust horoscopes” to “I’ve seen no evidence to trust horoscopes”. Her journey was different however; she had to abandon a belief.
Admittedly, the more you get used to Science the easier this becomes, because you learn to be proven wrong regularly...but the first few times it happens it can sting like a nettle down the neck.
Intellectually, we should be just as satisfied with a negative result as a positive one because we still learn from it, but we are emotional beings as well as intellectual ones and having our cherished views dissolved can be horrible.
Scientists - ruining everyone’s fun forever
Perhaps unsurprisingly, people who believe in the supernatural often see Scientists as enemies. We are accused of trying to destroy belief systems or (far more often) of being “closed minded”. Scientists want to subject everything to tests, often sucking the beauty and mystery out of the world, and if we can’t find it in a test-tube we decide it isn’t real.
I know why we come across like that. Science has a long history of debunking and discrediting supernatural claims, so it’s no wonder people think Science is anti-supernatural. But this simply isn’t true. Scientists are very open minded. We are literally prepared to believe anything, no matter how ridiculous, if there is evidence for it. You want me to believe in Unicorns? Bring me a unicorn and I’m sold.
We’re not trying to ruin everybody’s fun at all. Scientists just go looking for answers by investigating. If someone claims there are Gods living on Mount Olympus we’re the ones who decide to climb the mountain and look. If we fail to find evidence for something you believe in, that’s not because we were trying to destroy it, it’s because the evidence was undetectable and that’s nature’s fault not ours.
The war on magic
There are all sorts of supernatural claims Science has investigated over the years and found zero evidence for. Mediumship, telekinesis, mind-reading, sympathetic magic, prophecy, ghosts, crystal-healing, ouija boards, homeopathy, dowsing, reiki and a whole buffet of others. They’ve all been scrutinised by Scientists who were trying to prove them right, and came up empty-handed each time.
These things could absolutely be real, but investigations have found nothing to support them, so we are left with a simple choice. Either we say “I don’t know if it’s true” or “I’m going to believe it anyway”. If you decide to pick that second option and believe in something without evidence, you have to answer the following question: what is your belief based on if not evidence?
The idea of magical forces watching over our destinies is exciting for sure, but to be a Scientist is to commit yourself to either evidence or ignorance. Nothing in between.
I’ve met people who have countered this line of argument by pointing out that even though the evidence is lacking for a claim, it could still be true. I agree of course, but believing something because it could be true is a dangerous position. Magnetism could be caused by invisible gnomes who only speak Welsh. Penguins might secretly be red and they put on the black outfits when they see us coming. Ewan McGregor could be Britney Spears...have you ever seen them together?
The problem with believing something because it could be true is that it’s a slippery slope to infinity. There are so many possible things out there it would be impossible to believe them all. And many of them would contradict each other. Scientists stick to what we’re confident probably is rather than what could be. It doesn’t mean we close our eyes to the possibilities, we just reserve judgement until we know more.
When demons walked
So, why can’t Scientists just leave things alone and let people believe what they want to? Why does it matter if someone has a few unsubstantiated ideas in their head? The problem is that a person’s beliefs determine their actions, so if their beliefs are crooked their behaviour will be too.
The Ku Klux Klan act on the belief that black people are inferior to white people. Nathalie Rippeberger’s parents caused her death by refusing to take her to see a doctor because they didn’t believe in medicine. We used to burn women alive at the stake for witchcraft and we believed we were right to do so. Saying “everybody is entitled to their beliefs” only works if people decide what to believe based on reason. That’s why Scientists want to get things right, even if that means abandoning a supernatural explanation. A world where everyone believes what they want is an abhorrent and primitive one.
There was a time when people didn’t know about bacterial or viral infections. If you got sick it was the will of the spirits. People who heard voices weren’t treated for schizophrenia or epilepsy, they were possessed of demons or communing with angels. Science has made us abandon these supernatural explanations and it has replaced them with life expectancy and good mental health. That’s a fair trade, I think.
After all, there was a time when we didn’t even know what air was and doors slamming would have been the result of poltergeists rather than differences in air pressure. I mean, imagine growing up in a civilization where people didn’t know where the rain came from, why food spoiled, or where the Sun went at night.
The pre-scientific world was one of ghosts and goblins. It was a place where humans were diseased and helpless. Then along came the radical notion that you could learn what reality was like by investigating and testing it. Once we realised this elegant truth, we began looking for answers rather than guessing at them. And the answer to every mystery so far has been predictable cause-effect relationships between testable laws and particles. The answer has never turned out to be magic or mysticism.
That doesn’t mean magic isn’t real. But if you want to use magical explanations to account for your world, you must also recognise that magic is an ever-receding pocket of ignorance which has been shrinking like a shadow before a candle. You’re welcome to choose magic, but I cannot help but wonder, why would you choose ignorance and darkness?
Beyond our understanding
There are certainly deep and confounding mysteries which fill our Universe from edge to edge. What happens inside a black hole? Why is spacetime expanding? How does quantum gravity work? Or, perhaps the greatest mystery of all time, when you listen to the song Doctor Jones by Aqua the guitar intro for the first 14 seconds is a beautiful piece of music, while the rest of the song sounds like Doctor Jones by Aqua. How is this possible?
Some things in the Universe are so strange it can be hard getting our heads around them. But is it possible there are things which can’t be found in a laboratory? Things which don't conform to logical laws and textbook explanations? The answer is again, yes. There could be things which transcend natural law. But if such things really do exist, nobody would know about them...including the people making the claim in the first place.
Science is all about investigating the world through experimentation. Anything which can’t be tested for is supernatural. But if you claim to have knowledge of supernatural things, you are claiming they are detectable because you yourself have detected them. And since the part of you which detected these things obeys natural laws (your brain) natural laws can clearly be used to search for them.
The laws which give you awareness are the same laws which underpin equipment in a laboratory. It’s not a matter of Scientists applying the wrong approach. Scientists are using the same approaches as supernaturalists, we’re just being cautious about it because we know how easily nature can play tricks on our senses. Salt looks like sugar and clouds look like cotton. We have to be better than that.
Kill the Myth
In a 2014 interview with Bill Moyers, Neil deGrasse Tyson explained that when people are wishing on stars they are more than likely wishing on planets. Moyers asks Tyson “Don’t you sometimes feel sad about breaking all these myths apart?” Tyson responds quickly: “No, because some myths deserve to be broken apart out of respect for the human intellect.”
We aren’t trying to ruin people’s fantasies, we just think people deserve to know the truth. We think people are smart enough to handle the facts, even if it means giving up a comforting superstition. We don’t think people should be patronised with fairy tales and spook-stories. We think grown-ups should have the right to grasp reality by the horns.
This doesn’t mean you have to abandon a world of mystery and wonder though. Supernatural beliefs offer you magical and fanciful ideas, but Science can beat them all.
Every atom in your body was formed in the core of a star and the atoms of your left hand came from a different star to the ones in your right. There are species of plant and jellyfish which are immortal. The sky is actually purple. Sugar glows in the dark when you crush it. On Venus it snows metal and on Neptune it rains diamonds. Time slows down or speeds up depending on where you’re standing. There are gases which can set fire to water. Whales used to walk on land. Diamonds are vomited to the Earth’s surface by volcanoes but we’ve learned to make them out of peanut butter. You can make frogs levitate in magnetic fields. We have helped paraplegics walk and brought hearing to the deaf.
The world is full of strange stuff and there’s enough genuine magic out there for the entire species. To be quite frank, Science doesn’t oppose the supernatural, it just finds it a bit limited and boring.
Crying your pardon
Firstly, an apology. I’ve been completely inactive on my website for the past month. This is partly because I was preparing for the Institute of Physics annual public Science lecture (which I delivered on November 22nd to a gracious and patient audience).
Last year I was able to transcribe and summarise the lecture in a couple of blogs but this year I’m afraid that won't be possible. The main topic covered was the Standard Model of Particle Physics and that's not easy to describe in an essay. Personally, I foam at the mouth with excitement when the whole topic of particle behaviour is discussed, but apparently some plebs don’t share my excitement. As a result, there was a lot of stuff I had to edit out of my lecture...stuff I’m now going to subject you to.
Full disclosure: this will be a self-indulgent blog that will bore many of my readers. I’m doing it anyway because it’s my website and I freaking love this stuff. How dost thou like them apples?
Seventeen is a Magic Number...Apparently
I once recorded a barbershop-quartet song I wrote about the standard model of particle physics (cos I’m just that awesome) but if you’ve not come across it before, the standard model is usually depicted like the grid above or sometimes in a wheel like this:
A particle is something which holds itself together. Technically, this means you are a particle because your body doesn't fall apart spontaneously. If you want to separate a human into pieces then it can be done obviously, it just requires effort (energy) to do so. This means you aren't a "fundamental" particle because you have a structure i.e. you're made of smaller particles.
A fundamental particle is something which holds itself together but has no internal architecture; they are the tiniest nuggets of stuff and aren't made of smaller bits. As strange as it sounds, you can’t chop these particles in half because there literally is no half for them to be chopped into.
What’s more, we’re fairly confident this really is the bottom rung of the ladder. We have lots of reasons to suspect these particles are the true building blocks of the Universe, which means every object or process you can think of is the result of interactions between the particles listed above. With the exception of gravity (which doesn’t play nicely) what you see is the alphabet of reality. Well...almost.
In truth, the seventeen particles of the standard model are not the whole story. Nature is rarely so considerate or simple. In fact, she seems to have a complete disregard for what humans will find intuitive and tends to prefer intricate complexity wherever possible. It’s almost like our brains evolved for the purposes of hunting and breeding rather than conceptualising the quantum-mechanical nature of reality.
Divide and Describe
Asking how many types of particle there are is like asking how many types of human there are. If you speak to a contact-lens designer they might say five: brown-eyed, blue-eyed, grey-eyed, green-eyed and hazel-eyed humans. That’s not wrong, but it would be useless information to a haematologist. They might say there are eight types of human based on the blood groups A+, A-, B+, B-, AB+, AB-, O+ and O-.
To give a full picture that includes both properties, we might therefore say there are really forty types of human: blue-eyed people for each of the eight blood groups, brown-eyed people for each of the eight blood groups and so on. But we could always subdivide again based on something like hair colour for instance – blonde, brunette and ginger – to yield 120 types of human.
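If you want to see the multiplication happen rather than take my word for it, here's a quick sketch in Python (my own illustration, the category lists are just the examples above) which cross-categorises the properties using itertools.product:

```python
from itertools import product

# The example categories from the text
eyes = ["brown", "blue", "grey", "green", "hazel"]
blood = ["A+", "A-", "B+", "B-", "AB+", "AB-", "O+", "O-"]
hair = ["blonde", "brunette", "ginger"]

# Every distinct (eye colour, blood group) pairing counts as one "type" of human
print(len(list(product(eyes, blood))))        # 40

# Adding a third independent property multiplies the count again
print(len(list(product(eyes, blood, hair))))  # 120
```

Each new independent property multiplies the number of types, which is exactly why the count explodes so quickly.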
We could categorise and cross-categorise the human population according to gender, sex, sexuality, skin-colour, language, dietary habits or whatever else we felt like. The sheer number of possible “human particles” is staggering because there are so many different properties available. A similar complication arises when we want to describe the particles of the Universe...which is a much better use of our time; after all, putting humans into categories is rather frowned upon these days.
Let’s Just Say There Are Five
For the sake of clarity I’m going to say there are five main properties/characteristics a particle can have. This isn’t the whole story, but a lot of particle properties aren't independent of each other so we can count them as one thing. For example, a tiger has the property of orange stripes and also the property of black stripes. Those are distinct things but logically they must occur together. We can group both properties into one and say tigers have the property of being stripey.
Also (for the quantum physicists among you) I'm going to totally ignore superpositions and list only particle types whose eigenstate has been measured. If you don’t like it, see above comment about apples.
Here are the five main properties I'm going to consider:
Mass: This property has several different meanings but I'm going to take the simplest one. Particle mass means roughly the same thing it does in everyday life: a measure of how heavy a particle is or how reluctant it is to change trajectory. For fundamental particles it can take any value from 0 up to 0.02 milligrams (anything above that is physically impossible).
Charge: This property determines how a particle behaves around other charged particles. It comes in two varieties called positive and negative. Particles with opposite charges will attract while particles with identical charges repel. Particles with zero charge neither attract nor repel either variety.
Colour: This one is a little harder to visualise because it doesn’t compare to anything in our everyday world. The name is also misleading because it doesn’t refer to the appearance of a particle (it’s just a word we use to describe it) it actually refers to whether a particle can be separated from other particles with corresponding properties. Particles with zero colour are able to move around on their own but particles with colour must clump together in specific arrangements. The interactions and types of colour available are quite complicated so I won’t go into detail here, although my recent Instagram post (@timjamesScience) explains the basics if you're curious. The important fact is that unlike charge which comes in two varieties, colour comes in combinations of three: red, green and blue.
Spin: Like colour, spin is a misleading name because it describes a property we don't have a way of visualising yet. It was originally believed that small particles were literally spinning as they moved through space. All particles had a spin and, if the particles also had charge, the two properties combined to form magnetism (non-charged particles still had spin, but weren't magnetic). It was then discovered that particles are not literally balls rotating in space, but the word had stuck.
Spin values can either be whole numbers or half-numbers (technically whole multiples or half multiples of a specific number called h but we won't worry about that). Particles with whole-number spin are able to occupy the same physical location as each other without interacting - think of two beams of light overlapping - and we call these particles bosons. Those with half-number spin will stack against each other, like your body and the chair you are sitting on, and we call these particles fermions. As well as having a numerical value, spin also comes in two varieties called up and down (like positive and negative charges) and sometimes, for bosons, zero.
Chirality: Chirality is yet another property we can't easily visualise. It's also difficult to quickly describe what it does. Charge is all about repulsions and attractions, colour is all about whether particles can be independent or go around in groups and so on. Chirality's main feature takes some serious explaining. I'm going to make the decision to skip over it therefore, because this blog is already too chunky. The key feature of chirality is that every particle has it, and it comes in two varieties called left-handed and right-handed. E-mail me if you want to know more.
At this point, many particle physicists will be gnashing their teeth in anger at how glibly I'm summarising the various properties, ignoring a lot of the subtlety. For example, chirality and mass are closely related, and I've brushed over things like isospins and hypercharges. And then there's a whole other property called helicity which (confusingly) also has left and right-handed versions. This property relates to the spin but changes depending on how you view the particle. My five properties thing is a fudge but if you don't like it, see aforementioned apples comment.
What I'm going to say is that from these five properties we can describe all the known particles available in the Universe. We might not be able to visualise what the particles are actually doing to give them these properties but at least there are only five things to take into consideration. While human particles are easier to visualise, there's so much variety it becomes impossible to explain their behaviour.
Bosons (Spin is a whole number)
Photons - The simplest particles of all. They have no mass, no charge and no colour. They have two different spin possibilities (up and down) as well as two chiralities (left and right) giving us four types of photon in total: Up Left, Up Right, Down Left, Down Right.
Z’s – Z particles have no colour and no charge but they do have mass. They can also have one of the three spin values (Up, Down, None) and both chiralities, giving us six: Up L, Up R, None L, None R, Down L, Down R.
W’s – W’s are similar to Z’s. They have mass, three spins and two chiralities, but they also come in two different charges, positive and negative, meaning there are twelve W particles: UpLPositive, UpLNegative, UpRPositive, UpRNegative, NoneLPositive, NoneLNegative, NoneRPositive, NoneRNegative, DownLPositive, DownLNegative, DownRPositive, DownRNegative.
Gluons – Gluons have no mass or charge but they do have colour in eight versions (see my Instagram post) two spins and two chiralities, giving us a total of thirty-two gluons, which I'm not going to write out.
Higgs – The Higgs boson particle has mass but no colour or charge. It has a spin of 0 which makes the count a little simpler, but both chiralities, giving us only two types: Left-handed Higgs and Right-Handed Higgs.
Fermions – (Spin is a half)
Quarks – These particles have all five properties. There are six different quark masses available, each with a different name: up, down, charm, strange, top and bottom (NB: the quark names of "up" and "down" are unrelated to their spin i.e. you can have an Up-quark with a down spin. Yeah...I know). Up, charm and top quarks have a charge of +2/3 whereas down, strange and bottom quarks have a charge of -1/3.
Quarks also possess one of three colours (red, green or blue) giving 18 types so far. Quarks can also come in charge-reversed versions called anti-quarks. Anti-up, anti-charm and anti-top quarks have charges of -2/3 while anti-down, anti-strange and anti-bottom have charges of +1/3 (the reverse of the “ordinary” quarks). This gives us 36. Then we have either up or down spin, giving us 72, and then the left and right handed chiralities, giving us a grand total of 144 possible quark types.
Leptons – These fermions possess no colour. There are three "flavours" with mass, called the electron, the muon and the tau each with a charge of either -1 or +1 (six so far). Then there are the up and down spins (giving us twelve) and then the left and right chiralities (twenty four).
The remaining leptons have no charge (possibly no mass either) and are called neutrinos. Neutrinos come in three flavours: electron-neutrinos, muon-neutrinos and tau-neutrinos. There are also anti-neutrinos for each, but because neutrinos have a charge of zero you can almost think of anti-neutrinos as having a charge of anti-zero. I mean, you probably shouldn't think of it like that...I'm just saying you could. Then of course you factor in the up and down spins, giving us twelve neutrino types so far.
What’s really strange (apart from the fact that they sort of have mass and sort of have anti-zero charge) is that all neutrinos are left-handed and all anti-neutrinos are right-handed. This means we don't have to double our neutrino number because when we counted the anti-neutrinos, we already took chirality into account by accident...nature's weird. So that's twelve neutrinos.
The Grand Total
Photons x 4
Z's x 6
W's x 12
Gluons x 32
Higgs's x 2
Quarks x 144
Charged leptons (electrons, muons and taus) x 24
Neutrinos x 12
236 different types of particle
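As a sanity check on the tally, here's a short Python sketch (my own bookkeeping, not anything from a physics library) which multiplies out the property options for each family of particle:

```python
# Multiply the options for each property, family by family,
# following the counting in the text above.
counts = {
    "photons":         2 * 2,              # 2 spins x 2 chiralities
    "Z's":             3 * 2,              # 3 spins x 2 chiralities
    "W's":             3 * 2 * 2,          # 3 spins x 2 chiralities x 2 charges
    "gluons":          8 * 2 * 2,          # 8 colours x 2 spins x 2 chiralities
    "Higgs":           1 * 2,              # spin 0 only x 2 chiralities
    "quarks":          6 * 3 * 2 * 2 * 2,  # 6 flavours x 3 colours x 2 (anti) x 2 spins x 2 chiralities
    "charged leptons": 3 * 2 * 2 * 2,      # 3 flavours x 2 charges x 2 spins x 2 chiralities
    "neutrinos":       3 * 2 * 2,          # 3 flavours x 2 (anti) x 2 spins (chirality comes free)
}

for name, n in counts.items():
    print(f"{name}: {n}")

print("total:", sum(counts.values()))  # total: 236
```

Note the neutrino line: because the chirality is fixed by whether it's a neutrino or an anti-neutrino, there's no extra factor of two there.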
Is That It?
In all honesty we don’t know. These are the 236 types of particle which definitely exist and which can’t be broken down, but there is absolutely the possibility, in fact the likelihood, of there being more. For instance, right-handed neutrinos and left-handed anti-neutrinos have never been discovered but it seems reasonable to suggest they exist somewhere.
There are lots of other hypothesised particles which many physicists believe may be real but haven’t been discovered. Things like the graviton (the particle responsible for causing gravity), the inflaton (the particle which played a key role in the early expansion of the Universe) or the Majorana fermion (which does things).
Then there are the so called quasi-particles, some of which don’t exist independently but merge with already existent particles (like Goldstone bosons) some of which aren’t self-contained units but act as if they are (like phonons) and some of which don’t exist at all but we act as if they do to make the equations neater (like Faddeev-Popov ghosts). In short, the Universe is awfully big and awfully complicated. The standard model we have now is probably only a glimpse of what nature has in store.
Standard Model Grid: businessinsider
Jerry Smith: pinimg
Standard Model Wheel: Symmetrymagazine
Nick Griffin: guim
Insane Clown Posse: wennermedia
Should we blame the government, or blame society...or should we blame the images on TV?
In August of 2011, riots broke out across London as thousands of people took to the streets and engaged in fighting, looting and wanton damage of property. Within days, the unbridled aggression had somehow spread to other cities across England and soon the entire nation was gripped by outbursts of urban violence.
The initial trigger had been the police shooting of London drug dealer Mark Duggan, but within 24 hours it had devolved into city-wide pandemonium...and then country wide. Five people were killed, hundreds were injured and repairs to the city of London totalled over £200 million.
Why were so many people getting involved? This wasn’t a student protest which got out of hand, nor did all these rioters know Mark Duggan. It was as if everyone was engaging in mania for the sheer blood-soaked hell of it.
At the time, numerous "social experts" were interviewed on national news and started blaming it on something they were calling mass hysteria – the idea that humans will uncontrollably copy each other in large groups, even to the point of going against their normal behaviour. I was doubtful of this explanation. It seemed more likely that it was just people exercising their sadism and exorcising their emotional demons.
However, as a Scientist, I have to be willing to forego gut-instinct and look at the evidence in detail. Is there any reason to believe that mass hysteria is a genuine phenomenon? Were the 2011 riots truly a form of extreme group hypnosis or was it individuals making conscious choices to be aggressive under the protection of crowd-anonymity?
Welcome to the Twilight Zone
To begin with, let's look at one of the strangest crime waves in recorded history. On November 19th 1938, in the quaint English town of Halifax, two women named Gertie Watts and Mary Gledhill arrived at a police station and reported being viciously attacked and cut across the face by a man wielding a razor. Two days later a woman named Mary Sutcliffe stumbled in with a similar story, this time decorated with deep slashes on her arms.
By November 29th, six other women had been attacked in similar fashion and a manhunt began. Knife-crime experts were brought in from Scotland Yard and a reward of no less than £10 was offered to whoever caught the man papers began calling “The Halifax Slasher.”
It was then, while interviewing the nine young women involved, that Detective Chief Inspector William Salisbury uncovered an unprecedented twist. The Halifax Slasher never existed. Each woman had fabricated the attacks and self-inflicted the wounds. Independently.
After the first attack, newspapers warned the public to be cautious of a knife-wielding monster and several women all decided to slice themselves in order to imitate the real victims, none of them realising there were no real victims.
Humans can obviously do very strange things in order to feel part of a group – even a group of attack survivors. Although nobody wants to be the victim of violence, it would appear that some people want to be part of a community so badly they will engage in self-harm to achieve it.
Peculiar for sure, but I don't think it's mass hysteria. These acts of self-harm could easily be the result of loneliness, mental illness or a prurient desire to take part in social drama. They were also acts which took place in private, rather than as part of a hysterical group. Fascinating, yes, but not mass hysteria. The moral of the story is: when you investigate spooky things around Halifax, the monster probably isn't real.
The High School Terror
Now let's consider another epidemic - one I myself witnessed in 2005. On the morning in question, as I approached my high school (the one I attended, not the one I currently teach at) I saw an ambulance parked outside with a girl being carted into the back, oxygen-mask in place. Things got a bit strange when another ambulance arrived an hour later for a different girl, and then they got frightening when a third and fourth arrived that afternoon.
Over the following week, eight or nine girls were hospitalised in similar fashion and people were beginning to suspect something like a chemical leak in the Science department. What was the origin of this mysterious illness?
By doing a bit of our own investigating, my friends and I were able to get to the bottom of the whole thing and we discovered that every pupil was returning to school the following day with an identical diagnosis: they had each had an anxiety attack.
To be absolutely clear, anxiety attacks are a genuine ailment and should always be taken seriously. It's not just people getting worked-up (as I've heard them described). They are unpleasant and traumatic experiences for the sufferer and it's no wonder ambulances were being called. Hyperventilation, chest pains, dizziness, fainting and sometimes even vomiting, were symptoms all the girls displayed. What made them particularly intriguing was their timeline.
Each sufferer had been present at the attack of the previous victim. The first girl - patient zero - had suffered an attack for some unknown reason and then, seeing the disturbing effects, the second girl became anxious herself. The third girl suffered a similar fate, as did the fourth and so on.
This story is relevant although sadly anecdotal (you’ll have to trust me that it happened), but it still doesn’t quite prove mass hysteria. Anxiety attacks can obviously be triggered by stressful situations and watching your friend get stuck in an ambulance is clearly a stressful situation. So, while it was happening to a mass, there may have been nothing hysterical going on. Who wouldn't get a little anxious after seeing a close friend suffering? And who wouldn't get even more worked up when other people started showing the same signs of illness? This could have been friends sympathising with each other in that telepathic way they often seem to do.
The cheerful part of the blog
In November 1978 a community of socialist idealists living in Guyana, under the leadership of the Reverend Jim Jones, apparently committed group suicide by drinking grape Flavor Aid laced with cyanide. Over 900 people drank from the poisoned chalice including large numbers of children. All killed in under an hour. Today this event is referred to as "The Jonestown Massacre".
This does sound like a genuine case of mass hysteria at first, but although it’s certainly weird, I’m still not sure it counts. For starters, Jonestown was a radical political settlement populated by people who had fled their ordinary lives to dwell in huts as part of a socialist order they believed was inspired by God. It seems reasonable to suggest there may have been a high proportion of extremist/unstable people in the community to begin with.
Furthermore, Jim Jones made a tape recording of the entire process and it’s clear that huge numbers of people either objected to what was happening but were violently coerced, or simply didn’t realise they were about to die. Jim Jones would run pretend-apocalypse drills regularly, so a lot of the victims probably thought it was an act and simply played along.
Further-furthermore, Jones had just announced to the entire village that capitalist soldiers would soon be parachuting into their community to kill or kidnap everyone, including the defenceless children. He suggested it would be better to die free as a sign of protest, than to live as a prisoner. It's possible that a lot of people in Jonestown were killing themselves out of political and religious ideology.
Grim and extreme as it sounds, lots of people are prepared to die or even kill for their principles, and many parents would rather let their children go peacefully if the alternative is imprisonment and torture at the hands of a totalitarian government.
Jonestown wasn’t a group of perfectly stable people all suddenly doing something hysterical because everyone else was. This was a village of strong-willed, politicised people with religious convictions of salvation, engaging in a powerful act of defiance, or simply being tricked, threatened and murdered. In other news, I’ll be writing a children’s book about magical ponies over the Summer.
The bit where I am proven wrong...
There are numerous documented cases throughout history of fainting epidemics, outbreaks of dizziness, fevers, seizures, headaches and vomiting, although as we've said, many of these episodes could be the result of anxiety or contagious disease.
In order to confirm whether mass hysteria truly occurs we need examples of humans doing utterly uncharacteristic things for no political, religious or social reason other than “everyone else was doing it”. And, to my great surprise, it turns out there are a few incidents which fit the bill.
The doctor JFC Hecker, in 1844, recorded an outbreak of “meowing” which took place in a medieval French convent. The nuns in question apparently began making cat noises uncontrollably one evening and were unable to stop for several hours.
Then, there was the dancing epidemic of July 1518 in which over 400 people began dancing in the streets of Strasbourg, including the sick and elderly. Many died from exhaustion in that one, so I guess you could call that...dance fever! Look, if you don't like my jokes then go back and read the depressing section on Jonestown again. Stop judging me.
Speaking of inappropriate laughter, consider the giggling epidemic of 1962 in which students from Tanganyika began laughing at school and were unable to stop themselves. That particular epidemic went on for weeks and spread to over a thousand students and teachers at fourteen different schools.
The sheer number of people involved in these instances makes mental illness an unlikely explanation. It’s also not an example of “unleashing the beast” unless that beast is a cat who likes dancing and giggling a lot. Nor were these protests or acts of political and religious defiance. There is simply no reason to engage in these activities other than imitation so I hereby change my mind. It would appear that mass hysteria may be a genuine, although rare, phenomenon.
So, what causes it? This is gonna get uncomfortable...
In Science you always follow the evidence wherever it leads, even if it takes you to an uncomfortable place. Having decided that I was wrong about mass hysteria, what I really wanted to do was try and find some potential explanation for what causes it and, as I looked into all the recorded historical accounts, I did notice a rather inconvenient theme. You're probably not going to like this, but trust me, neither do I.
It turns out that when mass hysteria occurs, the people engaged in the weird behaviour are more likely to be female than male.
This is a really unfortunate thing to have noticed because it will give fuel to people who are going to say things like "women are more hysterical" or some such nonsense. Please bear with me on this. I'm not about to mansplain why women are naturally more emotionally fragile or something like that. I think there is something interesting going on here, but it's quite subtle. Give me a chance.
Also, please don't get angry at me for something nature has chosen. I'm just reporting what appears to be biologically true. If there is any misogyny here then it's to be found in the architecture of the human brain, not in my describing it.
This better be good...
The meowing epidemic took place in a convent. The giggling epidemic affected girls' schools and mostly female teachers. The dancing fever was reportedly seen to affect women more than men and the pseudo-mass hysteria cases like the Halifax Slasher or the anxiety epidemic from my own school again centred around young women. Why?
Here's something which I think may be to blame.
Human beings, like other primates, come equipped with a group of neurons in their frontal cortex called the mirror neuron system (MNS). These cells begin firing when you watch someone else perform an action...and they make you want to imitate it.
Suppose you’re watching a person who’s fairly similar to you in appearance or personality. Your brain recognises them as a kind of mirror image so when they do something you imagine yourself doing it too. If that person twitches their left arm, your MNS sees the movement and immediately wants to copy it.
If you’ve ever found yourself yawning because you’ve seen someone else doing it, the reason (proposed by a 2013 study by Helene Haker) may be the MNS. The same mechanism might also explain why you’re more likely to laugh at a joke when you are in a crowd of people laughing together than when you are on your own.
It’s even been suggested that these neurons may form the basis of empathy itself. The MNS in monkeys will trigger a pain-response when they see another monkey being hurt. This “sympathy pain” felt by the observer monkey looks identical on a brain scan to when the monkey itself is the victim.
There’s a clear reason why the brain evolved such a system – imitation is crucial to learning. A brain which repeats what it sees is a brain which picks up skills faster. We just need to make sure we can override the MNS when it’s not being helpful. And if you’re wondering how the MNS differs between men and women, the answer is probably what you already suspect.
Yawei Cheng carried out a study in 2008 which showed people footage of moving objects and found that the MNS response was stronger when the object was a human hand. More significantly, women showed a greater response than men, particularly when the hand was female.
It might not be as simple as women having more mirror neurons but it may be the case that women activate the MNS more readily than men, particularly when observing other women. It sounds like stereotyping but there could be a genuine neurological basis for the belief that women empathise better than men do, particularly with each other.
Or consider the creepy 2008 story of identical twin sisters Ursula and Sabina Eriksson, who were kicked off their cross-country coach (following unusual behaviour) and left stranded by a motorway. After disrupting traffic and eventually being stopped by police, Ursula ran out into the path of a lorry in an apparent escape attempt/suicide. Sabina then did the same thing. After seeing her sister get hit, she ran into the traffic herself, seeking an identical injury for no reason. And if identical twins aren’t going to have a strong MNS response to each other, I don’t know who would.
A Cautious Hypothesis
I need to be very clear and explain that mirror neurons are currently a hotly debated topic in neuroscience and nobody is sure how much is real and how much is speculation. The neurologist Vilayanur Ramachandran has said "mirror neurons will do for psychology what DNA did for biology." However, psychologist Gary Marcus has said "mirror neurons are the most oversold idea in psychology". Everything's up in the air right now, which is extremely cool. It means all bets are off and we've got some learning ahead.
I'm not a neuroscientist at all (not even a Biologist) so I have to be very clear that a lot of what I'm saying is based on guesswork, potentially poor-quality guesswork. I have no idea if the MNS can really do what I'm proposing, but that's half the fun of Science...coming up with ideas that may or may not be true! My hypothesis to explain mass hysteria is therefore as follows:
Suppose there was a group of women living together or spending time with each other over an extended period, developing a strong MNS response to one another. Most of the time the conscious brain would be able to override the copycat instinct, but if the environment became stressful or exhausting, that override could weaken, making it harder to suppress the urge.
If one woman began laughing uncontrollably due to stress, another might join in. Two could become three, three could become four and pretty soon everyone in the room is howling in unison. The mirror neurons don’t realise anything strange is happening, so they just force you to keep going, holding you hostage to your own behaviour.
It’s possible mass hysteria may simply be an exaggerated by-product of women’s superior empathy skills, which in turn could be a result of superior MNS activity. Put a lot of humans together in a setting which will encourage stress and things are going to get weird.
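The cascade I'm describing can be sketched as a toy "threshold" simulation. To be clear, this is my own illustrative guesswork, not a neuroscience model: every number in it (group size, the way stress lowers thresholds) is an invented assumption, and it only shows that a single trigger plus lowered suppression thresholds is mathematically enough to produce a room-wide outbreak.

```python
import random

def contagion_cascade(n_agents=30, stress=0.7, seed=42):
    # Toy threshold model of behavioural contagion (a Granovetter-style
    # cascade). Each agent has a "suppression threshold": the fraction of
    # the group that must already be laughing before they join in. Stress
    # lowers every threshold, standing in for a tired brain that is worse
    # at overriding the copycat urge.
    rng = random.Random(seed)
    thresholds = [rng.random() * (1 - stress) for _ in range(n_agents)]
    laughing = [False] * n_agents
    laughing[0] = True  # "patient zero" starts laughing under stress

    history = [1]  # number of people laughing after each round
    changed = True
    while changed:
        changed = False
        fraction = sum(laughing) / n_agents
        for i in range(n_agents):
            if not laughing[i] and fraction >= thresholds[i]:
                laughing[i] = True
                changed = True
        history.append(sum(laughing))
    return history
```

With the same random seed, raising the stress level can only enlarge the final outbreak, because every individual threshold drops: one giggler in a calm room stays one giggler, while the same giggler in a stressed room can set off everyone.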
So, if you’re female and under a lot of stress, you really might be able to blame your actions partly on mass-hysteria. It’s possible you didn’t have complete control over what you were doing at the time. If, on the other hand, you’re a man smashing a shop window to steal a television as part of a riot then there’s a simpler explanation: you’re a jackass.
Science is evil...obviously
Last night I engaged in my favourite hobby - stealing money from blind nuns. After all, I'm a scientist and we're morally bankrupt. We invented the atom bomb, chemical warfare and (as some conspiracists would have you believe) the Ebola and Zika viruses. Scientists are the heartless people in lab coats electrocuting defenceless chimpanzees and cackling as they do so. In fact, when you pledge allegiance to the Head of Science, you have to kill a puppy and bathe in its blood.
I'm exaggerating for comic effect of course (not much though), but there really are people who see Science in this light. Some people seem to carry the notion in their heads that because Scientists want to understand how everything works, we must be detached from the moral trappings of decency.
I was once asked whether Science had any moral compass or whether investigating the universe had to be done in a vacuum. I gave a cursory answer, but it's a brilliant question which deserves more thought. While there have been evil Scientists like Josef Mengele, Harold Hodge and Harry Harlow, is it true that all Scientists are destined to become purveyors of cruelty and sadism? Does Science make people evil?
Right and Wrong
Everybody carries ideas in their heads about right and wrong actions. To some people it's wrong to eat animals, to others it's fine. Some people think it's wrong to dance with members of the opposite sex, while others think it's wrong to suggest there are such things as "sexes". Some cultures on Earth readily engage in cannibalism while others see it as one of the ultimate taboos. How do you agree on morality when everybody disagrees?
Suppose a child slaps another child. An adult might disagree with their action and, ironically, slap the aggressive child themselves (I've seen it happen). Should we assume the adult's moral code is correct because they have experienced more life? If we decided that adults know what they're doing and children don't, you'd have to explain how Malala Yousafzai won a Nobel Peace Prize for a campaign of defiance against the Taliban which she began at the age of 11.
Even things we assume are obviously wrong are far from universal. Telling lies is often considered immoral yet millions of parents tell their children about Santa Claus and the Tooth Fairy. Or (to paraphrase Immanuel Kant) suppose a mad axe-murderer came to your door looking for someone you knew was hiding there. If it's morally wrong to lie, shouldn't you say "Yup, they're hiding in the closet, right this way!" It's the axe murderer who then finds the victim and kills them, not you.
Or perhaps we could argue the mad axe murderer is not accountable for their actions because they are mad...perhaps they did nothing wrong other than obeying natural drives? And what do we make of all the killing which takes place during a war? If a soldier shoots a terrorist, that's still committing murder, any way you look at it.
Human morality is inevitably subjective i.e. it depends on a person's opinion. You might think it's wrong to slap a granny but that's just it...it's what you personally think. If someone else says it's fine to go granny-slapping, then it's a difference of opinion not fact. But can it be resolved with Science? After all, Science has a long history of settling debates by discovering "objective" truths i.e. facts independent of beliefs or values. Could Science discover such a thing as an objective morality?
The notion of objective morality would be a moral code which could not be disagreed with. Such principles would be an inherent part of the Universe, like gravity pulling objects together or heat moving from high temperature to low. Could we use Science to discover moral principles which are fundamental and transcend human opinion?
I'll be honest, I think the answer is no, and the reason is that Science is concerned with what is not what ought to be.
Let's take murder as an example. Imagine I wanted to shoot someone in the face. Science can tell me that pulling the trigger will kill the person. I could ask "why should I not kill them?" Science can then demonstrate that the man would no longer be able to enjoy life. I could respond with "why should he be able to enjoy life?" Science could point out that his death will cause suffering to friends and family. But again, I could ask "why should I not cause suffering to others?"
Science could show that I would not like it if someone made me suffer and I could agree, but still respond "Why should I treat others the way I want them to treat me?" The answer could be "Because it would be unfair" and I would respond with "Why should the world be fair?"
Science could even argue that a violent species is at risk of wiping itself out and that by commiting violent acts we could destroy the human race. But still, the murderer could respond with "Why shouldn't we destroy the human race?" and we could go on like that forever, never resolving anything.
No matter what we said to a murderer, we could not argue that the Universe requires them to not kill. The Universe doesn't permit objects to travel faster than light through spacetime but it does allow murder to take place. Clearly there is no fundamental law stopping it from happening, so a murderer has nothing preventing them from doing so, other than the belief it would be better if they didn't.
If a person liked the idea of everybody being miserable, everybody suffering, and the human race going extinct, how could I show them they were incorrect for wanting that? How can a desire be incorrect?
Science can definitely show that things like murder, theft, cruelty etc. make other people suffer and we can even show that their suffering is identical to ours. But "the decision to not cause suffering" cannot be shown to be something nature prefers. The Universe doesn't want or demand anything since it is not conscious and consciousness is required for morality.
Where would it come from?
If there is such a thing as objective morality, it must come from a supernatural source e.g. a God (a conscious entity not subject to natural laws). This doesn't mean atheists are horrible people incidentally. I've seen many religious apologists say something like "do atheists believe in objective morality?" to which the atheist, to be logically consistent, has to say "no"...at which point the apologist springs their apparent trap: "Aha, so you think there is nothing objectively wrong with murdering people!" This tactic is a little underhanded and I feel it gives apologetics a bad name.
Atheists can still think murder is evil and condemn those who do it, it's just that they think this belief comes from their personal desire to end suffering, not from a God. Atheists do not believe murder is objectively wrong but they also don't believe it is objectively purple. They just think words like right and wrong don't apply in the context of desires and values.
Atheists would also ask the question: if morality comes from God, who holds God accountable? In the Christian Bible, for example, the God of the Israelites threatens to make people cannibalise their own children (Leviticus 26:29 and Jeremiah 19:9), sends bears to maul 42 teenagers (2 Kings 2:24) and seems to encourage the murder of babies and the enslavement of virgin women (Numbers 31).
What do we make of something like that? What if we aren't comfortable with the idea of killing children and enslaving women? Are humans allowed to disagree with such a moral command if it has come from God?
This, incidentally, is why many atheists reject the notion of morality from God, since God is sometimes willing to enforce suffering and death. Many religious people find these questions difficult to answer, although for a fascinating defence of God's morality in the Old Testament, I recommend the book Is God a Moral Monster? by Paul Copan.
Putting it to the Test
The reason we would struggle to use Science as a measure of morality is also down to how Science works. In Science, if you want to know the truth about something, you ask questions and carry out experiments. That's the only way to do it.
But the moral question is as follows: should we do evil? There is no experiment which can answer such a query because the answer is always going to depend on a human answer. Electrons don't lose charge when you tell lies and black holes don't appear when you say mean things. There is no "moralon" particle which influences other particles to prevent suffering. Asking whether you should or shouldn't do something isn't a falsifiable question and Science only deals in falsifiable questions.
To be abundantly clear, I don't like the idea of the human race being wiped out or people suffering needlessly but that's just it. It is something I don't like. It's a feeling based on my personal tastes.
If you wanted to prove morality does exist external to human opinion, you would have to find an example of a moral act being somehow wrong...without there being a mind involved. And I am not sure such an experiment even makes sense. The Universe seems to behave in a way which has no desire to appease or offend human sensibilities. Gravity works because it works, not because humans feel it ought to.
So...Scientists are Immoral after all?
Science cannot prove the existence of morals but it also cannot prove that Batman is better than Iron Man. It's a matter of opinion. Scientists are still able to have tastes and opinions about the world, they just can't prove their tastes and opinions are objective...which puts them in the same league as everyone else. Nobody can prove their tastes objective; that's sort of what makes them tastes (unless you're Batman, in which case everything you do is morally right). So the answer is no, Science cannot help with morality, but I would like to make the case that it can help with something equally important: ethics.
Morals and Ethics
Although the words are used synonymously, ethics are not the same as morals. Morals are a person's individual decisions about what they consider good and bad acts. Ethics are laws a society agrees on to make the world better for people. For example, morality might tell you not to cremate a corpse (there are many people who believe cremation is evil). That's fine because it's your opinion and you're entitled to it. Ethics takes a different approach. Ethics starts from the idea that we should try and make the world pleasant and minimise suffering wherever possible.
Cremation doesn't cause suffering to the deceased (they're dead), and it might actually solve the problem of overcrowding in cemeteries. Ethics looks at what the facts are and then makes a decision based on the notion that suffering is to be avoided. If the deceased's family would be greatly upset at their loved one being cremated, ethics could still decide cremation was wrong, but if the family had no objection, or actually wanted the cremation, ethics says go for it.
Ethics are still based on the opinion that we should do well as a species and end suffering, but it never claims to be objectively accurate. It's interested in learning the facts and then making a decision as a result. And this is where Science does operate.
Some of the most controversial ethical/moral issues we face today are things like abortion, euthanasia, animal-testing, vegetarianism, capital punishment and what to do with psychopaths. Morally, everyone might have opinions about each of these issues but that won't get the debate settled.
In order to answer these tricky questions we have to rely on ethics, which means Science is relevant. Not in telling us what decisions to make of course, but in giving us the tools to make sure our decisions are well informed.
If we decide that causing others to suffer unnecessarily is something to avoid, then we can use Science to find out what causes suffering and how much is avoidable...but that initial decision still has to come from us. And I think this is where we have reason to be hopeful, because one thing Science has definitely shown is that humans have the capacity for empathy, sympathy, altruism and compassion. Just because the Universe is indifferent, doesn't mean we have to be :)
Everyone is Special
Talking about intelligence can rile people up the same way talking about money or beauty can. It gets uncomfortable because sooner or later you have to address the fact that some people have more than others. To combat this discomfort, educational movements have often tried to avoid the problem by deciding there is either no such thing as intelligence or that everybody has it.
It began in 1969 when the Canadian psychologist Nathaniel Branden published his landmark book The Psychology of Self-Esteem. Branden argued that self-esteem was a need like food or water, and that if it wasn’t met the person suffered. His work was seized upon by witless educational theorists and the result was “The Self Esteem Movement”.
The idea was that telling children they were all highly intelligent would lead to more productive lives and greater happiness. It’s a well-meaning sentiment but it backfired for a pretty obvious reason. Praise is valuable, but if it’s given constantly and free-of-charge then it inflates egos, causes laziness, and eventually loses meaning.
The sociologist Kay Hymowitz conducted a meta-analysis of 15,000 studies on the effectiveness of The Self Esteem Movement and concluded that “Many children who are convinced they are little geniuses tend not to put much effort into their work.” Funny that.
It’s a shame, because Nathaniel Branden’s ideas were important and self-esteem is necessary, but cheapening it to “tell every kid they’re brilliant” is not how you generate happiness. It's how you generate narcissists.
Intelligent in your own way
Another popular idea, proposed in 1983 by the American psychologist Howard Gardner, is that of multiple intelligences. Gardner decided (pretty much off the top of his head) that there was no such thing as intelligence. Rather, there were several different types, with little correlation between them.
Consider the footballing skills of former England captain David Beckham. During his peak, Beckham could be in the corner of a large field with 21 players running in different directions, and calculate exactly where the ball should go in order to give his team a tactical advantage. Not only that, he’d figure out how to move his muscles to apply the correct force at the correct angle to achieve his desired trajectory and could do it in a matter of seconds...in his head.
Gardner would argue, quite reasonably, that Beckham was using his brain to achieve specific outcomes the same way Einstein did - just different types of outcome. Beckham’s intelligence resided in the physical realm while Einstein’s was in the mathematical.
On the basis of this argument Gardner proposed several different types of intelligence which people could possess: musical, visual, linguistic, logical, physical, interpersonal, intrapersonal and many more.
It was a popular idea in schools – I remember being given the multiple intelligence test myself - but it's multiply confused. The most obvious problem is that it repurposes the word "intelligence". If we’re going to define intelligence in such a broad way then everything a human does is intelligent, because everything involves using your brain to achieve an outcome.
If you know how to walk we could say you have “perambulatory intelligence”, if you know how to cook we could say you have “culinary intelligence”. If you are a fan of the movie Transformers 5, we could say you have “no intelligence” and so on.
What Gardner’s model does is redefine intelligence to mean ability. But when you redefine a word, the thing you originally needed it for still exists. If we decided to repurpose the word carrot to mean “any kind of vegetable”, those orange things would still be there, so we’d have to invent a new word for them and the whole thing would repeat.
We use the word intelligent because it describes something we all seem to agree is real and distinct from other abilities. You wouldn't describe a good sandwich as intelligent because it's not an appropriate compliment. Likewise, if someone has significant sporting prowess we can describe them as "athletic", "fit", "sporty" etc. but intelligence is referring to a different thing. That's not saying intelligence and sport-skills are mutually exclusive, it's just saying they aren't concomitant.
David Beckham could be a very intelligent man, but his footballing skills aren't a sign of intelligence, they're a sign of athletic ability. They are separate features and it's unwise to pretend they're the same. Ultimately, the problem with Gardner's approach is that the word intelligent describes a very specific quality, not a generic one.
So what IS Intelligence?
Words change meaning depending on context and pinning them down to a single definition can sometimes be detrimental. Often it’s the vague boundaries around a word which give it utility. As a Scientist I want to subject everything to clear and rigorous definitions but I recognise this isn’t the way we use language. This makes defining a nuanced word like intelligence tricky.
I was once observing a lesson where a teacher said to a student “you’re very artistically intelligent.” The student looked puzzled and said “yeah, but being artistically intelligent isn’t real intelligence.”
I spoke to her afterwards and asked what she meant. It took her a while to articulate but eventually she hit on a profound insight: “intelligence is when you’re good at things which go on inside your head.” I think she might be onto something.
The ability to play an instrument is a function of the brain but it is expressed through the fingers. Being a talented singer is a function of the brain but it is expressed as movement of the vocal cords. The same is true of painters, sportsmen, dancers etc. Their abilities are based on brain activity but the outcome is manifested physically. When we refer to intelligence we seem to mean abilities which do not translate so obviously into a physical mode.
A person can use their vocal cords with skill and intonation to deliver a speech. We might describe them as a skilled raconteur or actor, but the person who wrote the speech, who actually thought of the words to use, is the person we consider intelligent.
Let’s take an even more obvious example: Professor Stephen Hawking.
Nobody’s going to object if I call Stephen Hawking an intelligent man. Some of his media fame may be due to his struggle with physical disability, but let’s be clear: his reputation as one of the world’s leading theoretical physicists is well deserved. Even without his inspiring life story, Hawking would still be regarded as one of the greatest Scientific minds alive today. And yet there is no physical manifestation. That’s probably why Hawking’s story is so moving in the first place. He cannot express his brilliance physically, it is entirely within his head.
I would argue that intelligence is whatever we agree Professor Stephen Hawking has. He can’t sing, play the tuba or tap-dance, but the inner workings of his brain, which cannot be demonstrated physically, are what we mean by “intelligence”.
Knowledge is Power, but it’s not Intelligence
What’s so special about Hawking’s brain then? Well, the guy definitely knows a ton about physics. But there’s more to it. I know a lot about physics too, but I’m not going to claim I’m as clever as Hawking.
Intelligence isn’t the same as knowing things because anyone can memorise facts. I could tell a room of people “fermions are defined by their adherence to the Pauli exclusion principle, a function of their half-integer spin”...but does everyone in the room suddenly become smarter if they don’t understand what that fact means?
Probably the most workable definition of intelligence I can think of is as follows: answering questions you know the answer to is knowledge, figuring out answers to a question you don’t know the answer to is intelligence.
I think this definition, although loose, is probably as good as we can get. Intelligence is how well we process unfamiliar information; how well we use things we do know to grasp things we don’t.
The IQ Test
The most famous assessments of intelligence are IQ tests. And I’m not talking about those 15 minute online things which always give a mysteriously high score as if they want to flatter you into returning to their website...Hmmmm. I mean the real things.
I was made to take a proper IQ test once, and it’s an exhaustive procedure. It took about five hours and was carried out by an examiner with a stopwatch. There were bits of paper, little puzzles to complete, picture cards; the whole works. And I’m afraid I’m not going to tell you what my IQ is. Sorry.
The reason is not because I have an embarrassingly low score, it’s because I don’t put much faith in the tests and don’t want people getting hung up on it. IQ tests tell us something, but it’s not intelligence. I know there's an old joke which goes "the only people who object to IQ tests are people who do badly on them". But that's not true. For the record I actually scored highly. I just don't think the number tells you much.
A Brief History of IQ
IQ tests were invented in 1904 by the French psychologist Alfred Binet. The ministry of education in France was trying to identify students who were likely to struggle in school and Binet provided a diagnostic. Every student was given a series of common-sense questions and if they answered poorly, they were given extra support in class.
The questions included things like identifying the names of certain foods, lifting objects and deciding which was heavier, and even looking at faces of women and deciding which was the prettiest. Your score was then calculated as a fraction compared to other people (a quotient) and that was the end of it. Binet was very clear that his test was not calculating a single measure of “general intelligence”. It was giving a sense of how you performed at basic tasks.
About ten years after Binet introduced his test, the American military were looking for a method of assessing which soldiers should be given officer training in preparation for the First World War. They asked the Stanford psychologist Lewis Terman to design a test and he turned to Binet’s, adapting it for adults.
Over 1.5 million soldiers took Terman’s test and were given a ranking of A to E, with only the A-grade soldiers getting officer training. Terman later introduced the familiar numbering we still use, where 100 is considered average and 140 is termed “genius”. More unsettling still, many US states with the death penalty require an IQ score of at least 70 before an inmate can be executed. About 20% of death-row inmates are only just over that line. Apparently a score of 70 is enough to make you culpable for your actions, but a score of 69 is not.
Terman later argued that only smart people should be allowed to breed in order to better the human race and he made one or two teeth-clenching comments about the link between intelligence and race, so that gives you some idea what he wanted his test to be used for. Here's a quick example of a straightforward IQ test. How many Indiana Jones movies are shown below?
The Feynman Problem
The example I always use when illustrating the fallibility of IQ tests is what I call "The Feynman Problem". Richard Feynman had an IQ of 123-125. That’s not bad, but it would only indicate him to be “reasonably smart”. Yet Feynman was inarguably one of the most intelligent people to walk the Earth in the last hundred years.
He won a Nobel prize for working out the mathematics of quantum electrodynamics, the two main biographies written about him are called Genius and No Ordinary Genius, he studied and taught at MIT, Princeton, Cornell and Caltech, and was described by Robert Oppenheimer on the Los Alamos project (the greatest scientific minds living in one town) as “the most brilliant physicist here”.
He was a freak of intelligence but based on his IQ score you wouldn't think he was anything special. Hell, James Franco has a higher IQ than Feynman. James Franco!!! Even I have a higher IQ than Richard Feynman and I am NOT smarter than he is.
While IQ tests might be telling us something, I don’t think we should put too much stock in the numbers. It would be like measuring a person’s fingers to see whether they would be good at playing piano. There may be a moderate correlation but it’s far from the whole story.
If you’ve got a high IQ then you’re probably bright, but being any more specific is going beyond what we can know. A person with an IQ of 120 may not be any more intelligent than someone with a score of 110 – they might just be better at doing the IQ test.
Because intelligence is a loosely defined word, we need a loosely defined way of measuring it. Trying to measure it with a number is like trying to nail a cloud to a piece of wood.
The Barmaid Test
There’s a famous Einstein quotation which goes: “if you can’t explain it simply, you don’t understand it well enough”. It’s a good phrase but it’s not real. It’s actually a mixture of his genuine quotation “the truth should be stated as simply as possible, but no simpler” and a quotation from Feynman “if you can’t explain it to a freshman, that means you don’t understand it.”
Ernest Rutherford, another Nobel prize-winner, once said something with a similar sentiment: “an alleged scientific discovery has no merit unless it can be explained to a barmaid.”
I feel this is a little unfair on barmaids, but his point is valid: if an idea is worth knowing, you should be able to explain it to someone who isn’t an expert in the field. The core of the barmaid test is that if you understand an idea, you should be able to state it simply. This, claim the great minds, is the best way of seeing whether someone really understands something...get them to explain it in straightforward terms.
So I think Rutherford's Barmaid Test is probably a better measure of intelligence than IQ scores. If you really want to see how clever someone is, ask them to explain the clever-sounding thing they just said. If they can't, they're probably not as smart as they think they are.
Am I therefore saying teachers are the smartest people on the planet? Yes. Yes I am.
Good luck in the new term everyone!
Done and Dusted
Thursday saw the release of GCSE exam results, marking the end of UK exam season. We now have a single week of breathing space before it all starts again with the new cohorts in September. Bring it on.
Results days are some of the most emotionally charged days in the academic calendar, but the emotion is always mixed with commentary from politicians and pundits about the state of the nation’s education and, usually, the need for reform. It’s only a matter of time before someone says those mortifying words: "Exams are getting too easy, they were tougher in my day!" I’ve heard politicians say it, people on buses, parents of students and so on. Everyone seems to think their exams were the most difficult ever to have existed, but is that fair?
It seems like an insult to the hard-working students who have bled themselves dry in order to do well, but I guess it makes you feel special if you truly believe your life has been a tougher struggle than anyone else’s.
But how are today’s exams different to those of the past? As someone on the front line of modern education (well, it’s really the students who are on the front line, I’m more like the drill sergeant who trains them and sends them off to war) I thought I’d share my thoughts.
How Grades Work – UK vs USA
Education is a tricky thing to get right and I don’t think any country has it figured out (although I’d take a glance in Canada and Scandinavia’s direction). Most of the web traffic I get comes from the UK and the US, so let’s look at how these two systems address the problem of getting a population educated.
In the USA, grading is not standardised nationally. Every pupil attends classes and their teacher is responsible for their overall grade. How that grade is reached varies between schools, subjects and teachers themselves. Typically 40–50% of the grade is based on a final exam, written and marked by the school faculty, while the remaining 50–60% is based on things like coursework, class participation, attendance and behaviour.
At the end of the year, the teacher adds up your scores from these different streams and the grade boundaries are pretty straightforward. Score 90% and you get an A. 80% gets you a B, 70% a C, 60% a D. Anything below that and you get an F - a “Fail”. You can re-take the year, however, so if you don’t get good grades you get another shot. And that’s the end of that.
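As a rough sketch, the US approach boils down to a weighted average fed into a fixed boundary table. The stream names and weights below are hypothetical (every school sets its own policy):

```python
# A minimal sketch of US-style grading: weighted streams, fixed boundaries.
# The weights are illustrative, not any real school's policy.
def us_grade(final_exam, coursework, participation,
             weights=(0.45, 0.40, 0.15)):
    score = (final_exam * weights[0]
             + coursework * weights[1]
             + participation * weights[2])
    # Fixed boundaries: 90 -> A, 80 -> B, 70 -> C, 60 -> D, below that an F.
    for boundary, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= boundary:
            return letter
    return "F"

# Strong coursework and attendance can lift a middling exam performance:
print(us_grade(final_exam=78, coursework=95, participation=100))
```

Notice that in this model the boundaries never move; as described below, the UK model drops the coursework and participation terms almost entirely and moves the boundaries every year instead.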
In the UK, exams are written by privately run exam boards. Exam boards make money in two ways: entry fees (schools pay to enter a student for an exam) and things like textbooks and online resources. This second one means an exam board tends to make more money if it changes the content of the course every few years, since schools have to buy new books and equipment to keep up.
The exams are sat nationally at the same time up and down the country, before they get collated and distributed to markers (teachers earning a smidge of extra cash). That’s why it takes several months between the exams being sat in May/June and results day in August. Typically 90–100% of your score is determined by the final exam, with things like coursework, homework and behaviour being irrelevant.
At GCSE level (age 16) the grades go from A* to G, with a “U” grade being a fail. Except starting next year we’re switching to a numerical system where the grades go from 9 to 1 (9 being the highest). Then at A-level (age 18) the grades go from A* to E, with U being a fail.
The grade boundaries are moderated every year by a team of exam officers (slightly different for each board), so the score required to achieve a particular grade changes. Re-sitting is a complicated and expensive option, so once you’ve done your exams that’s pretty much it unless you can afford the re-sit fees.
There are clear strengths and weaknesses with both systems. The UK model is obviously intended to be standardised so that an A from one school means the same as an A from another (although the fact that there are five different exam boards sort of undermines that).
It does also prevent manipulation, so a teacher can’t mark a student they don’t like harshly, or give extra credit to a student who’s good on the football team and the local community wants to see them going to college etc.
The US system has the clear advantage that the student has a chance to demonstrate skill over a long period of time, rather than being scrutinised on three years’ worth of work in a single exam. I’ve known students who have suffered a personal tragedy a few days before their exam, so obviously didn’t do their best. In the American system I’d be able to give them the grade I felt they deserved, but in the UK if you’re ill on the day – too bad. Until we learn how to digitally upload information to the human brain, it's unlikely anyone will solve the problem.
Lies, Damned Lies and Statistics
Let me demonstrate something which I think is important. I wanted to look at the figures surrounding GCSE and A-level grades, but it turns out getting hold of these numbers is surprisingly difficult. The UK government website doesn’t present the historical data in any convenient form, so you really have to go hunting to find what you want.
I am particularly grateful to Brian Stubbs from the University of Bath, whom I contacted while writing this blog. If you’re interested, I strongly encourage you to check out his website: http://www.bstubbs.co.uk where he has gathered decades of historical information about UK exams. Now what does the data show?
Well, in 1989 approximately 77,700 A grades were awarded to A-level students in the UK. This year around 150,000 were awarded. So exams are twice as easy because the number of A grade students has doubled?
Let’s take another look. In 1989 an “A” grade was the highest grade you could get, but in 2017 it’s the second highest. The highest grade in 2017 is an A*...of which only 69,000 were issued. In other words the “top grade awarded” went down significantly, so exams are obviously getting much, much harder, right?
Not necessarily. In 1989 only around 600,000 students nationally even attempted A-levels, whereas this year it was around 830,000. So if we take the top grade as a percentage, we see the proportion of top grades awarded has gone from 11% to 8%, so exams have got harder but only by a small amount.
Now let’s look at GCSE grades. In 1988, 12.8% of students were getting the second-highest grade. In 2007 that number was 13.1%. So actually the difficulty level of exams hasn’t changed at all - the same number of students are getting the same kind of grades.
But a really interesting pattern emerges if we look at the years a new “top grade” is introduced. In 2011, 7.8% of GCSE students achieved a grade of A*. Compare that with 1994, the year the A* grade was introduced – that year only 2.8% of students got it. So the exams are getting easier?
Well no, this year they introduced the grade 9 and only 3% of students got it. So if you compare like with like, i.e. compare 2017 with 1994, then you get 3% of students achieving the top grade, so there has been no change. Exams are staying about the same.
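To make the spin concrete, here’s a quick sketch that recomputes the A-level headlines from the figures quoted above. (Treat the exact percentages as ballpark: they shift depending on whether you count exam entries or individual students.)

```python
# Approximate figures quoted above. The same data supports three headlines.
a_1989, entries_1989 = 77_700, 600_000       # A was the *top* grade in 1989
a_2017 = 150_000                             # A is only the *second* grade in 2017
a_star_2017, entries_2017 = 69_000, 830_000  # A* is the top grade in 2017

print(f"'A grades have doubled!'         {a_2017 / a_1989:.1f}x")
print(f"'Top-grade numbers have fallen!' {a_star_2017 / a_1989:.2f}x")
print(f"'Top-grade rate dipped a bit':   "
      f"{a_1989 / entries_1989:.0%} -> {a_star_2017 / entries_2017:.0%}")
```

Three headlines, one dataset: which one you print depends entirely on which pair of numbers you choose to divide.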
This year there has been a 0.4% drop in grade 9s/8s compared to last year’s A* grades for English GCSE. That’s the headline most newspapers are worrying over. Except what’s not being mentioned is that this grade-dip is for English language and literature combined. If you look at English literature (all students sit two English GCSEs) we’ve actually gone up by over 2%.
The point I’m making should be obvious. Depending on which years I pick and which grades I choose to look at, I can spin any story I want. If I were in government I might want to make it look like grades were going up under my party. Or perhaps I might want to make it look like grades went down under the opposition. If I were the head of an exam board I might want to make it look like grades are staying level and that everything is nice and fair. We have to be very careful what we’re looking at.
The statistics are complicated. However, it is reasonably accurate to say there has been a slight increase in the percentage of “top grades” being awarded over the past twenty years. Grade inflation is a real thing, albeit a very subtle one. But that doesn’t necessarily mean exams are getting easier.
If exams were getting easier then we wouldn’t see sudden dips when a new grading system is introduced like we did this year and in 1994. Actually, the most sensible conclusion to draw would be that grades increase as a function of familiarity. Change how familiar the exam is and you’ll see a dip in grades. What you might really be seeing in those figures is that people do better each year, provided it’s the same style of exam.
Teachers get used to the types of questions, pupils have access to more past-year papers, examiners have more trustworthy mark schemes, exam-writers have done it before so they can give more training to teachers on what to expect etc.
Actually, a very steady increase in grades is precisely what you would expect if the exams were staying more or less the same. The grade-inflation data we see is small, implying that it’s more about adaptation rather than exams getting easier.
Today, partly thanks to the fact that schools are shifting to online data storage, we can keep past-papers from previous years and give them to our students. In fact, at my school I have done video-recordings of myself answering previous Physics papers. Students can log on to the school network and watch me as I answer a question, describing my method as I go. This is very specific coaching which gives them a slight edge. And that’s a good thing.
The downside is that we spend a lot of time “teaching to the test” rather than teaching a subject for the fun of it, and we put waaaaay too much emphasis on answering exam questions. It has to be said that I have been able to train some students to jump through hoops and over obstacles and squeeze them over the boundary of an A grade when they don’t really understand the science any better than a student who gets a B.
Maybe I’m actually causing problems for them further down the line by doing that. I have occasionally coached a student to get an A grade, and they’ve gone to University only to find they don’t really understand the subject as well as they thought and have dropped out. Perhaps I should just let students do a bit worse and not train them in the art of the exam? Hmmm that's a tricky one.
Ultimately, once teachers get to know how an exam system works they can train the students to do better at it so we see an increase in grades. The problem is that this puts teachers in a difficult position. The government tells schools to raise their standards. If the grades don’t go up then we’ve failed to do it. If the grades do go up then it’s because the exams are easier. It’s a no-win scenario which is not something anyone wants to face.
Besides, I’m not sure “more A grades” necessarily equates to a higher standard of education. At the moment more A grades just means more students better trained to pass exams. Is there a risk that some of the A-grade students aren’t really comparable to A-grade students of yesteryear because they’ve been coached to pass an exam rather than having a deep understanding of the subject?
What Are Exams Like Now?
A report commissioned by Ofqual (The Office of Qualifications and Examinations Regulation) in 2012 really bugged me. It decided that looking at grade boundaries wasn’t a good way of deciding if exams were getting easier. In order to solve the problem they did a detailed analysis of exam papers from 2005 and compared them with exam papers from 2008...in two subjects (Biology and Geography). That would be like looking at the weather in two cities a week apart and drawing conclusions about climate change. That’s far too narrow a data set.
The report then claimed that yes, exams were getting easier. Most of this conclusion came from two factors. Let’s look at the first one.
Ofqual noted that older exams had more essay-questions while modern exams had more multiple-choice. Therefore modern exams are easier. The assumption seems to be that essays are hard and multiple-choice is easy. Let’s break that nonsense down.
Working as an exam-marker isn't a fulfilling job. You get paid for every exam script you mark (not very much) so the aim is obviously to get as many done as quickly as possible. After a 10-hour day in school you go home, log on and spend another five hours staring at the same question over and over again, clicking buttons on a screen.
Do you think every line of every essay is closely scrutinised? Or do you think some markers just skim read and decide the mark based on a general impression? I’m not saying that’s what should happen...but what do you think does happen?
Personally I know a lot of students who feel confident writing essays. Use the right keywords, keep your grammar up to scratch, drop in some phrases you know the examiner is looking for and you can bluff your way to a high grade. In multiple-choice there’s a clear right or wrong answer and you can’t argue the point. An essay gives you room for manoeuvre and interpretation. A tick-box does not.
You might argue that in a multiple-choice question, at least you have the correct answer written somewhere in front of you. But if you know the answer to the question, having it as a multiple-choice makes no difference...you’d have written the correct answer anyway. If you don’t know the right answer then you’re at no advantage. Yes the right answer is written in front of you, but so are four incorrect ones. If you make a guess you’re 80% likely to get it wrong. Does that make multiple-choice sound easier?
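The guessing arithmetic is easy to check. Assuming five options per question and pure blind guessing (the paper size here is a hypothetical example), the chance of reaching even a modest pass mark is a straightforward binomial tail:

```python
from math import comb

def p_pass_by_guessing(n_questions, n_options, pass_mark):
    """Probability of scoring at least pass_mark by blind guessing alone
    (binomial tail: each guess succeeds with probability 1/n_options)."""
    p = 1 / n_options
    return sum(comb(n_questions, k) * p**k * (1 - p) ** (n_questions - k)
               for k in range(pass_mark, n_questions + 1))

# One question, five options: a 20% chance, i.e. 80% likely to be wrong.
print(p_pass_by_guessing(1, 5, 1))
# Guessing your way to half marks on a 40-question paper: vanishingly unlikely.
print(p_pass_by_guessing(40, 5, 20))
```

So the “answer is written in front of you” advantage evaporates over a whole paper; guessing gets you nowhere near a pass.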
The second issue the Ofqual report highlighted was that some of the Biology papers had less emphasis on scientific content and more on softer things like context. That is true, but it can actually make answering the questions harder.
Here’s an example. When I worked as an examiner there was a question on an exam I marked which said “explain why graphite is used in pencils.” I saw one student who gave the following answer “graphite is composed of layers of hexagonally arranged carbon atoms in a 2D lattice. These layers have weak van der Waals interactions between them meaning they will slide off each other, allowing the graphite to be scraped as pencil lead.”
That answer is scientifically perfect. It’s a “hard science” answer. But guess what, that student got zero marks. The mark scheme wanted you to say “graphite is dark and brittle.” That’s a soft answer. It’s what a five-year-old child would say...and that doesn’t make the question easier to answer. It makes it harder because you’ve got no idea what the examiner wants you to say if they’re not looking for the Science.
I actually complained about that question because it was punishing students who had better scientific understanding and favouring those who answered like children. I wrote to the exam board and explained why I thought the mark-scheme should be changed. They ignored me, so I quit.
What the Ofqual report seemed to miss is that asking straightforward science questions is easier for a well-prepared student to answer because they know what’s expected. So I disagree with Ofqual vociferously. Exams are not getting easier unless you’re naïve enough to assume that certain types of question are inherently “easy” rather than acknowledging different students have different strengths and weaknesses.
Today’s GCSE physics students have to memorise 23 equations for their exam, whereas previous years were given a data-sheet to consult. I’m not sure I even know 23 equations off the top of my head. When I need to know an equation I do what every single scientist in the real world does...I look it up.
In English, students are no longer allowed to take their books into the exam to reference certain passages of text. In Chemistry A-level, students are expected to know over 40 reaction pathways, most of which won’t get asked about. And the same is true across any subject. Exams are hard regardless of which year you’re looking at. But even comparing content like this is a bit pointless, because the grade boundaries are constantly changing.
How do Grade Boundaries get Decided?
Because the exam is different every year, grade boundaries change with it. At university the grade boundaries for your final exams don’t fluctuate, so if you happen to sit your paper during a tougher year, that’s just tough luck. University departments always have internal moderation panels to try and make sure the exam questions are fair, but it’s obviously never perfect.
The idea of moderating grade boundaries is to get around this problem. If the exam is harder, the boundaries are lower so you don’t get everyone failing. If the exam was really easy the grade boundaries are higher so you don’t get everyone passing who doesn’t deserve it.
But we’re faced with the same problem: how do we actually do this moderation? Do we assume the top 10% of students will be the best, so we give them all A* grades no matter how well they did? Then we just go down by 10% for the A grades and so on?
There’s an obvious reason not to do that. It makes the assumption that the abilities of students will be in the same proportion every year. There are going to be fluctuations each year, so chopping things off every ten percent doesn’t seem fair. And what if the number of students changes one particular year? Usually around 5.5 million students are entered for GCSEs, but in 2017 it was 3.6 million. A fixed-percentage rule then hands out a different number of top grades; are those students comparable to the ones who got top grades the previous year?
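Here’s a deliberately crude sketch of that fixed-percentile (“norm-referenced”) rule, just to show why it behaves oddly. All the numbers are made up: two invented cohorts with identical ability distributions, differing only in size.

```python
import random

def top_grade_boundary(scores, top_share=0.10):
    """Award the top grade to the best `top_share` of scripts and return the
    lowest raw mark that still earns it. A crude norm-referencing sketch."""
    n_top = max(1, int(len(scores) * top_share))
    return sorted(scores, reverse=True)[n_top - 1]

random.seed(1)  # hypothetical cohorts drawn from the same ability distribution
cohort_a = [random.gauss(55, 12) for _ in range(3_600)]
cohort_b = [random.gauss(55, 12) for _ in range(5_500)]

# Same 10% rule, same ability spread, yet the boundary mark drifts and the
# *count* of top grades differs purely because the cohort sizes do.
print(round(top_grade_boundary(cohort_a)), int(len(cohort_a) * 0.10))
print(round(top_grade_boundary(cohort_b)), int(len(cohort_b) * 0.10))
```

Real boundary-setting mixes statistics like these with examiner judgement, which is part of why the process is so opaque.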
Either we keep grade boundaries the same each year and make an effort to keep exams of comparable difficulty, or we go through all sorts of committee procedures to moderate the grades after the fact. And this is Britain...so it’s the latter we go for.
The exact process by which grade boundaries are decided isn’t made clear unless you’re one of the senior examiners, but if you’re curious here’s the website of the Exam board OCR explaining how two students who both score 61/80 end up with different grades: http://www.ocr.org.uk/ocr-for/learners-and-parents/getting-your-results/calculating-your-grade/
That sounds potentially ludicrous. Part of the justification is that one of the students attempted more complicated questions. But more complicated according to whom? We’ve already seen that Ofqual considers essay questions harder than multiple-choice with very little justification, so deciding that one question is harder than another can vary from person to person.
Either the two students sat different papers (undermining the whole point of standardised testing) or one of them attempted trickier questions on the same paper. That’s like saying a single 2-mark question is more valuable than two 1-mark questions. Is it? Says who?
It turns out that grade boundaries come down to examiner opinion. If some examiners think a particular question is trickier or easier, this can affect a student’s grade after they have already sat the exam, and there’s nothing the student can do about it.
The key message is that an A grade one year is not necessarily equivalent to an A grade the year after. You might immediately say “yes but the previous year’s paper was harder, so you can’t compare how they did one year with how they would have done the previous year.” Which is entirely my point. You can’t compare two years because the exams are different. So there’s no point speculating on which was easier or harder.
I’m alright with you saying that a student who gets an A grade has done well, better than a student who gets a D...obviously that’s true. But that kind of broad statement is all we can honestly say. If we try and get more specific, analysing how students have gone up or down, we're extracting more information than is really there.
Comparisons are Deadly
On the government GCSE-results website you can find the following quotation: “It is always difficult to compare in a meaningful way grade boundaries between old and new qualifications”. That’s a very fair thing to say. Well done government!
It’s just a shame they undermine their own message on the very same web-page with the phrase “Overall results are stable comparing outcomes last summer with outcomes this summer” (I’ve paraphrased it because the original sentence is three times as long and adds nothing).
Using the word stable seems like a mistake to me. Stability implies something isn’t going to fall in the future, or hasn’t fallen compared to the past. But if we’ve already agreed we can’t compare present, past and future, what do we mean by saying the grades are “stable”? It’s almost like “stable” was just a positive-sounding buzzword which doesn’t actually convey much meaning. Hmmm.
In the UK, exam criteria change about every six years. Each school picks a different exam board and as Ofqual’s own report found, there was a different style of exam even three years apart. How can we possibly hope to extract meaningful data looking thirty years apart?
The Chemistry A level exam at the end of 2014 was fairly reasonable, but the one at the end of 2015 literally made the news because it was so difficult (I saw dozens of students coming out of the exam hall in tears that day). Two exams in the same topic one year apart can be wildly different.
With past papers available and teachers teaching to the test, exam boards have to constantly write tougher questions to keep the exams a challenge. It’s an arms race between students’ preparation and an exam board’s desire to actually test them. It gets to the point where if a student writes that a chemical is “blue/green” they get the point, but if they write “turquoise” they don’t (I’ve seen that happen too).
The fact is that it’s not possible to judge the quality of exams by looking at the grades. If you think exams used to be easier, try teaching a class of students. Or better yet, try sitting your children’s exams yourself and see how well you do. Here’s a maths question from an Edexcel GCSE paper a few years ago. Remember, this is testing “general” maths education for 16-year-olds.
Personally, I think things are harder...but not because of the exams
As someone who sat A-level exams just over ten years ago, I’ve seen a decade’s worth of exam material and it looks about the same. Some bits were harder, some bits were easier.
I mean, that’s just my personal opinion...but the exam boards are using the same kind of judgement to set grade boundaries, so I don’t see a problem. From what I can tell, the quality of questions is “stable”. There are fluctuations year on year, but the exams today’s students are sitting are no harder, nor easier, than the exams their parents sat.
However, there is something else which I think has to be factored in and which you can’t quantify. That makes it rather hard to write about in a science blog, so I’ll make it clear: at this point I’m going into anecdote and speculation. What I have noticed is that students today are under more pressure than their parents were. A lot more.
I have seen students vomiting in exams from stress. I’ve seen them pass out. I’ve seen scores of students having intense anxiety attacks and I’ve even seen one or two wetting themselves. No, it isn’t pretty. Horrible to read about, right? Imagine being a teacher who cares about these kids. Or imagine being the student themselves.
Imagine you’ve been studying something for three years (in the case of GCSEs) or two years (in the case of A-levels) and now you have to prove yourself in the space of two hours and it’s your ONLY chance. Imagine knowing you’re in competition with 5 million other students and the grade boundaries shift at the whims of the examiners. Imagine having to study 10 subjects (only 3 of which you actually chose to do). And imagine being told your entire future depends on them.
Students are given benchmark grades in year 7. They’re given mock exams in year 10 and then twice in year 11. There are catch-up sessions, warm-up sessions, workshops, after-school extra lessons and students are constantly tested (every three weeks roughly). Not only this, but they are repeatedly warned about the risks of doing badly in exams and how their life will be over if they don’t get the grades.
Imagine being in a frightening, results-driven environment which is compulsory, where you don’t get paid and you’re judged as a person based on a few hours’ worth of work. School in the UK is stressful for kids. Give them a break.
Yes, of course exams should involve stress. I remember working myself silly when I studied for my A-levels, but it was nothing like what I’m seeing today. I don’t really know what the cause is (I have a few guesses but this blog is already too long) but something isn’t right with this picture. When you have dozens of students crying their eyes out before sitting a mock exam, something has gone wrong somewhere.