One year ago, I published a couple of blogs outlining (1) How I somehow became a science author and (2) How I wrote my first book: Elemental. I concluded them both by saying that whether my book was a success or not, I was just honored to have the chance at getting one published. It probably seemed like I was covering my ego’s back there, but I was being truthful. Really.
It was actually quite a surprise to feel that way. I assumed that in the run-up to release date I’d be pining for success, but I actually became very stoic about the whole thing…I was just happy to have my own book. If it flopped then so be it. How many people get such an opportunity in the first place? Obviously, I wanted people to read and enjoy my work, but I was not aiming for glory.
It was released on July 1st 2018 and, after finishing up my day-job as a school teacher, I headed to my local bookshop to do a signing. It has to be said, it’s a very cool feeling to see your book on the shelves amid the works of…y’know…real authors. But it felt more like the end of a journey rather than the beginning of one. The book was complete, the work was over, now I just had to accept whatever happened.
In fact, if I’m being totally honest, I didn’t expect Elemental to be a hit. It’s a biography of the periodic table - something most people famously hated in school. I figured I’d sell a few hundred copies, to my gran perhaps, but that would be the end of it. The shock of learning that Elemental has been a success still hasn’t sunk in.
“Yeah it’s doing OK thank you”
People are so supportive when you do something like this and I get asked all the time how the book is doing. It’s really nice that people take such an interest, but I tend to respond in the same way every time I'm quizzed about it. I sort of shuffle my feet and mumble bashfully that it’s doing fine thanks.
I have no clue why I act like this! I respond to people asking about my book the same way I would respond if they were asking about an inflamed gall bladder - like I’m ashamed of the success or something? It’s flattering that people actually want to hear about it, but I guess it's because I don’t know what the etiquette is for an author whose book is doing well. Do people want sales figures? Do they want to know how much money I’ve made? Do they want to know what the critics have been saying?
I tend to be fairly coy about the whole thing and people have to dig it out of me, but I’ve been informed by enough people that being a tad boastful about my achievement would be acceptable, even healthy.
I dunno, it feels a bit weird to acknowledge it, but I will say that Elemental did a lot better than I or my publishers anticipated. It sold out on Amazon within a few weeks and they had to print a second run. Bookshops had to order double the typical amount, it was stocked in at least eight countries that I’m aware of, and it has been translated into three other languages. I was featured in science magazines, did interviews for BBC radio, and The Daily Mail listed it as one of the top books of 2018. Even The New York Post arranged an interview with me for the US release - although sadly that never made it to print (I’m not quite that famous yet!)
However, the most gratifying thing about the whole experience, more exciting than the prestige of telling people I’m an author, is the messages and reviews I get from people telling me how much they enjoyed and learned from it.
I’ve received e-mails from people I’ve never met in countries I’ve never visited whose language I don’t even speak, telling me they enjoyed Elemental. I’ve had people e-mail me saying the book has persuaded them to study chemistry at University and I’ve even had people tell me they’re reading it to their kids as a kind of bizarre bedtime story.
The positive response has been worth all the stress and gave the publishers confidence in me as a writer. That was the main reason I wanted to do well…so they’d give me a chance to write more! And, thanks to the response of my loyal readership, two weeks after the release of Elemental my publishers at Little, Brown offered me a deal for a sequel.
They liked the format of Elemental, a humorous and informal guide to chemistry, so they asked what other topics I could apply it to. This was quite different to last time. When my agent and I first approached publishers we were trying to persuade them to take a chance on me, but now that I had proven myself they wanted to see if I had more tricks up my sleeve. I did. There wasn’t even a moment’s hesitation. It had to be quantum mechanics...
When people ask what my favourite area of Science is, I usually respond with the same joke: “Oh I don’t have a favourite, I love all of it. Also, quantum mechanics.”
When I was a teenager my science teacher, Mr Evans, gave me a textbook on the subject and, putting it simply, I fell in love. That sounds mawkish but honestly the feeling wasn’t all that different. I became obsessed with it to the point of adoration and could think of nothing else. Studying it made me happy and I wanted other people to see its beauty.
The basic premise of quantum mechanics is that there are two universes around us. There’s the universe of everyday “big” things, where the laws of logic and common sense hold, and then beneath the surface, at the scale of atoms, there’s a different world entirely; a world where the normal laws of physics no longer work and you have to let go of common sense for it to make sense. Quantum mechanics is full of parallel universes, teleportation and time travel, and it approaches profound questions of spirituality and consciousness.
Don’t get me wrong, I love chemistry and Elemental was a really fun book to write. But this one was going to be a passion project. Something I would be writing from the heart. First though, I had a difficult question to answer.
How the hell do you write about quantum mechanics in plain English?
As soon as the publishers gave me the greenlight, I outlined my chapters, got a library of textbooks by my bedside for research and then…I’m just going to admit this plainly…I was hit by a wave of self-doubt.
The pressure of a second book was enormous. Surely I should be playing it safe and writing about something easy! Why had I picked, of all things, quantum mechanics for my sequel? I kept thinking of when Josh Trank was hired to direct the Fantastic Four movie following the success of his small indie sci-fi horror Chronicle. Trank was not ready to tackle such a huge project and the result was a total mess - one of the worst superhero movies in history. Was I in danger of making the same mistake? What if I was a one-hit wonder whose first book did well only as a fluke?
Elemental worked because chemistry can be described in simple terms, without having to get too bogged down in technicality. Quantum mechanics, on the other hand, is so abstract and counter-intuitive that explaining it in plain English is impossible without covering the deep science. In chemistry, you can go straight to the fun bits without having to lay any conceptual groundwork, but in quantum mechanics it’s the reverse. In order to get to the cool bits you have to do the tricky stuff first.
That’s the reason physicists prefer to communicate about quantum mechanics through equations - describing this stuff in words is difficult, so it’s easier to come up with a bunch of symbols that represent “the weirdness” and not worry about understanding them. But I wanted to write a book about quantum mechanics without a single equation. That’s not impossible - if you can say something in mathematical symbols you can say it in English-language ones - but it’s a major challenge.
Then there was the problem of which bit of quantum mechanics to focus on. Some books focus on the experimental details, some tell the historical story of how we came up with it, some focus on pure explanation and some handle the philosophical implications. I wanted my book to be all of those things. I wanted to write a complete tour of the quantum landscape, but maybe I was in danger of becoming the Victor Frankenstein of Science popularisers - cobbling together things which did not belong and, in my hubris, creating a monster.
Then there was the biggest threat of all...quantum mechanics is a subject close to my heart. It’s always a risk when writers, musicians, filmmakers etc. get to make their passion projects because they can become self-indulgent. I wanted to make sure my second book wasn't just me going on about something I loved, I needed to show other people why I loved it and why they should love it too.
A Sweeping Epic
As soon as the contract was signed I felt I had bitten off more than I could chew. I started to doubt it could be done or whether I was the right person to do it. In fact, for the first few weeks I didn’t even begin typing - I was too afraid of writing something dreadful. But then I was reminded by a friend that this was a book I’d wanted to write since I was a teenager and that I had a lot to say. If I just got to work without second-guessing myself, maybe the book would just flow out of me. I decided to heed this advice and got on with writing the damn thing. Sure enough, once I started, I couldn’t stop.
Initially, one of the things which intimidated me was that the publishers asked for an 80,000 word manuscript. Elemental was half that size. The task of writing something so huge was daunting, but once I began, I found the stories I wanted to tell forming on the page as if it wasn't me writing them. By late August, I was up to 60,000 words with roughly two thirds of the intended material covered. I was going to hit my target…or so I thought.
A week before school term started I headed to publisher HQ in London to discuss my progress. I explained in a meeting, rather proudly, that I was on track with 60,000 words done already. At which point my publisher stared in confusion: “How are you going to cut it down?” he asked.
I think everyone experiences these moments of horror at some point in their lives. It’s the feeling you get when you suddenly know exactly what the bad news is going to be, but you have to ask for it anyway. Turns out there had been a typographical error in the contract. They wanted the book to be 45,000 words max.
God knows how someone accidentally types 80 instead of 45, but I was now seriously over my word limit, with only two thirds of the book done, and the deadline approaching in a few months. And I was about to start back at school (which is a pretty time-consuming job).
The suggestion was made at one point that I split the book into two - one focusing on the history of quantum physics and one focusing on recent developments. I probably would have made a bucket-load more money doing that, but I didn’t want to pull a Deathly Hallows on my readers. People don’t like paying twice to get one story. So I decided I would just write the book in full, then trim it down from whatever size it ended up at. The final first draft weighed in at 76,000 words, which I had to reduce by 40%. The only way to do this was to be ruthless.
My Only Advice
I don’t feel like I have much advice to give on the topic of writing. I’m new to it as a professional, but the one thing I would say to anyone wanting to become a writer is: pick your test audience well and listen to what they say. Chances are your first draft isn’t going to be a masterpiece and by the time you’ve finished it and put all that work in, you’re too close to know which bits work and which bits don’t. You need to get outsider opinions, you need to trust that they’ll be honest, and you need to act on their feedback.
As with my first book, I recruited a group of people to read the book from different perspectives and be cuttingly honest. I got friends who knew nothing about quantum mechanics, friends who were enthusiastic about it but not necessarily experts, friends who had degrees in the subject and friends who had no interest whatsoever…and asked them all to tear it to pieces as best they could.
Your ego has to take a hike here, because you’re not writing the book for yourself anymore, you’re writing for your readers. The early drafts are where you selfishly write the book as you think it should be…then you have to make it worthy of others. You can’t just sit there feeling smug; you have to expose it to criticism and actually accept it. Don’t argue with the people who review your early drafts, otherwise what’s the point in getting them to read it?
There were jokes which didn’t work and had to be removed. There were sections that made no sense or contradicted what I’d said earlier. There was even a bit where the legal team had to intervene, because I’d spent a whole chapter making fun of a scientist I had forgotten was still alive and liable to sue. But, over the course of several stressful but productive months, we battered the book into shape and by the end of January 2019 it was ready: 45,000 words, with a week left on my deadline.
Ready for Round Two
The title for my second book had been something I’d joked about since before Elemental. Because Elemental was all about the elements, a book about the fundamental laws of particle physics should be called Fundamental. Presumably my future books will have to be about the brain (Mental), climate change (Environmental) and teeth (Dental).
Discussions then began about what the front cover would look like. As I explained in my previous blogs, the cover is of great importance because that is often the only advertising a book gets. We decided to model the design on a similar theme to Elemental - a simplistic image that would communicate a straightforward approach, as well as looking vaguely friendly and non-intimidating.
At least a dozen e-mails were exchanged about capitalization of words in the subtitle and which letters should be upper and lower case (really) as well as font sizes and styles. This attention to detail still surprises me, but it really is a testament to how seriously publishers and graphic designers take their craft. They absolutely want to hone the design to a point of perfection, so that everything about the cover says “give this book a go”.
Then came the audiobook. With Elemental, the audio was recorded by voiceover artist Roger Davies, but for this one we decided it would work best coming from my own throat. I headed down to ID Audio Studios in London, where such luminaries as Olivia Colman, Bill Nighy, Roger Moore and Richard E Grant have recorded books, and spent two days talking into a microphone as a producer directed me (mostly telling me to slow down, because I have a tendency to talk fast when I get enthusiastic).
And now, Fundamental is ready. It will be published in the UK and a few other European countries on August 1st 2019 in paperback, e-book and audiobook. You can pre-order it now on Amazon if you want (which may seem pointless from a consumer perspective, but it helps me as an author by encouraging bookshops to stock it), and now I am ready for round two.
I’ve been here before of course, but this time I’m far more nervous. With my first book, I was just thrilled to have gone on the adventure. But as I write this, with publication a few weeks away, I’m feeling very different. It’s not that this book is a more ambitious project, nor is it the fact that there’s more money involved. When I really think about it, my anxiety comes down to something very simple: I don’t want to disappoint my readers.
With Elemental I didn’t have a fanbase, so to speak. I mean, the website gets hits and I have followers on Instagram and YouTube, but my debut book went out all over the world to people who had never heard of me. This time I have fans to satisfy: a group of people who enjoyed and learned from my first book, and I want them to feel I’ve done them a service with the sequel.
I once heard an author, whose name I’ve forgotten, say “I hope my readers enjoy reading it as much as I enjoyed writing it”. I see where they’re coming from, but actually I want my readers to enjoy it more. Readers give writers their purpose and if you’re not concerned with keeping them happy, you’re just obnoxiously writing for yourself!
Fundamental was a fun book to write, but the only thing that matters is that other people read it, enjoy it, and learn from it. So, to all my fans out there, thank you for the overwhelming support you’ve shown for Elemental. I’ve put a huge amount of myself into Fundamental but I’ve written it for you. I hope you enjoy what I’ve created!
You can pre-order it here if you want to support the writing: Fundamental: How Quantum and Particle Physics explain absolutely everything (except gravity)
Welcome To Jurassic Park
If you’re anything like me, you probably have fond memories of Mr DNA, the animated strand of genetic material from Jurassic Park (shown below). During the first act of the film, entrepreneur John Hammond asks Mr DNA to explain how scientists have brought dinosaurs back to life so the audience can understand the plot. Interestingly, Hammond’s budget was sufficient to reverse 65 million years of evolution, but didn’t extend to animating Mr DNA with a head.
You probably also remember from school that the people who discovered DNA and figured out how it worked were James Watson and Francis Crick, who shared the 1962 Nobel prize for their work. But if you talk to most biologists today, you find that Watson and Crick are spoken of in the same shady tones that wizards use when discussing Lord Voldemort.
These two iconic figures, once heralded as the greatest biologists of the 20th century, have fallen into ill repute and their role in the DNA story has been exposed as a little less shiny than textbooks usually claim. Let’s look at the sordid story of DNA.
Oh and by the way, it’s important in scientific discussion to separate the scientist from their work. You may dislike a particular researcher but if their findings point to an obvious conclusion you have to put personal flaws aside and evaluate the discovery on its own merit. The fact that James Watson is on record as having made racist comments like claiming black people are intellectually inferior to white people is not something I need to mention in this paragraph. I probably won’t bring it up at all in fact.
What Is DNA?
When your mother was pregnant with you, her uterus had to find a way of turning all the food she ate into your body (happy belated mother’s day by the way). You did the same thing as you grew from a baby into an adult and are doing it right now as your cells die and need to be replenished.
You’re able to reconstitute food this way thanks to nanoscopic biological machines called ribosomes that live in your cells and have the ability to draw in chemicals from digested food before sticking them together in the right order to make a bit of liver, a bit of heart, a bit of lung etc. Ribosomes are like building contractors, but in order to do their job they need a blueprint. This is where DNA comes in.
DNA is the molecule which stores information the ribosomes use. It’s the molecule responsible for all your inherited characteristics and the reason evolution takes place at all. The way DNA works is ingenious but confoundingly complex, so I’m going to simplify it and give a crude physicist’s understanding of the process. Enjoy…
Firstly, there are four molecules we need to meet called Adenine, Thymine, Guanine and Cytosine. These molecules - collectively called nucleobases - are each bonded to two other types of molecule called phosphate and deoxyribose, which join together in a long chain (shown below). The backbone of the chain is made from alternating phosphate-deoxyribose units, with the nucleobases hanging off like pegs on a clothes line.
Nucleobases are attracted to each other and if you get two of these strands lined up side by side, the nucleobases link to form the rungs of a ladder. Due to their specific sizes and shapes, A always pairs opposite T and G always pairs up opposite C, meaning the backbones of the structure stay at a constant distance. Then, as you probably know, the chains twist into a double helix, like so…
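For the code-minded among you, that pairing rule is simple enough to fit in a few lines of Python (a toy sketch for illustration, not real bioinformatics):

```python
# Base pairing: A always sits opposite T, and G always sits opposite C
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the strand that would line up opposite this one in the helix."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATGC"))  # TACG
```

Give it one strand and it tells you exactly what the opposite strand must look like, which is also why a cell can rebuild the whole molecule from either half.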
When DNA is needed for decoding, the strands of the ladder are unzipped, exposing the nucleobases so that a ribosome can read them. There’s a whole bunch of steps which take place but the gist is that the sequence of As, Ts, Gs and Cs, are read by a ribosome like a cassette-tape fed through a player (if that analogy doesn’t make sense because you have no idea what a cassette tape is…ouch).
As ribosomes move along the nucleobase chain, they analyse it like fingers gliding over Braille. The ordering of the A, T, G and C molecules tells the ribosome how to arrange molecules from your food into a specific body-part protein and thus the living organism itself (shown below).
Changing the order of the nucleobases completely changes what the ribosomes build, which is why tiny variations in DNA can lead to major differences in the organism. Put the bases together in one order and the ribosomes will build a goldfish. Rearrange them just a little and you get a gooseberry.
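If you like to think in code, here’s a crude “ribosome” in Python. The three-letter parts list is invented for the example (the real genetic code maps 64 base triplets onto 20 amino acids), but it shows how the same bases in a different order yield a different product:

```python
# A crude "ribosome": read the DNA three bases at a time and look up
# which building block to add. The parts list here is made up for
# illustration; real cells use the genetic code's codon table.
PARTS = {"ATG": "start", "GGC": "brick-1", "CGG": "brick-2", "TAA": "stop"}

def build(dna: str) -> list:
    """Walk the strand in steps of three and assemble the product."""
    return [PARTS.get(dna[i:i + 3], "?") for i in range(0, len(dna), 3)]

print(build("ATGGGCTAA"))  # ['start', 'brick-1', 'stop']
print(build("ATGCGGTAA"))  # same bases, rearranged: ['start', 'brick-2', 'stop']
```

In a real cell the dictionary is the codon table, so shuffling even a single letter can swap one building block for another - goldfish versus gooseberry.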
Oh and technically I should mention there is a fifth nucleobase called Uracil which your bio-machinery uses as part of the process, but I’m going to ignore it in my explanation because it just convolutes things. Sorry Uracil, you aren’t needed for this. Ura-still important though. (I don’t know what I’m doing with my life).
So, Watson and Crick Figured That All Out?
The general idea of DNA was actually suggested by Charles Darwin in 1859 when he published On the Origin of Species. In order for his theory of evolution to work, it was necessary that genetic information be encoded inside a living thing somehow and copied with occasional errors. Obviously Darwin had no idea we needed to be looking for a specific molecule (we didn’t even know atoms existed at this point) but he knew the body had to have some mechanism for storing genetic information. Frankly, if we hadn’t discovered and figured out the behaviour of DNA, Darwinian evolution would still be just a hypothesis rather than a theory we teach in Kentucky high schools.
DNA itself was discovered ten years later by Friedrich Miescher who was doing experiments on bandage-pus obtained from a Swiss hospital (There. Right there. That’s why I chose physics and chemistry over biology). Miescher discovered that most white blood cells contain an acidic chemical in their nucleus - hence the “NA” part of Nucleic Acid - which had a lot of phosphates in it. Miescher had no idea what the significance of the chemical was, just that the body seemed to contain a lot of it.
Then, in 1878, Albrecht Kossel found that nucleic acid contained the nucleobases A, T, G and C, while Phoebus Levene discovered they were bonded to deoxyribose sugars - hence the “D” in “Deoxyribonucleic Acid”. The idea of DNA being made of chains with nucleobases sticking off them was suggested by Nikolai Koltsov, and we thus had a good idea of what DNA was. We just didn’t know what it was for.
That was until 1944, when Oswald Avery discovered something surprising about it. Avery found that by transferring the DNA of a harmful strain of bacteria into a harmless one he could convert the safe strain into a lethal one, i.e. the defining characteristics of a living thing - the very inherited characteristics Darwin had proposed - were carried by the DNA molecule. Figuring out the structure of DNA would give us the key to life itself.
Then, in 1950, Erwin Chargaff discovered that the amount of Adenine in DNA is always equal to the amount of Thymine, while the amount of Guanine is always equal to the amount of Cytosine - suggesting the nucleobases were somehow paired up. All we had to do was figure out how. And this is where the backstabbing begins. (Unlike Watson’s racist comments, which came several years later.)
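Incidentally, this equal-counts rule is exactly what you’d expect if the bases pair up across two strands. A quick Python sanity check (the sequence is made up, purely illustrative):

```python
from collections import Counter

# Base pairing across the two strands of the helix
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

strand = "ATGGCATTAGC"                               # any made-up sequence
duplex = strand + "".join(PAIR[b] for b in strand)   # the strand plus its partner

counts = Counter(duplex)
# Pairing forces A-counts to equal T-counts, and G-counts to equal C-counts
print(counts["A"] == counts["T"], counts["G"] == counts["C"])  # True True
```

Every A on one strand is a T on the other, so counting both strands together always gives equal amounts - which is what Chargaff was seeing in real DNA.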
Lady of Crystal
The same year as Chargaff’s discovery, a talented physical chemist named Rosalind Franklin came to work at King’s College London as a research associate with the Medical Research Council. She was given the task of analysing crystals of DNA using X-ray crystallography (a way of taking photographs of a molecule) alongside another scientist named Maurice Wilkins.
Franklin was a skilled scientist with several papers to her name, but felt a bit of an outsider, being one of the only Jewish researchers at King’s College. Her feeling of isolation was not helped by Maurice Wilkins who openly badmouthed her and treated her as a lab assistant rather than an accomplished scientist in her own right.
She persevered however and by 1951 had gathered useful data about DNA. In November of that year, she gave a lecture in which she explained “the results suggest a helical structure which must be very closely packed, containing 2, 3 or 4 co‐axial nucleic acid chains.” In attendance at this lecture was James Watson, a geneticist from America studying at the Cavendish laboratory in Cambridge. A week after hearing Franklin’s lecture, Watson and his lab partner Francis Crick proposed that DNA might be helical. Wonder where they got that idea from.
Watson explained in his book The Double Helix that he hadn’t really been paying attention to Franklin’s lecture however, because he was more distracted by her unflattering womanly appearance…so I guess…that’s a defence??? I mean we only have his word for it that he was more of a misogynist than a plagiarist, but in any case he relayed the gist of Franklin’s lecture to Crick and they built a 3D model of the structure: a triple-helix of deoxyribose-phosphate threads with nucleobases sticking out the sides.
As chance would have it, the following month Rosalind Franklin was visiting the Cavendish laboratory, having been invited by its director Lawrence Bragg. When Franklin saw the triple-helix model she immediately explained that it was chemically impossible because phosphate backbones repel each other, meaning the helix Watson and Crick had proposed would tear itself apart in seconds.
Bragg was so embarrassed by this that he told Watson and Crick to drop the project and leave the structure of DNA to Franklin. They officially complied and sent Franklin their disassembled model, possibly to give her a hand but possibly as a childish taunt.
Franklin continued her research and by May 1952 had perfected the technique required to crystalise DNA and take a snapshot. Her best result was an X-ray plate titled Photograph 51 (shown below) taken right down the axis of the helix, which was then written up for the Medical Research Council.
In January of 1953, Maurice Wilkins (the guy who hated Franklin) wrote to Francis Crick and suggested they collaborate on the structure of DNA again. He finished his letter by stating: “Let’s have some talks…when the air is a little clearer. I hope the smell of witchcraft will soon be getting out of our eyes” – referring to Rosalind Franklin who had recently applied to be transferred.
Then, on 30th January, James Watson was visiting Wilkins to complain that if they didn’t solve the structure of DNA, somebody else would get the glory (most likely the American Nobel prize winner Linus Pauling who had recently published his own triple-helix model). Unable to find Wilkins, Watson instead went to Rosalind Franklin and got into a row with her after telling her she wasn’t able to interpret her own data and would need his and Crick’s help to do so. Wilkins arrived on the scene and took his friend Watson away from “the witch” and then decided to comfort him by showing him Photograph 51 – without Franklin’s permission.
Watson went straight back with the information and Crick began speculating on what it might be showing. He had recently come across Chargaff’s discovery that nucleobases were paired together but couldn’t figure out how. Then came the crucial month. February 1953.
Round about Valentine’s day, Rosalind Franklin wrote in her lab notebook that DNA was made from two chains of deoxyribose-phosphates, wrapped around the outside with nucleobases on the inside. Basically, she solved the structure of DNA. At roughly the same time, Max Perutz, Francis Crick’s thesis advisor, showed Crick Franklin’s data from the unpublished MRC report – again without Franklin’s permission – and Crick made a crucial deduction. The two strands of DNA wound about each other in opposite directions.
He and Watson set about building a model to show this and finally, on 28th February, Crick announced to his friends in a local pub that the structure had been solved. Franklin was already in the process of writing up her own research and, on 17th March, learned that Crick had already begun announcing himself and Watson as the discoverers.
Graciously, she added a note to her paper saying that her results agreed with their structure and on 25th April, Watson and Crick published the idea. Watson and Crick did at least admit in the article that their work was “stimulated by the unpublished ideas” of Franklin but gave little indication that she basically came up with most of it.
Sadly, Rosalind Franklin died in 1958, four years before the Nobel prize committee awarded the 1962 prize for DNA, and since the prize is not awarded posthumously her name was not featured. Instead, it went to Francis Crick (who published the double helix theory first), Maurice Wilkins (who did some of the experimental work) and James Watson (a scientist).
So the timeline is roughly as follows...
1859 – Darwin proposes the idea of a genetic code
1869 – Miescher discovers DNA
1878-1928 – Kossel, Levene and Koltsov figure out what DNA is made of
1944 – Avery discovers what DNA does
1950 – Chargaff discovers nucleobase pairing
1951 – Franklin suggests DNA is a helix, Watson attends the lecture but doesn’t get it right
1952 – Franklin takes “Photograph 51” which looks helical (May)
1953 – Maurice Wilkins shows Watson Photograph 51 (January); Watson then tells Crick about it
1953 – Franklin almost figures out the structure (early February)
1953 – Perutz shows Franklin’s data to Crick (mid February) who figures out the structure
1953 – Crick announces the structure has been solved (late February)
I am the Law
Franklin was treated horribly by the men involved; that much isn’t in dispute. Even Crick admitted “I'm afraid we always used to adopt -- let's say, a patronizing attitude towards her.” The human interest story is therefore that Franklin was mistreated by three men who got rewarded, with her name becoming a footnote. However, the question remains: did the men break any codes of conduct or were they just being sneaky?
Was Wilkins wrong to show Watson Photograph 51 without Rosalind Franklin’s consent? Was Perutz wrong to show Franklin’s data to Crick? Was Watson “stealing” Franklin’s helix idea after seeing her lecture or was he simply building on her work? The morality is a little unclear for one big and important reason: there is no law or governing body in Science. Science works as a collaborative effort and the sharing of ideas is a necessary part of the process – which kind of muddies the waters on what counts as stealing an idea and what counts as testing it.
The only real law scientists hold to is: “don’t make it up”. Other than that, Science is the search for truth and you can’t trademark that because it belongs to everyone. It’s largely accepted that scientists should give each other credit when appropriate, but if people choose not to, there is no “official punishment”. Science is a self-regulating community with nobody in charge, which means that if a scientist is unethical it’s up to other scientists to exact informal justice.
Sometimes, the scientist’s university will strip them of their titles (as happened to Watson when he made those comments about black people), sometimes they will not get funded again, or never be published in another journal. But they don’t have their Science license revoked and go to Science prison because there’s no such thing.
In the case of Watson (and to a lesser extent Crick) the general response has been to simply judge them as jerks and subtly badmouth them wherever possible. What else can we do? Franklin was 95% of the way to solving DNA but in fairness Crick was the guy who made the final step and published first.
If we assess the facts dispassionately then I think Crick does deserve some of the credit for the DNA discovery. That doesn’t seem fair because he solved it by nefarious means, but I said at the beginning that we have to evaluate the science and the scientist separately. Crick did make a contribution so he deserves to be acknowledged, but I am still allowed to say that what happened to Franklin was downright despicable!
The Great Relay Race of Science
Watson, Crick and Wilkins’s behaviour toward Franklin was not nice but DNA got solved and that’s what matters. We got to the final answer in stages rather than as one revolutionary breakthrough and it’s hard to single out any one person as having been the most instrumental (although Rosalind Franklin is probably the standout candidate, having both carried out the experiments and interpreted the data correctly).
Science is often like a relay race where each person gets the baton for their stretch of the track. The person who actually crosses the finish line (Crick) might get the cheer but they are no more important than the other members – remove any one of them and the whole team loses.
Sadly, or perhaps fairly, that’s how credit tends to work. It’s the person who gets the answer first who is praised, even if they were just adding final touches to other people’s ideas. This is a result of human psychology more than anything else. We like praising people for achievements and we aspire to be like our heroes, but our brains are wired to focus on individuals rather than ensembles. Unfair perhaps, but nobody ever said evolution was fair – thank DNA for that.
As a final thought, I will share that I was recently the subject of outright scientific plagiarism myself. A blog I wrote on “The Science of Infinity Stones” was copied word for word by someone who I will not name and reposted on Instagram without crediting me. They didn’t even paraphrase the damn thing – they literally copy-pasted it word for word and blocked me before I could let anyone know. The person’s account has tens of thousands of followers, many of whom commented how great the post was and that the person should write a book (irony).
I was mildly annoyed about this for a moment, but then I realised I didn’t care that much. I wrote the blog for free and just wanted to entertain and educate. The phrasing and humour are obviously my invention but the Science isn’t “mine” at all. Ultimately, my ideas were being read and people liked them – that’s pretty gratifying in itself!
Getting credit is nice because it boosts the ego but if I’m honest, that’s not the reason to do Science or to teach it. You do it to make the world a better place and sometimes that has to be good enough. Of course, if that guy wins a Nobel prize for my post, I might change my tune.
If you want to find out a bit more about the complex history of Franklin, Crick and the other two, check out my sources...
If I Could Talk With The Animals...
Most animals on Earth engage in some form of communication. Baboons rub feet in each other’s faces to signify “I am in the mood for sex,” herring gulls tap their beaks on the ground to let the young know “I have food,” and cats sharpen their claws on your ankles to make sure you know “you ain’t all that.”
My favourite mode of animal communication however is easily that employed by honey bees. When scouts want to describe their nectar finds to the rest of the hive, they perform what is genuinely called in the literature a “waggle-dance”. They shake their rears around in a figure eight, with the length of the dance indicating distance from the hive, and the angle they make to the vertical axis of the hive translating to the angle between the sun’s position in the sky and the food source. They also secrete pheromones to indicate how good the source is, meaning rival bees have to argue over whose find is superior. That's right. Bees communicate using bum-dance trigonometry battles.
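For fun, the dance's encoding can be sketched as a tiny decoder. This is a heavily simplified toy model, not real apiology: the one-second-per-kilometre calibration and the example numbers are invented purely for illustration.

```python
def decode_waggle(duration_s, dance_angle_deg, sun_azimuth_deg):
    """Translate a waggle dance into a rough food-source location.

    duration_s      : length of the waggle run in seconds (longer = farther)
    dance_angle_deg : angle of the run measured from vertical on the comb
    sun_azimuth_deg : compass bearing of the sun, degrees clockwise from north
    """
    # Illustrative calibration: assume ~1 second of waggling per kilometre.
    # Real bees vary by species; this constant is an assumption.
    distance_m = duration_s * 1000
    # Straight up on the comb means "fly toward the sun", so the real-world
    # bearing is the sun's bearing offset by the dance angle.
    bearing_deg = (sun_azimuth_deg + dance_angle_deg) % 360
    return distance_m, bearing_deg

# A 2-second waggle, 40 degrees right of vertical, sun due south (180 degrees):
print(decode_waggle(2.0, 40.0, 180.0))  # → (2000.0, 220.0)
```

In other words, the dance is a polar coordinate: a distance plus a bearing relative to the sun.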
Sadly, humans have not mastered this subtle art, but we have invented something truly remarkable for sharing ideas and information: languages. Six thousand five hundred of them are known to our species alone, so is it possible that other species could develop something similar?
First off, I think we can argue that many other species have “words” – unique noises which convey a meaning. Chickens for instance have distinct clucking sounds for “predator approaching from above” and “predator approaching on the ground”, indicating that the noise is not mere panic – it is telling other chickens vital information.
Squirrels take this even further with their barks: they are more likely to make a warning call if members of their family are close and less likely to do so if there is a rival squirrel in the area. In other words, some animals can not only use sound to convey information, they can change their noises depending on who is listening. There is even a fascinating project being carried out at the University of Washington called DeepSqueak, which aims to build a computer capable of translating mouse-squeaks into English.
You might consider these noises to be nothing like words because they are just simple sounds, but I would immediately dispute that. Consider Silbo, a language which is entirely whistled, allowing shepherds to communicate across the valleys of La Gomera island. Or take the Taa language of southern Africa, which contains 164 distinct sounds, 111 of which are clicks. Or how about the Wakashan languages of British Columbia, which feature throat-grunts as well as vowels. If we consider clicks, whistles and grunts to be legitimate word sounds, why not the noises animals make too?
But Is It Language?
This all sounds pretty optimistic but there is something really important we need to consider. As the linguist Noam Chomsky pointed out when addressing this issue, language is more than just a collection of words – it is also the rules for how those words can be combined.
A vocabulary is not the same as a language, in the same way a dictionary is not the same as a play by Shakespeare. In fact, the English language contains over 171,000 words and the average English-speaking adult knows around 60,000 of them, meaning most of us are familiar with only 35% of our own language. Clearly there is more to a language than just "knowing the words".
For example, here is a sentence I doubt anyone has ever written: In Antarctica there is a species of pink panda which eats wood shavings. That sentence is not one you have seen before, so you cannot simply be recognising the combination. Yet you still know exactly what the sentence means. Language is not just memorising and regurgitating words. It allows us to generate new combinations that are still meaningful.
Another key feature of language is that as we increase the length of word combinations (the sentence) we increase the information contained within them. For instance:
1) I like hats.
2) Janet said “I like hats”.
3) According to Frank the fishmonger, Janet said “I like hats.”
4) According to Frank the fishmonger, who really should not be trusted given the fact he is a Twilight fan, Janet said “I like hats”.
The more words we put in, the more information we convey and we can do this infinitely. Then of course we have to consider the order the words come in. The sentence “Margaret likes Jeff and hates Richard,” means something different to “Margaret hates Richard and likes Jeff”.
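To see how explosively a small vocabulary plus a combination rule scales, here is a toy sketch. The names are borrowed from the examples above, and the one-rule "grammar" is obviously nothing like real English:

```python
import itertools

# Nine vocabulary items and one rule (subject-verb-object) generate
# 27 distinct sentences, most of which nobody has ever uttered -
# yet every one of them is immediately understandable.
subjects = ["Margaret", "Janet", "Frank"]
verbs = ["likes", "hates", "distrusts"]
things = ["Jeff", "Richard", "hats"]

sentences = [f"{s} {v} {o}." for s, v, o in itertools.product(subjects, verbs, things)]

print(len(sentences))                         # 27
print("Margaret likes Jeff." in sentences)    # True
# Word order changes meaning: this grammar never puts "Jeff" first,
# so the reversed sentence simply isn't generated.
print("Jeff likes Margaret." in sentences)    # False
```

Add a rule for embedding one sentence inside another ("According to Frank, ...") and the number of possible sentences becomes infinite, which is exactly Chomsky's point.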
These are the kinds of features we find no equivalent of in other species. Animal noises are mostly isolated and combinations of them contain no new information. A chicken can utter the squawk “predator approaching” over and over, but this longer sentence does not increase the meaning, it is just repetition. Animals also do not seem to invent new sentences or have a grammar to their limited sounds, but this makes sense from a neurological perspective because, as it turns out, human brains are genuinely different to those of most animals.
Dawn of the Language of the Apes
I’m about to horrendously simplify half a century of careful study in the neurology of language, but hopefully you will pick up the gist even if my words are not precise. There’s another feature of human languages - a distinction between literal and implied meaning.
The human brain has two main centres for processing language (found on the left side of the brain for 90% of the population) called Broca’s area and Wernicke’s area. Put crudely, Wernicke’s area is the part which deals with comprehension while Broca’s area deals with remembering words and generating combinations.
People who suffer damage to Broca’s area are still able to follow complex instructions and listen attentively to what people are saying, but speak in a halting, staggered fashion. “Me…want…food” etc.
By contrast, people who suffer damage to Wernicke’s area are able to speak fluently and elaborately, but their sentences are meaningless word-salads: “There wasn’t four parsons undulating to birefringent celery opacity for plums with your mesmerisation.”
Most other animals do not even have Broca’s and Wernicke’s areas, so language is physically beyond their capability – with the exception of our fellow apes, of course. Chimpanzees, orangutans, gorillas and bonobos have very small Broca’s and Wernicke’s areas. Nowhere near as developed as ours, but these emerging structures might imply that apes can learn something akin to language.
The first attempt to teach an ape to speak was made by Catherine and Keith Hayes in 1951 with their chimpanzee Viki. By rewarding the chimpanzee and moving its mouth to encourage certain sounds, the Hayes were able to get Viki to “say” four words: mama, papa, up and cup. And yes, listening to recordings of Viki is every bit as disturbing as you might imagine.
For several years, debate raged over why Viki could master only those four words, until someone pointed out the freaking obvious: chimpanzees don’t have the vocal cords needed. In fact, no other animal does.
While many animals have a larynx and tongue, the arrangement of them inside the human throat allows us to make a wider variety of sounds than any other creature. There are obviously animals which can make noises we can’t - pistol shrimps produce snapping sounds which reach around 200 decibels - but those are the only sounds these animals can make. It isn’t just our brains which are unique, but also our mouths and throats.
However, just because apes cannot make sophisticated vocalisations does not mean they cannot learn language. After all, there is a whole category of human languages that involve no sound whatsoever: sign languages.
Sign of The Times
Sign languages have all the same features that “verbal” languages have. American Sign Language for instance has over 50,000 words as well as grammatical rules and syntax. Word-order matters in ASL, longer sentences contain more information, new sentences can be invented, different people sign with different "hand-accents" and they can be used to express metaphor, write poetry and tell jokes.
In fact, deaf and mute babies start “babbling” with their hands at the same age hearing and speaking children start babbling with their mouths. They begin to wave their hands in an incoherent fashion as they grasp the language they are exposed to…which means language is not really about moving your throat and engaging your ears, it is about understanding the meaning behind the signifiers.
And, perhaps most importantly, Broca’s and Wernicke’s areas light up in brain-scans just as much for deaf and mute people as for speaking and hearing people. These brain regions are not really about the ears or the throat, they are about the far more abstract notion of processing and creating meaning for symbols. So whichever muscles your language uses are irrelevant. Your brain still allows you to speak. So obviously we tried it with apes.
The first chimpanzee to learn sign language was named Washoe. In 1967 she was adopted by Beatrix and Allen Gardner, who taught her 350 words in American Sign Language, and there were many remarkable findings. Washoe could hold basic conversations with the Gardners and was observed “talking to herself” i.e. signing when nobody else was around.
More remarkable was that Washoe, on a few occasions, was apparently able to join words together to form new combinations. When presented with a picture of a duck for instance, Washoe signed “water-bird” which at the time seemed astonishing.
However, the psychologist Herb Terrace was skeptical of these claims and, by studying both Washoe and a signing chimp of his own (whom he mischievously named Nim Chimpsky), he discovered that what the chimps were doing was not really language.
Apes can only make two- or three-word combinations at most and are unable to increase the amount of information by extending a sentence. For instance, one of Washoe’s favourite phrases was “tickle me” (chimpanzees love to be tickled), but if the Gardners refused to tickle her she would simply repeat the phrase over and over: “tickle me, tickle me, tickle me, tickle me”. She could not handle a longer sentence like “tickle me now” or “tickle me or I will be sad.”
Washoe was also not able to get word-order correct. Just as often as “tickle me” she was liable to sign “me tickle”. So her signing ability had no syntax and no extension of meaning. In fact, as Terrace pointed out, even the water-bird phenomenon was nothing special. Washoe could just as easily have been signing “water” because there was water in the picture and then “bird” because there was a bird. The sign combination “water-bird” did not mean “bird that floats on water” but simply “there is some water…there is a bird.”
Whereas children start inventing new sentences (and sometimes words) around 15 months old, the chimps never did. They were just repeating the physical signs they had been taught and were not understanding them the same way we do. Don’t get me wrong, it’s still impressive that a chimpanzee was able to look at a picture of a bird on water, process the information and sign the correct symbols…but it’s not what we would call a language. Sign language does not allow apes to speak, unless we somehow strap them into some kind of robotic talking device...
The Koko Kontroversy
Perhaps even more famous than the Washoe experiments was the work of Penny Patterson, a former Stanford psychology doctoral student who, in 1972, decided to recreate the Washoe trials with a female gorilla called Koko, borrowed from San Francisco Zoo. “Borrowed” is a generous term because Patterson actually refused to return Koko after the agreed lease was over, claiming Koko no longer wanted to live among gorillas and had fully acclimated to humans – something many people considered a form of animal cruelty, since it isolated Koko from her kin.
Patterson taught Koko a lot of signs (she claims over 1000) and it appeared for a long time that Koko’s signing skills were even greater than Washoe’s. Koko could identify colours, answer simple questions and even expressed sadness at the death of Robin Williams (a celebrity she had met many years prior). Koko would also issue Christmas cards online through Patterson’s website, wishing the world peace and love. This is a gorilla supposedly making abstract comments about an entire species. It seemed too good to be true. And of course, as Herb Terrace began to demonstrate, it was.
The first suspicious thing was that Patterson published no studies. She did not release data or describe any controlled experiments, preferring to communicate everything through press conferences and edited online appearances. It’s easy to get “oohs” and “aahs” from an audience, but this does not prove Koko was doing the things Patterson claimed. In fact, when Terrace got hold of the original videos of Patterson communicating with Koko, the story which emerged was quite absurd and a little worrying. Here’s the kind of thing that would happen:
Patterson might hold up an object like a banana and ask Koko to sign it. Koko would then sign something like the word for “building,” to which Patterson might respond “come on Koko, stop being silly, what is it?” Koko would then make the sign for “trousers” and Patterson would laugh and say “she’s being funny, come on Koko what is it?” And then, after a bit of cue-ing from Patterson herself, Koko would finally sign something like “plant” and Patterson would go “well done Koko it is a plant! What kind of plant?” Koko would then sign “pain” and Patterson would respond with “yes that’s right, if you eat too many plants your stomach can be in pain! Well done!”
Patterson would sometimes claim Koko was being ironic when signalling the wrong words (I’m not kidding) or that the end of October was tough on her because it was the anniversary of another gorilla’s death. I don’t think anyone would dispute that Koko could be sad when remembering the death of another animal, but Patterson was claiming Koko knew how to use the Gregorian calendar and acknowledged anniversaries the same way humans do. Gradually the scientific community began to distance themselves from Patterson and she was accused of delusion, misrepresenting data and, by her harshest critics, mistreating Koko as some sort of party-trick animal.
Also, bizarre true story: Patterson claimed Koko had an obsession with nipples and there were several sexual-harassment claims brought against her, alleging she would instruct her students to expose their nipples to Koko (as she would regularly do herself).
From a scientific point of view the Washoe and Koko experiments are super-cool, but they don’t prove apes have the capacity for language. In fact they seem to prove the opposite. We could maybe go so far as to say apes have a proto-language ability, and there is one bonobo currently being studied (Kanzi) who seems to show word-combination skill. But I’m afraid, if we are honest, we cannot justify saying that apes have language. However, there is one other place we should consider looking.
Under The Sea
Humans do not have the biggest brains in the animal kingdom by a long shot, but this is not necessarily the most important thing to look at. Elephants have huge brains, but considering the size of their bodies they need them just to move around. Instead, it makes more sense to consider what is called the encephalisation quotient, which measures how big an animal’s brain is relative to the brain size we would expect for a creature of its body mass. On this scale humans have the highest score, with apes coming in third place and, sitting in between them, the cetaceans: whales, dolphins and porpoises. This is where the research gets really interesting.
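One common way of calculating an encephalisation quotient is Jerison's formula, where the “expected” brain mass for a mammal scales with body mass to the two-thirds power. Here is a rough sketch; the masses below are ballpark textbook figures chosen for illustration, not precise measurements:

```python
def encephalisation_quotient(brain_g, body_g):
    """Jerison-style EQ: actual brain mass divided by the brain mass
    expected for a typical mammal of the same body mass (grams)."""
    expected_brain_g = 0.12 * body_g ** (2 / 3)
    return brain_g / expected_brain_g

# Ballpark masses in grams (illustrative, not measured data):
animals = {
    "human":              (1350, 65_000),
    "bottlenose dolphin": (1600, 150_000),
    "chimpanzee":         (400, 45_000),
}

for name, (brain, body) in animals.items():
    print(f"{name}: EQ ≈ {encephalisation_quotient(brain, body):.1f}")
```

With these figures humans score roughly 7, dolphins sit in the middle around 4 to 5, and chimps come in near 2.5 – the ordering described above, even though a dolphin's brain is absolutely bigger than ours.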
In 2018, Stephanie King, working in Shark Bay, made the discovery that dolphins appear to have what we might consider “names”. A specific sound can be uttered by a member of the pod and only one dolphin repeats it back. When King recorded the same sounds and played them herself, the same individual dolphin echoed it and none of the others paid attention. It’s almost like one dolphin is shouting “You there, Harry?” and the other shouts back “This is Harry.”
Then there was an intriguing study carried out in 2016 by Vyacheslav Ryabov, who analysed the clicks and whistles exchanged between two Black Sea bottlenose dolphins named Yasha and Yana. He found that the noises broke down into as many as five distinct sound-chunks, which he likens to five-word sentences. Even more crucially, he discovered that dolphins do not interrupt each other when doing this.
If you bring two chimpanzees together who have both been taught sign language (as has happened) they do not exchange information. They “talk” over each other constantly, and repeat the same symbols back and forth. Dolphins however, pause when the other dolphin is making their noises and they do not repeat the same sounds. This could genuinely be a form of language.
Then there is whale song which is a total mystery. A pod of whales will sing patterns of notes which can carry several kilometres across the water to a different group who can modify and send it back, or pass it on to another pod. Some researchers have suggested that whale songs are transferred across great distances like a whale internet, and everything from mating calls to storytelling has been touted as a possible explanation. Although obviously I’ve seen Star Trek IV, so I know exactly what's going on.
Why Cetaceans Are Tricky
Unfortunately we know hardly anything about cetacean neurology because of the obvious problem…all the methods we use to analyse the brain cannot be used under water. We tend to figure out how brains work by performing brain scans on live animals, observing the behaviour of brain-damaged individuals, observing the effects of medication, or by carrying out various tests in a controlled environment.
Brain-scanners are out of the question because (funnily enough) sensitive electrical equipment doesn’t work under water. We also cannot tell if a whale has been brain-damaged or suffered a stroke because all we can observe is how they swim about. We cannot ethically give them psychiatric medication either and because they move in vast arenas (it’s the sea) it is not easy to perform any kind of controlled experiment.
The only things we can sensibly do are make deductions about their behaviour in captivity or analyse the brain once the animal has died. But even that is difficult because you either have to wait for a carcass to wash up on shore and hope the brain is intact by the time you extract it (a rare occurrence) or you hunt and kill a whale yourself. Curiously, people who feel passionate about cetaceans and want to study them are not the same people who want to hunt them.
On the rare instances we do get hold of a cetacean brain in good condition, there is still a limit to what we can find out about it. We cannot watch it in action, so we can only make statements about things like size, mass and chemical composition, which is like trying to figure out what software a computer was running by looking at the hard drive after it has broken down. And even when we do this we run into a huge problem: cetacean brains are not put together the same way ours are. It isn’t known if they even have Broca’s and Wernicke’s areas. Their brains look different so, simply put, nobody has a clue what’s going on inside them.
Personally, I feel there is just enough evidence to answer the overall question of animal language with a hard "maybe". Whales and dolphins are our best bet and, given that the field of cetacean linguistics is new, we could be in for some exciting surprises over the next few years.
Maybe if we can learn to speak dolphin we will get an insight into how another intelligent creature views reality. Maybe we can learn how our own minds work from studying those which are drastically different. And maybe, just maybe, if we can prove these wonderful creatures are capable of language it will persuade those who hunt them to reconsider what they are doing. Maybe Science won't just save our species, but others too!
Unicorn Hunters of Ye Olden Days
The King James Bible has unicorns in it. There are nine separate references in the Old Testament to these magical beasts (Num 23:22 & 24:8, Deut 33:17, Job 39:9-10, Psalm 22:21, 29:6 & 92:10, Is 34:7) and, bearing in mind the Old Testament gives historical records of ancient culture, are we to conclude there were genuine unicorns roaming the Earth at this time?
Well, probably not. The King James Bible is a 1611 translation of the Biblical books, derived in part from a 4th Century Latin translation called The Vulgate, based on a 3rd Century BC Greek translation called The Septuagint, based on earlier texts written in Hebrew, Persian and a few other languages.
In the original Hebrew, the animal being referred to is called a re’em and unfortunately we have lost the identity of whatever this animal was. All we know for certain is that it was a strong creature with...ironically...more than one horn. Which is kinda weird. In Deuteronomy 33:17 the writer talks about the horns (plural) of a single re’em, so it was obviously not believed to be a unicorn, and nobody knows exactly how the term 'unicorn' entered the language.
The hypothesis I find most reasonable is that re’em is close to the older Assyrian word rimu, which referred to a now-extinct species of ox called, in English, the aurochs. In Assyrian art, aurochs were depicted in side profile (see below), giving them the appearance of one-horned animals, so early writers may have mistaken the paintings for depictions of a one-horned beast.
It could also be that "one-horn" was a nickname for an actually two-horned beast. Like the species Bradypus variegatus, which is nicknamed the "three-toed sloth" despite obviously having twelve toes. The name is not meant to be taken literally, but understood in a certain context. Just like the equally disappointing Vampyroteuthis infernalis, more commonly known as a "vampire squid" despite being neither a vampire nor a squid. Sometimes we just give animals dumb names.
Either way, rimu in Assyrian seems to have become re’em in Hebrew, which became ‘monokeros’ in Greek (which means one-horn) and then finally 'unicorn' in Latin and thus English. It is tempting to ridicule the early translators for being careless, but we shouldn't judge them too harshly. Unicorns were once believed to be genuine creatures. Even the Greek scholar Ctesias described a one-horned beast native to India which he called a rhinokeros (nose-horn). That animal was almost certainly a rhinoceros, but once again a series of mistranslations and misunderstandings led many to believe Ctesias had discovered unicorns in the Asian subcontinent.
The idea of unicorns being horses with spiralled horns seems to have begun during the middle ages, probably due to sailors bringing home narwhal tusks (which are spiraled) and selling them to buyers as "unicorn horns". Even the throne of Denmark is constructed from narwhal tusks, though they were originally claimed to be bona fide unicorn horn.
For obvious reasons, unicorns were perceived as creatures who refused to be captured and many houses of Scotland during the 1400s displayed unicorns on their banner-crests to represent a refusal to submit to English rule. Even today, the Unicorn is the official emblematic animal of Scotland (the Welsh flag features a dragon for similar reasons).
It is only in the last couple of centuries that people have finally accepted unicorns are probably not real. However, I am pleased to inform you that there is nothing about them which is biologically far-fetched. After all, many different species have evolved horns. There are species of lizard, mammal, fish and even one bird (the cassowary) which have horn-like structures on their heads, so it's obviously something evolution is fine with.
The primary function of horns is fighting rivals or predators, but they also serve to attract a mate. Living things use most of their energy on movement, brain-activity and maintaining a healthy immune system, so if your immune system is in perfect order you have energy to spare. What better way to advertise that than by adorning yourself with unnecessary decorations which would hinder a lesser creature?
It’s called the Zahavi Handicap Principle and is often used to explain why certain animal species evolve completely unnecessary features; even features which serve as a handicap. Peacocks grow spectacular tails, giraffes grow inconvenient necks and we may even see evidence of it in humans (the only species whose female members have engorged breasts all year round rather than only while nursing). So why not horses too?
Unicorns of the Sea
Horses do not have horns of course, and usually attract their mates via a combination of elaborate tail flicks and enticing urination (yeah, I know) but there is no reason they could not have evolved down the route of growing horns. In fact, some of them sort of did.
It’s widely accepted that life began in the oceans and eventually made its way to land, but it can happen in reverse sometimes. That’s what whales are. Whales were originally land-dwelling creatures, similar to hippos, but gradually moved into the ocean as a permanent residence, losing their legs over time. That’s why whales have useless hip-bones under their blubber. Sometimes they use these hips as slightly inefficient sex-anchors to attach themselves to prospective mates, but the shape and design is clear. Whales used to be hippies.
That is also why whales and dolphins move their spines vertically as they swim, reminiscent of horses galloping, while fish (who have always been aquatic) move their spines horizontally. Whales and dolphins are effectively trotting and cantering through the ocean. Now, since narwhals are a species of whale and whales are descendants of horsey creatures, evolution can, in a certain sense, if you are very patient, give horns to horses.
But I don't want to wait millions of years, Tim!
Of course you don't. You want a live unicorn without having to rely on the chance-nature of Darwinism and hippos who like to swim. Is there a scientific way of justifying the existence of real equine unicorns? The answer (magically) is maybe. Provided we invoke the right kind of tumor! And I know what you're thinking at this point: this is a family-friendly blog and I’ve just given unicorns cancer. But fear not, the kind of tumors we are talking about will be totally safe. Twilight Sparkle will go unharmed if you bear with me...
A tumor is not an infectious disease caused by a bacterium, virus or parasite; it’s certain cells of the body growing too much, too fast. If cells in one area start growing at an accelerated rate they begin absorbing nutrients away from other cells or squashing everything in their vicinity to one side, damaging the organs and preventing them from doing their job. That’s when a tumor becomes a cancer. But tumors can be harmless. In fact, to avoid the negative association, let's call them "neoplasms" which is a friendlier-sounding word for the same thing.
Since neoplasms are cells growing out of control, what can sometimes happen is that the cells get so excited they mistakenly believe they are a different part of the body and turn into that instead of what they’re supposed to. This is possible because every cell’s nucleus contains a full copy of your DNA, with the genetic information necessary to become any part of the body. A cell in your kidney contains the information required to build a heart or a lung and if the cell is activated incorrectly (which can happen if it’s growing too fast) you can grow body parts in the wrong place.
It’s called a teratoma and although it sounds like something from a David Cronenberg movie, it’s absolutely real. It is rare for one to develop whole organs though (the creepiest example is an instance reported in 1999 by doctor Otto Herwart, who discovered a fully grown eye inside a neoplasm), but keratin, the stuff horns are made from, is straightforward for the body to produce, so teratomas can easily manufacture horns.
It’s a rare condition called cornu cutaneum and in all honesty nobody knows what causes it. The skin spontaneously begins growing a neoplasm on its surface which overproduces keratin and thus ends up forming a horn. They are completely harmless and easily removed since they have no bones or nerve endings, but my advice would be not to Google image-search them right before a meal.
The most astonishing instance of this condition in humans is that of Zhang Ruifang, a 101-year-old woman from China who grew a pair of horns on either side of her forehead in 2009, which she refused to have removed, despite them earning her the obvious nickname in her neighborhood of ‘devil-woman’.
And it’s not just humans who can get horns. There are reports of dogs, cats, cows and, fortunately, even horses, developing horn-bumps as a result of a neoplasm. In fact, unicorns are not only within the realm of possibility; one or two may have existed by accident.
Yes, you read that right. I, a public school teacher with a responsibility to educate future generations, am throwing my lot in and saying “sure, there has probably been at least one unicorn”. Horses have been around for over 50 million years and today there are an estimated 60 million roaming the Earth, mostly in the wild. Chances are that at least once, somewhere, purely by chance, one of those horses developed a teratoma neoplasm which gave it a horn between the eyes.
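We can put a rough number on that hunch. The sketch below uses the standard "at least once" probability formula, 1 - (1 - p)^N. The per-horse probability is an invented, illustrative guess (nobody has measured how often teratomas produce a horn between a horse's eyes), and the 60 million figure is just the horses alive today, so the true historical count is far larger:

```python
# Back-of-envelope: chance that at least one horse, ever, grew a forehead horn.
# p_horn is an ASSUMED illustrative rate, not real veterinary epidemiology.
p_horn = 1e-7            # guessed lifetime chance of a horn-forming teratoma between the eyes
n_horses = 60_000_000    # horses alive today (the figure quoted above); total ever is far larger

# P(at least one) = 1 - P(none at all)
p_at_least_one = 1 - (1 - p_horn) ** n_horses
print(f"{p_at_least_one:.2%}")  # ~99.75% under these invented numbers
```

Even with a deliberately tiny per-horse rate, the sheer number of horses makes "at least one accidental unicorn" the likely outcome. Change the assumed rate and the answer shifts, but the shape of the argument holds.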
Resurrecting A Dead Idea
As a Science educator, I get asked a bunch of interesting questions by students. Many of them are about standard topics like explosions, space or quantum zeno effects during antimatter hadron collisions. The usual. But by far the most common things I get quizzed about are dreams and death.
Dreams are a tricky one to answer because nobody has a clue what's going on there, so most of the time I have to answer such questions with a shrug. Death, on the other hand, is a really interesting topic and well worth exploring because we actually do have some good knowledge about what happens. Especially the issue kids always want to know about: "could a zombie apocalypse really happen?" It's a question we've all asked ourselves at some point, often while watching the shopping channel, and I addressed it briefly in an article I wrote a while back (here ya go kids).
In that article I focused on what would happen to our infrastructure during such a crisis and how we would survive. Short answer: move to Scandinavia (actually that solves a lot of problems). But at the end of the article I dismissed the whole thing as fanciful because it was basically impossible for zombies to exist. Once you're dead, you're dead.
However, part of being a Scientist is changing your mind when the evidence requires it, and I am happy to say that on the issue of zombification I may need to do just that. Research I've slowly become aware of has made me re-evaluate the whole question and I have come to the conclusion that if you are prepared to be a little generous in your definition of "zombie" then they aren't totally ridiculous.
Which is just as well. The US Department of Health commissioned a report in 2011 on how the CDC might prevent a zombie apocalypse in a paper called ‘The CDC Zombie Initiative’. Originally published as a tongue-in-cheek way of exploring the idea of mass panic, the report quickly forced the CDC to issue a public retraction and apology when people assumed they were proposing a real zombie outbreak was imminent...ironically causing a mass panic. Now, thanks to this blog, a zombie apocalypse might really be on the cards after all. Hurrah for Science!
The Biology of Death
At a cellular level, death is a simple process. A cell is a bag of chemical reactions so it’s easy to pinpoint when it has died because the reactions change irreversibly, usually accompanied by the outer membrane dissolving and everything leaking out. Once a cell has expired there is no going back, but at the scale of a whole organism it’s harder to define when death occurs because everything shuts down at different rates.
We’ve all heard stories of people who were apparently declared dead by a doctor, only to return to life after a brief intermission. They sound like urban legends but it’s actually a recognised medical phenomenon called Lazarus syndrome. Since 1982, when it was first defined, nearly 40 people have been declared dead by a qualified physician but then decided to make a comeback. The most extreme case is that of 78-year-old Walter Williams from Mississippi, who was registered dead in February of 2014...only to be discovered a few days later by coroners, trying to kick his way out of a body bag. Alive and kicking indeed.
Because different organ systems shut down at different times it can occasionally happen that the main systems go offline, giving the appearance of death, but there can be plenty of chemistry still going on and if the right reactions occur, the whole system can reboot. There are several animals which do precisely that during hibernation after all, including extreme examples like the Alaskan wood frog, Rana sylvatica, which can survive with two-thirds of its blood frozen solid during winter, before thawing itself back out.
Cells which have truly died can’t be brought back, but new cells can always be regrown (you do it every time you recover from a cold) so it’s possible for parts of an organism to shut down but then rebuild. In fact, according to one research paper published in 2017 by Peter Noble, some cells actually increase their productivity after the body is dead, so even a corpse is still alive in many ways. Death is not clear-cut and can, in some rare cases, be reversed.
We can definitely agree however, that a body eventually reaches a point of no return, even if we can’t say precisely when this point occurs. People have returned from year-long comas but nobody has returned from something like rigor mortis (when the body stiffens up because you are no longer making ATP, the chemical required to break down bridges between muscle fibres). If something fully dies, it’s not possible to bring it back because the cells have burst, but we might be able to justify a zombie apocalypse by picking the right biochemistry to make a person appear to die.
Horrors in the Night
Most people have a fear of death so it’s no surprise that zombies are a common staple of horror stories. Zombies represent the inevitability of death slowly creeping toward us without pause, breaking through our barricades to consume us no matter what we do.
The modern depiction of zombies as shambling corpses seeking the living to eat them alive comes almost entirely from the 1968 movie Night of the Living Dead by George Romero, in which a group of middle-class Pennsylvanians get trapped in a country house under siege by re-animated corpses, originally referred to in the script as "ghouls".
Over the course of six films, Romero invented most of the familiar rules we know today for zombie stories: being bitten will turn you into one, zombies can’t be stopped unless you destroy their brain, and they gradually get more rotten as their flesh decays.
Prior to Romero’s hexalogy of films, zombies were already a well-known monster in folklore, but they were just corpses who came back to life, often with their brains intact. Some of them even ran for government. The Bible itself refers to mass outbreaks of zombies stumbling from their graves and tormenting the living (Zechariah 14:12, Matthew 27:52), as shown in the fresco below from Notre-Dame de Bayeux Cathedral.
In Romero’s series, it is hinted that the outbreak is caused by a space probe detonating in the atmosphere and showering the ground with radioactive debris. Radioactive material can certainly do icky things to you like make your skin fall off, but it can’t make you impervious to pain and turn you into a cannibal. If we want to legitimise the zombie outbreak we'll have to look somewhere other than 1960s cold war paranoia-infused science fiction...which is always a disappointing sentence.
Taste the Rainbow
The word ‘zombie’ comes from the Haitian word zombi (originally Central-West African) and the folklore of Haiti features tales of people brought back from the dead in a trance to do the bidding of the witch who summoned them.
In 1985 the ethno-botanist Wade Davis even wrote a book about the science of Haitian zombification called The Serpent and the Rainbow, which was adapted into a semi-decent horror movie directed by Wes Craven and starring, of all people, Bill Pullman...whose only other horror credit is freaking Casper.
In The Serpent and the Rainbow, Davis analysed the powder being used by Haitian witches to zombify people and found it to contain a chemical called tetrodotoxin - the active ingredient in puffer-fish poison. Davis claimed this dust would send a victim into a comatose state for a few hours, giving the appearance of death, before they would rise in a hypnotic trance, ready to do the bidding of the witch master.
His results were resoundingly panned by other scientists who tried to repeat the findings but couldn’t because his methodology was so poor. Furthermore, when the powder was analysed by other teams they found it didn’t contain enough tetrodotoxin to even make someone sick, let alone knock them into a coma or put them in a trance (which is not a known tetrodotoxin side-effect anyway).
To give you some idea of how lousy the science in the book is, Davis claims that a witch only had to sprinkle zombi-dust on the road in front of their intended victim to achieve the effect. It sounds like the whole thing had more to do with the power of belief than the power of witches.
If you are raised in a culture that believes ardently in zombification at the hands of a witch, it’s conceivable you might go along with it because of something called ‘the nocebo effect’ - a reverse placebo in which you can be convinced you have an illness you don’t really have.
Stranger things have happened. Take the case of Sam Shoeman, who was diagnosed with cancer in 1973 and died on schedule according to his doctors' predictions. It was only at his autopsy that it was discovered his doctors had made a mistake and the tumour was benign. He showed all the appropriate symptoms of slowly dying of cancer, despite not actually having it! Apparently, you can literally talk someone to death. Something I will have to remember for my classes.
Nobody knows how placebos or nocebos work, but our minds are evidently capable of doing things to our bodies through willpower alone. It seems likely to me that so called "witches" are simply telling people they have to act like zombis and the victims just go along with the ritual because they believe they have no choice. So I don’t think we can trigger a genuine zombie apocalypse using puffer-fish. Sorry. Puffer-fish are useless.
What about that guy on the news..
So you may have heard about 'bath salts’ (if you haven't, your kids will have), the street drug which hit headlines in 2012 because it reportedly turned people into violent flesh-eaters. Just so we're handling rumour control here: there's no such thing. It's a twisted account of a real event which happened once and once only. It gets a little weird though.
On May 26th 2012, in Miami, a man named Rudy Eugene decided for some reason to attack a homeless man named Ronald Poppo and...eat his face. While holding a Bible. Naked. For eighteen minutes.
Eugene only stopped after being shot five times by a police officer and when he turned out to be a proud Haitian, it was no surprise the press jumped on the story and dubbed him The Miami Zombie.
Police at the time described his relentless and delusional behaviour as consistent with someone taking ‘bath salts’ (a mixture of mephedrone and dimethocaine) and hence bath salts became unofficially known as the ‘zombie drug’ because it supposedly made you impervious to pain and susceptible to cannibalism.
However, the toxicology screening of Eugene’s corpse didn’t find any trace of bath salts, only large amounts of cannabis (and large amounts of Ronald Poppo I guess). So I don’t think street drugs are going to give us the zombie apocalypse we’re looking for. I think if we want to come up with a plausible zombie pandemic, it’s going to have to come from the world of infectious diseases.
In the Danny Boyle movie 28 Days Later, the outbreak is started by an engineered super-virus which amplifies the aggression centre of the amygdala and turns the host into a maniac. Although technically the monsters in 28 Days Later aren’t true zombies because they don’t die and come back, they just go nuts. They also run, and zombies are supposed to represent the inevitable onslaught of creeping mortality...we can’t have them being zoom-bies.
I'm going to totally steal their idea however, because it seems to me that it would be the best way of doing it. What we're looking for is some sort of infection which can cause a person to apparently die, before coming back to life as an indestructible flesh-eater on a shambling, rotting rampage.
To start with, there are plenty of diseases which can affect a creature's brain, sometimes in very surprising ways. Consider Ophiocordyceps, a genus of fungus which has a disturbing effect on carpenter ants. Once the ants breathe in the spores, it somehow alters their neural chemistry and forces them to abandon whatever they are doing and climb the nearest plant, after which the fungus blasts out through their head, releasing more spores to rain down on the ants below.
This is a parasite which makes its host no longer care about their own safety or other members of their species. It just makes them climb no matter what, forcing them to their own head-exploding doom. If we could propose a comparable fungus for humans that would give us a start. Some sort of fungus which infects the brain and makes the host not care about pain or the well-being of other people. There is no known fungus at present which does this (probably a good thing) but bearing in mind we haven't explored something like 90% of our own rainforests, fingers crossed such a human-brain fungus could be out there.
The next thing to study I reckon is lyssavirus, the viral infection which causes rabies. Rabies can be transferred through the bite of an infected host and has an eerily familiar effect on its victims. The symptoms present slightly differently depending on the person, but it usually causes paralysis and apparent death-like symptoms in the early stages, before heightened aggression and violence a few days later.
Thinking about it, perhaps that’s where the original zombie myth came from? The word zombi does originate in Central-West Africa where rabies is common, so maybe that’s where we got the first stories of people who apparently die, then come back and attack us? Zombie myths could be the result of rabies. Rabies doesn’t easily transfer from human to human though, so we’d be talking about an unknown strain which can transfer rapidly, moving to anyone who gets bitten.
If we combine this hypothetical rabies virus with our hypothetical brain-controlling fungus then we might be onto something at last. Contracting a fungal and viral infection simultaneously is pretty rare, although there is one known species of mycovirus (a virus which infects a fungus) which can be transferred to humans called AfuTmV-1. Clearly it’s possible for a fungal infection to hitch a ride with a virus particle, so let’s say that’s what our zombie pathogen consists of.
Now all we need to do is throw in an aggressive bacterium which can cause necrotising fasciitis - a disease in which the soft tissue of your skin starts rotting while you’re alive, making the infected parts of your body look corpse-like, the so called "flesh-eating bacteria" (do NOT google that before a meal). There are quite a few bacteria which do this, so let’s propose a variety which spreads in the saliva.
Why I went into Science
So, let’s say that by sheer bad-luck, there is a double epidemic unleashed on the Earth by the powers of fate. A rabies/fungal infection hits first, making all the victims appear dead for a few days before they rise to perpetrate attacks on the living (caused by the rabies). The fungus simultaneously switches off their sense of self-preservation (as it does in ants), meaning infected people will stop at nothing to get food, ignoring all injuries...unless we deactivate their ability to move by destroying the brain.
Then, gosh-darn it, an outbreak of flesh-eating bacteria just happens to hit us as well. All the infected people whose immune systems have been weakened by the rabies-fungus cocktail now end up catching this bacterial inconvenience, making their skin decay as they go after the living. Voila. Zombie-apocalypse achieved. Also, probably the only blog on the internet to feature Disney and Bible references alongside Motorhead and cannibalism!
I'll be right back, there's some guy in a CDC jacket at my door...
This has been one of the trickiest blogs I have ever written. Race is a delicate issue and I'm writing as a white, English-speaking male. In other words, I'm writing from a position of privilege without having been on the receiving end of racial abuse. I have not suffered the oppression and discrimination people of colour regularly endure and I would not pretend otherwise. I am also not an expert on Biology, so to write an article on the Biology of race has been a huge challenge.
I would like to start by expressing enormous gratitude to my friend Lee Agostini, a genetic researcher at Thomas Jefferson University who consulted on the Biology, and also offered insights into the experience of being a black man in modern America. You should absolutely follow him on instagram: @lee_the_scientist and check out his awesome Science-themed website: BioIsLifeMedia.
The important thing to say up front is that we all agree racism is bad (unless you're a racist I guess?) but when issues of race get discussed there are misconceptions and cultural confusions which make the debate appear not so black and white, if you'll excuse the pun. My aim in this article is to highlight the hypocrisy of racism from a scientific point of view because as far as Science is concerned racism makes no...freaking...sense!
However I'm fully aware that as a white non-Biologist I may have missed crucial nuances of the discussion. Please contact me if I'm getting stuff wrong (I want to learn) but also please appreciate that if I say something you find offensive it's coming from a place of accidental ignorance, not wilful malice. If I upset you, I pre-emptively cry your pardon and ask you to help me do better!
The Strange Case of the Black Woman Who Wasn't
I want to kick things off by reviewing one of the most bizarre media firestorms I have ever seen. You may recall hearing about Rachel Dolezal on the news but if not, here’s a quick summary: Rachel Dolezal was elected the Spokane chapter president of the National Association for the Advancement of Colored People in 2014. She taught classes on African-American culture at Eastern Washington University and served on the Police Ombudsman Commission for Spokane, representing the black community. Dolezal was a well-respected public figure who spoke out on black issues...until a year after her election when a member of the press exposed something remarkable. Dolezal was not really black. She was white.
To be clear, there’s no reason a white person shouldn’t be working for the NAACP - that wasn’t the issue. The issue was that she had been claiming to be something she wasn’t. She was committing fraud.
She doesn’t exactly deny these allegations either. In one interview with NBC, a reporter asked her “Do you feel you’ve been deceptive at all?” to which Dolezal responded: “There have been some moments with a level of creative non-fiction,” which I think is a fancy way of saying “yes, I was lying,” although she insists she never intended to deceive anyone. Her skin is kinda dark for a white person (see below) but when she claimed her parents were black, that was flat-out creative non-fiction.
When Dolezal’s story was inevitably brought into the media glare it sparked an international debate about her mindset. To some, she was highlighting issues of race and identity, to some she was a con-artist wanting attention and to others she was a mentally-troubled woman desperately seeking identity.
Dolezal has written a book about her experiences - In Full Colour - and there is a documentary about her on Netflix called The Rachel Divide charting her life after the chaos. What fascinates me most as a scientist however, isn’t her motivation, it's her terminology. Dolezal has described herself as transracial, transblack and even explained that she “identifies as black”. What exactly is she talking about?
Using the phrase “I identify as black” has obvious parallels with the vernacular of the transgender community. I’ve written in great detail (here) about the biological difference between male and female brains and how transgender people are not ”making it up” or “wanting to be something they’re not” (quite the opposite…they’re wanting to be something they are). I won’t rehash that whole essay, I'll just say that the biological evidence comes down firmly in support of transgender people. But the language we use is very important.
If a transgender woman says “I am a woman,” then critics could fire back by saying “No you’re not. You don’t have a uterus or XX chromosomes,” which would technically be correct. But if a transgender woman said “I want to be a woman,” that wouldn’t be accurate either. A transgender woman doesn’t simply like the idea of being female, her neural architecture means she is female.
That’s why the phrase “I identify as a woman” is so useful. It is stronger than saying “I want to be female,” but doesn’t make a false claim about biological anatomy which gives ammunition to critics. So when Dolezal “identifies as black” we have to question what she means. She seems to be saying that being black is an inherent thing and that she is (to put it crudely) a black woman in a white woman's body. So, fully aware of the potential minefield involved, I’m going to do my best to explore what Biology says about "race".
It’s In Your Genes
Every nucleus in your body contains a set of DNA strands collectively called your genome. It’s split into chunks called genes which are bits of biological information telling your body what to be. Genes code for things like eye colour, tongue length, heritable diseases etc. and although it’s not as simple as "one gene = one feature", the basic principle is more or less that.
The percentage of genes which actually makes us different to each other is very small (we're more alike than we are different) but within that small percentage, there's a huge amount of diversity accounting for variety among the human population.
Different versions of a gene are called alleles and because our species has been spreading across the planet for a very long time, adapting to different environments, this has led to certain alleles cropping up in some regions more than others.
We can measure the probability of a particular allele occurring with what’s called the allele frequency - how often an allele appears within a group of people. For instance, 25% of people in Central Asia have B-type blood whereas in North America it’s closer to 5%. That means if you test the DNA of an unknown individual and find it contains genes for B-type blood, it’s more likely they are from Central Asia, but you can't say for certain. B-type blood appears all over the world and obviously 75% of Central Asian people do not have it, so it wouldn’t be accurate to say Central Asian people have B-type blood. It's just slightly more likely.
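To make "more likely but not certain" concrete, here's a tiny sketch using the two figures above. The simplifying assumptions (only two candidate regions, equal-sized populations) are mine, purely for illustration:

```python
# Allele frequencies for B-type blood, using the figures quoted above.
freq = {"Central Asia": 0.25, "North America": 0.05}

# Knowing only "this genome codes for B-type blood", the evidence favours
# Central Asia by a likelihood ratio of:
ratio = freq["Central Asia"] / freq["North America"]
print(ratio)  # 5.0 -- five times more likely, which is not the same as certain

# ASSUMING only these two regions and equally likely priors, the chance
# our mystery person is Central Asian works out to:
posterior = freq["Central Asia"] / (freq["Central Asia"] + freq["North America"])
print(f"{posterior:.0%}")  # 83% -- a decent bet, but wrong 1 time in 6
```

Notice how changing the assumptions (add more regions, or unequal populations) changes the posterior. That is the whole point: allele frequencies shift probabilities, they never give a verdict.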
The same is true with diseases. Sickle-cell anemia is more common in Afro-Caribbean people and cystic fibrosis is more common among Europeans, but white people still get sickle-cell anemia and black people still get cystic fibrosis. Certain groups may have a higher allele frequency overall, but on an individual basis you can’t say for certain what a person’s geographical region is from their genome.
So when a person says they are something like “50% Irish” this is biologically meaningless. There is no such thing as an Irish gene, just allele frequencies which may be higher on average in the Irish population as a whole. You can't be half of one race and half of another. Unless you exist in the Star Trek episode Let That Be Your Last Battlefield (TOS Season 3 Episode 15)...
Obviously there isn't one country where black people come from, but there is an obvious biological difference between people of different skin colour...they look different! Black and white people's genes are obviously causing differences in appearance, so doesn't that mean there is a genetic difference between black and white people after all?
Well, technically yes. There are several genes which work together to define skin colour but the primary one is called MC1R and I'm going to use that as a shorthand for the whole collection. MC1R alleles tell your skin what colour to be, so yes black and white people do have different versions of one particular gene. But that is the only difference. MC1R doesn't code for anything else about the person, not even hair or eye colour.
The colour of your skin is unrelated to the rest of your genome and that’s crucially important. You can genetically determine (with reasonable accuracy) if a person is black or white by looking at their MC1R, but that’s all the gene tells you. There is no other physiological or neurological feature black people have that white people don’t or vice versa. Two black people (people with the same MC1R allele) can otherwise have totally different genomes while a white and black person (people with different MC1R alleles) can otherwise be genetically identical.
You can’t be white on the surface but internally black because “internally black” doesn’t mean anything. Your black or white characteristic is exclusively external and unrelated to everything else about your body. Black people's brains are no different to white people's brains so I'm afraid the word "trans-racial" is not an actual thing as far as Biology is concerned.
Besides, skin colour is a spectrum. Everyone has melanin in their skin (including white people) and there is no cutoff between someone being black and someone being white. It would be like defining a mountain as being split into the summit and base, or defining a rainbow’s colours as either red or violet. There's a lot of stuff in between the extremes.
Our brains work by putting things in categories because it’s easier to store information. But “ease of classification” probably shouldn’t be our priority when we’re dealing with actual human beings.
I mean, if we absolutely have to split people into categories then why stop at skin-colour? Shouldn't we start seeing redheads as a different “race” to blondes and brunettes? Or blue-eyed people as a different race to brown-eyed people? There's the same amount of genetic difference between them as between black and white.
So as far as Biology goes, there really is no such thing as race. People have different colours on their surface but that is as far as the difference goes. It’s almost like people of different colour…are all equal????? How about that.
What about DNA testing?
You’ve probably seen adverts for DNA-testing kits. These are products which take a sample of your DNA (usually from a cheek swab) which you send off for analysis and get a profile back. They can tell you things like eye colour, shape of your ear-lobes or even ear-wax consistency.
The problem comes when people claim they can determine your ancestry from your DNA i.e. saying things like: you are 20% European, 50% Scandinavian, 6% Hispanic etc. I’m not going to outright say these companies are misleading anyone (I don't want to get sued) but they don’t seem to be going out of their way to correct certain misconceptions about genetics.
The first problem is that genome analysis is looking at allele frequency so everything is based on probability not certainty. The second flaw is that the precision of DNA testing is not as good as CSI might have you believe. Not even close.
In one disturbing 2010 study conducted by Itiel Dror, 17 DNA specialists were given a sample of DNA used in a criminal trial and asked to compare it to the defendant’s. One specialist concluded it matched the defendant, four said the comparison was inconclusive and twelve concluded it excluded him. Depending on which laboratory the court hired to consult with, the trial could have gone in very different directions…and these are specialists hired by our legal system.
One DNA-testing company I looked at claimed an ancestral gene precision of +/- 30%. That means any percentage they give you is falling within a range 60 percentage-points wide! Suppose you get your results and it says you're 40% South American. The precision is 30% either way which means you could actually be anywhere from 10% to 70%. Really it’s not much better than guessing.
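The arithmetic above is simple enough to write down as a two-line helper. The ±30% figure comes from the (unnamed) company I looked at; the function name and the clamping to 0-100% are my own additions:

```python
def ancestry_range(reported_pct, precision_pct=30):
    """Turn a reported ancestry percentage into the range it could actually
    fall in, given the company's quoted +/- precision, clamped to 0-100."""
    low = max(0, reported_pct - precision_pct)
    high = min(100, reported_pct + precision_pct)
    return low, high

print(ancestry_range(40))  # (10, 70) -- the "40% South American" example above
print(ancestry_range(95))  # (65, 100) -- even a near-certain result spans 35 points
```

A result spanning 60 percentage points is, as I said, not much better than guessing.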
The third big flaw with ancestral DNA testing is that their allele databases are drawn from modern populations. Nobody has an allele library for cavemen because not many cavemen were voluntarily giving blood. So if your profile says you're 70% Indian, that doesn't mean 70% of your ancestors come from India. It means your genome is similar to 70% of the modern Indian populace. So unless they were going back in time and becoming your ancestors, it's very misleading to say your genetic commonality with modern people can be "traced back" to ancestors.
It's also important to remember that genetics doesn't work like blending paints together. It's not as if you are a 50% hybrid of mom and dad. DNA sequences get mixed up in chunks so although you have genes from both your parents they can be rearranged in a novel way which neither of them has.
There’s also the pretty important point that you are a mutant. The average human genome contains over 400 mutations; features not found in either of your parents or any of your ancestors. The further back you go, the more mutations you have to factor in and eventually you get to a point where a lot of your ancestors become undetectable!
So even if there were racial genes they would fade from the genome after a few rounds of breeding. And besides, we couldn't trace your nationality back more than about 18-19 generations because prior to that, countries didn't exist.
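The "undetectable ancestors" point can be made with quick arithmetic. Your genealogical ancestors roughly double every generation, but DNA is inherited in a limited number of chunks, so the family tree outgrows the genome fast. The segment formula below is a rough textbook-style approximation I'm assuming for illustration (23 chromosome pairs plus around 33 recombination break-points per generation), not a measured figure:

```python
generations = 19              # roughly back to the pre-country era mentioned above
ancestors = 2 ** generations  # ancestral slots in your family tree: doubles each generation
print(f"{ancestors:,}")       # 524,288 ancestors, 19 generations back

# ASSUMED approximation: your genome splits into roughly this many
# ancestral segments after that many generations of recombination.
segments = 23 + 33 * generations
print(segments)               # ~650 segments to share among half a million ancestors
```

Half a million ancestors and only a few hundred DNA segments to go around: most people in your tree that far back contributed literally nothing detectable to your genome.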
Nationality Is New
It might seem like the idea of countries has been around forever but they’re a pretty recent invention. Monarchs have always fought over territories but the concept of national borders wasn’t formalised until 1648 at the Westphalia peace treaties. Prior to that, kings and queens were interested in cities, farms, mines etc. but the land in between was irrelevant. There were no officially recognised borders because nobody cared.
This made sense because society used to work in a “vertical” way. A person would know who the count of their land was, the name of the lord who reigned over them and then the king or queen in charge, but they had no interest in which other lords, lands or towns were governed by the same monarch.
The problem with a vertical system of course is that disputes started happening when leaders wanted more power. As their empires expanded, people started to disagree about who owned what and decided it was necessary to draw boundaries to prevent endless wars. So the monarchs of various regions agreed to draw lines on their maps which would correspond to invisible and made-up "borders." Thus people started defining themselves “horizontally” by who else lived within the imaginary lines.
When Italy was established in 1861 for instance (making it less than 160 years old) the statesman Massimo d’Azeglio remarked in his personal diary: “we have invented Italy, now we must invent Italians,” because there was no sense of national identity and less than 3% of people within the established border even spoke the Italian language. The idea of a nationality is something politicians had to introduce but humans have existed for 200,000 years without it.
In fact, the human brain might only be equipped to handle meaningful relationships with about 150 people (it's called the Dunbar number) so belonging to a country with a population of millions, which most of us do, is actually something we can't really grasp. I'm not saying the idea of nationality is a bad thing of course, I'm just wondering if we might do away with it once it has served its purpose...whatever that might be.
Black Is A State of Mind?
It’s possible that when Rachel Dolezal says she identifies as black she means culturally. That would make more sense (not hard to do, since biologically it makes no sense), although it’s still a pretty nebulous thing to say. There are numerous cultures around the world whose members have black skin, and saying they are all the same is a bit…well…kinda racist.
There’s a vast difference between the culture of the Maasai tribes of Kenya and the aboriginal people of Australia. There’s an even bigger difference between the cultures of Detroit, Michigan (80% black) and people living in the Republic of Congo. In Nigeria, where I grew up, there are intense tribal rivalries, and members of those tribes would be sickened at the suggestion they share a culture with their rivals just because they have the same colour skin.
There is no such thing as one "black culture," because there's huge cultural diversity among the billions of black people on Earth, just as there is among all the white people. Saying you identify with black culture is like saying you identify as "religious"…it’s a sweeping statement with little specificity. Which one do you mean?
Besides, if you appreciate certain cultural ideologies which are more common among black populations, you can still appreciate them as a white person. If race is a social construct (as Dolezal has claimed often enough) surely identifying as black is buying into that same social construct? Why not just be a white person who admires the culture of a particular group of black people?
A lot of members of the NAACP were understandably outraged by Rachel Dolezal's actions because the discussion suddenly shifted away from how black people are treated in society to…what is going on with this woman? It made everything a media circus and nobody was listening to what really mattered anymore.
I suppose the only thing you could argue all black people have in common is the centuries of suffering they and their families have endured at the hands of white people. Countless black people experience racism (both overt and passive). Some would even say that experiencing racism is a critical aspect of the African American experience, and this is not something Dolezal has lived through. Her great-great grandparents were not slaves. She did not go to school with children who called her the N-word at recess. She has not had to put up with constant harmful stereotypes since birth and she does not fear for her life when she gets pulled over.
If Dolezal wants to admire and champion a particular community then good for her. But when she claims to be black it doesn't mean anything biologically and its only cultural meaning involves things she has not experienced.
Ultimately, people have different coloured skin and that’s all there is to say. Your country of birth doesn’t affect your genetics and you can’t learn a person’s racial origin based on DNA. Skin colour has no more to do with a person's brain than does their eye colour. Race does not exist, but sadly racism does. And it's not just ethically awful, it's sloppy Science.
Take A Ride on (Falcon) Heavy Metal...
On February 6th of this year, Elon Musk’s private space organisation SpaceX launched a Tesla Roadster into space aboard the maiden flight of the Falcon Heavy rocket, sending it into a long, looping orbit around the Sun that carries it out beyond Mars. While I desperately want humans to survive as a species, there’s a tiny part of me which thinks it would be hilarious if civilization collapsed and in a few thousand years whatever species replaces us discovers Musk’s car floating out there with no idea of how the hell it happened.
When images of the astronaut mannequin began cropping up on social media, I cracked a few jokes about the opening scene of Heavy Metal, which features a red sports car crashing onto a planet from space, but nobody got it. I was showing my age (or my nerdiness). Fortunately it wasn't just space-cars making headlines this year; a whole ton of awesome Science has been taking place - as per usual - so as we approach the end of Gregorian year 2018, let's reflect on the groundbreaking inventions and discoveries we have made since January.
Obviously the biggest scientific event of the year was the release of my debut book Elemental on July 1st, but even if we discard that clear high-point, 2018 has still been pretty cool. Here are my picks for the ten most inspirational and exciting scientific moments we've enjoyed.
February - Women Are Officially Good At Science
It’s no secret we have a gender divide in the STEM fields, with far more boys studying the subjects than girls. The debate has always been whether this comes down to boys being more interested or to boys simply being better at it. I've always felt the former explanation makes more sense; women and men are just as good at Science, and the reason women don’t pick it as a subject is more down to societal expectation or preference than a lack of skill. This view is not shared by everyone of course. In fact, I once had a female student tell me an engineer tried to dissuade her from studying engineering because, in his words, "girls can't do it". I’m therefore delighted to say that my hypothesis has finally been validated with hard data.
In a vast study of 475,000 adolescents spanning 67 nations, researchers Gijsbert Stoet and David Geary published a report in Psychological Science concluding that girls are just as good at scientific and mathematical subjects as boys, performing just as well (occasionally better) in all controlled tests.
There’s a lot more to unpick in the Stoet-Geary paper of course, and some of their findings are really fascinating e.g. in countries where women are given less freedom, a higher number of women go into STEM subjects as a way of becoming financially secure, meaning that paradoxically countries with poor gender equality have more women in STEM rather than fewer (not the result anyone was expecting). The takeaway for me is pretty simple however: girls can do Science just as well as boys. If a woman chooses to go down the humanities route then fine, that’s her preference. But if she wants to go down the STEM route (whoop!) the results are conclusive: she’s going to be fine, thank you very much.
April - Mars and Back!
Right now, there are three ways we can gather information about the planet Mars. One is by looking at it through a telescope. The second is by sending robots there which radio back data about their findings. The third is to wait patiently for meteorites to strike the surface of Mars and hope the impacts scatter debris into space, some of which occasionally lands on Earth (like the Allan Hills meteorite). Those methods have sufficed, but what we’ve never done is send anything to Mars to actually grab a chunk of rock and bring it back for analysis.
Which is why it’s good news that NASA and ESA have finally announced this as their next big target. They signed an agreement to work together on sending a sample-return probe to complete the very first Martian round-trip. No longer will we have to rely on Curiosity shouting back at us through space - we’re gonna bring a piece of the action to Earth! Hopefully on the return trip we can change the tires on Musk's car.
June - The Ebola Vaccine
We found out about the Ebola virus in 1976 and since then we haven’t done a lot about it. But in June of this year, the Democratic Republic of Congo began administering a vaccine called rVSV-ZEBOV and according to early reports it is having an extremely high success rate combating an outbreak, thus far preventing an estimated 680 cases of the disease. An additionally heart-warming facet of the story is that the company that sent the vaccine did so free of charge. The pharmaceutical giant Merck donated 7,500 vaccine units to the DRC, which is enough to stymie the outbreak and hopefully prevent it happening again.
You might cynically argue that Merck were only doing this for publicity. Or maybe you want to complain that we only started looking into Ebola vaccines once it started affecting European and American countries i.e. we’ve been ignoring it for decades because it was only affecting people in far-away Africa, but once it became a threat to us we decided to intervene.
Those might be fair points, but my response is: who cares? Saying that big companies like making money or that people are sometimes selfish is hardly an insightful observation. On the other hand, the fact that hundreds of people are being spared a life-threatening disease thanks to sheer generosity is worth celebrating. Whatever cynical spin people might put on this story, any way you slice it: where there was once disease, now there is health. You can't be cynical about that.
July - Icy Neutrinos
Near the South Pole, there’s a huge research apparatus called IceCube located at the Amundsen-Scott Station. It consists of a cubic kilometre of ice festooned with thousands of particle detectors at various depths, all primed to detect cosmic rays - beams of particles streaming toward Earth from outer space. One of the big puzzles we’ve always had is where these cosmic rays come from and how the particles hitting Earth have so much energy. In July we finally got a pretty good answer. Starting from a single neutrino (a weakly interacting, neutrally charged particle moving near the speed of light) which slammed into the ice at the end of 2017, researchers at IceCube spent six months back-tracking its trajectory and finally identified its source: a blazar 3.7 billion light-years away in the direction of the Orion constellation. A blazar is a galaxy whose central black hole is feeding so violently it spits out high-energy particles in a focused jet (shown in the diagram above), like an epic version of an ordinary black hole...and we apparently have one pointed directly at our planet.
July - Underground Lake on Mars
In the world of extremely easy newspaper headlines, there’s this one. The ESA’s Mars Express spacecraft was beaming radar signals off Mars to see what was lurking beneath the surface. The beams penetrate the ground, strike materials of different chemistry and density, and bounce back at varying speeds - kind of like giving the whole planet a giant ultrasound scan. And as we did this scan we discovered a 12-square-kilometre lake of liquid water beneath the surface of the planet’s south pole.
There could be dozens of these subsurface lakes all over the planet for all we know, but it’s the first evidence that Mars not only has liquid water - it has a lot. This news story on its own may not sound that thrilling but remember the rule for our own planet: wherever there is water, there is life. If we have any hope of discovering life on Mars, these underground lakes are likely to be our best bet.
August - Schrodinger's Drum
In quantum mechanics, particles have the ability to exist in more than one energy state at the same time (sort of). This means they should be capable of exhibiting two distinct behaviours simultaneously and even occupying two locations at once (sort of). We had always assumed this phenomenon was unique to tiny particles, but a team led by Michael Vanner proved otherwise.
Vanner was able to create a tiny membrane only a few millimeters across, which he bombarded with particles of light - a bit like chucking pebbles at a drum-skin. Because the particles of light were in two states at once, the drum-skin could be as well, simultaneously vibrating and staying still. Vanner thus managed to persuade a large-ish object to do two contradictory things at once (sort of). You know, someone really ought to write a book about all this quantum stuff. Hmmmmm.
September - Meet Your Great Great Great (x 100000) Grandma
Nobody knows what the earliest form of life on Earth was, but the debate over the earliest animal got really interesting this year. The earliest known animals had previously been dated to around 610 million years ago, but a new discovery seems to have pushed that back by as much as 20 million years! It’s called Dickinsonia, unfortunately (named after Ben Dickinson), and although we’ve known of its existence for a long time thanks to fossil remains, we’ve always assumed it was some sort of fungus.
Until, that is, researchers led by Ilya Bobrovskiy discovered molecules of cholesterol preserved in the fossils - a biochemical produced only by animals. Dickinsonia, it turns out, is the oldest known animal on Earth. Well, the second oldest technically. The most archaic fossil was obviously that engineer who spoke to my student.
September - The Paralysed Walk…Seriously
As if Scientific achievement couldn't get any more awesome, we have this remarkable story from September, in which five separate patients regained movement after paralysing spinal injuries. Susan Harkema (above left), head of the Kentucky Spinal Cord Injury Research Center, has been pioneering a technique whereby motor neurons in a damaged spinal column are stimulated with electricity and taught to reactivate, independent of the brain.
Rather than waiting for signals from the brain-stem to tell muscles what to do, Harkema's device requires the patient to re-train their muscles to respond to electric signals coming from the reactivated neurons. It takes a lot of work and practice on the patient's part, but that's a small price to pay for literally "making the lame to walk."
The word miraculous might be tempting here, but in truth it is nothing of the sort. A miracle implies a temporary suspension of the laws of nature...Harkema didn't have to break any laws of nature to achieve the seemingly impossible, she just applied them in an inventive way nobody else had thought of. This is no miracle. This is pure Science.
October - New Dwarf Planet…And Maybe Planet??
Pluto is not a planet, and never was (check out my blog on the subject) but if you’re yearning for a ninth planet then we may have some good news on the horizon. In October, we discovered a new dwarf-planet orbiting beyond Neptune which has genuinely been named “The Goblin”.
What makes the Goblin so exciting is that as it orbits the sun, its trajectory seems slightly warped, as if something big and heavy out in the darkness is tugging on it gravitationally. That's how we discovered Neptune, in fact - we saw Uranus's orbit getting bent slightly (hurr hurr hurr) - so presumably something must be doing the same to The Goblin.
The estimate is that this object, whatever it is, may be as much as seven times the mass of the Earth and if so, we're potentially gonna have a ninth planet after all. And this time, a proper planet, not just a fatsteroid, which is really all Pluto ever was.
December - Where Once There Was Death, We Created Life
This story was perhaps the most touching of the year for me. Perhaps not the most headline-grabbing or the most influential, but it's still amazing. In 2013 we carried out the first successful uterus transplant, allowing infertile women to receive a working set of reproductive organs and thus have children. The only problem with the procedure is that for it to be an effective treatment for infertility, you need a woman willing to part with her own functional uterus, which, understandably, is a pretty big ask. What we achieved in February this year, however, was something remarkable...a transplant of a uterus from a recently deceased donor to a live recipient.
At the University of Sao Paulo, a team of medical researchers led by Dani Ejzenberg was able to help a woman born without a uterus when, in February, an organ donor died of a stroke and left behind a working uterus to be re-used. For nine months the team waited anxiously, watching as the baby steadily grew, and finally, on December 22nd, the baby was born healthy.
To me this is beautiful. We were able to take a death and literally use it to spawn life. It's hard to think of a more hopeful image than a healthy baby successfully born from a death. 2018 is about to die, but a new year is born from it, one which will yield ever more thrilling and wonderful things from our species and its desire to make the world better.
Image credits:
Mars Ticket: Futurist
Ebola Vaccine: Inhabitat
Blazar: Boston University
Underground Mars Lake: Resonance Science Foundation
Schrodinger's Drum: Physics APS
Susan Harkema: MadisonCourier
Not a great plan?
The Avengers movie franchise began in 2008 with Jon Favreau’s Iron Man and has since taken in $17.5 billion for Marvel Studios. You might be caught up with every installment, but if not (there are 20 of them) I’ll give a brief overview of the story-arc without getting too spoilerish.
Far away, on the planet Titan, the supervillain Thanos figures out a solution to the Universe’s biggest problem. With dwindling resources across every galaxy and species proliferating, we are in a state of chaos as every being on every planet fights for survival. He decides the answer is simple: kill half of everything. Fewer living things = fewer mouths to feed = less need to fight.
It’s a bold and utilitarian approach to the problem of overpopulation, but on sheer logic it would technically work. I feel there’s probably a simpler solution (sharing stuff???) but you’ve got to hand it to him; galactic murder would do the trick. Many philosophers throughout history have proposed answers to the overpopulation problem, with the most widely debated being Thomas Malthus’ 1798 suggestion of limiting the number of children permitted per family. Thanos isn’t interested in something like that however. He’s a man of extremes who doesn’t do anything by half. Except for genocide, I suppose.
As supervillain schemes go it’s not the stupidest I’ve ever come across. That title goes to the time Wonder Woman uncovered a plot by Nazis to buy global milk supplies and sell it to America at an inflated price in order to make it unaffordable for poor children, leading to widespread osteoporosis, allowing Nazis to invade twenty years in the future. I did not make that up by the way, they really ran that story (Sensation Comics, Issue 7, May 1st 1942).
Killing half the population of a planet could be achieved with a fat load of bombs, but Thanos wants to wipe out half of all life in the Universe. Given the sheer size of space (it's pretty big), this is no minor feat. Visiting every planet and blowing up half of every city would take considerable time, so he opts for a much simpler idea - obtain the six infinity stones.
In the Marvel Universe these are six mystical gemstones which control six facets of the Universe: time, space, power, reality, mind and soul. Any being who possesses just one of these items becomes immeasurably powerful, but if Thanos can get his hands on all six he will wield absolute power over the cosmos with a snap of his fingers. It's a cool idea, so I decided to take a look at the scientific plausibility of Thanos' plan.
Space and Time Stones
Every event in the Universe takes place in three-dimensional space (potentially higher dimensions if you’re into string theory) but defining everything spatially is not enough to describe the laws of physics. You need time as well.
The simplest way of demonstrating this is to try imagining an object which occupies space but not time i.e. it has size but doesn’t occupy any seconds. To say an object doesn’t exist for any measure of time is to say the object doesn’t really exist, ergo time has to be thought of as a fourth dimension. If you leave it out and just have a Universe made of space, you essentially have no Universe.
It’s a slightly unusual dimension compared to length, breadth and height, because we can only move through it in one direction, but it is part of our Universe’s backdrop. If you want to control or influence anything you need to talk about things happening in space and time simultaneously.
The physicist Hermann Minkowski therefore proposed we stop thinking of the Universe as objects in space moving through time, but rather as objects moving through a unified material he called ‘spacetime’. Einstein figured out the basic rules for how spacetime should behave in his general theory of relativity, and it turns out everything checks out against experiment.
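Minkowski's unification can be summed up in a single formula: the "separation" between two events blends elapsed time and spatial distance into one quantity, the spacetime interval. I'm adding it here as an illustration (this is the flat-spacetime, special-relativity version):

```latex
ds^{2} = -c^{2}\,dt^{2} + dx^{2} + dy^{2} + dz^{2}
```

General relativity keeps the same idea but lets the coefficients vary from place to place, which is how mass ends up bending spacetime.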
If you want to control an event across the entire Universe, you would need some way of controlling space and time simultaneously. Therefore, Thanos’ need to have the space and time stones makes sense. So far, so good.
It’s never explicitly stated what the reality stone does in the movies, but you can infer it from seeing how people use it. There’s a scene in Avengers: Infinity War where Thanos uses it to turn pulses of laser-bullets into bubbles and generates cities out of thin air. I propose that while the space and time stones control the background of the Universe, the reality stone is manipulating the particles within it, telling them how to arrange.
There are something like 200 types of particles known in physics (I’ve written a post about them here) and the key message is that every object you come across is made from these particles in one way or another. If you control every type of particle, you control literally every object. While there are undoubtedly some types of particle we have not yet discovered, we can assume the reality stone influences them too, even if we don't know about them.
So, given the first three infinity stones, Thanos can control every particle in spacetime. But since he’s wanting to create an event (lots of death) he also needs to control how objects interact with each other.
Every law of Science involves three principles: particles, the spacetime they inhabit, and the energy involved in the interaction. Power is the word we use to describe how quickly energy is transferred, so I propose the fourth stone is the one which controls energy and thus interactions.
Energy is sometimes described as a substance a particle can possess, but this is a tad misleading. Energy is not really a thing, it’s a way of expressing the concept of cause and effect mathematically. When you eat food, chemical reactions take place between the food molecules and those in your body, which allow you to move and live. The food was the cause and your movement was the effect, so we can measure precisely how much ability the food had to cause an effect on your body. This is what we mean when we describe something "having energy".
The food you eat doesn’t contain a glowing fluid-substance called energy; it contains particles. But those particles have ability to exert an effect on other particles. The power stone is perhaps the most philosophical therefore, because it allows Thanos to make sure cause and effect are working when he snaps his fingers and kills everyone. And this might be all he needs.
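To make that cause-and-effect bookkeeping concrete, here's a back-of-the-envelope sketch. The 250 kcal snack and the one-hour burn time are my own illustrative assumptions, not figures from anywhere in particular:

```python
# Back-of-the-envelope: how much "ability to cause an effect" a snack carries,
# and how quickly that energy gets transferred (i.e. the power involved).
# The 250 kcal snack and the one-hour burn time are illustrative assumptions.

KCAL_TO_JOULES = 4184          # 1 food calorie (kcal) = 4184 joules

snack_kcal = 250               # roughly a chocolate bar
energy_joules = snack_kcal * KCAL_TO_JOULES

# Power is energy transferred per unit time; burn the snack off over an hour.
seconds_per_hour = 3600
average_power_watts = energy_joules / seconds_per_hour

print(energy_joules)               # 1046000 J - about a megajoule
print(round(average_power_watts))  # 291 W of average power
```

Same energy, shorter time, more power: burn that bar off in a ten-minute sprint and the average power jumps six-fold, which is all "power" really means.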
Every law of physics we know, and by extension all of chemistry and biology, rests on these three ideas: particles, their interactions and the background in which they live. Given control over these things, Thanos would have the bare minimum required to bend things to his will. But there are some scientists who would argue for another stone, since it’s possible something else exists in the Universe. Something not made of particles at all...
It’s clear that your brain is made of particles, meaning the reality and power stones should be more than capable of manipulating it. It’s also undeniable that we can influence and change the mind’s inner workings with the right particles e.g. psychiatric medications, narcotic substances, anesthetics and electromagnetic fields. But is that all there is to consciousness? Might there be something more than biochemistry going on?
The answer is not settled and indeed some hard-nosed and brilliant scientists (including Nobel prize winners like Niels Bohr and Eugene Wigner) have argued that the mind may in fact have an entirely non-particle component to it...maybe.
I’ve written about it in more detail (here) but it comes down to a phenomenon in quantum mechanics called the measurement problem. When quantum particles are left to their own devices they behave a certain way. But when we take measurements in the lab, they behave in a totally different way. The puzzle comes from the fact that the lab equipment we use to take our measurements is made from the same quantum particles, so it shouldn’t have any sort of spooky effect. Quantum particles interacting with more quantum particles shouldn’t change their behaviour. And yet they do.
There are many answers given and one of them is that consciousness itself exerts influence on particles, that our very act of observing the experiment changes it somehow. I have to be clear that this is a minority view and only one of the many possible solutions to the measurement problem, but we cannot discard it. There is just enough to the theory to make it worth considering, so let’s go along with it for the sake of the movie. If the mind genuinely is a separate substance to particles, Thanos would need a fifth stone to be omnipotent. So what of the sixth?
OK, this is a little more controversial and I’m going to have to tread cautiously so as not to upset anyone on either side of the fence. Physics definitely agrees the Universe is made of particles exchanging energy in spacetime and there is a small group of quantum physicists who suspect the mind may be involved in some way. The soul is a less well-defined concept however, because it has different meanings to different people.
A good way to demonstrate this difficulty is to ask: what, specifically, is the difference between the soul and the mind? The mind is a collection of a person’s memories, beliefs, ideas, hopes, fears, perceptions of the world and self-awareness. A person’s identity can be neatly summed up as their mind, so what is missing from the list which requires a soul? What additional ingredient is needed in defining a person which the mind doesn’t already cover?
Some religions teach that while all animals have a mind of some sort (even jellyfish have a central nervous system), when the animal dies their mind perishes as well, but humans continue to exist once their body is done. The soul is therefore, under some definitions ‘that which keeps the mind preserved once the brain has died’. In other words, the notion of souls takes us into the realm of an afterlife.
Science has a lot to say on the topic of the mind and the debate is very exciting, but there is no scientific debate on the soul because there is no clear information one way or the other. There are all sorts of claims of course, but someone claiming something is not enough to warrant a scientific perspective. This doesn’t mean the afterlife is not real mind you, absolutely not. It’s just that the scientific answer to such a question is “hmmm, we have no idea, could go either way.”
Because we can’t prove or disprove the existence of souls we therefore can’t say much about what the soul stone does. But, let’s say that the theology of the Marvel Universe is correct and souls are genuine things. Thanos obviously doesn’t want to destroy half the planets and stars in the Universe…that would defeat his whole plan. He wants to leave all the planets and resources in their current state but cut the number of living things by 50%. Therefore he’ll need a way to exert his power on only living organisms and not inanimate objects too. If we propose that all living things have a 'soul' distinguishing them from non-living objects, then he would logically need a stone for that.
What Are Infinity Stones?
The answer, I think, can be found nestled in both the theories of quantum cosmology and in a scene from Guardians of the Galaxy…both of which are pretty awesome. Here’s the scene in question (movie is rated PG-13 in America and 12A in the UK):
In this scene, during which the power stone is revealed, The Collector explains “before creation itself there were six singularities, then the Universe exploded into existence and the remnants of these systems were forged into concentrated ingots - infinity stones.”
A singularity is an object whose properties are so extreme we can’t describe them with our current knowledge of physics. The center of a black hole is considered a singularity for instance, since according to the laws we know, a black-hole’s center has an infinite amount of density and gravitational pull. Anything which predicts an infinite property cannot possibly be the right answer since the Universe is a finite system, therefore singularities are really a physicist’s word for “objects we can’t figure out yet” and any theory which predicts a singularity is not complete.
The origin of the Universe is just such a singularity. All we know is that 13.8 billion years ago, the entire Universe was condensed into a tiny speck which started expanding. We have no idea where this speck came from, what it was like, what made it start expanding or if anything came before it (assuming there was a ‘before’ since time may have ‘started’ with the Universe). We can explain the evolution of the Universe pretty well after a certain point, about a quadrillion quadrillion quadrillionth of a second, but anything before that is a total mystery.
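For a sense of scale, that cutoff sits in the neighbourhood of the Planck time, the shortest interval our current physics can meaningfully describe. The comparison with the "quadrillion quadrillion quadrillionth" figure (a quadrillion being $10^{15}$) is approximate, and I'm adding it purely as an illustration:

```latex
t_{P} = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.4 \times 10^{-44}\ \text{s},
\qquad
\left(10^{15}\right)^{-3}\ \text{s} = 10^{-45}\ \text{s}
```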
According to The Collector, there were actually six singularity objects alongside ours 13.8 billion years ago and when our Universe began expanding, for whatever reason, these six singularity objects got absorbed into ours, rather than expanding themselves. These objects would therefore be like miniature Universes which somehow avoided the expansion. They stayed as ‘concentrated ingots’ and contain, it would appear, properties we would normally attribute to an entire universe.
Since we currently have no idea how the Universe began or how many singularities there were, or how they interacted, or why our Universe’s singularity expanded we can sort of shrug a little bit and say “yeah, OK, why not”. There was at least one singularity at the beginning which started expanding so why not six others which didn’t? There are stranger things in nature, like the Lophorina bird (see below). If that is real, I can swallow infinity stones. Not literally of course, did you see the video???
With a Snap of his Fingers
The really scary idea of infinity stones is that once Thanos has all six of them, he can exert his will over everything instantly from one place. But how could such a thing be possible? Surely you couldn’t instantaneously affect every point of the Universe at once? Well, fortunately for the Marvel universe, there is one possible mechanism by which this could potentially be achieved.
It’s a phenomenon called quantum entanglement and it’s really strange. I don’t want to get hung up on technicalities but if you’re curious, the book I’m currently writing (a sequel to Elemental) which will be released in the Summer goes into all the gory details, so look out for that. The gist is that two particles which interact can become linked together in such a way that doing something to one will instantly affect the other no matter what distance there is between them.
The mechanism of how entanglement works is one of the biggest mysteries facing physicists today, but the effect itself is undeniable. Measure a particle here on Earth and you instantly fix what its entangled partner will show, even on the other side of the galaxy, or the Universe. There are all sorts of limitations and caveats on entanglement (you can't use it to send a usable signal, for one) and there is no obvious way of sending a self-destruct order to half the particles in the Universe, but this is a sci-fi film so let's just say there's a way of doing it.
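The flavour of those correlations can be sketched with nothing more than a list of amplitudes and the Born rule. This toy example (plain Python, purely illustrative - it models the simplest entangled pair, not anything resembling infinity stones) uses the Bell state, where two particles are prepared so their measurement outcomes always agree:

```python
import math

# The Bell state (|00> + |11>) / sqrt(2), written as amplitudes over the
# four joint outcomes, in the order: 00, 01, 10, 11.
amp = 1 / math.sqrt(2)
bell_state = [amp, 0.0, 0.0, amp]

# Born rule: the probability of each joint outcome is |amplitude| squared.
probs = [a ** 2 for a in bell_state]

for outcome, p in zip(["00", "01", "10", "11"], probs):
    print(outcome, round(p, 3))
# The mixed outcomes "01" and "10" have probability 0: measure one particle
# and you instantly know what the other will show, however far apart the
# pair has travelled. (No usable signal travels, though - that's the caveat.)
```

The spooky part isn't in the arithmetic, which is trivial; it's that nature actually honours these joint probabilities across any distance.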
In order for two particles to entangle, they have to meet each other at an earlier point, so in order to affect the whole Universe simultaneously, you’d need to somehow wind back to a time when all the particles were close together - say when the Universe was small and everything was concentrated - 13.8 billion years should do the trick.
If you had an object or a bunch of objects hanging around when the Universe was still a singularity, they could entangle themselves with each other and with the Universe as a whole, so once the Universe expanded, they would stay in direct quantum-contact (quantact anyone?) with everything inside it. Every particle would be linked together by their entanglements to the six infinity stones and thus, if you manipulated them in the right way, you could genuinely snap your fingers and affect everything everywhere.
So if you had a way to control all matter (reality) across spacetime (space and time), assuming quantum consciousness is required (mind), then you could control everything in the Universe. If you sent a pulse out to every particle via the entanglement links established at the beginning of the Universe, you could target all the particles incorporated into living things (soul) and tell them to dissociate from each other (power), dissolving half of all living things. Hooray??
Being a Grown-Up
Last week I read about the death of legendary comic-book writer Stan Lee and, like millions of people across the world, felt we’d lost a great writer. Stan Lee was the creator of The Incredible Hulk, Iron Man, Thor, Fantastic Four, Black Panther and many other comic book characters including his most iconic creations: The X-Men and Spider-Man. Lee was a talented and inventive storyteller but also a really witty and cheerful guy who everyone seemed to love. Who doesn't cheer at a Stan Lee cameo in a Marvel movie??
I was therefore puzzled earlier this morning to read the satirist and political commentator Bill Maher’s take on Lee's death. You can read his brief blog in full here but the gist is that Bill Maher doesn’t understand what the fuss is about. He doesn’t see why people are mourning the death of Stan Lee because, as he sees it, comic books aren’t important.
Some choice quotes include "America is in mourning. Deep, deep mourning for a man who inspired millions to, I don’t know, watch a movie, I guess," "comics are for kids, and when you grow up you move on to big-boy books without the pictures," and perhaps the most dramatic: "I don’t think it’s a huge stretch to suggest that Donald Trump could only get elected in a country that thinks comic books are important."
He seems really quite angry about people reading comic books and uses Lee’s death to attack the whole of America because he thinks adults reading comic books is a form of arrested development. His belief that growing up means reading books without pictures seems a little odd to me, however. What’s wrong with looking at pictures? Novels are a legitimate art form...pictures are a legitimate art form. Why does combining words with images suddenly make the story-telling childish? I personally define being an adult as more to do with recognising other people's right to form their own opinions and tastes while taking responsibility for your own actions...not just "reading books without pictures".
I mean, just to point out the obvious here: Bill Maher stars in a TV show. He knows that’s made from lots of pictures played fast, right? He does realise his very own medium of communication involves little-to-no reading?
Maher is correct that comic books were once aimed at children, but there was also a time when television was assumed to be for illiterate commoners and no dignified person would own a set. Art forms are allowed to change with time. Comic books started out for kids, but they aren’t so exclusive anymore. The same way some books are written for grown-ups, some comic books are too. I think Maher just isn’t very widely read.
If it’s the subject matter he objects to, e.g. science fiction and superheroes, then that seems a little curmudgeonly. Does he not know people like to have fun at the cinema, or that sometimes adults like to read books for fun? Actually, I think he must know that, since he filmed a scene for Iron Man 3…a comic book movie based on Stan Lee’s characters.
Books For Smart People
I’m an adult and I’ve read plenty of great literature. I’ve read the works of Plato, Aristotle, Shakespeare, Marlowe, Dickens, Austen, Twain, Melville, Eliot, Hemingway, Orwell, Ishiguro etc. but I’m also a fan of comic book writers like Alan Moore, Frank Miller, John Wagner and Stan Lee. I distinctly remember having a copy of Bertrand Russell’s History of Western Philosophy sitting next to Judge Dredd: Total War on my bedside table at one point. Just because I read comic books doesn't mean I can't also appreciate "classics".
In fact, every Christmas I tend to read Dickens’ A Christmas Carol and the comic book Batman Noel one after the other in the same day. I enjoy the escapist entertainment and haunting artwork of one and the linguistic brilliance and sentimental wit of the other. I’m also in the process of writing a book on quantum mechanics due for publication this Summer...so I’d like to think reading comic books hasn’t dulled my critical faculties or stunted my intellectual growth.
I mean, I agree that you should grow out of childish stories as you get older, but these days there are lots of sophisticated comic books written for adults. Take Maus by Art Spiegelman, a comic book about the Holocaust which won a Pulitzer Prize. That book made my skin crawl with horror and made me tear up with emotion. I have also read Schindler’s Ark (the book on which Schindler’s List was based) and found it equally moving. Is one of them a more adult form of art because it doesn’t contain pictures? Can’t they both be powerful and thought-provoking pieces of literature?
Comic books today have evolved beyond Dennis the Menace and Stan Lee was central to that development because he was one of the first writers to introduce adult issues to his stories. Before him, every comic book character was a 2D square-jawed hero who saved some damsel in distress from a moustache-twiddling "foreigner". Lee began creating characters with emotional complexity.
His comic books dealt with issues like racism, sexism, drug abuse and political corruption. He wrote comic books in which women were central characters with complicated emotional lives rather than foils for male heroes to save, and Lee fought hard to include black characters in his works without stereotyping them. Yes, Stan Lee’s early comic books were written for children, but as the children grew up, so did his writing.
But, let’s say Maher was right for a moment and that comic books are for children. In what way does this make them unimportant? Stan Lee was, according to Maher, an author of children’s literature. Do we no longer celebrate children's literature in the Maher-niverse?
I’m wondering if Bill Maher will be equally disparaging when JK Rowling dies. Or if he thought it was ridiculous when people got sad over the deaths of Dr Seuss or Beatrix Potter? I think Stan Lee was plenty important to our society, unless Maher is going to claim children reading isn't important?
Stan Lee made a lot of kids happy and millions of people have fond memories of reading his stories. By contrast, Maher's job is making caustic remarks about politicians behind a desk. That's his role in society. It's an important one of course, satire is crucial to an informed democracy, but is it more noble a profession than getting young people reading? I don't think so.
Besides, Stan Lee did something even more important for pop culture, which I am going to expand on now (because you might be wondering why I’m writing about comic books on a Science blog)…he made scientists the good guys.
The Evil Genius
In the movies, comic books and pulp-fiction novels of the day, scientists were depicted as the villains, almost without fail. We were always the maniacs who reached too far and accidentally unleashed a deadly plague on the Earth or brought space-vampires from Mars down to eat our livers. Stan Lee made scientists the heroes of his stories instead, and showed how they used their intelligence to outwit common criminals. He made scientists look awesome!
Reed Richards of The Fantastic Four got his powers on a scientific expedition. T’Challa, The Black Panther, was a diplomat and scientist. Charles Xavier from X-Men was a biologist and anthropologist who lectured at Oxford. Bruce Banner was a nuclear physicist. Tony Stark was an engineer. Peter Parker was a high school physics student. Hank Pym was a particle physicist...I could go on.
Stan Lee respected the importance of getting kids interested in Science and I would argue that along with Gene Roddenberry (creator of Star Trek) he did more to raise the profile of fictional scientists than anyone else in popular culture.
Stan Lee also used scientifically legitimate devices to get stories going and showed his heroes using science to defeat bad guys. Sometimes Lee’s physics wasn’t quite right (he wasn't a scientist after all) but oftentimes it was gosh-darned impressive. There’s a Spider-Man story where Electro uses his electrical powers to generate induced magnetic fields inside an iron beam, letting him scale a building and escape thanks to Lenz's law. Lee is teaching children about electromagnetic fields here. Rather than having mega-ray death-lasers controlled by evil gnomes, Lee would often ground his fanciful stories with real scientific terminology and make geeks look like heroes for a change.
How I Use Comic Books
As a physics teacher, what you’re usually doing is teaching kids a quick equation or law, which can sometimes be quite dry, especially for a whole hour. The best thing to do (the really important thing to do), therefore, is to show how physics relates to the real world. But most text-books do this in a very plain fashion.
Physics textbooks are a world of perfectly spherical balls rolling down frictionless surfaces and John and Jane calculating the mass of a pulley given the acceleration of a cube as it is pulled upwards etc. etc. How many young people do you think are going to get fired up about physics because of that? Not many. But if you can push physical laws to their extremes by relating them to sci-fi stories, you can get debates going. You can get people to use the equations in a novel way and see how they really work in outrageous scenarios. Here are some examples of how I have used comic books and comic book movies in my lessons:
There’s an iconic Spider-Man story where Peter Parker tries to save Gwen Stacy as she falls from the George Washington Bridge, but his webbing catches her and brings her to a halt too fast, potentially snapping her neck. Peter Parker then has to live with the guilt of maybe killing his girlfriend because he didn’t take into account changes in momentum (yeah, a kid’s story…sure). I use this comic book scene with my A-level students to calculate the forces involved and answer the question of whether Parker really kills Gwen or not. It’s a great way of teaching concepts like forces, elasticity and gravitational potential energy.
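For anyone curious, here’s roughly how that classroom calculation goes. Every number below (her mass, the height of the fall, the distance over which the webbing stops her) is an assumption I’ve plugged in for illustration, not a figure from the comic:

```python
import math

# Hypothetical numbers for illustration only (not from the comic):
m = 50.0   # Gwen's mass in kg (assumed)
h = 90.0   # height fallen before the webbing catches her, in metres (assumed)
d = 0.5    # distance over which the webbing brings her to rest, in metres (assumed)
g = 9.81   # gravitational field strength, N/kg

# Speed just before the webbing catches her (energy conservation: mgh = 1/2 mv^2)
v = math.sqrt(2 * g * h)

# Average decelerating force over the stopping distance (work-energy theorem)
F = m * v**2 / (2 * d)

print(f"Impact speed: {v:.1f} m/s")
print(f"Average stopping force: {F:.0f} N (~{F / (m * g):.0f} g of deceleration)")
```

The interesting classroom discussion is in the variables: double the stopping distance (i.e. make the webbing stretchier) and the force halves, which is exactly why elasticity matters.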
There’s a scene in The Avengers where Hulk stops a crashing alien spacecraft with his fist. I show this clip and contrast it with the scene in Superman Returns where Kal-El catches an airplane and we use Newtonian mechanics to determine which one of these characters is more powerful.
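The comparison works something like this sketch, where every mass, speed and stopping time is a rough guess for the sake of the debate (neither film gives us real numbers):

```python
# All figures are rough classroom assumptions, not canon.
plane_mass = 2.0e5      # kg, roughly a mid-size airliner (assumed)
plane_speed = 100.0     # m/s, its speed when Superman catches it (assumed)

ship_mass = 5.0e4       # kg, the crashing alien craft (pure guesswork)
ship_speed = 30.0       # m/s, its speed when Hulk punches it (assumed)

stop_time = 2.0         # s, time each hero takes to bring the object to rest (assumed)

# Newton's second law in momentum form: F = change in momentum / time taken
superman_force = plane_mass * plane_speed / stop_time
hulk_force = ship_mass * ship_speed / stop_time

print(f"Superman exerts ~{superman_force:.2e} N")
print(f"Hulk exerts ~{hulk_force:.2e} N")
```

The fun part is that the students argue over the input guesses as much as the answer, which is exactly the kind of estimation reasoning physicists do for real.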
In my lesson on velocity, we use panels from comic books to see who would win in a race between The Flash, Superman and Quicksilver. I’ve used scenes from Ant-Man to talk about quantum mechanics and how object sizes are determined by inter-particle forces. I use clips from X-Men 2 to illustrate how electromagnetism works and a scene from Spider-Man 2 to teach nuclear physics.
I’ve used clips from The Dark Knight Rises to calculate the radius of an explosion outside Gotham City. I’ve used panels from Aquaman to teach the behaviour of waves. I’ve taught lessons on radioactivity using Spider-Man, The Hulk, Daredevil, Reed Richards and The Phoenix (who all got their powers from radioactivity).
Even if the kids don’t really care about comic books, they can at least tell I’m trying to have a bit of fun with the topic and show how physics can be applied to novel situations. So I say thank you to Stan Lee and all the other comic book writers and comic-book movie makers who give me so many cool and over the top moments to showcase to my students and get them thinking.
I’m not saying a person’s literary diet should consist solely of comic books. But let me put it this way: if you want to teach a 12 year old about Newton’s second law, which do you think is going to get them more engaged - making them read an excerpt from Principia Mathematica or showing them the scene in The Dark Knight where the Joker flips an articulated lorry in mid-air using helicopter cables?
Comic books tell stories. They do it with words and pictures. Some are written for children, some are written for adults. The artwork is often remarkably detailed and the dialogue often snappy. Stan Lee was a key figure in developing an art form, getting real science into his stories and depicting scientists as good guys. I think that’s pretty important, Mr Maher, and frankly I think Stan Lee rocked.
Ask someone to name a bunch of famous Scientists. Assuming they don't opt for TV-figures like Brian Cox or Bill Nye, they'll probably pick Albert Einstein, Isaac Newton, Stephen Hawking or (if they're a maverick) Nikola Tesla. There's nothing wrong with those titans of course, but it's interesting that they're all physicists.
If you instead ask someone to name famous biologists, they'll probably go with Charles Darwin, Louis Pasteur, Alexander Fleming or potentially Watson & Crick. Once again, nothing wrong with these luminaries (apart from Watson who's a total jerk) and it's great we can name biologists who shaped our understanding of the world. But what happens if you ask someone to name a famous chemist? There are a few obvious fictional ones like Henry Jekyll or Beaker from The Muppets, but how many real-life chemists can we actually name? Unfortunately, this is where the gears of memory jam and it's something I want to change.
Of the three main Scientific disciplines, chemistry is the one we can actually do stuff with. You can't tell a quark how to oscillate or a strain of bacteria how to evolve, but chemicals are things we can influence. We can use chemistry to build the world we want to live in, and I think we need to celebrate and champion the chemists who put us on the right path.
I got all my sisters and me
The three Science textbooks we use at my school have pictures of great Scientists on their covers: Darwin for biology, Hawking for physics and, for some bizarre reason, Marie Curie for chemistry. Curie was the only person to win Nobel prizes in both chemistry and physics, so she is definitely someone to champion...but especially for chemistry? Her chemistry Nobel was for discovering the elements radium and polonium; obviously impressive, but if we hail Curie as one of the most important chemists of all time we'd have to justify why those two discoveries matter more than any of the other 116 elements on the table. Truthfully, they don't.
Really, it's her work in physics which revolutionised Science, and while her chemistry was outstanding (far better than mine) it wasn't a game-changer for chemistry theory. Marie Curie was one of the world's greatest physicists...even her chemistry Nobel was awarded largely for physics experiments she did...and she would absolutely belong on that list. But there have been much bigger and grander chemical discoveries than two elements which aren't used for much.
I sometimes worry people include Marie Curie because they feel obligated to include a woman in the chemical pantheon, but that's insulting to her legacy and reducing her to "token female Science person". Her achievements in physics are some of the most important in history and she needs to be remembered for that, not as a half-hearted nod to feminism.
The trouble, unfortunately, is that without Curie my list of great chemists becomes ten male names, and that's a problem. It has overtones of the physicist Alessandro Strumia, who recently said at a conference that physics was a subject "built by men". Yikes.
It is true most of the names in early Science history are male, but that's because women were not allowed to do Science!!! Most Universities in Europe refused women admission and even when they were permitted, they were often bullied out. Curie herself had to study in secret as a member of "The Floating University" (not as exciting as it sounds) because patriarchal attitudes were so ingrained, the very notion of a woman being good at physics was abhorrent to University administrators. That's the reason the names are male...it's because men were being total jackasses to the women. So please remember, the names on my list are all dudes because of historical sexism and not a lack of female talent.
The trend is gradually starting to change, I'm happy to say, and a list of great chemists fifty years from now will hopefully be more balanced. But I can't fudge historical facts and I'm not going to include less-impactful women on my list just to balance the numbers, because that would be patronising to them, not to mention insulting to the male Scientists I'd be overlooking. I'm hoping we can still appreciate the brilliance of these ten great chemists of history and not hold their testicles against them.
On that note, here's the video I did about why we need to be able to name more female Scientists: Great female scientists
Oh and here's the blog I wrote on why a lack of female representation in Science needs to change: Feminism in Science
1. Hennig Brandt
Chemistry began in Germany during the 1670s when the alchemist Hennig Brandt decided to boil his own urine to see if he could extract any gold. He couldn't. What he did discover was a waxy white powder which glows in the dark, stinks of garlic and bursts into flame with no provocation. He had discovered phosphorus, the first element isolated in recorded history. While some elements had been known since ancient times (e.g. gold and iron) Brandt's discovery showed that the substances around us aren't pure - they are made up of other stuff mixed together somehow.
Brandt began ordering barrels of excess urine from the German army (spending his wife's money) to extract their phosphorus and carried out numerous experiments to see what it could do. Although a complete fluke, Brandt's discovery marked a turning point for laboratory practice. Rather than chucking a bunch of stuff together in a pot and hoping for the best, Brandt had stumbled across a whole layer of chemical reality hidden below the surface. Alchemy became chemistry, and three and a half centuries later we have 118 known elemental substances with which the Universe does her cooking.
2. Antoine Lavoisier
About a hundred years after Brandt was boiling his own pee, Chemistry began to explode in Europe, both figuratively and literally. Antoine Lavoisier was the guy who began collecting the information, verifying it in his lab (with the help of his wife Marie-Anne) and categorising the growing list of elements. He started grouping chemicals together by property and thus gave us our first periodic table - the Chemist's infographic.
Lavoisier's table wasn't complete of course and he considered things like heat and light to be pure substances, but he gave us the notion that chemical reactions obeyed predictable laws. In the same way physics had strict principles governing the whole show (discovered by Newton), Lavoisier probed chemistry for its own patterns and showed that reactions didn't happen at random. Although later Scientists like Dobereiner, Newlands, Mendeleev and Seaborg crafted the periodic table into its current shape, Lavoisier was the one who suggested the idea in the first place.
3. Jons Berzelius
Berzelius is the reason a lot of people hated chemistry in school. Originally a physician in the late 1700s, Berzelius decided that since physics and mathematics had terminology and notation, chemistry ought to have them too, so he set about formalising the language of this burgeoning field. He's the one who came up with chemical equations and the symbol system we use today with all those little numbers and arrows. What an absolute legend.
Berzelius also discovered silicon, thorium, cerium and selenium and was the first person to start weighing masses of molecules to figure out how many atoms they contained. That's pretty good going, seeing as the existence of atoms wasn't definitively proven until about a century later. Berzelius discovered that when a chemical reaction occurs, all the atoms still exist at the end, even if they've escaped as something like a gas. This had confused previous chemists because it looked as though stuff could pop into and out of existence at will, but Berzelius showed that matter was a conserved quantity; a principle I take great pleasure in tormenting my students with today.
4. Humphry Davy
Humphry Davy began his life as a poet, but when he turned to Science he became the most accomplished chemist in Britain, sometimes referred to as the British Berzelius. He holds the record for the most naturally-occurring element discoveries (six), did a lot of work on acid-base properties, invented the first anaesthetic and came up with the electroplating method we still use to protect ships. However, Davy's biggest contribution to chemistry was cataloguing reactivity itself.
Because most elements are bonded to others and don't occur in their native state, a lot of chemistry involves mixing the right chemicals together and causing atoms to switch partners. Chemical reactions are all about breaking one set of bonds and forming a new one. Davy essentially figured out which chemical combinations would react and which did nothing. His study of reactivity cost him his eyesight when a plate of nitrogen trichloride exploded in his face, but studying unreactivity led him to observe the properties of glowing metal in inert gases, and thus Davy invented the very first light bulb...in your face, Edison.
5. Svante Arrhenius
Arrhenius was a founding member of the Nobel prize committee (and won the prize himself in 1903, of course) and invented what we now call 'physical chemistry'. It's the result of physics and chemistry getting amorous and concerns itself with things like rate of reaction (the equation for which is his), electrochemistry (for which he won the Nobel prize), equilibrium (a concept he largely invented), acid-base reactions (he was the first person to figure out what they were) and forgetting to wear your lab specs (as shown in the above photograph). His greatest contribution to Science, and the world, however, was establishing the link between chemistry and the environment.
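His rate equation, by the way, is simple enough to play with yourself: k = A·exp(-Ea/RT), where A and the activation energy Ea depend on the reaction. The values below are made up for a generic reaction, but they reproduce the classic chemistry rule of thumb that reaction rates roughly double for every 10-degree rise in temperature:

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

def rate_constant(A, Ea, T):
    """Arrhenius equation: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Illustrative (made-up) values for a generic reaction:
A = 1.0e10     # pre-exponential factor, s^-1 (assumed)
Ea = 50_000.0  # activation energy, J/mol (assumed)

k_cold = rate_constant(A, Ea, 298.0)  # room temperature
k_warm = rate_constant(A, Ea, 308.0)  # ten degrees warmer

print(f"Rate speeds up by a factor of {k_warm / k_cold:.2f} for a 10 K rise")
```

That exponential sensitivity to temperature is why gentle warming can have outsized effects on chemical systems, including the planetary one Arrhenius worried about.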
In the 1890s everyone assumed the natural world was simply too big for humans to have any effect on. Darwin had shown us to be a tiny twig on the tree of life, but Arrhenius put us right back in the centre of things when he began taking measurements of carbon dioxide in the atmosphere and comparing them to estimates of historic levels. Arrhenius learned that the chemical reactions humans were carrying out affected chemical reactions in the air around us, and he was the first person to ring an alarm bell on the most pressing and crucial chemistry challenge we face today: climate change. The entire ecosystem of the Earth is a giant chemical system and we play a significant role in it. How we choose to wield that power is up to us, and Arrhenius showed us we have that power in the first place.
6. Fritz Haber
I don't really like the term "evil scientist" because a person's moral code is often a product of their environment. Fritz Haber essentially invented chemical-weaponry for Germany during WWI by using chlorine gas to suffocate and acidify British troops in trenches. But from Haber's perspective he was being a patriot, helping his government defeat the invading British who were getting involved in a conflict they had no stake in. Haber's desire to help his country's war-effort doesn't necessarily make him evil. However, going on holiday to watch the massacre itself from a protected balcony probably does.
The greatest thing Haber did for chemistry was industrialise it. Prior to him, reactions were carried out in clunky batch processes by small teams producing tiny amounts. Haber figured out a way to manufacture ammonia (a key ingredient in fertilisers and therefore essential to food production) on a factory scale at a constant output. The Haber process allows us to set our starting materials and keep them in constant reaction for as long as we need, rather than relying on a dozen lab-coat wearing glassware experts measuring out precise doses. Prior to his input, the main way to get fertiliser was from bat excrement, and I think the Haber process is a better way of maintaining our food-economy than constantly feeding bats laxatives.
7. Gilbert Lewis
Everyone knew by the early twentieth century that atoms were made of a dense nucleus with electrons orbiting in shells. But nobody could figure out how atoms stuck together. Berzelius had been banging on about atoms combining a century earlier, but how exactly did they do it? Lewis was the man who proposed "the chemical bond".
A chemical bond is a link between atoms where electrons are shared in a combined region of space, attracted equally to both nuclei. Originally Lewis began drawing his atoms as cube-shapes with electrons on corners, but a lot of people misunderstood and thought he was claiming atoms were square. He wasn't; he was just coming up with a way to keep track of electrons and their shells. We still use his "dot" method today, except fortunately we now draw everything with circles. Lewis was sadly nominated for the Nobel prize 41 times without ever winning, which seems ridiculous to me because chemistry theory without the idea of bonding would be like mathematics without the equals sign.
8. Linus Pauling
One afternoon, while suffering from a cold and reading sci-fi novels in bed, Linus Pauling decided to start cutting strips of paper out of his newspaper and began drawing atoms on them before folding them at what he calculated to be their correct bond angles. By doing so, he solved an important protein structure that had been baffling biologists for decades. This sounds like a kooky way to do chemistry but he wasn't practising origami. Pauling was basing his paper-angles and shapes on quantum theory, the new branch of physics taking the science world by storm. By applying quantum mechanics to chemical bonding and chemical bonding to protein shape, Pauling created a bridge between physics, chemistry and biology, showing all three Sciences were part of the same dance. He won the Nobel for chemistry, although it just as easily could have been awarded for either of the other two.
He was arguably the greatest multi-disciplinary scientist of the twentieth century, writing books and papers in mathematics, physics, chemistry and biochemistry, and he also did a lot of work persuading governments to de-escalate their nuclear armament programs (for which he won his second Nobel prize, this time for Peace). Toward the end of his life, he went a bit off the deep end and claimed you could cure cancer by taking "mega-doses" of vitamin C, but in his prime he was the chemist's Einstein. Oh, and he proposed a helical structure for DNA before Watson and Crick (his version turned out to be wrong, but still). So there.
9. Robert Burns Woodward
Chemistry is split into four main disciplines. Physical chemistry is about the mathematics of how chemicals move, flow and react (Arrhenius). Quantum chemistry is getting down to the nitty gritty of how electrons behave within a molecule (Pauling) and then the study of elements and compounds is split into two branches: organic, the study of carbon-based molecules, and inorganic...the study of everything else. Inorganic chemistry was arguably invented by Davy and Berzelius, but the undisputed king of carbon was R.B. Woodward.
Because most of the important molecules in the world are carbon-based, organic chemistry is mostly about analysing their structure - a process called spectroscopy - and then creating them ourselves - a process called synthesis. Woodward pioneered both techniques. He was an architect of molecular dimensions, building such complex structures as quinine, cholesterol, chlorophyll and vitamin B12 from scratch. Most of the medicines in your bathroom cabinet are only possible thanks to Woodward and his synthetic techniques. A-level chemists in the UK are required to learn a huge number of synthetic maps charting how we turn one carbon molecule into another. Woodward is the man who drew the map.
10. Leo Baekeland
If Baekeland's life were to have a title it would be How to get rich by doing simple organic chemistry. Woodward was the master of complicated molecules, but Baekeland was the man who invented the most ubiquitous carbon-based substance in modern civilization. Historically we classify human eras as the Stone Age, Bronze Age and Iron Age, but the present day will almost certainly be known as the Plastic Age.
Although a few chemists had accidentally discovered the process of sticking simple carbon-molecules together in chains and tangling them up - notably Eduard Simon and Alexander Parkes - Baekeland was the person who mastered it and gave the world its plastic. Prior to him, most hard substances were either metal, rock, wood or shellac (a substance made from sticky beetle-egg-glue - ewwwww). Baekeland envisioned a material we could make on demand, customise to fit a purpose, alter to be hard, soft, flexible, brittle or tough, and which would not corrode over time. The plastics industry, which gives us everything from stationery to furniture to breast implants, made him an untold fortune. Bravo Leo. And thanks for all the ocean-garbage!
If you're curious about the story of chemistry and how we developed the whole thing check out my book: Elemental - How the Periodic Table Can Now Explain (Nearly) Everything