Do Herbs Get a Bad Press?

A neat little study in BMC Medicine investigates how newspapers report on clinical research. The authors set out to systematically compare the tone and accuracy of write-ups of clinical trials of herbal remedies with those of trials of pharmaceuticals. The results might surprise you.

The research comes from a Canadian group, and most of the hard slog was done by two undergrads, who read through and evaluated 105 trials and 553 newspaper articles about those trials. (They didn't get named as authors on the paper, which seems a bit mean, so let's take a moment to appreciate Megan Koper and Thomas Moran.) The aim was to capture all English-language newspaper articles about clinical trials printed between 1995 and 2005 (as found on LexisNexis). Duplicate articles were weeded out and every article was then rated for overall tone (a subjective judgment), the number of risks and benefits reported, whether or not it reported on conflicts of interest, and so forth. The trials themselves were also rated.

As the authors say:

This type of study, comparing media coverage with the scientific research it covers is a well recognized method in media studies. Is the tone of reporting different for herbal remedy versus pharmaceutical clinical trials? Are there differences in the sources of trial funding and the reporting of that issue? What about the reporting of conflicts of interest?
There was a range of findings. Firstly, newspapers were generally poor at reporting important facts about trials, such as conflicts of interest and methodological flaws. No great surprise there. They also tended to understate risks, especially with regard to herbal trials.

The most novel finding was that newspaper reports of herbal remedy trials were quite a lot more likely to be negative in tone than reports of pharmaceutical trials. The graphs here show this: out of 201 newspaper articles about pharmaceutical clinical trials, not one was negative in overall tone, and most were actively positive about the drug, while the herbs got a harsh press, with roughly as many negative articles as positive ones. (Rightmost two bars.)


This might partly be explained by the fact that slightly more of the herbal remedy trials found a negative result, but the difference in this case was fairly small (leftmost two bars). The authors concluded that
Those herbal remedy clinical trials that receive newspaper coverage are of similar quality to pharmaceutical clinical trials ... Despite the overall positive results and tone of the clinical trials, newspaper coverage of herbal remedy clinical trials was more negative than for pharmaceutical clinical trials.
Bet you didn't see that coming - the media (at any rate in Britain) are often seen as reporting uncritically on complementary and alternative medicine. These results suggest that this is a simplification, but remember that this study only considered articles about specific clinical trials - not general discussions of treatments or diseases. The authors remark:
[The result] is contrary to most published research on media coverage of CAM. Those studies consider a much broader spectrum of treatments and the media content is generally anecdotal rather than evidence based. Indeed, journalists are displaying a degree of skepticism rare for medical reporting.
So, it's not clear why journalists are so critical of trials of herbs when they're generally fans of CAM the rest of the time. The authors speculate:
It is possible that once confronted with actual evidence, journalists are more critical or skeptical. It may be considered more newsworthy to debunk commonly held beliefs and practices related to CAM, to go against the trend of positive reporting in light of evidence. It is also possible that journalists who turn to press releases of peer-reviewed, high-impact journals have subtle biases towards scientific method and conventional medicine. Also, journalists turn to trusted sources in the biomedical community for comments on clinical trials, both herbal and pharmaceutical, potentially leading to a biomedical bias in reporting trial outcomes.
If you forgive the slightly CAM-ish language (biomedical indeed), you can see that they make some good suggestions - but we don't really know. This is the problem with this kind of study (as the authors note) - the fact that a story is "negative" about herbs could mean a lot of different things. We also don't know how many other articles there were about herbs which didn't mention clinical trials, and because this article only considered articles referring to primary literature, not meta-analyses (I think), it leaves out a lot of material. Meta-analyses are popular with journalists and are often more relevant to the public than single trials are.

Still, it's a paper which challenged my prejudices (like a lot of bloggers I have a bit of a persecution complex about the media being pro-CAM) and a nice example of empirical research on the media.

Tania Bubela, Heather Boon, Timothy Caulfield (2008). Herbal remedy clinical trials in the media: a comparison with the coverage of conventional pharmaceuticals. BMC Medicine, 6(1). DOI: 10.1186/1741-7015-6-35

The Spooky Case of the Disappearing Crap Science Article

Just a few hours ago, I drafted a post about a crap science story in the Daily Telegraph called "Stress of modern life cuts attention spans to five minutes".

The pressures of modern life are affecting our ability to focus on the task in hand, with work stress cited as the major distraction, it said.
Declining attention spans are causing household accidents such as pans being left to boil over on the hob, baths allowed to overflow, and freezer doors left open, the survey suggests.
A quarter of people polled said they regularly forget the names of close friends or relatives, and seven per cent even admitted to momentarily forgetting their own birthdays.
The study by Lloyds TSB insurance showed that the average attention span had fallen to just 5 minutes, down from 12 minutes 10 years ago.
But the over-50s are able to concentrate for longer periods than young people, suggesting that busy lifestyles and intrusive modern technology rather than old age are to blame for our mental decline.
"More than ever, research is highlighting a trend in reduced attention and concentration spans, and as our experiment suggests, the younger generation appear to be the worst afflicted," said sociologist David Moxon, who led the survey of 1,000 people.
Almost identical stories appeared in the Daily Mail (no surprise) and, for some reason, an awful lot of Indian news sites. So I hacked out a few curmudgeonly lines - but before I posted them, the story had vanished! (Update: It's back! See end of post). Spooky. But first, the curmudgeonry:
  • Crap science story in "crap" shocker
The term "attention span" is meaningless - attention to what? Are we so stressed out that after five minutes down the pub, we tend to forget our pints and wander home in a daze? You could talk about attention span for a particular activity, so long as you defined your criteria for losing attention - for example, you could measure the average time a student sits in a lecture before he starts doodling on his notes. Then if you wanted you could find out if stress affects that time. I wouldn't recommend it, because it would be very boring, but it would be a scientific study.

This news, however, is not based on a study of this kind. It's based on a survey of 1,000 people, i.e. they asked people how long their attention span was and whether they felt they were prone to accidents. No doubt the questions were chosen in such a way that they got the answers they wanted. Who are "they"? Lloyds TSB insurance, or rather, their PR department, who decided that they would pay Mr David Moxon MSc. to get them the results they wanted. He obliged, because that's what he does. Then the PR people wrote up Moxon's "results" as a press release and sent it out to all the newspapers, where stressed-out, over-worked journalists (there's a grain of truth to every story!) leapt at the chance to fill some precious column inches with no thinking required. Lloyds get their name in the newspapers, their PR company gets cash, and Moxon gets cash and his name in the papers so he gets more clients in the future. Sorted!

How do I know this? Well, mainly because I've read Ben Goldacre's Bad Science and Nick Davies's Flat Earth News, two excellent books which explain in great detail how modern journalism works and how this kind of PR junk routinely ends up on the pages of your newspapers in the guise of science or "surveys". However, even if I hadn't, I could have worked it out by just consulting Google regarding Mr Moxon. Here is his website. Here's what Moxon says about his services:
David can provide a wide range of traditional behavioural research methods on a diverse range of social, psychological and health topics. David works in partnership with clients delivering precisely the brief they require whilst maintaining academic integrity.
The more commonly provided services include:
  • The development and compilation of questionnaire or survey questions

  • Statistical analysis of data (including SPSS® if required)

  • The development of personality typologies

  • The production of media friendly tests and quizzes (always with scoring systems)

  • The production of primary research reports identifying ‘top line findings’ as well as providing detailed results and conclusions.

In other words, he gets the results you want. And he urges potential customers to
Contact the consultancy which gives you fast, highly-creative and psychologically-endorsed stories that grab the headlines.
  • The Disappearance
The mystery is that the story, so carefully crafted by the PR department, has gone. Both the Telegraph and the Mail have pulled it, although it was there last time I checked, a couple of hours ago. Googling the story confirms that it used to be there, but now it's gone. Variants are still available elsewhere, sadly.

So, what happened? Did both the Mail and the Telegraph suddenly experience a severe attack of journalistic integrity and decide that this story was so bad, they weren't even going to host it on their websites? It seems doubtful, especially in the case of the Mail, but it's possible.

I prefer a different explanation: my intention to rubbish the story travelled forwards in time, and caused the story to be taken down, even though I hadn't posted about it yet. Lynne McTaggart has proven that this can happen, you know.

Update 27th November 13:30: And it's back! The story has reappeared on the Telegraph website. The Lay Scientist tells me that the story was originally put up prematurely and then pulled because it was embargoed until today. I don't quite see why it matters when a non-story like this is published - it could just as well have been 10 years ago - but there you go. And in a ridiculous coda to this sorry tale, the Telegraph have today run a second crap science article centered on the concept of "5 minutes" - according to the makers of cold and flu remedy Lemsip, 52% of women feel sorry for their boyfriends when they're ill for just five minutes or less. Presumably because this is their attention span. How I wish I were making this up.

Totally Addicted to Genes

Why do some people get addicted to things? As with most things in life, there are lots of causes, most of which have little, if anything, to do with genes or the brain. Getting high or drunk all day may be an appealing and even reasonable life choice if you're poor, bored and unemployed. It's less so if you've got a steady job, a mortgage and a family to look after.

On the other hand, substance addiction is a biological process, and it would be surprising if genetics did not play a part. There could be many routes from DNA to dependence. Last year a study reported that two genes, TAS2R38 and TAS2R16, were associated with problem drinking. These genes code for some of the tongue's bitter taste receptor proteins - presumably, carriers of some variants of these genes find alcoholic drinks less bitter, more drinkable and more appealing. Yet most people are more excited by the idea of genes which somehow "directly" affect the brain and predispose to addiction. Are there any? The answer is yes, probably, but they do lots of other things besides causing addiction.

A report just published in the American Journal of Medical Genetics by Agrawal et al. (2008) found an association between a certain variant in the CNR1 gene, rs806380, and the risk of cannabis dependence. They looked at a sample of 1,923 white European-American adults from six cities across the U.S., and found that the rs806380 "A" allele (variant) was more common in people with self-reported cannabis dependence than in those who denied having such a problem. A couple of other variants in the same gene were also associated, but less strongly.

As with all behavioural genetics, there are caveats. (I've warned about this before.) The people in this study were originally recruited as part of an alcoholism project, COGA. In fact, all of the participants were either alcohol dependent or had relatives who were. Most of the cannabis-dependent people were also dependent on alcohol. However, this is true of the real world as well, where dependence on more than one substance is common.

The sample size of nearly 2000 people is pretty good, but the authors investigated a total of eleven different variants of the CNR1 gene. This raises the problem of multiple comparisons, and they don't mention how they corrected for this, so we have to assume that they didn't. The main finding does corroborate earlier studies, however.
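To see why this matters: testing eleven variants at the usual p < 0.05 threshold gives roughly a 43% chance (1 - 0.95^11) of at least one false positive even if no variant does anything. Here's a minimal sketch of the simplest fix, a Bonferroni correction - the p-values are invented for illustration, not the values Agrawal et al. reported:

```python
# A minimal sketch of Bonferroni correction across eleven tests.
# These p-values are made up for illustration only.
alpha = 0.05
p_values = [0.003, 0.020, 0.048, 0.11, 0.21, 0.34,
            0.42, 0.55, 0.61, 0.78, 0.90]  # hypothetical

threshold = alpha / len(p_values)  # 0.05 / 11, about 0.0045
for p in p_values:
    verdict = "significant" if p < threshold else "not significant"
    print(f"p = {p:.3f} -> {verdict} after correction")
```

Note that a nominally "significant" p = 0.048 doesn't survive the correction - exactly the kind of result that uncorrected multiple testing can throw up by chance.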

So, assuming that this result is robust - and it's at least as robust as most work in this field - does this mean that a true "addiction gene" has been discovered? Well, the gene CNR1 codes for the cannabinoid type 1 (CB1) receptor protein, the most common cannabinoid receptor in the brain. Your brain is full of endocannabinoids, molecules similar to the active compounds found in cannabis, and both endocannabinoids and the chemicals in smoked cannabis activate this receptor. Although endocannabinoids were discovered just 20 short years ago, they've already been found to be involved in just about everything that goes on in the brain, acting as a feedback system which keeps other neurotransmitters under control.

So, what Agrawal et al. found is that the cannabinoid receptor gene is associated with cannabis dependence. Is this a common-sense result - doesn't it just mean that people whose receptors are less affected by cannabis are less likely to want to use it? Probably not, because what's interesting is that the same variant in the CNR1 gene, rs806380, has been found to be associated with obesity and with dependence on cocaine and opioids. Other variants in the same gene have shown similar associations, although there have been several studies finding no effect, as always.

What makes me believe that CNR1 probably is associated with addiction is that a drug which blocks the CB1 receptor, rimonabant, causes people to lose weight, and is also probably effective in helping people stop smoking and quit drinking (weaker evidence). Give it to mice and they become little rodent Puritans - they lose interest in sweet foods, and recreational drugs including alcohol, nicotine, cocaine and heroin. Only the simple things in life for mice on rimonabant. (No-one's yet checked whether rimonabant makes mice lose interest in sex, but I'd bet money that it does.)

So it looks as though the CB1 receptor is necessary for pleasurable or motivational responses to a whole range of things - maybe everything. If so, it's not surprising that variants in the gene coding for CB1 are associated with substance dependence, and with body weight - maybe these variants determine how susceptible people are to the lures of life's pleasures, whether it be a chocolate muffin or a straight vodka. (This is speculation, although it's informed speculation, and I know that many experts are thinking along these lines.)

What if we all took rimonabant to make us less prone to such vices? Wouldn't that be a good thing? It depends on whether you think people enjoying themselves is evidence of a public health problem, but it's worth noting that rimonabant was recently taken off the European market, despite being really pretty good at causing weight loss, because it causes depression in a significant minority of users. Does rimonabant just rob the world of joy, making everything less fun? That would make anyone miserable. Except for neuroscientists, who would look forward to being able to learn more about the biology of mood and motivation by studying such side effects.

Arpana Agrawal, Leah Wetherill, Danielle M. Dick, Xiaoling Xuei, Anthony Hinrichs, Victor Hesselbrock, John Kramer, John I. Nurnberger, Marc Schuckit, Laura J. Bierut, Howard J. Edenberg, Tatiana Foroud (2008). Evidence for association between polymorphisms in the cannabinoid receptor 1 (CNR1) gene and cannabis dependence. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics. DOI: 10.1002/ajmg.b.30881

Educational neuro-nonsense, or: The Return of the Crockus

Vicky Tuck, President of the British Girls' Schools Association, has some odd ideas about the brain.

Tuck has appeared on British radio and in print over the past few days arguing that there should be more single-sex schools (which are still quite common in Britain) because girls and boys learn in different ways and benefit from different teaching styles. Given her job, I suppose she ought to be doing that, and there are, I'm sure, some good arguments for single-sex schools.

So why has she resorted to talking nonsense about neuroscience? Listen if you will to an interview she gave on the BBC's morning Today programme (her part runs from 51:50 to 55:10). Or, here's a transcript of the neuroscience bit, with my emphasis:

Interviewer: Do we know that girls and boys brains are wired differently?
Tuck: We do, and I think we're learning more and more every day about the brain, and particularly in adolescents this wiring is very interesting, and it's quite clear that you need to teach girls and boys in a very different way for them to be successful.
Interviewer: Well give us some examples, how should the way in which you teach them differ?
Tuck: Well, take maths. If you look at the girls they sort of approach maths through the cerebral cortex, which means that to get them going you really need to sort of paint a picture, put it in context, relate it to the real world, while boys sort of approach maths through the hippocampus, therefore they're very happy and interested in the core properties of numbers and can sort of dive straight in. So if a girl's being taught in a male-focused way she will struggle, whereas in an all-girl's school their confidence in maths is very, very high.
Interviewer: So you have no doubt that all girls should be taught separately from boys?
Tuck: I think that ideally, girls fare better if they're in a single sex environment, and I think that boys also fare better in an all boy environment, I think for example in the study of literature, in English, again a different kind of approach is needed. Girls are very good at empathizing, attuning to things via the emotions, the cerebral cortex again, whereas the boys come at things... it's the amygdala is very strong in the boy, and he will you know find it hard to tune in in that way and needs a different approach.
Interviewer: And yet we've had this trend towards co-education and we've also had more boys schools opening their doors to girls... [etc.]
This is, to put it kindly, confused. Speaking as a neuroscientist, I know of no evidence that girls and boys approach maths or literature using different areas of the brain, I'm not sure what evidence you could look for which would suggest that, and I'm not even sure what that statement means.

Girls and boys all have brains, and they all have the same parts in roughly the same places. When they're reading about maths, or reading a novel, or indeed when they're doing anything, all of these areas are working together at once. The cerebral cortex, in particular, comprises most of the bulk of the brain, and almost literally does everything; it has dozens of sub-regions responsible for everything from seeing moving objects to feeling disgusted to moving your eyes. I don't know which area is responsible for the boyish "core properties of numbers", but for what it's worth, the area most often linked to counting and calculation is the angular gyrus, part of... the supposedly girly cerebral cortex!

The gruff and manly hippocampus, on the other hand, is best known for its role in memory. Damage here leaves people unable to form new memories, although they can still remember things that happened before the injury. It's not known whether these people also have problems with number theory.

When it comes to literature, things get even worse. She says - "Girls are very good at empathizing, attuning to things via the emotions" - which I guess is a pop-psych version of psychologist Simon Baron-Cohen's famous theory of gender differences: that girls are, on average, better at girly social and emotional stuff while boys are better at systematic, logical stuff. This is, er, controversial, but it's a theory that has at least some merit to it.

However, given that the amygdala is generally seen as a fluffy "emotion area" while the cerebral cortex, or at least parts of it, are associated with more "cold" analytic cognition, "The amygdala is very strong in boys" suggests that they should be more emotionally empathic. If Tuck's going to deal in simplistic pop-neuroanatomy, she should at least get it the right way round.

The likely source of Tuck's confusion, given what's said here about Harvard research, is this study led by Dr. Jill Goldstein, who found differences in the size of brain areas between men and women. For example she found that men have, on average, larger amygdalas than women. Although they also have smaller hippocampi. Whatever, this study is fine science, although bear in mind that there could be a million reasons why men's and women's brains are different - it might have nothing to do with inborn differences. Stress, for example, makes your hippocampus shrink.

More importantly, there's no reason to think that "bigger is better", when it comes to parts of the brain. (I make no comment about other parts of the body.) That's phrenology, not science. Is a bigger mobile phone better than a smaller one? Bigger could be worse, if it means that the brain cells are less well organized. Likewise, if an area "lights up" more on an fMRI scan in boys than in girls, that sounds good, but in fact it might mean that the boys are having to think harder than the girls, because their brain is less efficient.

I'm a believer in the reality of biological sex differences myself - I just don't think we should try to find them with MRI scans. And Vicky Tuck seems like a clever person who's ended up talking nonsense unnecessarily. She could be making a good argument for single-sex schools based on some actual evidence about how kids learn and mature. Instead, she's shooting herself in the foot (or maybe in the brain's "foot center") with dodgy brain theories. Save yourself, Vicky - put the brain down and walk away.

Link: Cognition and Culture, who originally picked up on this.
Link: The hilarious story of "The Crockus", a made-up brain area which has also been invoked to justify teaching girls and boys differently. It's weird how bad neuroscience repeats itself.


Deep Brain Stimulation Cures Urge To Break Glass

Deep Brain Stimulation (DBS) is in. There's been much buzz about its use in severe depression, and it has a long if less glamorous record of success in Parkinson's disease. Now that it's achieved momentum as a treatment in psychiatry, DBS is being tried in a range of conditions including chronic pain, obsessive-compulsive disorder and Tourette's Syndrome. Is the hype justified? Yes - but the scientific and ethical issues are more complex, and more interesting, than you might think.

Biological Psychiatry have just published this report of DBS in a man who suffered from severe, untreatable Tourette's syndrome, as well as OCD. The work was performed by a German group, Neuner et al. (who also have a review paper just out), and they followed the patient up for three years after implanting high-frequency stimulation electrodes in an area of the brain called the nucleus accumbens. It's fascinating reading, if only for the insight into the lives of the patients who receive this treatment.

The patient suffered from the effects of auto-aggressive behavior such as self-mutilation of the lips, forehead, and fingers, coupled with the urge to break glass. He was no longer able to travel by car because he had broken the windshield of his vehicle from the inside on several occasions.
It makes even more fascinating viewing, because the researchers helpfully provide video clips of the patient before and after the procedure. Neuropsychiatric research meets YouTube - truly, we've entered the 21st century. Anyway, the DBS seemed to work wonders:
... An impressive development was the cessation of the self-mutilation episodes and the urge to destroy glass. No medication was being used ... Also worthy of note is the fact that the patient stopped smoking during the 6 months after surgery. In the follow-up period, he has successfully refrained from smoking. He reports that he has no desire to smoke and that it takes him no effort to refrain from doing so.
Impressive indeed. DBS is, beyond a doubt, an exciting technology from both a theoretical and a clinical perspective. Yet it's worth considering some things that tend to get overlooked.

Firstly, although DBS has a reputation as a high-tech, science-driven, precisely-targeted treatment, it's surprisingly hit-and-miss. This report involved stimulation of the nucleus accumbens, an area best known to neuroscientists as being involved in responses to recreational drugs. (It's tempting to infer that this must have something to do with why the patient quit smoking.) I'm sure there are good reasons to think that DBS in the nucleus accumbens would help with Tourette's - but there are equally good reasons to target several other locations. As the authors write:
For DBS in Tourette's patients, the globus pallidus internus (posteroventrolateral part, anteromedial part), the thalamus (centromedian nucleus, substantia periventricularis, and nucleus ventro-oralis internus) and the nucleus accumbens/anterior limb of the internal capsule have all been used as target points.
For those whose neuroanatomy is a little rusty, that's a fairly eclectic assortment of different brain regions. Likewise, in depression, the best-known DBS target is the subgenual cingulate cortex, but successful cases have been reported with stimulation in two entirely different areas, and at least two more have been proposed as potential targets (Paper.) Indeed, even once a location for DBS has been chosen, it's often necessary to try stimulating at several points in order to find the best target. The point is that there is no "Depression center" or "Tourette's center" in the brain which science has mapped out and which surgery can now fix.

Second, by conventional standards, this was an awful study: it only had one patient, no controls, and no blinding. Of course, applying usual scientific standards to this kind of research is all but impossible, for ethical reasons. These are people, not lab rats. And it does seem unlikely that the dramatic and sustained response in this case could be purely the placebo effect, especially given that the patient had tried several medications previously.

So what the authors did was certainly reasonable under the circumstances - but still, this article, published in a leading journal, is basically an anecdote. If it had been about a Reiki master waving his hands at the patient, instead of a neurosurgeon sticking electrodes into him, it wouldn't even make it into the Journal of Alternative and Complementary Medicine. This is par for the course in this field; there have been controlled trials of DBS, but they are few and very small. Is this a problem? It would be silly to pretend that it wasn't - there is no substitute for good science. There's not much we can do about it, though.

Finally, Deep Brain Stimulation is a misleading term - the brain doesn't really get stimulated at all. The electrical pulses used in most DBS are at such a high frequency (145 Hz in this case) that they "overload" nearby neurons and essentially switch them off. (At least that's the leading theory.) In effect, turning on a DBS electrode is like cutting a hole in the brain. Of course, the difference is that you can switch off the electrode and put it back to normal. But this aside, DBS is little more sophisticated than the notorious "psychosurgery" pioneered by Walter Freeman back in the 1930s, which has since become so unpopular. I see nothing wrong with that - if it works, it works, and psychosurgery worked for many people, which is why it's still used in Britain today. It's interesting, though, that whereas psychosurgery is seen as the height of psychiatric barbarity, DBS is lauded as medical science at its most sophisticated.

For all that, DBS is the most interesting thing in neuroscience at the moment. Almost all research on the human brain is correlational - we look for areas of the brain which activate on fMRI scans when people are doing something. DBS offers one of the very few ways of investigating what happens when you manipulate different parts of the human brain. For a scientist, it's a dream come true. But of course, the only real reason to do DBS is for the patients. DBS promises to help people who are suffering terribly. If it does, that's reason enough to be interested in it.

See also: Someone with Parkinson's disease writes of his experiences with DBS on his blog.

I. Neuner, K. Podoll, D. Lenartz, V. Sturm, F. Schneider (2008). Deep Brain Stimulation in the Nucleus Accumbens for Intractable Tourette's Syndrome: Follow-Up Report of 36 Months. Biological Psychiatry. DOI: 10.1016/j.biopsych.2008.09.030

Kruger & Dunning Revisited

The irreplaceable Overcoming Bias have an excellent post on every blogger's favorite psychology paper, Kruger and Dunning (1999) "Unskilled and Unaware Of It".

Most people (myself included) have taken this paper as evidence that the better you are at something, the better you are at knowing how good you are at it. Thus, people who are bad don't know that they are, which is why they don't try to improve. It's an appealing conclusion, and also a very intuitive one.

In general, these kinds of conclusions should be taken with a pinch of salt.

Indeed, it turns out that there's another, more recent paper, Burson et al. (2006) "Skilled or Unskilled, but Still Unaware of It", which finds that everyone is pretty bad at judging their own skill, and in some circumstances, more skilled people make less accurate judgments than novices. Heh.

Prozac Made My Cells Spiky

A great many neuroscientists are interested in clinical depression and antidepressants. We're still a long way from understanding depression on a biological level - and if anyone tries to tell you otherwise, they're probably trying to sell you something. I've previously discussed the controversies surrounding the neurotransmitter serotonin - according to popular belief, the brain's "happy chemical". My conclusion was that although clinical depression is not caused by "low serotonin" alone, serotonin does play an important role in mood at least in some people.

A paper published recently in Molecular Psychiatry makes a number of important contributions to the literature on depression and antidepressants; I haven't seen it discussed elsewhere, so here is my take on it. The paper is by a Portuguese research group, Bessa et al., and it's titled The mood-improving actions of antidepressants do not depend on neurogenesis but are associated with neuronal remodeling. The findings are right there in the title, but a little history is required in order to appreciate their significance.

For a long time, the only biological theory which attempted to explain clinical depression and how antidepressants counteract it was the monoamine hypothesis. During the early 1960s, it was noticed that early antidepressant drugs, such as imipramine, all inhibited either the breakdown or the removal (reuptake) of chemicals in the brain called monoamines, including serotonin. This led many to conclude that antidepressants improve mood by raising monoamine levels, and that depression is probably caused by some kind of monoamine deficiency. For various reasons (not all of them good ones), it was later decided that serotonin was the crucial monoamine involved in mood, although for several years another, noradrenaline, was favored by most people.

This "monoamine hypothesis" was always a little shaky, and over the past decade or so, an alternative approach has become increasingly fashionable. If you were so inclined, you might even call it a new paradigm. This is the proposal that antidepressants work by promoting the survival and proliferation of new neurones in certain areas of the brain - the "neurogenesis hypothesis". Neurogenesis, the birth of new cells from stem cells, occurs in a couple of very specific regions of the adult brain, including the elaborately named subgranular zone (SGZ) of the dentate gyrus (DG) of the hippocampus. Many experiments on animals have shown that chronic stress, and injections of the "stress hormone" corticosterone, can suppress neurogenesis, while a wide range of antidepressants block this effect of stress and promote neurogenesis. (Other evidence shows that antidepressants probably do this by inducing the expression of neurotrophic signaling proteins, like BDNF.)

The literature on stress, neurogenesis, and antidepressants, is impressive and growing rapidly. For good reviews, see Duman (2004) and Duman & Monteggia (2006). However, the crucial question - do antidepressants work by boosting hippocampal neurogenesis? - remains a controversial one. The hippocampus is not an area generally thought of as being involved in mood or emotion, and damage to the human hippocampus causes amnesia, not depression. Given that the purpose (if any) of adult neurogenesis remains a mystery, it's entirely possible that neurogenesis has nothing to do with depression and mood.

To establish whether neurogenesis is involved in antidepressant action, you need to manipulate it - for example, by blocking neurogenesis and seeing if this makes antidepressants ineffective. This is practically quite tricky, but Luca Santarelli et al. (2003) managed to do it by irradiating the hippocampi of mice with x-rays. They found that this made two antidepressants (fluoxetine, aka Prozac, and imipramine) ineffective in protecting the animals against the detrimental effects of chronic stress. This was a landmark result, and raised a lot of interest in the neurogenesis theory.

This new paper, however, says otherwise. The authors gave lab rats a six-week Chronic Mild Stress treatment, a Guantanamo Bay-style program of intermittent food deprivation, sleep disruption, and confinement. Chronic stress has various effects on rats, including increased anxiety and decreased time spent grooming, leading to fur deterioration. These and other behaviours can be quantified, and are treated as a rat analogue of human clinical depression - whether this is valid is obviously debatable, but I'm willing to accept it at least until a better animal model comes along.

Anyway, some of the rats were injected with antidepressants during the final two weeks of the stress procedure. As expected, these rats coped better with the stress at the end of six weeks. This graph shows the effects of stress and antidepressants on the rats' behaviour in the Forced Swim (Porsolt) Test. Higher bars indicate more "depressed" behaviour. The second pair of bars, representing the stressed rats who got placebo injections, is a lot higher than the first pair of bars, representing rats who were not subjected to any stress. In other words, stress made rats "depressed" - no surprise. The other four pairs of bars are pretty much the same height as the first pair; these are rats who got antidepressants, showing that they were resistant to the effects of stress.

The crucial finding is that the white and the black bars are all pretty much the same height. The black bars represent animals who were given injections of methylazoxymethanol (MAM), a cytostatic toxin which blocks cell division (rather like cancer chemotherapy). As you can see, MAM had no effect at all on behaviour in the swim test. It had no effect on most other tests, although it did seem to make the rats more anxious in one experiment.

However, MAM powerfully inhibited neurogenesis. This second graph shows the number of hippocampal cells expressing Ki-67, a protein which is a marker of cell proliferation. As expected, stress reduced neurogenesis and antidepressants increased it. MAM (black bars again) reduced neurogenesis, and in particular, it completely blocked the ability of antidepressants to increase it.

But as we saw earlier, MAM did not stop antidepressants from protecting rats against stress. So, the authors concluded, neurogenesis is not necessary for antidepressants to work. This contradicts the landmark finding of Santarelli et al. - why the discrepancy? There are so many differences between the two experiments that there could be any number of explanations - the current study used rats, while Santarelli used mice, for one thing, and that could well be important. Whatever the reason, this result suggests at the least that neurogenesis is not the only mechanism by which antidepressants counteract the effects of stress in animals.

The most interesting aspect of this paper, to my mind, was an essentially unrelated new finding. Stress was found to reduce the volume of several areas of the rat brain, including the hippocampus and also the medial prefrontal cortex (mPFC). Unlike the hippocampus, this is an area known to be involved in motivation and emotion. Importantly, the authors found that following stress, the mPFC did not shrink because neurones were dying or because fewer neurones were being born, but rather because the existing neurones were changing shape - stress caused atrophy of the dendrites which branch out from neurones. Dendrites are essential for communication between neurones.

As you can see in the drawings above, stress (the middle column) caused shrinking and stunting of the dendrites in pyramidal neurones from three areas, relative to the unstressed rats (left), while those rats receiving antidepressants as well as stress showed no such effect (right). The cytostatic MAM had no effect whatsoever on dendrites. Further work found that antidepressants increase expression of NCAM1, a protein which is involved in dendritic growth.

So what does this mean? Well, for one thing, it doesn't prove that antidepressants work by increasing dendritic branching. Cheekily, the authors come close to implying this in their choice of title for the paper, but they present no direct evidence for it. To find out, you would have to show that blocking the effects of antidepressants on dendrites also blocks their beneficial effects. I suspect this is what the authors are now working hard to do, but they haven't done so yet.

It also doesn't mean that taking Prozac will change the shape of your brain cells. It might well do, but this was a study in rats given huge doses of antidepressants (by human standards), so we really don't know whether the findings apply to humans. On the other hand, if Prozac changes the shape of your cells, this study suggests that stressful situations do too - and Prozac, if anything, will put your cells back to "normal".

Finally, I don't want to suggest that the neurogenesis theory of depression is now "dead". In neuroscience, theories never live or die on the basis of single experiments (unlike in physics). But it does suggest that the much-blogged-about neurogenesis hypothesis is not the whole story. Depression isn't just a case of too little serotonin, and it isn't just a case of too little neurogenesis or too little BDNF either.

J. M. Bessa, D. Ferreira, I. Melo, F. Marques, J. J. Cerqueira, J. A. Palha, O. F. X. Almeida, N. Sousa (2008). The mood-improving actions of antidepressants do not depend on neurogenesis but are associated with neuronal remodeling. Molecular Psychiatry. DOI: 10.1038/mp.2008.119

BBC: Bullies have Bad Brains

It was only last week that fMRI explained human hatred. Now it's revealed why some kids are horrible to others. Behold -

  • "Bullying tendency wired in brain"
  • "Bullies' brains may be hardwired to have sadistic tendencies"
  • "Bullies' brains may be wired differently"

At least according to the BBC. You may not be surprised to learn that I'm skeptical. The Neurocritic is too, and indeed he beat me to it on this one, having critiqued the paper in question, "Atypical Empathetic Responses in Adolescents with Aggressive Conduct Disorder: A functional MRI Investigation" (here), remarkably quickly.

So I wasn't going to post about this study, but then The Onion covered it and inspired me to write something. Or rather, I'm going to write about the BBC's story, which was impressively rubbish even by the standards of neuro-journalism.

Basically, all of the statements I quoted at the top of this post are nonsense. They're science fiction. For one thing, this study wasn't about "bullies", but teenage boys diagnosed with severe "Conduct Disorder" (CD) who had committed multiple serious crimes. That's just nitpicking though. The study found, using fMRI, that when you show these CD-diagnosed boys videos of people suffering pain, different parts of their brain activate, compared to a control group of nice, non-violent boys. On some interpretations these areas included the brain's "pleasure centers" although this is controversial (and according to one commentator, it may be all based on someone flunking Anatomy 101).

I've previously berated laymen and journalists (and all-too-many neuroscientists) for being mystified by coloured blobs on the brain. They see them as revealing profound truths about humanity, and in particular, they see them as pointing to "nature" over "nurture" explanations for behaviour. This is rarely explicitly stated, but the BBC did so with the line "Bullies' brains may be hardwired to have sadistic tendencies". Essentially, they are implying that there is something biologically wrong with the brains of bullies which leads to them taking pleasure in the pain of others.

Is this completely unfounded? After all, the study did find differences in the brains of the bullies vs. the normal kids. Surely that means they were "wired differently", maybe even "hardwired" differently? Well, yes, but only in an utterly trivial sense. Everything we do is the result of our brain activity - and every difference between two people is a result of differences in the "wiring" of their brains. The only reason that you're not sitting here like me, writing a cynical blog post about neuroscience, is that you have the good fortune to have a brain wired differently from mine. The only reason I wrote the word "cynical" in that last sentence rather than, er, "snarky", is that my brain was wired that way. And so on.

Brains get wired the way they do through the interacting influence of genes (which tell your neurons how to grow and how to connect up during brain development) and the environment (e.g. as you learn to do something, new connections between your neurons are formed, sometimes leading to massive reorganization of the brain - a fascinating topic in itself).

So, that one person's brain is "wired differently" to another person's is a completely mundane fact. In fact it's as dull as saying that no two people have the same fingerprints. It tells you nothing about how it got to be wired the way it did, and in particular it tells you nothing about whether it was "hardwired" to be that way, i.e. genetically determined. Just by reading this post, your brain has got rewired! So even if you accept that this fMRI study found that bullies take pleasure in watching people suffer (dubious as I mentioned above), this tells you nothing about why. Maybe they were brought up to be sadistic. Maybe they see other people suffering a lot, and have got used to it.

So when the BBC quote Dr Mike Eslea as saying
A better understanding of the biological basis of these things is good to have but the danger is it causes people to leap to biological solutions - drugs - rather than other behavioural solutions
they should perhaps heed his warning rather than "biologizing" bullying so keenly.

The interesting thing about this is that the BBC journalist was probably not stupid. He or she is just human. I think we feel intuitively that any biological difference between two groups of people implies a biological cause for that difference, because we intuitively hold a dualistic concept of the relationship between the mind and the brain, in which mind and brain are separate entities. We can just about accept that the brain (biological) can influence behaviour (psychological), although we find this idea outlandish and vaguely disturbing, because we think it undermines the idea of "free will". But we can't see how behaviour could influence the brain. Hence the headline "Bullying tendency wired in brain". It's common sense, but it's also nonsense.

Shock and Cure

For regular readers (and Google Analytics assures me I do have some - hello), this post may be a bit of a change of pace.

Many mental health blogs have alerted readers to the case of Ray Sandford, a Minnesota man who's being given ECT (electroconvulsive therapy), as an outpatient, against his wishes. Encouraging people to write to the Governor of Minnesota in protest, Philip Dawdy of Furious Seasons said:

I am officially neutral on voluntary ECT--if someone wants it, it's their brain--but involuntary ECT is barbarous and amounts to torture. If anyone would like to defend involuntary ECT, let me hear from you. If you make a good argument, I might even post it.
I have nothing to say about Mr Sandford's case, but I'll defend involuntary ECT in general. Why? Because I believe the evidence supports it, but if I'm honest, the reason I'm writing this is because of my granddad.

During the 1940s and 1950s my grandfather, at the time a junior doctor, suffered from several bouts of severe depression. Antidepressants not yet existing, the only available treatment was ECT. He was given it - with his consent - and it worked. In fact, he was given ECT using a number of different stimulus parameters and he says that bilateral treatment rapidly lifted his mood while unilateral was useless.

ECT was the only thing that could lift my grandfather out of his illness. Fortunately, he never became so ill that he was unable to give his consent. But if he had - if his depression had ever got so bad that he could not summon up the courage to receive treatment, if he had given up on life or just given up talking - then all this would have meant was that he needed ECT even more. If his illness had robbed him of the wish to get better, as I know it nearly did, it would have been a tragedy if his doctors had not helped him fight back.
  • Sometimes consent is a luxury
My argument is that involuntary ECT is sometimes justified because in psychiatry, involuntary treatment is sometimes necessary, and ECT is sometimes the only treatment that works. Some people object to all forced treatments, whether ECT, or medication, or anything else. I respect this, and I agree that in principle, treatment should only ever be given with consent. Adults should not be treated if they have made an informed choice not to be - as patients we have a right to autonomy, including a right to refuse treatment and even a right to die.

However - in psychiatry, things are not so simple. It's often those who are most ill, those who have the most to gain from treatment, who are most likely to refuse it. From my own experience I know how even moderate depression can warp your thinking - severe illness can lead people to be, temporarily, unable to make informed decisions. They may not know that they are ill, or they may not be able to believe that there is any hope of recovery.

If someone is in such a state - whether they are extremely depressed, manic, or psychotic - it would be cruel and neglectful not to treat them, by any means necessary. Surely that's common sense - if your friend was blind drunk and tried to drive home because he thought he was completely sober, you'd bear some responsibility for his safety if you didn't try to stop him. If I got drunk and started acting stupidly I'd want my friends to look after me, and when I woke up the next morning, I'd be angry if they hadn't.

Many people are concerned about psychiatrists forcibly treating patients as a punishment or as a way of keeping them quiet. I don't know how often this happens, but whenever it does, it needs to be stopped. No-one would disagree with that. But these aren't the only reasons why people are treated without their consent. Sometimes it really is for their own good.

The costs and the benefits of any treatment have to be balanced, which is difficult, and it may be difficult to decide whether someone is able to give informed consent. Mistakes will be made. This is why I have no opinion on Ray Sandford. He may or may not be the victim of a mistake, and I don't think anyone who hasn't met the man can judge that.
  • ECT works
ECT of course has a bad reputation. Dawdy calls it "barbarous" - and he's much more restrained than some. It's true that ECT is a crude procedure, in the sense that we don't know how it works. But it does work. There's very strong evidence that ECT is highly effective for depression - much more effective than "sham ECT", showing that the benefits are real and not simply a placebo effect. ECT is also effective in other acute psychiatric states; e.g. according to the U.S. Surgeon General's office:
Accumulated clinical experience—later confirmed in controlled clinical trials, which included the use of simulated or “sham” ECT as a control — determined ECT to be highly effective against severe depression, some acute psychotic states, and mania. No controlled study has shown any other treatment to have superior efficacy to ECT in the treatment of depression.
Any psychiatrist who has used it will agree. I know that many people feel very strongly about ECT, including some who are likely to be reading this. Some people have had very negative experiences. But not everyone has. I was going to cite a list of references to surveys on patient views of ECT here, but I decided against it. There are dozens of papers and as many different findings.

Rose et al. (2003) reviewed the literature and found an enormous range of opinions, from strongly positive to strongly negative, amongst ECT patients. This paper is fairly skeptical, and reports that a third of people given ECT report memory problems. My grandfather didn't - his memory is fading now, because he's 85, but he managed to carry on a career as a very successful doctor for 50 years. The debate over the costs and benefits of ECT is an important one, and it is not being suppressed: that paper appeared in the British Medical Journal, the official publication of the British medical establishment. I've read it. I'm not ignoring the voices of the ECT survivor movement, but they are not the only ones.

ECT is a controversial therapy, but there's no doubt that it has helped many people, and in some cases it is certainly the only treatment that works. Official guidelines, such as those of the British NICE agency, all advise that ECT should only be used as a last resort where other treatments have failed. No-one is rushing to give ECT to everyone, but when all else has failed, it can work. And if someone is too ill to give their consent, and all else has failed, forced ECT remains an option. It is an extreme one, but I can honestly say that if I, or anyone in my family, ever became so ill that this was the only option, I would want it done.
  • Conflicts of Interest?
Just to be clear: I have never received or been offered ECT, but like my grandfather I suffered from moderate-to-severe depression for a long time. I'm currently taking 40mg per day of Celexa (citalopram). Like ECT, SSRIs have a bad reputation. My experiences have been entirely positive. With citalopram I've gained energy, optimism, and the ability to enjoy life. The worst thing I've suffered has been a dry mouth. In general, I am strongly pro-psychiatry, and I work as a researcher on the neurobiology of depression and antidepressant action. I'm not medically qualified. I am a big fan of David Healy (like me, a defender of ECT), but have a very low opinion of people like Szasz and Laing.

Update 12.11.2008, 16:20 GMT:
Philip Dawdy put a portion of this post up on Furious Seasons, where it sparked a, well, a lively debate.

Thanks to an anonymous commentator (below) who did some googling and found that Mr Sandford's residence, Victory House, seems to be a home for people suffering from Alzheimer's Disease. Although I still make no comment on Mr Sandford, this might be of interest to anyone who dislikes the idea of involuntary "outpatient" ECT.

Life is Actually Quite Complicated

In this excellent post, new blogger Mike Eslea (the Punk Psychologist) takes British newspapers to task for their sensationalist coverage of some new statistics about knife crime. For non-British readers, I should explain that knife crime is a hot button issue in this country at the moment, with the narrative that there's a "knife crime epidemic" in progress being widely accepted.

As Eslea explains, when the new crime statistics were released, all of the headlines talked of a "22% rise" in knife incidents. This sounds pretty dramatic, and straightforward - 22% more stabbings, oh no! But in fact the picture is much less clear - most of this rise was probably due to changes in the way such crimes are reported, and even defining knife crime is not as easy as it seems. He also notes that last year the Times managed to extract the headline "Knife Crime Doubles In Two Years", based on a report which found nothing of the sort, through careful cherry-picking of the statistics. You should read the whole of the post - it's enlightening (and if you don't, me and Eslea will stab you up.)

Anyway, what's interesting is that this is just the kind of thing that we also see in much of science journalism. Ben Goldacre's excellent Bad Science is full of examples of the way in which the media mislead in their coverage of scientific and medical research. Reporting on violence, drugs, teenage pregnancy and other social sins often misleads for exactly the same reasons - i.e. statistics are cherry-picked to support the most dramatic conclusions, caveats and methodological weaknesses are ignored, and evidence which doesn't fit the narrative isn't reported on at all (in this case the narrative is "knife crime epidemic!", but we have also had "autism epidemic!", "diet determines health!", etc.) Science, medicine, or crime, the numbers get spun in the same ways.

The basic problem, as I see it, is that people just don't like doubt. We want a clear story, even if the available evidence doesn't support any firm conclusions. Look at the Daily Mail's regular headlines about something causing or preventing cancer - any epidemiologist knows that establishing risk factors for cancer is a very difficult job, and there is a huge amount of uncertainty, and a lot of the research out there is crap. For the Mail, on the other hand, one small study constitutes proof. Until the next small study comes along and proves that what we thought cured cancer actually causes it, and vice versa. Experts despair at this, but they're in a minority.

This great BBC article accuses politicians of being unwilling to admit doubt about whether policies will work. To be fair to them, though, they're in an impossible position, because the public and the media demand certainty. We know that knife crime is skyrocketing and we want someone who knows how to stop it. By which I mean that most of us do. I don't - from what I've read about knife crime, nationwide it probably isn't rising, or maybe it is a bit, but in some parts of the country and among some communities it could be rising a lot, although even if it is, we have no idea why, and we don't have any proven ways of improving things... The point is that it's complicated. There's a lot of evidence to consider and most of it's flawed in one way or another. That's true for neuroscience, and it's also true for public policy.

She's Hearing Voices

If you hear voices that aren't really there, are you "mad"? Maybe - auditory hallucinations, most commonly voices, are one of the characteristic symptoms of schizophrenia and are sometimes also a feature of severe depression and mania. (For more on hallucinations, try this paper. It's long, but it's worth a read because it's one of the few academic papers containing phrases like "Do YOU want a slap in the head?" and "Die, bitch".) But anyone can find themselves hearing things that others don't - it just takes a little prompting.

Via the excellent Mind Hacks, I came across this great little page by Cambridge neuroscientist Matt Davis, which gives some examples of Sine Wave Speech. Essentially, Sine Wave Speech is a voice recording which has been digitally degraded so that it's little more than a collection of beeps and bleeps (sine waves, in fact). When you first listen to it, it sounds something like R2D2 on ecstasy.
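If you're curious how such a recording is made, here's a rough sketch of the idea in Python. A real sine wave speech demo extracts formant frequency tracks from an actual recording (presumably Davis's examples did something along those lines); this version just invents three smoothly varying tracks, so the output will sound like gibberish bleeps rather than recoverable speech. The file name is a placeholder.

    import numpy as np
    from scipy.io import wavfile

    fs = 16_000                                 # sample rate (Hz)
    t = np.arange(0, 1.0, 1 / fs)               # one second of "speech"

    # Hypothetical formant frequency tracks (Hz) - a real demo would
    # extract these from a voice recording; these are simply invented.
    tracks = [
        500 + 200 * np.sin(2 * np.pi * 2.0 * t),   # roughly F1-like
        1500 + 400 * np.sin(2 * np.pi * 1.3 * t),  # roughly F2-like
        2500 + 300 * np.sin(2 * np.pi * 0.7 * t),  # roughly F3-like
    ]

    # Replace each formant with a pure tone that follows its frequency
    # track: integrate frequency to get phase, then take the sine.
    signal = sum(np.sin(2 * np.pi * np.cumsum(f) / fs) for f in tracks)
    signal /= np.abs(signal).max()               # normalise to [-1, 1]

    wavfile.write("sws_demo.wav", fs, (signal * 32767).astype(np.int16))

Everything but the three whistling sine waves is thrown away - which is why, until your brain knows what to listen for, there's nothing obviously speech-like left.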

However, if you listen to the original, unaltered voice and then go back to the sine-wave version, you can hear the words - the bleeps suddenly sound like a voice. The change is striking - so much so that if you didn't know what was going on, you'd probably think you were listening to a whole new clip. This effect is one example of the brain's tendency to perceive a pattern in a stream of noise once it knows what pattern to expect. We're all familiar with this in the realm of vision - if you're looking for something amongst a pile of clutter, your eye is drawn to it, but you won't even register all the other rubbish that's around it. We're not used to the same thing happening with sound, though, I suppose because we generally hear things without much difficulty.

Sine Wave Speech really is a form of voice, but the same principle can lead people to hear voices in the unlikeliest places. Hence the recent case of the Amazing Satanic Islamic Doll, also known by its Jihad name, "Little Mommy Cuddle 'n Coo". This is a kid's toy made by Fisher Price which, when hugged, plays recorded baby noises. A month or so ago a concerned parent somewhere in America decided that one of the recorded baby noises sounded like a voice with a disturbing message. Have a listen.

Did you hear any speech the first time you heard the doll's noise? Probably not, but I'd bet that after you'd heard the claim that the doll was saying "Islam is the Light", the same noise started to sound rather different. After several listens, I now can't hear the noise as anything other than a voice saying that phrase, even though intellectually I know that this is absurd (if Fisher Price were going to plant a message like that they'd be more likely to use "Buy more toys").

The doll is a post-9/11 rerun of the infamous Satanic backmasking scare. If you play a record backwards, you'll hear various distorted sounds, some of which could be interpreted as speech, if that's what you're expecting to hear. During the 1980s, some concerned citizens (again in America) thought that they could hear Satanic or sexual messages hidden in heavy metal records when they were played backwards. Upon hearing this, several bands liked the idea and actually did put backwards messages in future songs, but most of the allegations were based on people hearing voices in noise, because that's what they expected to hear.
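Incidentally, reproducing the backwards playback digitally is trivial - reversing the sample array reverses the audio. A two-line sketch (the file name is just a placeholder):

    from scipy.io import wavfile

    # "Playing the record backwards", digitally: reverse the samples.
    rate, samples = wavfile.read("song.wav")
    wavfile.write("song_reversed.wav", rate, samples[::-1].copy())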

One man believes that the problem goes far beyond metal music and that everyone has hidden backwards messages in everything they say, messages which reveal their subconscious thoughts. Cripes. He gives a lot of examples here. Note that he helpfully tells you exactly what to expect to hear before you've heard the recordings.

Anyway, in the original video above, a concerned mom argues that the Muslim message must be real because she can't believe that people across the country are "all hearing things". She doesn't understand that you don't have to be hallucinating to over-interpret noise. It happens to the best of us, and once you know what to expect you really hear the voice, clear as day, whether or not you "believe in it" or "want to hear it". It's not simply the power of suggestion.

On the other hand some psychologists have claimed that suggestion alone can make people hear things. This is generally called the "White Christmas Effect". The original experiment in support of this claim took 78 female students who were told - "in a firm and serious tone of voice" -

I want you to close your eyes and to hear a phonograph record with words and music playing White Christmas. Keep listening to the phonograph record playing White Christmas until I tell you to stop.
As you may have guessed from the language used in that sentence, this experiment took place in 1964. Anyway, the students were ordered to hear White Christmas playing in a silent room, and then asked to check a box indicating whether they in fact heard it. 5% said that "I heard the phonograph record of White Christmas clearly and believed that the record was actually playing" while another 49% said "I heard the phonograph record of White Christmas clearly but knew there was no record actually playing."

So according to this experiment, over half of 1960s female students actually "heard" a record playing, simply because someone told them to. (In another part of the experiment, many of them "saw" a cat.) It's very hard to know what to make of this, because some of them may simply have been saying that they heard the music because they thought that was what was expected of them. Bear in mind as well that the lead experimenter, Theodore Xenophon Barber, later became interested in some rather dubious stuff. Even so, the paper became popular, and led to a small industry of music-based hallucination research.

More modern work inspired by the White Christmas experiment has been a bit more rigorous. In 2001, two Dutch psychologists took 47 students and told them to listen to some white noise. The students were told that the song White Christmas might be played quietly at some points, and they should press a button whenever they heard it. 14 of them pressed the button at least once (on average, three times each), although the song was never actually played. This implies that a substantial number of healthy people will hear something simply because they expect to hear it, even if what they are actually listening to sounds nothing like it.
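The stimulus for that kind of experiment is easy to recreate - the whole point being that there is nothing in it to hear. A minimal sketch (not the Dutch group's actual materials, just an illustration):

    import numpy as np
    from scipy.io import wavfile

    # Sixty seconds of pure white noise, with no song embedded at all.
    # Any "detections" then come entirely from the listener's expectations.
    fs = 44_100
    rng = np.random.default_rng(seed=0)
    noise = rng.normal(0.0, 0.1, size=fs * 60).clip(-1.0, 1.0)
    wavfile.write("white_noise.wav", fs, (noise * 32767).astype(np.int16))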

The lesson of all of this, if you need it spelled out, is that your eyes and ears are not windows through which you have direct access to reality. Your brain is actively constructing your perceptions of the world based on prior knowledge as well as sense data. But I feel like I'm getting dangerously close to talking philosophy here, so I'd better quit while I'm ahead.


Registration: Not Just For Clinical Trials

In a previous post, I said that I'd write about how to improve the quality of scientific research by ending the scrabbling for "positive results" at the cost of accuracy. So here we go. This is a long post, so if you'd prefer the short version, the answer is that we ought to get scientists in many fields to pre-register their research - to go on record and declare what they are looking for before they start looking for anything.

This is not my idea. Clinical trial registration is finally becoming a reality. Several organizations now offer registration services - such as Current Controlled Trials. Their site is well worth a click, if only to see the future of medical science unfolding before your eyes in the form of a list of recently registered protocols. Each of these protocols, remember, will eventually become a published scientific paper. If it doesn't, everyone will know that either the trial was never finished, or worse, it was finished and the results were never published. Without registration, a trial could be run and never published without anyone knowing what had happened - making it very easy for "inconvenient" data to never see the light of day. This is publication bias. We know it happens. Trial registration makes it all but impossible. It's important.

In fact, if someone were designing the system of clinical trials from scratch, they would, almost certainly, make registration an integral step right from the start. Unfortunately, no-one intelligently designed clinical trials. They evolved, and they're still evolving. We're not there yet. Trial registration is still a "good idea" rather than a routine part of clinical research, and while many first-class medical journals now require pre-registration and refuse to publish unregistered trials, plenty of other respectable publications have yet to catch up.

What I want to point out is that it's not just clinical trials which would benefit from registration. Registration is a way to defeat publication bias, wherever it occurs, and any field in which there are "negative results" is vulnerable to the risk that they won't be reported. In some parts of science there are no negative results - in much of physics, chemistry, and molecular biology, you either get a result, or you've failed. If you try to work out the structure of a protein, say, then you'll either come up with a structure, or give up. Of course, you might come out with the wrong structure if you mess up, but you could never "find nothing". All proteins have a structure, so there must be one to find.

But in many other areas of research there is often genuinely nothing to find. A gene might not be linked to any diseases. A treatment might have no effect. A pollutant might not cause any harm. Basically, if you're looking for a correlation between two things, or an effect of one thing upon another, you might get a negative result. Just off the top of my head, this covers almost all genetic association and linkage studies, almost all neuroimaging, most experimental psychology, much of climate science, epidemiology, sociology, criminology, and probably others I don't know about. Oh, and clinical trials, but we already knew that. People don't tend to publish negative results, for various reasons. Wherever this is a problem, trial registration would be useful.

Publication bias is known to be a problem in behavioural genetics (finding genes associated with psychological traits). For example, Munafo et al. (2007) found pretty strong evidence of publication bias in research on whether a certain allele (DRD2 Taq1A) predisposes to alcoholism. They concluded by saying that

Publication of nonsignificant results in the psychiatric genetics literature is important to protect against the existence of a biased corpus of data in the public domain.
Which is true, but saying it won't change anything, because everyone already knew this. No-one likes publication bias, but it happens anyway - so we need a system to prevent it. Curiously, however, registration is rarely mentioned as an option. Salanti et al. (2005) wrote at length about the pitfalls of genetic association studies, but did not mention it. Colhoun et al. (2003), in a widely cited paper in the Lancet, explained how publication bias was a major problem but then flat-out dismissed registration, saying that
an effective mechanism for establishment of prospective registers of proposed analyses is not feasible.
They didn't say why, and if registration works for clinical trials, I can see very little reason why it shouldn't work for other research. Indeed, another similar paper in the same journal raised the idea of "prestudy registration of intent". Clearly it deserves serious thought.
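For what it's worth, detecting publication bias in a body of published studies is itself a fairly mechanical exercise. One standard approach - not specific to any of the papers above - is Egger's regression test: regress each study's standardised effect (effect divided by its standard error) on its precision (one over the standard error). With no bias the intercept should sit near zero; if small, imprecise studies systematically report inflated effects, the intercept is pushed away from zero relative to its standard error. Here's a sketch with entirely made-up numbers:

    import numpy as np
    from scipy import stats

    def egger_test(effects, standard_errors):
        # Egger's regression: standardised effect vs. precision.
        # An intercept far from zero (relative to its SE) suggests
        # small-study asymmetry, a classic sign of publication bias.
        effects = np.asarray(effects, dtype=float)
        se = np.asarray(standard_errors, dtype=float)
        result = stats.linregress(1 / se, effects / se)
        return result.intercept, result.intercept_stderr

    # Hypothetical effect sizes (log odds ratios) and standard errors,
    # arranged so smaller studies show bigger effects:
    effects = [0.8, 0.6, 0.5, 0.3, 0.25, 0.2]
    ses = [0.5, 0.4, 0.3, 0.2, 0.15, 0.1]
    intercept, stderr = egger_test(effects, ses)
    print(f"Egger intercept: {intercept:.2f} (SE {stderr:.2f})")

Detection after the fact is all very well, but it only tells you a literature is skewed; registration would stop the skew arising in the first place.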

Registration would also help combat "outcome reporting bias", or as it's known in the trade, data dredging. Any set of results can be looked at in a number of ways, and some of these ways will lead to different conclusions from others. Let's say that you want to find out whether a certain gene is associated with obesity. You might start by taking a thousand men and seeing whether the gene correlates with body weight. Let's say it doesn't, which is really annoying, because you were hoping that you could spend the next five years getting paid to find out more about this gene. Well, you still could! You could check whether the gene is associated with Body Mass Index (weight in proportion to height). If that doesn't work, try percentage of body fat. Still nothing? Try eating habits. Eureka! Just by chance, you've found a correlation. Now you report that, and don't mention all the other things you tried first. You get a paper, "Gene XYZ123 influences eating behaviour in males", and a new grant to follow up on it. Sorted. Lynn McTaggart would be proud.
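If you doubt how easy it is to "find" something this way, here's a simulation of exactly that scenario - a gene with no real effect at all, and four outcome measures to fish through. All the numbers are invented:

    import numpy as np
    from scipy import stats

    # A gene with NO real effect, but four outcomes to try. How often
    # does at least one come out "significant" at p < 0.05 by chance?
    rng = np.random.default_rng(seed=42)
    n_subjects, n_outcomes, n_simulations = 1000, 4, 2000
    false_positives = 0

    for _ in range(n_simulations):
        carrier = rng.integers(0, 2, n_subjects).astype(bool)  # gene status
        # Four outcomes (weight, BMI, body fat, eating habits), none of
        # which actually depend on the gene:
        outcomes = rng.normal(size=(n_subjects, n_outcomes))
        pvals = [
            stats.ttest_ind(outcomes[carrier, i], outcomes[~carrier, i]).pvalue
            for i in range(n_outcomes)
        ]
        if min(pvals) < 0.05:
            false_positives += 1

    print(f"'Significant' finding in {false_positives / n_simulations:.0%} of runs")
    # Expect roughly 1 - 0.95**4, i.e. about 19% of runs, to yield a
    # publishable "result" from pure noise.

Run four independent tests and you get roughly a one-in-five chance of a paper, even when there is nothing there.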

This kind of thing happens all the time, although that's an extreme example. The motives are not always selfish - most scientists genuinely want to find positive results about their "pet" genes, or drugs, or whatever. It is all too easy to dredge data without being aware of it. Registration would put an end to most of this nonsense, because when you register your research - before the results are in - you would have to publicly outline what statistical tests you are planning to do. Essentially, you would need to write the Methods section of your paper before you collected any results.

If you were feeling particularly puritan, you could make people register the Introduction in advance too. Nominally, this is a statement of why you did the research, how it fits into the existing literature, what hypothesis you were testing and what you expected to find. In fact, it's generally a retrospective justification for getting the results you did, along with a confident "prediction" that you were going to find ... exactly what you found. This is not a serious problem in the way publication bias is, because everyone knows that it happens and so no-one (except undergraduates) takes Introductions seriously. But writing Introductions that no-one can read with a straight face ("Oh sure, they really predicted that ahead of time" "Ha, sure they didn't just decide to do that post-hoc and then PubMed a reference to justify it") is silly. Registration would be a way of getting everyone to put their toys away and get serious.

 