Depression Treatment Increased From 1998 to 2007

A paper just out reports on the changing patterns of treatment for depression in the USA, over the period from 1998 to 2007.

The headline news is that it increased: the overall rate of people treated for some form of "depression" went from 2.37% to 2.88% per year. That's an increase of about 21%, which is not trivial, but it's much smaller than the increase over the previous decade: the rate was just 0.73% back in 1987.

But the increase was concentrated in some groups of people:

  • Americans over 50 accounted for the bulk of the rise. Their rates went up by about 50%, while rates in younger people stayed almost steady. In '98 the peak age band was 35-49; now it's 50-64, with almost 5% of those people getting treated in any given year.
  • Men's rates of treatment went up by over 40% while women's only increased by 10%. Women are still more likely to get treated for depression than men, though, with a ratio of 1.7 women for each 1 man. But that ratio is a lot closer than it used to be.
  • Black people's rates increased hugely, by 120%. Rates in black people now stand at 2.2%, which is close behind whites at 3.2%. Hispanics are now the least-treated major ethnic group at 1.9%; in previous studies, blacks were the least treated. (There were no data on Asians or other groups.)
So the increase wasn't an across-the-board rise, as we saw from '87 to '98. Rather, the '98-'07 increase was more of a "catching up" by people who've historically had low levels of treatment, closing in on the level of the historically highest group: middle-aged white women.

In terms of what treatments people got, out of everyone treated for depression, 80% got some kind of drugs, and that didn't change much. But use of psychotherapy declined a bit from 54% to 43% (some people got both).

What's also interesting is that the same authors reported last year that, over pretty much the same time period ('96 to '05), the number of Americans who used antidepressants in any given year sky-rocketed from 5% to 10% - that is to say, much faster than the rate of depression treatment rose! And the data are comparable, because they came from the same national MEPS surveys.
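Just to spell out the arithmetic behind that comparison, here's a quick back-of-the-envelope sketch in Python using only the percentages quoted above:

    # Relative increases implied by the figures quoted above
    treatment_1998, treatment_2007 = 2.37, 2.88   # % treated for depression each year
    antidep_1996, antidep_2005 = 5.0, 10.0        # % using antidepressants each year

    treatment_rise = (treatment_2007 - treatment_1998) / treatment_1998 * 100
    antidep_rise = (antidep_2005 - antidep_1996) / antidep_1996 * 100

    print(f"Depression treatment rose by about {treatment_rise:.0f}%")  # ~21-22%
    print(f"Antidepressant use rose by about {antidep_rise:.0f}%")      # ~100%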

In other words, the decade must have seen antidepressants increasingly being used to treat stuff other than depression. What stuff? Well, all kinds of things. SSRIs are popular in everything from anxiety and OCD to premature ejaculation. Several of the "other new" drugs, like mirtazapine and trazodone, are very good at putting you to sleep (rather too good, some users would say...)

Marcus SC, & Olfson M (2010). National trends in the treatment for depression from 1998 to 2007. Archives of General Psychiatry, 67 (12), 1265-73. PMID: 21135326

XMRV - Innocent on All Counts?

A bombshell has just gone off in the continuing debate over XMRV, the virus that may or may not cause chronic fatigue syndrome. Actually, 4 bombshells.

A set of papers out today in Retrovirology (1,2,3,4) claim that many previous studies claiming to have found the virus haven't actually been detecting XMRV at all.

Here's the rub. XMRV is a retrovirus, a class of bugs that includes HIV. Retroviruses are composed of RNA, but they can insert themselves into the genetic material of host cells as DNA. This is how they reproduce: once their DNA is part of the host cell's chromosomes, that cell ends up making more copies of the virus.

But there are lots of retroviruses out there, and there used to be yet others that are now extinct. So bits of retroviral DNA are scattered throughout the genomes of animals. These are called endogenous retroviruses (ERVs).

XMRV is extremely similar to certain ERVs found in the DNA of mice. And mice are the most popular laboratory mammals in the world. So you can see the potential problem: laboratories all over the world are full of mice, and mouse DNA might show up as "XMRV" DNA on PCR tests.

Wary virologists take precautions against this by checking specifically for mouse DNA. But most mouse-contamination tests are targeted at mouse mitochondrial DNA (mtDNA). In theory, a test for mouse mtDNA is all you need, because mtDNA is found in all mouse cells. In theory.
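To make the logic concrete, here's a toy sketch of what such a contamination screen amounts to: look for mouse-specific marker sequences in each sample. The marker strings below are made-up placeholders, not real primers, and this is purely illustrative, not anyone's actual pipeline.

    # Toy contamination screen: flag samples containing mouse-specific markers.
    # The sequences are made-up placeholders, NOT real mouse primers.
    MOUSE_MARKERS = {
        "mouse_mtDNA": "GGATCCTTAGCA",   # placeholder for a mitochondrial marker
        "IAP_LTR": "ATGCCGTTAAGC",       # placeholder for a nuclear ERV marker
    }

    def is_contaminated(sample_sequence: str) -> bool:
        """Return True if any mouse marker sequence appears in the sample."""
        return any(marker in sample_sequence for marker in MOUSE_MARKERS.values())

    samples = {
        "patient_01": "TTATGCCGTTAAGCGG",   # contains the IAP placeholder
        "patient_02": "TTGACCTGACCTGACC",   # clean
    }

    for name, seq in samples.items():
        print(name, "likely mouse contamination" if is_contaminated(seq) else "clean")

The dispute in the papers below is essentially about which markers are sensitive enough to catch contamination, not about this basic logic.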

Now the four papers (or are they the Four Horsemen?) argue, in a nutshell, that mouse DNA shows up as "XMRV" on most of the popular tests that have been used in the past, that mouse contamination is very common - even some of the test kits are affected! - and that tests for mouse mtDNA are not good enough to detect the problem.

  • Hue et al say that "Taqman PCR primers previously described as XMRV-specific can amplify common murine ERV sequences from mouse suggesting that mouse DNA can contaminate patient samples and confound specific XMRV detection." They go on to show that some human samples previously reported as infected with XMRV, are actually infected with a hybrid of XMRV and a mouse ERV which we know can't infect humans.
  • Sato et al report that PCR testing kits from Invitrogen, a leading biotech company, are contaminated with mouse genes including an ERV almost identical to XMRV, and that this shows up as a false positive using commonly used PCR primers "specific to XMRV".
  • Oakes et al say that in 112 CFS patients and 36 healthy controls, they detected "XMRV" in some samples, but all of these samples were likely contaminated with mouse DNA because "all samples that tested positive for XMRV and/or MLV DNA were also positive for the highly abundant IAP long terminal repeat [found only in mice] and most were positive for murine mitochondrial cytochrome oxidase sequences [found only in mice]".
  • Robinson et al agree with Oakes et al: they found "XMRV" in some human samples, in this case prostate cancer cells, but they then found that all of the "infected" samples were contaminated with mouse DNA. They recommend that in future, samples should be tested for mouse genes such as the IAP long terminal repeat or cytochrome oxidase, and that researchers should not rely on tests for mouse mtDNA.
They're all open-access so everyone can take a peek. For another overview see this summary published alongside them in Retrovirology.

I lack the technical knowledge to evaluate these claims; no doubt plenty of people will be rushing to do that before long. (Update: The excellent virologyblog has a more technical discussion of these studies.) But there are a couple of things to bear in mind.

Firstly, these papers cast doubt on tests using PCR to detect XMRV DNA. However, they don't have anything to say about studies which have looked for antibodies against XMRV in human blood, at least not directly. There haven't been many of these, but the paper which started the whole story, Lombardi et al (2009), did look for, and found, anti-XMRV immunity, and also used various other methods to support the idea that XMRV is present in humans. So this isn't an "instant knock-out" of the XMRV theory, although it's certainly a serious blow.

Secondly, if the 'mouse theory' is true, it has serious implications for the idea that XMRV causes chronic fatigue syndrome and also for the older idea that it's linked to prostate cancer. But it still leaves a mystery: why were the samples from CFS or prostate cancer patients more likely to be contaminated with mouse DNA than the samples from healthy controls?

Smith RA (2010). Contamination of clinical specimens with MLV-encoding nucleic acids: implications for XMRV and other candidate human retroviruses. Retrovirology. DOI: 10.1186/1742-4690-7-112

The Almond of Horror

Remember the 90s, when No Fear stuff was cool, and when people still said "cool"?

Well, a new paper has brought No Fear back, by reporting on a woman who has no fear - due to brain damage. The article, The Human Amygdala and the Induction and Experience of Fear, is brought to you by a list of neuroscientists including big names such as Antonio Damasio (of Phineas Gage fame).

The basic story is nice and simple. There's a woman, SM, who lacks a part of the brain called the amygdala. They found that she can't feel fear. Therefore, it's reasonable to assume that the amygdala's required for fear. But there's a bit more to it than that...

The amygdala is a small nugget of the brain nestled in the medial temporal lobe. The name comes from the Greek for "almond" because apparently it looks like one, though I can't say I've noticed the resemblance myself.

What does it do? Good question. There are two main schools of thought. Some think that the amygdala is responsible for the emotion of fear, while others argue that its role is much broader and that it's responsible for measuring the "salience" or importance of stimuli, which covers fear but also much else.

That's where this new paper comes in, with the patient SM. She's not a new patient: she's been studied for years, and many papers have been published about her. I wonder if her acronym doesn't stand for "Scientific Motherlode"?

She's one of the very few living cases of Urbach-Wiethe disease, an extremely rare genetic disorder which causes selective degeneration of the amygdala as well as other symptoms such as skin problems.

Previous studies on SM mostly focussed on specific aspects of her neurological function e.g. memory, perception and so on. However there have been a few studies of her "everyday" experiences and personality. Thus we learned that:

Two experienced clinical psychologists conducted "blind" interviews of SM (the psychologists were not provided any background information)... Both reached the conclusion that SM expressed a normal range of affect and emotion... However, they both noted that SM was remarkably dispassionate when relating highly emotional and traumatic life experiences... To the psychologists, SM came across as a "survivor", as being "resilient" and even "heroic".
These observations were based on interviews under normal conditions; what would happen if you actually went out of your way to try and scare her? So they did.

First, they took her to an exotic pet store and got her to meet various snakes and spiders. She was perfectly happy picking up the various critters and had to be prevented from getting too closely acquainted with the more dangerous ones.

What's fascinating is that before she went to the store, she claimed to hate snakes and spiders! Why? Before she developed Urbach-Wiethe disease, she had a normal childhood up to about the age of 10. Presumably she used to be afraid of them, and just never updated this belief, a great example of how our own narratives about our feelings can clash with our real feelings.

They subsequently confirmed that SM was fearless by taking her to a "haunted asylum" (check it out, even the website is scary) and showing her various horror movie clips, as well as through interviews with her and with her son. They also describe an incredible incident from several years ago: SM was walking home late at night when she saw
A man, whom SM described as looking “drugged-out.” As she walked past the park, the man called out and motioned for her to come over. SM made her way to the park bench. As she got within arm’s reach of the man, he suddenly stood up, pulled her down to the bench by her shirt, stuck a knife to her throat, and exclaimed, “I’m going to cut you, bitch!”

SM claims that she remained calm, did not panic, and did not feel afraid. In the distance she could hear the church choir singing. She looked at the man and confidently replied, “If you’re going to kill me, you’re gonna have to go through my God’s angels first.” The man suddenly let her go. SM reports “walking” back to her home. On the following day, she walked past the same park again. There were no signs of avoidance behavior and no feelings of fear.
All this suggests that the amygdala has a key role in the experience of fear, as opposed to other emotions: there is no evidence to suggest that SM lacks the ability to experience happiness or sadness in the same way.

So this is an interesting contribution to the debate on the role of the amygdala, although we really need someone to do equally detailed studies on other Urbach-Wiethe patients to make sure that it's not just that SM happens to be unusually brave for some other reason. What's doubly interesting, though, is that Ralph Adolphs, one of the authors, has previously argued against the view of the amygdala as a "fear center".

Links: I've previously written about the psychology of horror movies and I've reviewed quite a lot of them too.

Feinstein JS, Adolphs R, Damasio A, & Tranel D (2010). The Human Amygdala and the Induction and Experience of Fear. Current Biology

The Scanner's Prayer

MRI scanners have revolutionized medicine and provided neuroscientists with some incredible tools for exploring the brain.

But that doesn't mean they're fun to use. They can be annoying, unpredictable beings, and you never know whether they're going to bless you with nice results or curse you with cancelled scans and noisy data.

So for the benefit of everyone who has to work with MRI, here is a devotional litany which might just keep your scanner from getting wrathful at the crucial moment. Say this before each scan. Just remember, the magnet is always on and it can read your mind, so make sure you really mean it, and refrain from scientific sins...

*

Our scanner, which art from Siemens,
Hallowed be thy coils.
Thy data come;
Thy scans be done;
In grey matter as it is in white matter.
Give us this day our daily blobs.
And forgive us our trespasses,
As we forgive them that trespass onto our scan slots.
And lead us not into the magnet room carrying a pair of scissors,
But deliver us from volunteers who can’t keep their heads still.
For thine is the magnet,
The gradients,
And the headcoil,
For ever and ever (at least until we can afford a 7T).
Amen.

(Apologies to Christians).

The Time Travelling Brain

What's the difference between walking down the street yesterday, and walking down the street tomorrow?

It's nothing to do with the walking, or the street: that's the same. The "when" seems to be something external to the "what", "how", and "where" of the situation. But this creates a problem for neuroscientists.

We think we know how the brain could store the concept of "walking down the street" (or "walking" and "street"). Very roughly, simple sensory impressions are thought to get built up into more and more complex combinations, and this happens as you move away from the brain's primary visual cortex (V1) and down the so-called ventral visual stream.

In area V1, cells respond mostly to nothing more complex than position and the orientations of straight lines: / or \ or _ , etc. Whereas once you get to the temporal lobe, far down the stream, you have cells that respond to Jennifer Aniston. In between are progressively more complex collections of features.

Even if the details are wrong, the fact that complex objects are composed of simpler parts and ultimately raw sensations, means that our ability to process complex scenes doesn't seem too mysterious, given that we have senses.
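As a cartoon of that build-up, here's a toy sketch (mine, purely illustrative, not from any paper) of orientation-tuned "V1-like" units feeding a downstream unit that only responds to a particular combination of features:

    import numpy as np

    def v1_unit(stimulus_angle, preferred_angle, width=20.0):
        """Toy orientation-tuned unit: responds most to its preferred line angle."""
        diff = stimulus_angle - preferred_angle
        return np.exp(-(diff ** 2) / (2 * width ** 2))

    def conjunction_unit(stimulus_angles):
        """Toy downstream unit: fires only when both a vertical and a horizontal
        edge are present - a crude stand-in for a more complex feature detector."""
        vertical = max(v1_unit(a, 90) for a in stimulus_angles)
        horizontal = max(v1_unit(a, 0) for a in stimulus_angles)
        return vertical * horizontal

    print(conjunction_unit([0, 90]))   # high: both features present
    print(conjunction_unit([0, 10]))   # low: only roughly horizontal edges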

But the fact that we can take any given scene, and effortlessly think of it as either "past", "present", or "future", is puzzling under this view because, as I said, the scene itself is the same in all cases. And it's not as if we have a sense devoted to time: the only time we're ever directly aware of, is "right now".

Swedish neuroscientists Nyberg et al used fMRI to measure brain activity associated with "mental time travel": Consciousness of subjective time in the brain. They scanned volunteers and asked them to imagine walking between two points, in 4 different situations: past, present, future, or remembered (as opposed to imagined in the past). This short walk was one which they'd really done, many times.

What happened?
Compared to a control task of doing mental arithmetic, both remembering and imagining the walk activated numerous brain areas and there was very strong overlap between the two conditions. No big surprise there.

The crucial contrast was between remembering, past imagining and future imagining, vs. imagining in the present. This revealed a rather cute little blob:

This small nugget of the left parietal cortex represents an area where the brain is more active when thinking about times other than the present, relative to thinking about the same thing, but right now. They note that this area "partly overlaps a left angular region shown to be recruited during both past and future thinking and with parietal regions implicated in self-projection in past, present, or future time."
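For anyone unfamiliar with how that kind of contrast is computed, here's a schematic sketch (not the authors' actual analysis, and the numbers are invented): each condition gets a beta estimate per voxel from the GLM, and the contrast weights the three "non-present" conditions against the present one.

    import numpy as np

    # Hypothetical per-voxel beta estimates, one per condition, from a fitted GLM
    conditions = ["remember", "imagine_past", "imagine_future", "imagine_present"]
    betas = np.array([1.2, 1.1, 1.3, 0.6])   # made-up numbers for illustration

    # Contrast: mean of the three "non-present" conditions minus the present one
    contrast = np.array([1/3, 1/3, 1/3, -1.0])
    effect = contrast @ betas
    print(effect)   # positive -> voxel more active for times other than "right now"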

So what? This is a nice study, but like most fMRI studies, it doesn't tell us what this area is actually doing. To know that, we'd need to know what would happen to someone if that area were damaged. Would they be unable to imagine any time except the present? Would they think their memories were happening right now? Maybe you could use rTMS to temporarily inactivate it - if you could find volunteers willing to lose their sense of time for a while...

Nyberg L, Kim AS, Habib R, Levine B, & Tulving E (2010). Consciousness of subjective time in the brain. Proceedings of the National Academy of Sciences of the United States of America. PMID: 21135219

Wikileaks: A Conversation

"Wikileaks is great. It lets people leak stuff."

"Hang on, so you're saying that no-one could leak stuff before? They invented it?"

"Well, no, but they brought leaking to the masses. Sure, people could post documents to the press before, but now anyone in the world can access the leaks!"

"Great, but isn't that just the internet that did that? If it weren't for Wikileaks, people could just upload their leaks to a blog. Or email them to 50 newspapers. Or put them on the torrents. Or start their own site. If it's good, it would go viral, and be impossible to take down. Just like Wikileaks, with all their mirrors, except even more secure, because there'd be literally no-one to arrest or cut off funding to."

"OK, but Wikileaks is a brand. It's not about the technical stuff - it's the message. Like one of their wallpapers says, they're synonymous with free speech."

"So you think it's a good thing that one organization has become synonymous with the whole process of leaking? With the whole concept of openness? What will happen to the idea of free speech, then, if that brand image suddenly gets tarnished - like, say, if their founder and figurehead gets convicted of a serious crime, or..."

"He's innocent! Justice for Julian!"

"Quite possibly, but why do you care? Is he a personal friend?"

"It's an attack on free speech!"

"So you agree that one man has become synonymous with free speech? Doesn't that bother you?"

"Erm... well. Look, fundamentally, we need Wikileaks. Before, there was no centralized system for leaking. Anyone could do it. It was a mess! Wikileaks put everything in one place, and put a committee of experts in a position to decide what was worth leaking and what wasn't. It brought much-needed efficiency and respectability to the idea of leaking. Before Wikileaks, it was anarchy. They're like... the government."

"..."

Edit: See also The Last Psychiatrist's take.

Meditation vs. Medication for Depression

What's the best way to overcome depression? Antidepressant drugs, or Buddhist meditation?

A new trial by Segal et al has examined this question. The short answer is that 8 weeks of mindfulness meditation training was just as good as prolonged antidepressant treatment over 18 months. But like all clinical trials, there are some catches.

Right mindfulness, sammā-sati, is the 7th step on the Buddha's Noble Eightfold Path of enlightenment. In its modern therapeutic form, however, it's a secular practice: you don't have to be a Buddhist to meditate here (but it presumably helps).

Mindfulness meditation is also branded nowadays as mindfulness-based cognitive behavioural therapy (MCBT), although how much it has in common with regular CBT is debatable. The technique is derived from the Buddhist tradition.

The essence of mindfulness is deceptively simple: you try to become a detached observer of your own feelings and thoughts. Rather than just getting angry, you notice the feelings of anger, without letting them take over. As I've written before, while this might sound easy, we're not always aware of our own feelings.

MCBT has attracted a lot of attention as a possible way of helping people with depression achieve relapse prevention. The idea is that if you can train people to become aware of depressive thoughts and feelings if they start to reappear, they'll be able to avoid being sucked into the cycle of depression.

The 160 patients in this trial were initially treated with antidepressants, starting with an SSRI, and if that didn't work, moving on to venlafaxine (up to 375 mg, as necessary, which is a serious dose) or mirtazapine for people who couldn't take the side effects. This is a sensible treatment regime, not one relying on low doses and doubtful drugs, as in many other antidepressant trials.

About half of the patients both stayed in the trial and achieved remission. After 5 months of sustained treatment, these 84 patients were randomized into 3 groups: continuation of their antidepressant, placebo pills, or mindfulness. The people who ended up on placebo had their antidepressants gradually replaced by sugar pills over a number of weeks, to avoid withdrawal effects.

Here's what happened:

People on placebo did very badly, with only 20% remaining well 18 months later. People who either stayed on the drugs, or who got the mindfulness training, did a lot better, with 70% staying well, and there were no differences between the two.

However, here's the catch. This was only true of a subset of the patients, the ones who had an "unstable remission", meaning that when they were originally treated with drugs, their symptoms went up and down a bit. The "stable remission" people showed no benefits of either treatment, with the ones on placebo doing slightly better, if anything.
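To get a feel for whether a 20% vs 70% split could plausibly be chance, here's a rough sketch with hypothetical group sizes (the trial randomized 84 patients across three arms, so roughly 28 per arm; the exact counts below are invented to match the quoted proportions, not taken from the paper):

    from scipy.stats import fisher_exact

    # Hypothetical counts, ~28 per arm, chosen to match the ~20% vs ~70% figures
    placebo_well, placebo_relapsed = 6, 22
    mindfulness_well, mindfulness_relapsed = 20, 8

    odds_ratio, p_value = fisher_exact([[placebo_well, placebo_relapsed],
                                        [mindfulness_well, mindfulness_relapsed]])
    print(odds_ratio, p_value)   # a gap this big in groups this size is very unlikely by chance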

Overall, though, this is a decent study, and shows that, for some people, mindfulness can be helpful. A skeptic could complain that mindfulness was no better than medication, but it might have two advantages: cost and side effects, though this would depend on which medication you were talking about (some are a lot more expensive, and more prone to side effects, than others). The mindfulness meditation also wasn't double-blind, so the benefits may have been placebo effects, but that could be said of almost any trial of psychotherapy.

I also wonder whether you'd do even better if you became all mindful and stayed on medication: this study had no combined-treatment group, unfortunately, but this is something to look into...

Segal ZV, Bieling P, Young T, Macqueen G, Cooke R, Martin L, Bloch R, & Levitan RD (2010). Antidepressant Monotherapy vs Sequential Pharmacotherapy and Mindfulness-Based Cognitive Therapy, or Placebo, for Relapse Prophylaxis in Recurrent Depression. Archives of General Psychiatry, 67 (12), 1256-64. PMID: 21135325

Delusions of Gender

Note: This book quotes me approvingly, so this is not quite a disinterested review.

Cordelia Fine's Delusions of Gender is an engaging, entertaining and powerfully argued reply to the many authors - who range from the scientifically respectable to the less so - who've recently claimed to have shown biological sex differences in brain, mind and behaviour.

Fine makes a strong case that the sex differences we see, in everything from behaviour to school achievements in mathematics, could be caused by the society in which we live, rather than by biology. Modern culture, she says, while obviously less sexist than in the past, still contains deeply entrenched assumptions about how boys and girls ought to behave, what they ought to do and what they're good at, and these - consciously or unconsciously - shape the way we are.

Some of Fine's targets are obviously bonkers, like Vicky Tuck, but for me, the most interesting chapters were those dealing in detail with experiments which have been held up as the strongest examples of sex differences, such as the Cambridge study claiming that newborn boys and girls differ in how much they prefer looking at faces as opposed to mechanical mobiles.

But Delusions is not, in Steven Pinker's phrase, saying we ought to return to "Blank Slatism", and it doesn't try to convince you that every single sex difference definitely is purely cultural. It's more modest, and hence, much more believable: simply a reminder that the debate is still an open one.

Fine makes a convincing case (well, it convinced me) that the various scientific findings, mostly from the past 10 years, that seem to prove biological differences, are not, on the whole, very strong, and that even if we do accept their validity, they don't rule out a role for culture as well.

This latter point is, I think, especially important. Take, for example, the fact that in every country on record, men roughly between the ages of 16 and 30 are responsible for the vast majority of violent crimes. This surely reflects biology somehow; whether it's the fact that young men are physically the strongest people, or whether it's more psychological, is by the by.

But this doesn't mean that young men are always violent. In some countries, like Japan, violent crime is extremely rare; in other countries, it's tens of times more common; and during wars or other periods of disorder, it becomes the norm. Young men are always, relatively speaking, the most violent but the absolute rate of violence varies hugely, and that has nothing to do with gender. It's not that violent places have more men than peaceful ones.

Gender, in other words, doesn't explain violence in any useful way - even though there surely are gender differences. The same goes for everything else: men and women may well have, for biological reasons, certain tendencies or advantages, but that doesn't automatically explain (and it doesn't justify) all of the sex differences we see today; it's only ever a partial explanation, with culture being the other part.

Online Comments: It's Not You, It's Them

Last week I was at a discussion about New Media, and someone mentioned that they'd been put off from writing content online because of a comment on one of their articles accusing them of being "stupid".

I found this surprising - not the comment, but that anyone would take it so personally. It's the internet. You will get called names. Everyone does. It doesn't mean there's anything wrong with you.

I suspect this is a generational issue. People who 'grew up online' know this already, as Penny Arcade memorably explained.

The sad fact is that there are millions of people whose idea of fun is to find people they disagree with, and mock them. And they're right, it can be fun - why else do you think people like Jon Stewart are so popular? - but that's all it is, entertainment. If you're on the receiving end, don't take it seriously.

If you write something online, and a lot of people read it, you will get slammed. Someone, somewhere, will disagree with you and they'll tell you so, in no uncertain terms. This is true whatever you write about, but some topics are like a big red rag to the herds of bulls out there.

Just to name a few, if you say anything vaguely related to climate change, religion, health, the economy, feminism or race, you might as well be holding a placard with a big arrow pointing down at you and "Sling Mud Here" on it.

The point is - it's them, not you. They are not interested in you, they don't know you, it's not you. True, they might tailor their insults a bit; if you're a young woman you might be, say, a "stupid girl" where a man would merely get called an "idiot". But this doesn't mean that the attacks are a reflection on you in any way. You just happen to be the one in the line of fire.

What do you do about this? Nothing.

Trying to enter into a serious debate is pointless. Insulting them back can be fun, just remember that if you find it fun, you've become one of them: "he who stares too long into the abyss...", etc. Complaining to the moderators might help, but unless the site has a rock solid zero-tolerance-for-fuckwads policy, probably not. Where the blight has taken root, like Comment is Free, I'd not waste your time complaining. Just ignore it and carry on.

The most important thing is not to take it personally. Do not get offended. Do not care. Because no-one else cares. Especially the people who wrote the comments. They presumably care about whatever "issue" prompted their attack, but they don't care about you. If anything, you should be pleased, because on the internet, the only stuff that doesn't attract stupid comments is the stuff that no-one reads.

I've heard these attacks referred to as "policing" existing hierarchies or "silencing" certain types of people. This seems to me to be granting them far more respect than they deserve. With the actual police, if you break the rules, they will physically arrest you. They have power. Internet trolls don't: if they succeed in policing or silencing anybody, it's because their targets let them boss them around. They're nobody; they're not your problem.

If you can't help being offended by such comments, don't read them, but ideally you shouldn't need to resort to that. For one thing, it means you miss the sensible comments (and there's always a few). But fundamentally, you shouldn't need to do this, because you really shouldn't care what some anonymous joker from the depths of the internet thinks about you.

Autism and Old Fathers

A new study has provided the strongest evidence yet that the rate of autism in children rises with the father's age: Advancing paternal age and risk of autism. But questions remain.

The association between old fathers and autism has been known for many years, and the most popular explanation has been genetic: sperm from older men are more likely to have accumulated DNA damage, which might lead to autism.

As I've said before, this might explain some other puzzling things such as the fact that autism is more common in the wealthy; it might even explain any recent increases in the prevalence of autism, if people nowadays are waiting longer to have kids.

But there are other possibilities. It might be that the fathers of autistic people tend to have mild autistic symptoms themselves (which they do), and this makes them likely to delay having children, because they're socially anxious and so take longer to get married, or whatever. It's not implausible.

The new study aimed to control for this, by looking at parents who had two or more children, at least one of them with autism, and at least one without it. Even within such families, the autistic children tended to have older fathers when they were born - that is to say, they were born later. This seems to rule out explanations based on the characteristics of the parents.

However, there's another objection, the "experienced parent" theory. Maybe if parents have already had one neurotypical child, they're better at spotting the symptoms of autism in subsequent children, by comparison with the first one.

The authors tried to account for this as well, by controlling for the birth-order ("parity") of the kids. They also controlled for the mother's age amongst several other factors such as year of birth and history of mental illness in the parents. The results were still highly significant: older fathers meant a higher risk of autism. As if that wasn't enough, they also did a meta-analysis of all the previous studies and confirmed the same thing.
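To give a flavour of what "controlling for" those factors means in practice, here's a rough sketch with simulated data. The variable names, effect sizes and prevalence are all invented for illustration; the real study used Swedish register data and a more sophisticated model.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "paternal_age": rng.normal(33, 6, n),
        "maternal_age": rng.normal(30, 5, n),
        "parity":       rng.integers(1, 4, n),
        "birth_year":   rng.integers(1983, 1993, n),
    })
    # Simulate a modest paternal-age effect (prevalence inflated to keep the toy stable)
    logit_p = -3 + 0.1 * (df["paternal_age"] - 33)
    df["autism"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    # Logistic regression of autism on paternal age, adjusting for the other covariates
    model = smf.logit("autism ~ paternal_age + maternal_age + parity + birth_year", data=df)
    result = model.fit(disp=False)
    print(result.params["paternal_age"])   # typically recovers a positive coefficient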

So overall, this is a very strong study, but there's a catch. The study population included over a million children (1,075,588) born in Sweden between 1983 and 1992. Of these, there was a total of 883 diagnosed cases of autism. That's a rate of 0.08%. In other words, although older fathers raised the risk of autism by quite a lot relatively speaking, the absolute rate was still tiny.

The most recent estimates of autism prevalence in Britain have put the figure somewhere between 1% and 2%, e.g. Baird et al (2006) and Baron-Cohen et al (2009), with American studies, using slightly different methods, generally coming in just below 1%. So the Swedish figure is more than 10 times lower than modern estimates. Whether this reflects different criteria for diagnosis, national differences, or increased prevalence over time is debatable, but it does raise the question of whether these findings still apply today.
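Spelling out that arithmetic (a quick Python check of the figures above):

    cases, population = 883, 1_075_588
    swedish_rate = cases / population            # about 0.0008, i.e. roughly 0.08%
    modern_estimate = 0.01                       # ~1%, the lower end of recent estimates
    print(f"Swedish rate: {swedish_rate:.2%}")
    print(f"Modern estimates are roughly {modern_estimate / swedish_rate:.0f}x higher")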

The only way to know for sure would be to do a randomized controlled trial - get half your volunteer men to wait 10 years before having children - but I don't think that's going to happen any time soon...

Hultman CM, Sandin S, Levine SZ, Lichtenstein P, & Reichenberg A (2010). Advancing paternal age and risk of autism: new evidence from a population-based study and a meta-analysis of epidemiological studies. Molecular Psychiatry. PMID: 21116277

How To Fool A Lie Detector Brain Scan

Can fMRI scans be used to detect deception?

It would be nice, although a little scary, if they could. And there have been several reports of successful trials under laboratory conditions. However, a new paper in NeuroImage reveals an easy way of tricking the technology: Lying In The Scanner.

The authors used a variant of the "guilty knowledge test" which was originally developed for use with EEG. Essentially, you show the subject a series of pictures or other stimuli, one of which is somehow special; maybe it's a picture of the murder weapon or something else which a guilty person would recognise, but the innocent would not.

You then try to work out whether the subject's brain responds differently to the special target stimulus as opposed to all the other irrelevant ones. In this study, the stimuli were dates, and for the "guilty" volunteers, the "murder weapon" was their own birthday, a date which obviously has a lot of significance for them. For the "innocent" people, all the dates were random.
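In code, the logic of that test is something like the toy sketch below (my illustration, with invented response values, not the paper's actual analysis): compare the response to the "special" stimulus against the spread of responses to the irrelevant ones.

    import numpy as np

    def looks_guilty(responses, target_index, threshold=2.0):
        """Toy guilty-knowledge decision: does the response to the 'special'
        stimulus stand out from the irrelevant ones, in standard-deviation units?"""
        responses = np.asarray(responses, dtype=float)
        irrelevant = np.delete(responses, target_index)
        z = (responses[target_index] - irrelevant.mean()) / irrelevant.std()
        return z > threshold

    # Made-up "brain responses" to six dates; date 0 is the subject's own birthday
    guilty_subject   = [5.1, 1.0, 1.2, 0.9, 1.1, 1.0]
    innocent_subject = [1.1, 1.0, 1.2, 0.9, 1.1, 1.0]

    print(looks_guilty(guilty_subject, target_index=0))    # True
    print(looks_guilty(innocent_subject, target_index=0))  # False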

What happened? The scans were extremely good at telling the "guilty" from the "innocent" people - they managed 100% accuracy with no false positives or false negatives. The activation associated with the target stimulus (birthdays), over and above the control stimuli, was extremely similar in two separate groups of volunteers. So the technique does work in principle, which is nice.

But the countermeasures fooled it entirely, reducing accuracy to well below random chance. And the countermeasures were very simple: before the scan, subjects were taught to associate an action, a tiny movement of one of their fingers or toes, with some of the "irrelevant" dates. This, of course, made these dates personally relevant, just like the really relevant stimuli, so there was no difference between them, making the "guilty" appear "innocent".

Maybe it'll be possible in the future to tell the difference between brain responses to really significant stimuli as opposed to artificial ones, or at least, to work out whether or not someone is using this trick. Presumably, if there's a neural signature for guilty knowledge, there's also one for trying to game the system. But as it stands, this is yet more evidence that lie detection using fMRI is by no means ready for real-world use just yet...

Ganis G, Rosenfeld JP, Meixner J, Kievit RA, & Schendan HE (2010). Lying in the scanner: Covert countermeasures disrupt deception detection by functional magnetic resonance imaging. NeuroImage. PMID: 21111834

 