Tuesday, December 23, 2014

He ain't heavy, he's my carer




Anne: There's no point in going on living. That's how it is. I know it can only get worse. Why should I inflict this on us, on you and me?
Georges: You're not inflicting anything on me.
Anne: You don't have to lie, Georges.
Georges: Put yourself in my place. Didn't you ever think that it could happen to me, too?
Anne: Of course I did. But imagination and reality have little in common.
Georges: But things are getting better every day.
Anne: I don't want to carry on. You're making such sweet efforts to make everything easier for me. But I don't want to go on. For my own sake, not yours.
Georges: I don't believe you. I know you. You think you are a burden to me. But what would you do in my place? 

Amour 


In 2006 the Central Statistics Office estimated that nearly 5% of the Irish population was engaged in providing hands-on care for a relative. While caregiving can be very rewarding, it presents challenges that can be highly stressful, particularly for those caring for a relative with dementia. In addition to witnessing a loved one's loss of independence, carers may face challenging behaviours associated with dementia, such as night-time wandering, irritability and aggression.

A recent study has indicated that people providing care to family members with Alzheimer's disease have higher levels of depression, stress and anxiety compared to non-caregivers. This may be sadly predictable, but the results also indicated that caregivers performed more poorly on a number of tests of cognitive function. Furthermore, caregivers had disrupted output of cortisol-a hormone released in response to acute psychosocial stress (and associated with fight-or-flight responses such as increased heart rate) but whose regulation can be impaired by long-term stress.

Many caregivers for people with dementia are their spouses, who are elderly themselves. As the ageing process can have effects on the brain which are similar to those of chronic stress, there may be a compounding effect of these two factors. At the same time, where children are providing care for elderly parents they may have to balance responsibility for their parent with work and/or children of their own. This may partly explain why spouses are sometimes more willing to take part in research in this area than sons/daughters. 

I wonder how often the question is also asked: how stressful is it to receive caregiving? Again, this can be difficult to disentangle from other stresses, such as those of deteriorating health and losing one's independence, but (for example) trying to minimise the extent of one's dementia to one's family caregiver no doubt acts as an additional source of stress for some people in the early stages of dementia.

There are psychological interventions such as cognitive-behavioural group therapy which have been shown to help caregivers to better cope with the challenges they face. At a broader level, perhaps one of the most important contributors to the level of caregiving burden is not just the presence of problems but the lack of an extra pair of hands. Indeed, there is evidence that cortisol output is more normalised on days when caregivers avail of adult day services compared to days when they do not. Sadly, too often there aren't enough people other than the primary caregivers pitching in a few hours a week to give the main caregiver a break. At present I would include myself in that criticism. At a societal level, I can't help but imagine that this concentration of the burden of care in the hands of fewer people also leaves older people more vulnerable to elder abuse. While "isolated individuals" may carry out such acts, they can only do so because so many of us do not watch out for each other. 



Saturday, December 6, 2014

Who's studying who? Demand characteristics in psychological research


Trolls are psychopaths! A recent study understandably gathered a lot of attention by showing that those who enjoyed trolling were more likely to report higher levels of psychopathic traits. (I'm sure much of print media was keen to use these findings to take a dig at the online world which has cost them so much income). However, if asked to complete questions about enjoyment of trolling, followed by questions about how much of a psychopath one is, one does not need a Bachelor's degree in psych to guess that the researchers might be trying to see if online trolls are, in fact, psychopaths. So, being the poo-stirring troll that you are, might it not be fun to exaggerate your level of antisocial behaviour out of spite, just to enjoy the resultant media poo-storm?

Q 13. Do you have any further comments on your experience of trolling? (Please write these into the space below).

A 13. uruururur how i enjoin throwing kats in de bornfire with ma knuckly-dragging companions, den boastin bout it online to liberaln00bs wuh wuh wuh (Written by someone who wouldn't say boo to a cat in real life)

Perhaps it's a little unfair to pick on this one study, as this is a pervasive problem in psychological research. Demand characteristics are changes in participants' behaviour that occur because they have guessed what the researcher is trying to show, and have adjusted their behaviour accordingly.

There are various ways to get around or minimise demand characteristics in human research. One handy method is to use a placebo condition. Say you were studying caffeine: it would be relatively easy to have two pills which look identical, but only one of which contains caffeine. Better yet, go double blind, so that the researcher who interacts with participants doesn't know which condition each participant is in. However, for a lot of psychological research this is not so easy to pull off. If you're looking at the effects on depression of a type of psychotherapy versus a waiting-list control, people can see whether or not they're getting the treatment. One way to address this is to have an active control, where people do engage in some kind of intervention (say a group meeting with other people with depression), so that there will still be demand characteristics, but the key ingredient of your new psychotherapy is missing.
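
To make the double-blind idea concrete, here is a minimal sketch in Python (with made-up details, not any actual study's procedure) of allocating identical-looking pills under blind codes, where the code-to-condition key is held by a third party rather than by the experimenter running the sessions:

import random

def make_allocation(n_participants, seed=42):
    # Only a third party holds this key; the experimenter sees just "A"/"B"
    key = {"A": "caffeine", "B": "placebo"}
    rng = random.Random(seed)
    codes = ["A", "B"] * (n_participants // 2)  # balanced allocation
    rng.shuffle(codes)
    return codes, key

codes, key = make_allocation(20)
print(codes)  # e.g. ['B', 'A', 'A', ...] - no conditions revealed
# The key is only consulted once data collection (and ideally analysis) is done.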

It was cool to see an article taking this issue a bit further. Boot et al. suggested that in addition to having an active control condition, one should also measure the extent to which the active control and the actual intervention under study are associated with an expectation of an effect. In this way one can get a better sense of whether any change is due to a positive effect of the intervention or merely to an expectation that it will work. At the same time, one needs to watch out in case the more novel therapy is perceived as what the researcher is really interested in: even with equivalent expectations, participants may still display more positive effects if they correctly perceive which one is the experimental condition.

When doing studies on chewing gum it occurred to me that there was no placebo condition. Unlike the caffeine pill scenario, there isn't really a strong equivalent for chewing gum; people can tell whether or not they're chewing. I was surprised at the relative lack of research on demand characteristics in psychology more generally (perhaps it's a Pandora's box), so I conducted a study of my own with Prof Andy Smith. People taking part were given chewing gum, rated their level of alertness and completed attention tasks. For the "positive" condition, I told them at the beginning that previous research had shown a positive effect of chewing gum on alertness and attention (I also chewed during this condition-always the method actor). In the "negative" condition I told them previous research had shown a negative effect. There was also a neutral condition in which no explicit prompting was given.

We found that people in the negative condition reported lower alertness in response to chewing gum than those in the positive or neutral conditions. The lack of a clear difference between the neutral and positive conditions was interesting, although I'm still undecided whether that reflects a genuine effect or whether the mere presence of some manipulation is enough to induce demand characteristics. In contrast to the observed effects on reported alertness, there was no difference in attention performance. If people weren't showing better sustained attention when told that gum was good, this suggests they may have only been reporting greater alertness rather than actually experiencing it. I suppose the lesson is that an overt behavioural measure in your study will be less vulnerable to demand characteristics than self-report.
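
For anyone curious what the comparison of conditions looks like in practice, here is a toy sketch in Python using invented alertness ratings (not the actual study data) and a simple one-way ANOVA; the real analysis was a little more involved:

from scipy import stats

# Hypothetical post-chewing alertness ratings (0-100), for illustration only
positive = [68, 72, 75, 70, 66, 74]
neutral = [67, 71, 69, 73, 68, 70]
negative = [58, 61, 55, 60, 63, 57]

f_val, p_val = stats.f_oneway(positive, neutral, negative)
print(f"F = {f_val:.2f}, p = {p_val:.3f}")
# Follow-up pairwise comparisons would then show which conditions differ
# (e.g. negative vs. positive, negative vs. neutral).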

While designing the study I recall watching the film "Inception". The plot concerns people who can enter other people's dreams. Furthermore, they can enter dreams within these dreams. I sometimes have trouble finding films escapist, as I can end up thinking about how some aspect of the film could say something about my own life. In this case, I started to think about demand characteristics within demand characteristics-what if people perceived that the study was about demand characteristics, and started responding so as to confirm my hypothesis that demand characteristics impact upon performance? A senior academic at Cardiff brought up this point as well. Unfortunately, I have no easy answer. I did ask people at the end of the study if they could guess what the hypothesis was, and only one person guessed correctly, although maybe some people did and decided that they'd keep quiet, and let me hear again what I wanted to hear.....

Allen, A. P., & Smith, A. P. (2012). Demand characteristics, pre-test attitudes and time-on-task trends in the effects of chewing gum on attention and reported mood in healthy volunteers. Appetite, 59(2), 349-356.

Wednesday, November 19, 2014

Wolves in the throne room: Sexism in the academy

"I was just curious about the image you used". I paused, unsure of how to respond. "Okay-em, what about it?" I asked. "Well, you are going to be presenting this to women in the department, you know, as equals..." The sentence was left hanging, allowing the others in the room to see the implied accusation of sexism.

I'm paraphrasing a bit-I don't remember the exact words exchanged. I had wanted to use an image that concisely summed up the idea of someone chewing gum-a simple idea, but quite difficult to capture in a still image, given most people chew gum with their mouths closed. Hey presto-I found a clear image of someone chewing gum, stretching it out of their mouth-a nice, unambiguous pic of gum-chewing. I hadn't really thought of the image as sexualised, let alone sexist. What a fool I was. If you're in work, you'd better shut down your PC in case you get sued for the hardcore porn you're about to witness:



Okay, so in hindsight it IS quite a sexualised image, but it was only a small part of an overall slide with other images (hence I needed something that was obviously a mouth with chewing gum associated with it). Looking at it now (and blown up larger on this page than it was in the PowerPoint! I swear!) I can see how someone could see this as a bit wuh wuh wuh. I did it in a rush, not really taking account of how it might be perceived (I swear!).

On the opposite end of the spectrum, I recall a female postgrad (obviously not the same person as the PowerPoint accuser) voicing an opinion that feminism had had its day. (Here I pause for effect). Okay......

What connects these contrasting stories is that I can imagine at least a handful of people hearing them and thinking that BOTH the high sensitivity to perceived sexism AND the assumption that sexism no longer exists stem from a common source: that academia has lower rates of sexism than many other workplaces, and that this is perhaps particularly the case for certain departments, such as sociology or psychology. There are not-unreasonable grounds for thinking this. At Cardiff University, where I did my PhD, the School of Psych has won an Athena SWAN award for advancing women in psychology. Unfortunately, sexism is everywhere, and it's usually worse than not thinking through a throwaway PowerPoint pic, or indeed worse than a poor choice of shirt.

Here's an example: a male PhD student within another department of psychology expressed resentment towards a female academic for not marrying her long-term partner, with whom she had a child. (I should add that the PhD student expressed this resentment to me, not to this woman or her family.) The couple were living together and both caring for the child. The resentment didn't seem to come from any religious sense or "think about the children" mentality, but rather from a sense of insult that the man's desire to marry was not reciprocated by his partner. The fact that the man had decided to stay with his partner and child did not seem to soothe the nerves of the guy taking offence on another man's behalf. It seems marriage should be according to the man's will, not by mutual consent.

There are various factors that could explain sexism. Although it may not explain the above example, as a psychologist, I have to give a mention to the just-world hypothesis. This does what it says on the tin-basically, it's the attitude that the world is essentially fair. This can have the unsavoury effect of victim-blaming. After all, if someone brought misfortune on themselves, then the world is still basically a fair (and therefore safer) place, right? Right? I would suggest this can have a related effect: when people see discrimination (which is pretty hard to deny), they can at least make it seem fairer by assuming that each side is as bad as the other. Yes, blacks have it hard, but look at all the abuse whites get! Yes, women are picked on for their gender, but this is true of men too! Just in the last week or two an email went around the Uni about a workshop for helping women progress their careers-a male academic replied that there should be a similar workshop for males. It was a particularly delicious irony that this gender warrior was a Professor-after all, there is no gender disparity at the top of academia, so it makes no sense to have such a workshop herpa derpa doo. Although it is prosaic to point out that sexism against men exists, the idea that this is somehow as damaging or dangerous as sexism against women may be an example of the just-world hypothesis (even if those who think this way may cry "injustice!").

The rules around gender are changing; some men in science, too, are starting to think about whether they can "have it all". The work culture in academia doesn't lend itself to a very hands-on family life, and if more men want to be more involved with their families this has obvious implications. Even if times are changing, and it's harder to get away with sexism, sexism is not going to just go away any time soon, and having a lot of education is no guarantee of good behaviour. When our job calls us to hold our mirror up to humanity, we need to hold that mirror up close to our own faces. I probably should've just used a pack of Doublemint.

Thursday, November 13, 2014

Conference review: ECNP congress 2014



I recently attended the European College of Neuropsychopharmacology (ECNP) congress 2014. I was doubly fortunate in that (a) I received a travel grant to the congress, and (b) it was happening in Berlin, the home of an old friend. In so many words, the ECNP congress discusses the use of brain science to better understand and treat disorders of the brain.


It's very easy to get lost at a congress (compared to a conference)-the scope of a large subject can leave one drowning in a sea of breadth, presenting at sessions where no one knows much about what any other speaker is talking about. What's neat about ECNP is that they divide sessions into tracks-preclinical research, clinical research, clinical treatment etc.-and within these tracks the sessions have well-defined titles that still allow for multidisciplinary input.


I was presenting a poster with colleagues at the Dept of Psychiatry in UCC concerning the effects of ketamine on treatment-resistant depression, which we classified as major depression that had failed to respond to two or more adequate trials of antidepressants. We also looked at brain-derived neurotrophic factor (BDNF), a protein involved in the growth and survival of brain cells. BDNF was reduced in people with treatment-resistant depression compared to healthy controls, as has been shown before. What was interesting was that BDNF increased in the depressed group following ketamine, but this increase was delayed compared to the rapid improvement in feelings of depression. What was more, the BDNF effect did not seem to persist after multiple infusions, even though multiple infusions maintained a reduction in depressive symptoms.


The topic of depression and ketamine was pretty hot at the congress-there was a brainstorming session on it that was well attended despite a start time of 7.45 (in comparison, a session the previous day on getting ERC grants didn't seem so well attended!). The ketamine session was more focussed on whether ketamine is really ready for clinical use; there was a lot of discussion of the clear impact on symptoms in treatment-resistant depression, as well as of possible risk factors-a problematic aspect being the dearth of evidence on the long-term effects of ketamine therapy. There was a lot of (too much) anecdotal evidence at the discussion-what would be great in the long run would be to develop some kind of decision tree clinicians could use to decide when ketamine can be recommended and when to cease its use, although in the absence of long-term evidence such work may be preliminary. Although ketamine may increase BDNF, I shouldn't make it sound as though other drugs can't impact on the development of brain cells-there was a poster from Portugal indicating that antipsychotics can enhance the growth of brain cells in rodents, although this effect is present for some of these drugs but not others.

On the topic of depression, there was a nice psychology talk from Catherine Harmer suggesting that the processing of emotional stimuli may change following treatment with drugs such as NRIs (norepinephrine reuptake inhibitors, which increase the level of norepinephrine in the synapse by preventing reuptake back into the cell). Interestingly, these changes in information processing seem to occur prior to the improvement of symptoms, suggesting that this cognitive effect could underpin the subsequent emotional effect.

Other highlights included a keynote speech from Karl "Future Nobel Laureate" Deisseroth, who chatted about how he invented optogenetics and CLARITY. There was also an interesting session on the neurobiology of ADHD-there seems to be quite a lot of work going on across Europe looking at how genetics may explain changes in the structure of the brain associated with ADHD (as well as other disorders). I asked one of the speakers at a later session if there had been much work along these lines using fMRI during sustained attention, to look at the function of brain structures during performance of the types of tasks impaired in ADHD-apparently there may be a gap in the literature there...

You can download our posters from this congress and other posters from the Dept of Psychiatry here



Tuesday, November 11, 2014

Being right wing and doing the right thing



From personal experience, although I have met a few psychologists with right-of-centre views, they tend to be more likely to get into arguments with colleagues than their leftie counterparts. There is good reason to believe that psychologists, and perhaps social psychologists in particular, tend not to be right-of-centre. But I've always found the terms "left-wing" and "right-wing" a little ambiguous. Can someone's political worldview be summed up with such a simplifying term? I think we all know the answer is "no", but what ways are there of further dividing "right wing" into sub-types? As it happens, a recent article looked at two different ways of being right wing:

Right-wing authoritarianism (RWA/"right-wingaz with attitude"?) describes a mindset that places a high value on authority. Rules are there not to be broken-and those who step over the line should be punished. "It's a good idea because it's the way people have always done things!" is more likely to appeal if you have a high level of RWA.

Social dominance orientation (SDO) refers to a preference for social groups to be hierarchical, so that some people or groups are higher up the food chain than others.

The authors were interested in how these different traits could impact on prosocial behaviour: specifically, one's willingness to help a drug-addicted individual (a heroin addict referred to as "Nicole").

Survey results indicated that people higher in SDO expressed less approval of public spending on helping people with addiction, and were less willing to offer help personally. In contrast, people higher in RWA were more willing to help personally, although they too did not approve of forking out government funds. It seems that a right-wing person with an authoritarian outlook, rather than a desire for social hierarchy, may nonetheless be willing to engage in prosocial behaviour at an individual level. (However, actual helping behaviour was not observed here-the sceptic in me wonders how many people are really going to reach into their pockets for a drug addiction charity.)

Notwithstanding the question of whether people high in RWA would actually help or not, the researchers delved a bit deeper into why the RWA folk were reporting a greater willingness to help personally, developing a model in the process which seemed to explain at least some of the variability in desire to help. Funnily enough, they built up an interesting chain of reasoning which indicates why some people with high RWA do NOT help. In terms of the items they measured, it seems as though RWA is associated with being more likely to attribute people's life circumstances to internal causes (e.g. thinking someone's an addict because they were personally drawn to taking drugs rather than due to peer pressure). This in turn led to attributing greater responsibility to the addict, resulting in less sympathy and less willingness to help. So why did many of them still report wanting to help? The researchers suggested that the "direct" positive effect of RWA on willingness to help personally might be borne of a sense of moral duty.
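
To illustrate the kind of "chain" described above, here is a rough regression-based sketch in Python using simulated data; the variable names, effect sizes and simplified model (I've collapsed sympathy into the responsibility step) are mine for illustration, not the authors':

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
rwa = rng.normal(size=n)                          # right-wing authoritarianism
internal = 0.5 * rwa + rng.normal(size=n)         # internal attribution
responsibility = 0.6 * internal + rng.normal(size=n)
# Negative indirect path via responsibility, plus a direct positive path
# from RWA (the hypothesised "moral duty" route)
willingness = 0.3 * rwa - 0.5 * responsibility + rng.normal(size=n)

total = sm.OLS(willingness, sm.add_constant(rwa)).fit()
mediators = sm.add_constant(np.column_stack([rwa, internal, responsibility]))
direct = sm.OLS(willingness, mediators).fit()
print(total.params)   # total effect of RWA on willingness to help
print(direct.params)  # direct effect once the mediators are controlled for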

The picture is thus complex, and right-wing authoritarianism may produce conflicting motives within people. It is worth noting also that there was a positive correlation between RWA and SDO, so we should not think that any given individual can only have one or the other; people can be high in both.

As a friend said at the time of the London riots a few years back, it's funny how events can bring out the right-winger in many people who would not consider themselves to be that way inclined (and no doubt other events can make people more "left wing", although again this term can be unpacked). The study discussed here just looked at people's "stable personality traits". It would be interesting to try to manipulate how right wing people are to see how this impacts on their current level of prosocial behaviour; for example, terror management theory suggests that reminders of mortality make you more prone to place importance on ingroup values (both SDO and RWA are associated with more negative regard for outgroup values). Perhaps reminding someone that they (and indeed their way of life) won't be around forever will make them less likely to help that drug addict.....unless of course, they start to see that person with addiction as part of a bigger ingroup.

Halkjelsvik, T., & Rise, J. (2014). Social dominance orientation, right-wing authoritarianism and willingness to help addicted individuals: The role of responsibility judgments. Europe's Journal of Psychology, 10(1), 27-40.

Sunday, October 26, 2014

Irritable bowel syndrome-mens morba in corpore morbo



The stiff-upper-lips of this world will probably always claim that irritable bowel syndrome (IBS) is "just" or "only" IBS. However, this syndrome can be highly painful and even debilitating at its most severe. It is characterised by abdominal pain and bloating, as well as constipation and/or diarrhoea. It tends to be associated with high levels of stress; childhood trauma is more prevalent in people with IBS, who have also been shown to have an altered stress response to an acute stressor in not one but two studies. IBS is also comorbid with certain stress-related disorders such as depression, although the interaction between depression and IBS can sometimes be complex-see, for example, this clip of Kurt Cobain, who suffered from both IBS and suicidal ideation.

Recent research from our group (particularly Paul Kennedy, who completed his PhD in this area) has suggested that people with IBS perform more poorly on a test of memory than a healthy control group. The test (paired associates learning) involves remembering the location of abstract shapes hidden behind boxes-a bit like the kind of card game one would play as a kid. It is thought to involve activation of the hippocampus, a brain region which is also involved in the regulation of the stress response. It should be noted that although the number of errors made by people with IBS was higher than in the healthy group, this was a subtle difference-but a noticeable one nonetheless.

In a just-published study in which I had the good fortune to be involved, we looked at the potential neurochemical underpinnings of this change in memory performance. Specifically, we looked at whether memory performance in IBS could be changed by altering tryptophan levels. Tryptophan is an essential amino acid ("essential" because we have to derive it from our diet). It's used by the body to produce serotonin-a household name among neurochemicals thanks, among other things, to the class of antidepressants known as SSRIs (selective serotonin reuptake inhibitors). HOWEVER, most tryptophan is broken down along the less famous kynurenine pathway, where it is catabolised into kynurenine and various other neuroactive chemicals. The enzymes in our bodies which make this happen, such as IDO and TDO, have previously been found to be altered in IBS, and there is some evidence that the chemicals produced in the kynurenine pathway can have a negative impact on memory. This suggests the kynurenine pathway could play a role in IBS-related memory impairment.

In our study we gave people with IBS and healthy controls a drink with amino acids other than tryptophan-these compete with tryptophan at the blood-brain barrier, and consequently reduce the amount of tryptophan getting through to the brain. As a control condition, on a different testing day we gave them the same drink but with tryptophan included. We then had everyone complete a series of cognitive tasks.

The results indicated that people with IBS did worse than healthy controls at the memory task, similar to what had been shown before. However, this only happened when they had consumed the drink containing tryptophan. Analysis of their blood samples by Gerard Clarke indicated that the drink containing tryptophan led to higher levels of kynurenine-thus breakdown of tryptophan along the kynurenine pathway may explain this memory difference in IBS.

As drinking a mix of amino acids can taste unpleasant, I wouldn't necessarily advise people with IBS to run to the health shops to see if they have them in. I would say that one useful step people with IBS can take is to try to keep stress at a manageable level-excessive stress is likely to have a negative impact on your cognitive performance, particularly in the case of more complex tasks such as remembering large amounts of info. Further, although there is no cure for IBS, symptoms can be managed and minimised, and avoiding too much stress is part of this process.

***UPDATE*** For those who want to know more about the science of IBS, I forgot to mention that while  I was working on this blog post Paul Kennedy et al. published a review paper looking at irritable bowel syndrome and the microbiome (i.e. the community of microorganisms living inside us humans, and their genes). Check it out!

Kennedy, P. J., Allen, A. P., O’Neill, A., Quigley, E. M. M., Cryan, J. F., Dinan, T. G., & Clarke, G. (2014). Acute tryptophan depletion reduces kynurenine levels: Implications for treatment of visuospatial memory performance in irritable bowel syndrome. Psychopharmacology, doi: 10.1007/s00213-014-3767-z

Saturday, October 11, 2014

Two different ways: Creativity and two modes of thought

Whenever one encounters a child genius one is tempted to think that their creativity cannot stem from "normal" thought processes, and one sometimes sees in DVD commentaries that writers and directors can have a surprising lack of insight into, or ability to articulate, where their ideas come from-this can be the case with actors too. A lot of people assume that creativity is fundamentally mysterious-after all, they may say, if we fully understood it how would it be creative? Let's imagine one is not Mozart, but wishes to engage in a task which requires at least moderate creativity-say, writing a script for a romcom which will hopefully earn loads of money. Is one doomed to wait upon some elusive muse, or can we think about the types of underlying thought processes one has to draw upon to complete this task?

In discussing this I draw on a distinction from the broader literature on thinking and reasoning. There has been a lot of interest in type 1 and type 2 thinking, which have been popularised in a recent book by behavioural economist extraordinaire Daniel Kahneman. A bat and a ball cost 1.10 together - if the bat costs one euro more than the ball, how much does each cost? Type 1 thinking is fast and automatic (it shouts out "the bat's one euro and the ball's ten cent!"), type 2 is slower and sequential (it realises that one euro is not one euro more than ten cent, so you have to adjust the estimate-it's actually 1.05 for the bat and five cent for the ball). I like to remember which is which by thinking of type 1 as the type which arrives first, because of its speed, and vice versa. The mysterio approach to creativity would seem to favour the account that, inasmuch as creativity is a thought process, it should therefore be type 1 (indeed, so fast that it is not accessible to conscious introspection).
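
For the record, the type 2 arithmetic can be spelled out in a couple of lines (a trivial check, written out here in Python just for fun):

# bat + ball = 1.10 and bat = ball + 1.00
# so (ball + 1.00) + ball = 1.10, i.e. 2 * ball = 0.10
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(round(bat, 2), round(ball, 2))  # 1.05 0.05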

When working through creative projects, one tends to go through stages-firstly, you prepare the groundwork; say, checking out the other films in the genre to see if you are sinking into cliché (or committing outright plagiarism). There often follows an impasse, where one is stuck on what to do next-where to go relative to what's gone before (how many ways can you get two kooky singletons to go through the will-they/won't-they and maintain tension in the audience?). Following a period of "incubation", with a bit of luck there is then insight: the aha! moment when it occurs to you how to move past the mental block (the unique selling point of the script is that the male lead is actually a salmon trapped in the body of a windsurfing CEO played by Matthew McConaughey). But often the creative process doesn't end there-you then have evaluation of the idea ("the salmon thing isn't realistic, but is it compelling?") and then dissemination (having the script sent to directors/producers who realise your meisterwork on the big screen). There are other accounts of creativity that propose more stages, but I think this broad framework is a nice place to start.

I have argued previously in an article with Kevin Thomas that both type 1 and type 2 thinking are important at various stages of creative thinking. During the preparation phase, ideas for creative projects can come through applying simple heuristics (e.g. "do the opposite of what they've done") or through careful consideration of what your predecessors have done. Insight can be a very sudden thing (many riddles used in psychology experiments can be like this) or it can occur through a more gradual process of working through concepts (consider the process of composing a score for a film; there are many points at which you have to decide which notes to use next). Evaluation can be type 1 ("does the script for the romcom look okay?") or type 2 ("how exactly should I phrase the funeral speech to avoid undermining the frothy comedy?")

It's always flattering when one's work is cited, and it was cool to see our paper mentioned in a recent article in Thinking & Reasoning which indicated that people prone to engage in type 2 thinking perform better at the remote associates test (e.g. "what word connects the three words HOUSE, APPLE and SURGEON?"). This test is thought to tap into the slope of one's associative hierarchy (basically, the ease with which one can draw connections between concepts which, at first glance, may seem unconnected). Such a skill is surely important for the striking juxtapositions that can create a great scene in a novel or film. At the same time, the remote associates test is very quick-in terms of stages of the creative process it tends to be associated with a very sudden moment of insight and very little evaluation. It would be interesting to see more research being done on the more long-term creative projects that artists engage in, where insight happens at a more gradual pace (e.g. choosing various chords in the writing of a song) and evaluation of a larger set of ideas must be carried out (e.g. honing the phrasing of the quips Jennifer Lawrence speaks as her six-packed beau flounders on the beach in salmon-brained lunacy).

Since writing the paper with Kevin Thomas, I have also come across evidence that what can aid creativity is using whichever type of thought one is less likely to rely upon generally. It seems that maybe it is not just the ability to use both slow, logical reasoning and fast simple heuristics, but the ability to switch between the two, that can help the creative thought process. When the deadline for the romcom script rolls around you'll have to tap into faster thinking processes.

Will the project of understanding the underlying cognitive psychology of creativity effectively undermine the existence of creativity itself? I don't think so. The technological, social and ecological sands shift beneath our feet, and even when the process of creative thinking is understood, this process will still have to be applied to novel problems requiring different solutions-and of course the process itself may need to be tweaked in future. Perhaps the romcom will be replaced by a greater form of light romantic titillation, and new tropes will have to be negotiated. (If you didn't get it, the answer is TREE-or at least TREE is one possible answer).

Allen, A. P., & Thomas, K. E. (2011). A dual process account of creative thinking. Creativity Research Journal, 23(2), 109-118.

Friday, August 29, 2014

Blurred lines: A review of "Intuition Pumps" by Daniel Dennett

Just finished reading "Intuition Pumps" by Daniel Dennett. Although Dennett is perhaps best known as one of the so-called "New Atheists", along with Sam Harris and Richard Dawkins (the latter seems to be a chum of his), he has arguably made his biggest contribution in the philosophy of mind. Besides being a philosopher of mind, Dennett takes a great interest in relevant disciplines such as biology and psychology. He is also a charming, avuncular storyteller, which probably does no harm to his relatively broad appeal, and he aims for a non-specialist audience with this latest offering.

Before grappling with some larger topics, the book opens with some general "thinking tools". I read through this section of the book in no time-it's a great little page-turning pamphlet in itself, even if a few of the ideas (or versions thereof) are probably already in many intelligent people's arsenals. A personal favourite is his point about the word "surely", which he suggests is often used when someone kind of realises that their claim is just about obvious enough that they shouldn't have to justify or explain it too much (although maybe they should!)

His interest in biology is evident in his discussion of evolution, an engaging enough read (I say this as someone who knows very little about evolution). However, it is his discussion of consciousness (an area I've recently become interested in) that I was most interested in checking out. My first experience of Daniel Dennett was reading (a chunk of) his "Consciousness Explained". With the advent of fMRI allowing researchers to observe blood flow to specific regions of the brain in response to thinking tasks, one might be tempted to look into "pinning" consciousness on a particular part of the brain. Although fMRI has been used to assess whether people are conscious of certain things, in very interesting circumstances, in "Consciousness Explained" Dennett came out against the implication that there is some endpoint in the brain where consciousness occurs, suggesting instead that it is something which is sorta (a word he uses a lot in "Intuition Pumps") present at some level in one part of the brain before becoming more present at a "later" part. This approach to conscious experience leaves a lot of people cold, as far as I can see, but I must say I'm biased towards Dennett's view on consciousness; when I hear thinkers talk about consciousness as an indivisible, all-or-nothing process it doesn't really appeal to me at an intuitive level-don't we all have experience of our level of awareness diminishing as sleep sets in?

Having been drawn in by the previous book, I was keen to take in the section in this book dealing with consciousness. Here Dennett takes on classic philosophical problems such as zombies (i.e. people who behave as if they are conscious, but are not) and Searle's infamous Chinese Room. Unfortunately, the book is at times a little too quick to refer the reader on to other sources in this section, although the chapter "The Tuned Deck" may be one of the most entertaining parts of the book (I won't spoil it for you), if a little vague on detail. In some ways this book's section on artificial intelligence seems to say more about consciousness than the section entitled "Tools for thinking about consciousness". Here, Dennett dissects how bits of information, too simplistic by themselves to represent anything in the world, can be gradually combined with extremely simple operators to complete tasks such as addition and subtraction (he even sets a homework assignment to be done using a computer program which allows you to design these types of operations from the bottom up, which I'll admit I didn't do). Reading this section I got a gut feeling for how an incredibly simple process, such as a neuron firing or not, when combined with billions of other neurons in vast interacting (and sometimes self-referential?...) networks, could gradually build up to something complex enough to be capable of self-awareness. I would be curious what Dennett thinks/would say about recent developments in robotics, with robots being created that appear to have some degree of self-awareness.

Turning towards the topic of free will, Dennett takes a compatibilist viewpoint (i.e. that free will and determinism are compatible). Similar to his sorta approach to consciousness, he argues that the idea that determinism rules out free will is based on an idea of free will which is too absolutist. Again, I found this intuitively appealing-I feel that I have a greater amount of free will about things I think of doing tomorrow compared to what I do a few seconds from now. However, I still found this part of the book a little difficult to swallow-I may need to read it a second time, but he seems to put the burden of proof on those who think determinism rules out free will, without being 100% clear as to why a sorta free will would allow for determinism-does determinism then become sorta determinism?

Dennett closes with some words on being a philosopher. He suggests that pursuing a more long-term problem in philosophy may be a more fruitful approach in the long run than grappling with a "hot" topic-he quips that these are the quickest to burn out. Probably true in psychology, although this is advice which is easier to give than follow when one is a junior thinker chasing funding/job opportunities...

http://www.amazon.com/Intuition-Pumps-Other-Tools-Thinking/dp/0393348784/ref=sr_1_1?ie=UTF8&qid=1409346685&sr=8-1&keywords=intuition+pumps

Sunday, August 10, 2014

The perks of a pack of gum

Besides drinking caffeine-rich beverages, some long-distance drivers will, I've heard anecdotally, chew gum behind the wheel to avoid paying an untimely visit to the land of snooze. A recent research article examined the potential alerting effect of gum in the lab, looking not only at feelings of alertness and associated attention performance, but also at brain activity and heart rate changes which can give a broader picture of what's happening.

We had participants complete a task where they watched a stream of 3-digit numbers-generally each number differed from the previous one, but every so often there would be a repetition (a bit like a quality control job on a production line). A vigilance task (i.e. sustained attention to rare target stimuli) such as this tends to be associated with a vigilance decrement (i.e. you perform more poorly at the task as time goes on). A previous paper indicated that chewing gum could improve performance on this type of task, but only after a certain amount of time performing it-that is, as eyes were starting to droop. In this study we looked at chewing gum on the same vigilance task, but this time we also measured electroencephalography (EEG) as well as heart rate, to see what physiological changes might accompany such effects. The vigilance decrement was probably enhanced by the fact that setting up the EEG equipment requires a longish spell of sitting still (as anyone who has taken part in this study will attest).
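
For concreteness, here is a small Python sketch of how a stimulus stream for this kind of vigilance task could be generated-the trial numbers and repetition rate below are illustrative guesses, not the parameters of the actual study:

import random

def make_stream(n_trials=100, p_repeat=0.1, seed=1):
    rng = random.Random(seed)
    previous = rng.randint(100, 999)
    stream, is_target = [previous], [False]
    for _ in range(n_trials - 1):
        if rng.random() < p_repeat:
            number = previous              # rare target: repeat the last number
        else:
            number = rng.randint(100, 999)
            while number == previous:      # avoid accidental repetitions
                number = rng.randint(100, 999)
        is_target.append(number == previous)
        stream.append(number)
        previous = number
    return stream, is_target

stream, is_target = make_stream()
print(stream[:10], sum(is_target), "targets")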

When participants chewed gum during the first stage of the vigilance task, they showed less of a post-baseline decrement on the vigilance test compared to those who did not chew gum (interestingly enough, this difference was still evident when further vigilance tasks were completed post-chewing). 

The EEG indicated that beta activity (which is associated with an alert state) was heightened at frontal and temporal areas following chewing gum; this effect was strongest straight after chewing, although it seemed to persist post-chewing. Furthermore, chewing gum was also associated with heightened heart rate, although this effect on heart rate seemed to dissipate quite quickly once chewing ended. It may be the case that central nervous system activity is a more likely explanation of ongoing effects of chewing than sympathetic nervous system activity (i.e. heart rate).
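
As a rough illustration of what "beta activity" means in practice (this is not the pipeline used in the paper, just a minimal sketch), beta-band power for a single EEG channel can be estimated along these lines in Python:

import numpy as np
from scipy.signal import welch

def beta_power(signal, fs=250.0):
    # Power spectral density via Welch's method, then integrate ~13-30 Hz
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs * 2))
    band = (freqs >= 13) & (freqs <= 30)
    return np.trapz(psd[band], freqs[band])

# Illustrative use on random noise standing in for one minute of one channel
fake_eeg = np.random.randn(250 * 60)
print(beta_power(fake_eeg))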

The study adds to a growing amount of evidence suggesting that chewing gum can enhance not only subjective alertness but also sustained attention. There has been some evidence that this can occur even in the absence of a vigilance decrement; I would speculate that this may have happened in the linked study as they used a task which I believe requires much more frequent responses to stimuli-this may reduce the extent of the decrement somewhat, compared to the vigilance task we used. (Consider running quality control on a production line where you have to spot the occasional mistake -vigilance- versus an assembly task whereby you have to continuously perform an operation on each passing piece of equipment -continuous performance. I think it's easier to tune out when you only have to do something every so often than when the default mode is responding to stimuli). 

Have I tested this on myself? I can recall attending one lecture where what I'll generously call a vigilance decrement (there was nodding involved) started to set in. This affliction was relieved to some extent by chewing some gum for a few minutes-thereby just about preventing an embarrassing collision of my head and the seat in front of me. Just don't go on a three-night, no-sleep road trip and blame me if anything goes wrong!


Allen, A. P., Jacob, T. J. C., & Smith, A. P. (2014). Effects and after-effects of chewing gum on attention, heart rate, EEG and mood. Physiology & Behavior, 133, 244-251.


Monday, August 4, 2014

Rant: On established validity and good questions

When using psychological tests, it is important to ascertain whether they have good validity (i.e. whether they test the concepts they claim to test). Obviously, if a test has already undergone evaluation to show that it has good validity (e.g. it correlates well with other measures assessing the same concept) then this can be a great time saver for you. However, I've recently been struck by a few measures which, while having previously undergone testing to examine validity, seem a priori to be either muddled or downright contradictory. (No doubt more seasoned psychologists than me can think of other examples.)

A while back I completed an online survey whereby one was asked to place oneself in an imaginary scenario (stage 1). One is asked to imagine oneself committing a crime (let's say robbing a bank). In this story one is requested to imagine that one feels a sense of excitement and enjoyment in committing this crime. After one has read (and presumably daydreamed about) this scenario, there is a series of follow-up questions (stage 2). One of these questions asks you to indicate how much you would enjoy robbing the bank (with the option of saying "not at all"). But you have already been told to imagine that you do enjoy it at stage 1. The question can't really be probing how much the respondent would actually enjoy such a scenario, as the survey as a whole is telling the person that they would enjoy it in this imaginary world. On mentioning this to a researcher using these questions, he pointed out that he had previously noticed this shortcoming and was working on re-validating the questions-it annoyed me that he was left doing this because the original designers had not taken more care to ensure that the questions made sense in light of the scenarios that went with them.

A second example is perhaps less irritating, as the question that annoyed me was not necessarily contradictory but rather a bit muddled. However, the response to my griping from the author who used it was telling. An example question they mentioned was a triple-barrel question, let's say along the lines of "In order to enhance economic growth, our nation should cut corporation tax, reduce workers' rights and abolish the minimum wage". When I pointed out that this was a triple-barrel question which didn't allow for people having differing views on the impact of minimum wage versus the impact of corporation tax on economic growth, the person using the scale pointed out that as the validity and reliability of the questionnaire had been assessed in that format, they were sticking with it.

Why rush into spending the time and effort of collecting a large set of data to ensure the validity of a measure when it only takes an intelligent person in a room to see whether or not the questions actually make sense and are clear-cut?


Wednesday, July 23, 2014

Greetings

Congratulations! You have entered the world of my blog, Andrew's Psychology Archive.

I (Andrew Patrick Allen) am a psychological researcher interested in the psychology of stress, cognition, drug effects (e.g. alcohol/hangover), foodstuffs (e.g. probiotics), the brain-gut axis, behavioural economics, philosophy of mind and a host of other fascinating aspects of human experience.

I am currently based at University College Cork (UCC), working with a multidisciplinary group investigating the brain-gut axis. With the Department of Psychiatry at UCC I am also doing some work looking at biomarkers of depression (e.g. the cortisol awakening response, a rise in the stress hormone cortisol following awakening that is altered in different forms of depression) and how these might be affected by treatments for depression.

Should be bringing you plenty of ideas concerning my research as well as that of others over the coming months, maybe years. Stay tuned.