In order to boost my attention, executive function, learning, memory, and creativity, I popped a few tablets of Modafinil before beginning this item. I cannot yet detect how far it has embellished my prose style, but in a possible first flush of boosted intellect I have continued writing whilst also, at the same time, and in parallel, avidly reading a paper published this very morning:
Modafinil for cognitive neuroenhancement in healthy non-sleep-deprived subjects: a systematic review. R.M. Battleday, A-K. Brem.
To appear in European Neuropsychopharmacology. DOI: http://dx.doi.org/10.1016/j.euroneuro.2015.07.028
First of all, I am very glad that Europeans have a journal of Neuro-psycho-pharma-cology. Europeans have not been having a good time recently, what with bailing out Greece (cradle of insolvency), so they need any comfort they can get. The title of the journal is awkward and contrived, as if put together by a European Union committee, but it is August, Europe is on holiday, and one should be as kind as possible.
The authors do not say whether they took Modafinil while winnowing down the 267 initial papers so as to then plough through the 24 acceptable papers they found on the presumed cognitive effects of this particular substance. By the way, it is time for all scholars to stop rejecting papers not written in English. This linguistic imperialism must cease forthwith. Also, don’t just reject those studies which don’t use a placebo: put them in a quarantine box and estimate the maximal claimed effects of the intervention, which serves as a reality check. For example, if all students now start popping Modafinil after reading this paper (or newspaper accounts of it) then they will be getting the maximal effects of Modafinil plus the placebo effects of all the publicity, which ought to be higher than the pure effectiveness measures produced by the best research. If even those boosted effects are small compared to a cup of coffee (which you can partly judge from the open, non-placebo trials), why bother any further?
On drugs or not, Battleday and Brem have written a fine paper, best of all when outlining how future research should proceed. I will not bother you with the presumed biochemical mechanisms involved, but only with the cognitive results and the very good discussion of methods.
With a few exceptions, the sample sizes are pitifully small, in the 10-20 subject range, with a few at 64. Yes, these are experimental manipulations, and ought to be able to detect the effects of the drug being administered, but a lot hinges on the volunteers being representative of the population. Experimentalists sneer at that sort of consideration, seeing the world through a Latin square.
The authors say: When simple psychometric assessments are considered, modafinil intake appears to enhance executive function, variably benefit attention and learning and memory, and have little effect on creativity and motor excitability. When more complex tasks are considered, modafinil appears to enhance attention, higher executive functions, and learning and memory. Negative cognitive consequences of modafinil intake were reported in a small minority of tasks, and never consistently on any one: decreased performance on a cognitive flexibility task (the intra/extra-dimensional set shift task in Randall et al., 2004), increased deliberation time during harder trials on a planning task (the One-Touch Stockings of Cambridge task in Randall et al., 2005a), increased deliberation time on one divergent thinking task (the Cambridge Gambling Task in Turner et al., 2003), and decreased performance on another (the abbreviated Torrance in Mohamed, 2014). It appears that modafinil exerts minimal effects on mood – if anything improving it – and only rarely causes minor adverse effects.
Table 1 in the paper gives the brief results for each of the included studies.
Methodological points. The discrepancy between the mainly null results from simple tests, with the exception of those assessing executive functions, and the mainly positive results from more complex testing paradigms highlighted by this review warrants further discussion and investigation. In terms of complex tasks, a systematic bias towards positive results could have been introduced through study design or study execution (a universal bias from task design is less likely because of the varied nature of these tasks). One source of study-design-based error could be the equal weighting we have accorded to results from crossover and between-subject trials. In the former, a participant’s performance on a test under one condition (for example, modafinil intake) is compared to their own performance under another condition (for example, placebo), and in the latter one group’s performance under one condition on a test is compared to a control group’s performance on the same test. It has been argued that repeating psychometric tests in crossover tests introduces practice effects that vary unpredictably between individuals and cognitive tasks, and could bias results (Hartley et al., 2003; Lowe and Rabbitt, 1998; Randall et al., 2004; Rose and Lin, 1984). Fortunately, however, the preponderance for each study design is roughly equal within simple and complex task groups, and repeated dose studies use complex tasks with adaptive assessment platforms that should obviate these practice effects. Equally, the prolonged testing experience associated with complex tasks may have allowed more opportunities for experimenters to influence participants. Against this argument is the fact that in the two studies that did assess participant blinding, participants were able to guess they had taken modafinil 55% of the time in a complex crossover trial (Gilleen et al., 2014), but 75% in a simple between-subjects study (Turner et al., 2003).
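The crossover point deserves a concrete illustration. Here is a toy simulation of my own (invented effect sizes and variances, not numbers from the paper) showing why a crossover design, which compares each subject with himself, detects a fixed drug effect far more precisely than comparing two separate groups of twenty: the stable person-to-person differences simply cancel out in the within-subject comparison.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 2.0   # hypothetical drug benefit, in test-score points
PERSON_SD = 10.0    # stable between-person differences in ability
NOISE_SD = 3.0      # trial-to-trial measurement noise
N = 20              # subjects per arm, typical of these studies

def simulate_once():
    # Between-subjects: a drug group compared with a separate placebo group.
    drug = [random.gauss(0, PERSON_SD) + TRUE_EFFECT + random.gauss(0, NOISE_SD)
            for _ in range(N)]
    placebo = [random.gauss(0, PERSON_SD) + random.gauss(0, NOISE_SD)
               for _ in range(N)]
    between_diff = statistics.mean(drug) - statistics.mean(placebo)

    # Crossover: each subject tested under both conditions;
    # the stable person effect cancels in the within-subject difference.
    ability = [random.gauss(0, PERSON_SD) for _ in range(N)]
    diffs = [(a + TRUE_EFFECT + random.gauss(0, NOISE_SD))
             - (a + random.gauss(0, NOISE_SD)) for a in ability]
    within_diff = statistics.mean(diffs)
    return between_diff, within_diff

between, within = zip(*(simulate_once() for _ in range(2000)))
print("between-subjects: mean diff %.2f, SD %.2f"
      % (statistics.mean(between), statistics.stdev(between)))
print("crossover:        mean diff %.2f, SD %.2f"
      % (statistics.mean(within), statistics.stdev(within)))
```

With these assumed numbers both designs are unbiased, but the crossover estimate has roughly a third of the sampling error, which is why small crossover trials can find effects that equally small between-subject trials miss — bought, as the authors note, at the price of practice effects from repeated testing.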
Conversely, the simple psychometric tasks used by the majority of studies could have lacked sufficient sensitivity to detect cognitive effects in the healthy and mostly student-based populations tested. With this in mind, it is a notable non sequitur to assume that tests that reliably report cognitive dysfunction are equally qualified to detect improved cognitive performance in healthy adults. A key example of the inadequacy of some testing paradigms is the use of the “clock test” in some of these studies, which involves drawing hands onto a clock at specific times (for example, in Randall et al. 2005a). Whilst in ill populations this test offers a valuable screening tool of poor cognitive function (Shulman, 2000), it is clearly a poor differentiator of normal or high-performing healthy individuals. Indeed, ceiling performances were consistently observed within simple tasks, for example on the pattern recognition memory task (Müller et al., 2013; Randall et al., 2005a; Turner et al., 2003; Winder-Rhodes et al., 2010), the delayed matching to sample task (Randall et al., 2004; Randall et al., 2003; Turner et al., 2003), the rapid visual information processing task (Randall et al., 2005a), the spatial working memory task (Turner et al., 2003; Winder-Rhodes et al., 2010), the preparing-to-overcome-pre-potency tasks (Minzenberg et al., 2014, 2008), and the Sternberg number recognition task (Makris et al., 2007). When these ceiling effects were lessened by only analysing data from low baseline performers, many studies actually did detect significant differences between modafinil and placebo groups (Minzenberg et al., 2014; Mohamed, 2014; Müller et al., 2004; Randall et al., 2005b). Several groups have commented on this issue, noting, for example, that it may explain why robust effects on these same tasks are seen with sleep deprivation (Müller et al., 2004), when all participants effectively become low baseline performers.
They also suggest that these tasks are in their current state inappropriate for detailed assessment of healthy individuals (Müller et al., 2004), and must be revised or abandoned in favour of more complex testing paradigms (Finke et al., 2010; Müller et al., 2013; Pringle et al., 2013). Recognition of the limitations of simple psychometric tests is also seen in the temporal succession of simple with complex ones over the last decade. Thus, it appears that within research on modafinil, any consensus about cognitive benefits has to this point been limited by the use of simplistic testing paradigms. These ceiling effects must be addressed in future work; certainly before discourse on the ability of low and high baseline participants to benefit from modafinil can offer real value.
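The ceiling-effect argument can also be made concrete with a toy simulation (again my own invention, with hypothetical numbers, not drawn from any of the reviewed studies): cap the attainable score on a simple task, and a genuine two-point drug benefit shrinks in a near-ceiling student sample, yet re-emerges when only low-baseline performers are analysed — exactly the pattern the review describes.

```python
import random
import statistics

random.seed(2)

CEILING = 30.0      # maximum attainable score on a simple task
TRUE_EFFECT = 2.0   # hypothetical modafinil benefit before clipping

def score(ability, on_drug):
    raw = ability + (TRUE_EFFECT if on_drug else 0.0) + random.gauss(0, 2)
    return min(raw, CEILING)   # the task cannot register anything higher

# Healthy student-like sample: many subjects already near the ceiling.
abilities = [random.gauss(28, 3) for _ in range(10000)]

drug = [score(a, True) for a in abilities]
placebo = [score(a, False) for a in abilities]
overall = statistics.mean(drug) - statistics.mean(placebo)

# Restrict to low-baseline performers, where the ceiling does not bite.
low = [a for a in abilities if a < 26]
low_effect = (statistics.mean(score(a, True) for a in low)
              - statistics.mean(score(a, False) for a in low))

print("apparent effect, whole sample:        %.2f" % overall)
print("apparent effect, low-baseline subset: %.2f" % low_effect)
```

The whole-sample estimate is attenuated well below the true two points, while the low-baseline subset recovers most of it — the same logic by which sleep deprivation, turning everyone into a low-baseline performer, makes the effect easy to see.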
We propose the following framework, centred on the principles of, on one hand, sensitivity and reproducibility, and, on the other, ecological validity (see Table 3).
The ‘simple’ task designs described above are extremely useful tools for dissecting the influence of a substance or process on higher cognitive functions. Equally important, their internal validity is high, at least within clinical populations (Levaux et al., 2007; Sweeney et al., 2000). Hence, if the ceiling effects encountered in these studies could be ameliorated, they would still add much valuable information to any assessment of supra-normal cognition. One solution to this problem is to integrate them into more advanced software platforms, which would still be standardised, but could be set to increase task difficulty via more complex task demands and shorter response windows. More complex tasks could then be integrated into the basic software package as additional modules, meaning more nebulous domains and cognitive processes could be investigated with reference to changes in basic systems, and a high internal consistency in the literature could be established. The additional advantages of such an approach are myriad: integration of adaptive training and testing regimes, so that learning in each cognitive-subdomain could also be measured; more comprehensive analysis of participant performance, with the ability to compare every aspect of their actions; and game-based incentive structures, obviating the decline in performance that follows prolonged testing (Kennedy and Scholey, 2004). Using the same system, testing could be conducted on untrained tasks and their cognitive sub-domains, to identify any transfer of cognitive ability (see Gilleen et al., 2014), or be used for re-testing at later dates, to identify lasting effects. The output of neuro-enhancement-related research is aimed at a fundamentally different population from most prior work on cognitive modulation – those seeking elective self-improvement of their own cognitive abilities, rather than those hoping to treat cognitive deficits. 
Consequently, methodologies of research in this area need to be considered anew, in order to probe supra-normal cognitive enhancement in ecologically valid settings whilst retaining rigorous testing conditions. Most tasks and projects in life necessitate learning and operating within a system for multiple days, and individual users are interested primarily in how their own performance will change, rather than the average of a group. Thus, testing regimes should be based over multiple days, and allow analysis within and between single participants’ performances. Baseline testing is also essential; as an absolute measure of individual performance, to ensure that ceiling and floor performances are not limiting the usefulness of results, and to allow speculation on whether and why some groups benefit more from particular agents or techniques. Finally, Makris and colleagues’ finding of decreased reaction times four and five hours after modafinil ingestion serves as a reminder that under real-life conditions, performance is likely to be affected by fatigue even within a single working day (Makris et al. 2007). In this case, modafinil’s eugeroic properties are evidently beneficial; however, more generally studies should make more effort to examine the length of performance benefits offered by an agent or technique for neuro-enhancement.
I like this paper, because it goes into the details of test characteristics, looking at their sensitivity and appropriateness as measures of complex performance. These considerations are sometimes glossed over in reviews. For once, studying students makes ecological sense, because the effect being sought is better studying. I do not find the case for Modafinil proved, and I suspect the authors are not bowled over by the effect, but if their suggestions are followed a larger study with more sensitive, repeated cognitive measures could settle the issue. Till then, it might be best to stick to black coffee.
Psychometric tests are the best predictors of real life outcomes, so now for a real life complex test: disconnect your old router provided as standard by your ISP, as I did last night, and then attempt to connect a new, enhanced, updated and reputedly more powerful, and more expensive version bought elsewhere. I had achieved partial success around about midnight, with many lights blinking, but a few minor details like getting actual internet connectivity remained. This morning the man on the helpline was very kind, and all is now working well, though the wifi seems no stronger or faster than the old model. Whilst filling in the VPI/VCI and subnet masks, together with the usernames, passwords and new wireless PIN, I asked him “What characteristics are required for success in your job?”. There was a very long pause, the usual “good question” interpolated in order to gain more time, and finally he said “Patience and compassion”.
By their deeds ye shall know them.