The Evidence for Brain Training – Seeing the Whole Picture

Jul 26, 2018

Brain training works! Brain training doesn’t work! Every week, I see a new headline in the news about brain training, and the headlines reliably swing back and forth between these two extremes like the tick-tock of a clock.

Why the back and forth? If a study one week shows that brain training works, how can a study the next week show the opposite?

One big reason this happens is that the popular press treats all brain training programs as if they were the same. So when one study – usually with a BrainHQ exercise – shows positive results, the popular press says “brain training works!” And then, the next week, when another study – usually not with a BrainHQ exercise – shows negative results, the popular press says “brain training doesn’t work.” And so we get the back and forth.

But what if, instead of these moment-to-moment looks at individual clinical trials, we took a step back and looked at all of the clinical trials together? That’s what a meta-analysis is: a scientifically rigorous way of combining a set of clinical trials to ask which effects are seen reliably across them. Instead of the back-and-forth, we would see the overall picture.
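To make the idea concrete, here is a minimal sketch of the kind of pooling a meta-analysis does – a simple fixed-effect (inverse-variance) average, where each trial contributes an effect size weighted by how precisely it was measured. The trial numbers below are made-up illustrations, not data from the Edwards et al. review.

```python
import math

# Hypothetical trials: (standardized effect size d, variance of d).
# Illustrative values only -- not taken from the review discussed here.
trials = [
    (0.45, 0.04),
    (0.30, 0.02),
    (0.55, 0.09),
    (0.25, 0.03),
]

# Fixed-effect pooling: weight each trial by the inverse of its variance,
# so more precise trials count for more in the average.
weights = [1.0 / var for _, var in trials]
pooled_effect = sum(w * d for (d, _), w in zip(trials, weights)) / sum(weights)

# Standard error and 95% confidence interval of the pooled effect.
pooled_se = math.sqrt(1.0 / sum(weights))
ci_low = pooled_effect - 1.96 * pooled_se
ci_high = pooled_effect + 1.96 * pooled_se

print(f"Pooled effect: {pooled_effect:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

The point of the weighting is that no single noisy trial – positive or negative – can swing the conclusion on its own; the pooled estimate reflects the whole body of evidence.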

Dr. Jerri Edwards, an academic researcher who has been studying cognitive training since the earliest days of the ACTIVE study, has led a team of researchers to do exactly that. In their recent paper “Systematic review and meta-analyses of useful field of view cognitive training,” published this month in Neuroscience and Biobehavioral Reviews, the reviewers looked at the overall pattern of results in 17 different randomized controlled trials (resulting in 44 peer-reviewed publications), each of which used “useful field of view” (UFOV) training, which is now the Double Decision exercise in BrainHQ.

The reviewers analyzed the pattern of results according to a framework recommended by the National Academy of Medicine, asking five key questions:

  1. Does the training reliably show “near transfer” – improvements in cognitive tests that are generally related to the training?
  2. Does the training reliably show “far transfer” – improvements in real-world measures of cognitive function?
  3. Has the training been compared to “active controls” – other cognitively stimulating activities – to show that it is uniquely the training that drives benefit?
  4. Have the effects of training been shown to endure for a period of time after training is complete?
  5. Have studies been completed by independent scientists that are not financially affiliated with the group that developed the training?

The reviewers found that each of these five requirements was reliably met across the 17 trials. Training showed reliable improvements in cognitive tests related to speed, and the benefits generalized to real-world measures including timed everyday activities, independent living skills, and driving skills. Benefits from training were reliably larger than those from active controls (like training on how to use the internet) or alternate activities (like crossword puzzles). Finally, the majority of the trials were conducted by truly independent investigators – researchers with no financial connection to the group that originally developed the training, or to Posit Science.

Why did they get such clear results? One important reason is that they focused on one specific type of cognitive training – UFOV training (Double Decision). By not averaging in studies of failed brain training approaches, the reviewers were able to get a clearer understanding of what a single effective brain training approach can do.

This is a landmark result! Showing that a specific type of brain training reliably meets the National Academy of Medicine’s five criteria for an effective brain training program should put an end to the debate “does brain training work?” and should help researchers move to a more productive discussion – “which specific types of brain training have been shown to be effective and which have not?”

Those of us involved in building, testing, refining, and validating BrainHQ are highly confident about the answer to that question. We continue to explore how to make BrainHQ training even more efficient, effective, and engaging. This new systematic review and meta-analysis should be an encouragement to anyone starting BrainHQ, sticking with their BrainHQ training, or coming back to BrainHQ. Happy brain training!

Dr. Henry Mahncke

Dr. Henry Mahncke is the CEO of Posit Science, the company that makes BrainHQ. He joined Posit Science at its inception in 2003 as Vice President of Research & Outcomes, where he led the first large-scale clinical trials of a publicly available cognitive training program. He became CEO of Posit Science in 2011.