If you repeat a specific mental task—say, memorizing a string of numbers—you’ll obviously get better at it. But what if your recollection improved more generally? What if, by spending a few minutes a day on that simple task, you could also become better at remembering phone numbers, or recalling facts ahead of an exam, or bringing faces to mind?
This is the seductive logic of the brain-training industry. These companies offer brief, simple video games that are meant to boost mental abilities like memory, attention, and processing speed. By reacting to on-screen objects as quickly as possible, you could become better at spotting details in your environment, like approaching cars or pedestrians. By holding numbers in your mind as they flash by, you could increase your intelligence. The tasks vary, but the idea is always the same: By playing these specific games, you indirectly train deeper mental abilities, and so improve every aspect of your life that depends on those abilities—all while having fun.
One product, BrainHQ from Posit Science, promises everything from “2x faster visual processing speed” and “10+ years in memory” to “more happy days,” “lower medical costs,” “reversal of age-related slowing,” and “more self-confidence.” Another, Cogmed, claims to have improved “attention in many with ADHD,” as well as “learning outcomes in reading and math [for] underperforming students.” Lumosity by Lumos Labs, perhaps the most pervasively marketed of them all, ran ads that included characters from the Pixar film Inside Out.
People are certainly buying the hype—and the games. According to one set of estimates, consumers spent $715 million on these games in 2013, and are set to spend $3.38 billion by 2020.
And they might be wasting their money, according to a team of seven psychologists led by Daniel Simons at the University of Illinois. The team, most of whom have worked on brain-training themselves but have not received money from the industry, spent two years reviewing every single scientific paper cited by leading brain-training companies in support of their products—374 in total.
Their review was published today, and it makes for stark reading. The studies, they concluded, suffer from a litany of important weaknesses, and provide little or no evidence that the games improve anything other than the specific tasks being trained. People get better at playing the games, but there are no convincing signs that those improvements transfer to general mental skills or to everyday life. “If you want to remember which drugs you have to take, or your schedule for the day, you’re better off training those instead,” says Simons.
“The review really leaves nothing out—and the evidence is unimpressive,” says Ulrich Mayr from the University of Oregon, who studies mental flexibility. “Seeing it so clearly is a service for the whole field.” Michael Kane from the University of North Carolina at Greensboro, who studies attention and memory, agrees. “It’s a tour de force,” he says. “It’s exceedingly fair, and a model of what a skeptical but open-minded evaluation of evidence should look like.” (Both Mayr and Kane recently signed a consensus statement from 70 psychologists and neuroscientists disputing the “frequently exaggerated and at times misleading” claims around brain-training games.)
Open-minded? Hardly, says Henry Mahncke, a neuroscientist and CEO of Posit Science, who accuses Simons’s team of being biased and inaccurate. “They twisted every one of those studies to fit their theories that cognitive training can’t work,” he says. “This is what happens when one paradigm topples another. It’s like [the authors] locked themselves in a cell for the last 100 years and talk about a style of psychology uncontaminated by neuroscience.” Brain-training, he asserts, isn’t a magic bullet, but does indeed generalize to important real-world tasks. To say otherwise is “completely wrong.”
Others in the field are more sanguine. “The evidence could be stronger,” says George Rebok from Johns Hopkins Bloomberg School of Public Health, who took part in one of the best brain-training studies around. “The review is very timely, and will help us raise the bar on the science of brain-training.” Similarly, Erica Perng, director of communications for Lumos Labs, sent a statement saying: “We strongly believe in the value of cognitive training. Our hope is that this debate enables more researchers to produce high-quality, replicable results that will move both the industry and scientific community forward.”
Brain-training certainly makes intuitive sense, which partly explains its charm. But it butts up against 100 years of psychological studies showing that practice only makes perfect in limited ways. “The things you train will improve, but they don’t generalize,” says Simons. Chess grandmasters can recall the position of every piece on a board, but aren’t better at remembering anything else, for example. Or, “if you search baggage scans for a knife, you don’t get better at spotting guns—or maybe not even other knives.”
But over the past few decades, researchers finally seemed to be bucking that trend, and seeing the “transfer effects” that had long eluded their peers. Their studies fueled the hope and hype of brain-training, and are frequently listed in support of it. For example, a website called Cognitive Training Data, created by the chief scientific officer of Posit Science, lists 132 such studies. Accompanying them is another consensus statement, signed by 133 people, asserting that a “substantial and growing body of evidence shows that certain cognitive training regimens can significantly improve cognitive function, including in ways that generalize to everyday life.”
That evidence is “inadequate,” say Simons and his colleagues. Many of the studies are too small to produce statistically reliable results. Others seem to have selectively reported and analyzed parts of their data. “It’s astonishing how poor some of the experimental designs have been, violating many of the most fundamental principles that we regularly teach to undergraduates in introductory courses,” says Kane.
For example, to show that brain-training works, you need a good control group—that is, you need to compare people who played the games against others who didn’t. More than that, the people who aren’t being trained need to do something that demands equal time and effort—otherwise, you couldn’t say whether the brain-training group was benefiting because of those specific games, or just because they were playing any game at all. Very few studies met these standards. Some had no control group at all. Some asked the control volunteers to just sit around passively. Some gave them tasks that weren’t really comparable, like watching educational DVDs.
And very few studies accounted for expectations. People who play brain-training games might reasonably expect to become smarter, so they might do better in later tests simply because they were more motivated or confident. These effects are a particular problem for psychological studies: unlike in medical trials, where placebo pills can be made indistinguishable from actual drugs, brain-training players always know what they’re doing. You can’t get rid of expectation effects, but you can at least measure and adjust for them—and no study did. “I don’t fault studies for not doing this perfectly,” says Simons, “but most didn’t come close.”
That’s unrealistic, counters Mahncke. “What they want is a control that is identical to the intervention and doesn’t improve cognitive function. That can’t be done,” he says. “You need to look at the picture that emerges from all these controls across these studies. These folks came to the forest and looked at every tree, but at no point did they step back and say: Wow, we have a forest full of trees.”
Simons argues that the size of the forest has been greatly exaggerated. For example, among the 132 papers cited by the Cognitive Training Data website, 21 reviewed or analyzed results from earlier studies without presenting anything new. In other cases, results from the same study had been spread across several papers, and were then treated as independent entities. “There’s a relatively small set of independent data sets behind the large numbers of papers that the industry likes to cite,” says Mayr. “This has always bugged the hell out of me.”
It wasn’t all bad. The team singled out a few studies for their quality, including the ACTIVE trial—a large, thorough, and meticulously planned study involving over 2,800 elderly people. “It met good practice, especially compared to everything else,” says Simons. The volunteers were randomly assigned to train their memory, reasoning, or processing speed, or to sit passively. Each of the three trained groups could act as a control for the others—they all made the same effort, but were working on very different skills.
But even ACTIVE didn’t find strong evidence for transfer effects. “You practice fast responses, you get faster on a keyboard. You practice memory, you get better performance in memory tests,” says Simons. “There was almost no crossover.” Nor were there clear signs that any group got better at everyday tests involving the same skills they had trained.
“I think the evidence [for transfer] is admittedly weak,” says Rebok, who worked on the trial. He notes that some of the volunteers became better at managing their medications, preparing meals, and carrying out other activities of daily life—but these improvements were all based on the volunteers’ own reports. “If people think they should improve, and are asked repeatedly about whether they have, they will be more likely to report improvements,” says Simons. What’s more, “the trial gave counseling sessions to help people apply what they were doing to the real world.”
Even if you take the claims of transfer effects at face value, it’s unclear if the benefits are big enough to be worth the time and money spent on these products. For example, brain-training proponents note that ACTIVE volunteers who trained their processing speed were half as likely to experience a car crash. That sounds incredible, but based on the absolute figures from the study, Simons’s team calculated that someone who did the training could expect one fewer crash every 200 years.
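That gap between the relative and absolute framings is easier to see with the arithmetic written out. The sketch below is purely illustrative: it assumes a hypothetical baseline of roughly one crash per 100 person-years, not any figure reported in the trial.

```python
# Illustrative sketch of relative vs. absolute risk reduction.
# The baseline rate is a hypothetical assumption, not a figure from the ACTIVE trial.

baseline_rate = 1 / 100      # assumed crashes per person per year
relative_risk = 0.5          # "half as likely to crash" (the relative framing)

trained_rate = baseline_rate * relative_risk
absolute_reduction = baseline_rate - trained_rate   # crashes avoided per person-year

years_per_crash_avoided = 1 / absolute_reduction
print(f"Crashes avoided per person-year: {absolute_reduction:.4f}")          # 0.0050
print(f"Years of driving per crash avoided: {years_per_crash_avoided:.0f}")  # ~200
```

Halving a rare event shaves only a sliver off the absolute numbers, which is why the same result can sound either dramatic or trivial depending on how it is framed.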
Mahncke thinks the criticism is absurd. “[The authors] are moral monsters for making that argument, and you can quote me on that,” he says. “This is a public health [issue]. Senior driving is a problem, which is important at a population level. A person in health sciences who argued that we shouldn’t reduce heart attacks because heart attacks are rare would be rightfully drummed out of the profession.”
To run with his analogy, a new heart disease drug would never be assessed in isolation; it would instead be compared to the best drugs on the market to see if it was any better. So, are brain-training games better than the alternatives? “If you want to improve driving in the elderly, you’re probably going to get a more efficient benefit practicing the things that actually get impaired, like making left turns, rather than doing it indirectly,” says Simons.
Dorothy Bishop, a neuroscientist at the University of Oxford, agrees. These games, she notes, “encourage people to engage in solitary activities at home, when they could be getting out and doing something that would not only stimulate the brain, but also be fun and sociable, such as learning a foreign language.”
Or, say, taking classes. “My joking answer to ‘Does brain training work?’ is: Yes, it’s called school,” says Elizabeth Stine-Morrow, an educational psychologist from the University of Illinois and one of Simons’s team of seven. But formal education, she notes, is a rich set of social and intellectual experiences in which kids are learning in many different contexts, not practicing the same decontextualized tasks again and again. “I don’t want people to get discouraged from reading this, and think that important abilities are unchangeable,” she adds. “It’s not impossible, just more complicated than brain-training companies would have us believe.”
She and her colleagues end their 70-page review with 12 pages of recommendations for conducting better brain-training studies in the future. The tips are fair, says Rebok, but the problem is that such studies are very expensive. “ACTIVE cost millions of dollars and took us 10 years to do. It’s the gold standard but there are problems with it,” he says. “If you’re going to raise the bar, how do you get there?”
That might be especially difficult given the controversy generated by the field’s own hype. Earlier this year, the Federal Trade Commission ruled that Lumos Labs had “deceived consumers with unfounded claims.” Through extensive advertising, the company claimed that its games would boost performance at work and school, or reduce the mental decline of old age, but it “simply did not have the science to back up its ads.” The company agreed to pay $2 million to settle the charges.
“Those who market brain-training products have effectively boxed themselves in,” says Bishop. “I sometimes get approached by such people because they know that without evidence from a proper trial, they won’t be taken seriously. I don’t want to spend my time doing a trial of an intervention I’m dubious about, but I’m also aware that if they do the trial themselves, they’ll be accused of conflict of interest. I suspect the moral here is that it is very dangerous to launch into commercialization of a product before you have solid evidence of effectiveness, because it’s extremely hard to get it later on.”