Memory tests carried out by Daryl Bem in 2010 ‘proved’ that we have the ability to predict the future by picking out certain words.
The Cornell University academic published a paper claiming that subjects shown a list of words were better at recalling the ones they would later be shown again – before they actually knew which words made up the second list.
The apparently rigorous experiment seemed to prove that participants could see into the future and ‘pick out’ the words they would soon come across.
The report was published in the Journal of Personality and Social Psychology, and sparked outrage among many psychologists, who could not believe the result.
But now other scientists have tried to replicate the results – and found nothing.
University of Hertfordshire psychologist Richard Wiseman and University of London psychologist Christopher French each conducted a replication of the experiment using a group of 50 people.
Explaining his scepticism about the original study, Wiseman told LiveScience: ‘It’s almost as if you study for an exam, you do the exam and then you study for it afterwards and then you get a better mark.
‘So you can see why we were kind of surprised by that.’
‘We found nothing. It might just be because the [original] statistics were a fluke.
‘You’re going to get some false positives sometimes.’
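Wiseman’s point about flukes can be illustrated with a short simulation. The sketch below – a hypothetical illustration, not the researchers’ actual analysis – runs many ‘experiments’ in which participants have no precognitive ability at all (every answer is a coin flip), yet some experiments still come out ‘statistically significant’ purely by chance:

```python
import math
import random

random.seed(1)

def binom_p_value(successes, n, p=0.5):
    # One-sided p-value: the probability of seeing at least this many
    # successes by chance alone, under the null hypothesis of no ability.
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

def run_null_experiment(n_trials=48):
    # A participant with no precognition: each 'hit' is a fair coin flip.
    # (48 trials echoes the 48-word list; the number is illustrative.)
    hits = sum(random.random() < 0.5 for _ in range(n_trials))
    return binom_p_value(hits, n_trials)

experiments = 1000
false_positives = sum(run_null_experiment() < 0.05 for _ in range(experiments))
print(f"'Significant' null experiments: {false_positives} / {experiments}")
```

Even with nothing to find, a handful of the thousand simulated experiments cross the conventional 5 per cent significance threshold – exactly the kind of false positive Wiseman describes.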
In the original study, Bem’s participants saw a list of 48 words flashed onto a computer screen, and were then treated to a surprise memory test in which they were asked to type in as many of the words as they remembered.
Then a random sample of 24 of the previous 48 words was presented again, and the participants did some practice exercises with these words.
Analysing the results, Bem found that the students were more likely to recall the words that were about to appear on the later exercise list, implying they could predict the future.
Bem published his research and methods, and encouraged other testers to try to replicate his results. The computer program used in the experiment was placed online.
Following the follow-up studies, Bem said more research was needed, and suggested the experimenters were sceptical, which may have unwittingly influenced the results – although the computer-based study is designed to alleviate this effect.
Bem said: ‘This does not mean that the results are unverifiable by independent investigators, but that we must begin regarding the experimenter as a variable in the experiments that should be included in the research designs.’
Wiseman said the follow-up research did offer some insight, highlighting problems with how experiments are currently reported.
The journal decided to publish the follow-up studies, and Wiseman said: ‘There’s a real problem with finding shocking findings and then not being interested in publishing replications.
‘It’s kicked up a huge debate about how scientists do work and how journals publish that work, and I think that’s very valuable in itself – even if I’m not that confident that these findings are real.’