
Distorted visions 3

Brian Josephson's arguments are scientifically and statistically correct.

In contrast, the statement of Richard Wiseman - "I don't see how you could argue there's anything wrong with having to get five out of seven when (Natasha) agrees with the target in advance" - demonstrates a lack of understanding of how experimental data should be interpreted statistically.

The experiment is woefully inadequate. The chance of obtaining the observed four successes in seven subjects by pure guessing is 1 in 78. But suppose Natasha had a diagnosis rate of 1 in 2, compared with the chance rate of 1 in 7: then there is an equal chance of getting 4 or more out of 7, or 3 or fewer out of 7. That is, the probability of detecting a true 50 per cent diagnosis rate on 7 subjects using a 0.01 significance level is only 50 per cent. There should have been at least 21 subjects to ensure a 90 per cent probability of detecting a true diagnosis rate of 50 per cent (using a 0.01 significance level test). Only if Natasha had a true diagnosis rate as high as 72 per cent would there have been a 90 per cent chance of detecting the effect using a 0.01 test on 7 subjects.
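As a rough check of these figures, here is a minimal sketch (Python, standard library only) assuming the number of correct diagnoses is Binomial(n, p) with a chance rate of p = 1/7. The subject counts, significance level and candidate diagnosis rates come from the letter; the critical value of 8 correct for the 21-subject case is an illustrative computation, not a figure stated in the letter.

    from math import comb

    def binom_tail(k: int, n: int, p: float) -> float:
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    chance = 1 / 7  # chance rate under the null hypothesis (1 condition in 7)

    # Size of the "4 or more correct out of 7" criterion under pure guessing:
    print(binom_tail(4, 7, chance))   # ~0.01, i.e. roughly a 0.01-level test

    # Power of that criterion if the true diagnosis rate were 1 in 2:
    # by the symmetry of Binomial(7, 0.5), P(X >= 4) = P(X <= 3) = 0.5.
    print(binom_tail(4, 7, 0.5))      # 0.5

    # Power of the same 7-subject, 0.01-level test at a true rate of 72 per cent:
    print(binom_tail(4, 7, 0.72))     # ~0.9

    # With 21 subjects, a 0.01-level test (critical value: 8 or more correct)
    # reaches roughly 90 per cent power at a true rate of 50 per cent:
    print(binom_tail(8, 21, chance))  # ~0.006 (size of the test)
    print(binom_tail(8, 21, 0.5))     # ~0.9   (power at p = 0.5)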

The experiment had a high chance of failing to detect important effects, but this may simply reflect the fact that no statistician was involved in its design.

Keith Rennolls
Professor of applied statistics
University of Greenwich
