
In praise of fishing trips

The tyranny of 'the hypothesis' has made science too timid, says Tim Birkhead

In 1996, John Horgan produced a book entitled The End of Science, whose main thesis was that scientists have been so successful in answering all the major questions that there is nothing new left to discover. Eight years on, it is clear that Horgan was wrong - there is no sign that science is coming to an end. Or is there?

The bedrock of the scientific enterprise is the "hypothesis", and the clumsily named hypothetico-deductive method (h-d method) is how we make progress. Hypothesis testing is what distinguishes good science from pseudoscience.

It was the philosopher Karl Popper who introduced the idea of falsifiability as a way of testing hypotheses: a rigorous test of a hypothesis, he said, is to devise a way of trying to falsify it. If you cannot, then the hypothesis stands - for the time being at least.

In my own field of research, behavioural ecology, hypothesis testing became explicit only in the mid-1970s, when a paradigm shift created a body of new theory for us to test - explicitly using the h-d method. In other areas of research the h-d method was unknown, and when I first came to Sheffield a biologist colleague (now deceased) told me that the h-d method was nonsense. His idea of doing research was to "first find an effect, and then work backwards to figure out what is going on" - not "first find an effect and then generate hypotheses, predictions and tests to figure out what is going on". His was a post-hoc kind of approach.

Since then we seem to have been completely seduced by "the hypothesis" as the only way forward. In some ways the h-d method has become a researcher's mantra, almost a religious belief. The truth is somewhat different. The h-d method is fine for certain types of problem, but it is spectacularly useless at generating new ideas or information.

The scientific research councils seem to be obsessed by hypothesis testing. Many times I have heard it said by referees rejecting a proposal: "But there was no hypothesis." The research council modus operandi now seems to be the h-d method, but taken to an almost ludicrous extreme - researchers basically have to know what they are going to find before putting in a research application (in truth, they need to have done the research, even though they will not actually say so, to write a convincing account of what they intend to do). That's hardly the best way of doing science. The other problem, and it is a much more serious one, is that by knowing what you will find out, science becomes trivially confirmatory and inherently unlikely to discover anything truly novel.

Most researchers are driven by curiosity, and often (not always) would prefer to say, "I'm interested in this; I'm just going to have a look at X or Y, and see what I find out." In fact, this is the initial part of doing science, for you cannot formulate a hypothesis without the initial observations. Exploratory science is not inherently bad science and good researchers will still be productive.

Of course, the research councils say they encourage blue-skies research, but the general feeling is that such research is blue skies only as long as you know the answer. The entire funding strategy by the research councils is risk averse, and even though individual referees may favour some risky science, by using relatively large numbers of referees and requiring a consensus, the usual outcome is that projects deemed to be exploratory (often disparagingly referred to as "fishing trips") are dumped in favour of something safer.

It must be deeply frustrating for really novel thinkers to see their ideas bounced in favour of mediocre projects. Luckily, there are a few funding bodies less wedded to the hypothesis as the only way of making progress - presumably because they are charities rather than government bodies, and can afford to be more liberated.

As Donald Braben has pointed out in his excellent book Scientific Freedom (2008), there appears to have been a spectacular drop in the number of UK Nobel prizes since the 1970s when we first started to adopt a hard-nosed Thatcherite hypothetico-deductive approach to funding science. The bottom line is that the h-d method is unlikely to create new thinking, or what Braben calls transformative science: science that transforms the way we think. Few of the really great thinkers of the 19th and 20th centuries - and that includes Darwin, Einstein, Planck and many others - would have secured funding under the current arrangements. Doesn't that tell us something, Braben asks?

Unless we break out of the hypothetico-deductive stranglehold and shrug off the lumbering bureaucracy associated with the acquisition of research funding, and give ourselves a bit more air to breathe and a bit more freedom of thought, we will indeed see the end of science.

Readers' comments (1)

  • Too true, unfortunately. Also in Australia. The least complicated solution would be for funding bodies, and scientists themselves, to agree to adopt the h-d framework merely as a logical test for scientificity (does such a word exist?) and cease treating it as a normative prescription for science. The steps by which a scientist reaches a conclusion can, both in logic and creativity, be any steps in any order. What makes a discovery scientific as opposed to non-scientific is that those same steps, side-tracks and failures aside, can afterward be re-written into the h-d format.
    An experiment surely is called for. Allocate a proportion of total funding to a new 'fishing-trip' grant screening process and evaluate the outcomes of fishing-trip vs. normal-grant science after, say, five years.

