
Peer review – Close enough is better than nothing

Pseudoscientists don't like being told that they would have more credibility if their work was published in peer-reviewed journals. Often they will attack the peer review process itself and try to pretend that because it is not perfect it is not useful (the "Nirvana Fallacy"). Of course it is not perfect, because it is an invention and construction of humans, not gods, but it is still better than the alternative of being able to say, claim and publish anything at all. You can be a bit more confident when you read something in a peer-reviewed journal because you know that more than one person has read the paper before publication. Errors can still get through, and sometimes a flaw in the research is missed by the reviewers. I saw a case of this a few years back.

When I was studying perception we had a guest lecturer who told us about his latest research. He was very proud that his paper had passed all the checks and reviews and was about to be published in a prestigious journal. His finding was that the sense of smell diminishes with age, and that older people could not smell as well as young people could. The experimental method had been to expose people of various ages to the smell of broccoli and ask them to identify it. Older people were much less able to identify the smell, which he took as evidence that they could not smell as well.

Any questions? I was the first, and I said something like: "My grandfather owned fruit and vegetable shops, my uncle ran a wholesale vegetable distribution business in the largest farm produce market in Australia, my parents ran fruit and vegetable shops, and for the first five years of my life I lived over one of the shops. I never saw broccoli until I was about 15 years old. Is it possible that the older folk couldn't identify broccoli not because they couldn't smell it, but because it was a smell they had not experienced when young and therefore simply could not recognise?" He was not amused, as he hadn't thought of this.

Another example of not thinking the research out properly (although this one was caught before much time had been wasted) came from someone who was working on a Master's thesis based on the discriminatory hiring practices of the university. The problem seemed to be that while 55% of undergraduates were women, only 11% of the tenured staff were women. I pointed out that undergraduates were not the pool from which academic staff were hired. The proportion of women dropped only slightly for people doing Masters degrees, but only 10% of doctoral candidates were women. As all academic staff were expected to hold doctorates, it looked like the university was actually doing slightly better than its hiring pool would suggest. My suggestion was to research ways that women in their late twenties with young children could be encouraged and assisted to continue their education, which would be much better than any form of affirmative action.

Statistics are often a big issue in scientific papers, but you have to understand what the numbers say and mean. I have an excellent t-shirt with an explanation of Type 1 (false positive) and Type 2 (false negative) errors on it. (It has an arrow to Type 2 and says "This is the sort of error I make, where I am right but the data is wrong".) The biggest area of difficulty is understanding statistical significance.
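To see what a Type 1 error looks like in practice, here is a quick sketch in Python. It is my own illustration with invented numbers, not something from a real paper: it draws two samples from exactly the same population over and over, and counts how often a t-test still reports a "significant" difference at p < 0.05. By construction, that happens about 5% of the time.

    # Simulating Type 1 (false positive) errors: both groups are drawn
    # from the same population, so every "significant" result is an error.
    # All numbers are invented for the illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    trials = 10_000
    false_positives = 0
    for _ in range(trials):
        a = rng.normal(loc=500, scale=50, size=30)  # reaction times in ms
        b = rng.normal(loc=500, scale=50, size=30)  # same population as a
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            false_positives += 1

    print(f"Type 1 error rate: {false_positives / trials:.3f}")  # about 0.05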

I studied cognitive psychology, and in one experiment we achieved beautiful normal distributions of reaction times in the two groups. The problem was that the means were only a tiny fraction of a second apart, putting the two curves almost on top of each other: the difference was far too small, relative to the spread, to mean anything. I was asked to give a presentation about these results as a way of explaining significance, and after the talk I was approached by a lady who told me that she was studying mathematics and that I was a fool and mathematically illiterate for saying that two numbers were equal when they were not exactly so. No amount of saying that I had never said they were equal, only that the difference meant that no decision could be made, or pointing out that swapping the slowest subject in one group with the fastest in the other would reverse the difference, could make her see the point. She stormed off in a huff at my ignorance.
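If you want to see the shape of that problem, here is another small Python sketch, again with made-up numbers rather than the actual experiment's data. The two simulated groups differ in mean reaction time by a few milliseconds while the spread within each group is many times larger, so the t-test has essentially no power to tell them apart, and moving a single extreme subject between the groups shifts the difference by roughly the size of the whole gap.

    # Two simulated groups whose mean reaction times differ by a few
    # milliseconds while the spread within each group is far larger.
    # Invented numbers, not the data from the actual experiment.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    group_a = rng.normal(loc=500, scale=60, size=40)  # mean about 500 ms
    group_b = rng.normal(loc=503, scale=60, size=40)  # mean about 503 ms

    t, p = stats.ttest_ind(group_a, group_b)
    print(f"difference of means: {group_b.mean() - group_a.mean():+.1f} ms, p = {p:.2f}")

    # Swap the slowest subject in the slower group with the fastest subject
    # in the other group. One swap shifts each mean by several milliseconds,
    # on the order of the whole gap, so it can erase or reverse the "difference".
    slow, fast = (group_a, group_b) if group_a.mean() > group_b.mean() else (group_b, group_a)
    i, j = slow.argmax(), fast.argmin()
    slow[i], fast[j] = fast[j], slow[i]
    print(f"after the swap: {group_b.mean() - group_a.mean():+.1f} ms")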

Part of the function of peer review is to remove the need for readers to know everything there is to know about how research is done. We can assume that the reviewers know about statistics and the general principles of their disciplines. Like I said, it isn't perfect, but it's usually good enough.

This article was published as the Naked Skeptic column in the June 2012 edition of Australasian Science.

