In his book A Short History of Nearly Everything, Bill Bryson tells the harrowing story of Guillaume le Gentil:
Le Gentil set off from France a year ahead of time to observe the transit [of Venus] from India, but various setbacks left him still at sea on the day of the transit – just about the worst place to be since steady measurements were impossible on pitching ships.
Undaunted, le Gentil continued on to India to await the next transit in 1769. With eight years to prepare, he erected a first-rate viewing station, tested and retested his instruments, and had everything in a state of perfect readiness. On the morning of the second transit, June 4, 1769, he awoke to a fine day, but just as Venus began its pass, a cloud slid in front of the Sun and remained there for almost exactly the duration of the transit: three hours, fourteen minutes, and seven seconds.
Stoically, le Gentil packed up his instruments and set off for the nearest port, but en route he contracted dysentery and was laid up for nearly a year. Still weakened, he finally made it onto a ship. It was nearly wrecked in a hurricane off the African coast. When at last he reached home, eleven and a half years after setting off, and having achieved nothing, he discovered that his relatives had had him declared dead in his absence and had enthusiastically plundered his estate.
This is what we’ve chosen as researchers: a life of failure and rejection. In all seriousness, though, research is a process of trial and error almost by definition. It is hard. We do not always succeed. And that’s ok.
If our hypotheses were always spot on, if our procedures always worked exactly as expected – if life were really that predictable – there wouldn’t be much point in conducting research at all. Thankfully for those of us who love research, there are still plenty of things that we don’t know and don’t understand that require investigation. That said, the arduous process of developing new knowledge is replete with surprises and setbacks.
Not knowing any more about le Gentil or his story than what’s written above, I would still question the assertion that he “achieved nothing.” He may not have achieved what he set out to, but that should not by default mean that the entire adventure was without merit. I suspect that the experience of spending eight years in a foreign culture mastering his instruments must have had some unanticipated (and perhaps undocumented) benefits. In my own case, arriving in the field only to find my methods unsuitable was a fortuitous fork in the road. A nightmare at the time, this wholly unexpected scenario presented an opportunity to change tack and experiment with visual methods. Five years later, visual methodology is at the core of my research agenda. It hasn’t been an easy journey, but it has certainly been an interesting one.
In some ways, I have also been incredibly lucky. Although my PhD fieldwork did not go at all according to plan, my essentially made-up method worked well enough that I was able to return home with sufficient data to successfully complete my thesis on schedule. Not everyone is so lucky. And I’m not so lucky all of the time. Sometimes despite doing everything right, our research still goes awry. A cloud passes in front of the sun. What then?
I don’t know when or how it started, but a culture has developed in academia that rewards ‘success!!’ at the expense of knowledge and understanding. We are under enormous pressure to get it right. Some, though not all, of this pressure comes from the need to publish (‘as much as possible!!’). Journals accept papers that present significant (i.e. positive) findings. Professor Keith Laws recently observed that:
This publication bias is pervasive and systemic, afflicting researchers, reviewers and editors – all of whom seem symbiotically wed to journals pursuing greater impact from ever more glamorous or curious findings.
He goes on to say that the solution is not the creation of special journals that publish negative or null findings. (An idea I’ve personally heard discussed on more than one occasion.) Instead, Laws argues that we need to make room for these “unloved” findings in mainstream journals. True, this depends in part on the cooperation of reviewers and editors. It also depends on us; we supply the content.
About a year ago, I submitted a manuscript to a top methodology journal. The article details three attempts at photographic data collection, two of which were only moderately successful. The third attempt was undertaken in conditions that were far from ideal and was largely unsuccessful as a result. One reviewer picked up on this, questioning why I chose to proceed with the research. I responded truthfully that that’s the nature of my work. The article is now in press.
In a previous post, Monica voiced concern that university metrics encourage the mass-production of ‘plywood’ rather than oak- or mahogany-quality research. The expectation that our research will churn out positive results (within a 2–3 year timeframe) compounds the problem and changes the very nature of the endeavor. Sometimes your procedure won’t go to plan. Sometimes your results won’t be what you expected. Sometimes a cloud passes in front of the sun at exactly the wrong moment. That’s the harsh reality of research. And it’s ok.
(And if we’re bold, we can even get it published: warts, failings and all.)