Researcher Ramblings: Science as Its Biggest Stakeholder

Time for another Rambling Post. This time, the point I want to make simply takes a few sentences.

In our everyday struggles as researchers, we are caught up in being answerable to our supervisors, funding bodies, publishers and, in some cases, our immediate consumers. These are, of course, our most obvious stakeholders, with a clear impact on us, and vice versa. In reality, though, the biggest stakeholder science has is science itself. It's everyone who ever did research, every new researcher who will ever join, everyone who will read research and use it in their practice or product development.
To emphasize and illustrate this, I ask for your patience, and request you to read on.

First, I’d like to start off by getting you to imagine two situations.

Scenario one –
You are PhD Candidate 1. You have spent three years of your PhD learning complicated methodologies, working in a group of well-established researchers. You've finally reached the publication phase of your research. You want to be a good scientist and report everything. While preparing the results, you notice that a certain ad hoc analysis doesn't make sense. You wonder if you missed something in the main analysis. Maybe you exported incorrect data, missed a value somewhere, forgot a line in the script. Any of these mistakes would mean nothing you publish is trustworthy. Maybe you should re-run some statistics? Maybe you haven't actually found what you think you have?

You talk to your experienced colleague. He says, "Oh, forget about it! You don't need to report it. Your main findings are robust enough; just send it for publication already! If you start to redo everything, it'll take at least six more months." You try to argue that sending in results without a thorough analysis is not best practice. But then again, your funding ends in four months and you need to start your next project almost immediately after. Your colleague argues again: "Well, if you want to do this extra work, sure. But remember, someone is paying for you, someone is waiting for your results, grants are on hold because you need to finish publishing first. There are stakeholders here, whom you are answerable to."

Well… true, you think. There is a lot at stake here. I'm sure the results are fine. You proceed with the publication process and manage to get published.

Scenario two – 
You are Candidate 2. You have just started your PhD. You have your first meeting with your supervisor, all excited. Your supervisor thinks you will be great at this project. You have so much to learn and experience! He wants you to start by replicating a recent study, published in a high-impact journal, with very robust results. Once you can replicate it, he has a number of modifications in mind that will add considerably to the literature on this phenomenon. You read all the popular articles on the phenomenon and get on with designing the study. After a year of hard work, you get your data. It's not as robust as the original study.

You go to your supervisor, and he says, "That's not possible. Maybe you missed something." You go back to the literature. You look at other studies. Some have a cloudy methods section, some don't report everything, and some use methodologies you can't replicate. You do everything all over again, go to your supervisor with all the evidence, and say, "There must have been a mistake. I cannot replicate this." (A side note here: psychology is tougher to replicate than the harder sciences, but even in psychology, replication is key.) Your supervisor believes you. He writes to the original group and gets their original dataset. You run the same analysis on their dataset, and sure enough, the findings they reported were wrong. They made a mistake somewhere. The original team apologizes, the paper is retracted, and the person in charge is duly punished.

The day is saved. You've lost almost two years of your PhD on a phenomenon that was never established.

Where do these scenarios come from? For one, I see these kinds of scientific practices in the scientific community. I am happy to say that to date, I haven't seen this in my lab or my team. I've had the opportunity to work with very critical, thorough researchers. But not all new researchers have that luxury.

I recently attended a workshop on 'Good Scientific Practice'. Apart from learning a lot about our community's rules, we also learnt a lot about people who faked data, how they got exposed, and the number of retractions that take place in the scientific community each year! The biggest take-home message was that if someone consistently practices bad science, they will be caught (phew).

For more on this, check out Retraction Watch, an awesome blog that covers retracted papers, some of which make for great pastime reading! It's amazing how much bad science people get away with. Every retraction gives me hope. Hope that science has a self-eliminating mechanism, where bad science, eventually, at some point, gets caught and punished.

Some retractions take place 8–9 or even 30 years after publication. This means that the junior PhD who manipulated data has since risen to a professorship, maybe even been nominated for Head of Department (HOD). He may have finished his PhD top-notch, but now he loses the professorship, is withdrawn from the HOD nominations, and has to pay a hefty fine (plus interest) to the funding body that paid for his project.

This article is not about good scientific practice. This article is about the stakeholders of science. The most obvious ones we consider are, as Candidate 1 did, our boss, supervisor, funding bodies, and in some cases our consumers or the industry. But this is where Candidate 2 comes in, to show us that the biggest stakeholders of scientific research are other scientists. If you are where Candidate 1 was, you are answerable, right now, to all future scientists who will ever read your research. To all the scientists who will attempt to replicate it. To all those who will attempt to build on the phenomenon based on your findings. And to all those, not in your direct field, who will attempt to make products (including therapies, diagnostic tools and gadgets for future investigations) based on what you found.

Research is built on the shoulders of previous research. And if these shoulders are weak, some day the whole research body will come crashing down. It takes time, of course. That's probably why all research data (in its raw form) needs to be stored for a minimum of 10 years. You need to give science 10 years to verify your findings.

As PhD candidates, probably working in bigger research groups under financial, supervisory and publishing pressures, the end of our research is pretty much all we can see. An end with (hopefully) multiple publications in high-ranking journals, leading directly to a well-paid postdoc or industry position. We may hold the highest educational degree, but we are pretty much at the bottom of the academic ladder. Hence, we neglect the fact that our actions also have consequences.

This butterfly effect may sound exaggerated, but it is real. As present scientists, we should encourage future scientists. If our work is shabby, unsystematic, chaotic and unreplicable, we may not disappoint the immediate stakeholders (not severely, at least), but we would disappoint the scientific community and all future science.

To hear me ramble some more about what goes on in a researcher's brain, check out:
Researcher Like Thinking – How this profession has changed my thinking.

To know more about my life as a researcher, check out my previous posts πŸ™‚
The Storm – a special, overwhelming period in researchers’ life
The Wait – preceding every storm
“I had Fun” – Why this line is so special to cognitive scientists.
Monday Morning Blues – A researcher's Monday morning blues come on any day! 😉
50 Shades of Research – Everything you will have to do…and more!
I have the…Moves? – Why this blog?

4 Comments

  1. Well put, Divya! Just one question: what would you recommend to those who are in the position of Candidate 1? Perhaps I should go and re-analyse some (or all) of my data now… 🙂
    I definitely think there should be more transparency in scientific research. Wonder how this could be achieved.

  2. Haha, yes, I would recommend re-evaluating your data. If you've already found something fishy, re-evaluate all of it. If you are just scared that you may have done something wrong, you could pick some datasets at random (how many depends on the total data size) and check whether everything was done correctly. If you re-check a good number of them, you will find the mistakes, if there are any.

    Transparency is a topic for another post πŸ˜‰
