May 4, 2017 — Recent studies show the majority of published research cannot successfully be replicated. This could potentially question the validity of tens of thousands of scientific studies. Hilda Bastian, chief editor of PubMed Commons, talks about what this means to the scientific field and how it could impact the general public.

Interview

Announcer: Examining the latest research and telling you about the latest breakthroughs, The Science and Research Show is on The Scope.

Interviewer: I'm talking with Hilda Bastian, the Chief Editor of PubMed Commons. Hilda, in recent years it's come to light that the majority of scientific research, it turns out, cannot be replicated. This calls into question the validity of thousands of research studies, maybe tens of thousands. And I think it was you who said that this term, "research reproducibility," is really a euphemism for all of science's problems. What do you mean by that?

Hilda: Well, when you look at the way that people try to define it, they struggle with putting it into different little boxes and breaking it down into parts. But when they start to talk about what needs to change to solve the problems they're identifying, they basically start to tackle every problem to do with science, from how it's done to how it's reported and how people deal with it afterwards. In effect, I think it's a euphemism for just about anything that can go wrong in science, because there are just so many things that can lead to science not being reliable.

Interviewer: Yeah. So there has been a lot of talk about the kinds of steps scientists can take before they start the research, maybe planning it very carefully and very well, and steps they can take during the research, you know, recording things properly and being very careful and thorough. Something that I thought was interesting is what people can do after publishing their research. I think most people think that once they publish their work, that's the end of the line. It's said and done. But you're looking at it a little bit differently. Can you talk about that a little bit?

Hilda: Yes. Getting published is a really quite important milestone in any kind of research project, but it isn't the end of it. Once something is published, that's really the point at which other people can try to engage with it. They can start to see whether they've got questions about it, whether there is enough information in the publication. They can start to see errors that nobody spotted or have questions about the validity of certain things that nobody could have spotted beforehand.

And as well, people do other research. I mean, things move on, and that can have consequences for things that were published before. Sometimes that can be your own work: you do the next project and the next project, and then you realize, gee, we were wrong back then, or we found this mistake in it, or we'd rather people didn't read that; we'd rather they paid attention to this other paper as well.

So there are all sorts of things that happen after a paper is published, and in fact there is often a lot more of that kind of engagement with other people after publication than there is before. Quite often, the only people who've had anything to do with a project beforehand could be the very small number of people who were actually writing it, and then perhaps a couple of peer reviewers and maybe an editor at the publication. The peer reviewers may have spent half an hour reading the draft article. So for an awful lot of projects, if they're going to be of value, publication is really just the start, not the end.

Interviewer: So yeah, I can definitely see how, once you open published research up to a lot of people with maybe a lot of different backgrounds or expertise, they can add their different perspectives to what you're looking at. Are there formalized ways of collecting those comments, for kind of starting that second wave of discussion?

Hilda: It's kind of patchy because some journals have very, very well-established systems for that, very lively online communities around their journal and established ways of getting letters to the editor to publish and reacting to what people say, whereas other journals don't accept any feedback at all once something is published, or they accept it for only a very short period of time. Now, we've got PubMed Commons, which is the project that I'm involved with, which enables people who are authors of scientific papers to comment on other scientific papers that are in PubMed, which is this enormous biomedical database. And there are other websites that do that sort of thing.

It is largely fragmented around the place, and people are engaging with research in lots of different ways. They're talking about it at conferences. They're talking about it at journal clubs, which are gatherings where people get together regularly to talk about research, kind of like a book club. They're writing blog posts, they're talking on Twitter, they're emailing authors. There is this vast amount of activity that can be going on.

Interviewer: But I think you would argue that this kind of post-publication reflection phase is really important, and as you said, this could mean sort of changing the culture of science. I mean, how do you begin doing something like that?

Hilda: Well, I think there are a lot of different ways that it has to happen, and it is really quite a big cultural challenge. Part of the first solution is for more of this to be done in the open. A lot of peer review, both before publication and afterwards, can happen behind curtains, if you like, and nobody sees it. Sometimes people are doing it by email, or people have made a public comment or criticism of a piece of work, but then the way that the authors and the journals deal with it is completely opaque. They don't respond, you hear nothing, and you've got no idea what happened behind the scenes.

And so part of the process, I think, that's going to be quite important is for more of this stuff to come out in the open. I think that's important for people learning how to do it, because they can see by other people's responses what actually is a useful, constructive way to go about these things and find out what works. But it's also important for there starting to be some kind of consequences. It's just too easy at the moment for people to ignore even really quite serious, profound criticisms of their work, which is problematic for that piece of work but also for any other work they're going to do, if they continue making the same mistake with their confidence not dented even the slightest bit by the fact that they're probably completely wrong in what they're doing.

So I think there is a range of things that has to happen, but those very things of asking people to be open and asking for more consequences are really quite profound. People find all those things around openness quite challenging, partly because they kind of hang onto ideas and thoughts to use later, perhaps in the background of a paper of their own. And there is a level where people actually have to take the time to go and contribute their thoughts about somebody else's work in a timely fashion.

Then you have the whole issue of editors and journals and different people, or even funding agencies. They all kind of finish something and move on to the next thing. They're not necessarily continuing to invest effort in the great big pile of things they're gathering up behind themselves. They're looking at the new articles coming out, not going back and revisiting the thousands, if not millions, of papers lying behind them.

Interviewer: You know, if scientists start taking these steps that we talked about, incorporating constructive criticism, having this time of reflection, how do you think that could improve science?

Hilda: Very clearly, for the person we're talking about who's doing the work, there's a benefit for whatever it is they do next. There is an important benefit if this process of criticism also involves people actually correcting errors when they see an error, or retracting a paper without as much fuss and bother as happens now, so that people are more willing to correct their mistakes. Then there is also a really big benefit to everybody else who might be using the work. Of course, when somebody publishes something, other people might go and spend the next two years of their lives trying to extend the work or the idea that they got from that publication.

Now, if the people who did that publication, or the journal, no longer believe that the publication is right and they don't let the broader community know, then there are going to be people wasting a colossal amount of time trying to do something that's never, ever going to work. That has enormous consequences for them, and it can have enormous consequences, obviously, for anything that gets used if it's clinical research. You can have patients and doctors making decisions based on information that's fundamentally flawed, and so it becomes absolutely essential that people know: look, don't touch this.

Announcer: Interesting, informative, and all in the name of better health. This is The Scope Health Sciences Radio.