Episode Transcript
Interviewer: Systematic reviews or meta-analyses may be trickier to carry out than you think. But there's help out there. We'll talk about that next on The Scope.
Announcer: Examining the latest research and telling you about the latest breakthroughs. "The Science and Research Show" is on The Scope.
Interviewer: I'm talking with Melissa Rethlefsen and Mellanye Lackey from the Eccles Health Sciences Library at the University of Utah. Melissa and Mellanye, you're putting a lot of effort into improving the quality of systematic reviews, and that's a very particular type of research. First of all, can you tell me what systematic reviews are?
Melissa: A systematic review uses a pre-specified methodology to look at the literature and answer a very focused question, usually about patient care.
Interviewer: So can you give a specific example?
Melissa: So for example, usually it's a PICOT type of question, with a patient, intervention, comparison, and outcome that a researcher might be looking to answer. For example, it might be looking at teenagers who are depressed and whether or not SSRIs would be better than placebo or a different drug class in preventing the onset of further depression or suicide or some other outcome of interest to the researcher.
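To make the shape of that kind of question concrete, here is a minimal sketch in Python of the PICO(T) elements from the example above. The class and field names are illustrative only, not a standard schema from the interview.

from dataclasses import dataclass

@dataclass
class PicotQuestion:
    population: str        # P: who is being studied
    intervention: str      # I: the treatment or exposure of interest
    comparison: str        # C: what the intervention is measured against
    outcome: str           # O: what is measured
    timeframe: str = ""    # T: optional follow-up period

# The depression example from the interview, structured as PICO(T) elements.
question = PicotQuestion(
    population="teenagers with depression",
    intervention="SSRIs",
    comparison="placebo or a different drug class",
    outcome="onset of further depression or suicide",
)
print(question)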
Interviewer: So it's really important that these studies are set up correctly. How can not taking some of those steps lead to issues in research reproducibility?
Melissa: If you do not have a librarian, or someone who's extremely familiar with literature searching methods, involved, then what you can produce is a systematic review that may answer the question but, because it was not well documented, can't be replicated. People can go in and look at the systematic review and think that they're getting an unbiased answer. But then, when you look at the details of the study, you realize that you have no idea how the study was actually performed.
It's very similar to a clinical trial that's not well reported. You might have no idea whether the results they're getting are actually true, because you can't tell enough detail from the methods to ascertain that for yourself.
Interviewer: Really, it starts with the very question that they decide to ask in the first place. What are some of the issues that come about in that arena and how can you address those?
Mellanye: Sure. One of the things we do as librarians is help make sure that people are asking questions that can be answered by the literature, questions that aren't too big or too narrow. Once they have a question that there's adequate literature to support studying, we can do initial searches in the databases to identify gaps in the literature, to see if that study has already been done, or if it needs to be updated because it hasn't been done in a while. If it has been done, we can find out whether it was done accurately and done well, and if not, whether they need to change their question slightly so as to carve out a unique area of research for themselves. We can help identify the body of literature that addresses their question.
Interviewer: And how you do those studies, of course, is important too.
Mellanye: Absolutely. And so we as librarians encourage people to register their study in a protocol registry, and this helps ensure that they've thought about, as a research team, whether they have the capacity to do the whole study. It makes them think about every single step of what they will do in their study, outline it, and submit that to an international body that is open for anyone to read. It just helps the researchers think through the entire process before they actually get started.
Interviewer: So how does it do that? Does it prompt them with certain questions, or . . .
Mellanye: Definitely. It's a lengthy, not too lengthy, adequately lengthy set of questions that asks them to describe previous studies in their field, describe their search strategy and their approach, and note anything that makes their contribution unique or any limitations their study will have. Then it puts it out onto the Internet, into the field, for their peers to look at and comment on. It also secures their place. It lets them set a flag that says, "Yes, we are doing this study," helps them find out if anyone else has already started on that study, and prevents them from being scooped.
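As an illustration only, the kind of information such a registry prompts a team to think through in advance might be sketched as a simple record like the one below. The field names are hypothetical and do not reproduce the actual registration form.

# Hypothetical sketch of a protocol registration record; field names are
# illustrative, based on the items described in the interview.
protocol_record = {
    "review_question": "Are SSRIs better than placebo for preventing further depression in teenagers?",
    "background": "Summary of previous studies in the field",
    "search_strategy": "Databases to be searched and draft search strings",
    "eligibility_criteria": {
        "include": ["randomized controlled trials in adolescents"],
        "exclude": ["case reports", "editorials"],
    },
    "outcomes": ["onset of further depression", "suicide"],
    "unique_contribution": "What distinguishes this review from earlier ones",
    "limitations": "Anticipated limitations of the review",
}

# Once registered, the record is publicly visible, which flags the study
# as in progress and invites comment from peers.
for field, value in protocol_record.items():
    print(f"{field}: {value}")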
Interviewer: What is that registry called?
Mellanye: PROSPERO. P-R-O-S-P-E-R-O.
Interviewer: And do you feel like people are using that or is this something that's kind of new?
Melissa: I think people are using it. I think it is somewhat new. People don't have to publish a protocol in PROSPERO; a lot of times, people will also publish their protocols in a journal. And there is a specific type of systematic review called the Cochrane systematic review. In order to do a Cochrane systematic review, you have to publish your protocol in the Cochrane Database of Systematic Reviews prior to the publication of the full systematic review.
So I think it is something that's been around for a long time. I think what we often find, though, is that the lower quality systematic reviews out there aren't doing that. We're trying to help elevate the quality of systematic reviews, much in the same way that clinical trials are trying to do by pre-registering their trial protocols: making sure that all of the inclusion and exclusion criteria are out there from the beginning, and that they know the outcomes they're going to be looking for, so that people can't go back later and switch the outcomes, or make up new outcomes, or decide, because they're an expert, that they want to include a certain study that doesn't actually meet their eligibility criteria.
So it's a quality measure, and people are definitely accepting of it. Registering your protocol is actually part of the PRISMA guidelines, which is a reporting guideline for systematic reviews. It stands for Preferred Reporting Items for Systematic Reviews and Meta-Analyses, and it was published in 2009, I believe. Since then, the rate of protocol registration has gone up quite a bit. It is still not great, but it has gone up significantly.
Interviewer: And you just touched on reporting guidelines. Tell me a little bit more about that and what that involves and why it's useful.
Melissa: Sure. Well, there are hundreds of reporting guidelines out there these days, but for systematic reviews, there are two that are really well disseminated and used: PRISMA, which I already mentioned, and the other one is MOOSE.
Interviewer: Oh. So many acronyms.
Melissa: Meta-analysis Of Observational Studies in Epidemiology. But reporting guidelines are really there to guide a researcher through the process of what things are really key to the reporting process. When they're actually writing that final journal article, what has to be in there so that the study can be reproduced and understood by the reader? And for systematic reviews, research shows time and time again that people are just not reporting their systematic reviews in a way that is actually reproducible. This is one of the ways that I think librarians are really key, because we can really help increase the reproducibility of this specific type of methodology.
Interviewer: Well, Mellanye, we've talked quite a bit about another common pitfall, which is how to search through data.
Mellanye: [inaudible] has 25 million citations in it. And you don't want to have a search strategy that's so large you get lots of irrelevant results and bring up a lot of static. That can be a real burden on the research team, having to go through thousands of extra results. But you also don't want to write the search strategy in such a way that it misses very relevant results. So as librarians, we can work on the search strategies to make sure we get exactly the right amount: not too many, not too few, just right.
Interviewer: So give me an example of a search that might give you the wrong . . . maybe not the wrong information, but not enough, or too much. Either one.
Mellanye: Sure. One of the searches that I have worked on before, the research team did their search strategy with just the term "developing countries," and they missed a lot of highly relevant research. When they did their search they got about 17,000 results. When I added my search strategy in, including country names and field tags that direct the database to search for specific words only in certain fields, it's kind of technical, but it really improved the results that came back. It brought back about 3,500 results, many of which were actually relevant and would have been missed by that group.
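As a rough sketch of what that kind of refinement looks like in practice, the snippet below compares a broad search string against one that adds explicit country names and field tags, using the NCBI E-utilities interface to PubMed. The transcript does not name the database or the exact strategy, so the choice of PubMed, the terms, and the tags here are assumptions for illustration only.

import requests

# NCBI E-utilities "esearch" endpoint, which returns a match count for a query.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Broad strategy: a single phrase, searched across all fields. It can return
# many irrelevant hits while still missing studies that name individual countries.
broad = '"developing countries"'

# Narrower strategy (illustrative): add explicit country names (only a few shown)
# and field tags so free-text terms are matched only in the title or abstract.
targeted = (
    '"developing countries"[MeSH Terms] OR '
    '"developing country"[Title/Abstract] OR '
    'Bangladesh[Title/Abstract] OR Kenya[Title/Abstract] OR Nepal[Title/Abstract]'
)

def result_count(term: str) -> int:
    """Ask PubMed how many citations match a given search string."""
    resp = requests.get(
        ESEARCH, params={"db": "pubmed", "term": term, "retmode": "json"}
    )
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

for label, term in [("broad", broad), ("targeted", targeted)]:
    print(label, result_count(term))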
Interviewer: You know, we're kind of going through this from the beginning to the end. What sort of end step can you help with?
Melissa: I think the end step, really, is the production of that final manuscript. Here in our systematic reviews core team, we do require authorship on manuscripts, and that's so that we can actually control how our literature searches are being reported. Because if you're not an expert in that area, you might not know what things actually need to be reported in order to make a literature search reproducible. So that really is our final step.
Announcer: Discover how the research of today will affect you tomorrow. The Science and Research Show is on The Scope.