Now that the body of literature has been compiled and the data have been extracted, it is time to assess the quality of the studies reviewed for relevance to the research question.
When reviewing study quality, assessing for bias is necessary to ensure the reliability of the findings. Questions to guide this assessment include the following (Boland, Cherry, & Dickson, 2017):
How generalizable are the results of the study to your target population?
Is the sample representative?
How were participants assigned to treatment groups?
Was the assignment predictable, or was it random?
Were participants and/or researchers aware of their treatment group assignment?
Were the studies single-blind or double-blind?
If participants are aware of their assignment, they may consciously or unconsciously alter their behavior to align with the study's goals.
Were the data analysts and those reviewing study outcomes aware of participant identities or treatment group assignments?
If so, this may bias the study results.
What proportion of participants stopped treatment prior to the end of the study?
Did participants drop out of the study, withdraw, or fail to meet inclusion criteria?
The more participants who drop out, the less generalizable the results become.
Were all outcomes reported on?
Did the authors fail to report outcomes that did not reach statistical significance?
Numerous tools are available to help researchers assess study quality. See the following resources for more information.
The following are standard tools used for quality assessment. Note that AMSTAR and PRISMA are standards for assessing systematic reviews, while MOOSE is the standard for assessing meta-analyses.