3) Appraise -- Six Levels of Evidence (from Booth & Brice, 2004)
(from most valuable to least valuable)
- Meta-analyses: methods of synthesizing data from more than one study in order to produce a single summary statistic
- Systematic Review: [tries] to answer a clear question by finding and describing all published, and if possible, unpublished work, on a topic. [It] uses explicit methods to perform a thorough literature search and critical appraisal of individual studies and uses appropriate statistical techniques to combine these valid studies (Booth & Brice, 2004).
- Randomized Controlled Trials (RCTs): also called 'randomized clinical trials,' these involve the random assignment of subjects to groups that then receive different interventions so that the effects of the interventions can be assessed.
- Controlled Comparison or Case-Control Study: an observational study that compares subjects who have the issue of interest (cases) with subjects who do not (controls)
- Descriptive Surveys: studies aimed at describing certain attributes of a population, specifying associations between variables, or searching out hypotheses to be tested, but which are not primarily intended for establishing cause-and-effect relationships or actually testing hypotheses.
- Case Studies: describe a particular service or event, often focusing on unusual aspects of the reported situation or adverse occurrences, and commonly have exploratory, descriptive, or explanatory purposes.
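To make the "summary statistic" at the top of the hierarchy concrete, here is a minimal sketch of fixed-effect inverse-variance pooling, one common way a meta-analysis combines several studies into one estimate. The study numbers below are invented purely for illustration, not taken from any real review.

```python
def pooled_effect(effects, std_errors):
    """Combine per-study effect sizes into one weighted summary.

    Each study is weighted by the inverse of its variance, so larger
    (more precise) studies contribute more to the pooled estimate.
    """
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies: effect sizes and their standard errors.
effects = [0.30, 0.10, 0.25]
std_errors = [0.10, 0.05, 0.08]

est, se = pooled_effect(effects, std_errors)
print(f"pooled effect = {est:.3f}, SE = {se:.3f}")
```

Note how the second (most precise) study pulls the pooled estimate toward its own smaller effect, which is exactly why a meta-analysis can outrank any single trial: it borrows precision across studies.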
Evaluation criteria are:
- Credibility (Internal Validity)
- Transferability (External Validity)
- Dependability (Reliability)
- Confirmability (Objectivity)
Credibility: looks at truth and quality and asks, "Can you believe the results?"
Some questions you might ask are: Were patients randomized? Were patients analyzed in the groups to which they were (originally) randomized? Were patients in the treatment and control groups similar with respect to known prognostic factors?
Transferability: looks at external validity of the data and asks, "Can the results be transferred to other situations?"
Some questions you might ask are: Were patients in the treatment and control groups similar with respect to known prognostic factors? Was there a blind comparison with an independent gold standard? Were objective and unbiased outcome criteria used? Are the results of this study valid?
Dependability: looks at consistency of results and asks, "Would the results be similar if the study were repeated with the same subjects in a similar context?"
Some questions you might ask are: Aside from the experimental intervention, were the groups treated equally? Was follow-up complete? Was the sample of patients representative? Were the patients sufficiently homogeneous with respect to prognostic factors?
Confirmability: looks at neutrality and asks, "Was there an attempt to enhance objectivity by reducing research bias?"
Some questions you might ask are: Were the five important groups (patients, caregivers, collectors of outcome data, adjudicators of outcome, data analysts) aware of group allocations? Was randomization concealed?