Evidence Based Physical Therapy: Five Steps of EBPT

Created by Health Science Librarians

Summary of the Five Steps

  1. Ask: Convert the need for information into an answerable question.
  2. Find: Track down the best evidence with which to answer that question.
  3. Appraise: Critically appraise that evidence for its validity and applicability.
  4. Apply: Integrate the critical appraisal with clinical expertise and with the patient's unique biology, values, and circumstances.
  5. Evaluate: Evaluate the effectiveness and efficiency in executing steps 1-4 and seek ways to improve them both for next time.

#1 Ask: Using the PICO Model

A question is considered well-built if it addresses the most pertinent parts of your information need and will lead to a focused answer for your clinical question.

PICO is an acronym for the four parts of a well-articulated clinical question:

P = Population or Problem - recipients or potential beneficiaries of a service or intervention, or the situation being examined

I = Intervention or exposure - the service or planned action to be delivered to the population

C = Comparison - an alternative service or action that may or may not achieve similar outcomes

O = Outcome - the ways in which the service or action can be measured to establish whether it has had a desired effect

#2 Find: Formulate the Search Strategy

Think about the keywords for each of the PICO parts of the clinical question.

Sample Question: Is prophylactic physical therapy for patients undergoing upper abdominal surgery effective in preventing post-operative pulmonary complications?

The PICO parts with keywords for this question would look like this:

| Parts of the Question | Clinical Scenario | Keywords |
| --- | --- | --- |
| Patient Population | patients undergoing upper abdominal surgery | upper abdominal surgery |
| Intervention | prophylactic physical therapy | prophylactic physical therapy |
| Comparison (if any) | no prophylactic physical therapy | none |
| Outcome | prevent post-operative pulmonary complications | prevent pulmonary complications |
| Type of Study | RCT | Randomized Controlled Trial |
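The mapping above can be sketched as a short script that assembles a Boolean search string from the PICO keywords. This is a minimal illustration only: the dictionary keys and function name are assumptions for this sketch, and real database syntax (field tags, truncation) varies by search platform.

```python
# Sketch: build a Boolean search string from the PICO keywords in the
# sample question above. Illustrative only, not a standard search tool.

pico = {
    "population": "upper abdominal surgery",
    "intervention": "prophylactic physical therapy",
    "outcome": "pulmonary complications",
    "study_type": "randomized controlled trial",
}

def build_query(parts):
    """Quote each non-empty PICO keyword phrase and join with AND."""
    terms = [f'"{phrase}"' for phrase in parts.values() if phrase]
    return " AND ".join(terms)

print(build_query(pico))
# "upper abdominal surgery" AND "prophylactic physical therapy"
#   AND "pulmonary complications" AND "randomized controlled trial"
```

Note that the comparison part ("none" in this example) contributes no keyword, which is why it is simply omitted from the query.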

#3 Appraise: Evidence Hierarchy and Evaluation Criteria

Hierarchy of evidence for clinical questions (Booth & Brice, 2004)

Always begin an EBPT search by looking for the highest level of evidence. If a meta-analysis is not available on the topic, look next for systematic reviews without statistical synthesis, then for randomized controlled trials, then for controlled comparison or case control studies, and so on down the hierarchy.

  • Meta-analyses: Methods of synthesizing the data from more than one study, in order to produce a summary statistic
  • Systematic Review: [tries] to answer a clear question by finding and describing all published, and if possible, unpublished work, on a topic. [It] uses explicit methods to perform a thorough literature search and critical appraisal of individual studies and uses appropriate statistical techniques to combine these valid studies (Booth & Brice, 2004).
  • Randomized Controlled Trial (RCT): also called a 'randomized clinical trial,' this design randomly assigns subjects to groups that are then given different interventions so that the effects of those interventions can be assessed.
  • Controlled Comparison or Case Control Study: an observational study that compares subjects who have the condition of interest (cases) with subjects who do not (controls).
  • Descriptive Surveys: studies aimed at describing certain attributes of a population, specifying associations between variables, or searching out hypotheses to be tested, but which are not primarily intended for establishing cause-and-effect relationships or actually testing hypotheses.
  • Case Studies: describe a particular service or event, often focusing on unusual aspects of the reported situation or on adverse occurrences; they commonly have exploratory, descriptive, or explanatory purposes.
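The "start at the top and work down" search order described above can be sketched as a simple ranked lookup. The level names and their ordering follow the list in this section; the function itself is an illustrative assumption, not part of any standard tool.

```python
# Evidence hierarchy from strongest to weakest, as listed in this guide.
EVIDENCE_HIERARCHY = [
    "meta-analysis",
    "systematic review",
    "randomized controlled trial",
    "case control study",
    "descriptive survey",
    "case study",
]

def best_available(found_types):
    """Return the highest-ranked study type present in found_types,
    or None if no recognized type was found."""
    for level in EVIDENCE_HIERARCHY:
        if level in found_types:
            return level
    return None

# If a search turned up only RCTs and case studies, start with the RCTs:
print(best_available({"case study", "randomized controlled trial"}))
# randomized controlled trial
```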

Evaluation Criteria:

  • Credibility (Internal Validity)
  • Transferability (External Validity)
  • Dependability (Reliability)
  • Confirmability (Objectivity)

Credibility: looks at truth and quality and asks, "Can you believe the results?"

Some questions you might ask are: Were patients randomized? Were patients analyzed in the groups to which they were (originally) randomized? Were patients in the treatment and control groups similar with respect to known prognostic factors?

Transferability: looks at external validity of the data and asks, "Can the results be transferred to other situations?"

Some questions you might ask are: Were patients in the treatment and control groups similar with respect to known prognostic factors? Was there a blind comparison with an independent gold standard? Were objective and unbiased outcome criteria used? Are the results of this study valid?

Dependability: looks at consistency of results and asks, "Would the results be similar if the study was repeated with the same subjects in a similar context?"

Some questions you might ask are: Aside from the experimental intervention, were the groups treated equally? Was follow-up complete? Was the sample of patients representative? Were the patients sufficiently homogeneous with respect to prognostic factors?

Confirmability: looks at neutrality and asks, "Was there an attempt to enhance objectivity by reducing research bias?"

Some questions you might ask are: Were the five key groups (patients, caregivers, collectors of outcome data, adjudicators of outcome, and data analysts) aware of group allocation? Was randomization concealed?

#4 Apply: Using Evidence in Clinical Practice

Guidelines for applying evidence in clinical practice can be found in the classic text:

Guyatt, G., Rennie, D., Meade, M., and Cook, D. (2008) Users' guides to the medical literature: a manual for evidence-based clinical practice (2nd ed.). New York, NY: McGraw-Hill Professional.

Chapters in this guide are organized by type of clinical question: therapy, harm, diagnosis, and prognosis.

Other good resources for both appraising and applying evidence in clinical practice are available online.

#5 Evaluate Your Performance as an EBPT Practitioner

Ask yourself:

  1. Did you ask an answerable clinical question?
  2. Did you find the best external evidence?
  3. Did you critically appraise the evidence and evaluate it for its validity and potential usefulness?
  4. Did you integrate critical appraisal of the best available external evidence from systematic research with individual clinical expertise in personal daily clinical practice?
  5. What were the outcomes of your application of the best evidence for your patient(s)?