What is Evidence-Based Practice?
Implementation of evidence-based practice
Why does OTseeker contain only systematic reviews
and randomised controlled trials?
What is a systematic review?
What is a randomised controlled trial?
Are the findings of this trial likely to be valid?
§ Random allocation
§ Concealment of allocation
§ Blinding
§ Drop-outs
§ Intention to treat analysis
Is the therapy clinically useful?
§ Size of the therapy's effect
§ Dichotomous outcomes
§ Confidence intervals
What is Evidence-Based Practice?
Evidence-based practice (EBP) encourages the integration of high-quality quantitative
and qualitative research with the clinician's clinical expertise and the
client's background, preferences and values. It involves the client in
making informed decisions and should build on, not replace, clinical judgement
and experience. For more on this read:
Sackett, D., Rosenberg, W., Gray, M., Haynes, R., & Richardson, S.
(1996). Evidence based medicine: what it is and what it isn't. BMJ, 312, 71-72.
The following steps can be undertaken by clinicians
to optimise the use of research evidence
to inform practice.
1. Identify clinical questions relevant to
your information needs
Evidence-based practice is a problem-based process that arises from information
needs occurring during the occupational therapy treatment process. For example:
A. Is relaxation training effective in reducing anxiety in adolescents receiving chemotherapy?
B. What are the main concerns for adolescents receiving chemotherapy?
2. Search the literature to locate research
relevant to the type of clinical question
Clinicians aim to locate research evidence
that is the most appropriate for answering
particular clinical questions. This may be
either quantitative or
qualitative research depending on the type of question being asked. For example,
question A is a question about treatment effectiveness and may be informed
by systematic reviews or randomised controlled trials if available. Question
B, however, would best be informed by qualitative research because it is about
the client’s concerns or feelings.
3. Critically appraise the research to determine
how valid or believable it is, and to decide
if the results are clinically important.
Not all research uses rigorous methodology and conclusions may be affected
by bias or confounding. It is also important to look at the results to see
if they are clinically significant, not just statistically significant. A detailed
discussion about this is provided in the rest of the tutorial.
4. Consider how the information from the research
can be applied in the clinical setting or with individual clients.
Occupational therapists need to determine whether the evidence 'fits' with
the features of the client's context (person, occupation, and environment).
Consideration must also be given to the practice setting, clinical expertise,
and resources available to the therapist.
To conclude, research is just one resource
to draw on when making clinical decisions.
Ultimately clinical reasoning and expertise,
and considered dialogue with our clients remain
at the heart of occupational therapy practice.
Implementation of evidence-based practice
In busy clinical settings, implementing EBP may be difficult. There are many
potential barriers including lack of time, lack of access to literature, and
lack of skills in finding and interpreting research. Some of the strategies
that have been suggested for supporting evidence-based practice in the workplace
include:
§ Fostering a workplace environment that supports evidence-based practice;
§ Providing continuing education to develop skills in literature searching,
critical appraisal, and research methods;
§ Collaborating or participating in research evaluating occupational therapy interventions;
§ Participating in or establishing a journal club;
§ Focusing on reading research articles that have a rigorous study design,
or reviews that have been critically appraised;
§ Seeking out evidence-based clinical guidelines.
OTseeker is a resource that has been designed with the principal aim of increasing
access to research to support clinical decisions. It contains abstracts of
systematic reviews and randomised controlled trials relevant to occupational
therapy. Trials have been critically appraised and rated to assist occupational
therapists in evaluating their validity and interpretability. These ratings
help determine the quality and usefulness of trials for informing clinical
decisions.
Why does OTseeker contain only systematic
reviews and randomised controlled trials?
Does client education enhance the knowledge
and quality of life of people who have rheumatoid
arthritis? Does cognitive rehabilitation
improve the cognitive
status of people with schizophrenia? How effective is neuro-developmental treatment
for children with cerebral palsy? A common information need that occurs in
clinical practice is the effectiveness of
interventions. Systematic reviews and randomised
controlled trials have the capacity to provide strong evidence about the effectiveness
or ineffectiveness of interventions. The methods that they use mean that their
conclusions are usually more reliable and accurate than many other methodologies
for establishing treatment effectiveness (Cook, Guyatt, Laupacis, Sackett & Goldberg, 1995).
At present OTseeker only contains abstracts of systematic reviews and randomised
controlled trials relevant to occupational therapy. It is acknowledged that
therapists are concerned with many issues other than treatment effectiveness, and it
is hoped that OTseeker will expand in the future to meet some of these needs.
For example, qualitative research may be added to OTseeker in future, to guide
occupational therapists' questions about the experience and concerns of clients.
What is a systematic review?
Systematic reviews use rigorous methods to locate, assess, and summarise the
results of many individual studies in a way that limits bias. They outline
what is known or unknown about the effectiveness of a treatment. Systematic
reviews may be qualitative or quantitative. In many cases the review summarises
primary studies, but does not statistically combine the results. This is
sometimes called a qualitative systematic review (not to be confused with
qualitative research). A quantitative review statistically combines results
of a number of primary studies, and is sometimes referred to as a meta-analysis.
Systematic reviews use explicit methods to limit bias in identifying and
rejecting studies and therefore their conclusions are usually more reliable
and accurate than a narrative or literature review. Literature reviews
provide a useful introduction and overview of a topic but are not as valuable
as systematic reviews for providing current evidence about the effectiveness
of interventions (Cook, Mulrow & Haynes, 1998).
If you want to read further about systematic
reviews you could try: Greenhalgh, T. (1997).
How to read a paper: Papers that summarise
other papers (systematic reviews and meta-analyses).
BMJ, 315, 672-675.
What is a randomised controlled trial?
This is a study in which a group of clients is randomly allocated to either
an experimental group or a control group. These groups are followed up for
the variables / outcomes of interest. Randomised controlled trials carry less
risk of bias than other study designs, and so offer more certainty that the outcomes
being measured are actually due to the experimental treatment condition, rather
than other factors (Fletcher, 2002). More detailed information on randomised
controlled trials is provided later in this tutorial.
Are the findings of this trial likely
to be valid?
The next part of this tutorial is designed
to help readers of clinical trials differentiate
those trials which are likely to be valid
from those that might
not be. It also looks briefly at how therapists might use the findings of properly
performed studies to make clinical decisions. The approach used here borrows
heavily from the "Readers' Guides" first produced by the Department
of Clinical Epidemiology and Biostatistics at McMaster University and published
in the Canadian Medical Association Journal. The Guides were subsequently revised
by the Evidence-Based Medicine Working Group as "Users' Guides" and
published in the Journal of the American Medical Association (Guyatt,
G.H., & Rennie,
D. (1993). JAMA, 270, 2096-2097). The Users' Guides are highly recommended as
a more detailed source of information on clinical trials and evidence-based
practice in general. Citations are given below.
Rigorous answers to
questions about treatment effectiveness can
be provided by properly designed,
properly implemented clinical trials. Unfortunately
the literature contains both well performed
trials which draw valid conclusions and badly
performed trials which draw invalid conclusions;
the reader must be able to distinguish between
the two. This tutorial describes key features
of clinical trials (or "methodological
filters") which confer validity.
Some studies that purport
to determine the effectiveness of occupational
therapy simply assemble a group of participants with
a particular condition and take measures of
the severity of the condition before and after
treatment. If participants improve over the
period of treatment, the treatment is said
to have been effective. Studies which employ
these methods rarely provide satisfactory evidence
of treatment effectiveness because it isn’t
certain that the observed improvements were
due to the treatment, and not to extraneous
variables such as natural recovery, statistical
regression (a statistical phenomenon whereby
people become less "extreme" over
time simply as a result of the variability
in their condition), placebo effects, or the "Hawthorne" effect
(where participants report improvements because
they think this is what the investigator wants
to hear). The only satisfactory way to deal
with these threats to the validity of a study
is to have a control group. Then a comparison
is made between the outcomes of participants
who received the treatment and participants
who did not receive the treatment.
The logic of controlled
studies is that, on average, extraneous variables
should act to
the same degree on both treatment and control
groups, so that any difference between groups
at the end of the experiment should be due
to treatment. By way of example, it is widely
known that most cases of acute low back pain
resolve spontaneously and rapidly, even in
the absence of any treatment, so simply showing
that participants improved with a course of
a treatment would not constitute evidence of
treatment effectiveness. A controlled trial
which showed that treated participants fared
better than control participants would constitute
stronger evidence that the improvement was
due to treatment, because natural recovery
should have occurred in both treatment and
control groups. The observation that treated
participants fared better than control participants
suggests that something more than natural recovery
was making participants better. Note that,
in a controlled study, the "control" group
need not receive no treatment. Often, in controlled
trials, the comparison is between a control
group which receives conventional therapy and
an experimental group which receives conventional
therapy plus the treatment under investigation. Alternatively, some
trials compare a control group which receives
conventional treatment with an experimental
group that receives a new therapy.
Five features affecting the internal
validity of trials will now be considered:
§ Random allocation
§ Concealment of allocation
§ Blinding
§ Drop-outs
§ Intention to treat analysis
Random allocation
Importantly, control groups only provide protection against the confounding effects
of extraneous variables in so far as treatment and control groups are alike.
Only when treatment and control groups are the same in every respect that determines
outcome (other than whether or not they get treated) can the experimenter be
certain that differences between groups at the end of the trial are due to
treatment. In practice this is achieved by randomly allocating the pool of
available participants to treatment and control groups. This ensures that extraneous
factors such as the extent of natural recovery have about the same effect in
treatment and control groups. In fact, when participants are randomly allocated
to groups, differences between treatment and control groups can only be due
to treatment or chance, and it is possible to rule out chance if the differences
are large enough - this is what statistical tests do. Note that this is the
only way to ensure the comparability of treatment and control groups. There
is no truly satisfactory alternative to random allocation.
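As an illustration, here is a minimal Python sketch of how a simple
randomisation schedule might be generated. The function name and group
labels are hypothetical, not part of any particular trial software:

import random

def make_allocation_schedule(n_participants, seed=None):
    # Simple permuted allocation: half the slots are treatment, half
    # control, shuffled so that each participant's group is determined
    # by chance alone.
    rng = random.Random(seed)
    half = n_participants // 2
    schedule = ["treatment"] * half + ["control"] * (n_participants - half)
    rng.shuffle(schedule)
    return schedule

# Example: a schedule for 10 participants
print(make_allocation_schedule(10, seed=42))

In a real trial the schedule would be produced before recruitment begins
and kept away from the people enrolling participants, as discussed in the
next section on concealment of allocation.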
Concealment of allocation
The benefits of random allocation may be undone if the implementation of the
allocation sequence is poorly handled. If the person who determines whether
the participant is eligible for a trial can influence what treatment the participant
receives, this can disrupt the randomisation process. Allocation concealment
seeks to eliminate selection bias (who gets into the trial and the group they
are assigned to). Allocation sequence can be concealed by ensuring the person
who generates the allocation sequence is not the person who determines eligibility
and entry of participants, and by not using people involved in running the
trial to handle the mechanism for treatment allocation. This may be done by
using a central telephone randomisation system or by using opaque, sealed envelopes
for concealing allocation. For more information on allocation concealment see:
D. & Schulz, K. (2001). Statistics Notes: Concealing treatment
allocation in randomised trials. BMJ, 323, 446-447.
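To make the idea concrete, the following Python sketch (hypothetical class
and method names) separates generation of the allocation sequence from its
use, so that an assignment is revealed only after a participant has been
entered into the trial. A central telephone randomisation service plays the
same role in practice:

import random

class ConcealedAllocator:
    # The schedule is generated up-front (e.g. by an off-site
    # statistician); the enrolling clinician never sees it.
    def __init__(self, n_participants, seed=None):
        rng = random.Random(seed)
        half = n_participants // 2
        self._schedule = ["treatment"] * half + ["control"] * (n_participants - half)
        rng.shuffle(self._schedule)
        self._next = 0

    def enrol(self, participant_id):
        # The assignment is revealed only at this point, after entry,
        # so knowledge of upcoming allocations cannot influence who
        # gets recruited into the trial.
        group = self._schedule[self._next]
        self._next += 1
        return participant_id, group

allocator = ConcealedAllocator(n_participants=6, seed=1)
print(allocator.enrol("P001"))
print(allocator.enrol("P002"))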
Blinding
Even when participants are randomly allocated
to groups, it is necessary to ensure that
the effect (or lack of effect) of treatment
is not distorted by "observer
bias". This refers to the possibility that the investigator’s belief
in the effectiveness of a treatment may subconsciously distort the measurement
of treatment outcome. The best protection is provided by "blinding" the
observer - making sure that the person who measures outcomes does not know
if the participant did or did not receive the treatment. It is generally desirable
that participants and therapists are also blinded. When participants have been
blinded, you can be confident that the apparent effect of therapy was not produced
by placebo or Hawthorne effects. Blinding therapists to the therapy they are
applying is often difficult or impossible, but in those studies where therapists
are blind to the therapy, you can be confident that the effects of therapy were not
produced by the therapist's enthusiasm for the therapy, but rather by the therapy itself.
Drop-outs
It is also important that few participants
discontinue participation ("drop-out")
during the course of the trial. This is because dropouts can seriously distort
the study’s findings. A true treatment effect might be disguised if control
participants whose condition worsened over the period of the study left the study
to seek treatment, as this would make the control group’s average outcome
look better than it actually was. Conversely, if treatment caused some participants'
condition to worsen and those participants left the study, the treatment would
look more effective than it actually was. For this reason dropouts always introduce
uncertainty into the validity of a clinical trial. Of course the more dropouts,
the greater the uncertainty - a rough rule of thumb is that if more than 15%
of participants drop out of a study, the study is potentially seriously flawed.
Some authors simply do not report the number of dropouts. In keeping with the
established scientific principle of guilty until proven innocent, these studies
ought to be considered to be potentially invalid.
Intention to treat analysis
Intention to treat analysis in randomised controlled trials means that each
participant’s data are analysed in the groups to which he or she
were originally randomly assigned regardless of whether he or she ends
up receiving that treatment. Overestimation of clinical effectiveness may
occur when an intention to treat analysis isn’t done. The intention
to treat analysis maintains the benefits of randomisation. A full application
of the intention to treat approach is possible only when complete outcome
data are available for all randomised participants. Hollis and Campbell
(1999) found a major problem in the application of intention to treat is
the inappropriate handling of missing responses producing misleading conclusions.
To fully appreciate the potential influence of missing responses, some
form of sensitivity analysis is recommended, examining the effect of different
strategies on the conclusions.
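As a rough illustration of the kind of sensitivity analysis Hollis and
Campbell describe, the Python sketch below (with entirely hypothetical
numbers) imputes missing dichotomous outcomes under the two most extreme
assumptions and checks whether the conclusion survives:

# Hypothetical trial: 50 participants randomised to each group, with
# outcomes missing for 5 per group. Observed events ("falls"):
falls_treatment, missing_treatment = 6, 5
falls_control, missing_control = 12, 5
n_per_group = 50  # analyse everyone as randomised (intention to treat)

def risk(events, n):
    return events / n

# Best case for treatment: no missing treatment participant fell and
# every missing control participant fell; worst case is the reverse.
best_case = risk(falls_control + missing_control, n_per_group) - risk(falls_treatment, n_per_group)
worst_case = risk(falls_control, n_per_group) - risk(falls_treatment + missing_treatment, n_per_group)

print(f"Risk difference, best case:  {best_case:.2f}")   # 0.34 - 0.12 = 0.22
print(f"Risk difference, worst case: {worst_case:.2f}")  # 0.24 - 0.22 = 0.02

Because the apparent benefit nearly vanishes under the worst-case
assumption, the missing responses would matter a great deal in this
hypothetical trial.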
Further reading on intention to treat analysis
can be found in:
Hollis, S. & Campbell, F. (1999). What
is meant by intention to treat analysis? Survey
of published randomised controlled trials.
BMJ, 319, 670-674.
To summarise, the more that
clinical trials have the following features,
the more certain you can be that the results
found are reliable and accurate:
§ Random allocation of participants to treatment and control groups
§ Concealed allocation
§ Blind observers, and preferably participants and therapists as well
§ Few dropouts
§ Intention to treat analysis
The next time you read
a clinical trial of an occupational therapy
treatment, ask yourself
if the trial has these features. As a general
rule, trials which fail to satisfy some
or all of these criteria do not constitute
strong evidence of treatment effectiveness.
If you want to read further about assessing trial validity, try:
Guyatt, G.H., Sackett, D.L., & Cook, D.J. (1993). User's guide to the medical
literature: II. How to use an article about therapy or prevention: A. Are the
results of this study valid? JAMA, 270, 2598-2601.
Is the therapy clinically useful?
How can therapists interpret those trials which appear to be methodologically
sound? The message is that it is not sufficient to look simply for evidence
of a statistically significant effect of the therapy. You need to be satisfied
that the trial measures outcomes that are meaningful, and that the positive
effects of the therapy are big enough to make the therapy worthwhile. The
harmful effects of the therapy must be infrequent or small so that the
therapy does more good than harm. Lastly, the therapy must be cost-effective.
Of course, for a trial
to be useful it must investigate meaningful
effects of treatment.
This means that the outcomes must be measured
in a valid way. In general, because we usually
judge the primary worth of a treatment by whether
it satisfies clients' needs, the outcomes
measured should be meaningful to our clients.
Thus a trial which shows that motor training
reduces spasticity is less useful than one
which shows it enhances functional independence.
Size of the therapy's effect
The size of the therapy's effect is obviously important, but often overlooked.
Perhaps this is because many readers of clinical trials do not appreciate
the distinction between "statistical significance" and "clinical
significance". Or perhaps it reflects the preoccupation of many authors
of clinical trials with whether "p < 0.05" or not. Statistical
significance ("p < 0.05") refers to whether the effect of
the therapy is bigger than can reasonably be attributed to chance alone.
That is important (we need to know that the observed effects of therapy
were not just a chance finding) but on its own tells us nothing about how
big the effect actually was.
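The distinction is easy to demonstrate. In the small Python simulation
below (which assumes the numpy and scipy libraries are available, and uses
invented data), a difference far too small to matter clinically still comes
out "statistically significant" simply because the samples are enormous:

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical VAS pain scores (0-10 cm): the true between-group
# difference is a clinically trivial 0.05 cm, but each group is huge.
treatment = rng.normal(loc=4.00, scale=1.0, size=20000)
control = rng.normal(loc=4.05, scale=1.0, size=20000)

result = stats.ttest_ind(control, treatment)
print(f"p value:         {result.pvalue:.4g}")
print(f"mean difference: {control.mean() - treatment.mean():.3f} cm")

# With samples this large, even a 0.05 cm difference will usually
# reach p < 0.05, yet no client would notice it on a 10 cm scale.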
The best estimate of the size of the effect
of a therapy is the average difference between
groups. Thus, if a hypothetical trial on the
effects of relaxation reports that back pain,
as measured on a 10 cm visual analogue scale
(VAS), was reduced by a mean of 4 cm in the
treatment group and 1 cm in the control group,
our best estimate of the mean effect of treatment
is a 3 cm reduction in VAS (as 4 cm minus 1
cm is 3 cm). Another hypothetical trial on
home modification advice to prevent falls might
report that 10% of clients in the home modification
group subsequently had falls, compared to 20%
in the control group. In that case our best
estimate is that home modification advice reduced
the risk of falling by 10% (as 20% minus 10% is
10%). Readers of clinical trials need to look
at the size of the reported effect to decide
if the effect is big enough to be clinically
worthwhile. Remember clients may not be interested
in therapies that have only small effects.
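The arithmetic in these two hypothetical examples is simple enough to
spell out directly (here in Python, using the invented numbers from above):

# Continuous outcome (hypothetical relaxation trial, 10 cm VAS):
reduction_treatment = 4.0  # mean reduction in the treatment group, cm
reduction_control = 1.0    # mean reduction in the control group, cm
print(f"Estimated treatment effect: {reduction_treatment - reduction_control} cm")

# Dichotomous outcome (hypothetical home modification trial):
risk_control = 0.20    # proportion of control clients who fell
risk_treatment = 0.10  # proportion of home-modification clients who fell
print(f"Absolute risk reduction: {risk_control - risk_treatment:.0%}")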
Dichotomous outcomes
There is an important subtlety in looking at the size of a therapy's effects.
It applies to studies whose outcomes are dichotomous
(dichotomous outcomes can have one of two values, such as dead or alive,
injured or not injured, admitted to nursing home or not admitted; this
contrasts with variables such as VAS measures of pain, which can have any
value between and including 0 and 10). Many studies that measure dichotomous
outcomes will report the effect of therapy in terms of ratios, rather than
in terms of differences. (The ratio is sometimes called a "relative
risk" or "odds ratio" or "hazard ratio", but it
goes by other names as well). Expressed in this way, the findings of our
hypothetical home modification advice study would be reported as a 50%
relative reduction in the risk of falls (as 10% is half of 20%).
Usually the effect
of expressing treatment effects as ratios
is to make the effect of
the therapy appear large. The better measure
is the difference between the two groups. (In
fact, the most useful measure may well be the
inverse of the difference. This is sometimes
called the "number needed to treat" (NNT)
because it tells us, on average, how many participants
we need to treat to prevent one adverse event
- in the home modification example the NNT
is 1/0.10 = 10, so one fall may be prevented
for every 10 participants who have received
home modification advice).
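Continuing the hypothetical home modification example, this short Python
sketch shows how the same trial result looks when expressed as a ratio, as
a difference, and as a number needed to treat:

risk_treatment = 0.10  # 10% of the home modification group fell
risk_control = 0.20    # 20% of the control group fell

relative_risk = risk_treatment / risk_control            # 0.50
absolute_risk_reduction = risk_control - risk_treatment  # 0.10
number_needed_to_treat = 1 / absolute_risk_reduction     # 10

print(f"Relative risk:           {relative_risk:.2f}")
print(f"Absolute risk reduction: {absolute_risk_reduction:.2f}")
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")

"Half the risk" sounds dramatic; "treat ten clients to prevent one fall,
on average" is the more clinically interpretable statement.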
For more information on this, two useful papers are:
Herbert, R.D. (2000). Critical appraisal of
clinical trials. I: estimating the magnitude
of treatment effects when outcomes are measured
on a continuous scale. Australian Journal of
Physiotherapy, 46, 229-235.
Herbert, R.D. (2000). Critical appraisal of clinical trials. II: estimating
the magnitude of treatment effects when outcomes are measured on a dichotomous
scale. Australian Journal of Physiotherapy, 46, 309-313.
Confidence intervals
An extra level of sophistication in critical appraisal involves consideration
of the degree of imprecision of estimates of effect size offered by clinical
trials. Trials are performed on samples of participants that are expected
to be representative of certain populations. This means that the best a
trial can provide is an (imperfectly precise) estimate of the size of the
treatment effect. Clinical trials on large numbers of participants provide
better (more precise) estimates of the size of treatment effects than trials
on small numbers of participants. Ideally readers should consider the degree
of imprecision of the estimate when deciding what a clinical trial means,
because this will often affect the degree of certainty that can be attached
to the conclusions drawn from a particular trial. The best way to do this
is to calculate confidence intervals about the estimate of the treatment
effect size, if these are not explicitly supplied in the trial report.
To read further about confidence intervals you could consult Sim, J. & Reid,
N. (1999). Statistical inference by confidence intervals: issues of
interpretation and utilization. Physical Therapy, 79, 186-195. A confidence
interval calculator is available at http://www.graphpad.com/quickcalcs/index.cfm
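As a worked example of the calculation, the Python sketch below computes an
approximate 95% confidence interval for the risk difference in the
hypothetical falls trial, assuming an invented figure of 100 participants
per group and using the simple normal (Wald) approximation:

import math

def risk_difference_with_ci(events_t, n_t, events_c, n_c, z=1.96):
    # Risk difference (control minus treatment) with an approximate
    # 95% confidence interval from the normal (Wald) approximation.
    p_t, p_c = events_t / n_t, events_c / n_c
    diff = p_c - p_t
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return diff, diff - z * se, diff + z * se

diff, lower, upper = risk_difference_with_ci(10, 100, 20, 100)
print(f"Risk difference: {diff:.2f} (95% CI {lower:.2f} to {upper:.2f})")

Here the interval runs from roughly 0.00 to 0.20: the trial is consistent
with anything from almost no effect to a very large one, which is exactly
the kind of imprecision a reader should weigh before acting on the result.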
The last part of deciding the usefulness of a therapy involves deciding if
the therapy is cost-effective. This is particularly important when health
care is paid for, or subsidised, by the public purse. There will never
be enough resources to fund all innovations in health care (probably not
even all good innovations). Thus the cost of any therapy is that money
spent on it cannot be spent on other forms of health care. Sensible allocation
of finite funds involves spending money where the effect per dollar is
greatest. Of course a therapy cannot be cost-effective if it is not effective.
But effective therapies can be cost-ineffective. For more information read:
Drummond, M.F., Richardson, W.S., O'Brien, B.J., Levine, M., & Heyland,
D. (1997). User's guide to the medical literature: XIII. How to use an article
on economic analysis of clinical practice: A. Are the results of the study valid? JAMA, 277, 1552-1557.
O'Brien, B.J., Heyland,
D., Richardson, W.S., Levine, M., & Drummond,
M.F. (1997). User's guide to the medical
literature: XIII. How
to use an article on economic analysis of clinical
practice: B. What are the results and will
they help me in caring for my patients? JAMA, 277, 1802-1806.
To summarise this section:
Statistical significance does not equate to clinical usefulness. To be clinically
useful, a therapy must:
§ affect outcomes that clients are interested in
§ have big enough effects to be worthwhile
§ do more good than harm
§ be cost-effective
If you want to read further on assessing effect
size, you could consult:
Guyatt, G.H., Sackett, D.L., & Cook, D.J.
(1994). User's guide to the medical literature:
II. How to use an article about therapy or
prevention: B. What
were the results and will they help me in caring for my patients? JAMA, 271, 59-63.
Cook, D., Guyatt, G.,
Laupacis, A., Sackett, D., & Goldberg,
R. (1995). Clinical recommendations using
levels of evidence for antithrombotic
agents. Chest, 108 (4 Suppl), 227S-230S.
Cook, D., Mulrow, C., & Haynes, B. (1998).
Synthesis of the best evidence for clinical
decisions. In Mulrow , C., & Cook, D. (Eds).
Systematic reviews: synthesis of best evidence
for health care decisions. Philadelphia: American
College of Physicians.
Fletcher, R. (2002). Evaluation of interventions.
Journal of Clinical Epidemiology, 55 (12).
We gratefully acknowledge that the majority
of this tutorial was put together by members
of the Centre for Evidence-Based Physiotherapy
(http://www.pedro.fhs.usyd.edu.au/tutorial.html) with some changes
and additions made by the OTseeker team.