<p>Journal of Evaluation in Clinical Practice, 9, 2, 287–301</p>
<p>© 2003 Blackwell Publishing Ltd</p>
<p>Correspondence: Dr Sandra Tanenbaum, School of Public Health, Ohio State University, 1583 Perry Street, Columbus, OH 43210-1234, USA. E-mail: tanenbaum.1@osu.edu</p>
<p>Keywords: evidence-based medicine, mental health, health policy</p>
<p>Accepted for publication: 5 February 2003</p>
<p>Abstract</p>
<p>Rationale, aims, and objectives: Evidence-based medicine (EBM) has given rise to evidence-based practice (EBP) in the field of mental health. EBP too is predicated on an evidence hierarchy and has the goal of using the 'best evidence' (usually randomized controlled trials) to improve practice. EBP is increasingly influential in mental health care in the U.S. Growing numbers of researchers and public officials endorse its claims and pursue its benefits. The rationale for this paper is to examine the potential of EBP for the field of mental health, and public mental health care specifically. Is it likely to contribute to improved lives for mentally ill people? If so, how?</p>
<p>Methods: This qualitative study relies on archival and, to a much lesser extent, informant interview data. Informants were mostly public mental health officials, because they are in a position to implement EBP on a large scale and their policies are a matter of public record. Interviews were semi-structured, held in person and on the telephone, and lasted one to two hours.
Archival research included the substantial literature on EBM and EBP plus studies and articles on the practice and policy of U.S. public mental health care.</p>
<p>Results: The results of this study were that there exists an extensive, coherent literature critical of EBM and of EBP specifically. Attempts to implement EBP will falter on epistemological and organizational barriers. Still, as a public idea – that more science will bring about better mental health practice – EBP may well serve political purposes, especially in the U.S. public mental health system, where more overtly ideological policies have been inadequate in the past. EBP, as a public idea, has the advantage of ambiguity, accountability, quantifiability, etc.</p>
<p>Conclusions: This paper concludes that EBP is growing more influential in public mental health care in the U.S. Its practical strengths, i.e. its improvement of mental health practice, may turn out to be less than its strengths as a public idea in the formulation and dissemination of mental health policy.</p>
<p>EDITOR'S CHOICE</p>
<p>Evidence-based practice in mental health: practical weaknesses meet political strengths</p>
<p>Sandra Tanenbaum PhD</p>
<p>Associate Professor, School of Public Health, Ohio State University, Columbus, OH, USA</p>
<p>Ideas for this paper were stimulated by informational interviews with directors of mental health departments in a number of states and with others.</p>
<p>Introduction</p>
<p>'Evidence-based practice' (EBP) is a way of treating, serving or empowering patients, clients or consumers. It is predicated in mental health, as well as medicine and other clinical fields, on the belief that there exists a valid evidence hierarchy and that the 'best evidence' can be applied to produce the best practice. Although EBP is not explicit policy in the public systems of most states, its use is growing (Carpinello et al. 2002), in part through the efforts of the Robert Wood Johnson Foundation (RWJF) and the National Institute of Mental Health (NIMH). Evidence-based medicine (EBM), which preceded EBP and on which EBP is modelled, is by now a juggernaut (Miles et al. 2002).</p>
<p>Evidence-based practice promises substantial improvement in the lives of mentally ill people. This paper, however, argues that EBP's practical strengths – its capacity to turn the best research findings into the best clinical care – are less than proponents make them out to be. On the other hand, EBP is also a public idea (Reich 1988a) and accomplishes political purposes apart from the practical ones promised. This paper has two parts. The first is written to inform the reader of the weaknesses of the evidence in EBP; this part will offer a sample of the epistemological difficulties of finding the best evidence for mental health practice. The second is written to introduce EBP as a public idea, one that serves the political needs of the public mental health system and may be fruitfully, if carefully, exploited by the leaders of that system.</p>
<p>Part one</p>
<p>Evidence-based practice derives from EBM, described by proponents as 'the conscientious, explicit, and judicious use of the current best evidence in making decisions about the care of individual patients' (Sackett et al. 1996).
They leave a place for 'clinical expertise, by which we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice', but this is found at the bottom of EBM's evidence hierarchy (Levels of Evidence 1998). When David Sackett, a prominent proponent of EBM, was asked by a New York Times reporter about the art of medicine, his response was 'Art kills' (Zuger 1997). He and his colleagues make clear that for questions about medical therapy, the best source of evidence is the randomized controlled trial (RCT), which provides 'experimental' findings as does no other study design (Sackett 1986).</p>
<p>Ideally, the RCT determines the efficacy of a clinical intervention for a given population. Whereas medical instincts may prove very wrong (Yamey 2000), experts may be biased and often disagree among themselves (Rizzo et al. 2001), and observational studies may exaggerate an intervention's effects (Barton 2000), RCTs minimize bias through randomization, and for EBM, this makes them the gold standard. As statistical studies, RCTs can offer only measures of probability across patients and never certainty about the effects of intervention in a particular patient. Nevertheless, EBM holds RCTs to provide the truest achievable knowledge of patients; moreover, doctors may draw good inferences from trial findings about how to treat the individual patients they see (Dans et al. 1998; McAllister et al. 2000).</p>
<p>A better approach than rigidly applying the study's inclusion and exclusion criteria is to ask whether there is some compelling reason why the results should not be applied to the patient.
A compelling reason usually won't be found, and most often you can generalize the results to your patient with confidence (Guyatt et al. 1994, for the Evidence-Based Medicine Working Group).</p>
<p>The regular use of RCTs in clinical medicine requires an extensive database of high-quality studies. For most clinicians, it also requires 'systematic reviews' – syntheses of what qualified reviewers consider the best evidence (Bero & Rennie 1995; Clarke & Oxman 2002). Reviews may include meta-analysis, a complex and controversial (Bailar 1995; Feinstein 1995; Miettinen 1998) statistical technique by which the data from different studies are combined to strengthen the evidence from any one RCT. The Cochrane Collaboration, based in the UK, and the US Agency for Healthcare Research and Quality maintain substantial libraries of trials, reviews and meta-analyses on which doctors may draw in the practice of EBM.</p>
<p>Although proponents of EBM have begun to disclaim access to the 'right answers' to clinical questions (Donald 2002; Guyatt et al. 2002), government and professional organizations are using systematic reviews and similar evidence to set standards of behaviour that could have serious and far-flung consequences. In fact, for growing numbers of clinicians, EBM consists of using critical pathways or practice guidelines operationalized from others' appraisal of the evidence (Ghali & Sargius 2002). These guidelines should be distinguished from those based on expert consensus or organizational prerogatives (Drake et al. 2001).</p>
<p>References</p>
<p>… is needed to inform mental health policy? Journal of Mental Health Policy and Economics 2, 141–144.</p>
<p>Tanenbaum S.J. (1986) Engineering Disability: Public Policy and Compensatory Technology. Temple University Press, Philadelphia.</p>
<p>Tomlin Z., Humphrey C. & Rodgers S. (1999) General practitioners' perceptions of effective health care. British Medical Journal 318, 1532–1535.</p>
<p>Torrey W.C., Drake R.E., Dixon L., Burns B.J., Flynn L., Rush A.J., Clark R.E. & Klatzker D. (2001) Implementing evidence-based practices for persons with severe mental illness. Psychiatric Services 52, 45–50.</p>
<p>Tuschen-Caffier B., Pook M. & Frank M. (2001) Evaluation of manual-based cognitive-behavioral therapy for bulimia nervosa in a service setting. Behaviour Research and Therapy 39, 299–308.</p>
<p>Upshur R.E.G., VanDenKerkhof E.G. & Goel V. (2001) Meaning and measurement: an inclusive model of evidence in health care. Journal of Evaluation in Clinical Practice 7, 91–96.</p>
<p>USDHHS (Department of Health and Human Services) (1999) Mental Health: A Report of the Surgeon General – Executive Summary. USDHHS, SAMHSA, Center for Mental Health Services, NIH, NIMH, Rockville, MD.</p>
<p>Vakoch D.A. & Strupp H. (2000) The evolution of psychotherapy training: reflections on manual-based learning and future alternatives. Journal of Clinical Psychology 56, 309–318.</p>
<p>Weisz J.R., Hawley K.M., Pilkonis P.L., Woody S. & Follette W.C. (2000) Stressing the (other) three Rs in the search for empirically supported treatments: review procedures, research quality, relevance to practice and the public interest. Clinical Psychology: Science and Practice 7, 243–258.</p>
<p>Wells K.B. (1999) Treatment research at the crossroads: the scientific interface of clinical trials and effectiveness research. American Journal of Psychiatry 156, 5–10.</p>
<p>Wennberg J.E. (1984) Dealing with medical practice variations: a proposal for action. Health Affairs 3, 6–32.</p>
<p>Williams D.D.R. & Garner J. (2002) The case against the evidence: a different perspective on evidence-based medicine. British Journal of Psychiatry 180, 8–12.</p>
<p>Woolf N. (2000) Using randomized controlled trials to evaluate socially complex services: problems, challenges and recommendations. Journal of Mental Health Policy and Economics 3, 97–109.</p>
<p>Yamey G. (2000) Subjectivity can be inhumane. Western Journal of Medicine 173, 143.</p>
<p>Zuger A. (1997) New way of doctoring: by the book. New York Times, 16 December.</p>
<p>http://www.mentalhealth.org/</p>
<p>Evidence-based medicine has strongly influenced the field of mental health. Usually known as evidence-based practice or evidence-based treatment, but also as empirically validated or supported treatment, evidence-based mental health (EBMH) adheres to the same general principles as EBM, so much so that the Cochrane Collaboration's journal Evidence-Based Medicine spawned, in 1998, a sister journal called Evidence-Based Mental Health (Purposes and Procedure 2002). Evidence-based practice emulates EBM. Advocates of EBP cast mental health treatments and services as applied science, just as EBM does, and hold that mental health can use clinical epidemiology, as medicine does, to discover 'what works'.</p>
<p>Although critics of EBP sometimes argue that mental health cannot be understood in the way medicine can (Richardson 2001), EBP is in some senses more vigorous than EBM. The public, and the payers, are sceptical that mental health practice works at all, and EBP seeks to put mental health at least on a par with medicine.
In public mental health care, this motivation is intensified, what with the apparent failures of previous mental health practice living on the streets or in prison. One mental health director holds that whether it is EBP or just 'promising practices', innovation has got to be better than what the department is doing now.</p>
<p>Perhaps more than other fields, mental health is broadly defined and includes vastly different beliefs about human nature and how to relieve mental suffering. Evidence-based practice as currently construed will surely legitimate some schools of psychological thought and delegitimate others (see below). Although surgeons and radiologists may be vindicated in a specific breast cancer study, neither speciality is likely to be marginalized as medical practice. The internecine stakes are higher for EBP, and one knowledgeable observer reports that interest in EBP is accelerating.</p>
<p>For proponents of EBMH, evidence is indispensable and authoritative, as in 'Evidence-Based Treatment Gains Congressional Support' in the Action Alert of the National Alliance for the Mentally Ill (NAMI 2000a). The Action Alert and accompanying press release (NAMI 2000b) state no fewer than four times that the Program for Assertive Community Treatment, which NAMI proposes to expand, is evidence-based. Barlow (2000) editorializes in a prominent mental health journal that except for managed care, 'there is no issue more central to the practice of clinical psychology' than EBP.
In fact, he continues, evidence about the efficacy and effectiveness of mental health treatments may weaken the hold of managed care on the market and allow clinicians to deal directly with payers.</p>
<p>One of the most ambitious pursuits of EBMH was that of the American Psychological Association's Division 12 (Clinical Psychology) Task Force on the Promotion and Dissemination of Psychological Procedures. The task force was charged in 1993 with deriving principles for the determination of empirically validated treatments, and with making recommendations for educating clinicians and payers about these practices. The original report accomplished the charge and included a short list of treatments that met the criteria for 'well-established' or 'probably efficacious'. The task force had made several policy decisions about what would appear on the list of validated treatments. They put RCTs at the top of their evidence hierarchy. They decided, moreover, that treatments would be considered only as they applied to specific mental health conditions and that 'treatment' itself was defined by the existence of a treatment manual. The list is revised annually. Empirically validated treatments are now referred to by the less definitive term 'empirically supported treatments'.</p>
<p>The work of the Division 12 task force served as the foundation for a major public sector effort in EBP (Chorpita et al. 2002). In 1999, the state of Hawaii established, as part of a legal settlement, a panel to review the efficacy and effectiveness of treatments for a range of childhood and adolescent mental health conditions. The Empirical Basis to Services (EBS) Task Force was asked to make a formal literature review and evaluation of controlled studies in childhood mental health.
The multidisciplinary task force included health administrators and parents of mentally ill children as well as clinicians and academics. The criteria for an empirical basis were modelled very closely on Division 12's criteria for empirical validation, although the Hawaii task force attempted to determine effectiveness as well as efficacy. They also expanded Division 12's treatment categories, adding 'possibly efficacious', 'unsupported', and 'possibly harmful' to 'well established' and 'probably efficacious'. As in the case of Division 12, RCTs were the gold standard. Reaction to the work of the EBS task force was widely positive; critics often wanted more in the way of EBMH rather than less (Bickman 2002; Henggler et al. 2002; Hogan 2002).</p>
<p>Despite the broad base of support for EBP, it is subject to a number of weaknesses. Some of these it shares with EBM. The evidence hierarchy that guides EBP, for example, prizes above all the RCT. Although the gold standard for clinical epidemiology, RCTs do not satisfy everyone. To begin, it has been argued that no study is as objective as the RCT is considered to be. All knowledge is socially constructed, and researchers have subjectively defined the terms of an RCT before it begins (Berg 1997; Gonzales et al. 2002; Stone 2002). In other words, the experimental quality of an RCT is bounded, just as it is in, say, observational studies.</p>
<p>
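</p>
<p>The contrast the text draws between randomized and observational designs can be made concrete with a toy simulation. The scenario and numbers below are my own illustration, not the author's or any cited study's: an unmeasured prognostic factor ('severity') drives outcomes and, in the observational case, also drives who ends up treated.</p>

```python
import random

random.seed(0)
TRUE_EFFECT = 2.0  # the treatment truly raises the outcome score by 2 points

def outcome(severity, treated):
    # higher severity worsens the outcome; treatment adds the true effect plus noise
    return 50 - 5 * severity + (TRUE_EFFECT if treated else 0) + random.gauss(0, 1)

def mean(xs):
    return sum(xs) / len(xs)

def estimated_effect(assign, n=20000):
    """Difference in mean outcome between treated and untreated groups."""
    treated, control = [], []
    for _ in range(n):
        severity = random.random()  # prognostic factor the analyst never sees
        if assign(severity):
            treated.append(outcome(severity, True))
        else:
            control.append(outcome(severity, False))
    return mean(treated) - mean(control)

# Observational: the sicker a patient, the more likely she is to seek treatment.
observational = estimated_effect(lambda s: random.random() < s)
# Randomized: a coin flip, independent of severity.
randomized = estimated_effect(lambda s: random.random() < 0.5)

print(f"true effect:            {TRUE_EFFECT:.2f}")
print(f"observational estimate: {observational:.2f}")  # distorted by confounding
print(f"randomized estimate:    {randomized:.2f}")     # close to the true effect
```

<p>With these numbers the self-selected comparison badly understates the benefit, because sicker patients crowd into the treated group and drag its mean down, while the randomized comparison lands near the true effect. Randomization does not remove noise or subjectivity in defining the trial; it only guarantees that, on average, severity is spread evenly across arms, which is the 'experimental' property the hierarchy prizes.</p>
<p>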
Furthermore, because RCTs yield only probabilistic knowledge, the clinician's decision about how to treat an individual patient remains a fundamentally subjective one (Mulrow & Rohr 2001).</p>
<p>Randomized controlled trials are widely considered internally valid but much less externally so (Seligman 1995; Silverman 1996; Knottnerus & Dinant 1997; Wells 1999; Parry 2000). Randomized controlled trial methodology entails rigorous inclusion and exclusion requirements; investigators enroll homogeneous patients and not complex ones, such as those with comorbidities. Older people and women are underrepresented in clinical trials, as are patients with less severe manifestations of a given medical condition. Trials are mostly conducted in academic medical centres with fully qualified staff. It is uncertain whether an intervention will be as efficacious in a community hospital with fewer medical resources. In mental health, RCTs provide important information about discrete choices under ideal conditions, but many clinicians need to make complex patient management decisions and implement them in community settings. A practitioner who considers using RCT findings has to judge how significantly different the study's patients or setting are from her/his own.</p>
<p>Meta-analysis of RCTs may make for more potent EBM, but critics question the methodology and the methodologists. Miettinen (1998) and Bailar (1995) are concerned that meta-analysts know too little, especially about the specific medical subjects they analyse.</p>
<p>
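</p>
<p>For readers who have not met the technique under criticism, the arithmetic at the heart of a fixed-effect meta-analysis is small: each study's effect estimate is weighted by the inverse of its variance, so large, precise trials dominate the pooled result. The three (effect, standard error) pairs below are invented for illustration; they come from no real trial and are not from the paper.</p>

```python
from math import sqrt

# (effect estimate, standard error) for three hypothetical RCTs of one treatment
studies = [(0.40, 0.20), (0.10, 0.10), (0.25, 0.15)]

# inverse-variance weights: w_i = 1 / se_i^2
weights = [1 / se ** 2 for _, se in studies]

pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f}  (SE {pooled_se:.3f})")
# → pooled effect: 0.184  (SE 0.077)
```

<p>The pooled standard error is smaller than any single study's, which is the sense in which combination 'strengthens' the evidence. It is also the critics' point: the output is a still-narrower average across trial populations, not knowledge about any particular patient.</p>
<p>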
Meta-analysis is rarely followed by a meta-trial, and it depends on the research available for study. Feinstein (1995), who refers to meta-analysis as 'statistical alchemy for the 21st century', holds that even combining RCTs will not provide what most doctors want: something more than averages and measures of death and survival. Rather, they want to know, for example, what happens when treatment schedules change or pertinent cotherapy is undertaken, and what happens to functional status and quality of life.</p>
<p>Improved research design may increase the usefulness of RCTs in EBP (Parry 2002). Investigators should be able to avoid systematic between-group differences (which arise even with randomization), attrition that compromises the interpretation of results, lack of equipoise, and so on. More fundamentally, however, the 'drug model' does not work for psychotherapy research (Adams & Gilbody 2001; Parry 2002). The therapist and patient cannot be blinded, and how would a placebo in psychotherapy trials be conceptualized? Except for the narrowest manualized interventions, total standardization of treatment is also difficult to achieve, because therapists respond to what happens in session as it happens. Furthermore, most psychotherapists practise an eclectic form of psychotherapy, while RCTs consider therapeutic types one at a time.</p>
<p>Effectiveness studies have more external validity, measuring treatment effects in the community with representative patients, providers and practice settings. In order to do this, they use quasi-experimental, as opposed to the RCT's experimental, design.
The statistical findings of effectiveness studies are less rigorously derived; they are therefore less highly valued, for example by the Division 12 and Hawaii task forces, in determining which treatments are evidence-based. Efficacy and effectiveness research both come up short on a number of points (Wells 1999). Neither provides information to, say, policy-makers about long-term social productivity. Neither does a good job of documenting service delivery context or identifying contextual factors likely to affect outcomes. Moreover, even RCTs have internal validity problems, including the treatment of an as-treated sample as an intention-to-treat one.</p>
<p>Woolf (2000) offers a meticulous consideration of the usefulness of 'RCT-based effectiveness research' in evaluating mental health programmes – what she calls 'socially complex services' (SCS). She finds that SCS are unlike the ideal clinical interventions studied in RCTs: the latter have a single provider, a professional staff, protocol specificity that is concrete and measurable, an illness/problem with a low level of professional uncertainty, high subject insight into her/his illness and hard external boundaries.</p>
<p>Woolf concludes that three problems will continue to stymie the use of clinical trials in mental health programmes. Standardized intervention – an important assumption of the RCT model – is problematic for SCS. Woolf gives the example of case management, which typically figures in programmes for seriously mentally ill (SMI) people.</p>
<p>
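</p>
<p>The intention-to-treat point above rewards a concrete look. The simulation below is my own invented illustration, not from the paper: a trial of a treatment with no true effect, in which sicker patients assigned to treatment tend to abandon it. Grouping patients by what they actually received ('as treated') then manufactures a benefit that grouping by original random assignment (intention to treat) correctly fails to find.</p>

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

itt = {"treat": [], "control": []}          # grouped by random assignment
as_treated = {"treat": [], "control": []}   # grouped by treatment actually received

for _ in range(20000):
    severity = random.random()                        # prognostic factor in [0, 1)
    arm = random.choice(["treat", "control"])
    score = 50 - 10 * severity + random.gauss(0, 1)   # treatment truly does nothing
    itt[arm].append(score)
    if arm == "treat" and random.random() < severity:
        # sicker assigned-to-treatment patients drop the treatment...
        as_treated["control"].append(score)           # ...and get counted as untreated
    else:
        as_treated[arm].append(score)

itt_diff = mean(itt["treat"]) - mean(itt["control"])
at_diff = mean(as_treated["treat"]) - mean(as_treated["control"])
print(f"intention-to-treat difference: {itt_diff:+.2f}")  # near zero, correctly
print(f"as-treated difference:         {at_diff:+.2f}")   # a spurious apparent benefit
```

<p>The as-treated comparison looks favourable only because adherers are systematically healthier; randomization protects the comparison only so long as the analysis respects the original assignment.</p>
<p>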
Case management services vary substantially in their philosophies, organizational structures, interaction and engagement processes and styles, and outcome measures. The difficulty of standardizing case management introduces noise into the causal pathway of the RCT model and allows incorrect inferences to be drawn.</p>
<p>A second assumption of RCTs, sample equivalence, is equally problematic for SCS. The characteristics of the population are ambiguous, with different definitions of mentally ill issuing from clinical researchers' interview schedules, programmes and policies of service agencies, and even the law. As for the sample, self-selection is likely to distort the distribution of actual cases, because people with mental illness may not agree with the diagnoses they are given and, more often than physically ill people, reject the opportunity to participate in clinical trials.</p>
<p>A third assumption of RCTs is environment equivalence and neutrality.
Woolf points out that unlike the hard boundaries of most RCTs, SCS studies have soft boundaries, allowing the dose intervention to affect the social environment, which in turn affects the dose intervention and the behaviour of subjects. 'Ideally, one would need to randomly assign programs to environments' (Woolf 2000).</p>
<p>Another objection to the preferred evidence base for EBP is that it excludes whole fields of knowledge. Qualitative research, for example, appears on the periphery of most EBP knowledge hierarchies and figures in virtually none of the systematic reviews. Two quantitative/qualitative EBM researchers have observed 'a great deal of disagreement and debate about the appropriate criteria for critical appraisal of qualitative research' (Greenhalgh & Taylor 1997). More recently, the Evidence-Based Medicine Working Group at McMaster University published a two-part article describing good qualitative research, but would concede only that 'qualitative studies offer an alternative when insight into the research is not well established or when conventional theories seem inadequate' (Giacomini & Cook 2000a,b).</p>
<p>To be sure, qualitative research is an important complement to quantitative research, for it does not count things so much as interpret them. That is, it searches out the meanings of things, especially to the people who experience them. Qualitative research is therefore able, for example, to discover the unintended effects of health care services or practices (Barbour 2000). Qualitative research is frequently considered, as above, an exploratory exercise that helps generate hypotheses for quantitative work.
It could, however, also stand on its own, seeking a deeper and more complex truth about how patients, families and doctors make sense of illness and medicine. It has already been used to learn how medical researchers understand the RCT protocols they devise (Berg 1997, 1998).</p>
<p>Some researchers push the point further, offering 'an inclusive model of evidence in health care' (Upshur et al. 2001). They question the place of RCTs at the top of the EBM hierarchy. They propose instead a taxonomy of evidence that runs along two axes: particular to general, and meaning to measurement. Two meaning quadrants correspond to qualitative research, and in only one quadrant, quantitative general, does mathematical evidence combine with a quantitative reasoning style to produce clinical epidemiology, including RCTs. According to this inclusive model, evidence of various types can be reasoned about in a plurality of ways, and all should contribute to evidence-based health care. A similar inclusive model is found in Jonas (2001), where it is applied specifically to complementary medicine.</p>
<p>Anecdotes and case reports are found at the bottom of every EBP hierarchy. Still, they remain important forms of clinical thinking. Enkin & Jadad (1998) suggest that anecdotes, that is, information derived from one's own experience or that of another, retain their power for several reasons. They are more emotionally interesting because they happened to oneself or to someone one knows. The information is concrete and vivid, and especially potent when derived from a face-to-face encounter. Anecdote also plays a (controversial) role in the drawing of causal inferences; it can be interpreted by rules of thumb that simplify complex situations and allow for rapid decision making.
Finally, anecdote may override formal research findings because of the particularity of clinical practice. 'Randomized clinical trials can tell us which treatment is better, but they cannot tell us for whom they are better' (Enkin & Jadad 1998). Enkin and Jadad conclude that formal studies should prevail over anecdote in most instances, but that there are at least some legitimate, if peripheral, uses for anecdote as evidence.</p>
<p>Browman (1999) writes a fascinating case report illustrating, among other things, that there is much to be learned from case reports, especially about the complexity of medical decision making. He relates the history of a patient with locally advanced prostate cancer who searches the literature, both published and unpublished, for information about the possible use of bisphosphonates to prevent bone metastases and increase survival. The literature turns up some RCTs, but none framed specifically to answer his clinical question. The patient consults a clinical epidemiologist and three specialist doctors, who range from 'it wouldn't hurt' to 'I'd definitely consider it' in their opinions of bisphosphonate use in this case. The clinical epidemiologist concludes in his report to the patient's principal doctor that there is not enough evidence to support the use of bisphosphonates but that if a well-informed patient requested the treatment, he would comply.
All the doctors agree that they cannot support a policy of bisphosphonate use in cases of this kind, but the (Canadian) patient decides to proceed with treatment and pay for it out of his own funds.</p>
<p>Browman pursues the complexity of the case. He asks whether, when the evidence is weak, the burden of proof should be on those who generalize from indirect evidence or on those who do not. He points out that the same information can lead to different conclusions, depending on whether one is making health or clinical policy, and that this has implications for the authority of practice guidelines. He wonders whether EBM, including full disclosure, is on a collision course with patient choice. Finally, he reminds the reader that all of these important issues were raised in a case report, a form of medical evidence relegated to the very bottom of the evidence hierarchy.</p>
<p>As suggested above, EBM seems not to make provision for narrative reasoning. Some, however, believe that narrative is indispensable to medicine and especially to mental health (Launer 1998); telling and retelling stories is what allows for healing to occur. The clinical relationship originates in the patient's story. More importantly, narrative is the structure of the clinician's practical reason – the way one clinician thinks about one patient (Hunter 1996). It is surely the way the clinician interprets the 'overlapping stories' (Greenhalgh 1998) of the healing process to the patient (Elwyn & Gwyn 1999).
Narrative is not necessarily at odds with EBM – Greenhalgh (1998) sees each as critical to the other – but narrative is subjective, and to use it explicitly, practitioners of EBM must admit to the limits of their hierarchy.</p><p>The EBMH work of Division 12, and by extension the Hawaii project, has drawn substantial criticism. The dissemination of a list of empirically supported treatments is considered premature; there is simply too little knowledge about how specific treatments affect specific disorders to issue such a list. Even Division 12 has softened on this point, warning that absence from the list in no way means that a treatment is ineffective (Silverman 1996; Elliott 1998).</p><p>Critics have found the Division 12 list unsatisfactory for other reasons (Elliott 1998; Henry 1998; Richardson 2001). The effectiveness data for the listed treatments are inadequate, so the external validity of the RCTs cited cannot be demonstrated. (The Hawaii task force addresses this point.) The lists discriminate against certain kinds of treatment and research, for example, psychodynamic and experiential therapies, which are virtually immeasurable by the task forces’ methodology. The bias against these therapies would seem to be ‘hard-wired’ into the lists; the manuals approved by Division 12 are almost exclusively behavioural (41%) and cognitive-behavioural (41%).
The lists’ assumption of diagnostic specificity is controversial on the grounds that comorbidities are the rule and conditions overlap. Some critics believe specificity encourages therapists to focus on the disorder (as defined in the latest version of the Diagnostic and Statistical Manual) rather than the whole person, and on technique rather than the therapeutic relationship.</p><p>The agenda for Division 12 (Weisz et al. 2000) points up potential sticking points for EBP. In determining the list of well-established treatments, when are two interventions the ‘same’? A very strict definition will mean virtually no treatment can claim the support of more than one study. Similarly, when are two treatments ‘different’ enough to be considered different interventions? Do the lists’ categories convey the range of differences across complex studies? Do they imply greater certainty than is possible? What if different interventions do better on different outcome measures or among different subpopulations? What about long-term vs. short-term effects? Should the evaluation of well-established treatments include psychodynamic and other non-behavioural therapies? Will this mean broadening treatment outcomes to include relationships with family and friends, functioning at work or school, and future utilization of mental health services? These and other measures will surely raise questions about which outcomes are the most important treatment effects.</p><p>A final weakness of EBP is that it is hard to accomplish. Speaking of mental health specifically, Essock (1999) writes, ‘. .
. I have been humbled by how hard it is to change practice’. Investigators (Haines & Donald 1998; Corrigan et al. 2001) have compiled lists of obstacles to change, environmental and personal, including limitations of time and of practice organization, lack of financial resources, health policies that promote ineffective activities, obsolete knowledge, and patients’ demands for care. There are, furthermore, the problems of scarce professional incentives and counterproductive demands for greater ‘productivity’ (Weisz et al. 2000). Guidelines are only minimally helpful (Bauer 2002) and frequently lack an evidence base. A study from general practice medicine (Tomlin et al. 1999) finds that effectiveness, the ostensible goal of EBM, is not the sole driver of clinical practice. Clinicians admit that although they are sometimes ineffective out of insufficient knowledge or skill, or out of fatigue, on other occasions they do what they think is right, given a patient’s circumstances, even if this is not technically the most effective thing to do.</p><p>An important obstacle to EBP is that clinicians think like clinicians, rather than like scientists or managers. Their work is organized around the care of individuals, and as noted above, the evidence in EBP is not. The use of treatment manuals has sharpened this issue in mental health. Although the manuals have many supporters (even, e.g. Henry 1998), others warn of their limitations. Vakoch & Strupp (2000) concede that training manuals impart technique, but psychotherapy is ‘considerably more than applying techniques promulgated by treatment manuals.
Therapy training similarly occurs through mentoring, internalization, identification, role modeling, guidance, socialization, interaction, and group activity’. Manuals are insufficient because clinicians rely on tacit as well as explicit knowledge. This is how Schon (1983) describes the ‘reflective practitioner’: a psychotherapist who must decide in context how he should act. Klein (1998) similarly finds that expertise in action is more akin to pattern recognition than to following the rules.</p><p>In summary, EBP is vulnerable to a thoroughgoing critique. With an evidence hierarchy that puts RCTs at the top, EBP assumes an objectivity in research that does not exist. It privileges a probabilism that does not describe the individual patient and so does not fully inform the individual clinician. It sacrifices external validity to internal validity and excludes many types of knowledge from the evidence base. It does not recognize narrative as the dominant mode of clinical reasoning. It encourages exclusionary lists of specific conditions paired with behavioural and cognitive-behavioural treatments. This is not to dismiss EBP entirely, but only to say that its evidence has serious limitations as knowledge for practice. Its real strengths may lie elsewhere.</p><p>Part two</p><p>Evidence-based practice is becoming, among other things, a public idea (Reich 1988b). By this is meant an idea, held by policy-makers and then the public, that describes a public problem and suggests a solution. It is a response to the experience of public troubles and is an argument for the wisdom of a particular response. Some views of public policy making posit a neutral policy-maker who takes the pulse of a narrowly self-interested public.
Public ideas, however, are rooted in a different conception of public policy, where policy-makers give voice to citizens’ concerns for the public interest – for the right thing to do about collective problems (Reich 1988a). (Of course, public and self-interest may be the same.) Public ideas sometimes originate with policy-makers alone. In this case, they give expression to the problem and introduce it to the citizenry, with whom it may or may not resonate (Heymann 1988). Not always meant for the general public, a public idea may have a more limited audience: a particular issue network, geographic region or professional group. The issue network or, more broadly, the attentive public for mental health ideas is an example.</p><p>One of the best-known studies of a public idea is The Culture of Public Problems: Drinking-Driving and the Symbolic Order by Gusfield (1981). Briefly, Gusfield argues that the drunk driver became the focus of transportation safety efforts when a public idea cast him as the cause of automobile accidents. Although accidents were undoubtedly being caused by drunk drivers, Gusfield points out that they were also caused by drivers who fell asleep, by poor road design, by happy hours near the highway, etc.; the extent of harm to accident victims resulted from automotive design. In Gusfield’s example, the public idea was that drunk drivers are the essential threat to transportation safety. The idea is not untrue, but it takes one slice of a complex problem, focuses public attention on it, and calls for the logical solutions to it.</p><p>In the case of EBP, policy-makers in the U.S.
public mental health system also confront a complex reality. In response to unsatisfactory programme results, some have adopted the idea that the problem is uninformed practice and that results would therefore improve if clinicians conformed practice to research findings (Geddes 1996; Carpinello et al. 2002). There are of course other causes of poor performance in the mental health system – limited funds, restrictive state regulations, hostile communities, etc. – and policy-makers do acknowledge them. Still, for a small group, the lack of EBP dominates and is considered especially grievous, in part because EBP promises to solve other problems as well. For example, if EBP makes mental health departments more accountable, will not states be willing to fund them more generously?</p><p>Evidence-based practice is more and more a public idea. So far, only a small number of states are actively implementing an EBP agenda. There is, however, solidarity on the issue, and some states have been funded by NIMH and RWJF to disseminate practice-related evidence. The states work together through the National Association of State Mental Health Program Directors, which in turn has a research arm that contributes to EBP. As an instance of EBM, EBP is ubiquitous, and even as mental health care, it is widely spoken and written about. Evidence-based practice was, for example, the subject of a Best Practices 2001 conference that hosted 700 people in the state of New York (OMH Quarterly 2001a); it was the theme for an entire year’s volume of the journal Psychiatric Services (Torrey et al. 2001). The new journal Evidence-Based Mental Health was noted above.
One prominent psychologist referred publicly to EBP as a ‘movement’ (Chambless 2002). Evidence-based practice is on the minds of officials, advocates and clinicians. State mental health commissions recognize a ‘growing insistence on evidence-based practice’ and regret that public mental health ‘practices lag behind our knowledge’ (Bell & Shern 2002).</p><p>That there are public ideas does not mean that they are not about real things. There is something called EBP – the actions of clinicians relative to research results. Then why is EBP an idea at all, much less a public idea? The answer is that real things can be construed and communicated in many ways. As above, the world is socially constructed. The idea of EBP is one way of putting the world together – of making sense of mental health. It is an idea as well as a practice, and in the public realm, it is an idea in the minds of public actors who define and respond to it. Calling EBP a public idea does not lessen its importance as a practice. It only means that EBP’s meaning is negotiated outside the clinical relationship as well as within it. Its influence on the outside may even be greater.</p><p>The potential of EBP to be a public idea derives in part from its powerful rhetoric. It is, in fact, a rhetorical triumph, for who can argue with evidence? Critics of EBP literally have nothing to call themselves or their position; it is not evidence, but the limitations of certain evidence hierarchies, that they oppose (Miles et al. 2002).
The original Evidence-Based Medicine Working Group chose its title wisely, substituting for one evidence hierarchy the name of all evidence for practice. Moreover, the rhetoric of EBP raises an important question in the listener’s mind: if EBP is the introduction of evidence into practice, how have clinicians been practising all along? What is there besides evidence? Proponents of EBM have various answers to that question (Eddy 1984; Wennberg 1984); some are more damning than others. Even if the public never gets specifics, however, it should be clear to them that clinicians are in the wrong. Blame can be crucial to a public idea because the solution to the problem is implicit in who is to blame.</p><p>Some of EBP’s rhetorical potency comes from its ambiguity (Stone 2002). Like public ideas generally, this one is stronger for remaining unclear. Ambiguity allows for broad support even when different people understand an idea differently; there need not be an exploration of differences. Supporters of EBP disagree about what it is: how much evidence is enough, whether the best evidence is the same as best practices, whether promising practices are evidence-based, whether evidence should trump choice, etc. For some adherents to EBP, evidence has the meaning of specific evidence hierarchies. Among the uninitiated, however, evidence may easily mean any epidemiological findings or any combination of epidemiology and bench science. It may mean any form of medical knowledge, even the clinician’s knowledge of the individual patient. It may mean any knowledge at all. Even ‘based’ is an ambiguous term.
What is the ideal relationship of evidence to practice? Why is it treated as self-evident? Does ‘based’ mean a natural-sciences model of application or a human-sciences model of interpretation? Is it synonymous with a stated decision rule? The term ‘EBP’ implies that there is an evidence base on which some other kind of decision is built. Is that the role for clinical judgement?</p><p>Public policy loves numbers (Stone 2002), and EBP will be a more powerful idea for being based on the statistics of clinical epidemiology. Numbers seem authoritative. Although they are subjective, not least in determining what to count and in what units, they seem to replace subjectivity with objectivity and intuition with science. They promise to make things less ambiguous although they are always open to interpretation. Morone (1994) writes about U.S. health care cost-control programmes what might be said of systematic reviews and meta-analyses: ‘. . . they appear in politics (and in politics, appearances are crucial) to operate automatically, scientifically . . . The illusion of self-enforcing, automatic, scientific process is important . . .’ The quantitative nature of EBP makes it appear less ideological than other issues recently debated in the public mental health system (Geller 2000; Mowbray & Holter 2002). One view juxtaposes ‘empirically validated clinical findings’ and the preceding ‘untested ideological vision’ (Mowbray & Holter 2002).</p><p>Finally, public ideas tell good stories, causal stories about how things got to be the way they are and how they can change (Stone 2002). Proponents of EBM hold that although positive change has been wrought in the U.S.
mental health system, through community-based care, fiscal restructuring, etc., it has been insufficient to ensure good results (Goldman et al. 2001). As in the case of EBM, small area variation studies depict clinical decision making on seemingly unscientific grounds (Drake et al. 2001). Simultaneously, clinical epidemiology has made apparently scientific findings about what would make for better clinical care (Lehman & Steinwachs 1998; Tuschen-Caffier et al. 2001). EBP, then, is the attempt to bring knowledge and practice together, and the story’s resolution is the successful dissemination of research to clinicians.</p><p>Stone (2002) observes that there are a few common story lines for the causal stories told by public ideas. One is the story of stymied progress: things were terrible and got better, but now there is a new obstacle to progress, and we must act to remove it. This is the pattern for the story of EBP. People were institutionalized and then deinstitutionalized, but poor practice stands in the way of real improvement. When practice improves, the condition of SMI people will too. Although the story may not sit well with clinicians, like other good policy stories, it is utterly believable and makes an important normative leap to action; that is, it tells us what to do. This idea even has a ‘toolkit’ (Torrey et al. 2001), a package of multimedia dissemination materials to facilitate the use of research in practice. Whatever its practical value turns out to be, the toolkit is a perfect symbol of fixing what is broken in the public mental health system.</p><p>As a public idea, EBP organizes public efforts to solve a collective problem (Moore 1988).
It provides a way for public agencies, legislators, interest groups and the public to form an interest in, and even act upon, a problem they share – in this case, the insufficiency of the public mental health system. A public idea is an interpretation of an inchoate problem. In the case of EBP, insufficiency is interpreted as the poor quality of provider practice. A public idea is also the match of the interpreted problem to what seems an obvious solution, for example, the match of uninformed practice to an evidence base. Pairing problem and solution, a public idea elicits interest, then encourages responses from the interested parties.</p><p>Evidence-based practice is a relatively simple and optimistic idea – to ‘disseminate what we know to those who need it most’ (Mechanic 2001) – and although it is difficult to accomplish, its potential appeal is hard to overstate: from scientistic officials who are glad for more science to taxpayers who hear the sound of fiscal discipline. Moreover, the rhetoric of EBP blames the individual clinician, who can presumably be informed or dismissed, rather than some faceless system – bureaucratic and intransigent. Evidence-based practice allows broad mental health interests to unite behind a seeming certainty about what works in a system where, most of the time, very little seems to work.
This may benefit the public mental health agency in the form of support from other agencies (who may themselves take up EBP), funding and regulatory changes from the executive and legislature, the cooperation of advocacy groups in promoting agency goals, and aid in the form of tax levies from the general public.</p><p>Ideas matter in public policy, in part because they allow public officials to set the terms of the debate and thereby create enemies and allies (Moore 1988). The idea of EBP seems to offer such terms. Among state and federal officials, academics and consumers, the idea of EBP can define the debate about what is best for SMI people (Goldman et al. 2001). A focus on clinical practice is by no means self-evident. Not long ago and for many years, service locus was the dominant public idea in U.S. mental health (Geller 2000). The restructuring of mental health financing has also been a prominent idea. Even the Surgeon General’s Report (USDHHS 1999), while it supports EBP, raises other issues, such as entry into treatment, stigma, the mental health workforce, etc. Sometimes it is the purpose of a public idea to keep alternative formulations off the public agenda. However, EBP is a means to multiple ends, including quality, cost-effectiveness and credibility. As a public idea, EBP implicates them all.</p><p>In what is sometimes called the ‘mobilization of bias’ (Schattschneider 1960), a public idea aligns participants in a debate by drawing its boundaries; the lines of conflict become clear once the issue is defined.
In the case of EBP, and taking a simple view, the public idea would set proponents – some academics and other researchers, many of whom are organized through their professional associations, and some state and federal officials, NIMH and RWJF – on one side, and other officials (who may be ignorant of EBP rather than opposed to it) and many (largely unorganized) clinicians on the other. [Lipman (2002) found that among doctors in the UK, general practitioners felt empowered vis-à-vis specialists by the ready availability of ‘evidence’. Something like this may be true of generalists and specialists in US mental health as well.] Advocacy groups such as NAMI would mostly side with researchers and against clinicians – the latter having apparently failed them – although for consumers, science is a double-edged sword. It gives them independent knowledge of what makes for good practice but may also support practices consumers find objectionable. Furthermore, EBP can work against choice, with more recovered consumers desirous of greater choice (Frese et al. 2001).</p><p>Under the rubric of EBM, mental health interests might be joined with more prestigious and powerful counterparts in medicine. Although clinicians may as a rule resist the coming of EBP, those practitioners who are more sympathetic to it may find themselves allied across professions – that is, psychiatry, psychology, social work and nursing; EBP may undermine interprofessional divisiveness and resentment. The credibility that comes with EBM may also promote interagency alliances in state government, including among agencies with a shared clientele, for example, substance abuse agencies (OMH Quarterly 2001b).
(It will likely remain difficult to get and hold the attention of powerful Medicaid agencies.)</p><p>Public ideas give meaning to the work of government organizations and social movements (Moore 1988). Some states have committed themselves to EBP and are defined by it (Carpinello et al. 2002). Others devote units within their organizations to evidence and its implementation (Roth 2000). It is not the full agenda of any mental health advocacy organization, but EBP is prominent on NAMI’s, as a demand for quality services and a hope for real recovery. As noted, at least one leading researcher considers EBP a movement of its own. Applying evidence to practice, EBP is coherent enough to impart meaning to bureaucrats and activists; in this culture, the cause of science is widely meaningful and easy to adopt. It is ironic that in one sense, EBP has little meaning as public policy: the evidence it describes does not answer questions about how governments, as opposed to clinicians, should act (Sturm 1999).</p><p>For the public mental health system, numbers are the language of accountability (Goldman et al. 2001; Carpinello 2002) to government, taxpayers, consumers and managed care organizations. Evidence-based practice would allow mental health programmes to show that they operate by the objective standard of quantitative research findings; that they are doing only what has been proven to work; that the money and effort expended are therefore well spent. Resources are not being wasted, and this is a powerful message to payers in an era of shrinking resources.
The numbers of EBP would create the authority of precision (Stone 2002). Accountability can go both ways, however. Managed care organizations and other payers may care less for empirical support than for benefits minimization (Fox 2000; Williams & Garner 2002). Moreover, once accountability is identified with one kind of measurement, whatever is not or cannot be measured, including some valuable forms of mental health treatment, may be suspect and perhaps unreimbursable. Thus the dispute over the Division 12 and Hawaii lists: are therapies without efficacy data needlessly made to look inefficacious (Bohart et al. 1998)?</p><p>As noted, EBP would contribute to the credibility of mental health system officials. Their requests for resources and regulatory change and their ongoing reports of programme performance would have a unifying and convincing logic. There would be a scientific basis to what officials say, and science, even in government, is a most credible discourse. Even the simple promise to do what works, instead of something more ideological or haphazard, might make mental health system officials more credible to payers, bureaucrats, legislators, advocacy groups and consumers.</p><p>Recovery is arguably another public idea, one to which EBP contributes despite recovery’s challenge to EBP. The recovery movement holds that most SMI people want pretty much what other people want: a home, a job and friends. More importantly, SMI people want to decide for themselves how to go about living their lives. On the other hand, EBP identifies and provides the treatments and programmes for which there are supporting data of a certain kind; those may or may not coincide with what people want. This is not to say that SMI people want ineffective services.
There are many unstudied services, and services studied only in limited populations, and even at its best, effectiveness is a result across people and not necessarily what an individual wants. Frese et al. (2001) argue that SMI people who are just beginning to recover will be glad for the guidance EBP provides. Those who are more fully recovered, however, will strain against its prescriptive aspects.</p><p>Evidence-based practice is a relatively new and definitely exciting (Chorpita et al. 2002; Hoagwood 2002) public idea. It seems so simple and foolproof that mental health officials may find it easier to engage their staffs, their advisory bodies and other public actors, and even to make new claims on public resources. The history of the public mental health system is fraught with frustration; EBP offers programmes that are already known to work. The energy and enthusiasm that EBP generates may be as important as the content of the idea or the individual practices.</p><p>Recent developments suggest the possible usefulness of EBP as a public idea. President Bush’s New Freedom Commission on Mental Health was established in April 2002, with the charge to study and advise the President about the mental health service delivery system. The Executive Order establishing the Commission does not mention EBP per se, but it does refer repeatedly to effectiveness and to research, and requests the identification of interventions that are ‘demonstrably effective’ (Executive Order 2002). The Commission has formed an EBP subcommittee. The Commission’s interim report, issued in October 2002, describes nine model programmes, and favourable research results were required for any programme seeking to serve as a model.
The research bases for the model programmes, however, vary considerably. Apparently, only three of the nine have RCT support, and in one of these the findings are still preliminary. One programme cites its annual report, and one nothing at all. Three programmes cite single peer-reviewed articles (President’s New Freedom Commission on Mental Health 2002). Still, the idea of evidence, of demonstrable effectiveness, remains thematic in the interim report. The Commission broadens EBP in two important ways. First, its evidence requirements are seemingly less rigorous; and second, given its charge, the Commission will infer from evidence of good practice the necessity for system, and not practitioner, change.</p><p>It is too soon to know what the EBP subcommittee will conclude or, more importantly, what the effect of the Commission’s final report will be. It may turn the attention of the public mental health system back to system design alone. More likely, the roles of EBP, both practical and political, will continue to expand. The Commission’s highly visible efforts make evidence the handmaiden of policy change.</p><p>Conclusion</p><p>One implication of the analysis above is that public mental health officials may use the strengths of EBP as a public idea to motivate the actual application of research to practice. On the other hand, they may recognize these strengths and use them for political purposes, apart from any real benefit from clinical epidemiology to mental health practice.
This may alleviate the puzzlement many EBP proponents express about the chasm, as they see it, between research and practice (Corrigan 2001).</p><p>Evidence-based practice in the U.S. public mental health system is expressly an effort to change the behaviour of clinicians through the dissemination of research findings. It is an attempt at innovation and may be conceptualized using standard innovation models (Tanenbaum 1986). In these models, EBP is a process that begins with research and ends with practice, a mostly unidirectional movement from a set of knowers to a set of doers. Evidence-based practice is difficult to accomplish, and this is a source of consternation for officials of the mental health system (USDHHS 1999) and the scientists who do the research (Drake et al. 2001). Why are clinicians so resistant to change? It is worth considering another view of the innovation process itself – one that takes a cross-sectional rather than longitudinal approach. In this alternative view, scientists, officials and clinicians are all, at every moment, both knowers and doers; a clinician’s knowledge cannot by definition be a scientist’s. Statistical studies remain a single form of evidence, one that contributes to changes in clinical practice but cannot change practice alone. Knowledge does not originate with the researcher and pass through management into practice. It is conceived differently by the three groups of knower–doers. These conceptions sometimes coincide, and then change is possible.</p><p>In the meantime, EBP can serve a political function in the public mental health system.
Officials can ride the coattails of the immensely influential EBM. They can distance themselves from earlier, more overtly ideological mental health policies and claim that they are appealing to reason alone. They can back up requests for resources and regulatory change with the logic of what works. They can give their efforts a quantitative rationale. One risk of embracing EBP is that political audiences may dismiss the idea as 'academic' or 'intellectual' – not fit for the 'real world'. Public mental health officials can anticipate this response and be ready to offer other ideas, such as quality, that might subsume EBP. Evidence-based practice would never be the sole public idea; it would only be surprisingly useful given the epistemological limitations of its practice. Officials can know these limits well and still reap the benefits of an overriding endorsement of this public idea.</p><p>Acknowledgements</p><p>The research for and writing of this paper was funded by a commission from the Division of Services and Intervention Research, National Institute of Mental Health.</p><p>References</p><p>Adams C.E. & Gilbody S. (2001) Nobody ever expects the Spanish inquisition. Psychiatric Bulletin 25, 291–292.</p><p>Bailar J.C. III (1995) The practice of meta-analysis. Journal of Clinical Epidemiology 48, 145–157.</p><p>Barbour R.S. (2000) The role of qualitative research in broadening the 'evidence base' for clinical practice. Journal of Evaluation in Clinical Practice 6, 155–163.</p><p>Barlow D.H. (2000) Evidence-based practice: a world view. Clinical Psychology: Science and Practice 7, 241–242. http://clipsy.outjournals</p><p>Barton S.
(2000) Which clinical studies provide the best evidence? British Medical Journal 321, 255–256.</p><p>Bauer M.S. (2002) A review of quantitative studies of adherence to mental health clinical practice guidelines. Harvard Review of Psychiatry 10, 138–153.</p><p>Bell N.N. & Shern D.L. (2002) State Mental Health Commissions: Recommendations for Change and Future Directions. National Technical Assistance Center for State Mental Health Planning, Washington, DC.</p><p>Berg M. (1997) Rationalizing Medical Work: Decision-Support Techniques and Medical Practices. MIT Press, Cambridge.</p><p>Berg M. (1998) Order(s) and disorder(s): of protocols and medical practices. In Differences in Medicine: Unraveling Practices, Techniques, and Bodies (eds M. Berg & A. Mol), pp. 226–246. Duke University Press, Durham.</p><p>Berg M. & Mol A., eds. (1998) Differences in Medicine: Unraveling Practices, Techniques, and Bodies. Duke University Press, Durham.</p><p>Bero L. & Rennie D. (1995) The Cochrane Collaboration: preparing, maintaining, and disseminating systematic reviews of the effects of health care. Journal of the American Medical Association 274, 1935–1938.</p><p>Bickman L. (2002) The death of treatment as usual: an excellent first step on a long road. Clinical Psychology: Science and Practice 9, 195–199.</p><p>Bohart A.C., O'Hara M. & Leitner L. (1998) Empirically violated treatments: disenfranchisement of humanistic and other psychotherapies. Psychotherapy Research 8, 141–157.</p><p>Browman G.P. (1999) Essence of evidence-based medicine: a case report.
Journal of Clinical Oncology 17, 1969.</p><p>Carpinello S.E., Rosenberg L., Stone J., Schwager M. & Felton C.J. (2002) Best practices: New York State's campaign to implement evidence-based practices for people with serious mental disorders. Psychiatric Services 53, 153–155.</p><p>Chambless D.L. (2002) Further bases of "Practice: debate on empirically-supported treatments". Plenary session, 110th Annual Meeting of the American Psychological Association, August 22–25.</p><p>Chorpita B.F., Yim L.M., Dankervoet J.C., Arensdorf A., Amundsen M.J., McGee C., Serrano A., Yates A., Burns J.A. & Morelli P. (2002) Toward large-scale implementation of empirically supported treatments for children: a review and observations by the Hawaii Empirical Basis to Services Task Force. Clinical Psychology: Science and Practice 9, 165–190.</p><p>Clarke M. & Oxman A.D., eds. (2002) Cochrane Reviewers' Handbook 4.1.5 (updated April 2002). Update Software, Oxford.</p><p>Corrigan P.W., Steiner L., McCracken S.G., Blaser B. & Barr M. (2001) Strategies for disseminating evidence-based practices to staff who treat people with serious mental illness. Psychiatric Services 52, 1598–1606.</p><p>Dans A.L., Dans L.F., Guyatt G.H. & Richardson S. (1998) Users' guides to the medical literature. XIV. How to decide on the applicability of clinical trial results to your patient. Journal of the American Medical Association 279, 545–549.</p><p>Donald A. (2002) Evidence-based medicine: key concepts. Medscape Psychiatry and Mental Health Ejournal 7.</p><p>Drake R.E., Goldman H.H., Leff H.S., Lehman A.F., Dixon L., Mueser K.T. & Torrey W.C. (2001) Implementing evidence-based practices in routine mental health service settings. Psychiatric Services 52, 179–182.</p><p>Eddy D.M. (1984) Variations in physician practice: the role of uncertainty.
Health Affairs 3, 74–89.</p><p>Elliott R. (1998) A guide to the empirically supported treatments controversy. Psychotherapy Research 8, 115–125.</p><p>Elwyn G. & Gwyn R. (1999) Stories we hear and stories we tell: analysing talk in clinical practice. British Medical Journal 318, 186–188.</p><p>Enkin M.W. & Jadad A.R. (1998) Using anecdotal information in evidence-based health care: heresy or necessity? Annals of Oncology 9, 963–966.</p><p>Essock S.M. (1999) State-of-the-art challenges for mental health services research. Journal of Mental Health Policy and Economics 2, 9–12.</p><p>Executive Order (2002) President's New Freedom Commission on Mental Health.</p><p>Feinstein A.R. (1995) Meta-analysis: statistical alchemy for the 21st century. Journal of Clinical Epidemiology 48, 71–79.</p><p>Fox R.E. (2000) The dark side of evidence-based treatment. Practitioner Focus Winter, http://www.apa.org/practice/pf/janoo/cappchair.html</p><p>Frese F.J. III, Stanley J., Kress K. & Vogal-Scibilia S. (2001) Integrating evidence-based practices and the recovery model. Psychiatric Services 52, 1462–1468.</p><p>Geddes J. (1996) On the need for evidence-based psychiatry. Evidence-Based Medicine Nov./Dec., http://www.acponline.org/journals/ebm/novdec96/ebmpsych.htm</p><p>Geller J.L. (2000) The last half-century of psychiatric services as reflected in Psychiatric Services. Psychiatric Services 51, 41–67.</p><p>Ghali W.A. & Sargius P.M. (2002) The evolving paradigm of evidence-based medicine. Journal of Evaluation in Clinical Practice 8, 109–112.</p><p>Giacomini M.K. & Cook D.J. (2000a) Users' guides to the medical literature. XXIII. Qualitative research in health care A. Are the results of the study valid? Journal of the American Medical Association 284, 357–362.</p><p>Giacomini M.K. & Cook D.J. (2000b) Users' guides to the medical literature. XXIII.
Qualitative research in health care B. What are the results and how do they help me care for my patients? Journal of the American Medical Association 284, 478–482.</p><p>Goldman H.H., Ganju V., Drake R.E., Gorman P., Hogan M., Hyde P. & Morgan O. (2001) Policy implications for implementing evidence-based practices. Psychiatric Services 52, 1591–1597.</p><p>Gonzales J.J., Ringeisen H.L. & Chambers D.A. (2002) The tangled and thorny path of science to practice: tensions in interpreting and applying 'evidence'. Clinical Psychology: Science and Practice 9, 204–209.</p><p>Greenhalgh T. (1998) Narrative based medicine in an evidence based world. In Narrative Based Medicine (eds T. Greenhalgh & B. Hurwitz), pp. 247–265. BMJ Books, London.</p><p>Greenhalgh T. & Hurwitz B., eds. (1999) Narrative Based Medicine. BMJ Books, London.</p><p>Greenhalgh T. & Taylor R. (1997) How to read a paper: papers that go beyond numbers (qualitative research). British Medical Journal 315, 740–743.</p><p>Gusfield J.R. (1981) The Culture of Public Problems: Drinking-Driving and the Symbolic Order. University of Chicago Press, Chicago.</p><p>Guyatt G., Haynes R.B., Jaeschke R.Z. et al. (2002) Users' guides to the medical literature. XXV. Evidence-based medicine: principles for applying the users' guides to patient care. Journal of the American Medical Association 284, 1290–1296.</p><p>Guyatt G., Sackett D.L. & Cook D.J. (1994) Users' guides to the medical literature. II. How to use an article about therapy or prevention: B. What were the results and will they help me in caring for my patients?
Journal of the American Medical Association 271, 59–63.</p><p>Haines A. & Donald A. (1998) Making better use of research findings. British Medical Journal 317, 72–75.</p><p>Henggler S.W., Lee T. & Burns J.A. (2002) What happens after the innovation is identified? Clinical Psychology: Science and Practice 9, 191–194.</p><p>Henry W.P. (1998) Science, politics, and the use and misuse of empirically validated treatment research. Psychotherapy Research 8, 126–140.</p><p>Heymann P.B. (1988) How government expresses public ideas. In The Power of Public Ideas (ed. R.B. Reich), pp. 85–108. Harvard University Press, Cambridge.</p><p>Hoagwood K. (2002) Making the translation from research to its application: the Je Ne Sais Pas of evidence-based practices. Clinical Psychology: Science and Practice 9, 210–213.</p><p>Hogan M.F. (2002) A partly built bridge between science and services: commentary on the Hawaii Empirically Based Services Task Force. Clinical Psychology: Science and Practice 9, 200–203.</p><p>Hunter K.M. (1996) Narrative, literature, and the clinical exercise of practical reason. Journal of Medicine and Philosophy 21, 303–320.</p><p>Jonas W.B. (2001) The evidence house: how to build an inclusive base for complementary medicine. Western Journal of Medicine 175, 79–80.</p><p>Klein G. (1998) Sources of Power: How People Make Decisions. MIT Press, Cambridge.</p><p>Knottnerus A. & Dinant G.J. (1997) Medicine based evidence, a prerequisite for evidence based medicine. British Medical Journal 315, 1109–1110.</p><p>Launer J. (1998) Narrative and mental health in primary care. In Narrative Based Medicine (eds T. Greenhalgh & B. Hurwitz), pp. 93–102. BMJ Books, London.</p><p>Lehman A.F. & Steinwachs D.M.
(1998) Translating research into practice: the Schizophrenia Patient Outcome Research Team (PORT) treatment recommendations. Schizophrenia Bulletin 24, 1–10.</p><p>Levels of Evidence – Grades of Recommendations (1998) EBM homepage: http://www.cebm.net/levels_of_evidence.asp</p><p>Lipman T. (2002) Power and influence in clinical effectiveness and evidence-based medicine. Family Practice 16, 213–215.</p><p>McAllister F.A., Straus S.E., Guyatt G.H. & Haynes B. (2000) Users' guides to the medical literature. XX. Integrating research evidence with the care of the individual patient. Journal of the American Medical Association 283, 2829–2836.</p><p>Mace C., Moorey S. & Roberts B., eds. (2001) Evidence in the Psychological Therapies: a Critical Guide for Practitioners. Taylor & Francis, Inc., Philadelphia.</p><p>Mechanic D. (2001) Closing gaps in mental health care for persons with serious mental illness. Health Services Research 36, 1009–1018.</p><p>Miettinen O.S. (1998) Evidence in medicine: invited commentary. Canadian Medical Association Journal 158, 215–221.</p><p>Miles A., Grey J.E., Polychronis A. & Melchiorri C. (2002) Critical advances in the evaluation and development of clinical care. Journal of Evaluation in Clinical Practice 8, 87–102.</p><p>Moore M.H. (1988) What sort of ideas become public ideas? In The Power of Public Ideas (ed. R.B. Reich), pp. 55–84. Harvard University Press, Cambridge.</p><p>Morone J.A. (1994) The bureaucracy empowered. In The Politics of Health Care Reform: Lessons from the Past, Prospects for the Future (eds J.A. Morone & G.S. Belkin), pp. 148–164. Duke University Press, Durham.</p><p>Mowbray C. & Holter M. (2002) Mental health and mental illness: out of the closet? Social Service Review 76, 135–179.</p><p>Mulrow C.D. & Rohr K.N.
(2001) Proof and policy from medical research evidence. Journal of Health Politics, Policy and Law 26, 249–266.</p><p>NAMI (National Alliance for the Mentally Ill) (2000a) Evidence-based treatment gains congressional support. Action Alert. NAMI homepage: http://www.nami.org/update/001027.html</p><p>NAMI (National Alliance for the Mentally Ill) (2000b) NAMI applauds introduction of state Medicaid option for PACT services. Press release, 27 October. NAMI homepage: http://www.nami.org/update/001027.html</p><p>OMH Quarterly (2001a) New York State Office of Mental Health, p. 3.</p><p>OMH Quarterly (2001b) New York State Office of Mental Health, pp. 12–14.</p><p>Parry G. (2000) Evidence based psychotherapy: special case or special pleading? Evidence-Based Mental Health 3, 35–37.</p><p>Pratt D.J. (2000) A critical review of the literature on foot orthoses. Journal of the American Podiatric Medical Association 90, 339–341.</p><p>President's New Freedom Commission on Mental Health (2002) Interim Report. http://www.mentalhealth.org/publications/allpubs/NMH02-0144/default.asp</p><p>Purposes and Procedure (2002) Evidence-Based Mental Health Online. Stanford's High Wire Press, Palo Alto.</p><p>Reich R.B., ed. (1988a) Introduction. In The Power of Public Ideas, pp. 1–12. Harvard University Press, Cambridge.</p><p>Reich R.B., ed. (1988b) The Power of Public Ideas. Harvard University Press, Cambridge.</p><p>Richardson P. (2001) Evidence-based practice and the psychodynamic psychotherapies.
In Evidence in the Psychological Therapies: a Critical Guide for Practitioners (eds C. Mace, S. Moorey & B. Roberts), pp. 157–173. Taylor & Francis, Inc., Philadelphia.</p><p>Rizzo J.D., Seidenfield J., Piper M., Aronson N., Lichtin A. & Littlefield T.J. (2001) Erythropoietin: a paradigm for the development of practice guidelines. American Society of Hematology Education Book, http://www.asheducationbook.org/cgi/content/full/2001/1/10</p><p>Roth D. (2000) New Research in Mental Health 1998–1999 Biennium, Vol. 14. Ohio Department of Mental Health, Office of Program Evaluation and Research, Columbus.</p><p>Sackett D.L. (1986) Rules of evidence and clinical recommendations on the use of antithrombotic agents. Chest 89, 2. Cited in Pratt 2000.</p><p>Sackett D.L., Rosenberg W.M.C., Muir-Gray J.A., Haynes R.B. & Richardson W.S. (1996) Evidence-based medicine: what it is and what it isn't. British Medical Journal 312, 71–72.</p><p>Schattschneider E.E. (1960) The Semi-Sovereign People: A Realist's View of Democracy in America. Harcourt Brace Jovanovich, New York.</p><p>Schon D.A. (1983) The Reflective Practitioner: How Professionals Think in Action. Basic Books, New York.</p><p>Seligman M.E.P. (1995) The effectiveness of psychotherapy: the Consumer Reports study. American Psychologist 50, 965–974.</p><p>Silverman W.H. (1996) Cookbooks, manuals, and paint-by-numbers: psychotherapy in the 90s. Psychotherapy 33, 207–215.</p><p>Stone D.A. (2002) Policy Paradox: the Art of Political Decision-Making, revised edition. Norton, New York.</p><p>Sturm R. (1999) What type of information