Talk:Evidence-based medicine/Draft

From Citizendium
Revision as of 13:46, 20 November 2007 by D. Matt Innis (→Steps in evidence-based medicine: new section)


I will be glad to help author here, and would like to go over a plan for the article. I think that, as this article covers a special sort of medical field, we should discuss "audience". Please, fellow editors, argue with any of these points if they differ from your understanding. Evidence-based medicine is certainly all about clinical care of patients, but, unlike an article on dermatology, say, it really is about a way of thinking about medicine, an approach. Reading what is written so far, it is really meaty and presents that approach, but, in my mind, suffers from two faults: (1) there is too much technical language without explanation, and (2) the history of medicine (in a way) has to be presented so that the naive reader understands that, actually, "regular medicine" is not evidence-based. I think also that including some real examples of changes in clinical practice that are based on evidence-based medicine may be helpful. I am going to add some of this and am open to discussion, especially from Supten. Nancy Sculerati 09:35, 15 May 2007 (CDT)

References-with notes

O'Malley P. Order no harm: evidence-based methods to reduce prescribing errors for the clinical nurse specialist. [Review] [17 refs] [Journal Article. Review] Clinical Nurse Specialist. 21(2):68-70, 2007 Mar-Apr. UI: 17308440. Classed under evidence-based medicine by Ovid (Medline); this article reviews actual sources of medication errors.

Doumit G. Gattellari M. Grimshaw J. O'Brien MA. Local opinion leaders: effects on professional practice and health care outcomes.[update of Cochrane Database Syst Rev. 2000;(2):CD000125; PMID: 10796491]. [Review] [54 refs] [Journal Article. Review] Cochrane Database of Systematic Reviews. (1):CD000125, 2007. UI: 17253445

Lorenz LB. Wild RA. Polycystic ovarian syndrome: an evidence-based approach to evaluation and management of diabetes and cardiovascular risks for today's clinician. [Review] [60 refs] [Journal Article. Review] Clinical Obstetrics & Gynecology. 50(1):226-43, 2007 Mar. UI: 17304038

Jordan A. McDonagh JE. Transition: getting it right for young people. [Review] [29 refs] [Journal Article. Review] Clinical Medicine. 6(5):497-500, 2006 Sep-Oct. UI: 17080900

Thanigaraj S. Wollmuth JR. Zajarias A. Chemmalakuzhy J. Lasala JM. From randomized trials to routine clinical practice: an evidence-based approach for the use of drug-eluting stents. [Review] [48 refs] [Journal Article. Review] Coronary Artery Disease. 17(8):673-9, 2006 Dec. UI: 17119375

Stanley K. Design of randomized controlled trials. [Review] [9 refs] [Journal Article. Review] Circulation. 115(9):1164-9, 2007 Mar 6. UI: 17339574

Sectioning

Are there perhaps more sections than are useful here? CZ:Article Mechanics recommends against many relatively short sections in favor of relatively few, longer sections. But I don't think we have any very hard-and-fast rules about this.

Glad to see you here, Dr. Badgett! --Larry Sanger 22:01, 23 October 2007 (CDT)

Thanks - Robert Badgett 22:37, 31 October 2007 (CDT)

'Main' template not working

I added a new call to the main template, and now all three calls are not displaying correctly. - Robert Badgett 22:37, 31 October 2007 (CDT)

Misuses of EBM

The article ignores the misuses of EBM in the real world. Very few of the methods actually used in medicine have ever been validated by independent prospective randomized double-blind studies, or are likely to be. The main use of EBM is by HMOs and other prepaid managed care organizations, as an excuse to refuse to pay for expensive studies or treatments, while happily paying for inexpensive, untested, unproven treatments, such as herbal and other "alternative" medicines. I do not think this misuse of EBM should be ignored in this otherwise wholly laudatory article. Harvey Frey 17:20, 12 November 2007 (CST)

Hi!
The phrase "there is no evidence that" is becoming a little too frequent in clinical medicine. I suggest these two articles for inclusion; unfortunately I cannot access them (full text) right now.
J Med Ethics 2004;30:141-145 Evidence based medicine and justice: a framework for looking at the impact of EBM upon vulnerable or disadvantaged groups. W A Rogers
S I Saarni and H A Gylling Evidence based medicine guidelines: a solution to rationing or politics disguised as science?
J. Med. Ethics, Apr 2004; 30: 171 - 175.
May I summarize the two abstracts in the Criticisms section?
Pierre-Alain Gouanvic 23:34, 12 November 2007 (CST)

Problem with the references

Somewhere around the 50th reference, there is a bug. Can someone fix this? Pierre-Alain Gouanvic 23:47, 12 November 2007 (CST)

Great! Pierre-Alain Gouanvic 13:50, 13 November 2007 (CST)

Criticisms that may be incorporated into the Section

I think more needs to be added about the sources of much so-called EBM, whether from groups interested in minimizing expenses of government health plans, like the Cochrane group, or from medical auditors interested primarily in maximizing profits of private HMOs, like Milliman & Robertson. There also needs to be a fair admission of how little of accepted medical practice has actually been validated by 'gold-standard' studies. When should a procedure be denied based on lack of EBM support? And, to what extent are surrogate measures acceptable when, say, survival data is unavailable? For instance, in Radiation Oncology (my own specialty): if you know that higher radiation doses kill more cancer cells, and high doses are usually limited by doses to surrounding tissues, and if you can show that some new technique gives less dose to surrounding tissues, thus allowing higher doses to cancers, is it irrational to take that as evidence that the new technique is superior? Must an HMO insist on a prospective randomized double-blind study using 20-year survival as an endpoint before allowing use of the new technique? The other issue is the extent to which 'cost' should be involved in EBM studies, and if it IS allowed, what should be the conversion factor between dollars and years of life, or dollars and years of pain-free life. Should we EVER do a coronary bypass operation, given that the same number of dollars could save thousands of lives if spent on malaria prevention instead? But, WOULD the dollars saved be spent on malaria prevention, or would they go to executive perks and stockholder dividends? One doctor in California recently received almost a billion dollars selling his share of an HMO. Those were dollars not spent on medical care, often justified by calling some procedure "not medically necessary", or "investigational"! And, what weight should be given to the EBM "guidelines"? Should they be used to overrule the decision of the primary doctor on the case? 
If so, who takes responsibility for adverse results? The clerk who countermanded a doctor's order based on an M&R cookbook? Harvey Frey

I think these are all legitimate issues. What we have so far is a pretty mainstream article, your stuff would help. Much of this could be added to the 'criticisms' section, which is currently sparse. Some of what you suggest might be better on the clinical guidelines page. Robert Badgett
Here's another example: http://www.careguidelines.com/ An entirely PROPRIETARY set of "EBM Guidelines" from Milliman, originally a hospital accounting firm, based on no known public peer review, widely sold to managed care organizations in the US, for the express purpose of controlling cost. And, of course, they come with disclaimers, to avoid liability if anyone is injured by one of their clients using them. I do remember a case in California a few years ago in which they figured prominently, when a hospital prematurely discharged a woman post-delivery based on these guidelines. Unfortunately, it wasn't a reported appellate case, so I'm having trouble finding it now. Harvey Frey
Interesting. I cannot find their guidelines to assess their methods, but from your description, it sounds like they hijacked the label evidence-based. Robert Badgett
If I understood you correctly, the example you provide from oncology:
For instance in Radiation Oncology (my own specialty) if you know that higher radiation doses kill more cancer cells, and high doses are usually limited by doses to surrounding tissues, and if you can show that some new technique gives less dose to surrounding tissues this allowing higher doses to cancers, is it irrational to take that as evidence that the new technique is superior?
is an illustration of the difficulty of using causal inferences and, for that matter, common sense, in the framework of EBM. I unearthed something like a little gem, which could be useful in defining EBM from the practitioner's and patient's point of view (I'm not saying that this article is "one of its kind" though): Critique of (im)pure reason: evidence-based medicine and common sense [1]
While the goal of evidence-based medicine (EBM) is certainly laudable, it is completely based on the proposition that 'truth' can be gleaned exclusively from statistical studies. In many instances, the complexity of human physiology and pathophysiology makes this a reasonable, if not necessary, assumption. However, there are two additional large classes of medical 'events' that are not well served by this paradigm: those that are based on physically required causality, and those that are so obvious (to the casual observer) that no self-respecting study will ever be undertaken (let alone published). Frequently, cause-and-effect relationships are so evident that they fall into both categories, and are best dealt with by the judicious use of common sense. Unfortunately, the use of common sense is not encouraged in the EBM literature, as it is felt to be diametrically opposed to the very notion of EBM. As is more fully discussed in the manuscript, this active disregard for common sense leaves us at a great disadvantage in the practical practice of medicine.
I believe that this criticism is important because it brings into bright light the relationship between EBM and fundamental research: the latter deals with complex cause-and-effect relationships, the former with specific effects, out of the black box of human physiology. Pierre-Alain Gouanvic 12:05, 14 November 2007 (CST)

Some problems

"Evidence-based medicine seeks to promote practices that has been shown, through the scientific method to have validity by empiric proof." This needs re-thinking; I think that what is meant here is "promoting practices the effectiveness of which has been supported by stringent statistical analysis of the results of carefully controlled clinical studies."

Evidence-based medicine is not science-based medicine. Science-based medicine works from a fundamental understanding of basic mechanisms to generate a rationally designed intervention strategy. Not all medical interventions are actually based in science in this sense (and some would say that relatively few are). More commonly, they are based empirically on experience of what actually works, and the scientific rationale or explanation comes later (if at all).

Most importantly here though, the scientific method would test the explanations for the effectiveness of particular treatments by hypothesis-based experimental testing. Whether this has been done or not would not really influence the decision to use a particular intervention or not. Gareth Leng 03:55, 14 November 2007 (CST)


I haven't checked the references, only put them into what I think is a style consistent within the article and consistent with Biology workgroup style; I've shortened author lists to et al. when there are more than 2 authors and omitted issue numbers as redundant, generally to try to keep the list concise for printing. My general feeling is that it seems over-referenced - I'd be wary of this, as a large current reference list becomes outdated fast; a smaller list of elite core references has a longer shelf life. The size is also a burden for verification. However, it's a very nicely written, very helpful article. I'd just return to the use of the word "proof", which I'd strongly urge that you avoid. Scientists would rarely consider anything to be proved; the evidence might be strong enough to accept a conclusion (provisionally), but if a conclusion rests on statistics then there is always a margin for error. Gareth Leng 06:51, 14 November 2007 (CST)

required fixes, self approval?

Several things need fixing prior to approval. The article needs to be consistent, i.e., both "evidence-based" and "evidence based" occur in the article, as well as minor typos. At least two sections are completely empty somewhere near the bottom, including the "Apply" and "Assess" sections. They need to be removed or expanded. Finally, the nominating editor appears to have created and written on this page. I suggest removal of the nomination, a careful read and editing, and then re-nomination. David E. Volk 09:18, 14 November 2007 (CST)

Studies of effectiveness section

The last sentence in this paragraph is not a sentence. I can't figure out what was meant. I inserted EBM in a few sentences where it seemed to be missing. David E. Volk 10:15, 14 November 2007 (CST)

Is this ready for approval?

Several sections were blank. I added text from related articles to give an overview, but we can't have an approved article with blank sections. Also, it seems incomplete in places. In particular, there are four cases of a single subsection in a hierarchy. This seems to imply there is another subsection that could be added. If not, then the subsection seems unnecessary. For example:

7 Incorporating evidence into clinical care
7.1 Medical informatics
7.2  ?
8.3 Clinical reasoning
8.3.1 Improving clinical care
8.3.2  ?
9.4 Apply
9.4.1 Clinical reasoning
9.4.2  ?
10.3 Epistemology
10.3.1 Complexity theory
10.3.2  ?

In all these cases it seems like there should either be another subsection or the x.x.1 subheading is not required. I don't know enough about the topic to know what the ? might be. Chris Day (talk) 22:44, 14 November 2007 (CST)


What does this mean?

I was trying to edit the second paragraph in the first section but came to the conclusion that I did not really know what it means.

"Evidence-based medicine seeks to promote practices that have been shown, through the scientific method to have validity by empiric proof. As such, it currently encompasses only a few of the actual practices in clinical medicine and surgery. More often, recommendations are made on the basis of best evidence that are reasonable, but not proven. Evidence-based medicine is also a philosophy, however, that seeks to validate practices by finding proof."

The first sentence I would change to:

"Evidence-based medicine seeks to promote practices that have been shown to have validity using the scientific method."

Reading the second sentence, I'm unsure what the point is. Is the implication that the actual practices in clinical medicine and surgery do not follow the scientific method? If so, does this even need to be said? It is redundant with the first paragraph.

The third sentence seems redundant with the first sentence. It says the same as "promote practices that have been shown to have validity using the scientific method"

The last part, relating to philosophy, loses me. Philosophy and EBM seem to be the opposite of each other, but this sentence seems to be saying they are both? I find this very confusing. I hope these comments are useful. Chris Day (talk) 23:16, 14 November 2007 (CST)

It looks like there's some redundancy in it, from an outsider's point of view. Even when it says that "Evidence-based medicine is also a philosophy, however, that seeks to validate practices by finding proof", it seems to read that A is something that relies on B, but A also seeks to prove B. I would not use the word "philosophy" but rather restate it as "Part of the ultimate goal of evidence-based medicine is to validate practices by establishing proof of the results." --Robert W King 23:41, 14 November 2007 (CST)

Isn't that exactly the same as the first sentence in the paragraph? Here I slightly reworded it and you'll see what I mean.

"Part of the ultimate goal of evidence-based medicine is to validate practices by using the scientific method."

It seems to me that the whole paragraph distills down to the first sentence:

"Evidence-based medicine seeks to promote practices that have been shown to have validity using the scientific method."

Chris Day (talk) 23:50, 14 November 2007 (CST)

If the rest of the paragraph is superfluous because of redundancy, I'd probably just remove the errant content! --Robert W King 00:24, 15 November 2007 (CST)
That's what I have done. Let's see what the health editors think. Chris Day (talk) 00:35, 15 November 2007 (CST)

Industry and publication bias

Reading this again, it struck me that an important part of meta-analysis and appraisal is neglected. Most studies that are misleading are, I think, misleading because of flaws in design, conduct, or analysis. I don't know that I've ever seen a study where legitimate criticisms can't be raised that might affect the interpretation. A good meta-analysis grades the quality of the trials, weighting the outcomes by quality, and I think attempts to come to a global recommendation on the basis that, while individual trials might be imperfect for different reasons, when collectively they come to a common conclusion, that conclusion is probably reliable.

The issue of publication bias works two ways. First, negative or inconclusive results are less likely to be reported. Second, positive results may be more likely to be reported when they are confirmatory of already published findings even when the quality of the trial is poor.

Overall, industry-sponsored trials are given a rough ride here. It should be remembered I think that, without industry sponsorship, there would be far fewer trials in the first place. The quality of studies very much depends on the integrity and competence of the academic or clinical scientists conducting them. We as academics can't blame industry for our own shortcomings. I do think that the major pharmaceutical companies try to find academic partners whose integrity and competence are unimpeachable; it's very much in their interests to do so, whatever the outcome of the trials.

Gareth Leng 03:59, 16 November 2007 (CST)

There are many "jargon terms" introduced here, e.g. relative risk ratio, relative risk reduction, absolute measures, absolute risk reduction, number needed to treat, number needed to screen, number needed to harm.

I wonder if it would be sensible to add a glossary as a subpage that gave definitions of these? Or is there some other solution? Perhaps there's a case for making a stub for each of these, with a brief definition and an external link. Gareth Leng 07:17, 16 November 2007 (CST)
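For reference, the relative and absolute measures listed above are simple arithmetic transforms of two event rates, which any glossary entry could show. A minimal sketch (hypothetical numbers, not drawn from the article; `risk_measures` is an illustrative name):

```python
def risk_measures(control_event_rate, treated_event_rate):
    """Derive the common effect measures from two event rates."""
    cer, eer = control_event_rate, treated_event_rate
    rr = eer / cer        # relative risk (risk ratio)
    rrr = 1 - rr          # relative risk reduction
    arr = cer - eer       # absolute risk reduction
    nnt = 1 / arr         # number needed to treat
    return {"RR": rr, "RRR": rrr, "ARR": arr, "NNT": nnt}

# Hypothetical trial: 10% of controls vs 5% of treated patients have the event
measures = risk_measures(0.10, 0.05)
```

With these numbers the relative risk reduction is 50% while the absolute risk reduction is only 5 percentage points (NNT of 20), which is exactly why the article should spell the terms out: the relative figure sounds far more impressive than the absolute one.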

I like the glossary idea on subpages. It goes along with the definitions template that Larry created as well. --Matt Innis (Talk) 07:56, 16 November 2007 (CST)

Delayed approval

Although this article is outside of my sphere of competence, I recommend delaying the approval for at least 7 days, so that changes can be made. There are too many comments here from senior scientists, which may not be taken into account if the approval occurs as scheduled. Please indicate your support or opposition for my proposal of a short delay. --Martin Baldwin-Edwards 08:40, 16 November 2007 (CST)

I think 7 days may not be enough unless we are able to fill in the blank spaces toward the bottom and clean up the criticism section. I have added the Healing Arts Workgroup as this article affects them as well. --Matt Innis (Talk) 07:46, 16 November 2007 (CST)

I agree that some delay is sensible. I added a stub for Odds ratio as an example for my comment above; sadly, I noticed too late that this is a term omitted (d'oh). Gareth Leng 08:49, 16 November 2007 (CST)

Notice what I did with odds ratio. You put the name in the r template like this {{r|odds ratio}} and then click on the little 'e' and put in the definition. Then it shows up anywhere we put that on any page about odds ratio. --Matt Innis (Talk) 08:31, 16 November 2007 (CST) Thanks. Gareth Leng 09:35, 16 November 2007 (CST)

Cut this?

I cut this short section. It is in the criticism section, and I couldn't identify in what respect there is any criticism. I guess I think that this may be interesting but is probably only tangential to the article?

Complexity theory

Complexity theory is proposed as further explaining the nature of medical knowledge.[2][3]

Gareth Leng 09:35, 16 November 2007 (CST)

Restored, renamed, and expanded this section with context. See what you think. - Robert Badgett 08:57, 16 November 2007 (CST)
Complexity theory needs to be defined or explained. --Matt Innis (Talk) 09:06, 16 November 2007 (CST)
Liked what you did, Robert; also agree with Matt - maybe, though, it's just again that Complexity theory needs a stub rather than a definition here. Gareth Leng 11:54, 16 November 2007 (CST)
That might work, depending on how you do it, but I like having at least some clarity here, and then they can click on the link for more, especially for those more scientific terms whose meanings cannot be easily inferred by the average reader. --D. Matt Innis 11:44, 16 November 2007 (CST)

??

"were more likely to adopt COX-2 drugs before the drugs were recalled by the FDA"

well yes, they would be, wouldn't they? But were they recalled by the FDA? Gareth Leng 08:54, 16 November 2007 (CST)

Quality of references

I have started to do some checking of the references. I looked at this: "A randomized controlled trial supports the efficiency of this approach.[7]", which is a reference used 3 times. This is a study of 32 medical students assigned to one of 2 search protocols. Frankly, it has statistical weaknesses; most obviously, the analysis is based on the numbers of answers to questions, not on the individual performances; as the individuals are independent but their answers are obviously not, this seems inappropriate. I don't mean to rubbish this small trial, only to say that I think, especially in this article, we should set the bar for citing studies as evidence at an appropriately high level - i.e. at a level appropriate for the topic. Using small or poorly controlled studies as evidence to support conclusions about EBM is surely not what we want to do? I really would recommend trimming the references in the article down to a sustainable core of unimpeachably strong studies. Gareth Leng

Agreed. Considering we have the expertise here, we don't need to reference things that are reasonably understandable unless there is a conflict or questionable synthesis of the information. --Matt Innis (Talk) 09:47, 16 November 2007 (CST)

Disagree. I think trimming this is one of the same mistakes that we accuse EBM of doing. We accuse EBM of only acting on RCT data and not allowing lesser data with expert judgment (note the new section someone added about parachutes). I missed that the trial did not adjust for intrauser correlation. I suggest noting that the trial had issues, but not delete the trial. If you delete this trial, then you are left without guidance on how to teach searching, so we are in the situation that we are not sure whether to use parachutes because there are no RCTs of parachutes. The results of this trial are very plausible based on other studies of how much time various search strategies require, so if you take a Bayesian approach to significance testing, this trial is ok, even with its statistical problems.
I do not see a reason to be parsimonious with references. Documenting sources for arguments, and not throwing out unsupported opinions, is a major function of CZ. - Robert Badgett 11:41, 18 November 2007 (CST)

alt med

I added a blurb about alt med being evaluated using EBM as well. Feel free to clarify or clean it up. I think it is important that this method might be the way to evaluate the claims made by techniques that are subject to bias. --Matt Innis (Talk) 09:40, 16 November 2007 (CST)

COX-2 drugs = COX-2 inhibitors?

Robert, can you rephrase COX-2 "drugs" to be more descriptive?

done - Robert Badgett 10:57, 18 November 2007 (CST)

Server timing

In case anyone is wondering, the timing of the server is way off and clocks are not valid, so the order of edits in the history do not necessarily follow when they were actually made, i.e. Gareth's last edit was actually made before mine, but shows an hour later. --Matt Innis (Talk) 09:42, 16 November 2007 (CST)

file drawer

This sentence:

  • In performing a meta-analysis, a file drawer[15] or a funnel plot analysis[16][17] may help detect underlying publication bias among the studies in the meta-analysis [1].

sure could use a quick explanation of file drawer and funnel plot. --D. Matt Innis 10:25, 16 November 2007 (CST)
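For what it's worth, the "file drawer" idea can be made concrete with Rosenthal's fail-safe N, which estimates how many unpublished null studies would be needed to bring a pooled result down to non-significance. A minimal sketch with made-up z-scores (illustrative only, not taken from the cited references; `fail_safe_n` is a hypothetical helper name):

```python
def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: how many unpublished studies averaging
    z = 0 would dilute the combined result to p > .05 (one-tailed)."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    return z_sum ** 2 / z_alpha ** 2 - k

# Five hypothetical published studies, all modestly positive
n_drawer = fail_safe_n([2.1, 1.8, 2.5, 1.9, 2.3])
```

A small fail-safe N relative to the number of included studies suggests the pooled result could easily be an artifact of publication bias; a funnel plot makes the same point graphically, by plotting effect size against study precision and looking for asymmetry.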

Really ready for approval?

I just corrected a grammar error in the intro. It looked like something had been deleted and left a dangling "and". Also, the very last paragraph in the intro is the following.

"Evidence-Based Health Care or evidence-based practice extends the concept of EBM to all health professions, including purchasing and management [2]."

Here the reference cited does not seem to give any information with regard to purchasing and management and evidence-based practice. If it does, it is not obvious to me as a layperson. Also, this reference does not use the <ref></ref> format. I know these are minor issues, but if these are in the intro, what is the rest of the article like? Chris Day (talk) 17:38, 17 November 2007 (CST)

Those references are much better. Is there any chance to have fewer than seven? It seems like overkill, or is each article very different? Can it be narrowed down to the best two or three? Chris Day (talk) 11:32, 18 November 2007 (CST)

Addition of a radical critique

I was encouraged to try to find a place for another critique of EBM, which relates to the (highly condemnable!) use of common sense under the EBM paradigm (http://en.citizendium.org/wiki/Evidence-based_medicine#EBM_not_recognizing_the_limits_of_clinical_epidemiology). Thanks, Matt, I needed some courage. I dared to be bold and to formulate a general synthesis. It looks like the final criticism (about the fallibility of knowledge), but I think it provides historical and political perspective. I think that the very last part is important, even though it is dangerously funny. Some work should be done to contrast those two similar critiques. Pierre-Alain Gouanvic 02:09, 18 November 2007 (CST)

Criticisms of EBM - are these actually criticisms of nefarious expropriators of EBM?

Two of the sections in criticisms, "Unethical use of placebos" and "Ulterior motives", might be more appropriate to view as criticisms of those who try to use the EBM paradigm for their advantage. Does anyone see a better way to organize to reflect this?

Also, as the EBM article fills out, we need to think about some of the details shifting to other articles, specifically the randomized controlled trial and clinical practice guideline articles. - Robert Badgett 10:19, 18 November 2007 (CST)

Hi, I think I see a way to organize this elegantly. In my readings on the EBM debate, I have encountered a criticism that is often missed (because it is philosophical/socioeconomic), but which is by definition essential. The authors asked: are we debating EBM as it is, or EBM as it should be/is at its best/is according to textbooks? Those critics said that most of the debate about EBM was misled because it diverted attention from its actual uses, from what it is empirically (according to the evidence!), and, in doing so, was letting wrongdoers continue their use of evidence for their own sake. I guess the most important point was: before we discuss, let's agree on the definition we'll use (and I'd add: that's relevant to the whole article we're working on). There's irony in this situation. If we use evidence-based analysis of evidence-based medicine, we'll indeed be led to recognize that EBM (by whatever sociopathologic mechanism; "let's not lose ourselves in theories", as we'd say in EBM language):

  • helps the pharmaceutical monopolies by giving prestige to the notion of placebo where it should be called medical negligence (see unethical use of placebos), and sustains an anticompetitive practice (all medications are superior to placebo; telling them apart is almost impossible, so all leading companies are happy with that because they get their share of the market);
  • is a tool to deny treatment ("there's no evidence that...") (Cf Ulterior motives);
  • feeds the research industry indefinitely but leaves clinicians and patients powerless ("more research is necessary..."). There is an interesting study on that:
RESULTS: We analysed 1016 completed systematic reviews. Of these, 44% concluded that the interventions studied were likely to be beneficial, of which 1% recommended no further research and 43% recommended additional research. Also, 7% of the reviews concluded that the interventions were likely to be harmful, of which 2% did not recommend further studies and 5% recommended additional studies. In total, 49% of the reviews reported that the evidence did not support either benefit or harm, of which 1% did not recommend further studies and 48% recommended additional studies. Overall, 96% of the reviews recommended further research. Conclusions: Cochrane systematic reviews were about evenly split between those in which the authors concluded that at least one of the interventions was beneficial and those in which the evidence neither supported nor refuted the intervention tested. The Cochrane Collaboration needs to include clinical trial protocol summaries with a study design optimized to answer the relevant research questions.
  • creates the notion of a truth that can be bought (who will pay for an RCT to know whether we should treat vitamin D deficiency in dark-skinned people in Northern latitudes? Treatment would seem to be common sense, but a recent study recommends further research on vitamin D deficient minorities. Who will pay for an RCT comparing treatment with a drug whose patent has expired, or that never had one, against a new, proprietary drug? That is the main point made by Marcia Angell, former NEJM editor and vocal critic of the current state of affairs);
  • based on the fiasco of hormone replacement therapy in menopause, and the terror it caused, it turns most researchers away from new, relevant pathophysiologic models of disease (because "theories are dangerous, look at the HRT trial!").

... and I guess the list could go on a little more. So, to formulate the problem as I see it (see next section:) (Pierre-Alain Gouanvic 16:04, 18 November 2007 (CST))

Evidence-based assessment of evidence-based medicine

An evidence-based assessment of the impact of evidence-based medicine is still lacking. The intense debate over the impact of EBM on clinical practice, health care systems, attitudes towards scientific literacy, research agendas, the choice of medical treatments, and which health determinants will be assessed is not conducted using the "current best evidence", but often focuses on theoretical questions raised by the clinical epidemiology and evidence-based medicine paradigms. Critiques of evidence-based medicine can thus be divided into two main categories: theory-based critiques, and evidence-based critiques and assessments of evidence-based medicine.

Evidence-based critiques

Unethical use of placebos

Cf article. In summary: since the introduction of EBM, and its adoption by health agencies such as the US FDA, the use of placebos has become more prevalent, in violation of the Helsinki Declaration.

Rationing and negligence

(cf ulterior motives)

Changes in research agendas

Draft: since the adoption of EBM methods by health care systems and research consortia, the funds devoted to fundamental research have declined while the funds allocated to the evaluation of non-innovative drugs (me-too drugs) have risen sharply (cf Marcia Angell's critique). I think there is solid evidence that should be gathered and integrated here.

Lack of randomized controlled trials for clinical decisions

(cf article). In addition: but is this lack an unfortunate consequence of extraneous factors, or is it constitutive of evidence-based medicine as it now stands, as defined by the best available evidence? A Cochrane meta-analysis of Cochrane meta-analyses suggests that EBM meta-analyses are usually not designed to guide clinical decisions (cf quote above: "The Cochrane Collaboration needs to include clinical trial protocol summaries with a study design optimized to answer the relevant research questions."; the ref is: J Eval Clin Pract. 2007 Aug;13(4):689-92. Mapping the Cochrane evidence for decision making in health care. El Dib RP, et al. The Brazilian Cochrane Centre - Universidade Federal de São Paulo, São Paulo, Brazil.)

Summary of the evidence about evidence-based medicine

Draft: Although it is too early to infer causation, there appears to be a trend in ... towards ... associated with the introduction of EBM methods and practices. More research is necessary to show whether EBM caused such changes or was an innocent bystander. Any thoughts?

Theory-based critiques

EBM not recognizing the limits of clinical epidemiology

(cf article)

Fallibility of knowledge

(cf article)

Pierre-Alain Gouanvic 16:04, 18 November 2007 (CST)

Need consistency

I have noticed that the text contains examples of "evidence-based medicine", "Evidence-Based Medicine" and EBM. Does the article intend to use the acronym throughout? That seems to be the case from the intro. Should all instances of "evidence-based medicine" and "Evidence-Based Medicine" be changed to EBM? Chris Day (talk) 11:44, 18 November 2007 (CST)

Apply section

In this section is the following sentence:

"Both patients and healthcare professionals have difficulties with health numeracy and probabilistic reasoning.[29]"

How do these problems in numeracy result in a misapplication of EBM? This needs to be more explicit in the article. There is a reference for this, so what's the conclusion of the reference with respect to the misapplication of EBM? Chris Day (talk) 14:20, 18 November 2007 (CST)

Also in this section is the following sentence:

"Specialists may be less discriminating in their choice of journal reading. [39]"

This does not seem to tie in with the rest of the section. I was going to rewrite it but I'm not sure what point is being made here. How does this fact relate to the over- or under-use of any evidence-based method? I would expect from the context (specialists tend to overuse EBM) that it is trying to suggest that results and usage of treatments from EBM are not generally reported in specialist journals? But this cannot be right given the sentence above. Wouldn't a specialist be more aware of EBM if they are less discriminating in the journals they read? Chris Day (talk) 12:11, 18 November 2007 (CST)

Last discussion item

As a rule, in discussions pages, it is good to add comments at the very end... I did not respect this rule, and now, my comment is difficult to trace back. Please go to http://en.citizendium.org/wiki/Talk:Evidence-based_medicine#Criticisms_of_EBM_-_are_these_actually_criticisms_of_nefarious_expropriators_of_EBM.3F to read the comment from the start. Thanks! Pierre-Alain Gouanvic 16:19, 18 November 2007 (CST)

Acquire section

As a layman reading this article, I have no idea what to make of the Acquire section. Why do doctors acquire as many wrong answers as correct ones? What kind of data are we talking about: studies that are already published, new studies, or both? This section needs to be a lot clearer. Chris Day (talk) 22:49, 18 November 2007 (CST)

Clinical_reasoning/teaching section

As above, I just find this Clinical reasoning section very hard to interpret. For example, the second sentence states:

"In addition, medical experts rely more on pattern recognition which is faster and less prone to error[84];"

But less prone to error than what? Surely not better than the Bayesian analysis implied in this context. Then there is a list of teaching strategies that does not really put the different approaches in perspective, and there is a table that is not even mentioned in the text. What does the table mean? What is 66% vs cell B? Maybe I could figure this out if I worked at it and read all the references, but this article should stand on its own. Chris Day (talk) 23:17, 18 November 2007 (CST)

Problems

I have started to check the quality of the references. I looked at this: "A search strategy similar to the 5S strategy should be taught for use when the searcher has limited time available during clinical care. This is based on one positive study of its use[14] and two negative studies[76][77] of teaching the use using secondary and primary publications." The first reference is used 3 times in the article. It is a small study of 32 medical students assigned to one of two search strategies. The study has obvious flaws; most clearly, the statistical analysis is based not on individual performance but on the numbers of answers. As the participants may be regarded as independent but their answers clearly are not, this is inappropriate. My point is not to rubbish this small study, but that in an article on EBM, above all, the quality of evidence cited should surely be consistently high. In fact I am worried that references here are being used as though they are citations to fact, not to evidence which itself needs appraisal. As editors it's our job to appraise, sift, and present a balanced account, not leave it to the reader.

I think the article does have major problems at present. In my opinion, at the least, an article must be understandable on its own; some sections are not comprehensible, but are essentially a guide towards relevant literature for those who will recognise the key words. The test is simple: if we ignore all the references, does the text actually make sense? In some cases I think it just doesn't.

My suggestions are 1) Shift the section on metrics to a subpage; it's just a list. 2) Create a separate article on Teaching EBM. This section perhaps has most problems and needs most work, and is essentially tangential. 3) Shift some of the references away from being cited in the text (as though they were reliable authorities) to a bibliography subpage, grouped by topic, as a guide to further reading, with some explanation. 4) Thus try to reduce the main article to a clear overview with minimal referencing only to major significant literature overviews.

At present, I can't support approval. This is a very important topic and this is a very good start, but I think we have some way to go yet. Gareth Leng 03:55, 19 November 2007 (CST)

OK, I've created two new articles, one on Teaching EBM and one on EBID, as these, it seemed to me, presented separate problems of explanation and introduction which have to be addressed. I think that this de-bulks the present main article, allowing us to focus on this core content? Gareth Leng 05:51, 19 November 2007 (CST)

I think the two new subpages help. Similarly, I wonder if much of the content in the following two sections could be shifted to other CZ articles - some of which already exist:

It would be nice to keep the metrics. Eventually these should become short descriptions that lead to a main article; however, that requires time.

This approval process has been interesting. However, it is extremely time consuming which I think needs addressing. - Robert Badgett 07:34, 19 November 2007 (CST)

The time consumption for this article's approval is not usual. It may reflect an abundance of expertise in the area on CZ. Given that there are many revisions proposed and we have gone beyond the article approval date, would one of the nominating editors consider removing the template for the moment? It should be reinstated when you (three) feel confident of approval in the light of comments and changes proposed here. --Martin Baldwin-Edwards 07:40, 19 November 2007 (CST)
Yes, please remove the nomination. I am listed as supporting the nomination - am I allowed to remove the template? - Robert Badgett 07:45, 19 November 2007 (CST)
As I understand CZ rules, when 3 editors nominate an article for approval, only one of those can remove the nomination. I have removed the template on your behalf, Robert. --Martin Baldwin-Edwards 08:19, 19 November 2007 (CST)

I think in this case (as perhaps often) nomination for Approval leads to a flurry of interest and (I hope) constructive input, and issues can come to light that are complex. In this case, the article is such an important one, and so large in scope, that perhaps it's not surprising that it won't be easy. My last change (shifting the Classification section) was my first response to comparing this article with the Wikipedia article. That section was a straight duplication. Gareth Leng 07:52, 19 November 2007 (CST)

I cut the statement that specialists are less discriminating in their reading than general physicians. It seemed surprising; the study referenced actually reports "Although they accessed many of the same journals as did the primary care physicians, the specialists accessed journals almost twice as often and accessed a greater number of more specialized journals, consistent with their clinical populations. " Gareth Leng 08:07, 19 November 2007 (CST)

This makes sense; I flagged that one above too, as I thought it did not ring true. Chris Day (talk) 08:09, 19 November 2007 (CST)

Moved Conflicts of interest -> Medical ethics in line with Robert's suggestion above. Gareth Leng 10:41, 19 November 2007 (CST)

Classification section

Why was the classification section deleted? I think Eddy made an important construct here - about how EBM focuses excessively on the doctor-patient dyad when focusing on the system would have more impact. Can we restore this? - Robert Badgett 16:06, 19 November 2007 (CST)

Hi Robert, I didn't delete the content, just moved it into the lead. I altered the format (lost the headings) because I noticed it was a straight copy of what was on WP.Gareth Leng 06:22, 20 November 2007 (CST)

I wrote it at WP. See what I did now - I did not want the thought to be missing from the TOC. - Robert Badgett 08:59, 20 November 2007 (CST)
Looks good to me. Gareth Leng 11:54, 20 November 2007 (CST)

Specialist physicians' reading habits

The abstract of the article states "Primary care physicians, more so than specialists, chose full-text articles from clinical journals deemed important by several measures of value". It does not seem correct to delete this just because the result is a surprise. If you want to interpret the study differently, then I suggest adding your interpretation or editing mine, but do not delete unless you think the article is seriously flawed (unlikely, as the McMaster group is pretty careful). Regarding it being a surprise, it is in line with 1) observations that specialists are more likely to accept new therapies of unproven benefit, and 2) the fact that specialist journals in general are not rated as highly by the McMaster/ACPJC methods. There is a difference between reading more journals and reading better journals. It is a provocative study that should not be suppressed. - Robert Badgett 16:06, 19 November 2007 (CST)

I'm a bit confused here. Wasn't the original sentence:
"Specialists may be less discriminating in their choice of journal reading. [39]" See my original comment above and this edit by Gareth Leng.
Which implied that specialists read more broadly. But the study suggested that they read with the same breadth, just much more of the specialist journals (i.e. no more broadly). Or is this a different sentence you are referring to? For me the surprise was that the specialists read more broadly. Above you seem to be saying it's the primary care physicians that read more broadly. Maybe we have different usages of the phrase "less discriminating"? Chris Day (talk) 21:54, 19 November 2007 (CST)
Maybe what you meant originally was "less discriminating in their choice of treatment"? That would make more sense. But is that proven by the study you cited? Gareth's cited quote says that specialists downloaded the same breadth of journals but downloaded more on specialist topics. There seems to be an assumption here that the specialists did not read all of what they downloaded. But does the study actually say this? And even if it does, is this a valid conclusion? Chris Day (talk) 22:04, 19 November 2007 (CST)
By using the word 'discriminating', I was trying to summarize the finding in the study that the primary care doctors were more likely to read articles from journals of higher quality. - Robert Badgett 22:13, 19 November 2007 (CST)
I agree it seemed to say that primary care doctors read 'less broadly' (to use Chris's words); though both downloaded full-text articles from high-quality journals, the primary care doctors' choices were consistently of higher quality. This makes sense to me, as primary care doctors are likely to refer to specialists once they have exhausted the 'traditional' methods. It's the specialists that have to 'think outside the box.' --D. Matt Innis 22:52, 19 November 2007 (CST)
Well, how can you define quality in this case? The less specialist journals are higher quality? This seems a little subjective; does the source make such a statement? Why wouldn't specialist journals be on the ball with respect to treatments in their specialty? And what of Gareth's quote that says that specialists and primary care doctors download from the same journals? Does the cited reference discuss what is read, or only what is downloaded? We can't discuss what is read if the source only discusses what is downloaded. At present I don't feel such a strong statement is justified based on the current source.
Sorry if I'm sounding confused here, but this whole line of reasoning seems very confusing, as does the original text, which I now see I had misinterpreted completely. If this is to be included, the argument needs to be much more transparent, and I'd prefer a source where they actually discuss the knowledge of the doctors involved, not what they downloaded (which they may or may not have read). Chris Day (talk) 23:19, 19 November 2007 (CST)
The definition of quality is whether the articles were included in the summary journals ACP Journal Club or EBM. See ACPJC's criteria at http://www.acpjc.org/shared/purpose_and_procedure.htm. I think it would be a mistake to get hung up on download versus read - surely they are well correlated. It is very plausible that specialists are not reading content of as high a quality. Matt's scenario could be right (specialists are confronted with difficult problems that require them to be more adventuresome in their reading), or another reason is that we have known for years that the general journals tend to have higher quality research (Haynes B. Where's the meat in clinical journals? ACP Journal Club 1993; 119: A22-A23.). Going back to why I included this thought in the article: either of these scenarios could contribute to why studies have observed that specialists might be more likely than generalists to adopt new treatments that have not been adequately studied. I think these are important thoughts to include in the EBM article. Admittedly the research is not definitive proof, but it seems possible enough and interesting enough to include these citations (with a disclaimer that these observations are not proof). - Robert Badgett 23:43, 19 November 2007 (CST)
My point about reading vs downloading is that the specialists download a lot more. Do we know they read it all, and what subset do they read? As far as I can tell, your point requires that the specialists don't read everything they download (since they do actually download from the high-quality journals too, as often as the primary care doctors according to the paper). For the record, given how much they download, I'm sure they don't read it all; nevertheless, this is all speculation at best. It also generalises between primary and specialist care to an extreme. I think there is a good chance you are right, but that does not mean it should be written, IMO. Chris Day (talk) 00:04, 20 November 2007 (CST)
I am not following the need for "specialists don't read everything they download" to be true. - Robert Badgett 00:07, 20 November 2007 (CST)
The citation says that the specialists and primary care doctors download from the same journals, except that the specialists also download from their specialist journals and download a lot more in general. Here is the quote:
"Although they accessed many of the same journals as did the primary care physicians, the specialists accessed journals almost twice as often and accessed a greater number of more specialized journals, consistent with their clinical populations. "
So specialists may well be downloading the high-quality articles too. So the assumption must be that they download so much that they can't cover it all and miss the important bits. I'm just going from the text Gareth quoted; I have no idea whether it is true or not. This is why we need to know what they read, not what they download. If you assume they read everything they download, then the citation does not support your case. Do they discuss this issue in the paper? Chris Day (talk) 00:13, 20 November 2007 (CST)

I think I see what the confusion is. It is true that both groups of docs are downloading from the same pool of journals. However, within that pool of journals, there is a spectrum of quality of journals (as defined by the proportion of articles in a journal that meets criteria). In the results, the article states: "Correlations for each quality indicator were stronger for the primary care physicians than for the specialists (Table 6). Access by specialists correlated most strongly with ACP Journal Club abstracted journals (ACP-abstr, r = 0.610, 95% CI 0.391 to 0.763). Statistical difference between the primary care physicians and the specialists occurred only for the abstracted ACP Journal Club accesses (ACP-abstr, r = 0.915 vs. 0.610)."

So both statements are true:

  1. "specialists and primary care doctors download from the same journals"
  2. Within the pool of journals, the primary care docs are more likely to access articles from the higher quality journals.

I think their research is pretty tight, but I admit it can be hard to follow if you are not familiar with their work. - Robert Badgett 01:41, 20 November 2007 (CST)

I think the point is that specialists (plausibly) read more. They not only read the high impact journals, they also read the specialist journals. I don't find that surprising, and there's no indication that specialists read fewer articles from high quality journals than generalists do. Of course, if people don't read what they download, there's not much that can be said from the study anyway. Gareth Leng 06:31, 20 November 2007 (CST)

Steps_in_evidence-based_medicine

The Steps in evidence-based medicine section has some valuable info, but is it titled correctly? In other words, is 'publication bias' a step, or more of a consideration? Unless these are documented guidelines from somewhere special, maybe we should consider renaming the section? I'll leave that to others better qualified to decide. --D. Matt Innis 13:46, 20 November 2007 (CST)
