Talk:Evidence-based medicine/Draft

From Citizendium
Revision as of 09:42, 16 November 2007

This article has a Citable Version.
Definition: The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.

I will be glad to help author here, and would like to go over a plan for the article. I think that, as this article covers a special sort of medical field, we should discuss "audience". Please, fellow editors, argue with any of these points if they differ from your understanding. Evidence-based medicine is certainly all about clinical care of patients, but, unlike an article on dermatology, say, it really is about a way of thinking about medicine, an approach. Reading what is written so far, it is really meaty and presents that approach, but, in my mind, suffers from two faults: (1) there is too much technical language without explanation, and (2) the history of medicine (in a way) has to be presented so that the naive reader understands that, actually, "regular medicine" is not evidence based. I think also that including some real examples of changes in clinical practice that are based on evidence-based medicine may be helpful. I am going to add some of this and am open to discussion, especially from Supten. Nancy Sculerati 09:35, 15 May 2007 (CDT)

References-with notes

O'Malley P. Order no harm: evidence-based methods to reduce prescribing errors for the clinical nurse specialist. Clinical Nurse Specialist 21(2):68-70, 2007 Mar-Apr. UI: 17308440. Classed under evidence-based medicine by Ovid (Medline); this article reviews actual sources of medication errors.

Doumit G, Gattellari M, Grimshaw J, O'Brien MA. Local opinion leaders: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews (1):CD000125, 2007. UI: 17253445

Lorenz LB, Wild RA. Polycystic ovarian syndrome: an evidence-based approach to evaluation and management of diabetes and cardiovascular risks for today's clinician. Clinical Obstetrics & Gynecology 50(1):226-43, 2007 Mar. UI: 17304038

Jordan A, McDonagh JE. Transition: getting it right for young people. Clinical Medicine 6(5):497-500, 2006 Sep-Oct. UI: 17080900

Thanigaraj S, Wollmuth JR, Zajarias A, Chemmalakuzhy J, Lasala JM. From randomized trials to routine clinical practice: an evidence-based approach for the use of drug-eluting stents. Coronary Artery Disease 17(8):673-9, 2006 Dec. UI: 17119375

Stanley K. Design of randomized controlled trials. Circulation 115(9):1164-9, 2007 Mar 6. UI: 17339574

Sectioning

Are there perhaps more sections than are useful here? CZ:Article Mechanics recommends against many relatively short sections in favor of relatively few, longer sections. But I don't think we have any very hard-and-fast rules about this.

Glad to see you here, Dr. Badgett! --Larry Sanger 22:01, 23 October 2007 (CDT)

Thanks - Robert Badgett 22:37, 31 October 2007 (CDT)

'Main' template not working

I added a new call to the main template, and now all three calls are not displaying correctly. - Robert Badgett 22:37, 31 October 2007 (CDT)

Misuses of EBM

The article ignores the misuses of EBM in the real world. Very few of the methods actually used in medicine have ever been validated by independent prospective randomized double-blind studies, or are likely to be. The main use of EBM is by HMOs and other prepaid managed care organizations, as an excuse to refuse to pay for expensive studies or treatments, while happily paying for inexpensive, untested, unproven treatments, such as herbal and other "alternative" medicines. I do not think this misuse of EBM should be ignored in this otherwise wholly laudatory article. Harvey Frey 17:20, 12 November 2007 (CST)

Hi!
The phrase "there is no evidence that" is becoming a little too frequent in clinical medicine. I suggest these two articles for inclusion; unfortunately I cannot access them (full text) right now.
Rogers WA. Evidence based medicine and justice: a framework for looking at the impact of EBM upon vulnerable or disadvantaged groups. J Med Ethics 2004;30:141-145.
Saarni SI, Gylling HA. Evidence based medicine guidelines: a solution to rationing or politics disguised as science? J Med Ethics 2004;30:171-175.
May I summarize the two abstracts in the Criticisms section?
Pierre-Alain Gouanvic 23:34, 12 November 2007 (CST)

Problem with the references

Somewhere around the 50th reference, there is a bug. Can someone fix this? Pierre-Alain Gouanvic 23:47, 12 November 2007 (CST)

Great! Pierre-Alain Gouanvic 13:50, 13 November 2007 (CST)

Criticisms that may be incorporated into the Section

I think more needs to be added about the sources of much so-called EBM, from sources interested in minimizing expenses of government health plans, like the Cochrane group, or from medical auditors interested primarily in maximizing profits of private HMOs, like Milliman & Robertson. There also needs to be a fair admission of how little of accepted medical practice has actually been validated by 'gold-standard' studies. When should a procedure be denied based on lack of EBM support? And to what extent are surrogate measures acceptable when, say, survival data are unavailable?

For instance, in Radiation Oncology (my own specialty), if you know that higher radiation doses kill more cancer cells, and that high doses are usually limited by doses to surrounding tissues, and if you can show that some new technique gives less dose to surrounding tissues, thus allowing higher doses to cancers, is it irrational to take that as evidence that the new technique is superior? Must an HMO insist on a prospective randomized double-blind study using 20-year survival as an endpoint before allowing use of the new technique?

The other issue is the extent to which 'cost' should be involved in EBM studies, and if it IS allowed, what should be the conversion factor between dollars and years of life, or dollars and years of pain-free life? Should we EVER do a coronary bypass operation, given that the same number of dollars could save thousands of lives if spent on malaria prevention instead? But WOULD the dollars saved be spent on malaria prevention, or would they go to executive perks and stockholder dividends? One doctor in California recently received almost a billion dollars selling his share of an HMO. Those were dollars not spent on medical care, often justified by calling some procedure "not medically necessary" or "investigational"!

And what weight should be given to the EBM "guidelines"? Should they be used to overrule the decision of the primary doctor on the case?
If so, who takes responsibility for adverse results? The clerk who countermanded a doctor's order based on an M&R cookbook? Harvey Frey

I think these are all legitimate issues. What we have so far is a pretty mainstream article, your stuff would help. Much of this could be added to the 'criticisms' section, which is currently sparse. Some of what you suggest might be better on the clinical guidelines page. Robert Badgett
Here's another example: http://www.careguidelines.com/ An entirely PROPRIETARY set of "EBM Guidelines" from Milliman, originally a hospital accounting firm, based on no known public peer review, widely sold to managed care organizations in the US for the express purpose of controlling cost. And, of course, they come with disclaimers, to avoid liability if anyone is injured by one of their clients using them. I do remember a case in California a few years ago in which they figured prominently, when a hospital prematurely discharged a woman post-delivery based on these guidelines. Unfortunately, it wasn't a reported appellate case, so I'm having trouble finding it now. Harvey Frey
Interesting. I cannot find their guidelines to assess their methods, but from your description, it sounds like they hijacked the label evidence-based. Robert Badgett
If I understood you well, the example you provide from oncology:
For instance in Radiation Oncology (my own specialty) if you know that higher radiation doses kill more cancer cells, and high doses are usually limited by doses to surrounding tissues, and if you can show that some new technique gives less dose to surrounding tissues this allowing higher doses to cancers, is it irrational to take that as evidence that the new technique is superior?
is an illustration of the difficulty of using causal inferences and, for that matter, common sense, in the framework of EBM. I unearthed something of a little gem, which could be useful in defining EBM from the practitioner's and patient's point of view (I'm not saying that this article is "one of its kind", though): Critique of (im)pure reason: evidence-based medicine and common sense [1]
While the goal of evidence-based medicine (EBM) is certainly laudable, it is completely based on the proposition that 'truth' can be gleaned exclusively from statistical studies. In many instances, the complexity of human physiology and pathophysiology makes this a reasonable, if not necessary, assumption. However, there are two additional large classes of medical 'events' that are not well served by this paradigm: those that are based on physically required causality, and those that are so obvious (to the casual observer) that no self-respecting study will ever be undertaken (let alone published). Frequently, cause-and-effect relationships are so evident that they fall into both categories, and are best dealt with by the judicious use of common sense. Unfortunately, the use of common sense is not encouraged in the EBM literature, as it is felt to be diametrically opposed to the very notion of EBM. As is more fully discussed in the manuscript, this active disregard for common sense leaves us at a great disadvantage in the practical practice of medicine.
I believe that this criticism is important because it casts a bright light on the relationship between EBM and fundamental research: the latter deals with complex cause-and-effect relationships, the former with specific effects, out of the black box of human physiology. Pierre-Alain Gouanvic 12:05, 14 November 2007 (CST)

Some problems

"Evidence-based medicine seeks to promote practices that has been shown, through the scientific method to have validity by empiric proof." This needs re-thinking; I think that what is meant here is "promoting practices the effectiveness of which has been supported by stringent statistical analysis of the results of carefully controlled clinical studies."

Evidence-based medicine is not science-based medicine. Science-based medicine works from a fundamental understanding of basic mechanisms to generate a rationally designed intervention strategy. Not all medical interventions are actually based in science in this sense (and some would say that relatively few are). More commonly, they are based empirically on experience of what actually works, and the scientific rationale or explanation comes later (if at all).

Most importantly here, though, the scientific method would test the explanations for the effectiveness of particular treatments by hypothesis-based experimental testing. Whether this has been done or not would not really influence the decision to use a particular intervention or not. Gareth Leng 03:55, 14 November 2007 (CST)


I haven't checked the references, only put them into what I think is a style consistent within the article and with Biology Workgroup style; I've shortened author lists to et al. when there are more than 2 authors and omitted issue numbers as redundant, generally to try to keep the list concise for printing. My general feeling is that it seems over-referenced; I'd be wary of this, as a large current reference list becomes outdated fast, while a smaller list of elite core references has a longer shelf life. The size is also a burden for verification. However, it's a very nicely written, very helpful article. I'd just return to the use of the word "proof", which I'd strongly urge that you avoid. Scientists would rarely consider anything to be proved; the evidence might be strong enough to accept a conclusion (provisionally), but if a conclusion rests on statistics then there is always a margin for error. Gareth Leng 06:51, 14 November 2007 (CST)

required fixes, self approval?

Several things need fixing prior to approval. The article needs to be consistent; e.g. both "evidence-based" and "evidence based" occur in the article, as well as minor typos. At least two sections are completely empty somewhere near the bottom, including the "Apply" and "Assess" sections. They need to be removed or expanded. Finally, the nominating editor appears to have created and written on this page. I suggest removal of the nomination, a careful read and editing, and then re-nomination. David E. Volk 09:18, 14 November 2007 (CST)

Studies of effectiveness section

The last sentence in this paragraph is not a sentence. I can't figure out what was meant. I inserted EBM in a few sentences where it seemed to be missing. David E. Volk 10:15, 14 November 2007 (CST)

Is this ready for approval?

Several sections were blank. I added text from related articles to give an overview, but we can't have an approved article with blank sections. Also, it seems incomplete in places. In particular, there are four cases of a lone subsection in a hierarchy. This seems to imply there is another subsection that could be added; if not, then the subsection seems unnecessary. For example:

7 Incorporating evidence into clinical care
7.1 Medical informatics
7.2  ?
8.3 Clinical reasoning
8.3.1 Improving clinical care
8.3.2  ?
9.4 Apply
9.4.1 Clinical reasoning
9.4.2  ?
10.3 Epistemology
10.3.1 Complexity theory
10.3.2  ?

In all these cases it seems like there should either be another subsection, or the x.x.1 subheading is not required. I don't know enough about the topic to know what the ? might be. Chris Day (talk) 22:44, 14 November 2007 (CST)


What does this mean?

I was trying to edit the second paragraph in the first section, but came to the conclusion that I did not really know what it means.

"Evidence-based medicine seeks to promote practices that have been shown, through the scientific method to have validity by empiric proof. As such, it currently encompasses only a few of the actual practices in clinical medicine and surgery. More often, recommendations are made on the basis of best evidence that are reasonable, but not proven. Evidence-based medicine is also a philosophy, however, that seeks to validate practices by finding proof."

The first sentence I would change to:

"Evidence-based medicine seeks to promote practices that have been shown to have validity using the scientific method."

Reading the second sentence, I'm unsure what the point is. Is the implication that the actual practices in clinical medicine and surgery do not follow the scientific method? If so, does this even need to be said? It seems redundant with the first paragraph.

The third sentence seems redundant with the first sentence. It says the same as "promote practices that have been shown to have validity using the scientific method"

The last paragraph relating to philosophy loses me. Philosophy and EBM seem to be the opposite of each other, but this sentence seems to be saying they are both? I find this very confusing. I hope these comments are useful. Chris Day (talk) 23:16, 14 November 2007 (CST)

It looks like there's some redundancy in it, from an outsider's point of view. Even when it says that "Evidence-based medicine is also a philosophy, however, that seeks to validate practices by finding proof", it seems to read that A is something that relies on B, but that A also seeks to prove B. I would not use the word "philosophy" but would rather restate it as "Part of the ultimate goal of evidence-based medicine is to validate practices by establishing proof of the results." --Robert W King 23:41, 14 November 2007 (CST)

Isn't that exactly the same as the first sentence in the paragraph? Here I slightly reworded it and you'll see what I mean.

"Part of the ultimate goal of evidence-based medicine is to validate practices by using the scientific method."

It seems to me that the whole paragraph distills down to the first sentence:

"Evidence-based medicine seeks to promote practices that have been shown to have validity using the scientific method."

Chris Day (talk) 23:50, 14 November 2007 (CST)

If the rest of the paragraph is superfluous because of redundancy, I'd probably just remove the errant content! --Robert W King 00:24, 15 November 2007 (CST)
That's what I have done. Let's see what the health editors think. Chris Day (talk) 00:35, 15 November 2007 (CST)

Industry and publication bias

Reading this again, it struck me that an important part of meta-analysis and appraisal is neglected. Most studies that are misleading are, I think, misleading because of flaws in design, conduct, or analysis. I don't know that I've ever seen a study where legitimate criticisms couldn't be raised that might affect the interpretation. A good meta-analysis grades the quality of the trials, weighting the outcomes by quality, and I think attempts to come to a global recommendation on the basis that, while individual trials might be imperfect for different reasons, when collectively they come to a common conclusion, that conclusion is probably reliable.
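The weighting described here has a standard statistical core: fixed-effect inverse-variance pooling, in which more precise trials (usually the larger ones) count for more, with quality grading applied on top. A minimal illustrative sketch in Python, using hypothetical trial numbers rather than anything from a cited study:

```python
import math

def pooled_effect(effects, std_errors):
    """Fixed-effect inverse-variance meta-analysis: each trial is
    weighted by 1/SE^2, so precise (usually large) trials dominate
    the pooled estimate."""
    weights = [1.0 / se ** 2 for se in std_errors]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total
    pooled_se = math.sqrt(1.0 / total)
    return pooled, pooled_se

# Three hypothetical trials reporting log risk ratios and standard errors;
# the imprecise second trial contributes least to the pooled estimate.
est, se = pooled_effect([-0.3, -0.1, -0.2], [0.10, 0.20, 0.15])
```

A quality-weighted analysis would adjust these weights by quality scores, and random-effects models add a between-trial variance term; both are beyond this sketch.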

The issue of publication bias works two ways. First, negative or inconclusive results are less likely to be reported. Second, positive results may be more likely to be reported when they are confirmatory of already published findings even when the quality of the trial is poor.

Overall, industry-sponsored trials are given a rough ride here. It should be remembered I think that, without industry sponsorship, there would be far fewer trials in the first place. The quality of studies very much depends on the integrity and competence of the academic or clinical scientists conducting them. We as academics can't blame industry for our own shortcomings. I do think that the major pharmaceutical companies try to find academic partners whose integrity and competence are unimpeachable; it's very much in their interests to do so, whatever the outcome of the trials.

Gareth Leng 03:59, 16 November 2007 (CST)

There are many "jargon terms" introduced here, e.g. relative risk ratio, relative risk reduction, absolute measures, absolute risk reduction, number needed to treat, number needed to screen, and number needed to harm.

I wonder if it would be sensible to add a glossary as a subpage that gave definitions of these? Or is there some other solution? Perhaps there's a case for making a stub for each of these, with a brief definition and an external link. Gareth Leng 07:17, 16 November 2007 (CST)
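Pending a glossary, it may help to note that most of these measures are simple arithmetic on the event rates in the control and treated groups. A sketch with made-up rates (nothing here is taken from the article):

```python
def effect_measures(control_rate, treated_rate):
    """Common EBM effect measures from two event rates, where each
    rate is the proportion of patients with the (bad) outcome."""
    rr = treated_rate / control_rate      # relative risk (risk ratio)
    rrr = 1.0 - rr                        # relative risk reduction
    arr = control_rate - treated_rate     # absolute risk reduction
    nnt = 1.0 / arr                       # number needed to treat
    return {"RR": rr, "RRR": rrr, "ARR": arr, "NNT": nnt}

# Hypothetical example: 10% event rate untreated, 5% treated.
m = effect_measures(0.10, 0.05)
# RR = 0.5, RRR = 0.5, ARR = 0.05, NNT = 20
```

Number needed to harm and number needed to screen follow the same 1/absolute-difference pattern, applied to harms and screening outcomes respectively.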

I like the glossary idea on subpages. It goes along with the definitions template that Larry created as well. --Matt Innis (Talk) 07:56, 16 November 2007 (CST)

Delayed approval

Although this article is outside my sphere of competence, I recommend delaying the approval for at least 7 days, so that changes can be made. There are too many comments here from senior scientists, which may not be taken into account if the approval occurs as scheduled. Please indicate your support for or opposition to my proposal of a short delay. --Martin Baldwin-Edwards 08:40, 16 November 2007 (CST)

I think 7 days may not be enough unless we are able to fill in the blank spaces toward the bottom and clean up the criticism section. I have added the Healing Arts Workgroup as this article affects them as well. --Matt Innis (Talk) 07:46, 16 November 2007 (CST)

I agree that some delay is sensible. I added a stub for Odds ratio as an example for my comment above; sadly, I noticed too late that this is a term I omitted (d'oh). Gareth Leng 08:49, 16 November 2007 (CST)

Notice what I did with odds ratio. You put the name in the r template like this {{r|odds ratio}} and then click on the little 'e' and put in the definition. Then it shows up anywhere we put that on any page about odds ratio. --Matt Innis (Talk) 08:31, 16 November 2007 (CST) Thanks. Gareth Leng 09:35, 16 November 2007 (CST)
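As a companion to the odds ratio stub, the difference between an odds ratio and a relative risk is easy to show on a toy 2×2 table (the counts below are invented for illustration):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    return (a / b) / (c / d)  # equivalently (a * d) / (b * c)

def relative_risk(a, b, c, d):
    """Relative risk from the same 2x2 table."""
    return (a / (a + b)) / (c / (c + d))

# Toy data: 20/100 events among the exposed, 10/100 among the unexposed.
or_ = odds_ratio(20, 80, 10, 90)
rr = relative_risk(20, 80, 10, 90)
# The OR (2.25) exceeds the RR (2.0); the two diverge as outcomes get common.
```

This divergence is one reason glossary definitions matter: odds ratios from case-control studies are often read, incorrectly, as if they were relative risks.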

Cut this?

I cut this short section. It is in the criticism section, and I couldn't identify in what respect there is any criticism. I guess I think that this may be interesting but is probably only tangential to the article?

Complexity theory

Complexity theory is proposed as further explaining the nature of medical knowledge.[2][3]

Gareth Leng 09:35, 16 November 2007 (CST)

Restored, renamed, and expanded this section with context. See what you think. - Robert Badgett 08:57, 16 November 2007 (CST)
Complexity theory needs to be defined or explained. --Matt Innis (Talk) 09:06, 16 November 2007 (CST)

??

"were more likely to adopt COX-2 drugs before the drugs were recalled by the FDA"

well yes, they would be, wouldn't they? But were they recalled by the FDA? Gareth Leng 08:54, 16 November 2007 (CST)

Quality of references

I have started to do some checking of the references. I looked at this: "A randomized controlled trial supports the efficiency of this approach.[7]", a reference used 3 times. This is a study of 32 medical students assigned to one of 2 search protocols. Frankly, it has statistical weaknesses; most obviously, the analysis is based on the numbers of answers to questions, not on the individual performances; as the individuals are independent but their answers are obviously not, this seems inappropriate. I don't mean to rubbish this small trial, only to say that I think, especially in this article, we should set the bar for citing studies as evidence at an appropriately high level, i.e. at a level appropriate for the topic. Using small or poorly controlled studies as evidence to support conclusions about EBM is surely not what we want to do? I really would recommend trimming the references in the article down to a sustainable core of unimpeachably strong studies. Gareth Leng
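The independence problem raised here is usually quantified with the design effect from cluster-sampling statistics: answers from the same student are correlated, so N answers carry less information than N independent observations. A rough sketch; the intracluster correlation (ICC) below is an assumed value, not one reported by the trial:

```python
def effective_sample_size(n_clusters, obs_per_cluster, icc):
    """Effective sample size after correcting for clustering.
    The design effect is 1 + (m - 1) * ICC for m observations
    per cluster; dividing the nominal N by it deflates it."""
    n = n_clusters * obs_per_cluster
    design_effect = 1.0 + (obs_per_cluster - 1) * icc
    return n / design_effect

# 32 students each answering 10 questions, with an assumed ICC of 0.3:
# 320 answers behave statistically like roughly 86 independent ones.
eff = effective_sample_size(32, 10, 0.3)
```

Analyzing the answers as if they were independent therefore overstates the precision of the result, which is exactly the flaw pointed out above.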

alt med

I added a blurb about alt med being evaluated using EBM as well. Feel free to clarify or clean it up. I think it is important that this method might be the way to evaluate the claims made by techniques that are subject to bias. --Matt Innis (Talk) 09:40, 16 November 2007 (CST)

COX-2 drugs = COX-2 inhibitors?

Robert, can you rephrase COX-2 "drugs" to be more descriptive?

Server timing

In case anyone is wondering, the timing of the server is way off and the clocks are not valid, so the order of edits in the history does not necessarily follow when they were actually made; i.e. Gareth's last edit was actually made before mine, but shows an hour later. --Matt Innis (Talk) 09:42, 16 November 2007 (CST)

  1. Michelson J (2004). "Critique of (im)pure reason: evidence-based medicine and common sense". Journal of Evaluation in Clinical Practice 10(2):157-61. DOI:10.1111/j.1365-2753.2003.00478.x. PMID 15189382.
  2. Sweeney, Kieran (2006). Complexity in Primary Care: Understanding Its Value. Abingdon: Radcliffe Medical Press. ISBN 1-85775-724-6.
  3. Holt, Tim A (2004). Complexity for Clinicians. Abingdon: Radcliffe Medical Press. ISBN 1-85775-855-2.