Annals of Indian Academy of Neurology

REVIEW ARTICLE
Year : 2006  |  Volume : 9  |  Issue : 1  |  Page : 11-19
 

Evidence-based approach in neurology practice and teaching


Kameshwar Prasad
Department of Neurology, All India Institute of Medical Sciences, New Delhi - 110029, India

Correspondence Address:
Kameshwar Prasad
Room 704, Neurosciences Centre, All India Institute of Medical Sciences, New Delhi - 110029
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/0972-2327.22816


 

   Abstract 

While the rapid increase in the number, sophistication and cost of diagnostic and therapeutic technology offers novel approaches to managing patients' problems in neurology, it also creates a risk of technology being overused for unproven indications. The ever-expanding literature imposes an unmanageable burden on individual neurophysicians and raises the need to separate the insignificant and unsound information from the salient and crucial. The evidence-based approach, consisting of four steps (ask, acquire, appraise and apply), provides the techniques and tools to efficiently access the relevant literature, critically evaluate its validity and importance, and apply it judiciously to make sound clinical decisions. This article aims to provide an introduction to evidence-based neurology, its goal and principles, and outlines the steps necessary to practice it. As a strategy of clinical practice and teaching, the evidence-based approach contributes towards quality improvement in patient care and medical education, and helps promote research and scientific temper. The article also points to limitations of and misconceptions about evidence-based neurology, and summarizes what is new in the evidence-based approach.


Keywords: Clinical practice and teaching, evidence-based approach, medical education, patient care


How to cite this article:
Prasad K. Evidence-based approach in neurology practice and teaching. Ann Indian Acad Neurol 2006;9:11-9

How to cite this URL:
Prasad K. Evidence-based approach in neurology practice and teaching. Ann Indian Acad Neurol [serial online] 2006 [cited 2019 Nov 20];9:11-9. Available from: http://www.annalsofian.org/text.asp?2006/9/1/11/22816



   History


Evidence-based approach in neurology developed in parallel with similar developments in what is called evidence-based medicine (EBM). The term EBM first appeared in 1990, in the information brochure for the McMaster University Internal Medicine Residency Program. However, the work that led to its origin may be traced back to the late 1970s, when Professor David Sackett, the Chairman of the Department of Clinical Epidemiology and Biostatistics, McMaster University, Canada, began the work that resulted in a series of articles published in the Canadian Medical Association Journal from 1981. The series was named "The Readers' Guide to Medical Literature". It devoted one article to each type of clinical paper - diagnosis, treatment, prognosis, etc. - and provided guides to the critical appraisal of each. The Internet did not exist at that time and information technology was in its infancy; not surprisingly, the series did not contain any section on how to search for relevant papers.

The starting point was a journal paper at hand; the emphasis was on critical appraisal of the paper, and the section on application was relatively short. More than a decade later, in the early 1990s, a need was felt to revisit and update the guides and include the latest advances in the field of critical appraisal. To do this, an international Evidence-Based Medicine Working Group was formed at McMaster University, Canada, with Professor Gordon Guyatt as its chair. The group felt that the focus of the guides should change from readers to users, with the emphasis placed on the usefulness of information in clinical practice. The starting point would be a problem faced by a clinician, who looks for relevant literature, finds it, critically appraises it and puts the information to use in his clinical practice. As sources of information had become enormous by then, and the Internet had come into being, there was also a need to guide clinicians in searching for relevant literature. To incorporate these changes, the group developed a series of papers (the Users' Guides) emphasizing clinical practice and clinical decision-making based on sound evidence from clinical research.[1] During one of the retreats of the Department of Internal Medicine at McMaster University, one suggestion was to name such clinical practice "scientific medicine"; this was vehemently opposed by other members of the department, mainly because of its implication that practice so far had been "unscientific". Guyatt then suggested the term "evidence-based medicine", which proved felicitous.

It would be unfair and plainly wrong to say that the philosophical foundations of EBM originated in the 1990s, or even the 1970s. In fact, it will be clear from the following discussion that its basic tenets have existed from the inception of medical practice: most major civilizations can find indications of some of its principles in their ancient texts and historical accounts.

What is evidence-based medicine?

In simple terms, it means using the current best evidence in decision-making in medicine, in conjunction with the expertise of the decision makers and the expectations and values of the patients/people. In a broader sense, EBM is a new philosophy of clinical practice and a process of life-long learning which emphasizes a systematic and rigorous assessment of evidence for use in decision-making in health care. It involves integrating evidence with experience and with the expectations and circumstances of patients.

What is evidence?

Evidence, in the context of clinical or health care practice, consists of observations made with a specific purpose. Whether the observations come from clinical research, clinical practice or animal research, all count as evidence, but all are not equally valid or relevant for health care decision-making. Evidence-based approaches require assessment of the validity and relevance of the evidence at hand. In contrast to evidence-based practice, we often use logic, based on knowledge of pathophysiology or microbiology, to make decisions. Sometimes the logic may be so compelling that we can base our decision on it alone, but more often than not, strong evidence is necessary to avoid making mistakes.

Knowing about EBM (1-2-3-4)

Knowing EBM is like knowing 1-2-3-4. EBM has one goal, two fundamental principles, three components and four steps. The one goal is to improve the quality of clinical care; the two principles are the hierarchy of evidence and the insufficiency of evidence alone in decision-making; the three components are evidence, expertise and expectations of patients (the triple Es); and the four steps are ask, acquire, appraise and apply (the 4 As). These are elaborated further in the following paragraphs.

Goal of EBM

EBM has one goal: to improve the health of people through decisions that maximize their health-related quality of life and life span. The decisions may be in relation to public health, health care, clinical care, nursing care or health policy.

Principles of EBM

Two fundamental principles include:

1. Hierarchy of evidence: Evidence bearing on any clinical decision can be arranged in order of strength, based on the likelihood of freedom from error. For example, for treatment decisions, meta-analyses of well-conducted large randomised trials may be the strongest evidence, followed in sequence by large multi-centric randomised trials, meta-analyses of well-conducted small randomised trials, single-centre randomised trials, observational studies, and clinical experience or basic science research (a minimal code sketch of this ordering appears after the second principle below).

2. Insufficiency of evidence alone: The second fundamental principle of EBM is that evidence alone is never sufficient for decision-making. It has to be integrated with clinical expertise and with patients' expectations and values. This principle gives rise to the components of EBM, which follow below.
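To make the first principle concrete, here is a minimal sketch in Python. The levels and their ordering are taken from the treatment example above; the list and helper function are purely illustrative, not an official grading scheme.

# Hierarchy of evidence for treatment decisions, strongest first
# (ordering as given in the text; illustrative only).
EVIDENCE_HIERARCHY = [
    "meta-analysis of large randomised trials",
    "large multi-centric randomised trial",
    "meta-analysis of small randomised trials",
    "single-centre randomised trial",
    "observational study",
    "clinical experience / basic science research",
]

def stronger(a: str, b: str) -> bool:
    """Return True if evidence level a outranks evidence level b."""
    return EVIDENCE_HIERARCHY.index(a) < EVIDENCE_HIERARCHY.index(b)

print(stronger("single-centre randomised trial", "observational study"))  # True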

Components of evidence-based approach to decision-making

In one sense, EBM is a misnomer because besides evidence, two other E's are required for decision-making, namely:

a) Expertise and experience of the decision makers

b) Expectations and values of the patients/people.

To emphasize all three components, I use the term "Triple-E-Based Medicine" (TEBM).

To illustrate the importance of the two Es, other than evidence, two examples follow.

Example 1

A 28-year-old man is admitted to the intensive care unit with ascending paralysis and respiratory distress. The resident makes a diagnosis of Guillain-Barré syndrome (GBS) and starts to discuss evidence-based approaches to treating him. The consultant comes, takes the history and suspects dumb rabies. It becomes clear that the patient had a dog bite three months earlier and received only partial immunization. Further investigation confirmed the suspicion of dumb rabies, and the patient was shifted to the Infectious Diseases Hospital for further treatment. The whole discussion on GBS was irrelevant. This example illustrates the role of expertise in practicing evidence-based neurology (EBN): if the diagnosis is wrong, all the EBM discussion is superfluous.

Example 2

Expectations, values and circumstances of the patients/people.

a) The diagnosis of motor neurone disease (amyotrophic lateral sclerosis) requires a certain level of expertise and experience. Once the diagnosis is made, one can look for evidence in favour of certain treatments like riluzole. It turns out that there is definitive evidence from RCTs and meta-analysis indicating that riluzole can prolong tracheostomy-free life by three months if taken regularly (usually for years). However, the cost of riluzole treatment is prohibitive. In view of the high cost and the risk of hepatotoxicity (and the need to pay out of pocket in India), many neurologists and their patients do not use it: these patients do not consider it "worth it". Some patients who can easily afford it, however, do take riluzole for this condition.

b) There is consistent evidence that alcohol in moderation is protective against heart attacks and stroke. However, in certain religions and cultures, alcohol is forbidden. It would be unacceptable to discuss moderate alcohol intake with such a patient even if he has many risk factors for heart attack and stroke.

Why evidence-based approach?

The above examples indicate the need to integrate expertise and patients' values with the evidence in clinical decision-making. This is what the practice of EBM requires. You might ask: isn't this what physicians have always done and ought to do? How else did we make health care decisions? In fact, there have been a number of different bases for such decisions other than evidence. The examples below are mainly of clinical decisions, but similar examples of policy decisions can also be cited.

A. Physiologic rationale

On many occasions, we make a decision on the basis of physiologic or pathophysiologic rationale. For example, ischemic stroke is commonly due to occlusion of the middle cerebral artery (MCA). It makes physiologic sense to bypass the occlusion by connecting some branch of the external carotid artery to a branch of the MCA beyond the occlusion. Such an operation is called extracranial-intracranial (EC-IC) bypass. Based on this rationale, thousands of EC-IC bypass surgeries were being performed in many parts of the world, until some people questioned it. An international trial sponsored by the NIH (USA) compared the operation with medical treatment and showed that the surgery is not only ineffective but also delays recovery.[2] After this evidence was published, the number of EC-IC bypass surgeries crashed in North America, and the operation is now rarely, if ever, performed for ischaemic stroke anywhere in the world.

A second example is the use of streptokinase, a thrombolytic agent, in ischaemic stroke. It makes physiologic sense to use streptokinase to dissolve the clot in this condition (just as in myocardial infarction). But three clinical trials (known as MAST-E, MAST-I and ASK) had to be stopped prematurely because more patients were dying with the use of streptokinase than without it. As a result, streptokinase is not used for ischaemic stroke; yet, surprisingly, tissue plasminogen activator (t-PA), another thrombolytic agent, is associated with less increase in mortality and an overall better outcome, though we do not know any good physiologic reason for this difference.

Several other examples (like increased mortality with encainide as an antiarrhythmic agent) show that physiologically reasonable decisions may carry unacceptable clinical risk, and clinical studies are therefore necessary to determine the benefit-risk profile. Decisions based solely on physiologic rationale may cause more harm than good.

B. Experts' advice

We often seek experts' advice to make certain treatment decisions, and policy makers often seek experts' advice to make policy decisions. However, experts' advice given without adequate search for, and evaluation of, the evidence may be simply wrong.

Take the example of the treatment of eclampsia. A survey conducted in the UK in 1992 showed that only 2% of obstetricians used magnesium sulphate to control convulsions in eclampsia; the preferred drug was diazepam, and several neurologists also prefer diazepam to magnesium sulphate. But evidence from clinical trials has clearly shown that magnesium sulphate controls convulsions more effectively than other drugs and decreases maternal mortality in eclampsia.[3] It is reassuring to note that in England the Royal College of Obstetricians and Gynaecologists recently adopted the recommendation to use magnesium sulphate, rather than diazepam, in this condition.

C. Textbooks and reviewers

We often look into textbooks or review articles when deciding to use an intervention. A number of examples show that textbooks and review articles may recommend a potentially harmful intervention, and may fail to recommend a potentially (or even established) helpful one. A classic example is streptokinase (SK) in acute myocardial infarction (AMI). Antman et al.[4] have shown that had there been a periodically updated summary of the emerging evidence (a 'cumulative meta-analysis'), a strong case for recommending routine use of SK in AMI could have been made by 1977; yet even in 1982, 12 out of 13 articles did not mention SK for AMI (the one that did described it as an experimental drug). The recommendation became common (15 out of 24 articles) only around 1990, almost 13 years after there was enough clinical evidence.

On the other hand, most textbooks and review articles in 1970 recommended routine use of lignocaine hydrochloride in AMI (9 out of 11 articles), whereas the evidence to date showed a trend towards increased mortality with its use. It was only after 1989, when an evidence summary in the form of a meta-analysis was published, that textbooks and review articles stopped recommending its use in AMI.

D. Manufacturers' claims

Many clinicians start using an intervention based on information from drug companies. However, the information may not be valid, and the intervention may result in more harm than good. An example is the use of hormone replacement therapy (HRT) in postmenopausal women. The companies promoted HRT without adequate, high-quality evidence; only when a large clinical trial showed that it may be dangerous did clinicians stop recommending HRT widely.[5]

Many clinicians are easily convinced by information provided by drug companies, though many a time it may be misleading.

The above examples show that decisions based exclusively on pathophysiologic rationale, experts' advice, textbook or review articles, or drug company information may turn out to be wrong. This is not to say that they are always wrong, or that advice based on a clinical trial or meta-analysis cannot go wrong. But the point is that when physiologic rationale or experts' advice is supported by clinical evidence, the likelihood of such decisions going wrong is lower than when it is not. When there is a discrepancy between the above sources, there is a need to exercise caution and to put more weight on valid clinical evidence than on the other bases. EBM puts emphasis on this point.

What is new in an evidence-based approach?

It may be argued that physicians (or health policy-makers) have always used, and continue to use, evidence, expertise and patients' values in decision-making. This is largely true: all good physicians always did so and continue to do so. What is new is the difference in emphasis, explicitness, rigor and understanding. The new tools and techniques for accessing, appraising and expressing the evidence make the process of using evidence more systematic and rigorous. At the same time, many notions and concepts carried by physicians before the EBM era need to be changed; some of these are given in [Table - 1].

Steps in practicing evidence-based neurology

The main (but not the only) objective of EBN is the application of the right and complete information by health care professionals in decision-making. To meet this objective, four key steps (4 As) are necessary:

a) Ask for the required information by formulating your question

b) Acquire (find) the information by searching resources

c) Appraise or analyse the relevance, quality and importance of the information

d) Apply the information to your practice or patient.

Each of the above steps is outlined below.

Step 1. Asking for the required information in the form of a question

Asking the question in a particular format is an important first step, because it helps to find the most relevant information and specifies the outcomes you are looking for. It also helps in assessing the relevance of the information to your patient's problem.

Sources of the questions

Questions arise in our practice on an almost daily basis; in fact, many questions may arise every day. You may be on rounds, seeing a patient with posttraumatic vertigo, and one of your colleagues asks: why not start him on a vasodilator? You may be seeing patients in the outpatient department, and your patient with migraine asks: is there any herbal remedy for my problem? What about acupuncture? You may be in a seminar where a colleague challenges the superiority of coiling over clipping of aneurysms. You may yourself wonder at times whether there is any point in doing an MRI in a patient with stroke.

You will face such questions every day, and they require and motivate you to seek evidence from the literature. However, it is important not to rush to the Internet without some deeper thinking. I have seen residents come up with a paper that bears a superficial resemblance to the question but has nothing to do with it. Deeper thinking leads to refinement of the question.

Refining the question

To make your search efficient, you need to specify the following in your clinical questions:

a) Patient population: type of patients

b) Intervention (new): the new approach or strategy of treatment

c) Comparison: the control or conventional intervention

d) Outcomes: clinically meaningful outcomes that are important for the patients

The acronym "PICO" is used to remember the parts of a well-formulated clinical question. Sometimes the comparison intervention may be missing, in which case "PIO" is enough to specify the question.

Beginners face difficulty in deciding which intervention goes under "I" and which one under "C". "I" stands for the new intervention; to emphasize this, I use the acronym "PInCO", where the "n" by the side of the "I" stands for "new".
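As a concrete illustration of the PICO/PInCO structure, the question can be represented as a small data structure. The following Python code is a minimal sketch: the class and field names are hypothetical, and the example question draws on the t-PA discussion earlier in this article.

from dataclasses import dataclass

@dataclass
class ClinicalQuestion:
    population: str    # P: the type of patients
    intervention: str  # I: the *new* approach (the "In" of PInCO)
    comparison: str    # C: the control or conventional intervention (may be empty, giving "PIO")
    outcome: str       # O: a clinically meaningful, patient-important outcome

question = ClinicalQuestion(
    population="adults with acute ischaemic stroke",
    intervention="intravenous tissue plasminogen activator (t-PA)",
    comparison="standard medical care without thrombolysis",
    outcome="death or dependency at three months",
)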

Step 2: Acquiring (searching for) the evidence

There are two types of articles: primary and integrative. Publications carrying data collected by the authors of the article are called primary studies, whereas those summarising or building on data collected by other investigators are called integrative studies; examples of the latter are systematic reviews, meta-analyses and economic analyses. Nowadays there are also publications that re-publish parts (usually the abstract) of primary studies together with a critical appraisal; these are called secondary publications. For example, journals like Evidence-Based Medicine and Evidence-Based Psychiatry are secondary publications. The search for evidence may begin with integrative studies, such as meta-analyses summarizing the evidence on a topic, followed, if no suitable summary is available, by primary or secondary publications.
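To illustrate how a PICO-formatted question can drive the search, the sketch below assembles a boolean query string of the kind accepted by bibliographic databases such as PubMed (AND between components, OR between synonyms within a component). The helper function and the synonym lists are hypothetical, shown only to make the idea concrete.

def build_query(population, intervention, comparison=None, outcome=None):
    """OR together the synonyms within each PICO component, then AND the components."""
    clauses = []
    for terms in (population, intervention, comparison, outcome):
        if terms:
            clauses.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(clauses)

print(build_query(
    population=["ischaemic stroke", "cerebral infarction"],
    intervention=["tissue plasminogen activator", "alteplase"],
    outcome=["mortality", "dependency"],
))
# (ischaemic stroke OR cerebral infarction) AND
# (tissue plasminogen activator OR alteplase) AND (mortality OR dependency)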

Step 3: Assessment or critical appraisal of the papers

There are four issues in the critical appraisal:

a) relevance

b) validity

c) consistency

d) importance or significance of results

a. Relevance refers to the extent to which the research paper matches your information need. Comparing the research question in the paper with your clinical question will help you determine the relevance of the paper; once again, the PICO format of the question makes this easy. Many times one may find a match on the population and/or intervention while the outcomes differ; unless one finds another paper with the desired outcomes, it may still be advisable to proceed with the paper at hand.

b. Validity refers to the extent to which the results are free from bias. Biases are mainly of three types:

1. selection bias

2. measurement bias

3. bias in analysis

In all types of studies, you must look for these biases while assessing validity; the specific questions are given in [Table - 2]. If a bias is present, you should ask the next question: so what? Does it affect the internal validity or the external validity? Briefly, these terms can be defined as follows:

1. Internal validity is concerned with the question: are the results correct for the subjects in the study? This is the primary or first question for any study.

2. External validity asks the question: to which population are the results of the study applicable or generalizable? External validity is judged in terms of time, place and person: can the results be extrapolated to the current or future time, to different geographical regions or settings, and to patients outside the study?

Internal validity is the basic requirement of a study. However, it is an ideal to aspire to: it is nearly impossible to achieve 100% internal validity, and many attempts to maximize internal validity may compromise external validity. A reasonable balance needs to be achieved between the two.

c. Consistency refers to the extent to which the research results are similar across different analyses in the study and are in agreement with evidence outside the study. Consistency may be internal or external.

1. Internal consistency looks at the different analyses conducted in the study. For example, in a therapy paper there may be adjusted and unadjusted analysis, certain sensitivity analyses, analyses for subgroups and analyses of primary and secondary outcomes. If these analyses yield the same answer, say, in favour of the new treatment, then the results will be considered internally consistent.

2. External consistency refers to the consistency of the study's findings with evidence from biology, from other studies and even with the experience of clinicians. If the findings are not consistent with one or more of these, one needs to explore the reasons. Knowledge of biology is vast, evolving and yet, to a great extent, incomplete; hence some biological explanation can usually be found for the results. If not, one should remember the limits of our knowledge of human biology.

d. Significance of the information (results): This needs to be evaluated in the light of the type of paper. For a therapy (treatment) or diagnosis (test) paper, you need to ask:

1. How did the new treatment or test perform in the study? Were the results statistically significant and clinically important?

2. What information can you take from the study to your practice/patient?

Step 4: Applying the results to your patient

Having found that the information in the paper is relevant, valid, consistent and important, the question is whether the test or treatment will be useful for your patient or practice. You need to determine (or best-guess) your patient's probability of disease or risk of the adverse outcome, and then consider how these will change with the application of the new test or treatment. Is this change worth the risk and cost of the new intervention?

What does your patient think about the benefits and risks associated with the new test or treatment? These considerations will help you apply (or not apply) the results of the paper and take a decision. A practice based on these considerations is aptly called "evidence-based clinical practice".
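The arithmetic behind "how will my patient's risk change" can be made explicit. The sketch below assumes, as EBM texts commonly do (and as the Limitations section later cautions), that the relative risk reported in a trial applies at your patient's estimated baseline risk; all numbers are hypothetical.

def patient_benefit(baseline_risk, relative_risk):
    """Return (absolute risk reduction, number needed to treat) for one patient."""
    arr = baseline_risk * (1 - relative_risk)    # expected absolute risk reduction
    nnt = float("inf") if arr == 0 else 1 / arr  # patients treated per event prevented
    return arr, nnt

# A patient with an estimated 20% risk of the adverse outcome,
# and a trial reporting a relative risk of 0.8 with treatment:
arr, nnt = patient_benefit(0.20, 0.80)
print(f"ARR = {arr:.0%}, NNT = {nnt:.0f}")  # ARR = 4%, NNT = 25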

Evidence-based neurology (EBN) as a strategy

EBN is a strategy to solve problems, to learn effectively and efficiently, to empower learners and decision-makers, to avoid wasteful expenditure and to improve the quality of health care.[6]

Towards quality of care improvement

1) A problem-solving strategy: A clinical practitioner or a policy maker faces problems almost every day. The clinician has to decide whether to prescribe a diagnostic test or a treatment; the policy maker has to decide whether to purchase a technology. They need information to make the decision, often to judge the utility of the treatment or technology, and they need to search for this information and assess its validity, meaning and applicability in their context. EBN provides the skills to search and assess the information efficiently so that it can be used to solve the problems.

2) An empowering strategy: EBN empowers patients by clarifying whether the evidence is clearly in favour of one intervention or another, or whether the evidence is unclear and the patient's preferences should therefore count for more in the decision-making. EBN also empowers junior doctors, nurses and even students by providing opportunities for them to support their contentions with evidence rather than blindly following their seniors.

3) A waste-limiting strategy: By raising questions about the evidence of benefits, risks and costs of routine procedures, EBN has the potential to reduce wasteful expenditure. For example, routine liver function testing in all cases of stroke is challenged by the lack of evidence of any benefit and may be dropped, leading to savings for patients as well as hospitals.

4) A quality improvement strategy: Quality requires maximization of benefits and minimization of risks for patients. It requires organizational and individual learning; guidelines and review criteria based on evidence; limiting of wasteful expenditure; and a philosophy of empowerment of workers. As indicated above, EBN helps in all of these and hence is a strategy to improve the quality of health care.

5) A protective strategy against commercial exploitation by industry and vested interests: Sometimes, probably very often, manufacturers of drugs and devices provide potentially misleading information to health professionals and make taller claims than are justified. EBN makes professionals wary of such claims and protects them from prescribing drugs and procedures with unsubstantiated claims; it gives them the skills to judge the appropriateness of such claims.

6) A communication-facilitating strategy: Health professionals often differ in their approach and opinion while dealing with a patient's problem. With the ever-growing need for teamwork and a multidisciplinary approach to patient care, communication is a key to success. Opinions may differ because of differences in the evidence base or in the values that go into decision-making. EBN provides the skills to delineate the basis of differing opinions, and a language for communicating with colleagues through explicit description of evidence and values.

Towards education

7) A learning strategy: Medical educators espouse self-directed, lifelong learning for health professionals, but how is this to be achieved? EBN offers at least one way. By being relevant to one's practice, EBN is consistent with the principles of adult learning.

8) An information-handling strategy: It is estimated that more than two million articles are published annually in more than 20,000 journals. It is impossible for anyone to read all of them, and health professionals would be overwhelmed and bewildered if asked to do so. It is important to separate the insignificant and unsound studies from the salient and crucial, or to know the right sources that have already done this. Efficiency and selectivity are of crucial importance, and EBN provides the basis for both.

9) A collaborative learning strategy: There is increasing emphasis in medical education nowadays for collaborative and inter-professional learning. EBN is a common ground and base on which all health professionals may collaborate in learning.

Towards research

10) A research promotion strategy: The EBN process includes accessing and appraising research. The process often leads to identification of shortcomings in existing research and awareness of the lack of good research on a topic, which may become the starting point for many residents and practitioners to plan and conduct research. The process also builds familiarity with many terms and concepts that help in understanding research methodology, and motivates people to plan good research in areas where no sound evidence exists.

Limitations of EBN

1) Limited applicability of evidence to an individual patient: The evidence most applicable to a patient is that which comes from a study of that patient alone. Such studies are possible and are usually termed N-of-1 trials: if the patient can be treated with a drug and a placebo in random order, and the effects measured, the results will be highly applicable to him. But the scope of such studies is very limited; only chronic conditions with repeatable outcomes (unlike MI, stroke or death) are suitable, and most health care professionals do not have the time, facilities or inclination to conduct them. Therefore, the available evidence comes from averaging over a number of patients, some of whom had beneficial effects, some adverse effects and some both. Is this average effect applicable to your patient? You don't know; you can't know. This is probably the biggest limitation of EBN. But what is the alternative? To have some idea of what happens on average is usually better than having no idea at all: the average effect is what would happen to most of your patients, and in the absence of a better alternative, it is probably the best knowledge to work with.

2) Lack of consistent and coherent evidence: This is another big problem. For many questions encountered in clinical practice there is little evidence, and certainly very little good evidence. For many others, evidence is available but inconsistent and incoherent; the recent controversy on the effectiveness of mammography for breast cancer exemplifies the incoherence that afflicts some of the available evidence.

3) Potential to limit creativity and innovation: EBN sets the standards of proof at such a high level that a new idea may be crushed even before sprouting. For example, if a new test is developed to diagnose, say, migraine or tension headache, the investigators may feel unable to conduct a good study because there is no "gold standard", which the current EBN literature highlights (even though this is not true) as a "must".

4) Need for time from clinicians: Clinicians are busy everywhere, more so in developing countries, and the time required to learn even the language of EBN is not available. This poses a major limitation for clinicians, even for reading and understanding pre-appraised EBN literature.

5) Limited availability of EBN resources: Many EBN resources, particularly secondary ones, are not available to, or affordable by, clinicians in developing countries; even local libraries do not carry them. Many clinicians may not have computers or Internet access. Under these circumstances, clinicians cannot access the literature and cannot practice EBN. The human resources and expertise necessary for promoting EBN are also limited at present.

6) Need to learn new (methodological and statistical) concepts: Many concepts in EBN are difficult to learn, and one or two workshops are not enough to clarify them. This puts many clinicians off, and they develop a kind of aversion towards EBN.

7) Confusing terminology: Many of the new terms created by EBN proponents are unnecessary and confusing. Simple and self-explanatory terms like risk difference and risk ratio have been renamed absolute risk reduction and relative risk, respectively. Similarly, the incidence of an outcome in the control group has been termed in EBM books the risk in the control group, and also the control event rate (which, strictly speaking, is not a rate). These duplicate terms confuse clinicians who, as it is, are not comfortable with numbers and mathematics.
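A small worked example makes these duplicate terms concrete; the trial counts below are hypothetical.

control_events, control_n = 30, 100   # outcome incidence in the control group
treated_events, treated_n = 20, 100   # outcome incidence in the treated group

cer = control_events / control_n      # "control event rate" = 0.30
eer = treated_events / treated_n      # "experimental event rate" = 0.20

risk_difference = cer - eer           # a.k.a. "absolute risk reduction" = 0.10
risk_ratio = eer / cer                # a.k.a. "relative risk" = 0.67 (approx.)
nnt = 1 / risk_difference             # number needed to treat = 10

print(round(risk_difference, 2), round(risk_ratio, 2), round(nnt))  # 0.1 0.67 10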

8) Wrong assumptions: EBM books and articles nearly always assume that the risk ratios of treatments and the likelihood ratios of diagnostic test results are more or less constant across patient subgroups, and apply them even to individuals. In fact, these assumptions have often been found to be wrong. These measures are indeed more stable across patient subgroups than other measures, but the claim that they are stable across subgroups or across different populations of patients is untrue. Yet these are fundamental assumptions on which EBM rests.
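The likelihood-ratio part of this assumption can also be made explicit. In EBM teaching, the same likelihood ratio is applied at any pre-test probability via odds; the sketch below shows that arithmetic with a hypothetical positive test (LR+ = 10) at two different pre-test probabilities.

def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert probability to odds, multiply by the LR, convert back."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

print(round(post_test_probability(0.10, 10), 2))  # 0.53
print(round(post_test_probability(0.50, 10), 2))  # 0.91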

Misconceptions about EBN

1) The patient waits while the doctor searches for and appraises the evidence: Critics sometimes cite emergencies that cannot wait while the doctor looks for evidence. But this is a misconception. Urgent decisions have to be taken based on what the doctor knows, or gathers from colleagues involved in the care of the patient. If he has a question about which the evidence is uncertain, he follows the four steps of EBM whenever time permits, and may change his decision after learning the evidence. It is expected that physicians will be up-to-date about the evidence on commonly encountered conditions, particularly emergencies, and will not have to start an EBN process while the patient waits for action in the emergency room.

2) Clinical expertise is ignored in EBN: Nothing could be farther from the truth. On the contrary, expertise and experience are essential to the practice of EBN; without them, even the first step of asking the question cannot begin. Specifying clinically meaningful outcomes requires clinical expertise. Similarly, evaluating a "gold standard" in a diagnosis paper, or baseline prognostic balance and post-randomisation withdrawals from analysis in a therapy paper, requires expert judgment. Thus, rather than ignoring clinical expertise, EBN emphasizes integrating one's expertise and experience with the evidence.

3) Only randomised trials or meta-analyses count as "evidence": Most criticisms of EBN have mainly been criticisms of randomised trials and meta-analyses, as if only these counted as evidence. This is not true. Read any book on EBN and you will find that it counts clinical experience, observational studies, basic research and animal studies as evidence. What EBN asserts is that the strength of evidence varies with regard to validity, clinical applicability and adequacy for benefit-risk analysis; randomised evidence is often stronger for these purposes than other kinds of evidence. At times, though, clinical experience may be the dominant factor in decision-making.

For example, consider a patient with subarachnoid haemorrhage and a wide-necked berry aneurysm. Suppose the evidence suggests that coiling is the best option, but the clinical team has no experience of it. The neurosurgeon may, under the special circumstances of urgency and available expertise, reasonably decide to do carotid ligation.

There are many disorders for which nonrandomized evidence is the most acceptable level of evidence and for which nobody has ever asked for randomized trials. For example, for deficiency disorders (e.g., hypokalemia, hypothyroidism), replacement therapy based on observational studies alone is well accepted. To my knowledge, nobody has ever asked for a randomised trial of these, or of treatments for many other endocrine and hematological disorders, though questions like tight versus loose control of diabetes did need randomized evidence.

4) EBN is a method for medical research: Many participants in EBN workshops come with the idea that they are going to learn clinical research methods. This is a misconception: EBN is for users of research. Doing research requires greater involvement and deeper knowledge. EBN workshops last from a few days to one week; this may be adequate for understanding the language of EBN, but not for inculcating the ability to do independent critical appraisal of the literature. EBN intends to make practitioners more informed users of research.

 
   References

1. Guyatt GH, Rennie D. Users' guides to the medical literature [editorial]. JAMA 1993;270:2096-7.
2. Haynes RB, Mukherjee J, Sackett DL, Taylor DW, Barnett HJ, Peerless SJ. Functional status changes following medical or surgical treatment for cerebral ischemia: results in the EC/IC Bypass Study. JAMA 1987;257:2043-6.
3. Duley L, Henderson-Smart D. Magnesium sulphate versus diazepam for eclampsia (Cochrane Review). In: The Cochrane Library. Chichester, UK: John Wiley & Sons Ltd; 2004.
4. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts: treatments for myocardial infarction. JAMA 1992;268:240-8.
5. Anderson GL, Limacher M, Assaf AR, Bassford T, Beresford SA, Black H, et al.; Women's Health Initiative Steering Committee. Effects of conjugated equine estrogen in postmenopausal women with hysterectomy: the Women's Health Initiative randomised controlled trial. JAMA 2004;291:1701-12.
6. Prasad K. Fundamentals of Evidence-Based Medicine. 1st ed. New Delhi, India: Meeta Publishers; 2004.


    Tables

[Table - 1], [Table - 2]





 
