
July 2013 SAMJ (Source: http://www.samj.org.za)
The author points out a number of known problems with medical research and EBM. These range from fairly minor problems, like the difference between statistical and biological significance, to major issues like data fabrication. It's important to be aware of these issues because otherwise it would be easy to misinterpret the available data. What I did find a bit concerning was the lack of any treatment of the attempts to solve these problems. For example, publication bias (which ties in with sponsorship by for-profit organisations) was mentioned as a problem for the medical literature, since only positive results get reported, but there was no mention of All Trials, an initiative to improve the reporting of clinical trials which seems to be having a positive impact.
Staying on publication bias: it is at least something we can detect, through funnel plots. If we do this for a particular topic, we can tell whether publication bias is something to worry about there. This, of course, requires checking all the available studies, something a doctor obviously can't do themselves for every medical question but something which should be done systematically. Collecting all the studies lets us both check for publication bias and see whether the studies agree, because what matters more than any individual study is what the body of work says when taken together (see the second perspective here). A meta-analysis of many studies on the same topic ought to be more accurate than any single study, and it should reduce the effect of the various biases and problems that can affect individual studies. Unfortunately, there's no mention of using multiple studies in the SAMJ article.
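To make the idea concrete, here is a minimal sketch of the inverse-variance weighting behind a simple fixed-effect meta-analysis. The numbers are entirely made up for illustration; real meta-analyses also handle between-study heterogeneity (random-effects models) and would plot each study's effect against its precision to draw the funnel plot mentioned above.

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Fixed-effect (inverse-variance) pooling: each study is weighted
    by 1/SE^2, so more precise studies count for more."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical example: five small trials of the same treatment,
# each reporting an effect size and its standard error.
estimates = [0.30, 0.10, 0.45, 0.20, 0.25]
std_errors = [0.15, 0.10, 0.25, 0.12, 0.20]

pooled, pooled_se = pool_fixed_effect(estimates, std_errors)
print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")
```

Note that the pooled standard error comes out smaller than any individual study's, which is the sense in which combining studies "ought to be more accurate than any single study" — provided, of course, that publication bias hasn't filtered which studies we get to combine.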
I would have appreciated it if the article had included a bit on those sorts of methods for overcoming the problems that will be encountered with EBM, but at the very least it's good that it makes people aware of them. As I said at the beginning, though, I don't agree with the conclusions, which say some slightly concerning things.
Evidence-based medicine has been defined as ‘The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.’ There are two major assumptions in this statement. First, it is assumed that the evidence is in fact the best. Unfortunately this is not necessarily so, and published evidence is affected by bias, sponsorship, and blind faith in mathematical probability which may not be clinically relevant.
Here my issue is that they’ve twisted the definition a bit. There is a huge difference between “current best evidence” and “best possible evidence.” The current best evidence may not be the best possible evidence but that’s not a reason to throw it away. We can’t know for certain whether we have the best possible evidence but it’s reckless and irresponsible to just ignore the evidence that we do have. Occasionally that will lead us to make the wrong call but at least we can justify why we made the wrong call.
Second, the evidence is population based and may not be applicable to the individual, and blind adherence to this concept may cause harm. We must not abandon clinical experience and judgement in favour of a series of inanimate data points. Medicine is an uncertain science.
And…
The concept [evidence-based medicine] is disease and not patient orientated, is not scientifically perfect, and must not be viewed as exclusive. As Osler observes, ‘Variability is the law of life and no two individuals react alike and behave alike under the abnormal conditions which we know as disease. The good physician treats the disease, the great one treats the patient.’
The first quote is from the abstract and the second from the body, but I think they need to go together. These sections worry me because they seem to imply that, because of the limitations and caveats of EBM, we should return to a time when everything was based on personal judgement. I don't think the author is advocating abandoning evidence-based medicine, but there is a strange sort of conflict between these statements and supporting it.
Clinical experience and judgement are necessary, but to call EBM “a series of inanimate data points” that are “disease and not patient orientated” is very misleading. Patients are individuals who need personalised treatment, but we learn how those treatments work by looking at large groups. That is still completely patient-orientated, and it is necessary if we are to know what will and won't work. I actually find those comments disturbingly similar to those made when Pierre Louis published his results showing that blood-letting was harmful. As described in Trick or Treatment:
[W]hen Pierre Louis published the results of his trials in 1828, many doctors dismissed his negative conclusion about bloodletting precisely because it was based on the data gathered by analysing large numbers of patients. They slated his so-called “numerical method” because they were more interested in treating the individual patient lying in front of them than in what might happen to a large sample of patients.
Hopefully, we’ve moved on in the last 180 years. The only way to make sure our treatments are anchored in reality is by using the current best available evidence, tempered by knowledge of its limits. This is how we know what works and how we can best treat patients.
Brilliant article! I agree, it’s difficult to argue with Muckart’s key points – it’s well known that there is an issue with the publication and availability of clinical trial data – but as you mention there are no real suggestions of how this could be improved. “We must not abandon clinical experience and judgement in favour of a series of inanimate data points” closely followed by “For more than 2 000 years, anecdotes, personal experience and bias dictated medical practice. Untold harm was caused…” also sounds like quite a contradiction in terms to me.