Trust in Patient Relationships
June 13, 2019
by Robert E. Cranston, MD, MA (Ethics)
Any third-year medical student knows that the basic principles of medical ethics are autonomy, beneficence, non-maleficence and justice. At least, that is, according to Beauchamp and Childress’s Principles of Biomedical Ethics, first published in 1979. For many people these are the only specific guidelines we should employ in sorting through clinical ethics conundrums. In Western medicine, Autonomy (with a capital A) seems to be the primary consideration. The customer is always right.
Of course, numerous other factors should be included in ethical considerations. For instance, in The Virtues in Medical Practice the late Professor Edmund Pellegrino stressed the concept of the virtues in medicine and the virtuous physician’s role in problem-solving. Among the virtues he listed were trust, compassion, prudence, temperance, self-effacement and, like Beauchamp and Childress, justice. With his elaboration on these issues, I think Dr. Pellegrino came much closer to the truth than the Beauchamp and Childress model does. I particularly like his thoughts on trust.
For our patients to trust us, we must maintain integrity in our actions and communications. There are nuances here: total informed consent is never completely attainable, and complete transparency is not always advisable, particularly when we don’t have all the data needed to make informed decisions. Still, some actions plainly destroy trust. Consciously misleading patients, fraudulently performing procedures for which the patient has not consented, or publishing false information to support one’s predetermined opinions (bypassing the scientific process) or for purposes of secondary gain are clearly unethical.
In this vein, I was recently asked to comment on research performed on U.S. veterans. In this study, investigators performed potentially dangerous procedures during a multi-center liver research trial, without patient consent and in direct conflict with the IRB-approved protocol. Some veterans experienced complications from these procedures, though fortunately no one died. Trust was broken, and one would think the guilty parties would be soundly chastised. Instead, the perpetrators of these deliberate assaults were not seriously reprimanded and did not lose their jobs; in fact, the information gained from these illicit procedures was published in a reputable medical journal. A justification offered was that important information, no matter how it is obtained, should be shared with the scientific community. A similar argument was made after World War II, when research based on Nazi human experimentation was published.
Another continuing problem with current medical publishing is the dissemination of information that is clearly faulty, outdated or based on poorly designed research. A landmark 2005 article by John P. A. Ioannidis is reportedly one of the most widely cited articles in the medical literature. Ioannidis is a master of statistical analysis, and for those interested, his full discussion of “Why Most Published Research Findings Are False” is worth reading. He notes that financial and professional conflicts of interest, confirmation bias, poorly designed studies and small sample sizes make many published research findings suspect. He also offers six corollaries regarding published research:
- The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
- The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
- The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
- The greater the flexibility in designs, definitions, outcomes and analytical modes in a scientific field, the less likely the research findings are to be true.
- The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
- The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
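Ioannidis’s reasoning can be made concrete with the positive predictive value (PPV) of a “significant” finding: the probability that a claimed relationship is actually true, given the field’s prior odds of true relationships (his R), study power (1 − β) and the significance threshold (α). The short sketch below uses his basic no-bias formula, PPV = (1 − β)R / ((1 − β)R + α); the specific numbers plugged in (R = 0.1, power of 0.2 versus 0.8, α = 0.05) are illustrative assumptions on my part, not figures from his paper.

```python
def ppv(R, power, alpha=0.05):
    """Positive predictive value of a "significant" research finding.

    R:     prior odds that a tested relationship is true (Ioannidis's R)
    power: probability of detecting a true relationship (1 - beta)
    alpha: significance threshold (Type I error rate)
    """
    true_positives = power * R   # expected true positives, per null relationship tested
    false_positives = alpha      # expected false positives, per null relationship tested
    return true_positives / (true_positives + false_positives)

# Assumed scenario: an exploratory field where 1 in 10 tested relationships is real,
# studied first with small underpowered studies, then with adequately powered ones.
print(f"Power 0.2: PPV = {ppv(0.1, 0.2):.2f}")  # ~0.29 -- most "positive" findings false
print(f"Power 0.8: PPV = {ppv(0.1, 0.8):.2f}")  # ~0.62
```

Even in this simplified setting, underpowered studies in a speculative field leave a “positive” result more likely false than true, which is the thrust of the first corollary.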
In this information age, the errors of poor research are compounded by the public’s desire to hear something new on almost any health-related subject. One week we are told not to eat egg yolks; the next we are told eggs are a miracle food. Low-fat, low-carb, ketogenic, rotation and various program diets are touted as the new cure-alls for obesity, with the advice shifting frequently as new “research” (usually based on single, unreplicated, small-sample studies) becomes available. Bee sting therapy is useful for treating multiple sclerosis. No, it isn’t. The important caveat contained in most of these “scientific” articles, though seldom stated in the lay press, is, “Further studies are needed to confirm these findings.”
So, what are we to do regarding these various affronts to integrity and trust? How are we to restore some of the trust that has been lost? On a systemic level:
- We should demand better-powered evidence in research studies.
- We should, if possible, decrease the pressure to publish as a primary means of achieving academic tenure. Though this is likely to be difficult, academic promotion and tenure should be more heavily weighted on teaching, service and the quality of research—not the quantity of articles published.
- We should be willing to publish more “negative studies,” which, while having less crowd appeal, are important in searching for truth. (Though, as Ioannidis points out, these are not truly negative studies and should be labeled something else.)
- We should make it clear in our writing that small differences in outcome, while statistically significant, are frequently of no clinical significance (see the sketch after this list).
- We should monitor our own research for ethical propriety and impose stricter penalties for serious breaches. One of the essential elements of professionalism is self-policing of inappropriate behaviors.
- We should support efforts to highlight debunked studies as such.
- We should refuse to cite studies which have been subsequently disproven. The fraudulent study by Dr. Andrew Wakefield that sparked the anti-vaccination campaign continues to be cited by many anti-vax advocates, though it was clearly debunked and retracted years ago. Wakefield lost his license and standing because of his fraudulent work.
- We should subscribe to or periodically peruse whistle-blower sites such as Retraction Watch.
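To see how a statistically significant result can be clinically meaningless (the point about small differences above), consider a minimal, entirely hypothetical sketch: suppose a drug lowers systolic blood pressure by an average of 0.5 mmHg, an effect far too small to matter to any patient, but the trial enrolls 20,000 patients per arm. All the numbers below are assumptions chosen for illustration.

```python
import math

# Hypothetical numbers, assumed for illustration only.
effect = 0.5        # mean reduction in systolic BP, mmHg (clinically trivial)
sd = 15.0           # assumed standard deviation, mmHg
n = 20_000          # patients per arm

se = sd * math.sqrt(2.0 / n)         # standard error of the difference in means
z = effect / se                      # test statistic
p = math.erfc(z / math.sqrt(2.0))    # two-sided p-value under a normal approximation

print(f"z = {z:.2f}, two-sided p = {p:.4f}")   # z ≈ 3.33, p ≈ 0.0009
```

With a large enough sample, almost any nonzero difference clears the p < 0.05 bar; the effect size, not the p-value, tells us whether a finding matters to patients.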
On a personal level:
- We need to be careful when interpreting new research, and we need to be wary of changing our practice patterns based on new, limited information. The newest is often not the best.
- We need to educate our patients regarding new research and false reports. In our age of tolerance and in efforts to achieve positive patient engagement evaluations, we may be reluctant to tell our patients that their newly discovered “research” is not reliable.
- We need to be vigilant regarding our own claims for therapies we offer patients.
- We need to make certain we conduct our informed consent (shared decision-making) discussions appropriately and ensure that additional interventions are rarely if ever performed without patient input and consent.
As Thomas J. Watson once said, “The toughest thing about the power of trust is that it’s very difficult to build and very easy to destroy….” It is our job as healthcare professionals to establish and nurture trust with our patients, and it’s our job to do our utmost to be worthy of that trust.