Medico-legal Perils of Artificial Intelligence and Deep Learning

Dr. Adam Tabriz

Photo by Mitya Ivanov from Unsplash

Technology is advancing rapidly, and its footprint is increasingly visible within the healthcare domain. The pronounced ambivalence toward technology among the physician community is, accordingly, understandable. In contrast to millennials' ever-increasing reliance on technological applications, baby boomers' distaste for health information tools is well known. Amid all of this, one piece has yet to catch up with the ever-evolving high-tech revolution: the still unanswered question of the medico-legal risks menacing healthcare at large. Medicine, owing to its peculiar nature, is considered among the professions most vulnerable to litigation.

Although the number of successful lawsuits fluctuates, the trend appears to be upward. The reasons patients consider filing lawsuits, and the ways physicians breach the standard of care, vary. Still, if one digs deeper, beyond the deviation from the standard of care one can often find a broken link in the chain of the bond between the doctor and the patient.

Trust, knowledge, concern, and loyalty are the essential elements of every healthy tie between physicians and patients. Such an accord is a consensual bond involving vulnerability and trust on both sides, which makes it one of the most moving and meaningful experiences any human being can share. Yet the rapport that grows out of such a connection is not always perfect.

Traditional scholars have categorized the delicate kinship between patient and physician into three types: guidance-cooperation, active-passive, and mutual participation. Although all three have been acknowledged throughout history, the last dominates today's healthcare scenery. It reflects citizens' increasing access to boundless information and growing know-how, as well as the advance of consumerism. Other factors that can disrupt the association include constraints, language barriers, transparency, cultural norms, and personal attitudes.

For simplicity's sake, any instrument that intervenes between a patient and a physician must ideally work in concordance with the unique bond established between the two parties. If it is not synchronized with that bond, it can break one or more elements of the relationship. The factors that disrupt those elements vary over time, and new ones keep emerging.

One potential disruptor of our time is the application of artificial intelligence algorithms to medicine and health. According to a recent article published in the JAMA Network, physicians face imminent threats from using unregulated and only partially validated artificial intelligence (AI) technologies.

Technology is meant to ease some of the burden of physician responsibilities and make their work more efficient and precise. Nonetheless, as industry business requirements change, the algorithms need to be periodically updated and synchronized. This unquestionably applies to the healthcare industry, especially with regard to revising and synchronizing the standard of medical care for a specific time, place, and medical practice.

The practice of medicine is constantly changing, and so are prevailing clinical care, technology, and social norms. That inherent, multifaceted volatility of medical practice is a standing source of conflict between physicians and patients.

To discern this further, let's go over some definitions.

Medical practice is the professional work of an individual recognized by their peers as an expert in medicine. Its primary mission, according to the Hippocratic Oath, is to treat often, cure sometimes, and comfort always. The pledge applies to the care of a person who consents to take part as the patient by establishing an alliance. A clinical relationship built on trust, through the full concurrence of both parties within the context of knowledge, skills, and ethics, is therefore the ultimate goal.

Unlike other disciplines, clinical decision-making remains the most variation-prone of all, shaped by the doctor-patient interaction, the unique requirements of each individual, and the societal factors in between. Defining the standard of care therefore does not follow the fixed templates familiar from other industries. The ideal of medical care shifts with time, place, person, resources, accepted societal norms, and economic determinants. The standard of medical care is what physicians with a similar level of training, in the same medical community, would offer under comparable circumstances; a breach of that standard is the basis of alleged malpractice.

Photo by Andrea De Santis from Unsplash

Artificial intelligence, machine learning (ML), or deep learning (DL)

In computer science, artificial intelligence (AI), also referred to as machine intelligence, describes intelligence demonstrated by machines, in contrast to the natural human intellect with which we are all familiar. Leading AI textbooks define the discipline as the study of “intelligent agents.”

For an AI to function, it must first learn. That learning happens through “machine learning,” which gives the system the independent ability to learn by shadowing its human counterpart. It teaches itself through intensive data mining, drawing on various sensors and metadata input modes, algorithmic protocols, and the humans it trails. Through machine learning, AI can improve its own performance without being explicitly programmed for each task.
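To make that idea concrete, here is a minimal sketch of supervised machine learning in Python, assuming the widely used scikit-learn library. The toy vital-sign numbers and the “needs follow-up” label are invented purely for illustration and carry no clinical meaning.

```python
# A minimal sketch of supervised machine learning, assuming scikit-learn is
# installed. The "vital sign" features and labels are invented for
# illustration only; they are not clinical data.
from sklearn.linear_model import LogisticRegression

# Each row is a (heart rate, temperature in C) pair; each label marks a
# hypothetical "needs follow-up" outcome (1) or not (0).
X = [[72, 36.8], [110, 38.9], [65, 36.5], [120, 39.4], [80, 37.0], [115, 39.0]]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                      # the model infers the pattern from examples

print(model.predict([[105, 38.7]]))  # prediction for a new, unseen case
```

The point is that the model is never handed an explicit rule; it infers one from labeled examples, which is exactly what “learning without explicit programming” means.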

Deep learning (DL), also a subset of artificial intelligence (AI), provides networks capable of learning in an “unsupervised” fashion from data that is “unstructured or unlabeled.” Also known as deep neural learning or deep neural networks, deep learning gives artificial intelligence in medicine and healthcare an autonomy that feels unlimited and liberating, which makes intelligent human contributions to its development all the more consequential for ensuring ethical and legal compliance.
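For contrast, the sketch below shows what “deep” means in practice: a network with several stacked layers that adjusts its own internal weights from data. It assumes the PyTorch library is available; the layer sizes, random toy data, and training settings are arbitrary choices made only for illustration.

```python
# A minimal sketch of a "deep" (multi-layer) neural network, assuming PyTorch
# is installed. The layer sizes and toy data are arbitrary and illustrative.
import torch
import torch.nn as nn

# Stacked layers of nonlinear units are what make the network "deep".
net = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),
)

X = torch.rand(32, 4)                     # 32 fake cases with 4 features each
y = torch.randint(0, 2, (32, 1)).float()  # fake binary outcomes

optimizer = torch.optim.Adam(net.parameters(), lr=0.01)
loss_fn = nn.BCELoss()

for _ in range(100):          # the network adjusts its own weights from the data
    optimizer.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    optimizer.step()

print(loss.item())            # training loss after the toy run
```

Real clinical deep learning systems differ mainly in scale and in the rigor of their validation, which is precisely where the medico-legal questions raised here begin.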

It has recently become a common conviction among some technocrats, who place generous faith in AI's capacities, that it will ultimately diagnose diseases and offer suitable treatment options even more flawlessly than humans. They are relentlessly sure that machines will be able to learn, work up differential diagnoses, and choose the best treatment for a particular patient without physician participation. It is utterly premature and radical to accept such a premise. Still, giving it the benefit of the doubt, let's speculate that such a scenario is probable: one in which the doctor-patient liaison is, in fact, a machine-patient relationship or a corporate-patient affiliation. Even then, it is safe to assume that achieving such a scenario would require a transition period in which physicians must periodically intervene.

With healthcare rushing toward robotic medicine at a swift pace, human intervention must be treated as integral to the medical community's period of influence and safety. Failure to do so will create a vacuum, drawing in the factors prone to swinging the standard of medical care, adversely affecting clinical judgment, and thus exposing the physician to legal implications.

So, what if AI propagates recommendations without the capability to communicate the underlying differential diagnosis behind the selected treatment choice? Or what if the machine learning model was trained on unrelated clinical scenarios, using unreliable methods or fuzzy data sets?

Generally, under the law, the physician is liable if they do not adhere to the standard of care and injury transpires as a direct result of that deviation. With the application of AI, one can foresee many potential avenues open for legal remedy. (Fig. 1)

Fig. 1: Adapted from JAMA; published online October 4, 2019. doi:10.1001/jama.2019.15064

Due to its multifaceted nature, deviation from the standard of care as it applies to artificial intelligence does not stop there. The continual shifts in social expectations, science, technology, and the sociopolitical landscape around medical practice, along with the ever-changing socioeconomic healthcare panorama, mean that algorithms must be updated and validated in step with those changes. Nonetheless, the medical community's current skepticism and disengagement from its own technology domain make that task next to impossible. The repercussion is physician liability left at the mercy of the tech industry and non-physician algorithms. Until the public is ready to place its faith in automation to stay healthy without human empathy, or with complete trust in empathic transference, the potential for legal implications will remain high and unpredictable.

Logically, we can all concur on utilizing AI as a distinct form of instrument in medicine. But there must be a point of concession that likewise secures safety protocols throughout the maturation period. For that reason, any use of AI beyond its functional requirements must be diligently validated.

Indeed, without well-defined operational requirements, we may be opening a “can of worms.” Machine learning and artificial intelligence are tools that help physicians perfect the history and physical exam, develop refined differential diagnostic workups, order appropriate tests, and enhance patient collaboration. Any contradiction between the physician's recommendation and the machine's must be delineated and made transparent to the patient.

Establishing proper expectations is imperative, especially around possible complications and treatment failures, and it ought to be coherent to every patient and physician alike. It must also be clear whether a deviation from the standard stems from technological failure or from pure physician negligence.

How physicians plan to override an AI recommendation carries unique legal and ethical challenges, all the more so if the algorithms are not disclosed to the physician in advance. When complications occur, how that particular clinical decision-making process is perceived by patients, peers, or the legal system is a delicate issue to tackle.

Legal liability and how to prevent it

Indeed, an overwhelming number of open-ended questions must be answered before the healthcare community can place its faith in a machine-learning-capable mechanism such as “Doctor Alexa.”

The decision to override the power of AI is a double-edged sword, as we may be trusting robots more than we are willing to contemplate the consequences of doing so.

But whom does the law trust: physicians or the tech industry? Slow but sure steps in the right direction are the key.

The medical community's attitude must change, and the appropriate course correction must follow: reclaiming ownership of a realm physicians have been losing to other industries for the past decade. So let's start by simplifying the clinical judgment process with DL and AI while figuring out how to harness the power of machine learning to shadow each physician independently, molding the technology to each physician's custom and style of practice. In turn, we can unleash its ability to use shared patient data to improve outcomes through feedback and constructive criticism, first outlining the possible pros and cons. That said, this is not the reality today. There is significant blurriness and discrepancy within current systems of applied science and medicine. We must presume there is a long road ahead before the medical community can reasonably show confidence in the full spectrum of available solutions. Physician ownership of the healthcare domain is indispensable.

Photo by Michael Dziedzic from Unsplash

The science of creating functional requirements and validating algorithms must become part of the medical curriculum. Imperatively, algorithms should deliver as intended for tactical medical care, devoid of any strategic undertaking that pivots toward corporate interest or financial gain.

Empower physicians by guaranteeing the adaptability of deep learning algorithms to individual circumstances, and design them to act as assistants answerable to the physician rather than as independent providers.

AI must recognize the reference point for the standard of care for a specific scenario, time, place, and person. With the patient's help, the physician ought to redefine every case and hold the legal, ethical, and technical power to override decisions through a mutually personalized approach. It is always wiser, and safer, to limit the scope of AI applicability to small, focused domains at a time until the science supporting DL has matured enough to accommodate every case exclusively.
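As a thought experiment, the sketch below shows one way software could keep that override power explicit and documented. Everything in it, the class name, fields, and rationale text, is hypothetical and not drawn from any real clinical system.

```python
# A purely hypothetical sketch of a decision-support record in which the
# physician retains the final say and every override must be documented.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    ai_recommendation: str        # what the algorithm suggested
    physician_decision: str       # what the physician actually chose
    override_rationale: str = ""  # required whenever the two differ
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def overridden(self) -> bool:
        return self.ai_recommendation != self.physician_decision

    def validate(self) -> None:
        # An undocumented override is exactly the kind of gap that invites
        # medico-legal trouble, so refuse to record it.
        if self.overridden and not self.override_rationale.strip():
            raise ValueError("An override must include a documented rationale.")

record = DecisionRecord(
    ai_recommendation="Start medication A",
    physician_decision="Order additional imaging first",
    override_rationale="Patient history suggests the model's training data may not apply.",
)
record.validate()
print(record.overridden)  # True: the physician kept the final say and documented why
```

The design choice worth noting is that an undocumented override is rejected outright, so the record of who decided what, and why, never depends on memory.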

We must keep algorithms out of the reach of the inexpert: limit deep learning models to individual physicians or medical groups rather than making them universally available to everyone across the board. Avoid bureaucratic processes that bargain away the quality and functionality of the technology. Enable AI to compare benchmarks and make a recommendation, because rigid predetermined protocols are a road to pernicious healthcare.

