
AI in Healthcare: Regulatory & Legal Concerns – A Clinician’s Perspective

Written by: Dr. Ganapathy

September 5, 2024


AI is increasingly being used in healthcare worldwide, and a voluminous literature now deals with almost every aspect of its deployment.

The WHO, the European Commission (EC), and hundreds of other learned bodies have held hundreds of conferences, and thousands of publications have appeared on every conceivable subdomain of AI in healthcare.

The EC introduced the Artificial Intelligence Act (AIA), under which AI systems must undergo pre-deployment compliance assessments and post-market monitoring to ensure adherence to all prescribed standards.

The FDA launched a digital health division in 2019, with new regulatory standards for AI-based technologies. Regulators were explicit that software by itself could constitute Software as a Medical Device (SaMD).

India is among the countries that have addressed the integration of AI in healthcare in official documents. The Digital Information Security in Healthcare Act (DISHA) and the National Health Policy are tailored to the country's unique healthcare challenges and infrastructure.

The legislative and regulatory framework in its current form, however, is not yet future-ready; it suffers from significant gaps and a lack of clarity owing to multiple disaggregated laws and policies.

However, on the question of a clinician’s accountability and liability when the use of AI is followed by an adverse clinical outcome, the silence is deafening! Regulations and the law can never keep pace with the exponential growth of technology.

When Charles Dickens remarked in Oliver Twist that “the law is an ass,” he could very well have been referring to the present “rules and regulations” regarding the liability of a clinician deploying AI in healthcare.

This communication discusses liability issues when AI is deployed in healthcare. Only in Utopia would ever-evolving, future-ready, user-friendly, uncomplicated regulatory requirements promoting compliance and adherence already be in place.

The benefits of AI could be delayed if slow, expensive clinical trials must first demonstrate unequivocal, statistically valid, and persistent improvements in healthcare outcomes.

Regulations should distinguish between diagnostic errors, malfunctions of the technology, and errors due to the initial use of inaccurate or inappropriate training data sets.

How responsibility and accountability should be shared when an AI-based recommendation causes clinical problems is not clear. Legislation is necessary to allow the apportionment of damages consequent to the malfunction of an AI-enabled system.

Defective equipment and medical devices are subject to laws governing product liability. However, Watson, the AI-enabled supercomputer, is treated as a consulting physician and not categorized as a product.

AI systems like Watson learn from each incremental case and can be exposed, within minutes, to more cases than many clinicians could see in many lifetimes.
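
To make “learning from each incremental case” concrete, here is a minimal, purely illustrative sketch of online learning using scikit-learn’s partial_fit. It is an assumption for illustration only and does not represent Watson’s actual architecture or any real clinical system:

```python
# Minimal sketch of incremental (online) learning. This is NOT Watson's
# actual mechanism; it only illustrates how a model can be updated case
# by case instead of being retrained from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")   # logistic-regression-style learner
classes = np.array([0, 1])               # e.g., 0 = benign, 1 = malignant

def learn_from_case(features, outcome):
    """Update the model with a single new, confirmed case."""
    X = np.asarray(features).reshape(1, -1)
    y = np.asarray([outcome])
    model.partial_fit(X, y, classes=classes)

# Thousands of synthetic cases are absorbed one at a time, in seconds.
rng = np.random.default_rng(0)
for _ in range(10_000):
    case = rng.normal(size=5)        # five made-up clinical features
    label = int(case.sum() > 0)      # made-up ground-truth diagnosis
    learn_from_case(case, label)
```

Because each update is a cheap arithmetic step rather than a full retraining, a model of this kind can absorb volumes of cases that no individual clinician could ever see.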

IP laws in India at present do not recognize the patentability of algorithms, the basis on which an AI solution functions. AI systems are becoming more autonomous, resulting in a greater degree of direct-to-patient advice that bypasses human intervention.

Can a doctor overrule a machine’s diagnosis or decision and vice versa? Who is responsible for preventing malicious attacks on algorithms?

A clinician is expected to know the answers when asked why a specific management option was recommended. Yet it is very unlikely that a clinician using a specific AI algorithm would know how the training, testing, and validation were carried out, or the numbers in each group!
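
For readers unfamiliar with those terms, the sketch below shows how a development dataset is conventionally partitioned into training, validation, and test groups. The dataset and the 70/15/15 proportions are hypothetical, chosen only to illustrate what “the numbers in each group” means:

```python
# Illustrative 70/15/15 train/validation/test split. The 10,000 records
# and all proportions are assumptions, not taken from any real AI product.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(10_000, 20)            # 10,000 hypothetical patient records
y = np.random.randint(0, 2, size=10_000)  # hypothetical diagnostic labels

# Hold out 15% as a test set, then split the rest into training/validation.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.15, random_state=42
)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.15 / 0.85, random_state=42
)

# These counts are "the numbers in each group": 7000, 1500, 1500.
print(len(X_train), len(X_val), len(X_test))
```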

Could AI ‘replacing’ a doctor’s advice diminish the value of clinicians and reduce trust? The relative trust placed in technology and in healthcare professionals also differs between individuals, between generations, and over time.

As noted earlier, when an AI-based solution is used, existing regulations do not distinguish between errors in diagnosis, malfunctions of the technology, and the original use of inaccurate or inappropriate data in the training database.

It is not clear how one determines the degree of accountability of a medical professional when the wrong diagnosis or treatment is due to a glitch in the system or an error in data entry. Will the software developer or the specific program design engineer be liable?

Interpretation of ‘the law’ could again differ depending on many variables. This is a grey area unlikely to be resolved soon.

Appropriate legislation is also necessary to allow the apportionment of damages consequent to unwanted actions of an AI-enabled system. It has been recommended that AI systems develop ‘moral’ and ‘ethical’ behavior patterns aligned with human interests. Standards for robots also need to be formalized if a robot is ever to be sued for malpractice.

Vicarious responsibility could extend to the human surgeon overseeing the robot, the company manufacturing the robot, and the specific engineer who designed it. The culpability of each of these protagonists also needs to be taken into account.

Ultimately, the law is interpreted contextually, and perceptions could differ among patients, clinicians, and the legal system. Creating awareness among all stakeholders, with periodic updates, is necessary. Adoption challenges will ultimately be surmounted, but this will take longer than the maturation of the technologies themselves.

Lack of clarity in AI accountability poses a significant obstacle to its adoption. AI systems take inputs and generate outputs without disclosing their underlying measurements or reasoning, a challenge known as the black-box problem.
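
The sketch below illustrates the black-box problem under assumed, synthetic data: the clinician-facing output is a single risk number, with no clinically meaningful rationale attached. The model and inputs are hypothetical:

```python
# Sketch of the black-box problem on synthetic data: the caller receives
# a risk score but no clinically meaningful rationale. Hypothetical only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 8))          # 8 synthetic clinical features
y_train = (X_train[:, 0] > 0).astype(int)    # synthetic diagnosis labels

black_box = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)

patient = rng.normal(size=(1, 8))
risk = black_box.predict_proba(patient)[0, 1]
print(f"Predicted risk of disease: {risk:.0%}")
# The forest's internal thresholds are inspectable in principle, but no
# human-readable reasoning accompanies the number printed above.
```

Explainability tools such as feature-importance scores exist, but they approximate rather than expose the model’s actual reasoning.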

To tackle this issue and prevent healthcare practitioners from being wrongly held responsible for AI errors, the implementation of standardized policies and governmental measures is imperative.

If an AI tool recommends or performs a course of treatment that results in an unexpected adverse outcome, how will the liability be shared?

Are existing anti-discrimination and human rights laws sufficient to address algorithmic bias, whereby an AI algorithm produces poorer outcomes in a historically disadvantaged group to which the patient belongs? And would the clinician’s defense lawyer be able to prove such bias in the first place?

Excessively onerous regulations could stifle innovation, delaying the benefits of AI to the patient. Unlike in years past, patients’ expectations have reached an all-time high, and the right to state-of-the-art health procedures will soon be considered a fundamental right.

Not using AI may, in the near future, even be considered a deficiency of service and grounds for malpractice litigation! Most doctors today have malpractice insurance coverage; however, even the small-print disclaimers and exclusion clauses do not address the deployment of AI.

Acknowledgment: Some of the contents of this blog have been taken from an article published earlier: Ganapathy, Krishnan (2021). Artificial Intelligence and Healthcare Regulatory Legal Concerns. Telehealth and Medicine Today. https://doi.org/10.30953/tmt.v6.252

