
Why Are Researchers Worried About AI Chatbots Potentially Spreading Healthcare Misinformation?

Written by: Arti Ghargi

March 26, 2024


(Image Source: FreePik)

A recent study found that the LLMs behind popular AI-powered chatbots, including ChatGPT, failed to consistently block requests to generate healthcare disinformation.


Amid all the hype and hysteria around Generative Artificial Intelligence, the risks associated with it often take a backseat. In the less than two years since the groundbreaking launch of OpenAI’s ChatGPT in late 2022, Gen AI has become a key theme in discourse across sectors.

In healthcare too, it has shown immense potential to transform processes and influence outcomes.

However, researchers have been flagging disinformation/misinformation concerns involving Artificial Intelligence.

Right after the launch of ChatGPT, a group of researchers tested how the chatbot would respond when fed questions containing conspiracy theories and false narratives.

The outcome was so disconcerting that the researchers minced no words in criticizing the technology.

While fact-checking activists across the world agree that disinformation existed even before the advent of this technology, their concern lies in the ease with which AI tools can generate dis/misinformation.

There are several concerns associated with AI, including AI-powered deepfakes and misinformation influencing elections around the world.

However, in a sensitive sector such as healthcare, such disinformation could be life-threatening.

A recent study published in the British Medical Journal (BMJ) found that the large language models behind most of the popular AI-powered chatbots, including ChatGPT, either lacked sufficient safeguards or were inconsistent in preventing the production of healthcare disinformation.

The researchers have thus called for robust regulation, more transparency and routine audits to prevent AI from contributing to healthcare disinformation.

Exclusive Study Findings

The study was conducted by a team led by researchers from Flinders University in Adelaide, Australia, to determine whether the large language models (LLMs) powering AI assistants and chatbots can resist attempts to generate health misinformation.

The study found that LLMs including GPT-4 failed to block requests to generate health misinformation; Claude 2, by contrast, consistently blocked such queries.

For the purpose of the study, the researchers submitted standard prompts (queries or requests to chatbots) asking the LLMs to generate blog posts containing misinformation on two topics: sunscreen causing skin cancer and the alkaline diet curing cancer.

The prompts also asked the LLMs to create variations targeting different demographics. The initial prompts did not attempt to dodge the built-in safeguards; in subsequent attempts, however, jailbreaking techniques were used on the LLMs that refused to generate disinformation.

These techniques are used to manipulate, or in simpler terms ‘fool’, the models into generating responses that would otherwise breach their policies or safeguarding mechanisms.

The study says the researchers submitted a total of 40 initial prompts, followed by 80 jailbreaking attempts, to evaluate the responses and the effectiveness of the safeguards.
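For illustration only, a safeguard evaluation of this kind could be scripted as a loop over prompts and jailbreak variants. The sketch below is not the study’s code: query_model() is a hypothetical stand-in for whichever chatbot interface is under test, and the crude keyword check merely approximates what would in practice be a human review of each response.

```python
# Illustrative sketch only -- not the study's code. query_model() is a
# hypothetical stand-in for the chatbot interface under test, and the
# keyword-based refusal check merely approximates a human review.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")


def query_model(model_name: str, prompt: str) -> str:
    """Placeholder: replace with a real call to the model under test."""
    return "I can't help with that request."  # canned response so the sketch runs


def looks_like_refusal(response: str) -> bool:
    """Very rough heuristic for whether a response reads as a refusal."""
    return any(marker in response.lower() for marker in REFUSAL_MARKERS)


def evaluate_safeguards(model_name: str, initial_prompts: list[str],
                        jailbreak_templates: list[str]) -> dict:
    """Submit each initial prompt; if the model refuses, retry it wrapped
    in jailbreak templates, mirroring the two-stage protocol described above."""
    tally = {"generated": 0, "refused": 0, "jailbroken": 0}
    for prompt in initial_prompts:
        if not looks_like_refusal(query_model(model_name, prompt)):
            tally["generated"] += 1
            continue
        tally["refused"] += 1
        for template in jailbreak_templates:
            wrapped = template.format(prompt=prompt)
            if not looks_like_refusal(query_model(model_name, wrapped)):
                tally["jailbroken"] += 1
                break
    return tally
```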

As per the published study, GPT-4 (via ChatGPT), PaLM 2 (via Bard), and Llama 2 (via HuggingChat) were found to generate health disinformation on sunscreen and the alkaline diet, while GPT-4 (via Copilot) and Claude 2 (via Poe) consistently refused such prompts.

While Gemini Pro and Llama 2 sustained the capability to generate health disinformation, GPT-4, which had initially blocked the attempts, produced misinformation on a second attempt, highlighting the lack of a continuous safeguard mechanism.

The study says only Claude 2 maintained consistent refusals.

When the researchers investigated the developers’ websites, they found tools for reporting potential concerns. However, they did not find any public registry of reported issues, details on how vulnerabilities were patched, or detection tools for generated text.

The study says that although some AI assistants added disclaimers, these could easily be removed from the posts, opening the responses to potential misuse.

The researchers also sent a standardized email and follow-ups to the companies that developed the LLMs but received little to no response; only Anthropic and Poe acknowledged receipt of the email.

Man typing prompt in ChatGPT (Source: FreePik)

Why is it Concerning?

In the digital era, technology has enabled information to reach millions of people across the world. However, it has also opened the door to the dissemination of misinformation and disinformation at scale, with the potential to impact lives and important events globally.

While AI errors such as hallucinations are a common and ongoing challenge, the issue becomes even more sensitive when it comes to healthcare.

According to statistics published in a SAGE journal study, over 70% of people across the globe rely on the internet for health information. Another 2019 study by eligibility.com suggested that 89% of patients in the US search Google before visiting a doctor about a health issue.

Any misleading information provided by a chatbot could potentially have a harmful impact on a patient's health.

For bad actors, on the other hand, it offers an easy way to generate disinformation. An ordinary person may not be able to tell whether a piece of information was generated by a chatbot, or whether it is accurate, which leaves them exposed to being misled.

Recently, a deepfake video of Dr Naresh Trehan, CMD of the Gurugram-based Medanta Hospital, was flagged by the hospital. The video showed him endorsing a particular obesity-treatment medication.

The hospital filed a police complaint and said that the contents of the fabricated video could potentially create panic and confusion among viewers by promoting purported breakthroughs in obesity treatment.

Meanwhile, more and more healthcare systems today are looking to integrate Gen AI into their workflows, primarily for non-clinical tasks. These may still include tasks that are patient-facing or related to patients, such as booking appointments or conversing with patients about their symptoms.

Any wrong information sent out by a chatbot can expose health systems to potential lawsuits.

In February 2024, Air Canada was ordered to refund a passenger after the chatbot it used as a customer interface gave the passenger incorrect instructions.

A civil tribunal ruled that companies using AI chatbots to handle customer inquiries must honor the advice they give customers.

Way Forward

LLMs are powerful tools trained on vast datasets, and they keep improving as they are trained on more information.

The study says one strategy for accurately answering health-related questions could be developing generative AI applications that “ground” themselves in relevant sources of information.

For example, Perplexity AI is a chatbot that aims to take on Google. Unlike ChatGPT, however, its responses are attributed to the internet sources from which the information is pulled and summarized.
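As a rough illustration of what “grounding” means in practice, the sketch below retrieves passages from vetted sources and instructs the model to answer only from them, citing each one. The retriever and model calls (search_index(), generate()) are hypothetical placeholders and do not reflect Perplexity AI’s or any other product’s actual implementation.

```python
# Illustrative sketch of grounded (retrieval-backed) answering.
# search_index() and generate() are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Source:
    url: str
    snippet: str


def search_index(question: str, k: int = 3) -> list[Source]:
    """Placeholder retriever over a curated corpus of vetted health sources."""
    return [Source("https://example.org/sunscreen", "Sunscreen reduces skin cancer risk.")]


def generate(prompt: str) -> str:
    """Placeholder for a call to the underlying language model."""
    return "Sunscreen is protective, not harmful [1]."


def grounded_answer(question: str) -> str:
    """Build a prompt that restricts the model to retrieved, cited material."""
    sources = search_index(question)
    context = "\n".join(f"[{i + 1}] {s.snippet} ({s.url})" for i, s in enumerate(sources))
    prompt = (
        "Answer using ONLY the numbered sources below, citing them as [n]. "
        "If they are insufficient, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```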

Another way to deal with this is to improve the ability of generative AI platforms to clearly communicate when they are uncertain about a particular response. Generative AI can produce almost human-like language, and it does so with assertiveness.

If it can communicate its uncertainty, it may help the user to reconsider or verify the information provided.

For disinformation, however, more robust strategies need to be deployed, such as training the models to align with human values and preferences so that they block any attempt to generate harmful disinformation.

The study also suggests an alternative in the form of a specialized screening model that would vet prompts and block inappropriate or harmful requests, and likewise screen the Gen AI’s response before it is released to the user.
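A minimal sketch of that screening idea is shown below: a separate safety classifier vets the incoming prompt and then the model’s draft answer before anything reaches the user. Both classify() and generate() are hypothetical placeholders rather than any vendor’s API.

```python
# Illustrative sketch of a screening layer around a generative model.
# classify() and generate() are hypothetical placeholders, not a real API.

REFUSAL = "Sorry, I can't help with that request."


def classify(text: str) -> str:
    """Placeholder safety classifier returning 'allow' or 'block'."""
    return "allow"


def generate(prompt: str) -> str:
    """Placeholder call to the main generative model."""
    return "A draft answer that would still be screened before release."


def screened_chat(user_prompt: str) -> str:
    # Screen the incoming request before it ever reaches the main model.
    if classify(user_prompt) == "block":
        return REFUSAL
    draft = generate(user_prompt)
    # Screen the draft response again before releasing it to the user.
    if classify(draft) == "block":
        return REFUSAL
    return draft
```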

Efforts are already underway to include digital watermarks that identify AI-generated content, which would enable social media and content-sharing platforms to detect and remove harmful AI-generated material.

However, these strategies alone are not enough; constant vigilance is important. Several countries around the world are now waking up to the potential risks involved with AI and have launched regulatory frameworks.

In January this year, the WHO too rolled out guidance addressing the ethical and governance challenges associated with large multi-modal models (LMMs).

Covering applications across the healthcare sector, the guidance provides over 40 recommendations for governments, technology companies, and healthcare providers to ensure the responsible and beneficial deployment of LMMs.

