Written by : Jayati Dubey
January 22, 2024
The guidance recognises potential risks of LMMs, such as generating false or biased information that could influence health decisions.
The World Health Organization (WHO) has issued guidance addressing the ethical and governance challenges associated with Large Multi-Modal Models (LMMs), a rapidly advancing form of generative artificial intelligence (AI) technology.
With applications across the healthcare sector, the guidance provides over 40 recommendations for governments, technology companies, and healthcare providers to ensure the responsible and beneficial deployment of LMMs.
Large Multi-Modal Models have gained unprecedented popularity for their ability to accept varied data inputs, such as text, videos, and images, and generate equally diverse outputs.
Notable platforms including ChatGPT, Bard, and Bert surged into public consciousness in 2023. LMMs mimic human communication and can perform tasks for which they were not explicitly programmed.
The WHO guidance delineates various applications of LMMs in healthcare. These include aiding in diagnosis and clinical care by responding to patients' written queries and providing diagnostic support. LMMs also contribute to patient-guided use, assisting individuals in investigating symptoms and comprehending treatment options.
Moreover, they are crucial in handling clerical and administrative tasks, such as documenting and summarising patient visits within electronic health records.
LMMs also extend to medical and nursing education, offering trainees simulated patient encounters, and they support scientific research and drug development, including the identification of new compounds.
While LMMs present opportunities for transformative improvements in healthcare, the guidance also acknowledges associated risks, including the potential for generating false, inaccurate, biased, or incomplete information. Such output can mislead the health decisions that are based on it.
LMMs in healthcare also raise concerns about quality and bias: models trained on poor-quality or biased data may produce inaccurate or unfair outputs, making the integrity of training data pivotal to mitigating these challenges.
Beyond data quality, the limited accessibility and affordability of the best-performing LMMs may pose hurdles for health systems. Overcoming these barriers is crucial to ensure that the benefits of LMMs are widely shared and do not exacerbate existing healthcare disparities.
Additionally, cybersecurity risks must be addressed proactively to safeguard patient information and maintain trust in healthcare algorithms, recognising that LMMs, like other AI forms, are susceptible to potential breaches.
The WHO guidance emphasises the crucial role of governments in setting standards for developing and deploying LMMs. Key recommendations include:
1. Investment in Public Infrastructure: Governments should invest in or provide not-for-profit or public infrastructure, including computing power and public datasets, accessible to developers in various sectors. This infrastructure should adhere to ethical principles and values.
2. Legal Frameworks: Laws, policies, and regulations should ensure that LMMs used in healthcare meet ethical obligations and human rights standards, addressing aspects such as dignity, autonomy, and privacy.
3. Regulatory Oversight: Governments should assign regulatory agencies to assess and approve LMMs intended for use in healthcare. The regulatory process should include mandatory post-release auditing and impact assessments by independent third parties.
Developers play a crucial role in ensuring the responsible design and deployment of LMMs. Key recommendations for developers include:
1. Inclusive Design: LMMs should be designed in collaboration with potential users and other stakeholders, including medical providers, researchers, healthcare professionals, and patients. Inclusive, transparent design processes should encourage ethical discussion and input.
2. Task Accuracy and Reliability: LMMs should be designed to perform well-defined tasks with the necessary accuracy and reliability to improve healthcare capacity and advance patient interests. Developers should be able to predict and understand potential secondary outcomes.
WHO's guidance underscores the importance of global collaboration in effectively regulating the development and use of AI technologies, particularly LMMs.
The engagement of various stakeholders, including governments, technology companies, healthcare providers, patients, and civil society, is deemed essential at all development and deployment stages.
As the healthcare landscape continues to evolve with the integration of AI technologies, WHO's comprehensive guidance aims to ensure the ethical use of LMMs, contributing to improved health outcomes and addressing persistent health inequities globally.
The recommendations provided serve as a foundation for developing transparent, accountable, and ethically sound AI applications in healthcare.