GENEVA: The World Health Organization (WHO) has urged caution in the use of artificial intelligence (AI)-generated large language model (LLM) tools for health-related responses, in order to safeguard human well-being, safety, autonomy, and public health.
LLMs such as ChatGPT, Bard, and BERT imitate human communication and are among the most rapidly expanding technology platforms. Their increasing use in healthcare settings has generated enthusiasm about their potential to support health needs.
However, it is imperative that the risks be examined carefully when LLMs are used to improve access to health information, support decision-making, or enhance diagnostic capacity in under-resourced settings.
While the WHO embraces the appropriate use of technologies, including LLMs, to support healthcare professionals, patients, researchers, and scientists, it is concerned that the caution normally exercised with any new technology is not being applied consistently to LLMs. The rapid adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and ultimately delay the realisation of the long-term benefits and applications of these technologies worldwide.
The risks associated with these tools should therefore be weighed carefully: the data used to train them may be biased, generating misleading or inaccurate information, and the models themselves may produce incorrect or erroneous health-related responses.
Furthermore, LLMs may be trained on data collected without prior consent, and they may fail to protect sensitive information, including health data, that users provide. There is also concern that LLMs can be misused to generate highly convincing disinformation, making it difficult for the public to distinguish reliable health content from false information.
Accordingly, the WHO emphasises the need for consistent caution and adherence to key values such as transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.
The WHO proposes that these concerns be addressed, and clear evidence of benefit demonstrated, before LLMs are widely integrated into routine healthcare and medicine, whether they are used by individuals, care providers, or health system administrators and policy-makers.
Moreover, the WHO underscores the importance of applying ethical principles and appropriate governance, as set out in its guidance on the ethics and governance of AI for health. These principles are: protecting autonomy; promoting human well-being, safety, and the public interest; ensuring transparency, explainability, and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting AI that is responsive and sustainable.