As artificial intelligence deployments increase in pace and scope in healthcare organizations around the globe, the World Health Organization this week issued a plea for vigilance and deliberation when it comes to how AI and machine learning models are put to use.
WHY IT MATTERS
The WHO called for “caution to be exercised” in how AI is used in clinical and other healthcare settings – particularly fast-evolving large language model tools such as ChatGPT.
In order to “protect and promote human well-being, human safety and autonomy” – and to preserve public health – officials said it’s “imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people’s health and reduce inequity.”
WHO acknowledged that the recent “meteoric public diffusion and growing experimental use” of tools such as ChatGPT, Bard, Bert and others is “generating significant excitement around the potential to support people’s health needs.”
While experts at the U.N. body said they’re enthusiastic about the “appropriate use” of those leading-edge algorithms, they’re also concerned that “caution that would normally be exercised for any new technology is not being exercised consistently with LLMs.”
WHO officials worry that the “precipitous adoption of untested systems” could not only cause harm to patients through medical errors and inaccurate information, but also “erode trust in AI and thereby undermine (or delay) the potential long-term benefits” of its use.
Specifically, the statement cited concerns about adherence to the values of “transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.”
WHO wants those imperatives to be top-of-mind as AI is deployed, and called for “clear evidence of benefit” to be measured before the widespread and routine use of LLMs and other AI models in healthcare delivery.
THE LARGER TREND
In just a matter of months, ChatGPT and generative AI have made it clear that a new era is upon us for healthcare processes and decision-making. LLMs and other machine learning tools are already poised to impact patient engagement and communication, inform hospital ADT (admission, discharge and transfer) decisions, make waves across the healthcare workforce and fundamentally transform the way care is delivered – with plenty of unknowns and no small amount of risk.
It’s clear that there is a need for oversight of AI in healthcare and, more generally, for a thoughtful approach to how – and why – those tools are put to use.
At HIMSS23 this past month, leaders from the World Health Organization and health ministries from around the world spoke about the need to pursue digital health strategies that have patient access, safety and health equity as their north star.
ON THE RECORD
“WHO reiterates the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing, and deploying AI for health,” World Health Organization officials said in this week’s statement.
“The six core principles identified by WHO are: (1) protect autonomy; (2) promote human well-being, human safety, and the public interest; (3) ensure transparency, explainability, and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; (6) promote AI that is responsive and sustainable.”