President Biden ordered the nation’s leading health agencies on Monday to develop a plan for regulating artificial intelligence tools already widely in use within hospitals, insurance companies, and other health-related businesses.
The order directs the Department of Health and Human Services to establish a safety program to receive reports of AI-related harms and unsafe practices — and take actions to remedy them. The directive was part of a broader order issued by the White House to create standards for the use of AI across the government.
Within health care, as in other sectors, Biden's order seeks to strike a balance between controlling the risks of AI and encouraging innovation that may benefit consumers. While it calls for efforts to fight discrimination and harmful practices, it also directs agencies to provide grants and other funding for research to support the use of AI in drug discovery and other domains.
The U.S. is behind European countries in developing standards for the use of AI within health care, and its regulators are also being outpaced by U.S. businesses already using the technology to make consequential decisions about the care of patients.
STAT reported earlier this year, for example, that the nation's largest health insurers are using an algorithm to issue payment denials to Medicare beneficiaries struggling to recover from serious illnesses and injuries. AI and other predictive tools are also being used to flag deadly health conditions, such as cancer and sepsis, the body's life-threatening response to infection, with varying levels of accuracy.
As the threats increase, the Biden administration does appear to be acting with more urgency, especially with regard to the generative AI tools beginning to make their way into health care settings. The order requires that any company developing a generative, or "foundation," model that poses risks to the health and safety of the public notify the government when it is training the model and share the results of safety tests.
Biden is also giving HHS 180 days to create a strategy for determining whether AI tools are of sufficient quality to be used in health care. The plan would include standards for performance evaluation and maintenance of AI models before and after they are approved for commercial use, Politico reported on Friday.
The FDA has already developed a framework for regulating AI tools in health care and monitoring their use. But advances in the technology, combined with surging private-sector investment, are making it difficult for the agency to keep up with the emerging uses and risks.
Earlier this month, the FDA published a new accounting of AI tools approved for use in health care, adding 171 products to a public database. The data indicated that the number of authorized devices in 2023 was expected to increase by 30% over the prior year.