Healthcare Must Set Guardrails Around AI For Transparency And Safety

Four in 10 patients perceive implicit bias in their physicians, according to a MITRE-Harris survey on the patient experience. Beyond patients’ heightened sensitivity to provider bias, the use of AI tools and machine learning models has also been shown to skew toward racial bias.

On a related note, a recent study found 60% of Americans would be uncomfortable with providers relying on AI for their healthcare. But between provider shortages, shrinking reimbursements and increasing patient demands, providers may in time have no option but to turn to AI tools.

Healthcare IT News sat down with Jean-Claude Saghbini, an AI expert and chief technology officer at Lumeris, a value-based care technology and services company, to discuss these concerns surrounding AI in healthcare – and what provider organization health IT leaders and clinicians can do about them.

Q. How can healthcare provider organization CIOs and other health IT leaders fight implicit bias in artificial intelligence as the popularity of AI systems explodes?

A. When we talk about AI, we often use words like “training” and “machine learning.” This is because AI models are primarily trained on human-generated data, and as such they learn our human biases. These biases are a significant challenge in AI, and they are especially concerning in healthcare, where a patient’s health is at stake and where, left unaddressed, they will continue to propagate healthcare inequity.

To fight this, health IT leaders need to develop a better understanding of the AI models that are embedded in the solutions they are adopting. Perhaps even more important, before they implement any new AI technologies, leaders must be sure the vendors delivering these solutions have an appreciation for the harm that AI bias can bring and have developed their models and tools accordingly to avoid it.

This can range from ensuring the upstream training data is unbiased and diverse to applying transformation methods to model outputs to compensate for biases that cannot be scrubbed from the training data.
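To make the output-side option concrete, here is a minimal sketch of one such transformation – per-group score thresholds that equalize selection rates when raw risk scores skew by group. This is an illustration only, not a description of any vendor’s actual models; the groups, score distributions and 20% target rate are all invented.

```python
import numpy as np

def group_thresholds(scores, groups, target_rate=0.2):
    """For each group, find the score cutoff that flags the same
    fraction of patients, equalizing selection rates across groups
    (one simple output-side correction for skewed model scores)."""
    return {g: float(np.quantile(scores[groups == g], 1 - target_rate))
            for g in np.unique(groups)}

# Hypothetical risk scores whose raw distribution differs by group.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.4, 0.1, 500),   # group "A"
                         rng.normal(0.6, 0.1, 500)])  # group "B"
groups = np.array(["A"] * 500 + ["B"] * 500)

cuts = group_thresholds(scores, groups)
flagged = scores >= np.array([cuts[g] for g in groups])
for g in ("A", "B"):
    print(g, round(float(flagged[groups == g].mean()), 2))  # ~0.2 for both
```

A single global cutoff on these scores would flag group “B” far more often than group “A”; the per-group quantile removes that artifact of the score distributions.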

At Lumeris, for example, we are taking a multipronged approach to fighting bias in AI. First, we are actively studying and adapting to health disparities represented in underlying data as part of our commitment to fairness and equity in healthcare. This approach involves analyzing healthcare training data for demographic patterns and adjusting our models to ensure they don’t unfairly impact any specific population groups.

Second, we are training our models on more diverse data sets to ensure they are representative of the populations they serve. This includes using more inclusive data sets that represent a broader range of patient demographics, health conditions and care settings.
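One concrete way to act on both of those steps is a representation audit that compares the training cohort against the population the model will serve. The sketch below is illustrative only – the field name and population shares are invented, and a real audit would cover many more dimensions than one attribute.

```python
from collections import Counter

def representation_gaps(cohort, population_shares, key="ethnicity"):
    """Compare each group's share of the training cohort against its
    share of the served population; a large negative gap flags
    underrepresentation before the model is ever trained."""
    counts = Counter(row[key] for row in cohort)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - share
            for group, share in population_shares.items()}

# Hypothetical cohort vs. made-up population shares.
cohort = [{"ethnicity": "A"}] * 700 + [{"ethnicity": "B"}] * 300
print(representation_gaps(cohort, {"A": 0.55, "B": 0.45}))
# {'A': 0.15, 'B': -0.15} -> group B is underrepresented in training data
```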

Finally, we are embedding nontraditional healthcare features in our models, such as social determinants of health data, thereby ensuring predictive models and risk scores account for patients’ unique socioeconomic conditions. For example, two patients with very similar clinical presentations may be directed toward different interventions for optimal outcomes when we incorporate SDOH data in the AI models.
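A toy version of that example might look like the following – a clinical risk estimate blended with a social-needs score, so that identical clinical presentations can yield different outreach decisions. The weights and scores are invented for illustration and are not drawn from any production model.

```python
def blended_risk(clinical_risk, sdoh_risk, w_clinical=0.7, w_sdoh=0.3):
    """Blend a clinical risk estimate with a social-needs score so two
    clinically similar patients can be routed to different interventions.
    The weights here are illustrative, not calibrated."""
    return w_clinical * clinical_risk + w_sdoh * sdoh_risk

# Two hypothetical patients with identical clinical presentations.
stable = blended_risk(clinical_risk=0.62, sdoh_risk=0.10)   # stable housing, transport
at_risk = blended_risk(clinical_risk=0.62, sdoh_risk=0.80)  # housing + food insecurity
print(round(stable, 2), round(at_risk, 2))  # 0.46 vs. 0.67 -> different outreach
```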

We also take a transparent approach to the development and deployment of our AI models, incorporate feedback from users, and apply human oversight to ensure our AI recommendations are consistent with clinical best practices.

Fighting implicit bias in AI requires a comprehensive approach that considers the entire AI development lifecycle and can’t be an afterthought. This is key to truly promoting fairness and equity in healthcare AI.

Q. How do health systems strike a balance between patients not wanting their physicians to rely on AI and overburdened physicians looking to automation for help?

A. First, let’s examine two facts. Fact No. 1 is that in the time between waking up in the morning and seeing each other during an in-office visit, chances are both patient and physician already have used AI multiple times – asking Alexa about the weather, relying on a Nest device for temperature control, using Google Maps for optimal directions, and so on. AI already contributes to many facets of our lives and has become unavoidable.

Fact No. 2 is that we are heading toward a shortage of 10 million clinicians worldwide by 2030, according to the World Health Organization. The use of AI to scale clinicians’ capabilities and reduce the disastrous impact of this shortage is no longer optional.

I absolutely understand that patients are concerned, and rightfully so. But I encourage us to distinguish between the use of AI in patient care and patients “being treated” by AI tools – the latter, I believe, is what most people are actually worried about.

This scenario has been hyped up a lot lately, but the fact of the matter is that AI engines aren’t replacing doctors anytime soon, and with newer technologies such as generative AI, we have an exciting opportunity to provide the much-needed scale for the benefit of both patient and physician. Human expertise and experience remain critical components of healthcare.

Striking a balance between patients not wanting to be treated by AI and overburdened physicians looking to AI systems for help is a delicate issue. Patients may be concerned their care is being delegated to a machine, while physicians may feel overwhelmed by the volume of data they need to review to make informed decisions.

The key is education. Many headlines in the news and online are created to catastrophize and get clicks. By avoiding these misleading articles and focusing on real experiences and use cases of AI in healthcare, patients can see how AI can complement a physician’s knowledge, accelerate access to information, and detect patterns that are hidden in data and that may be easily missed even by the best of physicians.

Further, by focusing on facts rather than headlines, we can explain that AI is just a tool, and that when integrated properly into workflows it can amplify a doctor’s ability to deliver optimal care while keeping the physician in the driver’s seat in terms of interactions with, and responsibility toward, the patient. AI is, and can continue to be, a valuable tool in healthcare, providing physicians with insights and recommendations to improve patient outcomes and reduce costs.

I personally believe the best way to strike a balance between patient and physician AI needs is to ensure that AI is used as a complementary tool to support clinical decision-making rather than a replacement for human expertise.

Lumeris technology, for example, which is powered by AI alongside other technologies, is designed to provide physicians with meaningful insights and actionable recommendations they can use to guide their care decisions while empowering them to make the final call.

Additionally, we believe it is essential to involve patients in the conversation around the development and deployment of AI systems, ensuring their concerns and preferences are taken into account. Patients may be more willing to accept the use of AI if they understand the benefits it can bring to their care.

Ultimately, it’s important to remember that AI is not a silver bullet for healthcare, but rather a tool that can help physicians make better decisions and exponentially scale and transform healthcare processes, especially with some of the newer foundation models such as GPT.

By ensuring AI is used appropriately and transparently, and involving patients in the process, healthcare organizations can strike a balance between patient preferences and the needs of overburdened physicians.

Q. What should provider executives and clinicians be wary of as more and more AI technologies proliferate?

A. The use of AI in health IT is indeed getting a lot of attention and is a top investment category, according to the latest AI Index Report published by Stanford, but we have a dilemma as healthcare leaders.

The excitement about the possibilities is urging us to move fast, yet the newness and sometimes black-box nature of the technology is raising some alarms and urging us to slow down and play it safe. Success is dependent on our ability to strike a balance between accelerating the use and adoption of new AI-based capabilities while ensuring implementation is done with the utmost safety and security.

AI relies on high-quality data to provide accurate insights and recommendations. Provider organizations must ensure the data used to train AI models is complete, accurate and representative of the patient populations they serve.

They should also be vigilant in monitoring the ongoing quality and integrity of their data to ensure AI is providing the most accurate and up-to-date information. This also applies to the use of pretrained large language models, where the goal of quality and integrity remains, even if the approach to validation is novel.
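One lightweight way to operationalize that ongoing monitoring is a drift statistic such as the population stability index (PSI), which compares a feature’s distribution at training time against what the model sees in production. The sketch below is a generic illustration, not any particular organization’s pipeline; the feature, cohorts and the ~0.2 review threshold (a common rule of thumb) are assumptions.

```python
import numpy as np

def psi(reference, recent, bins=10):
    """Population Stability Index between a feature's training-time
    distribution and its recent production distribution. Values above
    ~0.2 are a common rule-of-thumb trigger for human review."""
    # Internal cut points from the reference distribution's quantiles.
    cuts = np.quantile(reference, np.linspace(0, 1, bins + 1))[1:-1]
    ref = np.bincount(np.searchsorted(cuts, reference), minlength=bins) / len(reference)
    cur = np.bincount(np.searchsorted(cuts, recent), minlength=bins) / len(recent)
    ref, cur = np.clip(ref, 1e-6, None), np.clip(cur, 1e-6, None)
    return float(np.sum((cur - ref) * np.log(cur / ref)))

rng = np.random.default_rng(1)
training_ages = rng.normal(62, 12, 5000)  # cohort the model was trained on
recent_ages = rng.normal(55, 12, 5000)    # a younger intake population
print(round(psi(training_ages, recent_ages), 3))  # well above 0.2 -> review
```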

As I mentioned, bias in AI can have significant consequences in healthcare, including perpetuating health disparities and reducing the efficacy of clinical decision-making. Provider organizations should be wary of AI models that do not adequately compensate for biases.

As AI becomes more pervasive in healthcare, it’s critical that provider organizations remain transparent about how they are using AI. Additionally, they should ensure there is human oversight and accountability for the use of AI in patient care to prevent mistakes or errors from going unnoticed.

AI raises a host of ethical considerations in healthcare, including questions around privacy, data ownership and informed consent. Provider organizations should be mindful of these ethical considerations and ensure their use of AI, both directly as well as indirectly via vendors, aligns with their ethical principles and values.

AI is here to stay and evolve, in healthcare and beyond, especially with the new and exciting advances in generative AI and large language models. It is virtually impossible to stop this evolution – nor would it be wise to, since after a couple of decades of rapid technology adoption in healthcare, we have yet to deliver solutions that reduce clinician burden while delivering better care.

On the contrary, most technologies have added new tasks and additional work for providers. With AI, and more specifically with the advent of generative AI, we see great opportunities to finally make meaningful advances toward this elusive objective.

Yet, for the reasons I’ve listed, we must set guardrails for transparency, bias and safety. Interestingly enough, if well thought out, it is these guardrails that will ensure an accelerated path to adoption by steering us away from failures that would otherwise provoke overreactions against AI adoption and usage.

 
