Provider Spotlight: Advocating for Responsible AI Through Human-Centered Design at Northwell Health
Dr. Susan Beane, MD, Executive Medical Director, Healthfirst
Emily Kagan Trenchard, Senior Vice President and Chief of Consumer Digital Services, Northwell Health
Introduction by Dr. Susan Beane
Artificial intelligence (AI) used to seem futuristic, but it’s used today by healthcare organizations across the country.
Here at Healthfirst, we use AI technology to generate insights into our members’ health behaviors, predict their needs, and prioritize outreach.
Our provider partners are also researching use cases for AI in healthcare and using the technology in new and exciting ways to provide better care to those who need it most. In the coming weeks, the Healthfirst Advance Perspectives blog will highlight how our provider partners are working to incorporate the latest technologies into their systems.
Emily Kagan Trenchard, senior vice president and chief of consumer digital services at Northwell Health, is our first guest post author. Here, she gives an overview of where healthcare — and the patients it serves — is most likely to appreciate AI, where there may be cause for concern, and how to ensure that AI tools are designed, trained, and used for the greater good.
AI comes with benefits — and cautions
Before talking about where artificial intelligence should or shouldn’t be used in healthcare, it’s important to understand AI’s origins. The concept isn’t new; it’s an extension of data science, a field that traces its roots to the 1960s, and it’s been a focal point of science fiction for even longer.
There are two reasons AI may seem new. First, the pandemic brought an uptick in the use of AI-enabled technologies, as many of us spent far more time in digital environments, personally and professionally, than we otherwise would have. Second, recent advances in large language models (LLMs) make it possible for computers to learn from large data sets and process natural language. With capabilities like ChatGPT now being combined with other generative AI for images and sounds, unprecedented capabilities not only exist but are democratized through easy-to-use tools.

We reap the benefits of AI in many ways in our everyday lives, from new song recommendations on our music streaming services to spam filters that keep our inboxes manageable. But we must also be careful. It’s one thing to let a machine decide the best course for vacuuming the living room without giving it a second thought. It’s another thing entirely to let a machine decide where we should buy a house, send our children to school, or receive medical care.
Unchecked, AI may widen the equity gap
In healthcare, AI has enabled significant advances in areas like medical coding, clinical documentation, and patient engagement efforts. The latter has been a significant area of focus for Northwell Health: When we put tools in place that help patients find resources, schedule appointments, or manage certain aspects of their health, we can empower them to take more control of their own well-being while alleviating our staff of the burden of phone or email outreach.
At the same time, we must make sure our AI tools aren’t widening the healthcare equity gap. As one commentary put it, algorithms “are often built on biased rules and homogenous data sets that do not reflect the patient population at large.” A second discussion noted that algorithmic bias can come up at any stage in the process of creating an algorithm — study design, data collection, data entry, development of the predictive model, and implementation.
Research is rife with examples of the impact biased algorithms can have on patient care. One highly publicized study found that an algorithm was less likely to refer Black patients for high-risk care management. My colleagues at New York City Health and Hospitals have highlighted how biased algorithms lead to fewer referrals for specialty kidney care, as well as higher rates of referral for high-risk Cesarean births, for Black patients.
The benefits of human-centered design
The fix isn’t easy. It requires rethinking the algorithms themselves and how they’re used in a clinical setting, especially as some have been in place for decades.
In my view, human-centered design is an important part of the solution. In human-centered design, those who ultimately use and benefit from a product or service are involved from the beginning of the design process. As noted above, bias can be introduced at many stages when creating an algorithm — but with the right guardrails in place, it can also be removed, resulting in a product that’s more equitable, accurate, and beneficial for the population it serves.
In the healthcare setting, human-centered design needs a big table, so to speak. Physicians, other clinical and administrative staff, community-based health workers, and even patients all need to participate in the conversation. With that said, AI can play an important role in helping us check our more problematic human behaviors, such as unconscious biases that lead to errors in judgment and inequitable treatment. A Pew Research Center study recently found that 30% to 40% of Americans felt AI in healthcare would reduce medical errors and lead to a more equal distribution of quality care across people of all races and ethnicities. But that same study also found that nearly 60% of Americans feared that the introduction of AI tools would further disrupt the provider-patient relationship.
It will admittedly take time and effort to educate these stakeholders about how AI is developed, how and why it can be flawed, and how it can empower better care delivery. Such a methodical process may also conflict with the fast pace of software development. However, it’s critical for the future of equitable care that we take the right steps to ensure we use AI responsibly — and that cannot happen if those most impacted by AI in healthcare have the least say in its development.