Michelle L. O'Donoghue, MD, MPH; Ami B. Bhatt, MD

This transcript has been edited for clarity. 

Michelle L. O'Donoghue, MD, MPH: Hi. This is Dr Michelle O'Donoghue, reporting for Medscape. Joining me today to talk about artificial intelligence (AI) and possible implications in healthcare is Dr Ami Bhatt, who is the chief innovation officer at the American College of Cardiology. Thanks for joining me, Ami. 

Ami B. Bhatt, MD: Thanks so much for having me.

O'Donoghue: Your background is interesting because you trained in adult congenital heart disease and practiced in that space for many years. That was before assuming this role as chief innovation officer. What drew you to that particular position?

From Telemedicine to Generative AI

Bhatt: It's always nice to be able to draw a line once you're looking back. You never know what that line's going to look like looking forward. Adult congenital heart disease is a great subspecialty.

We had young patients who, at the time, were using FaceTime to talk to their family and friends and oftentimes didn't live in a big city near a tertiary institution. They actually started asking me, "Hey doc, can't we just FaceTime? If we're just going over my results, why am I coming in?" All valid questions. 

When the Mass General started to use telemedicine for stroke care, trying to help institutions that weren't as familiar with it get patients care faster, I asked if I could have a license to try it out, and they were kind enough to let me.

I started to experiment with it 15 years ago. In 2013, I started — every Wednesday — a telemedicine clinic for my patients with congenital heart disease who lived far away. It was my patients who drove me to it.

Then you realize, I don't have hands on this patient. Very quickly, as a young cardiologist, you think, How do I get an EKG out there and get it sent to me? How do I get images? What does a stethoscope do for me? In adult congenital, the sounds of the heart are so important. 

Then I started foraying into digital health, in very early versions, before it became a thing, because I realized that I needed more than just their face and their words from their home; I sometimes needed data.

O'Donoghue: What got me interested in this space was also just a simple conversation I was having with the fellows and the junior faculty, about how they're using AI in their lives today, including some of these tools like ChatGPT. I have to say that I was just completely ignorant about most of what is currently available out there. 

Maybe just start with the basics because I think when many people hear the term "AI," they don't really know what it means. It's really a very broad term. What do we mean in terms of healthcare implications? 

Bhatt: AI has, in fact, been around in medicine for a long time. Radiology is probably the most common place, and that's a form of AI called machine learning. The classic example is to show the computer a ton of x-rays: "This is a pneumonia. This is also a pneumonia. This is a pneumonia." Eventually, the computer can say, "I can identify a pneumonia." Then you can do that with whatever you'd like.
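
What Dr Bhatt describes is ordinary supervised learning: labeled examples in, a labeling function out. A minimal sketch, assuming Python with scikit-learn and using random stand-in arrays rather than real x-rays (a production model would be a deep convolutional network trained on expert-labeled studies), might look like this:

```python
# A minimal sketch of the labeled-example training idea: show the computer many
# images with a radiologist's label until it can label new ones itself. The
# "x-rays" below are random stand-in arrays, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 500 hypothetical 64x64-pixel images, each with a binary label
# (1 = pneumonia, 0 = normal) that a radiologist would have assigned.
images = rng.normal(size=(500, 64 * 64))
labels = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)       # "This is a pneumonia. This is also a pneumonia."
print(model.predict(X_test[:5]))  # "I can identify a pneumonia."
```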

The more modern form that you just mentioned, generative AI, wasn't easily accessible to us until recently. It's really taken off because of its user-friendly nature, if you will. I'll give an example.

I went out to Worcester, Massachusetts, where a nonprofit, Girls Inc., teaches STEM. I was going there to teach fifth graders about AI. As we started working together, I said, "Let's do a demo." I was typing and one of the girls came up to me — one who hadn't been paying attention, by the way — and she said, "Would you mind, doctor, if I did this?"

She sat down, and the way she could generate questions from ChatGPT, the way her brain worked, the way the kids responded to her, they were really facile.

I love that you mentioned what you're learning from the fellows, residents, and med students because that generation is learning things far faster. I will say the fifth graders — I have a sixth grader at home — they're native to this, right? This is intuitive to them. This is how they think. 

I think the future generations are going to have an even easier time using AI, for a variety of these things, than we do. When I think about it, I think about three areas for healthcare. The first is administrative uses, and that's where generative AI, or large language models, is really helpful.

Admin Help

Bhatt: Let's take an example in the office. You and I have both been practicing a long time. As computers came in, and then EHRs, we started doing more typing and turning away from our patients. I think it's really important to be looking at our patients. Voice to text: I talk, my note gets written, and then I can edit it. That's a big way we're seeing generative AI used in the clinic.

Writing insurance notes is another. You know what needs to go in there. Teach the computer to find it in the records, have it create a draft, and, again, edit it.

Billing is a third. That administrative side is maybe where we're seeing the most uptake. It's also the safest place, because it's a draft, you can edit it, and it's not delivering healthcare. Maybe that's the first category, and I'll stop there.
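
A hedged sketch of that draft-then-edit workflow, assuming an OpenAI-style chat API; the client setup, model name, and chart facts below are illustrative stand-ins, not a specific product:

```python
# A sketch of the draft-then-edit workflow: gather relevant facts from the
# record, have a large language model draft the letter, and leave the final
# edit to the clinician. The chart facts and model name are invented.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical excerpts a real system would pull from the EHR.
chart_facts = [
    "Dx: severe aortic stenosis (mean gradient 45 mmHg)",
    "NYHA class III symptoms despite optimal medical therapy",
    "Heart team recommends transcatheter aortic valve replacement",
]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-completion model would do
    messages=[
        {
            "role": "system",
            "content": "Draft a prior-authorization letter using only the "
                       "chart facts provided. Flag anything that is missing.",
        },
        {"role": "user", "content": "\n".join(chart_facts)},
    ],
)

# The output is a draft for the clinician to review and edit, never final copy.
print(response.choices[0].message.content)
```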

O'Donoghue: As you say, there is certainly a generational aspect. There are many people who just fear the concept of AI. It has many negative connotations, but I think as you're nicely highlighting, there are many ways in which it can improve our quality of life as practitioners right now.

Some of the fear is perhaps related to changes that occurred along the way that are not necessarily AI related — things like EHRs. Some of those made our lives more challenging rather than more streamlined, as was initially hoped.

Given how much administrative work our jobs now require, this is what I think is the most useful current application of AI in healthcare today.

Bhatt: I agree completely. I think there are areas where people are concerned. For example, the most common question is, Is the AI going to make a decision about me, my health, or my patient? We think about that often.

One of the things we're doing at the American College of Cardiology is, instead of saying, "We can give you decision support," it's "Can we navigate you to the right information at the right time?" That's the next natural step in using AI. 

There is so much information available for any disease that you're treating that, realistically, the time it takes to find, source, parse through, and appropriately apply that information doesn't fit into the 20 minutes of a clinic visit. Can we use the computing power of AI analytics to say, "Can you find me everything that's relevant and summarize it for me?" Then I'll use my clinical acumen on top of that.
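
A minimal sketch of that "find everything relevant" idea, assuming a simple relevance ranking over guideline text; the snippets and query below are invented, and a real system would index actual guideline documents and could add an LLM summarization step on top:

```python
# Rank guideline snippets by relevance to a clinical query, then surface the
# best matches for the clinician (or a summarizer) to work from.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

guideline_snippets = [
    "Anticoagulation is recommended in atrial fibrillation with elevated stroke risk.",
    "Beta-blockers improve outcomes in heart failure with reduced ejection fraction.",
    "Statin therapy is indicated for secondary prevention after myocardial infarction.",
]

query = "stroke prevention in atrial fibrillation"

# Vectorize the snippets and the query together, then score by cosine similarity.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(guideline_snippets + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Most relevant snippet first; the clinician's acumen is applied on top.
for score, snippet in sorted(zip(scores, guideline_snippets), reverse=True):
    print(f"{score:.2f}  {snippet}")
```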

Really, it's navigation of information rather than decision support. If we think about it that way, it's less scary and actually more helpful, just as you were saying: it eases the burden on clinicians and allows them to spend more time really talking with their patients rather than, again, turning away and facing a computer.

The other area that people are worried about is imaging: If the AI is going to read the scan for me, what's my job? I think it's important to recognize that edge cases, nuance, clinical relevance, and all of the context around an image or an imaging test can't be taken in by AI yet. There is always going to be a need for humans.

That then leads people to think, Am I going to have to do more work because it goes faster? You talked about wellness, and there is burnout. For that, one of our favorite examples is a project that we're doing looking at ultrasound in the hospital.

Right now, if you or I read an echocardiogram or a CT scan, we read them in the order they come from the hospital. The AI could find the next most dangerous one and pop it to the top of your list, so that you're actually helping the next sickest patient in the hospital as you read the next study.
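
At its core, that reprioritization is a sorted worklist. A minimal sketch, with invented studies and urgency scores standing in for a real imaging model's output:

```python
# Instead of reading studies in arrival order, sort by a model-estimated
# urgency score so the sickest patient's study is read next. The studies and
# scores are invented; a real system would take the score from an imaging model.
from dataclasses import dataclass

@dataclass
class Study:
    patient_id: str
    arrival_order: int
    ai_urgency: float  # 0.0 (likely normal) to 1.0 (likely critical)

worklist = [
    Study("pt-001", 1, 0.12),
    Study("pt-002", 2, 0.91),  # e.g., a suspected tamponade flagged by the model
    Study("pt-003", 3, 0.47),
]

# Highest urgency first; ties fall back to arrival order.
for study in sorted(worklist, key=lambda s: (-s.ai_urgency, s.arrival_order)):
    print(study.patient_id, f"urgency={study.ai_urgency:.2f}")
```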

I think those kinds of things make us feel really good about our jobs. I'm helping a patient sooner. I can get on the phone and call them because the AI is helping me with my reads and I have the time to do it, so I'm using the time wisely.

Then you have to think about the fiscal side, right? Somebody's got to buy the AI. In the same vein, if there are people with normal ultrasounds waiting to go home to their families so someone else can come to the hospital in their place, we can also identify those patients and help the system be more efficient.

We have to really change the way we're thinking about using AI and make sure we're keeping the patient and the clinician at the center. Then you see many really great use cases. 

Collaborative Intelligence

O'Donoghue: As you're saying, we need to think more about AI as an assistant rather than necessarily a threat to our careers. There are so many opportunities, as you say. If it could efficiently synthesize a patient's medical history for us, I think that would be a tremendous help to most practitioners. When we have patients who come in with a chart that's as big as a dictionary, it is helpful if there is some tool that could pull out the salient information. 

I suppose the question is, how much will it evolve to a place where it could functionally replace the need for an actual clinician to interpret the data? Right now, I know that when I'm looking at electrocardiograms, the read that pops up at the top, if you want to call it AI, is very often wrong. I get that. 

With time and with these machine learning possibilities, one could imagine that, especially in areas that are underserved, in countries that don't have as much access to healthcare, there may be a reasonable opportunity to improve access to care through some of these automated types of approaches.

What are your thoughts on that? 

Bhatt: If you think about cardiovascular disease burden globally, it's only going up. There will be more patients with more disease. If we just look at our workforce, it's going down. We have to rethink what "workforce" means. 

It includes people from all different walks of medical life who consider themselves caregivers: community health workers, sometimes family members, and definitely pharmacists and other team members. If we can upskill those people using AI, giving them information or guidelines, anything that helps them better understand how to care for somebody, when to refer them for more care, and when they might be safe, I think that's a great use of AI.

That is going to help communities that are underserved get better care. I still don't think we're at automation, but I do think that we can upskill the people who are available and very interested in giving care, but maybe haven't had the same training, by offering them AI.

The other place is in testing. As we use AI to help diagnose based on tests, we can also upskill the people performing those tests, because it's challenging. How many echocardiograms do you do before you become an expert? If the AI tells you, "Hold that position; you're getting a great image," that's going to help someone diagnose something, because we see it with the AI.

I think there are so many different ways that AI, again, can be an assist in underserved areas, letting us use our workforce to the best of our ability and enabling people to get the best care that they can.

O'Donoghue: I don't object to the idea that, as an additional check, AI could offer alternative diagnoses or flag something I might have missed. Many people would value having that sort of safety net in place. The concerns, perhaps, surround the idea that some of our roles will be phased out.

What do you say to those people who are fearful that that's the direction in which we are headed as physicians or as other types of healthcare providers?

Bhatt: Even as we work hard thinking about AI governance and regulation, AI keeps changing; the AI of 6 months from now will not be like today's, and the AI of today is nothing like what was there a year ago. I think it's going to be a long time before we can trust what we refer to as AI to take somebody's job. I think that's where the job security lies.

I used to tell people, "You'll always be necessary," but you have to be more specific than that. AI will keep changing, and therefore we will keep needing to test it and use it; for the foreseeable future, we will always need experts to test it, use it, and ensure people's safety.

People trust humans. People who know about AI and science may trust AI more because they know that a human makes more errors when tired. AI is not going to tire, but AI has its own problems.

At the end of the day, I think the human-AI collaboration — we call it collaborative intelligence at the ACC — is the only way to safely move forward, because the field will continue to change. There's nothing static about AI. Moving forward, change is the new normal, and that requires collaborative intelligence for us to continue to provide the best care for patients.

If you ask me what I'm really scared of, it's that we have only seen the tip of the iceberg — and maybe not even that yet in healthcare — of cybersecurity risk. That's probably what worries me the most. It's one thing to do pilots; as we start to scale, how are we measuring? How are we keeping our data safe? Do we know, every time we change a certain thing, what increased risk we've exposed ourselves to? I think the rest of the world — the banking world, for sure, and the business world — is aware of cybersecurity risk.

Healthcare is aware, and we talk about it, but the clinicians who might be using the AI have had no training in it and little understanding of it. I don't think it works that way. I think we're going to have to understand more about risk than we did before.

If you really ask me what I'm scared of, it's not that people will lose their jobs, but that we need systems and infrastructure to measure whether AI is doing what it was supposed to do, to measure whether outcomes are better with AI, and to ensure that we're not putting people at risk. I don't just mean data privacy; I mean people messing with the algorithms or the algorithms changing. There's a large amount of learning to be done.

I don't think anybody's going to let AI run without the clinicians in the middle just yet. 

O'Donoghue: Thanks, Ami. It's a very exciting space, so thanks for walking us through that. 

Ultimately, as you say, right now, no healthcare provider out there can just turn a blind eye. I think we have to be willing to understand how we can incorporate AI and machine learning into our practices, and I think we've highlighted the many ways that could be beneficial in streamlining our time.

Of course, concerns will continue to exist, and we'll have to see how things play out over the next several years. Thanks for taking the time to chat with me. 

Bhatt: Thanks so much for having me. 

O'Donoghue: Signing off for Medscape, this is Dr Michelle O'Donoghue.

Michelle O'Donoghue is a cardiologist at Brigham and Women's Hospital and senior investigator with the TIMI Study Group. A strong believer in evidence-based medicine, she relishes discussions about the published literature. A native Canadian, Michelle loves spending time outdoors with her family but admits with shame that she's never strapped on hockey skates.
