ATS Breathe Easy - AI in Clinical Practice: The Future is Now
non: [00:00:00] You are listening to the ATS Breathe Easy podcast, brought to you by the American Thoracic Society.
Eddie: Hello and welcome. You are listening to the ATS Breathe Easy Podcast with me, your host, Dr. Eddie Chen. Each Tuesday of every month, the ATS will welcome guests who will share the latest news in pulmonary, critical care, and sleep medicine. Whether you're a patient, a patient advocate, or a healthcare professional, the ATS Breathe Easy podcast is for you.
Joining me today is Dr. Matthew Churpek, who will continue our prior discussion about AI, but this time we'll talk about its role in clinical practice. Dr. Churpek is faculty in the Division of Allergy, Pulmonary and Critical Care Medicine at the University of Wisconsin. He's quadruple board certified in internal medicine, pulmonary medicine, critical care medicine, and clinical informatics.
In terms of research, Dr. [00:01:00] Churpek's data science lab utilizes electronic health record data and machine learning techniques such as natural language processing and deep learning to identify patients at risk for clinical deterioration, sepsis, AKI, and other syndromes of critical illness. He's been recognized for this research by the ATS and the ASCI.
Matt, thank you so much for joining us. Do you have anything you wanna say before we jump into it?
Matthew: Yeah, I just wanted to say thank you for the kind introduction and thanks for having me.
Eddie: Thank you for joining us, Matt. I really appreciate your time, and like I said, you just seem to have a lot of research going on.
Like I said in the intro, we've talked about AI on this podcast and the scientific writing process, but a topic on so many minds is AI in clinical practice. How is this gonna impact our care for patients, both from the healthcare provider side and for patients themselves? How is AI impacting their personal care?
AI has been present in medicine for many years, even though it [00:02:00] seems like it's only exploded into the general public's consciousness over the past few years. Matt, first let's start from the beginning. Can you give me some examples of products powered by machine learning that clinicians have already been using to take care of patients for many years?
Matthew: Yeah, there has been an explosion of interest in AI, both in the private sector and also as it relates to clinical medicine. And, you know, as you mentioned at the beginning, AI is really a way to have machines automate things that we think of as being associated with human intelligence.
So, you know, decades and decades ago, AI systems were developed where experts built these rule systems. For example, in one system from about 50 years ago, infectious disease clinicians generated hundreds of rules around how you can identify whether a patient has a bacterial [00:03:00] infection, and then whether you can recommend antibiotics.
But similarly, whether you're thinking about clinical medicine or, you know, trying to teach a car to drive itself, the rules start breaking down because they become so complex and there are so many exceptions. And that's really where machine learning has come into play.
I think now what we're seeing are these tools where you have machine learning models learning from lots and lots of data, and patterns are identified that can then help clinicians. So when we think about the types of products that have been around for a while, they're the ones where there are a lot of training data available.
So think about radiology images, for example. There have been, you know, AI tools around for many years that will do things from improving the image quality, to segmenting parts of the image, to identifying things like nodules [00:04:00] or other findings, or even today essentially providing a preliminary read for the image.
So I think radiology is one area where we've already seen some of these systems available, and I think we're gonna see more and more of these systems to come.
Eddie: So often when we're teaching clinicians, residents, and medical students, we say, you know, that patients don't always walk out of the textbook, just meaning that there are so many different ways that people can present.
And so recognizing all these patterns and stating rules for all of them is just an overwhelming proposition, obviously. Matt, I've referenced that you've done a lot of research in this already. What are some examples of projects you've worked on in this field that clinicians might already be using without realizing they're listening to the person who developed them?
Matthew: So, the research that I've been involved with for over a decade now relates [00:05:00] to early warning scores. The idea is that you have these patients who are out on the general medical-surgical wards, and most of them, you know, are going to get better and eventually leave the hospital.
But there are some of them who are going to deteriorate and potentially need ICU-level care. They may suffer a cardiac arrest or have other adverse outcomes. So for many, many years, groups have developed these tools, based on expert opinion, around how we can identify these patients. And a project that I worked on during my residency and then into my PhD, and that I continue to work on now, relates to developing machine learning versions of these tools.
What we've been doing is using patient-level data and machine learning models to identify these patients earlier. And if you can identify them earlier, you can get the critical care resources to the bedside, and then hopefully by doing [00:06:00] that, you can get better decisions earlier, transfer to the ICU, and potentially better outcomes for our patients.
And when I first started this work, this is when electronic health records started becoming more common across the country. At that point, you know, the difficult thing was even getting access to the data. And then once you had access to the data, you were using methods developed 50 or a hundred years ago, like simple logistic regression or other simpler machine learning algorithms.
And often you're just working with vital signs and laboratory data and maybe a few other variables. So, as you mentioned at the beginning, machine learning itself has changed a lot in terms of how we think about its use in clinical medicine. And I think that's why it's really exciting to think about the future, going from not even being able to access the data to now having lots of data available to really train these models and learn how to better care for patients.
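To make that kind of model concrete, here is a minimal sketch in Python, assuming scikit-learn, entirely synthetic data, and made-up feature effects. It illustrates a logistic-regression early warning score built from vital signs and labs; it is not any published or deployed tool.

```python
# Toy early-warning-score sketch: logistic regression on vital signs and labs.
# Synthetic data and hypothetical features for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
# Assumed ward variables: heart rate, respiratory rate, systolic BP,
# temperature, white blood cell count, creatinine.
X = np.column_stack([
    rng.normal(85, 15, n),     # heart rate
    rng.normal(18, 4, n),      # respiratory rate
    rng.normal(120, 20, n),    # systolic blood pressure
    rng.normal(37.0, 0.6, n),  # temperature (C)
    rng.normal(9, 3, n),       # WBC
    rng.normal(1.0, 0.4, n),   # creatinine
])
# Synthetic outcome: deterioration (ICU transfer / arrest) made more likely
# by tachycardia, tachypnea, and hypotension.
logit = 0.04 * (X[:, 0] - 85) + 0.15 * (X[:, 1] - 18) - 0.03 * (X[:, 2] - 120) - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]  # per-patient deterioration risk
print("AUC:", round(roc_auc_score(y_te, risk), 3))
```

In practice the published scores use far richer EHR data and more flexible models; the point here is only the shape of the problem: patient-level features in, a calibrated risk score out.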
Eddie: Other than data, how have things changed over time, [00:07:00] whether it's access to the data, the algorithms we're using, or developments in the field? We're at a very different place than we were 10 or 20 years ago. So what kind of changes have you seen over your career?
Matthew: So let's think about the algorithms first. With AI in general, a lot of what we saw at the beginning, again, were these expert rule-based systems. Then came machine learning, when we were able to essentially collect and learn from these data. And over the past several years we've moved from deep learning, which is a type of machine learning, into large language models and then into generative AI, where instead of saying, "Tell me the probability that this patient is going to need the ICU in the next 12 hours," we're now having it review the medical literature on a topic, or [00:08:00] giving it examples of complex patients and having it output text for us. So I think things have changed quite a bit on the algorithm side.
In addition to that, and to the availability of data, we've had more efficient computer chips and computer algorithms. And the other thing that's changing, which I think is really important, is that we as users of AI are becoming a lot more familiar with it and more comfortable with it, because in our daily lives we're surrounded by it, right?
Whether you're going online and shopping, whether you're talking to your Alexa or to Siri, or you're using Google Maps, you're seeing this in your private life. And now, as it starts creeping into clinical medicine, it may be that we're a little bit more comfortable [00:09:00] using it on the clinical side as well, which may slowly increase the pace of breaking down one of the other barriers here, which is that clinicians don't trust AI. As we trust it more in our private lives, we may start trusting it more in our work lives. Whether that's a good thing or not, I think we'll probably talk more about that.
Eddie: Yeah. And so much of this is, you know, "clinicians don't trust AI," but I think a lot of clinicians, who are normal human beings outside of their day job, don't realize how much AI is already in their normal lives. I mean, this is not hidden, but we're going on vacation in a couple of weeks, and I do use GPT to say, "Hey, can you plan that itinerary for me?" as a starting point, and I'll go from there. Speaking of GPTs, which are the algorithms that use deep learning and a large database of training text to generate new text, this generative AI you already referenced, have you used these in your clinical practice at all, or do you know any colleagues who [00:10:00] have? Any experience with this?
Matthew: So there has been some survey work on this, and it looks like up to around 40% of clinicians have used these tools in their practice at least once. A lot fewer than that use them every week or, certainly, on every patient. But I think some of the more common uses are checking drug interactions or helping to identify literature to support certain differential diagnoses or even treatments. So I think clinicians are using them and asking questions of these large language models. And, you know, my initial reaction when we first started using them was a mix of excitement and horror at the same time.
It was really exciting to [00:11:00] see how eloquent, how structured, and how believable all the text seemed, and how human-like some of the results were. At the same time, you could see instances where it was making up references or getting two concepts that sound similar mixed up, and then the results were nonsense.
And if you just blindly trusted the algorithm, you could be led in a wrong direction that could potentially harm patients. So I think we're gonna see these being used by clinicians in a lot of different ways at the bedside, and they're gonna continue to get better. But that's why I think learning how to use them, and when to use them, is a really important point.
Eddie: Yeah. This may not necessarily be your area of expertise or something you feel best placed to comment on, but do you have any advice or caution for providers who are looking to start using it, or who have been using it for a little bit now, ever since this GPT boom of the past couple of years?
Matthew: Yeah, I [00:12:00] think you really need to treat these like an inexperienced trainee. In other words, when you're interacting with these models, you shouldn't be treating them like an expert clinician, and certainly not a subspecialist expert clinician, right?
You should do the things you would do if you had a new medical student come to you with their differential diagnosis, and they see the fever and they're talking about how malaria is on the differential, right? Because it should be. But what you need to do is make sure you're using your own clinical judgment.
And the other thing that I think is really important is that, in this day and age, you can actually ask it for references, and some of the applications will provide references automatically. I would look into those references, because sometimes you look at them and you realize that, yeah, actually this is exactly the paper I'm looking for.
Here is the European society's guideline on this very disease that I [00:13:00] couldn't find when I was Googling earlier. But sometimes it'll show you a source that has nothing at all to do with the topic you asked about. So I really think you should look at it like an inexperienced trainee, where you're verifying everything it's providing you, and you also develop a sense of which things you can trust it to do and which things you really need to look at more closely.
You know, you talked about the itinerary for your trip, right? As an example, if I wanna go to X restaurant after work today, I'm probably just gonna plug it into Google Maps and let it drive me to that restaurant, or, you know, follow the directions to the restaurant. I have a very low concern that it's gonna drive me off a cliff somewhere because it doesn't know where the roads are. Whereas we're going to Greece this summer on vacation, and I did use it somewhat [00:14:00] to help plan the vacation, but I'm not gonna show up at the airport without looking at any of the results and just say, "Take me wherever the AI told me to go, and I'll be back in two weeks." So I think the human has to be in the loop somewhere, and in medicine, I think we need to be in the loop early and often to make sure that these are safe recommendations.
Eddie: Yeah. No, I think that's a really good point, and certainly reassuring to patients and people who are receiving care to say, hey, we're using this as a guide, but we're still behind the scenes making sure everything is safe and good for our patients. This is all great.
We are going to take a short break and then when we return, we'll talk a little bit about your current work, Matt. We'll be right back.
non: The ATS 2025 International Conference returns to San Francisco, May 16th to the 21st. Members get a discount on conference registration, so become an ATS member or renew your membership to take advantage of these savings at [00:15:00] ATS 2025.
You can network with colleagues, find your next research collaborator, listen to patient stories and get inspired, and be among the first to learn about breaking news in pulmonary medicine. Register now to attend. Go to conference.thoracic.org today.
Eddie: Alright, everyone, welcome back. We left off talking about some anecdotal experience using AI in clinical practice.
Matt, your research focuses on a different flavor of AI, where you develop models that may be able to guide the clinician toward the optimal care of patients. Can you tell me a little bit more about that?
Matthew: Yeah, so as I mentioned, earlier in my career I focused on just identifying who was going to develop critical illness. But it's one thing to get to the bedside of the patient; what actually is going to improve the care is providing the treatment that, for that patient, has the highest likelihood of improving their outcome. And [00:16:00] so when we look, especially in critical care, at the evidence-based medicine that we have, what we're seeing are these trials that are completed and that present the average treatment effect across all the patients in the study. But you might imagine that within that study there may be some patients who were harmed, maybe quite a bit, by the therapy, and other patients who may have benefited from that therapy.
So this whole idea is, well, can you identify who those patients are? Can you use machine learning, for example, as an approach to predict the degree of benefit or the degree of harm for an individual patient? Because if you can combine this idea of evidence-based medicine from randomized clinical trials with what the field calls causal machine learning, in other words trying to [00:17:00] predict causal effects for individual patients, we can get closer to personalized medicine for the patients in front of us. And so with that in mind, our research group has been collaborating with a number of others, including Matt Semler from Vanderbilt, Paul Young, and many more, on
these clinical trials, using these machine learning algorithms to try to identify the patients who are benefiting and who are being harmed by a therapy. The goal is that in the future, for an individual patient, even if the average treatment effect from a trial said there's no difference, we can provide them the therapy that is predicted to benefit them and avoid the therapies they are predicted to be harmed by.
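One common way to estimate individualized treatment effects of the kind described here is a "T-learner": fit separate outcome models on the treated and control arms of a randomized trial, then take the difference in predicted risk as each patient's estimated effect. The sketch below is a toy illustration with synthetic data and scikit-learn, not the method from the published work.

```python
# T-learner sketch for individualized treatment effects in a randomized trial.
# Synthetic data and hypothetical covariates; illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 4000
X = rng.normal(size=(n, 5))                    # baseline covariates
treated = rng.integers(0, 2, n).astype(bool)   # randomized assignment
# Synthetic truth: treatment helps when X[:, 0] > 0 and harms when X[:, 0] < 0,
# so the *average* effect is near zero, as in a "negative" trial.
base = 0.3 + 0.1 * X[:, 1]
effect = 0.15 * np.sign(X[:, 0])
p_bad = np.clip(base - np.where(treated, effect, 0.0), 0.01, 0.99)
bad_outcome = rng.random(n) < p_bad

# Fit one outcome model per arm, then contrast predictions for every patient.
m_treat = GradientBoostingClassifier().fit(X[treated], bad_outcome[treated])
m_ctrl = GradientBoostingClassifier().fit(X[~treated], bad_outcome[~treated])
ite = m_ctrl.predict_proba(X)[:, 1] - m_treat.predict_proba(X)[:, 1]

print("estimated average effect:", round(ite.mean(), 3))          # near zero
print("share predicted to benefit:", round((ite > 0.05).mean(), 2))
print("share predicted to be harmed:", round((ite < -0.05).mean(), 2))
```

The design mirrors the conversation: the trial-level average effect can be null even while individual predicted effects vary widely in both directions.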
Eddie: This is really interesting to me, and it reminds me of a comic I was reading; I guess this will give a little insight [00:18:00] into what kind of comics I read. The point was that the individual patient you're taking care of, when you're seeing patients in clinic or in the ICU, is not interested in the trial. They're not interested in what happened to those patients, to patients like them. They wanna know, what's the best treatment option for me? And to be quite frank, when I'm a patient, I'm also interested in that, in what's best for me as an individual, which may not be the same as for someone with similar statistics or a similar chart who might be next door.
Matthew: Right. Exactly. And I think that, in some ways, in critical care we've had a lot of negative trials, right? Too many. And it may be that these are truly studies [00:19:00] where no one is benefiting and no one is being harmed; there's just no effect at all. But it's our hypothesis that, at least in some of these trials, those null effects, or the non-significant average treatment effects, are because there are some patients who are benefiting from the intervention and some who are being harmed.
So, for example, last year we published a study on oxygen targets where we developed these causal machine learning models in the PILOT randomized controlled trial in the US and then validated the model in the ICU-ROX trial in Australia and New Zealand. And even though these two trials didn't find a significant average treatment effect, we found quite a bit of variability in the individualized treatment effects in the ICU-ROX trial, suggesting there were a lot of patients who would have potentially benefited from a high target or a low target.
And so, [00:20:00] in the future, I think what we need to do is, if we can identify situations where there is this heterogeneity of treatment effect, and then validate that these models work well in external and additional trials, we can start testing these models against usual care to see: can these models provide useful guidance to clinicians that will optimize the care and the outcomes of patients beyond what we're currently doing? So I think it's another way of thinking about machine learning, in that not only can we target who is at risk of dying, but hopefully we can also provide useful recommendations about what can help prevent that.
Eddie: Yeah. No, this is really fascinating work, and as you've already referenced, it's ongoing, so it's very exciting to see where it goes. We've talked a little bit about early warning scores, about this idea of [00:21:00] personalized medicine, which I think a lot of people are really excited about, and about some of the GPT stuff. Are there any other clinical applications of AI that you've heard of that we haven't touched on already? I know some of my colleagues are really excited about AI-based scribes for note writing, which I guess in a lot of senses is not directly impactful to patient care, but does impact clinical practice.
Matthew: right? I mean, I think, I think if we look to the different applications of AI and medicine. There are, you know, as you, as you were talking about there, there are some applications that are not patient facing, but I think are still going, going to be really important.
Right. I. Whether it's, you know, algorithms to help improve the sort of the quality of coding on an operation side for revenue generation or, or at least coding accuracy, for example. To you know, one of the projects that we worked on here at the University of Wisconsin was [00:22:00] you know, clinicians get a lot of in basket messages.
So we just published a paper where we essentially used a GPT model to help draft responses to those messages, and then the clinicians would read the response, edit it, and then be able to send it out. Similarly with scribes: you can have these devices that listen to your conversations with the patient and then help draft your note and potentially provide additional useful information.
So I think there is this other category of things where the goal is to improve the efficiency of the work we do, and I think those have great potential to improve our quality of life and potentially help with burnout. Now, I know for decades it was, "Oh, the electronic health record is gonna help with burnout," and then of course it was [00:23:00] the exact opposite. But my hope is that one day AI, or big data, or whatever you wanna call it, is really gonna help tackle that problem. And I think helping with note writing, scribes, those things can help with that piece of it.
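As a rough illustration of the in-basket drafting idea (not the system in the published University of Wisconsin paper), a minimal workflow might look like the sketch below. It assumes the OpenAI Python SDK, a placeholder model name, and that a clinician reviews and edits every draft before anything is sent.

```python
# Sketch: draft a reply to a patient portal message for clinician review.
# Assumes the openai package (>=1.0) and an API key in OPENAI_API_KEY;
# the model name and prompt wording are placeholders, not a published system.
from openai import OpenAI

client = OpenAI()

def draft_inbasket_reply(patient_message: str) -> str:
    """Return a draft reply that a clinician must review and edit before sending."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft replies to patient portal messages for a pulmonary "
                    "clinic. Be brief, use plain language, never give a final "
                    "diagnosis, and flag anything urgent for a phone call."
                ),
            },
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_inbasket_reply(
        "My albuterol inhaler ran out and I'm wheezing more at night this week."
    )
    print(draft)  # a clinician reads, edits, and then sends -- never auto-sent
```

The key design point, echoing the conversation, is that the model only produces a draft; the clinician stays in the loop on every message.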
And then I think another area where these models are being used is education. In similar surveys to the ones I mentioned earlier, medical students report using these tools quite a bit to help with their education. And there are a number of groups around the country using these approaches, generative AI for example, to help with simulation training, developing patient cases, or other ways to support education and training, which can directly help with patient care. So, in addition to [00:24:00] some of the direct patient-facing activities, I think there are a number of other ways it could help us as clinicians, as well as help patients, beyond the models that are directly predicting outcomes or recommending treatments for patients.
Eddie: Yeah, I think both of those statements ring really true for me. I'm not that old, but when I was in grade school, I learned how to Google things, right? Now it's second nature to everybody. And pretty soon children and teenagers in grade school are gonna learn how to use AI to answer questions and search for information, and eventually those children are gonna grow up and some of them are gonna become physicians. So these tools are gonna be a real part of the growing education landscape. And then, when you're talking about efficiency and burnout, I think it's gonna be hard to fully tease out the individual effect, but this will, I think, have a downstream impact on patient care. If [00:25:00] we're able to focus more of our energy on the care of patients rather than on documentation and other clerical burdens, I think that's only gonna be a good thing overall for patients downstream.
Matthew: Yeah. And, you know, folks who are really excited about AI talk about how it'll help make you more efficient, and of course the response is, "Well, that means I'm just gonna get more patients added onto my clinic."
Eddie: Yeah.
Matthew: But at the same time, I think most of us, maybe all of us, went into medicine because we wanted to care for patients, not to write a bunch of stuff down, do a lot of documentation, and do a lot of things for billing, right? So, and I don't have a clinic, so I'm making up numbers here, but if I had to choose between seeing 10 patients in a day and then spending a lot of my hours writing documentation and doing other things that could be automated by AI, or seeing 15 patients in a [00:26:00] day where all of that is face-to-face, talking with the patient and building that relationship, I'd take the latter every day, as long as I get more time doing the things that got me into medicine in the first place.
Eddie: I mean, no medical student wrote their personal statement about how they wish they could spend more time at a computer, right?
Matthew: Maybe they will in the future, I don't know, but...
Eddie: Maybe, maybe. But as of right now, I think that's true; at least it's true for you and me. So, Matt, I'll take you back to your research a little bit, and maybe you can answer this question in two ways, for you personally in your research and then in general: what's coming next in this field for us as clinicians, and where do you wanna see this field move?
Matthew: Yeah. So I think what's coming next and where I wanna see it move are probably two highly interrelated questions. I think what's coming next is that, [00:27:00] first, the models are gonna get better and better. Imagine where we were two or three years ago versus where we are now; it's really been an incredible leap forward, so I'm imagining the models themselves are gonna continue to improve. I think there's also a lot of drive to get these models to be more transparent, to provide more citations, and to have more guardrails as it relates to models that might be implemented in a patient-facing manner.
So I think there's gonna be a lot of growth in that direction. Also, because big industries and big tech companies are involved, there's a lot of funding, a lot of resources, a lot of money in this area, and I think the field is gonna move a lot faster than what we typically think of in academic medicine, where you write a paper and 10 years [00:28:00] later it becomes part of a guideline, or people actually start doing it.
I think we're gonna see things change and move a lot more quickly. In addition, we're gonna see models that can use multimodal data being integrated into the EHR, where you're able to utilize the clinical notes, the structured data, and the image data, so you're gonna be able to get a fuller picture of what's going on with the patient as well.
And from a technical perspective, one of the things you're seeing a lot now is this idea of agents and multi-agent workflows. So what does that mean? What an agent essentially does is, when you give it a task, it will plan the task, it can interact with the environment, it can iterate to optimize the task, you can interact with it, and [00:29:00] it has a memory of the conversation you've had with it before. You know, a year or two ago these models often were static models trained on data up to a certain point. So it would provide you information, but unless it was explicitly programmed to look at the internet, it may not know that travel to that country is closed, so you can't go there,
or that this airline is no longer flying out of this city, or whatever else. But now you can develop these agents that build expertise and interact with the environment. So how might that work in the EHR? Well, you could, for example, have an agent that essentially tries to become a radiologist: it interacts with the radiology images and gains expertise in reading the images in your center. And then you have a critical care [00:30:00] physician agent that talks to the radiology agent, where the critical care physician agent is learning about all the other clinical data about the patient and maybe knows the guidelines. And then all these agents start interacting and providing sort of final recommendations,
or you can continue to interact with them over time. So I think, way down the road, we're gonna be able to have these, whether you wanna call them assistants or guides or agents, that are able to summarize information and even provide recommendations for things that you might wanna do.
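A stripped-down illustration of that agent idea: each agent keeps its own memory and a role, and the agents pass messages to each other. The language-model call is stubbed out as a plain function so nothing here depends on a particular vendor or framework; real agentic systems add tool use, retrieval, and guardrails that this toy omits.

```python
# Toy multi-agent sketch: a "radiology" agent and a "critical care" agent that
# keep their own conversation memory and exchange findings. The llm() stub
# stands in for a real model call; everything here is illustrative only.
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    # Placeholder for a real language-model call.
    return f"[model response to: {prompt[:60]}...]"

@dataclass
class Agent:
    name: str
    role: str
    memory: list[str] = field(default_factory=list)

    def ask(self, message: str) -> str:
        # Build a prompt from the agent's role plus what it has seen so far.
        context = "\n".join(self.memory[-10:])
        reply = llm(f"You are a {self.role}.\nHistory:\n{context}\nNew message: {message}")
        self.memory.append(f"IN: {message}")
        self.memory.append(f"OUT: {reply}")
        return reply

radiology = Agent("rad", "radiologist reviewing chest imaging")
icu = Agent("icu", "critical care physician integrating clinical data and guidelines")

# The ICU agent asks the radiology agent about a study, then folds the answer
# into its own recommendation -- with a human clinician reviewing the output.
imaging_read = radiology.ask("Summarize today's portable chest radiograph for bed 12.")
plan = icu.ask(f"Given this imaging summary: {imaging_read}. Suggest next steps.")
print(plan)
```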
So I think that's where we're going over the long term. But the hurdles to that are many. There are obviously patient privacy concerns; that's always a hurdle for these types of models. And I think there are cultural changes needed in terms of trusting these models.
And I think another hurdle is that it's really hard to evaluate the quality of these [00:31:00] models. You know, if you're developing a classification model, you can output an AUC, and then you say, "That's a 0.9 AUC, that sounds really good, that's a good model, and this other model with a 0.48 or 0.5 AUC is not better than a coin flip."
Eddie: Yeah. You have these numbers that tell you whether the model is good or bad, and that's pretty easy.
Matthew: Yep, exactly. And so another challenge is that it's difficult to evaluate these models at scale when they're outputting a lot of text. If you want to implement one of these models in your practice, how do you know which model to use, and how do you know if it's a good model or not? How are you gonna evaluate that? I think that becomes a challenge. If you're trying to get it to answer board exam questions, that's an easy evaluation because they're multiple choice, but for clinical medicine it's a lot more complex.
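To make that evaluation contrast concrete: for a classification model, a single number like the area under the ROC curve does a lot of work, as in the short scikit-learn sketch below with made-up labels and scores; there is no equally simple, agreed-upon number for judging pages of generated free text.

```python
# Evaluating a classifier is (comparatively) easy: one label, one score, one AUC.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                          # observed outcomes
good_scores = [0.1, 0.4, 0.8, 0.9, 0.75, 0.7, 0.2, 0.95]   # informative model
coin_flip = [0.5] * len(y_true)                            # uninformative model

print(roc_auc_score(y_true, good_scores))  # 0.94: ranks most positives above negatives
print(roc_auc_score(y_true, coin_flip))    # 0.5: no better than chance
```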
Eddie: Are [00:32:00] these the things you referenced a little earlier, when I asked the original question, where what's coming next and where you wanna see things move might be a little bit different? Are these agents the things that make you excited, or, for you and your research, where do you think the next steps are gonna be?
Matthew: Yeah, so in terms of where I really want this to go, I think a really important part is making sure we're evaluating these tools and testing them in clinical studies, especially the ones that are going to drive clinician decision making and clinician actions, in really rigorous ways. Because there is a lot of excitement to just build these models into the EHR, turn them on, and let clinicians use them, because obviously they have to be helpful, right? But, again, if we go back to critical care, how many times have we said, "Oh, [00:33:00] all the data shows this is clearly gonna be the intervention that helps patients," and then we find it's actually the opposite? So I think we really need to consider these as we would any other intervention, in that we need rigorous studies comparing how these models are being used against usual care, standard of care, whatever you want that control group to be.
And we need to make sure not only that these are valuable and providing benefit to the clinicians and/or the patients, and hopefully both, but also to consider the costs involved. Then there are also things like health equity, and being aware that these models may be trained on biased data that are available on the internet. Oftentimes you don't know what data sets were used to train these models, and I think there's always a concern that you could be [00:34:00] reinforcing some of the biases already inherent in medicine by using these models to care for patients. So I think there are a lot of guardrails, a lot of things we really need to be implementing, and a lot of things we need to be investigating before we just turn a bunch of these on and let them go wild in clinical medicine.
Eddie: Yeah, and I think it's really refreshing for me, and I'm sure for many of our listeners, to hear someone who's a researcher in this exploding field express the desire to, hey, pump the brakes a little bit. We need to make sure what we're doing is the right thing. Nobody ever made a decision or tried to make a change that they thought was harmful; everybody's always trying to do the right thing. But we have to make sure that we're doing the right thing for our patients overall. I think that's probably a pretty good note to start to wrap up on. Is there anything else that you think our listeners would be interested in, or any other last words that you wanted to leave us with?[00:35:00]
Matthew: Yeah, I think we're living in a really exciting time for medicine, and I don't wanna leave on a downer. I really do think that AI and machine learning, and certainly these new generative AI tools and models, have great potential to improve our lives as clinicians and the lives of our patients.
I think we need to think about how we wanna help drive the use of these. How can we identify the best use cases? How can we drive this field? Because I don't think we as clinicians want to be passive observers and let the big tech companies decide how these are gonna be implemented in clinical practice. I think we really need to advocate for the concerns we have about these models, the needs we have for these models, and the needs our [00:36:00] patients have for better care, and then see how we can implement these models to solve those problems. So I really hope that's where we move in the future, and I'm really excited to see where everything goes.
Eddie: Yeah. No, I think that's great. It's never a downer talking with you, Matt; I really do appreciate your time. Thank you, Matt. And thank you, everybody, for joining us on today's ATS Breathe Easy episode. Please subscribe and share this episode with your colleagues. If you haven't done so yet, register for the ATS International Conference in San Francisco at conference.thoracic.org.
Members get a discount on conference registration. Registration will probably be on the later side now, but we will see you next time. Thank you all. Thank you very much.
non: Thank you for joining us today. To learn more, visit our website at thoracic.org. Find more ATS Breathe Easy podcasts on Transistor, YouTube, Apple Podcasts, and Spotify. Don't forget to like, comment, and subscribe, so you never miss [00:37:00] a show.