ATS Breathe Easy: ATS 2026 Keynotes Preview, Part 2

Ugo: [00:00:00] Welcome back to ATS Breathe Easy podcast. I'm your host, Dr. Ugo Ezeama. I'm delighted to talk with today's guest, Dr. Laura Turner. Dr. Turner will deliver the keynote address on Tuesday, May 19th at the ATS Conference of 2026 in Orlando. The title of her presentation is Opportunities and Challenges Using AI in Medical Education.
This is a great complement to the other AI related content at this year's conference. So let's get into it. Dr. Turner, welcome. Welcome to the show.
Laurah: Thank you so much for having me.
Ugo: You did your PhD in biological anthropology, so you were studying human evolution and behavior. Yes. Now you're the Associate Dean of AI and Education Informatics at the University of Cincinnati College of Medicine, and you're building AI systems that coach medical trainees in real time.
Do you mind walking me through that arc? How does someone who [00:01:00] started off studying how humans evolved end up designing the technology that might reshape how we train the next generation of physicians?
Laurah: So I'm actually going to back up just a little bit, to before my PhD, and you'll see why. I think that my career trajectory has been a series of just right place, right time, and circumstances that I didn't anticipate.
So I actually started out life in a very different form, as a forensic anthropologist. And I did forensic work, and I loved the science of it. I loved the human anatomy. I loved trying to figure out the cases. I really didn't like testifying and the legal aspect of it, which my husband, who is a judge, thinks is very funny.
And so that's when I decided to go and get my PhD in biological [00:02:00] anthropology. And yes, I was very interested in human evolution. And really, even though it seems like a big leap, what my dissertation was really looking at was using machine learning to model gene transfer through human ancestral environments, using primates and modern hunter-gatherer societies as proxies.
So I was actually doing machine learning, very complex statistical analysis, AI for all intents and purposes, right early in my career. When I graduated, however, it was the start of the recession, and nobody was gonna fund that type of work, right? So what I ended up doing was taking a faculty position teaching gross anatomy, because I had that background in forensic anthropology, and I had done a lot of cadaveric dissection.
And so I [00:03:00] went through a couple of faculty roles and landed myself at the University of Cincinnati, and it was recognized that I had both that very quantitative, statistical, analytical background as well as the ability to do cadaveric dissection. And so I became the Dean for Assessment and Evaluation, and also taught gross anatomy across the curriculum. And interestingly enough, in that role as dean for assessment and evaluation, I never really did exactly what traditional assessment and evaluation deans did.
The University of Cincinnati has always been very forward-thinking, and so my team was really pushing the boundaries. We were very early into learning analytics and big data, trying to understand how narrative assessment could be analyzed instead of just accumulated and [00:04:00] never looked at again.
And so we were using natural language processing, we were using machine learning, all of these techniques the entire time I was in that position. So, to make it a little bit more simple, we were using small language models when large language models like- Mm
ChatGPT burst onto the scene. In fact, I was just down at UT Southwestern on a panel for simulation and was recalling how I remember GPT-2 and how unusable it was. And I remember the moment when GPT-3 came onto the scene and how transformative it was. So it was a very easy transition for my team to start working with large language models, integrating large language models with traditional machine learning mechanisms.
We were there. The difference was, all of a sudden, people cared about what we [00:05:00] were doing and were interested in it.
Ugo: So this was actually a much more natural progression than it sounds. Yeah. Based on your background, it sounds like this is almost the next evolution, or the next application, of your experience and your education.
Laurah: Yes, absolutely. And we were very lucky because the University of Cincinnati recognized the work that we were doing in artificial intelligence very early on and was a leader in creating you know, one of, one of the first, if not the first decanal roles that focused both on artificial intelligence and education through educational informatics.
Ugo: You published in ATS Scholar earlier this year about, from what I understand, the idea that AI could create incredibly personalized learning pathways for trainees, [00:06:00] but that personalization might actually conflict with the structured competency development that educators have built their careers around. To me, that's a real tension. Mm-hmm. Can you unpack that for us? What's the part that keeps you up at night?
Laurah: Yeah, I was very fortunate to be able to collaborate on this paper with Dr. Sanjay Desai and some of my colleagues at the University of Cincinnati. And it's really just amazing when you have, you know, people who come from informatics, from a pulm-crit background, a physician, technical people, and you think about a big problem.
And, you know, where you can go. And so we actually ended up coining this concept that we refer to in the paper as the alignment paradox. Mm-hmm. And what the alignment paradox says, basically, is that as these AI systems grow more powerful [00:07:00] and more autonomous, and they're much more powerful and autonomous now than when we wrote the paper, there's this question: How do we ensure that they remain aligned with our educational objectives while effectively- Mm
supporting physician training? And, as I just said, it's not a distant concern, right? It's already here. Most of us are talking about this future where AI is integrated into all parts of healthcare and healthcare education, but it's already here, right? It's already- Mm-hmm
the standard of care. And so it's not like we have this ramp-up where we can say, "Oh, well, by 2027, we have to be ready for this." We have to muddle through it, we have to struggle through it, now. We know our learners are using these technologies in their studying in medical school, and we know that learners are already using [00:08:00] technologies like OpenEvidence, ambient scribes- Mm-hmm
right? The adoption of these technologies is completely unprecedented, and we don't really understand the failure modes yet. We are uncovering them as we go. And so it's a very different place than we have ever been before in any type of technological revolution, which is both exciting and concerning, because we don't necessarily know how to change the paradigm of how we educate and how we continue to make competent physicians who are gonna be ready for a future with these technologies.
Ugo: Ever since I started using ChatGPT when it first came out, and seeing the exponential progress it's had, along with all the other large language models that have come out, my overarching [00:09:00] opinion about it all is that we are grossly unprepared for how much our lives are going to be transformed, not just in healthcare, but also in society at large.
The way we look at the world, the way we interact with the world, is about to be radically transformed in the next five to 10 years, if not sooner, again highlighting the exponential growth. And your lab, Two Sigma Labs, is that- Yeah ... am I saying that correctly?
Laurah: That's correct.
Ugo: You mentioned ambient AI, and- Yes ... I think pertinent to medical trainees, you're working on developing something where medical trainees could be wearing AI-enabled glasses during an actual patient encounter and receive real-time
feedback on their clinical reasoning and communication skills. Very Trekkie. My first reaction was, there's an element of, how is this even going to work? Can you talk to me about what that actually looks like in practice, [00:10:00] and how you make sure the technology enhances- Yeah ... the encounter rather than getting between the trainee and the patient?
Laurah: Yeah. And these are questions that we are asking. So the University of Cincinnati is partnering with Arizona State University and one of their affiliated community hospitals, HonorHealth. And, you know, we are not just asking, are AI wearables gonna work in these environments, can we do the thing?
We are asking a deeper question. We are asking how these technologies impact the cognitive load of a learner, and how they can enhance it. Mm-hmm. So we're actually A/B testing two technologies: the wearable AI glasses alongside a version of the technology that just uses the microphone from a smartphone, similar to the ambient scribes.
Mm. Because we are very [00:11:00] aware of and also focused on understanding how these technologies could create disparities for small community hospitals that might not have the resources to have AI glasses. They're expensive, right? Also, what happens if, like yourself, you already wear glasses?
Now, we can put prescriptions in the glasses that we're developing, but that's expensive, you know, to have that type of technology. And while the heads-up display, being able to push information into the visual field of the learner, sounds super cool, we also don't understand how that is going to impact cognitive load, the thought process, the intimate interaction between the physician and the patient.
This is all part of the work that we have to do. So we are actually starting small. We have an AI [00:12:00] platform which has been deployed throughout the University of Cincinnati Medical School for the last three years, going on four years, and this technology creates simulated AI patients in a web environment, right?
So learners can log on, they interact with the patients, they can go through a whole scenario. And depending on what level of learner they are, the goal might be diagnostic success or- Mm-hmm ... management of care, right? We're actually partnering with the ABIM on this project. So they get very comprehensive and personalized feedback after each encounter, and over time the system learns about the learner and adjusts.
Mm. And so we're actually ripping off that feedback part of the technology and putting it onto the glasses. So instead of going through an encounter online, you would just record the encounter of the patient scenario. [00:13:00] And so- Mm ... we're working on that, but we're asking questions like, what are we actually able to evaluate using ambient audio?
We're pretty sure we can do communication competencies, and we think we can do some clinical reasoning competencies, but we don't know that yet. We don't know the quality of that. We also have to learn how well that audio capture technology can diarize voices. And what diarization is, is the ability to distinguish consistently and accurately between the voices of people.
So consistently attributing the things I say to me and the things you say to you. Mm. Now, in Zoom, for example, like we're on right now, that's easy to do, because you have a separate microphone than I do. But using a single microphone with multiple voices in one room, that gets tricky. In our technology now, we can do that pretty well up [00:14:00] to four to five voices in the room, unless they're all female, and then it breaks down.
But these are very important things, right? Mm-hmm. Because these are the biases which are baked into some of these technologies that we might not be aware of. Now, I'm very confident that we can solve for this, and we're already working on it. But unless we were deliberate about looking for inconsistencies in things like voices, or even accents, which are another thing that we are going to be really focusing on, we wouldn't know.
So we're doing all of that, and first, we're gonna be deploying it in the simulation environment. We are not going into real patient encounters until such time as we get this technology right in the simulated environment. And yes, of course, simulation is much easier and much more controlled than messy real-world clinical encounters.
But we, in my lab, have a very specific and deliberate approach to [00:15:00] advancing these technologies. We do take bold steps forward in our deployment. However, we make sure that we are about 90% confident that the technology is going to consistently work, that it is reliable, that it is interpretable, and that it is defensible before we deploy into, you know, a further high-stakes environment.
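[Editor's note: the single-microphone diarization problem described above can be sketched as a toy clustering loop. This is a hypothetical illustration, not the lab's actual pipeline: real diarization systems cluster learned speaker embeddings, whereas this sketch reduces each speech segment to one assumed feature, mean pitch in Hz, which is exactly why closely spaced voices collapse into one speaker, mirroring the all-female failure mode mentioned.]

```python
# Toy sketch of single-microphone diarization by online clustering.
# Hypothetical simplification: each segment is one feature (mean pitch, Hz);
# real systems use learned multi-dimensional speaker embeddings.

def diarize(segments, threshold=25.0):
    """Assign each (start_sec, pitch_hz) segment to a speaker cluster.

    A segment joins the nearest existing cluster if its pitch is within
    `threshold` Hz of that cluster's running mean; otherwise it starts a
    new speaker. Returns one speaker index per segment.
    """
    centroids = []  # running mean pitch per discovered speaker
    counts = []     # segments assigned to each speaker so far
    labels = []
    for _, pitch in segments:
        if centroids:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(centroids[i] - pitch))
            if abs(centroids[nearest] - pitch) <= threshold:
                counts[nearest] += 1
                centroids[nearest] += (pitch - centroids[nearest]) / counts[nearest]
                labels.append(nearest)
                continue
        centroids.append(pitch)
        counts.append(1)
        labels.append(len(centroids) - 1)
    return labels

# Two well-separated voices (~115 Hz and ~210 Hz) diarize cleanly:
mixed = [(0, 118), (2, 212), (4, 110), (6, 208)]
print(diarize(mixed))    # -> [0, 1, 0, 1]

# Three closely spaced pitches collapse into a single cluster:
similar = [(0, 200), (2, 215), (4, 230)]
print(diarize(similar))  # -> [0, 0, 0]
```

Under these assumptions, the second example shows the bias concretely: voices whose pitch distributions overlap get merged, so errors concentrate on whichever group of speakers happens to sound most alike.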
Ugo: This is my first time meeting you, and I have to say, in looking over all of your accomplishments, your resume is very, very impressive. And it sounds like there's a lot of inherent feedback in the way you work. You've said the word deliberate often, and I can't help but relate that to a certain sense of humanitarianism about your work at large. One of your funded projects, and you brought it up unprompted, really, is centered around equity [00:16:00] and bias, right?
So Fair Agent.
Laurah: Mm-hmm.
Ugo: That word equity is doing a lot of heavy lifting, as it does in pop culture and in regular conversations in our society now. We know that bias already exists in how trainees are evaluated.
Do you think AI can help fix that problem? Does it risk encoding it at scale? And how do you build a system that's actively working against those biases that are already baked into the data it's learning from?
Laurah: I think these are wonderful questions. And you know, I'm not gonna say that we have it all figured out.
The one thing that our lab has been outstandingly fortunate to accomplish, and I can, you know, make it sound like, "Oh, this is how we intended it the entire time," but really- Mm-hmm ... it was very [00:17:00] organic. It is a very multidisciplinary lab. And so we are fortunate at the University of Cincinnati to have just really, really brilliant people who have come together to work on these projects.
And in addition to that, we have been able to partner as a result of the consortium through the AMA, with other institutions. And so it's really bringing a diversity of thought to these big problems to solve them. I'm a very big believer in that concept of the wisdom of crowds. And so we have- Mm
um, we have tech people, data scientists, analysts, informaticists, computer scientists in the lab, and we have physicians who have absolutely no tech background, but they are experts in equity. They are experts in clinical reasoning. They're experts- Mm-hmm ... in simulation. They're experts in assessment.
We have [00:18:00] design people for user interface and user experience. We have- ... learners, and that is the most important thing, on all of our build teams. We hear our learners saying, "Nothing about me without me." And so they give us feedback. Our lab has an open-door policy where learners at any time, as long as we're not talking about learner data that's sensitive, can come and just listen to the things that we're working on in the lab and give feedback.
And so I say all of this because when we were developing Fair Agent, we wanted to be deliberate but also eyes wide open, knowing we're not gonna solve all of these problems. We knew- Mm-hmm ... that in the literature there have been multiple studies, not only in medicine, where women are provided with narrative feedback that is different than men's.
The language used, the concepts that are talked about in the [00:19:00] narrative assessment, they're different. Underrepresented students in medicine receive different types of feedback than non-underrepresented learners, right? And all variations of that. And so our goal with Fair Agent was to really lean on all of that evidence and try to create a technology that doesn't necessarily say, "Oh, this is bias," but instead takes one step to solve a problem by bringing in the human; you might have heard the term human in the loop.
And so what Fair Agent- Mm-hmm ... does, and all it does, is it's a bot, and it sits and reads all of the narrative assessment, and it flags any statement for potential inequity, right? Or unfairness, based on all of this literature. It doesn't say it's bias. It gives it a probability-of-bias score of zero to 10.
And [00:20:00] it flags it, and it's an elevation protocol for human review. Now, we all know that it is difficult enough to get faculty to have the time to go through narrative assessments just for, you know, is the narrative assessment correct and reflective- Mm-hmm ... is it good-quality feedback, let alone having somebody with a full-time FTE go through it and look at it for bias, right?
And so we figured with this technology, if we could get this right, maybe we could just lower the barrier to entry and just elevate those particular narratives that need a human set of eyes on them. So that's what the technology does, and we developed it at the University of Cincinnati in collaboration with the University of California, Davis, so UC Davis, and the University of North Carolina at Chapel Hill.
And the reason that we did that was very deliberate, because there's something called robustness. If the technology only works at the University of Cincinnati, then it only [00:21:00] works at the University of Cincinnati, and that's not very helpful. Mm-hmm. So we wanted to make sure that it was robust from a type-of-environment standpoint.
So UC is a majority-majority school, UC Davis is a majority-minority school, and UNC- Mm ... Chapel Hill is a little bit of both. Also, geographically, you know, we have more of an East Coast, more of a Midwest, and more of a West Coast. There's always going to be these variations, and so we want to make sure that the technology works for everybody.
And so that's how we've approached doing the Fair Agent project. And we are hoping that by scaling that, and we're still in the process of tweaking it, we can just elevate an issue that has, from a structural standpoint, not been solvable, just due to the constraints on human time and resource availability.
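[Editor's note: the flag-and-elevate protocol described above, a 0-to-10 probability-of-bias score plus escalation to a human reviewer, can be sketched as follows. Everything here is a hypothetical stand-in: the real Fair Agent presumably scores narratives with trained language models, while this sketch uses a toy keyword heuristic, with invented cue lists and an assumed review threshold, purely to show the shape of the pipeline.]

```python
# Minimal sketch of a Fair Agent-style flag-and-elevate loop.
# The cue lists, scoring weights, and threshold are all hypothetical.

REVIEW_THRESHOLD = 7  # assumed cutoff on the 0-10 probability-of-bias score

# Toy cue lists; real evidence-based lexicons are far richer.
PERSONALITY_CUES = ("lovely", "pleasant", "hardworking", "bubbly")
COMPETENCE_CUES = ("differential", "diagnosis", "management", "reasoning")

def score_bias(narrative: str) -> int:
    """Return a 0-10 probability-of-bias score for one narrative comment.

    Toy heuristic: comments heavy on personality language and light on
    competence language score higher, echoing findings that feedback for
    some groups emphasizes traits over clinical skill.
    """
    words = narrative.lower()
    personality = sum(cue in words for cue in PERSONALITY_CUES)
    competence = sum(cue in words for cue in COMPETENCE_CUES)
    return max(0, min(10, 3 * personality - 2 * competence + 4))

def escalate(narratives):
    """Flag narratives whose score crosses the threshold for human review."""
    return [(n, s) for n in narratives
            if (s := score_bias(n)) >= REVIEW_THRESHOLD]

comments = [
    "Built a thorough differential and justified her management plan.",
    "A lovely, pleasant presence on the team; very bubbly with patients.",
]
for narrative, score in escalate(comments):
    print(f"score {score}/10 -> human review: {narrative}")
```

The design point the sketch preserves is that the bot never labels anything "biased"; it only assigns a probability and elevates the few high-scoring narratives, so scarce faculty time is spent reviewing flagged comments rather than reading everything.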
Ugo: I love how it's both a research project and, at the same time, a direct [00:22:00] intervention for the problem at hand. I think that's such a wonderful thing to focus on. At your keynote on Tuesday morning at ATS, there are going to be a lot of different types and different levels of providers and caregivers, as well as people from different sorts of medical and healthcare industries.
So there'll be pulmonologists, respiratory therapists, nurses, fellows, people who spend their days at the bedside, not necessarily in an AI lab. And when they walk into that room, what's the one assumption about AI in medical education you want them to leave behind? And what's the one thing you want them to walk out ready to do?
Laurah: I think that the message that I'm going to be trying to convey is that we need to take a look at how we talk about training and education, because I think we've conflated them. [00:23:00] And I think that AI-
Ugo: Mm ...
Laurah: is really going to excel at training. But it still, and I will show this through current studies that are out there, it still struggles in terms of education, at least how I define education.
And humans remain the gold standard, not because the models aren't impressive, but because the things education is actually for, judgment, reasoning, professional identity formation, ethical dilemma- Mm ... navigation, these are the things that no benchmark has currently learned to measure. And AI is really reliable where education needs the least help, and least reliable where we need the most thoughtful integration.
But I am very much an AI optimist, because I think that when we use artificial intelligence [00:24:00] to augment human judgment rather than to replace it, and when we use it to scaffold learning rather than to outsource thinking, which I'll also talk about, we have the ability to truly transform medical education in a way that both elevates the learning experience and the learning environment, and can improve patient care.
Ugo: Dr. Turner, you are a revelation.
Thank you. That was my last big question. And so this is what I'm taking away from this conversation. A little backstory: you started your career studying how humans evolved, and you're helping to build a tool that might define how the next generation of physicians and providers can evolve as clinicians.
That's not by coincidence. I think we've highlighted that the bridge from anthropology to AI might not be that long [00:25:00] a bridge at all. I think the question underneath all of this isn't necessarily about AI, it's about learning. I love how you parsed the difference between education and training.
In medicine, we tend to conflate the two, like you pointed out. How do we learn? How do we get better? And who gets left behind when the system isn't paying attention? What struck me is that you're not selling us on AI, necessarily; based on the way that you're talking about it, you're wrestling with it quite openly as well.
We spoke about the alignment paradox, the equity question, the tension between personalization and structure. I love how open you are that these are problems that we haven't solved yet, but that you, and your lab, are actively working on. I think that's exactly what makes this keynote worth our listeners' time.
I think the reality is AI is already in our institutions. It's already a part of [00:26:00] healthcare. It's in our pockets; OpenEvidence, as you pointed out. It's in the tools that we're using to study. I use ambient listening to help me with my scribing. Mm-hmm. We're using it to prepare for boards, to look up things at three in the morning on call.
Yeah. So the question isn't whether AI is coming to medical education, it's, it's already here. The question is whether we're going to shape it or let it shape us. What are you, what are you looking forward to most at ATS right now?
Laurah: So I am very privileged that I get to go all over the world and, you know, learn about how other institutions and organizations like ATS, and their delegates and members, are using artificial intelligence.
And so really, I am looking forward most to learning about what's happening in that world. What are the [00:27:00] concerns? What are the use cases? What are people excited about? Because I always- Mm ... feel like I take more away than I give at these events. So I'm just looking forward to, you know, as a lifelong learner, what I'm gonna figure out and what I'm gonna do next based on- Yeah
the things we learn from each other.
Ugo: I love it. Your keynote is on Tuesday morning at ATS in Orlando. I'm looking forward to hearing it, and I'm sure you're looking forward to some good, challenging questions.
Laurah: Yes.
Ugo: You sound like the kind of speaker who wants those kinds of difficult questions anyway.
Absolutely. Thank you so much. Thank you so much for- Yeah ... for joining me for this conversation. I hope that I get to meet you in person at the conference. And that's it for our episode of ATS Breathe Easy, and we'll see you in Orlando.
Laurah: Thank you so much.

© 2025 American Thoracic Society