ATS Breathe Easy - AI's Impact on Pulmonary Research and Beyond

[00:00:00] You're listening to the ATS Breathe Easy podcast, brought to you by the American Thoracic Society.

Eddie Qian, MD: Hello and welcome. You're listening to the ATS Breathe Easy podcast with me, your host, Dr. Eddie Qian. Each Tuesday, the ATS welcomes guests who share the latest news in pulmonary, critical care, and sleep medicine. Whether you're a patient, patient advocate, or healthcare professional, the ATS Breathe Easy podcast is for you.

Joining me today is Dr. Nitin Seam, the editor of ATS Scholar, who will discuss AI and its role in scientific research and review. Dr. Seam is the editor in chief of ATS Scholar, as I mentioned. He's a clinical professor of medicine at George Washington University and the Uniformed Services University.

His areas of expertise relate [00:01:00] to mechanical ventilation and ARDS. His educational interests relate to innovation and medical education, including artificial intelligence, web based learning, and simulation based education. Welcome to the show, Nitin.

Nitin Seam, MD: Thank you, Eddie. It's really a pleasure. Excited to talk with you.

Eddie Qian, MD: Yeah, me too. This is a good topic. For the listeners, let's first talk a little bit about the jargon we're going to be using. I think a lot of listeners at least vaguely understand what AI, or artificial intelligence, is as a broad topic, but we're going to be talking about LLMs, or large language models.

Can you give me, and the listener, a little introduction to what LLMs are?

Nitin Seam, MD: Yeah, thanks. It's a great question. And, you know, a lot of it is just unfortunately, unnecessarily confusing jargon, and so I get confused. It's kind of like ventilator jargon, I think, in a way. So let me walk through it a bit.

Um, so, you know, you may hear the term chatbot, [00:02:00] right? A chatbot is a general-purpose AI system with some sort of chat interface. Then there's natural language processing. That's a type of AI that's basically enabling a machine to understand, interpret, and generate human language

in a manner that's natural for us humans. And so this became really widely known and explored when OpenAI released ChatGPT, and the GPT stands for generative pre-trained transformer, which is basically a large language model. That means it's trained on massive amounts of text data, basically to effectively contextualize the sequential nature of words in a sentence to predict the most plausible response

and generate natural-language responses to text prompts. And so when you started inputting things into ChatGPT and got [00:03:00] these remarkable responses, I think people really started to see the power of these large language models. And, you know, we are sometimes a little bit behind technology in medicine, but I think everyone saw

so many different use cases, and there's excitement in all domains of medicine, and the use has really taken off in the last two years.

Eddie Qian, MD: I mean, talk about being behind the technology in medicine. I mean, do you still carry a pager?

Nitin Seam, MD: We recently phased them out, finally, but that's exactly right.

Eddie Qian, MD: I was dealing with fax machines earlier today.

I mean, this is archaic if you're talking about other industries. But here we are in medicine, trying to get with the times a little bit. We're going to be talking about a statement that the ATS put out, that you authored, about the use of LLMs in scientific research and writing.

But when we were talking a little bit before about this, you know, there's an interesting backstory to [00:04:00] why the ATS thought it was important to put out a statement, to have a stance on what was going on. Why did the ATS really think that we needed to have this?

Nitin Seam, MD: Sure. I mean, the ATS journals are part of the big umbrella of the ATS, right? And as you introduced me, I'm privileged to be the editor of one of the four ATS journals, and the ATS is a non-profit publisher of journals, as opposed to a lot of the for-profit ones. So the journals have oversight by a publication policy committee, and they make sure we're doing all the right things and are on track.

And at our last meeting of the publication policy committee, which is sort of leaders in pulmonary critical care with publication experience, as well as the editors, there was discussion about our AI policy, because everyone's talking about AI. And, you know, we looked at the last policy we had, and there were questions [00:05:00] related to, well, what exactly does this mean,

or what does that mean? What are we going to do about papers and confidentiality, and what can be put into a chatbot or not? And so it really sparked a lot of great conversation and debate. And then I think the editors at that meeting and some of the leaders of the group said, we really probably need to take a look at this policy and provide some specificity to it.

And then the genesis of the editorial we just published was to say: there are a lot of moving parts here, there are a lot of areas of uncertainty in these situations. Rather than just posting an update to a website, let's talk about it. And that's what the editorial was, to say what our overarching goals were, but also some limitations and some challenges in some areas that we're going to have to monitor.

Eddie Qian, MD: Yeah.

So, for the listener, we'll have the link to this editorial in our show [00:06:00] notes. But Nitin, can you take me through some of the key summaries and main takeaway points from the editorial, from the statement?

Nitin Seam, MD: Yeah. So, first of all, I appreciate you putting the link to the editorial in the podcast notes. There's a table, a box, that shows what our policy was and what the changes were to provide specificity.

But just briefly summarizing that: one of the things, even just looking at it from a 30,000-foot view as we were debating it, is that we all as a group felt that, as you talk about technology, you know, we're peer-reviewed journals.

So we take very seriously protecting authors' confidential work. That's a really important thing for us. Authors take the time and trust us with their manuscripts, and we hope to publish them. We need to protect [00:07:00] that confidentiality until they're published. And so that was one thing that was really an important guiding principle for us.

And also, you know, we're going to be using lots of AI tools and such, but if you're an author of a paper, you're ultimately responsible for your work. So you can use these tools as appropriate, and transparent disclosure is a guiding principle for our policies overall. And so what we did is, one, we first reviewed our policies. There are big organizations, committees, like the International Committee of Medical Journal Editors and the Committee on Publication Ethics, called COPE.

And our policies were aligned, in general, with those committees that make policy recommendations for medical journals. We also looked at other major medical journals, and we found opportunities to improve our [00:08:00] workflow for authors and reviewers and increase the specificity of those policies.

But some of the high points: we made some changes to our submission portals. So when you want to publish a paper, say, in the Blue Journal, the American Journal of Respiratory and Critical Care Medicine, or any of our four journals, you go to a submission portal, submit the paper, fill out some questions, submit a cover letter, and so forth.

And you're asked several questions. We made a change asking both authors and reviewers to disclose any use of large language models or other AI tools. This way there will be transparency for us as editors, for reviewers when they're taking a look at the paper and how the authors used the tool, and eventually, hopefully, for readers of published articles.

So we all know how AI was used in writing the [00:09:00] manuscript. Previously this was left more to the authors putting it in the body of the text, as opposed to in that submission portal for us to see. So that was an improved workflow. We also wanted to reiterate a discussion that people had: should LLMs be authors?

And it was pretty consistent across medical journals that we're going to stick with the policy that LLMs cannot be authors, that authors can use tools, but that authors are ultimately responsible. And then the other thing is for reviewers: we want to make clear that reviewers cannot upload authors' confidential pre-publication manuscripts to a large language model, but they can use large language models to edit their own reviews.

Eddie Qian, MD: Yeah. So I guess I'd be interested. I write papers, I peer review things. Why would a reviewer want to upload a paper to an LLM?

Nitin Seam, MD: Yeah, I mean, you know, there's always a [00:10:00] lot of activation energy for any task. We appreciate that peer reviewers are often doing this after hours or at the end of a busy day, when, you know, it's another job in academic medicine.

So you may want to say, hey, start me off with some ideas: what do you think of this paper? People are using LLMs for all sorts of things. What do I want for dinner? What do you think about this paper? So that's where I think it's something that could come up.

And so I think we really wanted to clarify that.

Eddie Qian, MD: Yeah, I think one of the challenges I can see with this is, you're saying this is part of the submission portal and we're clicking these boxes. It's an honor system to disclose. And this is a quickly evolving field, and a lot of these policies might be seen as a little bit backwards, or maybe even too much.

What are your [00:11:00] comments on this honor system of disclosure? And how often are these policies going to be revisited and reviewed, as this field is growing by leaps and bounds and every day there's something new?

Nitin Seam, MD: Yeah, these are great questions. These are actually things that, again, hadn't been articulated, so this is a great time to walk through this. I think, in terms of the honor system, honestly, when you're asked about conflicts of interest, that's an honor system too. So it's really no different than the other disclosures that authors have. And I think, as you may have heard when ChatGPT was coming out,

it's actually very hard to pick these things up anyway. There's no gold standard for what the AI produced versus what the author produced. Obviously, when you're hearing about students writing their essays with ChatGPT, this is an area that can be challenging. But I think when we talk about being [00:12:00] backwards, or it being too much to disclose,

you know, I think our lens as leaders of peer-reviewed journals is different than, say, a technology company's, or that of an individual who's using AI tools to improve their efficiency and maximize their work, right? As I mentioned before, protecting authors' confidential work and asking for transparent disclosure really are guiding principles.

So, Thank you. Uh, you know, a technology start start up or a busy, uh, uh, you know, uh, uh, uh, individual clearly doesn't have those same goals and such as and we acknowledge that they may see this as too much. Um, but I think to, to, to maintain our goals of protecting author's work and being transparent disclosure for our readers as well.

we felt strongly this was the way to go. And again, it's consistent with the other medical journal committee recommendations and with other journals. But I think your last point, about reviewing and revisiting, is critical, right? Something starts out as new, we [00:13:00] check things for accuracy, bias, and so forth,

and then at some point it just becomes part of our workflow. So part of our recommendation that we thought was a bit unique, compared to what some other journals had done, is that because we have these experts on the publication policy committee, and can add experts in informatics and in particular domains, we can set up a group of experts within the committee for continuous review of our policies. And as there's more understanding of AI tools, maybe there will be a time to say, hey, the policy will be revised. But I think a continuous review process makes more sense than saying we're going to review this every two to five years, or whatever it might be. As you say, it's such a rapidly evolving field,

we're going to have to watch very closely.

Eddie Qian, MD: Yeah, no, that makes a lot of sense. So let's say, as an author, I want to [00:14:00] be honorable, I want to disclose everything. If I picked up a piece of candy from a booth at one of the conferences, I want to disclose that. Then I come across this box that says, hey, did you use AI,

did you use LLMs in the writing of this paper? But where's the cutoff for that? Because as you've mentioned a couple of times, these AI chatbots, ChatGPT, LLMs, are a big part of everyday life now. So I have a real story. I have a colleague who came across a similar checkbox on a journal submission, and they had used an AI function of their word processor that helps them cut down words, because, you know, as much as I like writing

4,000-word [00:15:00] essays, the journals only want 500 words. So we're always working to decrease the number of words in our manuscripts. Does that count? Where should I draw that line?

Nitin Seam, MD: Yeah. So, I mean, these are, again, areas of debate. I think if you're using, you know, Microsoft Word and you're getting the spell checker, that's something you don't need to disclose. What's that?

Eddie Qian, MD: The paper clip.

Nitin Seam, MD: Yeah, yeah, exactly. So you don't have to disclose those sorts of things. But if you are entering it into a chatbot, what I would say is you disclose it: "I entered this into ChatGPT for editing and review of grammar," or whatever you did, and that will get flagged for the editor.

The editor will look at that, and if I see that, I'll say, that's fine, thank you for disclosing that, and we're going to send the paper out for peer review. So I think that's a standard process. I don't think anyone should be worried that some reasonable use of an AI tool that is part of your workflow means your paper will be rejected [00:16:00] or looked at more negatively. But I think, again, as we're all learning about the best uses of these tools, and accuracy and bias and so forth, we acknowledge that in a peer-reviewed journal, which is not, you know, a technology firm, we're going to err on the side of more disclosure.

Eddie Qian, MD: Yeah, no, that makes a lot of sense. So just to highlight that for me, for my colleagues, for all the listeners: if I click that box, just trying to be honest with my disclosures, you as an editor, and your other co-editors and reviewers, aren't going to view that as negative. My paper won't have any worse chance of being accepted because I disclosed that.

Nitin Seam, MD: Yeah. Well, I mean, if you disclose that the chatbot wrote the paper, then that will obviously be a problem. But if you used it for the example you gave, then yeah, the editor will just pass that along. What happens is, when it goes into [00:17:00] the submission portal, the editor sees it first, and the editor decides what to send out for peer review.

And so the editor will vet that. If there's something that's completely out of bounds, the editor won't typically send it for peer review. So that's where it gets flagged. It's seen, and it moves forward. The peer reviewer will see it, but the editor has already said, hey, this is a paper we want you to review,

so go ahead and review it. And so again, transparency is the key for us. But also, you know, a reviewer might say, yeah, I noticed that, and they can put comments in to the editor, right? They can talk about the use of the AI tool. But again, those sorts of minor things are not going to impact the paper.

Eddie Qian, MD: No, that makes a lot of sense. So we've talked a little bit from the reviewer's side, the editor's side, the author's side. I want to change gears a little bit and talk about LLMs and [00:18:00] AI as topics of a paper. This field is ever evolving, ever changing. This is a very exciting field, and people want to use it clinically.

People want to use it as part of their research, or as a topic of research. What would you say to authors who are looking to submit to one of the ATS journals, or any journal, when they're saying, hey, what I want to study is LLMs?

Nitin Seam, MD: Yeah, I think when we talk about what's safe, what's accurate, what's biased, the key is to study it, right?

So I think it's very important to study LLMs, and I think they're a great topic for papers. ATS journals, just like other journals, will be very welcoming of them. And you can see some journals are creating AI spin-off journals, right? Because this is a hot topic. I think, as you said, one of the challenges is that it changes so fast.

So you may do a [00:19:00] study on a prior version of GPT or Gemini or whichever LLM you're using, and then the next version improves. So it's a rapidly evolving field, but we want those papers. We want you to disclose how you used an LLM and be detailed about the version you used when you're writing the methods, and, you know, what were your primary endpoints and what did you find?

I do acknowledge, though, that you then have to get it submitted quickly. Those are the sort of time-sensitive papers that we try to push to peer review faster, to get them out quickly. That's where you see, when research comes out of Google or Microsoft, they typically just preprint it and then move on to their next

version, because again, they are not in the same business we're in. They're continuing to innovate, and we are, by nature, more cautious and wanting to study. So [00:20:00] certainly, the other thing I'd say is, we always encourage, regardless of the topic, if you think something might be a fit for one of the ATS journals, or other journals, and you're not sure:

send a pre-submission inquiry to an editor and say, hey, here's an abstract of a paper I have. Do you think this is something you would be interested in publishing, subject to peer review, of course? And so I'd encourage authors, if they're not sure, to do that. If you get a yes, then you just do the full submission.

If no, then you can move on to another journal.

Eddie Qian, MD: No, that's great. So, taking your editor hat off a little bit: I said in the introduction that you're interested in innovations in medical education, including using artificial intelligence. So without giving away some of your best ideas, what are some of the topics around AI and large language models

that you think should be studied in medicine? What would you be excited to read as a reader?

Nitin Seam, MD: Yeah, so I think [00:21:00] there are so many things right now. I'll give you an example, right? For me, one of the fascinating things is that a lot of the use cases for AI are kind of low-hanging-fruit type things, right?

As we were talking about before, getting that initial activation energy for completing a mundane task: helping draft emails, providing language for learning objectives if you're an educator writing a curriculum, right? And you just need to get this moving. You say, okay, let me see. Oh, that's pretty good, or, I don't like that, and I'll start editing. But it gets you moving forward.

What I find interesting, I think, is there are some clinical uses, especially with education. There are so many uses, but one of the things I find fascinating here is that this is a case where our junior learners are using a new [00:22:00] technology more frequently in their day-to-day workflow than our more senior people.

So one thing I see is, when I'm rounding in the ICU, you know, if we admitted a patient with, say, severe hyponatremia and seizures or something, I have some bedside teaching pearls and a way I want to approach that micro-teaching session before we go to the next patient.

But one of the things I find interesting, and I'm hearing this from others, is that our more junior learners may be asking the LLM while pre-rounding, okay, teach me some pearls about severe hyponatremia with CNS effects. And so what I'll find is that I will be checking that, because at the end of the day, if it's doing a lot of what I'm wanting to teach, then I can spend my time really drilling down on [00:23:00] more particular, expert-level topics, so they can really learn, and it maximizes the learning during the limited time of patient rounds.

And so I think there are so many interesting use cases of LLMs in clinical care and education research, and we need to learn the best use cases and study accuracy and bias compared to whatever our current, quote unquote, gold standards are, and a lot of our gold standards are quite thin. So some of the things I'm considering relate to mechanical ventilation, which you mentioned is an interest of mine.

If you have highly technical, complex skills, how can LLMs help with teaching them and help scale some of that? We think about the common use cases, the low-hanging fruit, but what about the more complex topics? So much of what we do in critical care involves complicated, highly technical skills like vents, ultrasound, radiology,

things like [00:24:00] that. So I think there are some tremendous opportunities in these areas to see how AI can support learning, self-directed learning, personalized learning. Those are areas of my particular interest.

Eddie Qian, MD: So here I was thinking that, the last time I was on rounds, the interns and residents were just so much smarter than I was at that stage of training.

And you're here telling me that maybe it's the LLM they're using. Is that it?

Nitin Seam, MD: Honestly, the residents are much smarter than when I trained. That's a hundred percent true. But I also think they're using tools, and it's great that they're using the tools

that they're more facile with. But then we need to provide value in our education, right? Regurgitating what they're learning from the tools they're using isn't that helpful. So as we really want to teach that next generation, we've got to figure out where we can go to a higher level, if they're already getting that baseline.

Eddie Qian, MD: Yeah.

One of the golden adages of teaching is you've got to meet the learners [00:25:00] where they're at. So yeah, absolutely, I completely agree. So you mentioned using LLMs in teaching on rounds and otherwise. Have you, in your own clinical practice, used them clinically, for helping you feel out different differential diagnoses or getting a grasp on patient care questions?

Nitin Seam, MD: Yeah, I've played around with it in, you know, some of these clinical reasoning cases, and I've read many of these papers. I've been trying to let that data evolve before I use it in my own clinical practice.

Eddie Qian, MD: Yeah, I think I'm there too. I've tried it before. I've seen the power of using AI in those instances, but I suppose it's kind of difficult to use.

And you have to be really committed to it, I think, to make that work, at least as things stand now. And by the time this podcast gets released, maybe things [00:26:00] will be different. It always fascinates me when you see these papers or presentations at conferences where they're talking about, oh,

we put the expert clinician up against ChatGPT. And I think the lay of the land that I've gotten so far is that we're still okay, but it's getting closer and closer.

Nitin Seam, MD: Yeah. And I think, you know, that's where we have to see. What's so fascinating, and I think what you're alluding to, is the exponential growth here compared to prior technologies.

And, you know, what are your prompts, what are your use cases, and how does that work? I think, at the end of the day, and we talk about this in the editorial, you can't bury your head in the sand and avoid technology. We have to figure out what the appropriate uses are, what's safe use, and how it's going to help us. At the end of the day,

what we're all trying to do is provide the best [00:27:00] care we can for patients.

Eddie Qian, MD: Yeah, I mean, we used to make fun of the generation above us because they didn't know how to Google things. So we've just got to avoid being the butt of the joke for the generation below us. Is there anything else that you think the listeners should know about the ATS stance on AI use in writing papers and scientific research?

Nitin Seam, MD: Yeah, I mean, I think the biggest thing I'd say is, one, this is a rapidly evolving field. And part of the editorial is saying we want to partner with authors and reviewers. We want feedback. We want to make life easier, not harder. And I think this is an area where major medical journals also should be collaborating with each other,

talking about the problems they're seeing and the challenges they're facing, and seeing how we can work on them together. And I think it's an honest, transparent process: this is [00:28:00] new, this is changing and rapidly evolving. Even having the expertise to understand some of these papers can be very challenging as you get reviews.

So I think sticking to what we mentioned earlier, we want to protect authors' content, and we want to have transparent disclosure for our readers as we go forward. I think that's the best approach now, but recognizing that some of the things we talk about now may be quickly outdated. As you say, just as using Google became a standard, using a chatbot may become a standard for you going forward.

And as we have more evidence on accuracy, bias, and so forth, I think we'll be able to figure out when the appropriate time to alter our recommendations will be.

Eddie Qian, MD: Yeah. And I have a question for our listeners, a question for the audience: how do you [00:29:00] use AI in your everyday life, or, for clinicians, clinically?

I've found success using it for trip planning and meal planning, but as I already mentioned, I don't typically use it clinically yet. Do you use it in your everyday life?

Nitin Seam, MD: Oh, yeah. I mean, that's what's interesting. It's actually much more powerful than Google, right?

If you say, all right, I'm going to take a trip to Italy, make me an itinerary for, you know, a week's trip or a 10-day trip, then the better you prompt it, the better you do. And the part that's even more powerful is the conversation. You say, well, I liked that, but I didn't like that, and tell me more about doing this.

And in general, as you have that conversation and prompt it in a way that's effective, you can get a lot more out of it than Googling something and clicking on ten different links to find the one you like.

Eddie Qian, MD: You just [00:30:00] have to hope somebody has flown into the same city, has the same 10-day itinerary, and has the same interests, and then try to find that on Google.

There's a lot more flexibility with the AI and the chatbot. But this has been a fantastic conversation. Thank you so much for your time and for joining us today. And thank you to the listeners for joining us for this ATS Breathe Easy episode. Please subscribe and share this episode with your colleagues.

And if you haven't done so, register for the ATS International Conference in San Francisco this May. Go to conference.thoracic.org today. Members get a discount on conference registration, so become a member or renew your membership to take advantage of the savings. And we'll see you next time. Thank you.

Thank you for joining us today. To learn more, visit our website at thoracic.org. Find more ATS Breathe Easy podcasts on Transistor, YouTube, Apple Podcasts, and Spotify. Don't forget to like, comment, and subscribe so you never miss a [00:31:00] show.

© 2025 American Thoracic Society