
How AI Can Improve Healthcare Delivery with Mozzi Etemadi, MD, PhD

Mozziyar “Mozzi” Etemadi, MD, PhD, is supporting the transformation of healthcare delivery and patient care at Northwestern Medicine by bringing engineers into direct contact with clinical providers. In this episode, he talks about the explosion of artificial intelligence in healthcare in recent years and how Northwestern Medicine is using this technology to improve healthcare delivery and patient care.


“I think that as some of these tools like ChatGPT become more commonplace, as we learn more how they work, how to create them, we will create somewhat simpler versions of them, I think, that will work well in healthcare because they'll be a human in the loop and they'll be clear interpretability in what these things are saying. And trust will be built gradually.” — Mozziyar Etemadi, MD, PhD 

Episode Notes 

Etemadi and his team have made significant progress in streamlining information flow in healthcare using the latest AI technologies, replacing manual workflows with computerized systems, including language processing tools.  

  • Early AI collaborations between Northwestern Medicine and Google led to the creation of algorithms that could identify lung and breast cancers, sometimes years before human detection. Since then, Northwestern Medicine and other health systems have started to build their own tools, as it's become easier to conduct AI work without the need for partnerships with large tech companies.  
  • Ahead of peer institutions, he says, Northwestern has long recognized the value of healthcare data and, since the 1980s, has been saving data such as electrocardiograms (ECGs) recorded in digital format, under the assumption it would one day be useful. Such data is now central to the creation of AI tools that use language processing. 
  • At Northwestern Medicine, AI tools are now being developed to further streamline clinical practice. With the help of nurses and other professionals, Etemadi and his team can create data sets from an enormous number of medical reports to train natural language processing tools (like those underlying ChatGPT).  
  • AI tools can also help fill the gap when it comes to human oversight in clinical practice. For example, a common lapse in outpatient care is failing to schedule necessary follow-up or further evaluation after a test, because doing so requires interpreting textual reports. To address this, Etemadi and his team created an AI tool that scans test results and assigns next steps to ensure patient follow-up. 
  • A three-way collaboration between Google, the Gates Foundation, and Northwestern Medicine aims to develop an AI tool for ultrasound imaging. While current AI tools can diagnose from perfect images, the project intends to facilitate the development of AI tools that can diagnose with imperfect images. The hope is that such technologies will be part of an open collaboration, able to potentially help people in low and middle income countries as well.  
  • AI has the potential to revolutionize all aspects of healthcare, not just direct patient care. For example, it can optimize numerous behind-the-scenes processes in a hospital system, such as predicting supply shortages or estimating surgery durations. Etemadi sees AI as able to make healthcare as safe and predictable as aviation or civil engineering.  
  • Northwestern Medicine has developed a multidisciplinary committee called AI Catalyst dedicated to discussing future use of augmented intelligence. The committee incorporates problem-solving feedback from across the entire healthcare system in support of a wide range of ideas and suggestions for AI use in healthcare.  
  • For anyone interested in the field of AI, Etemadi suggests learning with resources closest to one’s interests or profession. Clinicians should also not feel intimidated by the notion that only large conglomerates can develop AI tools. With some technical knowledge or collaboration, clinicians can solve problems independently using AI.  

[00:00:00] Erin Spain, MS: This is Breakthroughs, a podcast from Northwestern University Feinberg School of Medicine. I'm Erin Spain, host of the show. Today's guest is a leader in bringing artificial intelligence and other innovative tools from the world of engineering into medicine. Dr. Mozzi Etemadi, an assistant professor of anesthesiology here at Feinberg, was a guest on this show in 2019 to talk about a collaboration between Northwestern Medicine and Google that uses deep learning systems to predict lung cancer. Much has changed in the world of AI since that time. So we have him back on the show to discuss current projects. Welcome, Dr. Etemadi. 

[00:00:56] Mozzi Etemadi, MD, PhD: Thank you. 

[00:00:56] Erin Spain, MS: Take us all the way back to when you first came to Northwestern in 2016. What were your goals for using AI in medicine then and what has developed over time? 

[00:01:06] Mozzi Etemadi, MD, PhD: Wow. It's almost impossible to remember back that far, but the goal was really simple. It was, let's bring engineers to the bedside. The idea was, if we bring engineers closer to the action and have them talk directly with clinical providers, maybe we can solve some of these problems. So fast forward a couple years now to 2019, when we last spoke. We had already solved some pretty basic information flow problems. So for example, nurses in the ICU that we were working in would, like, write stuff down on a whiteboard and then go from patient room to patient room and write some more stuff down on a whiteboard. And these very kind of manual, heavy workflows were commonplace. So what we had done up to that point was just, like, combine the data sources together. And now at least, you know, we can use a computer to do some of these manual workflows a little bit more quickly. So around that time, all of a sudden, AI in healthcare was exploding, and it just so happened that the thing that you need to create these AI algorithms was the thing that we were using to do all these workflows, which is data, data, data, data. At that time, we started some very exciting collaborations with Google, and the two major projects we worked on with them were creating an algorithm that can identify lung cancer on a lung cancer CT scan, you know, in some cases years before a human can identify it, and also identifying breast cancer on a mammogram as well. Those two projects, it was announced this year, are kind of in their final stages of regulatory approval and partnership and things like that, which is awesome. We're gonna actually see these things out in patient care. But a lot has changed since then. So I think a couple things. One is, it's actually become a lot easier to do AI work without the partnership of these massive tech companies. So our group and others at Northwestern and other health systems have started to build our own tools. 
The second thing is there have been more and more tools out there in clinical practice, so we've kind of learned as physicians how to interact with these tools. They're like part of our daily life now. So we've learned to deal with them. 

[00:02:51] Erin Spain, MS: That's really exciting. Tell me about the role that Google or Google Health had played and still plays in your research. 

[00:02:58] Mozzi Etemadi, MD, PhD: They are a really great partner to work with because of a couple reasons. One is, a lot of folks at Google were the people that actually developed these techniques for the first time. These are very powerful tools. So who better to understand the tool than the person that made the tool to begin with? Another thing is these tools require quite a bit of computational lift to get working. Now, that's getting a little bit better over time, but still to do these things the best way you need a lot of computers. And we're talking, you know, many hundreds of times more than a university could have access to basically. And the last is not only do they build the tools, not only do they know how to get them to run on these computers, but they've actually deployed these tools live also. I mean, you think about auto complete in your Gmail or all these other recommendation systems. Getting these things to run in the real world has its whole host of challenges and recipes and tweaks that are required that we also don't have experience with. So really just end-to-end the whole spectrum, being able to work with tech companies I think is a super important part of this that's not going away anytime soon. 

[00:03:53] Erin Spain, MS: Let's take it back a little bit and just define artificial intelligence and machine learning. How do you describe it to patients or other healthcare providers that you work with? 

[00:04:03] Mozzi Etemadi, MD, PhD: I think the terminology is slightly different depending on who you're talking to. When I talk to a patient about this stuff, I think, at the end of the day, we have many diagnostic tools at our disposal. We have x-rays, we have CAT scans, we have lab tests. I think I try to really treat it like one of those, because it is not this be-all, end-all being that can make tons of decisions. The tools that are available to us for patient care are very purpose-built, just like a blood test is today. But I think more broadly, when we're talking to other providers or broader audiences, students, et cetera, I think it is important to frame the discussion of AI tools in the following way: if you think about a large dataset that we collect, there's inputs and there's outputs. The outputs are the thing we want the tool to produce, and the inputs are the thing that it's gonna use to make its decision. So any AI tool is just a collection of inputs and outputs that you've shown the computer, and then for a new input that it's never seen before, it's gonna guess what you told it it should do based on the previous examples. So the simplest, most basic example I always give is I have a lecture where I show the students, as if I'm showing the computer, different, uh, playing cards. And you show ones that are hearts and you show ones that are clubs, for example. Anytime I show a heart, I say, okay, computer, I want you to tell me that this is answer A, and when you show the clubs, I want it to say this is answer B. And you go over and over and you show it a bunch of different examples. And eventually you would hope that if I show it a heart in the future that was not in the original data set, it's gonna guess an A, essentially; you show it a club, it's gonna guess a B. The key is, at no point in time when you're developing these AI systems do you actually tell the computer anything. You're just showing it examples. 
So I'm not using the word, heart or club in this analogy. I'm not even saying, you know, curvy lines make a heart and more curvy lines make a club. Like we're not, none of that is going into this. That type of detailed decision making and detailed programming is before AI, pre AI, that's how we would build algorithms. Post AI, you're just showing a bunch of examples and you're giving it a bunch of answers. So that's how I explain AI in the modern world. 
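The show-don't-tell idea described above can be sketched in a few lines of code. This is a toy nearest-neighbor illustration, not any model discussed in the episode; the numeric features standing in for card images are invented for the example.

```python
# Toy example-based classifier: we never program rules about hearts or
# clubs; we only store labeled examples and let the program guess the
# label of a new input from the stored example it most resembles.

def train(examples):
    """Memorize (features, label) pairs; no rules are written in."""
    return list(examples)

def predict(model, features):
    """Guess the label of an unseen input via its nearest stored example."""
    def distance(example):
        stored, _ = example
        return sum((a - b) ** 2 for a, b in zip(stored, features))
    _, label = min(model, key=distance)
    return label

# Hypothetical features: "hearts" cluster near (1, 1), "clubs" near (5, 5).
examples = [((1.0, 1.2), "A"), ((0.8, 1.1), "A"),
            ((5.0, 4.9), "B"), ((5.2, 5.1), "B")]
model = train(examples)
print(predict(model, (0.9, 1.0)))  # a new "heart"-like input: prints A
```

At no point does the code mention hearts or clubs; the behavior comes entirely from the labeled examples it was shown.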

[00:06:03] Erin Spain, MS: And you're using these tools in a variety of different projects at Northwestern Medicine. Tell me about some of the diseases or conditions that the AI tools are being used right now. 

[00:06:14] Mozzi Etemadi, MD, PhD: The cool thing is we were already so close to so many patient care challenges and health system delivery challenges because of all of the workflows that we were already kind of working on. These tools become readily accessible in a variety of ways. So I'll give you some examples. So, a common problem in taking care of patients as outpatients, particularly as, like, a large population of outpatients, is that they're getting tests done all the time. And the results of these tests often are not in the form of a number, but they're in the form of some type of text. So a radiology report is the most common example of this. But there can be biopsy results and other test results that basically come back in the form of an essay, if you will. Buried within these essays about patients are things that need further testing or further follow-up of some kind. Somewhere around 5 percent of all of these kinds of text results contain something within them that needs to be followed up on. Usually it's repeat the same test in 6 months or in 12 months, something like that. So 5 percent of all tests of this type, basically; huge numbers we're talking about here. Now what's scary is actually around a third of these never actually get followed up on. There's a variety of reasons, but I think chief among them is that these are very nuanced things written in these text documents. So it's incumbent upon primary care providers to go through and carefully read through all of these things and find the stuff that needs to be followed up on. So this is a perfect example of where AI can help us. We have plenty of examples of these text documents. We can train a group of nurses, which is what we did, to come up with examples within that set of where there is a follow-up that's required and where there's not a follow-up that's required. 

We then create the computer tool, which is supposed to repeat this activity of the humans, but without the human in the future, essentially. So we created a tool that reads these documents and says, "okay, do you need to follow up on something?" And actually, if you do, what that thing is: is it another CAT scan? Is it something else, essentially? So, we created this tool, we actually used it. It's still live now. We've been using it for several years now, and it's actually very much helped our patients step through this process of finding these follow-ups and making sure they get done. 
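To show the input/output shape of the task just described, here is a deliberately naive keyword sketch. The production system is a trained NLP model, not a keyword matcher, and these patterns and action names are hypothetical.

```python
import re

# Naive triage sketch: scan a free-text report and suggest a next step.
# The real tool learns this from nurse-labeled examples; the regular
# expressions below are invented stand-ins for illustration only.
FOLLOWUP_PATTERNS = {
    "repeat CT": re.compile(r"repeat\s+CT|follow-?up\s+(CT|scan)", re.I),
    "biopsy": re.compile(r"recommend.*biopsy", re.I),
}

def triage(report_text):
    """Return a suggested next step, or None if no follow-up is flagged."""
    for action, pattern in FOLLOWUP_PATTERNS.items():
        if pattern.search(report_text):
            return action
    return None

print(triage("Nodule noted; recommend repeat CT in 6 months."))  # repeat CT
print(triage("No acute findings."))  # None
```

The point of the sketch is the interface, text in and action out, which is exactly what lets the output feed a scheduling workflow rather than sit unread in a report.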

[00:08:22] Erin Spain, MS: So you detailed this project last year, in 2022, in the New England Journal of Medicine's Catalyst Innovations in Care Delivery, and you're actually using this natural language processing, which is something we've heard about with ChatGPT recently. And you ran how many different reports through this tool? 

[00:08:41] Mozzi Etemadi, MD, PhD: So, this is going to sound like a big number, but I'm gonna preface it by saying that we now have ways of reducing this number quite a bit. So this is what's so exciting about AI, is that the whole field basically changes entirely within six months. So you have to reinvent yourself constantly. So, back when we did this, this was our first time kind of going at it completely on our own without the help of a large company. We relied on a lot of open source tools and a lot of knowledge out there. And the prevailing knowledge at the time was, like, you need a lot of these. I don't remember the exact number, but it was close to a million, basically, which is way overkill. Way overkill. But hey, we had it, it was easy to get. And that's, again, one of the joys of doing this as a health system, is the problems that you may have are totally different than the problems that you would have if you're a tech company, and vice versa. So, we can reduce this quite a bit. So this is some of the more exciting, more recent innovations that have led to things like ChatGPT and Stable Diffusion and all the stuff you see: this concept of self-supervised learning. So what do I mean by that? Well, again, back in the day, a year and a half ago, we had to get nurses to go through and basically read all of these one million things and say, this is bad, this is good, this is bad, this is good. And then that's your data set. Now we can just do a different task that doesn't require human input. So what do I mean by that? You can actually take these text documents and, instead of having a human look at it, you can just randomly get rid of words in the document. So let's say there's a sentence or two; you just, like, get rid of the third word and the seventh word. You can do this at random. 
The computer can do this all on its own, and then the computer, instead of trying to predict the thing that the nurse wants it to predict, just predicts those words that were missing, basically. So, okay, is that useful? Well, by itself, no, that's not useful. But when you do that process, if you then follow it with the process that I just described, with the nurses going through and doing the stuff, you can get the number of reports where you need the nurses to look at them down from a million to, you know, 10,000, or maybe even less, a thousand, depending on what it is. So you've kind of traded off the task. You make an easier task that you don't need human supervision for, and then it can do the harder task with way less human input. 
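The masking step described here can be sketched as follows. This is a toy illustration of how self-supervised targets are created from text alone, not the actual Northwestern pipeline; the sample report sentence is invented.

```python
import random

# Self-supervised target creation: randomly hide words in a report and
# make "predict the hidden words" the training task. The text itself
# supplies the answers, so no nurse labeling is needed at this stage.

def mask_words(text, fraction=0.15, seed=0):
    """Replace a random fraction of words with [MASK]; return the masked
    text plus the hidden words, which become the prediction targets."""
    rng = random.Random(seed)
    words = text.split()
    n_hidden = max(1, int(len(words) * fraction))
    positions = sorted(rng.sample(range(len(words)), n_hidden))
    targets = [words[i] for i in positions]
    for i in positions:
        words[i] = "[MASK]"
    return " ".join(words), targets

report = "Small nodule in the right lower lobe, recommend repeat CT in six months"
masked, targets = mask_words(report)
print(masked)
print(targets)
```

A model pretrained on millions of such automatically generated puzzles can then be fine-tuned on the much smaller nurse-labeled set.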

[00:10:37] Erin Spain, MS: Because the useful part of this is then you can sort of ping the physician or nurse and let them know, "Hey look at this report." And that's what's happening with the tool you built right? 

[00:10:48] Mozzi Etemadi, MD, PhD: Yeah. So you bring up a whole other relevant concept here, which is, let's say you have this amazing tool and it works perfectly, how do you actually use it in practice? So this has nothing to do with AI. This is just good old healthcare workflows, and healthcare workflows are not great most of the time. There's systems I've seen that have this amazing AI output, or this amazing old-school type of algorithm output, and the result is ending up in a fax machine somewhere in the doctor's office. Like, that's not helpful. So from the very beginning, we focused on the workflow aspect of this, and quite honestly, it took way longer to do that than it did to make the AI tool. 

[00:11:22] Erin Spain, MS: And the whole goal of this, again, is to optimize patient care, right? 

[00:11:26] Mozzi Etemadi, MD, PhD: Absolutely, patient care is a big part of it, but it's every aspect of delivering healthcare. So there's a lot of things that happen behind the scenes in a hospital system, you know, predicting if you're gonna run out of something, or how long a surgery is gonna take. Like there's all these little things that are tangentially related to patient care, that, again, having these tools in place can really optimize. And ultimately the goal here is I think, we don't really think about when we get on an airplane or when you get into an elevator. It's not, you're not like worried about what's gonna happen basically. But in healthcare, there's still a lot of uncertainty. There's a lot of questions that are raised all the time as part of our daily practice and things that are missed all the time, that again, we've normalized as being like, "oh, this is okay, 5 percent of these get missed, or whatever." Very common in healthcare. I think AI is what's allowing us to finally take that final step to get these things as safe and as predictable as they are in aviation or civil engineering. 

[00:12:13] Erin Spain, MS: And you're hoping to lead a lot of these efforts through your role at Northwestern Medicine as Medical Director of Advanced Technologies, and then you're also publishing about what you're finding through your role at Feinberg as a research professor. Tell me about this role that you have and how unique is it? 

[00:12:29] Mozzi Etemadi, MD, PhD: There's a couple of things that are unique to Northwestern and, I think, unique to my role. So one is that Northwestern, I think way earlier than other peer academic institutions, recognized the value of healthcare data and that it would be useful someday. So for example, we have electrocardiograms, ECGs, recorded in digital format. We were one of the first institutions to record this. We've been doing this since 1980; that's six years before I was born, basically. Same goes for medical images and other documentation. Northwestern just has this history of basically recording everything and saving everything for downstream use. So that's one piece, I think, that's unique. And the other one is we're really good at solving problems. As a health system, we have this very formal structure in place for solving these problems. We get user feedback, we do user studies, we talk to a bunch of different folks that are involved in whatever problem you're trying to solve. And we really get at the core of solving it, with metrics to basically quantitatively monitor this moving forward. So that's where my role fits in: we have this amazing cadre of problem solvers across the health system. There's, for example, dozens of software developers in IT that work for healthcare IT. There's project managers that work for a different part of the hospital, et cetera, et cetera, nurses, clinical folks, et cetera. So my role is to basically wear the technology hat and explain the clinical part to the technologists, and also wear the clinical hat and explain the technology to the clinical folks as well. And there's no better place to do it than here, because we can actually solve some of these problems with the tools and the people that we have. 

[00:13:52] Erin Spain, MS: And this knowledge doesn't just stay at Northwestern. You're also involved in other studies that could potentially help people in low and middle income countries as well using AI. Tell me about this collaboration with Google and fetal ultrasounds. What's going on with that project? 

[00:14:08] Mozzi Etemadi, MD, PhD: Yeah, so this is actually a three-way collaboration between Google, the Gates Foundation, as well as us. And really the fundamental principle of this is for this to be an open collaboration with anyone in the world, basically. So, people get ultrasounds taken for all kinds of reasons, so I think we're gonna focus on kind of the fetal use case here, but thinking to the future, I think ultrasound in general is becoming like the stethoscope of the future. It's a relatively low cost, portable tool that can be used to make a lot of different diagnoses. That's the good news: ultrasound is getting less expensive and can help a lot of people. The bad news is it's actually really tricky to make diagnoses using ultrasound, and there's two totally different reasons for it. The first reason is kind of the more familiar one, which is that you need a skilled person to look at the image. But what's not discussed as often, which is unique to ultrasound, is you have to actually get the image. So if you're getting an x-ray or a CAT scan, I don't wanna say it's easy, but the machine basically does a lot of the work. While there is a good amount of detail that goes into doing that, it pales in comparison to ultrasound, where you have to take the probe and really move it all over the area of the body of interest and angulate it in a certain way and have the patient breathe in, breathe out. You may have to apply more gel to the ultrasound probe. It's a much more involved and complicated process, and in your head, you have to know, "Okay, I got that perfect picture. Now's the time to save the picture." Or, with ultrasound, you can save, like, a short video that's a couple seconds long. Knowing when to do that is, I would say, much harder than making the diagnosis off of the image. Once you have that perfect picture, making the diagnosis is relatively more straightforward and also is something that AI tools are already doing great on CTs and x-rays. 
We would like to create an AI tool that can help capture these perfect pictures, for another AI tool to then do the diagnosis. But we're not saving the imperfect shots, essentially. So what this project is all about is creating a completely open source database where we save these amateur ultrasounds together with the label, in this case the gold standard, which is that perfect shot, or whatever outcome you're looking for from this imperfect amateur ultrasound. So the idea now, and this is focusing first on fetal medicine, is that we consent patients to have their fetal ultrasounds, in addition to the ones that they normally get as part of patient care. We can actually just take some amateur ultrasounds. In some cases the moms will take it themselves. In some cases, one of our folks will simulate being an amateur and then take the image, and then all of this gets de-identified and uploaded into this database that's, you know, open to the world to access and use for their research. 

[00:16:28] Erin Spain, MS: And these are women who are at Prentice Women's Hospital right now taking part? 

[00:16:34] Mozzi Etemadi, MD, PhD: The database right now is being fed from us at Prentice. But you know, in just the last couple weeks and months, we've gotten some interest, and again, we're exploring this, from other places around the world to potentially contribute to this database. There is a rich history of open databases for medical research, and a lot of these now are used for AI. The most well known are ones coming out of Harvard and MIT, like PhysioNet. There's a lot of information in these databases, but again, these fetal ultrasounds, or ultrasounds in general, are not quite there, because these amateur ultrasounds are not being explicitly recorded.  

[00:17:06] Erin Spain, MS: And so the end goal would be in low and middle income countries, people could use a tool that could then actually give them a more accurate look at their amateur ultrasounds. Is that right? 

[00:17:16] Mozzi Etemadi, MD, PhD: Exactly, so, in the best case, you have an ultrasound probe available, whether that's one shared for a small community or someone comes around every so often, and this person does not have to be a skilled provider. It could be the patient themself or a low skilled person, and they just put the ultrasound probe in the general direction of where the problem is. And then the AI that we would build using this data, that we don't have yet, would tell you where to move the probe to get that perfect picture essentially. And then it would also tell you, okay here's what to do next. Go get further care. Or, oh, everything's fine. Or, you know, you need to take that ultrasound one more time. Something like that. 

[00:17:48] Erin Spain, MS: There is still this problem with trust that a lot of people have with these tools, and we know with ChatGPT right now, a lot of what you see may not be accurate, but is it different with these tools that you're using with AI and healthcare? 

[00:18:01] Mozzi Etemadi, MD, PhD: There's a lot of stuff here with having folks trust these tools more. So I think the short answer is it's just so early in the process that I don't think anybody should, like, fully trust any of these tools. That's why we have humans in the loop. That's why we have monitoring and other things in place. So taking ChatGPT for a second, it really is just predicting kind of the next couple words in whatever sentence is happening, and it's supposed to do so in a very broad way, so it's harder to trust because there's so much more that it can do. So we've almost kind of created a more difficult situation than we need to. For healthcare, you know, I don't need to be able to write an essay in the style of Shakespeare and diagnose breast cancer. So, I think that as some of these tools like ChatGPT become more commonplace, as we learn more how they work, how to create them, we will create somewhat simpler versions of them, I think, that will work well in healthcare because they'll be a human in the loop and they'll be clear interpretability in what these things are saying. And trust will be built gradually. 

[00:18:54] Erin Spain, MS: And at Northwestern Medicine, there's now a multidisciplinary committee that's just dedicated to talking about how are we going to use augmented intelligence. It's called AI Catalyst. So tell me about that and how a committee like that helps you with the work that you're doing. 

[00:19:10] Mozzi Etemadi, MD, PhD: Absolutely, as a member of that committee, I get to sit in these meetings and hear about all the cool ideas coming through. So I think this also hints at the question from earlier about trust. Fundamentally, we want to provide the best care to our patients. That's really what this is all about. And we can do so much as humans and these tools, when they're working appropriately, when they're appropriately aligned with what we want them to do, can just make us do even better than we were before. So this committee is exciting because it allows folks from all over the health system, and the emphasis really is on all over the health system. We don't want things just coming from, you know, the academic folks like myself that do this every day and do research in this area. Like we want the person that maybe has been nursing in one of our regional hospitals, 200 miles away from here that has a lot of real world experience. We want them involved in telling us the problems and pain points as well. So folks from all over the health system come, and they can either suggest ideas that they have or they can bring problems that they've witnessed that could potentially have an AI solution or, you know, maybe they heard about a company that seemed interesting and had a product; they want us to come vet it. I think it's really meant to be a problem solving committee formed by a really diverse set of interests across the entire system, aimed at, again, providing the best possible care for our patients using the best technologies. 

[00:20:23] Erin Spain, MS: What would you say to folks listening, maybe they're a graduate student, maybe they're a physician, but they wanna learn more about how to use these tools in either their practice or in their research, what would you say? 

[00:20:35] Mozzi Etemadi, MD, PhD: That's a great question. So the most important thing I tell anyone joining this field of AI is, get out there and learn whatever it is that's closest to you. There's so much out there. I think trying to kind of over-plan or think through what resources to use is just overwhelming. So if you are an engineering person, you know, check out some engineering resources that are around AI. There's so many open source classes, open source code bases, open source lectures. Some of my personal favorites: fast.ai is this totally free online course about AI that goes into amazing detail and teaches engineering-minded folks how to do this stuff. On the clinical side too, I think there's a little bit of a mentality on the clinical side that there are these formal companies that build these AI tools and that you have to be part of this, like, big AI conglomerate in order to do this stuff. That couldn't be further from the truth. I mean, a clinical person with a little bit of technical background, or partnering with someone, can absolutely solve these problems by themselves. No longer is it the rocket science that it was five years ago, basically. So for clinical folks, I would say just learn as much as you can to be able to, at the very least, explain this stuff to your patients, but also, you know, to be able to communicate with some exciting engineering folks that wanna work on this with you. We cannot solve this by ourselves. I think that's the other thing AI has exposed: really the need for collaboration across the entire spectrum. 

[00:21:52] Erin Spain, MS: Well, thank you so much for coming on the show and giving us an update on all the things that have happened and people can go back and listen to the episode from 2019. There was still a lot of good information there, but so much has changed in that time. So thank you again for coming back and giving us an update on where things stand. 

[00:22:06] Mozzi Etemadi, MD, PhD: Thank you. 

[00:22:08] Erin Spain, MS: Thanks for listening, and be sure to subscribe to this show on Apple Podcasts or wherever you listen to podcasts and rate and review us. Also for medical professionals, this episode of Breakthroughs is available for CME credit. Go to our website, and search CME. 

Continuing Medical Education Credit

Physicians who listen to this podcast may claim continuing medical education credit after listening to an episode of this program.

Target Audience

Academic/Research, Multiple specialties

Learning Objectives

At the conclusion of this activity, participants will be able to:

  1. Identify the research interests and initiatives of Feinberg faculty.
  2. Discuss new updates in clinical and translational research.

Accreditation Statement

The Northwestern University Feinberg School of Medicine is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians.

Credit Designation Statement

The Northwestern University Feinberg School of Medicine designates this Enduring Material for a maximum of 0.25 AMA PRA Category 1 Credit(s)™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

American Board of Surgery Continuous Certification Program

Successful completion of this CME activity enables the learner to earn credit toward the CME requirement(s) of the American Board of Surgery’s Continuous Certification program. It is the CME activity provider's responsibility to submit learner completion information to ACCME for the purpose of granting ABS credit.

All the relevant financial relationships for these individuals have been mitigated.

Disclosure Statement

Mozziyar Etemadi, MD, PhD, has disclosed membership on an advisory committee or review panel for Cardiosense, Inc. Content reviewer Timothy Loftus, MD, MBA, has nothing to disclose. Course director, Robert Rosa, MD, has nothing to disclose. Planning committee member, Erin Spain, has nothing to disclose. Feinberg School of Medicine's CME Leadership and Staff have nothing to disclose: Clara J. Schroedl, MD, Medical Director of CME, Sheryl Corey, Manager of CME, Allison McCollum, Senior Program Coordinator, Katie Daley, Senior Program Coordinator, and Rhea Alexis Banks, Administrative Assistant 2.
