Predictive AI saves 50 lives a year in two EDs at UC San Diego Health

Part two of our two-part interview with Dr. Karandeep Singh is now available. To read part one, click here.

We spoke with Dr. Karandeep Singh, Chief Health AI Officer and associate CMIO for inpatient care at UC San Diego Health, yesterday in our new series of articles, Chief AI Officers in Healthcare. He explained that the Chief AI Officer must have the authority to oversee all AI in a health system, and that these executives need expertise spanning both the clinical and the artificial intelligence worlds, barring which a compromise is needed. Today we continue the conversation with the health AI chief about where and how UC San Diego Health is finding success with artificial intelligence. We examine an AI project with clinically meaningful ROI and offer advice for executives looking to serve as chief AI officers within their own organizations.

Q. Please talk at a high level about where and how UC San Diego Health is using artificial intelligence today.

A. We're primarily using it in two broad categories today. One is predictive AI, and one is generative AI. Predictive AI typically is used to determine the likelihood of an adverse outcome and to design and implement interventions to mitigate that risk. We have that in place today in all of our UC San Diego Health emergency departments for sepsis. It's something we're in the process of deploying across our hospital wards and ICUs as well. It has been in practice since early 2018, and it's something we have evaluated quite carefully. It was developed by colleagues of mine at UC San Diego Health. One of the key differences between this and other work in this field is that they actually designed a study comparing the use of this model, paired with an intervention that alerts our nursing staff, against usual care, to determine whether it was truly helping patients. What the team found was that this model saves about 50 lives annually across our health system's two emergency departments.
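The alerting workflow described here, where a predictive model's risk score triggers a nursing intervention, can be sketched in a few lines. This is a minimal hypothetical illustration, not UC San Diego Health's actual system: the threshold, the score values and the alert-suppression logic are all assumptions.

```python
# Hypothetical sketch of a threshold-based risk alert. The cutoff and
# scores are illustrative assumptions; real deployments tune and
# validate these carefully.

ALERT_THRESHOLD = 0.8  # assumed cutoff for firing a nursing alert

def should_alert(risk_score: float, already_alerted: bool) -> bool:
    """Fire an alert when predicted risk crosses the threshold,
    suppressing repeat alerts for the same patient."""
    return risk_score >= ALERT_THRESHOLD and not already_alerted

# Example: a patient whose score rises across successive scoring runs.
scores = [0.35, 0.62, 0.85]
alerted = False
fired = []
for s in scores:
    if should_alert(s, alerted):
        alerted = True
        fired.append(s)

print(fired)  # only the first above-threshold score triggers an alert
```

The suppression flag matters in practice: repeated alerts for the same patient are a common source of the alert fatigue that such interventions try to avoid.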
It's meaningful to people, and we're keeping a very close eye on it and looking for additional opportunities to improve it. So that's one example of how we're using predictive AI.

Another use of predictive AI is forecasting. I highlighted in today's meeting one of the use cases from our Mission Control, where we're using a model to forecast our emergency department census. That helps us figure out what to do when we anticipate a busy day tomorrow, or in two or three days. Some of the workflows around that are still being worked out.

The other big category of use cases is generative AI. We're using some of the features of our electronic health record that enable generative AI. When a patient writes a message to their primary care physician, the physician has the option to respond the usual way, starting from a blank slate, or to view an AI-drafted response as a starting point, then edit and send it. If the physician opts to do that, we append a message at the bottom that lets patients know the reply was partially automatically generated, so they know some automated drafting was involved and it wasn't just the clinician. That's an example of one where we found that, surprisingly, it actually lengthens the time it takes to reply to messages. However, the feedback we've received suggests that editing a drafted response is less burdensome than starting from a blank slate. That's one we're still refining, and it's an example of one that's integrated into our EHR. Others have been built internally by us. In some cases it's work that was done in my research lab, but in many cases it's work my colleagues are now looking to put into practice as part of the Jacobs Center for Health Innovation.
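The disclosure step described above, appending a note when a clinician sends an edited AI draft, amounts to a small piece of conditional logic. The sketch below is an illustration only; the wording, function names and flags are assumptions, not the EHR vendor's actual API.

```python
# Illustrative sketch: append a disclosure when the clinician started
# from an AI-generated draft. All names and wording are hypothetical.

DISCLOSURE = (
    "Portions of this message were drafted automatically and were "
    "reviewed and edited by your care team."
)

def finalize_reply(body: str, used_ai_draft: bool) -> str:
    """Return the outgoing message, appending the disclosure only
    when the reply began as an AI draft."""
    if used_ai_draft:
        return body.rstrip() + "\n\n" + DISCLOSURE
    return body

msg = finalize_reply("Your lab results look normal.", used_ai_draft=True)
print(msg.endswith(DISCLOSURE))  # True
```

Keeping the disclosure in one place, rather than asking clinicians to type it, ensures patients get a consistent notice whenever automated drafting was involved.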
One example of that is a generative AI tool that can read clinical notes and abstract quality measures. Quality measure abstraction typically is quite time-consuming. Part of the problem is that doing it requires a lot of people. But more important, we're only able to review a very small set of patient charts precisely because it's so time-consuming, even though we have access to every chart in the electronic health record. What we've found so far is that when we use generative AI to perform some of these chart reviews and abstract quality measures, we can get more than 90% accuracy on whether patients met a given quality measure or not. There's still some room for improvement there. But the other important aspect is that we can review many more cases. Because we can run this on thousands of patients, we're not limited to a small number per month. It gives us a more holistic view into our quality of care, beyond what we could previously achieve despite throwing a lot of tools and a lot of time at trying to do this well. Those are the two broad categories, predictive AI and generative AI, and there are many more projects in progress or already implemented.

Q. You've discussed a number of projects you have going, and this story is about what it's like to work as a chief AI officer in healthcare. Could you name one project and describe how you, as the Chief Health AI Officer, oversaw it, and what your responsibilities were?

A. I can talk about our Mission Control forecasting model. When I arrived at UC San Diego Health, an initial version of this was already in place. I've been here for ten months, so some of the things I'm working on are still on the runway, and some are just starting to be implemented.
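Evaluating such a tool against manual review, as in the 90%-accuracy figure above, comes down to measuring agreement on met/not-met calls. Here is a minimal sketch with made-up labels; the actual evaluation would also look at sensitivity, specificity and per-measure breakdowns.

```python
# Minimal sketch of scoring AI chart abstraction against a human
# abstractor. The labels below are illustrative data, not real results.

def accuracy(ai_labels, manual_labels):
    """Fraction of charts where the AI's met/not-met call agrees
    with the human abstractor's."""
    assert len(ai_labels) == len(manual_labels)
    agree = sum(a == m for a, m in zip(ai_labels, manual_labels))
    return agree / len(ai_labels)

# True = quality measure met, False = not met (hypothetical charts)
ai     = [True, True, False, True, False, True, True,  False, True, True]
manual = [True, True, False, True, False, True, False, False, True, True]
print(accuracy(ai, manual))  # 0.9
```

Raw agreement is only a starting point: because "measure met" is often the majority class, a real evaluation would also check how the tool performs on the rarer not-met charts.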
However, I have a role in this model because, despite it working reasonably well, there were times when it would predict a not-so-busy day, and then tomorrow would roll around and it was much busier. Anytime you have a forecasting model that predicts tomorrow's volumes using today's data, and it's really far off, the people who use that tool start to lose faith in it, as I would, too. When this happened, I said, "We can't just tweak things blindly. What is the model doing here, and what assumptions is it making, that cause tomorrow's prediction to be inaccurate?" I sat down with our data scientists and we went through that model's code line by line. What that uncovered was that crucial predictors we believed belonged in the model had previously been removed because they had been judged ineffective. So we asked, "Well, why were they not helpful?" We did a lot of digging and found that some of those predictors had been judged ineffective because they were actually capturing the wrong data: according to a predictor's description, it was capturing something entirely different from what the code was actually doing. Doing that over the course of about three to five months, we went from version 2 of our model, which was in place when I first got here, to version 5.1, which went live last month. What has been the result? Our predictions today are much better than they were in January and February. And that helps us start to build workflows that rely on the model. There is little interest in attaching any workflow to a model that is inaccurate.
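The kind of before-and-after check implied above, verifying that a model revision actually improved forecasts, can be made concrete by comparing each version's error against the observed census. All numbers below are illustrative, and mean absolute error is one common choice among several.

```python
# Sketch of comparing forecast accuracy across model versions using
# mean absolute error (MAE). Census figures here are invented.

def mean_absolute_error(predicted, actual):
    """Average absolute gap between forecast and observed values."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

actual_census = [112, 130, 98, 145, 120]
v2_forecast   = [95, 105, 120, 110, 100]   # older model: often far off
v51_forecast  = [110, 127, 101, 140, 118]  # revised model: much closer

mae_v2  = mean_absolute_error(v2_forecast, actual_census)
mae_v51 = mean_absolute_error(v51_forecast, actual_census)
print(mae_v2, mae_v51)  # the revision shows a clearly lower error
```

Tracking a metric like this over time, rather than relying on anecdotes about busy days, is what lets users regain trust in a forecast after a bad stretch.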
However, as the model becomes more accurate, people begin to notice that when it predicts tomorrow will be a busy day, that usually turns out to be the case. That now lets us think about all kinds of things we could do to make our healthcare delivery and access to care a bit more efficient.

What do I do there? I work with the co-directors of our Center for Health Innovation, our data scientists and some of our PhD students to figure out what is happening on the data side, what is happening in our AI modeling code, how we go live with new models, and what is happening in our version control, and then I make sure our Mission Control staff is informed of what is expected to change, and what actually changes, as we deploy those new models. We develop model cards that we distribute, and then we make sure that information is communicated out to a broader set of health leaders at our Health AI Committee, which is our AI governing committee for the health system. Really, it requires being involved in everything, from how we're gathering data to how the health system uses it clinically. I can't do that alone, in any way. As you can see, each of those steps requires some level of partnership with someone who has domain knowledge and expertise. But what I need to do is make sure that when a clinician notices a problem, we can trace what upstream processes might be causing it so we can address it.

Q. Please offer a couple of tips for executives looking to become a chief AI officer at a hospital or health system.

A. One thing to remember is that you must truly understand how two distinct worlds relate to one another. If you look online, there is a lot of chatter and discussion about AI. There is a lot of hype surrounding AI. Many people are simply sharing their experiences with AI, which is useful to follow.
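The model cards mentioned above are structured summaries distributed with each model update. The record below is a hedged sketch based on common model-card practice; the fields and contents are assumptions, not UC San Diego Health's actual template.

```python
# Hypothetical minimal "model card" record, loosely following common
# model-card practice. Fields and example values are assumptions.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    inputs: list
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    name="Mission Control census forecast",
    version="5.1",
    intended_use="Next-day ED census forecasting to support staffing decisions",
    inputs=["recent daily census", "day of week", "seasonal trend"],
    known_limitations=["accuracy degrades during atypical surge events"],
)
print(card.version)  # 5.1
```

Because the card is plain structured data, the same record can be rendered for frontline staff and forwarded to a governance committee without rewriting it.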
It's also important to read papers in the AI space and understand its real limitations. You will essentially be the organization's AI domain expert, so when someone says, "We need to make sure we monitor this model because it might cause problems," you should be able to identify the broad spectrum of problems that might occur, as well as the relevant historical examples of problems caused by health AI. It's a little challenging to go from being a healthcare administrative leader to a chief health AI officer unless you already have a lot of health AI knowledge, or are willing to work in that field to gain knowledge and build that community. Similarly, there are challenges for people who know the health AI side really well but don't speak the language of healthcare or medicine, and can't translate what they know into something digestible by the rest of healthcare leadership. How you develop into the role will depend a little bit on which of those two worlds you're coming from. If you're coming from the healthcare side, you've got to make sure you build domain expertise in AI, so that when you say you're accountable, you actually are accountable. And if you're coming from the AI side, you need to understand how the healthcare system works, so that as you work with health leaders you're not just conveying your excitement about a specific method, but saying, "With this new method, here's the thing you can't do today that we could do. Here's how much we would need to invest, and here's what the return on investment would be." There really are a number of different skill sets you must have, but thankfully there are many ways to be strong in one area and not necessarily across the entire spectrum. Different health systems will approach this role with slightly different perspectives.
Payers and other organizations are going to look at this role a little bit differently, and that's fine. You shouldn't hire for this position just because you believe you're missing out. You should hire for this role because you already are using AI, or you want to use it, and you want to make sure that at the end of the day someone is accountable for how you use it and how you don't use it.

Click here to watch a video of the interview with additional content not found in this article.

Follow Bill Siwicki's HIT coverage on LinkedIn. Email him: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.
