Encrypted Ambition: Where Ambition Meets Encryption - Inside The Minds Of Disruptors.

The Future of Medicine: How Robotic Surgery is Revolutionizing Healthcare with Dave Saunders

Craig Petronella

Send us a text

Surgical robotics stands at the fascinating crossroads of cutting-edge technology and life-saving medical innovation. Dave Saunders, a seasoned technology leader with three decades of experience spanning from early internet protocols to modern surgical robotics, takes us deep into this revolutionary field.

The stakes couldn't be higher when developing robots that hold surgical instruments. Saunders explains why patient safety forms the foundation of every design decision, particularly addressing cybersecurity concerns like man-in-the-middle attacks that could potentially harm patients. While current systems generally operate as standalone units without accessing patient records, the future integration of AI, computer vision, and machine learning will dramatically change this landscape.

Picture a surgeon performing delicate middle ear surgery, drilling through the hardest bone in the human body while trying to avoid facial nerves that aren't visible until exposed. Now imagine AI-guided robotics that can map these hidden structures from CT scans, creating virtual barriers to prevent accidental damage while reducing operation time from 90 minutes to just 15. These aren't science fiction scenarios but technologies currently being developed and tested.

The impact extends far beyond just improving existing procedures. Robotic systems are democratizing surgical expertise, allowing far more surgeons to perform complex operations like nerve-sparing prostatectomies that previously only a handful of specialists could master. However, with costs ranging from $500,000 to $2 million per system, economic barriers remain significant obstacles to widespread adoption, particularly in rural and resource-limited settings.

Looking forward, Saunders

This is Encrypted Ambition—a podcast about the builders rewriting the rules. Join Petronella Technology Group as we decode the ideas, challenges, and momentum behind tomorrow’s business, technology, and leadership breakthroughs. 

That’s a wrap on this episode of Encrypted Ambition. Subscribe wherever you listen, and if today’s guest inspired you—leave us a review or share the show with someone in your circle.

To learn more about how we support innovators with AI, cybersecurity, and compliance, head to PetronellaTech.com.

Thanks for listening—and remember, the future favors the bold.

Support the show

NO INVESTMENT ADVICE - The Content is for informational purposes only; you should not construe any such information or other material as legal, tax, investment, financial, or other advice. Nothing contained on our Site or podcast constitutes a solicitation, recommendation, endorsement, or offer by PTG.

Support the Show

Please visit https://compliancearmor.com and https://petronellatech.com for the latest in cybersecurity and training, and be sure to like, subscribe, and visit all of our properties at:

Speaker 2:

What I do? Sure. I'm Dave Saunders. I'm the former chief technology officer of Galen Robotics, and now I'm working with another surgical robotics company that's in stealth mode. I've got about 30 years of internet technology experience that began in the very early days, before the internet was public. I developed some early internet protocols that unfortunately, or maybe fortunately, are no longer in use, and I also worked on high-density access concentration back when dial-up was sexy and cool. I was also a research manager at Lucent Bell Labs, and my group invented the first commercial Wi-Fi hotspot, the little spaceship-looking Apple AirPort, if you remember that thing. We actually built that in my group at Lucent Bell Labs. So I've been around different hardware and software technologies for a long time, and I certainly worked with a lot of the early explorers into internet security issues.

Speaker 1:

Amazing.

Speaker 2:

And so it's certainly been part of my DNA for a really long time. Most recently, for about the past 10 years, I've been working on commercializing different surgical robotic technologies. I took one through a successful FDA clearance about two years ago, and for those 10 years I've worked in close relationship with Johns Hopkins University on advancing surgical robotics and taking it into the next generation of computer vision and AI. And of course, all of that is affected by cybersecurity, which is a good thing: it's not an afterthought, but something you really do think about as an architecture from the get-go.

Speaker 1:

Amazing and from my understanding, Galen developed the concept of digital surgery as a service. Can you explain how compliance considerations went into developing a surgical robot?

Speaker 2:

Yeah, surgical robots are interesting because they appear to come in many flavors, but in essence you've got a master system component driving the controls for actuators and sensors to give you whatever that robotic function is. That creates a number of potential security issues, not the least of which is your basic man-in-the-middle risk. The master of your surgical robot is always separated, by millimeters or feet or miles, from the slave components it's operating and responding to. So I think your first and foremost issue is just making sure that the data you send and receive between those components is authentic and not subject to any sort of injection risk. That's your first and biggest issue, because it's the first thing that could potentially put a patient at risk. HIPAA compliance and patient privacy are also very important, but when you're dealing with a surgical robot, my first concern is to make sure the patient is never put at physical risk of harm through the use of the system. Everything is a risk-based model, and that risk, I believe, is far more important than HIPAA compliance. Those things matter too, but my patient isn't going to literally live and die based on HIPAA compliance, right? So I think that's where your first model comes from.
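The authenticity requirement described here can be sketched in a few lines: every command from the master carries a sequence number and a keyed MAC, so a man in the middle can neither inject nor replay motion commands. The framing, key, and field sizes below are illustrative assumptions, not the protocol of any actual surgical system.

```python
import hashlib
import hmac
import struct

# Hypothetical per-session key; a real system would provision this securely.
SHARED_KEY = b"per-session key provisioned at startup"

def sign_command(seq: int, payload: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Frame a master->slave command as seq || payload || HMAC-SHA256 tag.
    The tag defeats tampering/injection; the sequence number defeats replay."""
    header = struct.pack(">Q", seq)
    tag = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def verify_command(frame: bytes, last_seq: int, key: bytes = SHARED_KEY):
    """Return (seq, payload) if the frame is authentic and fresh; otherwise
    raise ValueError, which a controller would treat as a fault and halt."""
    if len(frame) < 8 + 32:
        raise ValueError("frame too short")
    header, payload, tag = frame[:8], frame[8:-32], frame[-32:]
    expected = hmac.new(key, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad tag: possible injection")
    (seq,) = struct.unpack(">Q", header)
    if seq <= last_seq:
        raise ValueError("stale sequence: possible replay")
    return seq, payload
```

A flipped bit anywhere in the frame, or a replayed old frame, fails verification before the actuator ever sees the command.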

Speaker 2:

Then your next areas are: how plugged into the rest of your hospital administration system are you? Are you plugged into Epic? Are you actually pulling a patient record, or are you just aware of who the patient is and associating your logs for maintenance or follow-up reasons? Do we want any association between the operation of the surgical robot and a patient record? If the answer is yes, then we start to have HIPAA compliance issues, where I want to anonymize my data and make sure that no individual record is going to expose patient privacy, and those are a challenge as well.

Speaker 2:

A lot of surgical robots today (there are about 170 of them on the market) are not actually reading patient data directly, so they tend to be islands. That kind of patient privacy is not necessarily an issue today. But look forward at where the technology is going. For head and neck surgery, for example, there is technology on deck for future applications where I might take the CT scan in, build a 3D model, and then have the robot look through the surgical microscope and try to see what the surgeon sees. Now I have direct access to patient data, right? So I want to make sure I'm not creating risk by attaching patient records to navigation data: what's the serial number of their hearing implant, and so on.

Speaker 2:

Now I do have the potential of really breaching patient privacy and HIPAA compliance. So I would say patient privacy is probably not much of a risk with most of the surgical robots on the market today, but as we bring in next-generation features involving computer vision and machine learning, it becomes more of one. Probably the hottest topic in surgical robotics right now is the digital twin, where I want to take in patient model data, which is going to be associated with their records in Epic in some way, and associate it with real-time telemetry: the visualization feed from my endoscope cameras, navigation data that is automatically calibrated and registered to patient CT models. So where this is going in the future, we've got a really tight coupling between those patient records and the operation of the robot. These things become more and more of an issue over time, which just means the security teams are going to have even more of a mess on their hands. I know security teams are already overburdened just allowing hospitals to operate on the internet at all.

Speaker 2:

But I think some of those issues are going to move to the forefront over time. Because, going back to what I was saying originally about patient safety: when I've got a robot that's effectively armed with a sharp, pointy device, a scalpel, a cauterization tool, that thing could do a substantial amount of harm to the patient in the event of data being manipulated, man-in-the-middle telemetry, all of those sorts of things. Those become more and more of a risk the more I tie in features that are patient-specific and the more we allow the surgical robot to have autonomy. Today it's really just the tip of the iceberg.

Speaker 2:

You look at your average pedicle screwdriver robot. It's not really doing that much in terms of autonomy. It's positioning a cannulated instrument adapter, and the surgeon is still twisting the handle on the drill. They're still driving the screw, right? So there is still a substantial amount of surgeon autonomy driving even the most automated surgical robots out there. But it's not going to be that way forever, and it's not even going to be that way for the next five to ten years.

Speaker 1:

What type of data, like EMR, EHR, or PHR, do these robots collect, or require, to perform the surgeries or functions they're designed to do?

Speaker 2:

Today it's very limited. Take the obvious 800-pound gorilla in the room, a big da Vinci-style system. This is a tele-operated robot. It's got four or five laparoscopic rods inserted into the abdomen of the patient, and there's a surgeon sitting at a console remote-controlling those rods. The amount of patient data required for that procedure, in terms of what the robot needs to know, is almost nothing today, because the robot is being driven by the surgeon. Keep in mind that the da Vinci was invented based on DARPA research originally designed to remotely manipulate radioactive materials and let the operator stay safely away from them. That's the concept: the da Vinci isn't doing anything autonomously, it's simply extending the hands of the surgeon into the patient's abdomen, giving them excellent vision and really incredible dexterity. But it's soft tissue surgery, so there's really very little it needs to know about the patient today.

Speaker 2:

Now, one example of a publicly demonstrated piece of technology that is not commercially available in any way is augmented reality for the visualization system of a da Vinci or da Vinci-type robot, where I can get false colors to show me that I'm now touching the actual patient organ, and if I push on it, I can get a color map showing me how hard I'm pushing. Incredible feedback. Does that require patient data? No, it doesn't. It's all done in real time, so it's very detached from the patient record information. Now, can that information be uploaded into the patient record? Absolutely. I just recently had a colonoscopy, and in my EHR I've got a bunch of snapshots of my insides, great for eight-by-twelves, right? So that's now part of my record.

Speaker 2:

Well, at some point it makes complete sense that you would do the same thing with a surgical robot, where I'm going to be taking snapshots: this is the position of your pedicle screw. This is where digital twins become very helpful: this is the planned position of your knee implant, and this is the actual position based on bone data, based on the fact that the registered camera and navigation system can actually see your bone. It can see where the implant went, and we are only 1% off of the intended original plan. We're definitely moving into a situation where we will actually be placing that kind of data into the EHR, and it makes complete sense to do so. But in terms of the raw operation of the robot, it's pretty limited how much patient record information the robot needs to operate.
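The "only 1% off" comparison between planned and actual implant position reduces to a simple geometric check. The coordinates, reference span, and percent convention below are hypothetical, just to make the arithmetic concrete:

```python
import math

def placement_deviation(planned_mm, actual_mm, reference_span_mm):
    """Euclidean deviation between the planned implant position (from the
    digital twin) and the navigated actual position, expressed as a fraction
    of a reference anatomical span (e.g. implant length in mm).
    All coordinates here are hypothetical 3D points in millimeters."""
    return math.dist(planned_mm, actual_mm) / reference_span_mm

# Planned vs. measured positions 0.5 mm apart over a 50 mm implant span:
dev = placement_deviation((10.0, 20.0, 30.0), (10.3, 20.0, 29.6), 50.0)
print(f"{dev:.1%}")  # 1.0%
```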

Speaker 1:

Right, yeah. So you would think that attackers, hackers, would have some moral standing and not attack a hospital or an emergency care facility, but unfortunately that happened not that long ago, and it happens pretty regularly. A hospital in Boston.

Speaker 1:

Yeah, horrible. A horrible DDoS, yep. Unfortunately, my dad was on a ventilator when the CrowdStrike event happened. So I'm curious to see how you guys distance yourselves from the kinds of threats that could hit hospitals or institutions. How do you ensure that your device remains functional?

Speaker 2:

Very, very important. So, first and foremost, as of right now, if I asked a hospital for a live internet connection for a surgical robot during the procedure, I'm going to get a hell no from the hospital. There's no chance. Now, I still need a connection for occasional updates; I want to update my SaaS data for maintenance prediction and things like that. So there is that issue of vulnerability. It's not intraoperative, so that will be a risk in the future, but today I'm going to be connecting my system to the internet to do SaaS uploads and things like that, and that's when I'm most vulnerable right now. Like I was saying earlier, my first concern, and this is where my paranoia comes in, just because of the hackers I've known, black hats versus white hats, is man-in-the-middle. As far as I'm concerned, that's the thing that actually keeps me up at night. So when I'm making that connection, I want to make sure I'm not getting any patch data to the operating system that did not come authentically from my company. I have a series of layered MD5 fingerprints and some other techniques to validate each of the subsystems in my control mechanism. Our software is bifurcated between two different computers that talk to each other inside the system, and there's another controller as well, so there are a few different computing systems within our robot alone that talk to each other. Each of them is authenticating its own software, so if there is a patch, it has to be heavily authenticated, and any sign that I don't have an authentic load automatically shuts me down.
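The layered-fingerprint idea can be sketched as a manifest of per-subsystem hashes that must all match before the system will operate. The subsystem names here are made up, and SHA-256 stands in for the MD5 fingerprints mentioned (MD5 is no longer collision-resistant); the real validation chain is certainly more involved.

```python
import hashlib

def fingerprint(image: bytes) -> str:
    # SHA-256 used as an illustrative stand-in for the layered fingerprints.
    return hashlib.sha256(image).hexdigest()

def make_manifest(images: dict) -> dict:
    """Record the expected fingerprint of each subsystem's software image
    at release time. Subsystem names are hypothetical."""
    return {name: fingerprint(image) for name, image in images.items()}

def authentic_load(images: dict, manifest: dict) -> bool:
    """True only if every subsystem image matches the manifest exactly.
    Any mismatch or missing image means: not an authentic load, shut down."""
    if set(images) != set(manifest):
        return False
    return all(fingerprint(images[name]) == manifest[name] for name in manifest)
```

A patched image that wasn't in the signed release changes its fingerprint, so the check fails and the system refuses to operate.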

Speaker 2:

Now, that being said, I do want my robot used in research settings. So there are ways the robot can be put into a not-for-human-use mode; the screen is plastered with "I'm not operating on a person," that sort of thing. Students then have a documented ability to take over the robot controls, but that is system software being run on a research platform, and they don't have the ability to override this or actually patch the software on the robot itself. So there are mechanisms that allow you to completely hijack the system, but if you do that, as you're saying, in a bad-actor scenario, that override would be detected by the display computer, and it would immediately go into a you're-not-operating mode. There are all kinds of klaxons and red overlays that would indicate we're in research mode only.

Speaker 2:

And if that ever happened in the OR, the surgeons are trained: okay, there's a problem, and I would not use the system to operate if I'm not getting a we're-in-operating-mode indication. So those elements are pretty well protected and preserved. Now, all of that data, even my SaaS data, my operating logs and things like that, those are going up encrypted and they're going up signed. So any data being moved back and forth between the system and my SaaS servers is also authenticated, deanonymized... anonymized, anonymized.

Speaker 1:

I guess you don't deanonymize it.

Speaker 2:

Right, it's anonymized, and signed. So: encrypted, signed, anonymized. All of that goes through a default process when it's being uploaded anywhere outside of the hospital. If I'm taking any of that data outside of the hospital, I can't have any patient record information, ever. I don't think any of the agreements I've done with hospitals would allow me, even for research purposes, to have the name of the patient, and I have no need for it anyway. So all of that is stripped if it goes up into the SaaS server, and that is explained in detail in the operating agreement I have with the hospital.
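The default strip-then-sign upload path can be sketched like this. The PHI field names, key handling, and payload schema are assumptions for illustration, not the actual SaaS pipeline described in the conversation.

```python
import hashlib
import hmac
import json

# Hypothetical PHI field names and device key; the real schema and key
# management are not described in the episode.
PHI_FIELDS = {"patient_name", "mrn", "date_of_birth"}
DEVICE_KEY = b"per-device signing key"

def prepare_upload(record: dict) -> dict:
    """Strip patient identifiers, then sign the remaining telemetry so the
    SaaS server can verify the upload is authentic and unmodified."""
    scrubbed = {k: v for k, v in record.items() if k not in PHI_FIELDS}
    body = json.dumps(scrubbed, sort_keys=True).encode()
    signature = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return {"telemetry": scrubbed, "signature": signature}
```

Anonymization happens before signing, so what leaves the hospital never contains the identifiers, and the server can still prove provenance.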

Speaker 1:

You may not be able to answer this, and I also don't want to get too much of your secret sauce, but as AI is evolving, how do you see the future of surgical robotics advancing?

Speaker 2:

So yeah, there are a couple of different areas there. One: I think it gives me some really interesting opportunities for cybersecurity protection. One of those is just watching for ticks and blips in latency. Man-in-the-middle, I'm going to keep saying it, that's where my paranoia lies. If I'm able to watch my back-and-forth ping times, I can see where my blips are, and that's a digital triage opportunity from a cybersecurity standpoint. I can monitor the data flow and ask: does this look normal? Does this look like just the general back-and-forth buzz? My updates are every five milliseconds or so. And if there is a hiccup, I should be able to have an AI system that's in tune enough with the other subsystems that it can say, oh, that was just a hiccup on the disk drive, or whatever. The AI should be able to account for most hiccups in latency and associate them with some system operation that's actually normal and okay. That means I only have maybe two or three anomalies, hopefully, that I want to go and look at, and if those turn out to be false positives, I'll take them out too.
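Watching for ticks and blips in ping times amounts to comparing each new round-trip sample against a rolling baseline. A minimal sketch, with an invented window size and threshold (the real monitoring surely correlates with subsystem activity as described):

```python
from collections import deque
import statistics

class LatencyMonitor:
    """Flag round-trip-time blips that stand out from the recent baseline.
    Window size, warmup, and sigma threshold are illustrative, not from
    any actual system."""

    def __init__(self, window: int = 200, sigma: float = 4.0, warmup: int = 30):
        self.samples = deque(maxlen=window)
        self.sigma = sigma
        self.warmup = warmup

    def observe(self, rtt_ms: float) -> bool:
        """Record one round-trip time; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= self.warmup:
            mean = statistics.fmean(self.samples)
            spread = statistics.pstdev(self.samples) or 1e-9
            anomalous = abs(rtt_ms - mean) > self.sigma * spread
        self.samples.append(rtt_ms)
        return anomalous
```

Normal jitter stays inside the band; a sudden spike (a possible interception or injection point) gets flagged for the two-or-three-anomalies review pile.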

Speaker 2:

Next level up, from a clinical standpoint: this is, of course, where everybody cares about cybersecurity, but nobody's going to buy my product unless it has clinical relevance, right? So the AI that is most interesting to the customers is where we start doing things like augmented reality. We're going to use AI to do patient registration, and what I mean by that is visualization systems where I look at any patient imaging data I might have in the EHR and use it to verify that I'm operating on the right patient. With bones, that's obviously a lot easier. So, for example, say I'm about to open up a patient behind the ear for a mastoidectomy, a pretty significant drilling process.

Speaker 2:

If I've got a CT scan of the patient's skull, then with today's technology (we couldn't do this 10 or 15 years ago), just based on what I can see through the surgical microscope, which is a stereo microscope, I should be able to very quickly find anatomical landmarks and match what I can see of the actual, in-real-life patient to the CT scan in the EHR. That's great, because not only is that potentially valuable for future navigation and augmented reality applications, but if I can't get a match, I'm going to scream bloody murder to the surgeon and say there's something wrong here. Are we operating on the wrong side of the head? Is this the correct patient?

Speaker 2:

I think that gives us a really interesting opportunity to make sure we don't have those one-in-a-thousand, or one-in-ten-thousand, or one-in-a-hundred-thousand events. That's a lottery nobody wants to win, right? It really doesn't even matter what the error rate is for those surgical errors; nobody wants to be the one. And I can do such a simple thing as have a machine learning algorithm take any patient data I have, correlate it to what is actually happening in real time, and say: wait, wait, wait. That's not an appendix.

Speaker 1:

Something obvious.

Speaker 2:

Are we in the right part of the body? Are we doing the right surgery? Oh, by the way, I have a copy of the patient's surgical order. You're about to remove the prostate of a patient who was in here for a colostomy, right? The potential for machine learning to double-check the procedure and just go, hey, I know what your camera's pointing at; why are you about to sever this person's prostate when we're supposed to be working on their colon to remove diverticulitis? That's a really interesting opportunity, and it is obviously going to require me to either have real-time access to the EHR or be able to download it temporarily before I start the procedure. So there are some really cool opportunities for just ensuring patient safety in the future when it comes to AI applications and surgical robotics. I think that's pretty cool all by itself.

Speaker 2:

There are also opportunities for advanced medical procedures, and the one I've been studying most recently is middle ear surgery, but these challenges apply all over the body. When I do a middle ear procedure, I'm going to drill about an inch and a half diagonally through the hardest bone in the human body, which is your skull right behind the ear. This is the mastoid. Not only is it 60 to 90 minutes of drilling with an 80,000 RPM Dremel tool (oh my God, it's gruesome, that's terrible), the punch line to this part of the procedure is that embedded in the bone are two facial nerves, and you cannot see them until you've exposed them. If you nick one, your patient is not going to die, but they will have an irreparable injury: either they will lose control of one side of their face, or they will lose pretty much all sense of taste and smell for salt and savory items. So no more chocolate, red wine, or meat, and that sucks, right? So honestly, you're holding that drill in a white-knuckle death grip, looking for weird, not-so-standardized landmarks that will tell you you're really close, thinking, I think this facial nerve is here.

Speaker 2:

It turns out that by taking the full-head CT scan and giving it to a neural net, the neural net is able to segment all of the anatomical structures in the middle ear, all of the bones, and it's even able to pick out the little ghost trails of the two facial nerves embedded in the mastoid. So what can I do now? Well, if I have that 80,000 RPM Dremel tool held in the hand of a surgical robot, I open the patient's skin up behind the ear, and I've now got enough bone exposed that I should be able to map a 3D conversion of the CT scan to the actual patient bone. Boom, I now have effective navigation.

Speaker 2:

Now I register the tip of the drill, which doesn't require the old-school touching of a bunch of fiducials; I can see the tip of the drill with a 4K surgical microscope. So with OpenCV and a neural net, I can register the tip of that drill. From this point, all I have to do is watch the kinematics of the robot relative to the patient bone, and now I know where those embedded nerves are, even though the surgeon can't see them. This is super cool. The surgeon's hand may still be literally on the handle of the drill, but the robot is holding the drill at the same time.

Speaker 2:

Now I let the surgeon do the drilling. They can go as fast or as slow as they want. But because I know where those facial nerves are, I can create virtual keep-out zones, virtual barriers, and I can protect the patient's nerves through haptics: I can give you a little shudder on the drill if you move too close to where I know that nerve happens to be, and I can stop you entirely if you're about to hit the bone that would have exposed it. So now I can very safely guide the surgeon to just do their drilling while completely keeping them away from those critical anatomical structures. Super, super cool.
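A virtual keep-out zone of this kind boils down to a distance check from the registered drill tip to the segmented nerve path, mapped to a haptic or stop response. The geometry below is a toy sketch with invented thresholds and a nerve modeled as a single line segment, not a real control loop:

```python
import math

def point_segment_distance(p, a, b):
    """Distance (mm) from point p to the segment a-b, all 3D tuples."""
    ab = tuple(bi - ai for ai, bi in zip(a, b))
    ap = tuple(pi - ai for ai, pi in zip(a, p))
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0.0 else max(
        0.0, min(1.0, sum(c * d for c, d in zip(ab, ap)) / denom)
    )
    closest = tuple(ai + ci * t for ai, ci in zip(a, ab))
    return math.dist(p, closest)

def drill_action(tip, nerve_a, nerve_b, warn_mm=3.0, stop_mm=1.0):
    """Map the drill tip's distance from the nerve to an action:
    'run' when clear, 'haptic_warn' inside the warning shell (shudder the
    drill), 'stop' at the virtual barrier. Thresholds are illustrative."""
    d = point_segment_distance(tip, nerve_a, nerve_b)
    if d <= stop_mm:
        return "stop"
    if d <= warn_mm:
        return "haptic_warn"
    return "run"
```

A real system would evaluate this against the full segmented nerve mesh at the control loop rate, but the surgeon-facing behavior is the same: warn, then stop.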

Speaker 2:

And that has the potential of increasing the speed of those surgeries. In fact, in research we've taken 60 to 90 minutes of drill time down to about 15 to 20 minutes, so we're potentially cutting an entire hour off of OR time. The hospital might as well throw you a parade for that much savings, and they would still save money. So that's a big deal right there. But it also helps the patients who might have bone abnormalities or nerves in a weird place, where the surgeon is just having a hell of a time doing that drill-through.

Speaker 2:

There are a lot of patients who are turned away from these kinds of so-called routine surgeries simply because the approach, or some abnormality of the patient themselves, makes them a bad candidate for the surgery. And we're now creating guardrails that make these ergonomically exhausting, cognitively huge procedures manageable. I mean, imagine doing this kind of no-mulligan drill-through. Not all surgeons can do that; not all surgeons are qualified to do neurotology. The burden is significant. And yet I will go out on a limb and say every single attending surgeon out there in the field of otolaryngology has at least been trained; they know functionally what to do in a middle ear surgery. Just not all of them have the super-crazy steady hands that the rock-star surgeons have.

Speaker 2:

The robot has the potential of moving those people over to the other side of the bell curve, letting them use all of the training, know-how, and intelligence they have, but now giving them the magic hands through virtual barriers and things like that. And that means more surgeons able to do these procedures. We've already seen this just from the mechanical advantages of operating a da Vinci or a similar tele-operated robot. Literally one, if not two, orders of magnitude more abdominal surgeons are able to do minimally invasive prostatectomies and minimally invasive hysterectomies because of the da Vinci. That's incredible. We're talking about a two-decimal-point shift in the number of surgeons able to provide that standard of care to patients. That's huge.

Speaker 2:

By providing computer vision, by providing HUD guidance through the visualization systems and technologies they're already familiar with, the AI is basically your co-pilot. Maybe co-pilot is a bad word, just because Microsoft has soured that for a lot of people, but still, that's what it is: you've got the AI sitting over your shoulder. The surgeon is still the one who goes to jail if something goes really, really wrong. The surgeon is still the one who gets sued. So let them be in control, but use AI to give them better tools, or to make the tools they already use better. I think that is the true clinical advantage of AI today.

Speaker 2:

But on the cybersecurity front, it's no different. All of your IT people at the hospitals are overburdened; they've got way too much to do. If you can do digital triage on your security profiles and have the system burp up, oh, here's an abnormal log, tell me if this is a false positive or not, you're taking away the mundane crap that nobody ever wanted to do in the first place. That's what computers are good at. Let them do that sort of thing.

Speaker 2:

And we see a parallel of that in radiology. In fact, a very close friend of mine recently found a tumor in her chest. Initially a human radiologist had said, I don't see anything wrong, but we just acquired an AI system that couples with our mammogram scanner; would you mind if we ran it through that? They don't have it set up as a standard of care yet. They ran it through the AI, and it came back and said, look at this shadow here, this is something. So they went back to her with a more refined instrument, did a close-up sonogram, and sure enough, they found the thing the AI had found. At some point this needs to become standard of care.

Speaker 2:

When you're standing at a mammogram machine, the very first thing it should do is run the scan through a simple triage sorter. You're 99% guaranteed totally fine: put on your shirt, go home. Or: you look okay, but I've only got 95% confidence, so we'll put it on the stack for the radiologist to double-check.

Speaker 2:

Here are a couple of little things that I think I found. Or, on the other side of the bell curve: do not let this person go home. And if you can have that done in milliseconds, or even integer seconds, you're providing a better standard of care for the patients and relieving the burden on the human beings who right now are stuck with the backlog. At least scans are in the EHR system now, so they're not literal stacks of radiology film, but radiologists still have so much to go through, and most of it is fine: routine mammograms, no risk there.
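The triage sorter described here is, at its core, a thresholded sort on the model's abnormality confidence. A sketch with invented thresholds and pile names (a real deployment would calibrate and clinically validate these):

```python
def triage(probability_abnormal: float) -> str:
    """Sort a scan into a review pile by model confidence.
    Thresholds and pile names are illustrative assumptions.
    Every pile still gets a human read; this only sets the priority."""
    if probability_abnormal >= 0.50:
        return "urgent"        # do not let the patient go home
    if probability_abnormal >= 0.05:
        return "double_check"  # stack for the radiologist to verify
    return "routine"           # low-priority queue, human sign-off later
```

The point is ordering the work, not replacing the read: the radiologist still signs off on every pile.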

Speaker 2:

Let AI sort those things out. I still feel that a human being, at least for the foreseeable future, should scan through those things manually as well, double-check them and sign them off. But let's let the sorting hat do its job, put those things in the right piles and prioritize where the work should be. Those are the kinds of things you can do from a cybersecurity standpoint, and the kinds of things you can do from a clinical standpoint, that are going to give you the most bang for your buck and a real return of value — for the hospitals, for the patients, for the payers, for the doctors. All of the stakeholders benefit when you apply AI in those areas, in that paradigm. That's how I see it.

Speaker 1:

Amazing. I know we're about 40 minutes in, and I don't want to cut you off at the knees — I want to be respectful of your time, so I just want to make sure you don't have a hard stop or something. — I'm good right now. — Okay. I guess I've got one more question. As a film guy, you know films like I, Robot and Terminator — you think of Skynet, right? You have a lot of people who are less trusting of technology. Have you encountered resistance using the da Vinci? How do you communicate with patients, ease their concerns, let them know this technology is safe, and build rapport before going into surgery?

Speaker 2:

Today it's actually not a problem. In fact, I've spoken to hospital CFOs who are buying surgical robots because they are losing business without them. It is the weirdest thing I've ever seen. I mean, imagine going to a hospital, being told you need surgery, and asking, "Well, what brand of scalpel do you use?" — and if you don't get the answer you like, saying, "I'm going to go to the hospital down the street, because they use XYZ brand of scalpel." That's effectively what is driving a lot of surgical robot utilization today. Now, I still believe there's extreme value. I know some incredible surgeons who don't pick and choose when to use the da Vinci and when not to — they use it as their standard of care, and they are just magicians with the da Vinci. It's incredible to watch them do their work. And so I believe in the more training and the more experience you have with surgical robotics. Let me give you a quick parallel example.

Speaker 2:

If you ever have LASIK — okay, LASIK is relatively new technology. It is a surgical robot. It's tracking your eye and shooting a little laser to give you scars. Now, what existed before LASIK was a procedure called PRK, where a surgeon had a hand-drawn map of your eye and a little trajectory plan of where they were going to make incisions on your eye to do exactly what LASIK does. The scarring caused by those laser burns reshapes your eye, and that's what causes your vision to change with LASIK. Well, that used to be done with a diamond-tipped scalpel held in the hand of an ophthalmic surgeon. So let me give you a hypothetical question. Let's say you need to have your vision improved. Are you going to go with LASIK, which is a surgical robot, or are you going to say, "No, no, no, I need old school — give me a surgeon with a scalpel in their hand and let them poke me in the eye with that thing"? I'll tell you, I know exactly what my decision is going to be. — Yeah, totally. Same here.

Speaker 2:

I can't imagine in what parallel universe I'd be going, "I don't know, I need to go old school — give me that diamond-tipped scalpel." And that is becoming the market demand. Now let me give you another example. I hope you are never unfortunate enough to be told by a doctor, "Hey, I'm sorry, but we've got to take your prostate out."

Speaker 2:

If that were ever to happen, you have three options. The first is old-school open surgery. They open you up like a frog in biology class, man — it is gruesome. You lose eight units of blood, and there is no open prostatectomy procedure that will protect the nerve that runs up through the middle of the prostate, which means that when you're done with that prostatectomy, a certain function that you would probably like to have preserved will never happen again.

Speaker 2:

Okay, option number two is manual laparoscopic surgery. The manual laparoscopic prostatectomy was invented at the Johns Hopkins School of Medicine, and there are certainly surgeons who can do it as a manual laparoscopic procedure — but the number of those surgeons is in the dozens. We're on a planet with 8 billion people, and there are probably dozens of surgeons who will do a manual nerve-sparing laparoscopic prostatectomy. In the state of California alone, the last time I looked into this, there were probably 3,000 surgeons who could do the laparoscopic nerve-sparing prostatectomy with the da Vinci or that kind of teleoperated robot. So that means you've got two orders of magnitude more surgeons who can do the nerve-sparing technique, as long as they're doing it with the robot, compared to manual laparoscopic surgery.

Speaker 2:

So here's the rub. Sometimes you see these newspaper articles about a longitudinal study saying that using the surgical robot for this procedure is only as good as a laparoscopic surgeon doing it manually. Okay, that's an interesting data point. But if there are two orders of magnitude more surgeons who can do that procedure, then that means I'm not waiting nine months to have mine done. In fact, I had a friend pass away on the waiting list — this was over a decade ago. He was terrified of having the da Vinci used for his prostatectomy, so he waited on a waiting list for a manual prostatectomy and died on that waiting list.

Speaker 1:

Wow, that's terrible.

Speaker 2:

So that's how I see the opportunity for surgical robotics: even if it is only as good as the best practitioners in the field today, if I can take surgeons who are fully trained, who know exactly what they're supposed to do, and have them simply operate a robot as the extension of their hands and get consistent results day in, day out — every single procedure of the day — what's my downside? That, to me, is really important. And what I think is gratifying is that, from the discussions I've had with patients, surgeons, hospital administrators and payers, they seem to agree that right now that's a very significant market driver. But it's also a huge benefit: if I know that at my hospital, simply by buying a two-million-dollar da Vinci, I can take my entire urology department and give them all superpowers.

Speaker 2:

That's a huge, huge win. Also, from a marketing standpoint, it means I'm not losing patients from Good Sam in San Jose to, you know, Stanford up the road. That's a valuable purchase — I can justify it just by saying I'm not losing patients. Because at the end of the day, we have to have revenue to keep the hospital open.

Speaker 2:

And so losing patients just because I don't have the latest tool — that's a significant burden as well; that's a financial issue. So right now, in the current state of the market, the acceptance of surgical robotics is pretty good. Where that goes if we start allowing AI to perform any kind of autonomous function with a surgical robot — that perception might change. Right now, to my knowledge, across the 170 robots on the market, the FDA has not approved a single AI application that directly affects the surgery. There are lots of opportunities — in fact, there are over 800 AI applications that have been approved by the FDA. Zero of them utilize LLMs of any kind, and probably 80 to 90% of the 800 are diagnostic, so that's certainly where the sweet spot is in terms of getting approval and applying them today. But there's going to come a point where I can demonstrate real clinical benefit by bringing in AI working with a surgeon — I've already described some of those to you. But yes, when I lift up the hood, when the veil is pulled back, the AI is in fact taking away a little bit of the thinking process. The AI is relieving the surgeon of some decisions they would otherwise have to make on their own, and you could argue that that is taking away surgeon autonomy. That's when the FDA is going to really start to scrutinize those applications. Nobody's tried it yet. It's coming — they're in research, but they haven't been commercially approved.
When that starts to happen, the first one to do it — unless they're an idiot — is going to have the PR blitz of a lifetime, because whoever is first to get approved has cut ice for everybody else with the FDA. So it's a big deal when that first approval comes. But with that massive fanfare, with screaming from the mountaintops that you were first, certainly there are going to be some patients and surgeons who are a little apprehensive and go, "I'm not sure, right?" That's going to happen. But I don't know — the market's a weird thing. I mean, look how many people seem to be willing to sit back and say, "Hey, ChatGPT, would you please write my legal brief?" The trust might not be warranted, but it's certainly there.

Speaker 2:

I think there's certainly evidence out there that people are willing to trust AIs, and that puts the burden on people like us to make sure that whatever we're implementing — by golly, that sucker had better be handcuffed to the wall, so it cannot go Skynet and cannot go off in the wrong direction. Because when it comes to surgical robotics, what are some of the comparisons for your average neural net? At best I've got a six-year-old's brain at a highly specialized function. Yes, it can process more data than any human being, but its decision capabilities are still incredibly narrow — and yet I'm putting a weapon in its hand and asking it to do something. Even at 20% autonomous, 80% surgeon control, I'm still putting some trust into a robot holding a weapon. And that is the perspective of the folks I've spoken to at the FDA; that's how they're looking at it. So whoever tries to go through that door first — yeah, they've got some work on their hands. But it's going to happen, and it's going to happen because the potential clinical benefits coming in the future are just too much to pass up.

Speaker 2:

It's not just about, oh my gosh, I'm going to nerd out over putting AI into a surgical robot. It's not that. The potential clinical benefits are real. The potential time savings — even though the surgeon is still there, operating side by side, co-pilot, whatever you want to call it — I'm already looking at potential clinical applications where I could cut a literal hour off of OR time. That's huge. You can't pass up an opportunity like that. Those are big deals, and we won't be able to hold that dam back forever. So as those applications get developed, yeah, I've got to make damn sure the AI is only doing what it's allowed to do and what it's supposed to do, because we can't predict every outcome. It needs to at least have enough pseudo-cognition to know when it's about to be asked to do something it shouldn't be doing. But then where does that happen? I know we haven't talked about this, and we're about to click over an hour — I don't want to take up too much of your time.

Speaker 1:

My next appointment's at 10.

Speaker 2:

So I've got an hour. So, you know, where does that processing happen? Do I put a fleet of NVIDIA cards — I don't know what's going to be the hot card three years from now, the 5090, the 6090 — beside my surgical robot and do all of this with edge computing? That means I now have to have an architecture for downloading all of my latest learning data from the cloud, where I'm doing aggregate processing: I'm aggregating everything all of the surgical robots have learned and encountered, pulling that up into the cloud, processing it up there, vetting it, and then pushing it down to the edge — because the hospitals, I hope, will never allow an active internet connection anytime soon. So I can't do my processing in the cloud — and do I want to, even if I could? So I have to make those decisions. How much is it really going to cost to have sufficient edge computing in every device? Do I want it to learn locally, or do I only want it to do what's already been vetted — and take the edge cases, the false positives and false negatives, push those up to the cloud, allow them to be processed and vetted up there by a team of computer and data scientists, and then, once those new learning models are vetted, push them down to all of the devices? That needs to be figured out, right?
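The loop being described — robots run only a frozen, vetted model at the edge; edge cases flow up to a central aggregation step; only human-vetted model versions flow back down — can be sketched roughly like this. All class and field names here are hypothetical, invented for illustration:

```python
# Sketch of a cloud-vetted, edge-deployed update loop: no local learning,
# ambiguous cases queued for upload, vetted model versions pushed back down.

from dataclasses import dataclass, field

@dataclass
class EdgeRobot:
    model_version: int = 1
    outbox: list = field(default_factory=list)  # edge cases queued for upload

    def infer(self, case):
        # Run the locally deployed, frozen model; never learn in place.
        confident = case["confidence"] >= 0.9
        if not confident:
            self.outbox.append(case)  # queue ambiguous case for the cloud
        return confident

@dataclass
class CloudAggregator:
    vetted_version: int = 1
    pending: list = field(default_factory=list)

    def collect(self, robot):
        # Batch transfer during a maintenance window, not a live connection.
        self.pending.extend(robot.outbox)
        robot.outbox.clear()

    def vet_and_release(self):
        # Stand-in for retraining plus review by data scientists.
        if self.pending:
            self.vetted_version += 1
            self.pending.clear()
        return self.vetted_version

    def push(self, robot):
        robot.model_version = self.vetted_version

robot = EdgeRobot()
robot.infer({"confidence": 0.5})  # ambiguous -> queued for upload
cloud = CloudAggregator()
cloud.collect(robot)
cloud.vet_and_release()
cloud.push(robot)
print(robot.model_version)  # 2
```

The key property of the design is that the edge device's behavior only ever changes through the explicit `push` of a vetted version — matching the point that hospitals won't (and shouldn't) allow live cloud inference.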

Speaker 2:

And then at some point we get to: what's the holy grail we're really trying to achieve here? We want the autonomous surgical robot from the movie Prometheus. Right? I've got an alien in my abdomen, I need an emergency C-section, and it just knows what to do. And you need that at some point — Moonbase Alpha and a Mars colony, if that ever happens. I'm a little skeptical of the Mars thing, but at some point we're going to be in space. And I don't know if you remember, there was a surgeon in the Antarctic — the only surgeon in the Antarctic at the time — who had a burst appendix. Who's going to do that surgery? Answer: he operated. He took out his own appendix. You can find pictures of this; it's pretty wild. So when you have these orbital platforms, you've got a moon base, we send a bunch of people to Europa — whatever. It's going to happen someday, right?

Speaker 2:

So how many surgeons are going to be on board those spacecraft? How many of them are going to be on the remote colonies, and what happens if something happens to them? What happens if there's a crisis and you need more than one surgeon, because you've got 10 people that all need life-saving care? We've got to have surgical robots just to take care of that. We've got 8 billion people.

Speaker 2:

I don't know if you've ever looked into what surgery is like in rural India or rural China, but it ain't pretty. We need telerobots in those locations just to allow a surgeon in Shanghai to operate on somebody in a less developed area, and the same thing for India. We need that to help those surgeons. But India alone is home to well over a billion people, and at some point we need autonomous surgeons to do the cookie-cutter surgeries, right?

Speaker 2:

If I've got a burst appendix, at some point we really would benefit from having a surgical robot in the back room of a Walgreens. That scenario sounds scary, but really — I'm here by Johns Hopkins; I know what the ERs look like here, and I've talked to ER doctors. My wife had an injury a while back, and six hours of sitting there with a compound fracture is not time anybody wants to spend.

Speaker 2:

And if you've got triage robots that can do certain cookie-cutter tasks — now, biology is not standardized; human beings do not fit into easy patterns, unfortunately — there are still certain things you can do. You could probably build a robot that can set a bone. And if it's a femur, you want to do that as fast as you can or your patient will die — you can actually die from certain broken bones. So having triage robots that can handle these kinds of cookie-cutter procedures: we need that.

Speaker 2:

It's a matter of practicality. It's not an issue of whether there's patient acceptance or doctor acceptance. We're getting to the point where there are 8 billion people on the planet and there simply aren't enough available practitioners in a local geographic area to handle not only normal traffic flow but a mass shooting or a dirty bomb. I don't want to get into dystopian stuff, but, you know, shit happens, right? Volcanoes, earthquakes — it's not just bad people; there's nature stuff that happens, and it creates this massive burden on the healthcare system that we can't deal with as a simple HR problem. I can't just say, "Well, we'll go recruit some doctors." From where? The roads are gone.

Speaker 2:

So in those situations you've got to have some kind of technology that can back up the practitioners and provide that kind of cookie-cutter care, and that's going to handle 80% of your use cases. And if 80% of your use cases can be handled by a robot or an automated system of some sort, then hopefully your actual human practitioners can handle the remaining 20% of edge cases. That's what you've got to build. The market already needs it today. This isn't a future dystopia where people don't get healthcare if we don't have this kind of technology.

Speaker 2:

People aren't getting healthcare today, because the systems are already overburdened and already unable to provide the kind of care we need — affordably, and when it's needed — to all of the patients who require it. So we're already in a healthcare rationing situation. That's not just a United States thing; that is a planet thing. And so we have a need for this technology. The hype around AI is horrific as it is, but in terms of the need, I don't think you can oversell the need and potential for robotics and AI in healthcare. The need is now. It is there now. The technology might not be mature enough, but we've got to get there, and there are no ifs, ands or buts about it.

Speaker 1:

That segues perfectly into my next question, and this will be my final one. How do you decide where to place these surgical robots? You talked about them being a couple million dollars, which seems like a high barrier to entry. So part one of the question is: how many robots are in service right now, and how do you decide strategically where to place them? And part two: I'm assuming you have limited production capacity — you can't just be making millions of robots a year. So how do you balance and juggle that challenge?

Speaker 2:

It's a huge, huge problem — I think if you solve it, there's a Nobel Prize in it for you. The issue with surgical robotics in general — and this is a universal axiom — is that no one solution solves all problems. The robot that can take out your appendix is not the robot that does LASIK on your eyes. And just to throw out a couple of buzzwords — gear ratios alone: if I built a da Vinci with the level of precision required for ophthalmic surgery, I am over-engineering the da Vinci by orders of magnitude beyond what is necessary to take out an appendix or a prostate, or do a hysterectomy or something like that. So there isn't one robot to rule them all, and that creates a production problem, because there are 170 robots on the market today, and eight of them do pedicle screws just for spine fusion. So even within a subsegment of surgery there's a limited number of robots, and that segment will experience consolidation — maybe we go from eight to four over the next 10 years — but there are still 170 other robots. So that creates a real interesting challenge. And then the next one: okay, the ROSA is like $1.7 million; it's primarily a neurosurgery and orthopedic robot. The da Vinci is incredible at abdominal surgery; it's $2 million and change when you add on the accessories, plus roughly a 10% service contract every year for break-fix and maintenance. So, oh my god, the numbers just add up. And then you have these little departments like ophthalmology or laryngology that can still benefit from surgical robotics — but even if the da Vinci were good for laryngeal or vocal cord surgery, your throat surgery department cannot amortize a two-million-dollar robot over any period of time.
It just doesn't fit the financial model. So you've got different robots that have to hit different price points so they can fit into different models. That's a huge problem right there. And because of that, right now most of your surgical robots are stuck in teaching hospitals, where the weird Dr. Strange procedures are being performed — but that's not where the highest volume of procedures is being performed. If I'm, say, the Texas Spine Institute, I'm doing tons of back surgeries. These aren't weird, crazy use cases that go to the teaching hospitals. I don't want to denigrate anybody's need for surgery by calling these run-of-the-mill, but certainly they're not weird.

Speaker 2:

These aren't unusual case studies in spine surgery; they're relatively routine, which is why a group like the Texas Spine Institute can churn out hundreds and hundreds of spine surgeries a month — they're bringing in patients who need essentially the same kind of procedure over and over again, and that's great. But they've got high throughput requirements and really tight margins. They're not a teaching hospital, so they can't financially justify any of the spine robots on the market today. So those spine robots stay at the teaching hospitals, and the ASCs have no technology.

Speaker 2:

And so what you really need is for the next generation of surgical robots to have cost-of-acquisition models that aren't just CapEx. We need to see realistic lease-to-own models. We need to see SaaS-style models where I can place the robot — and if my robot costs half a million dollars to build internally, placing that thing for free and making it up on per-usage charges is a pretty burdensome thing for the company to do, right? So these are really challenging situations. For the next generation of robots, we need the price points to come down, and we need those robots to support the next-generation model of navigation registration: using actual vision, automatically registering to your CT scans and bone models and things like that. I can't spend 20 to 30 minutes touching fiducials and then tapping confirmation on a touchscreen to register the surgical robot — that's 20 to 30 minutes of OR time, and I can't afford that in an ASC. So we need the next generation of technology — AI, computer vision, integration with visual systems, digital twins, the whole next generation of software-based technologies — and we need to couple that with the next generation of low-cost surgical robotics. Fortunately, that is possible.
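The CapEx-versus-placement trade-off can be put in rough numbers. The $2 million price and the roughly 10% annual service figure come from the conversation; the per-procedure fee and volume below are made-up illustrations, not quotes from any vendor:

```python
# Back-of-envelope comparison: outright purchase (plus annual service)
# versus a per-procedure placement model. All figures illustrative.

def capex_cost(price, service_rate, years):
    """Purchase price plus an annual service contract."""
    return price + price * service_rate * years

def per_use_cost(fee_per_procedure, procedures_per_year, years):
    """Vendor places the robot for free and charges per procedure."""
    return fee_per_procedure * procedures_per_year * years

years = 5
capex = capex_cost(price=2_000_000, service_rate=0.10, years=years)
usage = per_use_cost(fee_per_procedure=1_500, procedures_per_year=300, years=years)
print(f"CapEx over {years}y: ${capex:,.0f}   Per-use over {years}y: ${usage:,.0f}")
```

Run with these assumptions, the purchase comes to $3.0M over five years against $2.25M per-use — which is exactly why a tight-margin ASC might only be reachable through a usage-based model, while the vendor has to carry the build cost of every placed unit.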

Speaker 2:

I mean, LASIK has blazed the trail, right? You used to need to take out a mortgage on your house to get LASIK done, and now it's like I can get it done at the Walmart eye center — I don't know if they really do that yet, but LASIK has become incredibly accessible and affordable, and the technology has improved to the point where patients who were turned away as LASIK candidates 10 years ago are now slam dunks. The software capabilities, the motion tracking, all of the required technologies for LASIK have just gone through the roof. So the technology has continued to improve and the price point has come down. We need to see that throughout the rest of the industry, because the robots have to move out of the teaching hospitals.

Speaker 2:

I haven't looked into this in years, but I believe the da Vinci's market penetration is something like 10 to 15 percent. If you had a company that was 20 or 25 years old and you were still boasting a 10 to 15 percent market penetration, HBR would be doing a case study on what a failure you are. And yet the da Vinci's maker, Intuitive Surgical, is leading the field — they're the most successful — but because of the expense of that robot, they're stuck in the teaching hospitals. Good Sam, I think, is buying them, so there are some secondary hospitals, but the da Vinci hasn't really been able to penetrate the more rural centers and things like that.

Speaker 2:

I think they're getting there, but it's difficult, because the price point is very, very hard to absorb at a lower-volume hospital — or a higher-volume, tighter-margin hospital, which is what an ASC effectively ends up being. So the economic challenges are substantial, and they need to be met while at the same time improving the technology. You need better technology at a cheaper price. It's not impossible — the iPhone, or whatever your favorite smartphone is, is a perfect example. When I was a kid, RadioShack literally sold a transistor radio where one of the big graphics on the box was that it included nine transistors — why, this radio has nine transistors in it! I think the Apple Watch is literally billions of transistors. The transistor has become so ubiquitous that I don't even think about it anymore.

Speaker 2:

In fact, I'm looking at stuff on my desk and I'm like: transistor, transistor, transistor. I've got transistors in just about everything around me — my desk lamp, I know for a fact, has multiple transistors in it. It's funny that that kind of technology has become so ubiquitous that we don't even think about it, and yet when it was invented, it changed the world. AI needs to do the same thing. Right now we're stuck in the hype cycle of AI. AI needs to go back in the toolbox, where it belongs, so that you don't know it's there, because it's working. I don't know if you know who Don Norman is.

Speaker 2:

He had a senior position at Apple, I think, and he was at HP for a long time — one of the big visionaries in computing — and he always used to talk about the ubiquitousness of computing. That's what we want to see, right? I've got a smart bike now, and that's obviously got a computer in it — it's trying to figure out the most efficient energy to apply to boost my pedaling. That's driven by a computer. Do I care? Do I know? My microwave has a computer in it. Do I care? Do I know?

Speaker 2:

Computers are so ubiquitous that I don't even know I'm interacting with one anymore. And if you think about the early days of computing, it's pure science fiction to conceive that computers would become so ubiquitous you wouldn't even realize whether you were dealing with one or not. That's what AI needs to do. We need to get AI out of this hype cycle where we're putting it in giant letters on the box — "AI inside" — and make it so reliable, so functional, that we don't care it's there anymore; it's just doing its job. Like — I don't know if you're a coder, but there are 20 different sorting algorithms — when's the last time somebody asked you which sorting algorithm you were using to alphabetize a list? Nobody cares.

Speaker 2:

It shouldn't matter, it doesn't.

Speaker 1:

Right.

Speaker 2:

And so that's where things need to go. Right now, if you want to get a company funded, if you want to call your company a unicorn, it's going to do you good to put AI on one of your slides — that's just the nature of the beast. But AI is going to become most effective when we forget that it's there, and to me that's Shangri-La; that's the holy grail. That's where we've got to go, and that requires reliability and an incredibly robust infrastructure. We've got to figure out the line between cloud computing and edge computing and where each makes the most sense — and it makes sense for medical robotics in a different way than it does for diagnostics or for managing your EHR. All of those use cases have a different balance. We've got to figure out what those balances are and apply them correctly and transparently. So that's kind of my final sermon.

Speaker 1:

No, this conversation has been very insightful. It's been a learning experience for me — some of the stuff you dropped is epic, things I hadn't considered. Learning about Galen, how you do what you do, and your thought process has been amazing and insightful.

Speaker 2:

Hey, thanks, appreciate it.

Speaker 1:

Yeah, thank you so much. I will definitely be in touch, I think, based on our discussion. Let me see if I can end the recording. All right.

People on this episode