
Surgery: will robots operate on us soon?

This article was originally published by Knowable Magazine, from Annual Reviews, and is translated and republished here with kind permission.

In 2004, the US Defense Advanced Research Projects Agency (DARPA) offered a $1 million prize to any group that could build an unmanned vehicle capable of driving itself across 142 miles of rugged terrain between Barstow, California, and Primm, Nevada. Thirteen years later, the Department of Defense announced another award, this time not for a robot car, but for autonomous robotic doctors.

Robots have had a place in operating rooms since the 1980s, first for tasks such as holding patients’ limbs in place, and later for laparoscopic surgery, in which surgeons use remote-controlled robotic arms to operate on the body through tiny openings instead of large incisions. But for the most part, these robots have been very sophisticated versions of the scalpels and forceps that surgeons have used for centuries: incredibly complex tools, no doubt, capable of working with remarkable precision, but still guided by the surgeon’s hands.

Despite many challenges, this is changing. Now, five years after that award was announced, engineers are taking steps toward independent machines that can not only cut or stitch, but also plan those cuts, improvise and adapt. Researchers are improving machines’ ability to navigate the complexities of the human body and to coordinate their movements with those of doctors. But the truly autonomous surgical robot the military envisioned is, like the self-driving car, still a long way off. And its biggest problem may not be technological at all, but rather convincing people to let it be used.

Navigating the unpredictable

Like drivers, surgeons must learn to navigate their particular environment, which sounds simple in principle but proves endlessly complex in the real world. Real roads have traffic, construction vehicles and pedestrians, none of which necessarily show up on Google Maps and all of which the car must learn to avoid.

In the same way, while one human body is broadly similar to another, the children’s films have it right: we are all special on the inside. The exact size and shape of organs, the presence of scar tissue, and the location of nerves or blood vessels often differ from person to person.

“Patients vary greatly from person to person,” says Barbara Goff, a gynecologic oncologist and chief surgeon at the University of Washington Medical Center in Seattle. “I think that could be a problem.” She has been using laparoscopic surgical robots (robots that don’t move on their own but mimic the surgeon’s movements) for more than a decade.

The fact that bodies move adds further complexity. Several robots already show some degree of autonomy; one classic example is a device called ROBODOC, which can be used in hip surgery to mill the bone around the hip socket. But bone is relatively easy to work with, and once held in place, it barely moves. “Bone doesn’t bend,” says Alex Attanasio, a research scientist now at Konica Minolta who wrote about robots in surgery for the 2021 Annual Review of Control, Robotics, and Autonomous Systems.

Unfortunately, the rest of the body is not so easy to hold still. Muscles twitch, stomachs gurgle, brains pulse, and lungs expand and contract, and that is before a surgeon even arrives on the scene and starts moving things around. A human surgeon can see and feel what they are doing, but how can a robot know whether its scalpel is in the right place and whether the tissue has shifted?

One of the most promising options for such dynamic situations is the use of cameras and sophisticated tracking software. In early 2022, for example, researchers at Johns Hopkins University used a device called the Smart Tissue Autonomous Robot (STAR for short) to stitch together the two ends of severed intestine in anesthetized pigs, a potentially very delicate task, thanks to just such a vision system.

The human operator marks the ends of the intestines with drops of fluorescent glue, creating markers for the robot to follow (much like an actor wearing a motion-capture suit in a Hollywood movie). At the same time, the camera system creates a 3D tissue model using a grid of light points projected onto the area. Together, these technologies allow the robot to see what is in front of it.
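To make the marker-tracking idea concrete, here is a minimal, hypothetical sketch of the kind of image-processing step involved: isolating the bright fluorescent dots in a single camera frame and turning them into coordinates a robot could follow. It assumes OpenCV and NumPy, and the function name and threshold values are illustrative placeholders, not anything from the STAR team’s actual software.

```python
# Hypothetical sketch: locating fluorescent markers in one camera frame.
# The HSV thresholds are placeholders and would need calibration for a real
# fluorescent dye under the lighting used in surgery.
import cv2
import numpy as np

def find_marker_centroids(frame_bgr, hsv_lo=(40, 80, 80), hsv_hi=(80, 255, 255)):
    """Return (x, y) pixel centroids of bright fluorescent blobs in one frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))  # isolate glowing dye
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # drop speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:  # ignore degenerate blobs
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

# In a working system, these per-frame 2D positions would be fused with the
# structured-light depth map, frame after frame, so the robot always knows
# where the tissue edges are even as they move.
```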

“What’s really special about our vision system is that it allows us not only to reconstruct the appearance of tissue, but to do it fast enough to work in real time,” says Justin Opfermann, a developer of the STAR system and a graduate student in engineering at Johns Hopkins University. “If something moves during the operation, you can detect it and follow it.”

The robot can then use this visual information to propose the best course of action, either by presenting the human operator with several plans to choose from or by checking in with the operator between stitches. In testing, STAR performed well on its own, but not perfectly: it placed 83% of the stitches by itself, while a human had to intervene to correct the remaining 17%.
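For a rough sense of what “presenting the operator with a plan” could look like, the sketch below proposes evenly spaced stitch positions along the midline between two tracked tissue edges and hands them back for human approval. The data layout, spacing and function name are assumptions for illustration, not the published STAR planner.

```python
# Hypothetical suture-planning step: given tracked 3D marker positions along the
# two tissue edges to be joined, propose evenly spaced stitch locations midway
# between them for a human operator to review.
import numpy as np

def propose_stitch_plan(edge_a, edge_b, spacing_mm=3.0):
    """edge_a, edge_b: (N, 3) arrays of marker positions (mm) along each tissue edge."""
    midline = (np.asarray(edge_a) + np.asarray(edge_b)) / 2.0  # the seam to be sutured
    # cumulative arc length along the seam
    seg = np.linalg.norm(np.diff(midline, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    # place candidate stitches every `spacing_mm` of arc length
    targets = np.arange(0.0, arc[-1], spacing_mm)
    plan = np.stack([np.interp(targets, arc, midline[:, k]) for k in range(3)], axis=1)
    return plan  # (M, 3) candidate stitch positions, shown to the operator for approval

# Example: two roughly parallel edges 4 mm apart
edge_a = np.column_stack([np.linspace(0, 30, 11), np.zeros(11), np.zeros(11)])
edge_b = edge_a + np.array([0.0, 4.0, 0.0])
print(propose_stitch_plan(edge_a, edge_b))
```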

“That 83% can definitely be improved on,” says Opfermann. Most of the problems arose because the robot had a little trouble finding the right angle at certain corners and needed a human to nudge it into the right place, he says. Newer trials, not yet published, are showing success rates in the 90% range. In the future, the human may simply have to approve the plan and then oversee its execution without intervening at all.

Since NASA’s early work in the 1970s, surgical robots have gradually become more and more capable. Eventually, they may be able to make and execute decisions on their own, without oversight or control from human surgeons.

Passing the safety test

For now, however, there still needs to be someone in the driver’s seat, so to speak. And that may remain the case for a while for many kinds of autonomous robots: even if, in theory, we could trust a robot with all the decision-making, doing so raises a question that has also dogged self-driving cars.

“What happens if any of this goes wrong?” asks Attanasio. “What happens if the car gets into an accident?”

The general consensus at the moment is that people should ultimately stay in control, at least in a supervisory role, reviewing and approving procedures and standing ready to step in in case of emergency.

Yet proving to hospitals and regulators that autonomous robots are safe and effective could be the biggest hurdle to getting truly human-free robots into the operating room. Experts have several ideas about how to clear it.

For example, designers will likely need to be able to explain to regulators exactly how the robots think and decide what to do next, says Attanasio, especially if they reach the point where they stop merely assisting a human surgeon and begin practicing medicine themselves. That explanation may be easier said than done, as current AI systems leave observers with few clues about how they arrive at their decisions. Engineers may therefore want to design systems that are “explainable” from the start.

Pietro Valdastri, a biomedical engineer at the University of Leeds in England and one of Attanasio’s co-authors, believes the regulatory problem will not be easy for any manufacturer to solve, but he has an alternative. “The solution here is to build a system that, even while operating on its own, is inherently safe.” That means the next generation of surgical robots may look less like roadsters and more like bumper cars.

Valdastri is working on so-called soft robots, especially for colonoscopies. Traditionally, a colonoscopy involves threading a flexible tube tipped with a camera (an endoscope) through the intestine to look for early signs of colon cancer. The procedure is recommended for everyone over 45, but mastering the endoscope takes a great deal of time and training. Because properly trained operators are scarce, waiting lists have grown longer.

According to Valdastri, an intelligent robot that steers itself would make the task much easier, more like driving a car in a video game. The doctor could then focus on what matters most: spotting the first signs of cancer. And a robot built from soft materials is inherently safer than more rigid devices. It may even reduce the need for anesthesia or sedatives, since it is easier to avoid pressing on the intestinal wall, Valdastri says. And because the robot has no ability to cut or shear anything, regulators may find it easier to accept.

At first, autonomous robots may only be approved for simpler tasks, such as holding a camera, Opfermann says. As these basic tasks are approved, they can accumulate into an increasingly autonomous system. In cars, we first had cruise control, he notes, but now there is brake assist, lane-keeping assist and even parking assist, all of which are steps toward a driverless system.

“I think it’s going to be a bit like that,” says Opfermann, “where we see small, stand-alone tasks that end up coming together to form a complete system.”

Translated and published with the kind permission of Knowable Magazine. The original article can be found HERE.
