New Computer Program Reduces Spine Surgery Errors Linked to “Wrong Level” Labeling
Posted by: Crystal Williams on: February 6, 2019
Pilot study shows ‘LevelCheck’ program may prevent operating on wrong spinal segment
Researchers at Johns Hopkins Medicine report that a computer program they designed may help surgeons identify and label spinal segments in real time during operating room procedures and avoid the costly and potentially debilitating consequences of operating on the wrong segment.
The current study builds on previously described work—published in April 2015 and March 2016—on the algorithm dubbed LevelCheck, which was designed and developed by Jeffrey Siewerdsen, Ph.D., professor of biomedical engineering, computer science and radiology at the Johns Hopkins University School of Medicine and founder of the school’s Imaging for Surgery, Therapy and Radiology Laboratory. Details of the current findings were published last fall in the Annals of Biomedical Engineering.
“Operating on the wrong part of a spine is rare, but even once is too much for a patient and a surgeon,” says Amir Manbachi, Ph.D., M.A.Sc., first author on the study, who was a research associate in Siewerdsen’s laboratory when the research was completed and is now an assistant research professor of biomedical engineering at the Johns Hopkins University. “LevelCheck is designed to help make such errors ‘never’ events.”
The researchers say current estimates indicate that spinal surgeons operate on the wrong spinal segment only about once in every 3,100 surgeries. The consequences, however, can be severe, potentially leading to paralysis, additional surgeries and large increases in health care costs.
Most humans have the same number of spinal segments, which are labeled by region and number, such as L1, L2 and so on in the lumbar spine. Currently, surgeons identify the correct target spinal segment, or “level,” by using X-rays of the patient taken in the operating room at the time of the surgery, and counting up or down the spinal segments on the X-rays to identify and verify the correct one.
These intraoperative X-rays sometimes can be difficult to read on the spot due to poor image quality, the patient’s position or weight, or atypical spinal anatomy. All of these issues potentially lead to surgeon error in identifying the correct spinal level on which to operate for such conditions as herniated discs.
Some surgeons also physically mark the correct spinal segment with a metal marker or surgical cement during a preliminary procedure, but with this approach, patients face additional surgical risks.
The LevelCheck program uses a patient’s MRI or CT scan images taken before the operation. By feeding the imaging data into the LevelCheck computer program, engineers use mathematical algorithms to compare anatomical landmarks, line them up, and transfer the digital labels of each spinal segment from the preoperative scan to the digital X-ray taken in the operating room.
The LevelCheck-verified spine segments are then presented to the surgeon to inform assessment of the correct spinal segment for surgery.
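The landmark-alignment and label-transfer idea described above can be illustrated with a toy sketch. This is a hypothetical, simplified example, not the study's actual method: the function name and point data are invented, and LevelCheck registers 3D preoperative scans to 2D X-rays rather than the 2D-to-2D case shown here. The sketch uses a least-squares rigid alignment (the Kabsch method) of matched landmarks to carry vertebral labels from preoperative coordinates onto intraoperative X-ray coordinates:

```python
import numpy as np

def align_and_transfer(pre_pts, intra_pts, labels):
    """Toy label transfer: rigidly align preoperative landmark points
    to intraoperative X-ray points by least squares (Kabsch method),
    then map each spinal-segment label to its position on the X-ray.
    All names and inputs here are illustrative, not from the study."""
    # Center both landmark sets on their centroids.
    pre_c = pre_pts - pre_pts.mean(axis=0)
    intra_c = intra_pts - intra_pts.mean(axis=0)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(pre_c.T @ intra_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        U[:, -1] *= -1
        R = (U @ Vt).T
    # Translation that maps the preoperative centroid onto the X-ray's.
    t = intra_pts.mean(axis=0) - R @ pre_pts.mean(axis=0)
    # Transform preoperative landmarks and attach their labels.
    mapped = pre_pts @ R.T + t
    return {label: tuple(p) for label, p in zip(labels, mapped)}
```

Given matched landmarks that differ only by a rotation and shift, the recovered transform places each label (e.g., "L1", "L2") at its corresponding position on the intraoperative image, mirroring in miniature how labels drawn on a preoperative scan end up overlaid on the operating room X-ray.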
For the current research, the scientists set up a mock operating room. From 364 past spinal surgeries involving long segments of the spine performed at The Johns Hopkins Hospital between 2012 and 2016, they selected 62 cases, specifically choosing X-ray images that were the most difficult to read and label.
A neuroradiologist had previously labeled all of the X-rays to establish the correct surgical sites on the images.
The researchers then asked five surgeons to label the same X-rays in two ways: with LevelCheck assistance while they labeled the segments, and without the program’s assistance, using LevelCheck afterward only to confirm their labeling.
They also randomly presented some of the same cases to the surgeons multiple times to account for fatigue or waning attention.
Without LevelCheck assistance, the surgeons labeled the target spinal segment in these challenging cases incorrectly in a median of 14 out of 46 trials.
However, when the surgeons used LevelCheck either before or after labeling the segments, the error rate dropped to a median of one out of 46 trials.
Next, the researchers tested LevelCheck’s labeling during 20 real-time operations at The Johns Hopkins Hospital after surgeons had labeled the segments without the aid of LevelCheck. While both the surgeons’ initial labeling and LevelCheck’s results were correct in all 20 operations, which were not selected for difficulty, the goal was to determine whether LevelCheck could be integrated into the real-world surgical workflow.
The scientists found that LevelCheck took between 17 and 72 seconds to deliver its labeling results, close to the 20- to 60-second range that surveyed surgeons said they were willing to wait for the results.
“A surgeon may say, ‘I don’t need this, I always get it right,’” says Siewerdsen, senior author of the study. “This algorithm actually improves surgeons’ rates of getting it right.”
Before and after each of the 62 mock operating room cases, including the repeated cases, the researchers gave questionnaires to the five surgeons, for a total of 410 questionnaires. The researchers found that LevelCheck improved the surgeons’ confidence in labeling 91 percent of the time (373 out of 410 times). Another 5.8 percent of the time (24 out of 410), surgeons said it didn’t have an impact on their confidence, while 3 percent of the time (13 out of 410) surgeons reported feeling the program reduced their confidence.
In the 20 cases in the real-time operating room setting, the surgeons said LevelCheck improved their confidence in 16 of the 20 cases and had no impact in the remaining four cases.
Although the researchers say they have not determined the cost of LevelCheck at this stage of development, they say it requires a computer with a graphics card and, at this point, an engineer to operate the software. They hope to further automate the system so that surgeons can use it without an engineer present. The researchers aim to conduct more trials of the program at other institutions.
Other researchers involved in this study include Tharindu De Silva (who completed retrospective studies related to this work), Ali Uneri, Matthew Jacobson, Joseph Goerres, Michael Ketcha, and Runze Han of the Johns Hopkins University Department of Biomedical Engineering; Nafi Aygun of the Johns Hopkins University Russell H. Morgan Department of Radiology and Radiological Science; David Thompson of the Johns Hopkins University Armstrong Institute for Patient Safety and Quality; Xiaobu Ye, Camilo Molina, Rajiv Iyer, Tomas Garzon-Muvdi, Michael Raber, Mari Groves, and Jean-Paul Wolinsky of the Johns Hopkins University Department of Neurosurgery; and Sebastian Vogt and Gerhard Kleinszig of Siemens Healthineers.
This research was supported by the National Institute of Biomedical Imaging and Bioengineering (NIH R01-EB-017226) and Siemens Healthineers.
The scientists have filed for patents related to the technology described in this research.
This press release was written by Paige Bartlett, science writing intern for the Johns Hopkins Institute for Basic Biomedical Sciences.