The image shows how the UB tool works when applied to histology image data. The large background image shows a mouse renal tissue section with kidney structures called glomeruli marked via automatically estimated boundaries, which can be iteratively updated during system training. Glomerular structures change as disease progresses. Credit: Brendon Lutnick
Pictures may be worth a thousand words, but with medical images, that’s an understatement. Digital images of biopsies are especially valuable in diagnosing and tracking the progression of certain diseases, such as chronic kidney disease and cancer.
Computational tools called neural networks, which focus on complex pattern recognition, are well-suited to such applications. But because machine learning is so complex, medical professionals typically rely on computer engineers to “train” or modify neural networks to properly annotate or interpret medical images.
Now, University at Buffalo researchers have developed a tool that lets medical professionals analyze images without engineering expertise. The tool and the image data that were used for its development are publicly available at:
https://github.com/SarderLab/H-AI-L
The technique was described in a paper published in Nature Machine Intelligence on Feb. 11. The researchers expect it to be applicable to digitized medical images of any organ; they demonstrated the tool with histology images of chronic kidney disease and magnetic resonance images of the human prostate gland.
“We have created an automatic, human-in-the-loop segmentation tool for pathologists and radiologists,” said Pinaki Sarder, Ph.D., corresponding and senior author, and assistant professor in the Department of Pathology and Anatomical Sciences in the Jacobs School of Medicine and Biomedical Sciences at UB. The paper’s lead author is Brendon Lutnick, a doctoral candidate at the Jacobs School working on his dissertation research under Sarder’s supervision.
Intuitive interface
Designed with what the researchers call an intuitive interface, the tool automatically improves annotation and segmentation of medical images based on what it “learns” from the way the human user interacts with the system.
“With our system, you don’t have to know any machine learning,” said Sarder. “Now medical professionals can do structure annotation by themselves.
“The technique empowers medical professionals for the first time to use their own familiar tools, such as a commonly used whole-slide viewer for image annotation, without getting lost in the translation of machine learning jargon,” he said.
Lutnick explained that the system is designed to improve its performance as it is repeatedly “trained” on the same dataset. “You want to train it on your own dataset iteratively,” he explained. “This optimizes the workload of the expert annotator as the system becomes more efficient each time you use it.”
The system improves iteratively, essentially learning each time the medical professional redraws a boundary on an image to pinpoint a particular structure or abnormality.
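That workflow can be pictured as a simple loop: the model predicts boundaries, the expert corrects them, and the corrections become new training data. The following minimal Python sketch illustrates the idea only; it is not the H-AI-L code. It stands in a toy per-pixel classifier for the tool’s neural network, and a placeholder function for the pathologist redrawing boundaries in a whole-slide viewer:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(img):
    # Represent each pixel by its intensity (one feature per pixel).
    return img.reshape(-1, 1)

# Hypothetical stand-ins for real slide data and expert interaction.
def load_unlabeled_images(n=3, size=32):
    rng = np.random.default_rng(0)
    return [rng.random((size, size)) for _ in range(n)]

def expert_corrects(img, predicted_mask):
    # Placeholder for the human redrawing boundaries; here we simply
    # pretend bright pixels are the true structure.
    return (img > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=10, random_state=0)
X_train, y_train = [], []

for i, img in enumerate(load_unlabeled_images()):
    if X_train:
        # Predict boundaries with the current model ...
        pred = model.predict(features(img)).reshape(img.shape)
    else:
        pred = np.zeros(img.shape, dtype=int)  # cold start: nothing learned yet
    # ... the expert corrects them ...
    corrected = expert_corrects(img, pred)
    # ... and the corrections become new training data for the next round.
    X_train.append(features(img))
    y_train.append(corrected.ravel())
    model.fit(np.vstack(X_train), np.concatenate(y_train))
    print(f"round {i + 1}: trained on {len(X_train)} annotated image(s)")

With each pass, the model has seen more expert-corrected examples, so its predictions require fewer corrections, which is the efficiency gain Lutnick describes.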
A better way to predict disease progression
The ultimate goal is a more precise understanding of a patient’s disease state. “When you take a biopsy, you want to figure out the image features and what they tell you about disease progression,” said Sarder.
He explained that, for example, a darker red area on an image of the glomerulus in the kidney, where waste products are filtered from blood, indicates sclerosis, which may signal that the disease has progressed. The more precisely the boundaries of those areas can be defined, the better the understanding of what stage of disease the patient is in and how it may progress in the future.
“The system performs better each time,” Lutnick said, “so the burden of the human operating the machine is reduced with each iteration. Each time the individual redraws a boundary on a sample, the system is learning. Importantly, this interaction allows the human to understand the weaknesses of the machine as it learns.”