Curated by THEOUTPOST
On Tue, 10 Sept, 12:07 AM UTC
3 Sources
[1]
A fast and flexible approach to help doctors annotate medical scans
To the untrained eye, a medical image like an MRI or X-ray appears to be a murky collection of black-and-white blobs. It can be a struggle to decipher where one structure (like a tumor) ends and another begins. When trained to understand the boundaries of biological structures, AI systems can segment (or delineate) regions of interest that doctors and biomedical workers want to monitor for diseases and other abnormalities. Instead of losing precious time tracing anatomy by hand across many images, an artificial assistant could do that for them.

The catch? Researchers and clinicians must label countless images to train their AI system before it can accurately segment. For example, you'd need to annotate the cerebral cortex in numerous MRI scans to train a supervised model to understand how the cortex's shape can vary in different brains.

Sidestepping such tedious data collection, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts General Hospital (MGH), and Harvard Medical School have developed the interactive "ScribblePrompt" framework: a flexible tool that can help rapidly segment any medical image, even types it hasn't seen before.

Instead of having humans mark up each picture manually, the team simulated how users would annotate over 50,000 scans, including MRIs, ultrasounds, and photographs, across structures in the eyes, cells, brains, bones, skin, and more. To label all those scans, the team used algorithms to simulate how humans would scribble and click on different regions in medical images. In addition to commonly labeled regions, the team also used superpixel algorithms, which find parts of the image with similar values, to identify potential new regions of interest to medical researchers and train ScribblePrompt to segment them. This synthetic data prepared ScribblePrompt to handle real-world segmentation requests from users.

"AI has significant potential in analyzing images and other high-dimensional data to help humans do things more productively," says MIT PhD student Hallee Wong SM '22, the lead author on a new paper about ScribblePrompt and a CSAIL affiliate. "We want to augment, not replace, the efforts of medical workers through an interactive system. ScribblePrompt is a simple model with the efficiency to help doctors focus on the more interesting parts of their analysis. It's faster and more accurate than comparable interactive segmentation methods, reducing annotation time by 28 percent compared to Meta's Segment Anything Model (SAM) framework, for example."

ScribblePrompt's interface is simple: Users can scribble across the rough area they'd like segmented, or click on it, and the tool will highlight the entire structure or background as requested. For example, you can click on individual veins within a retinal (eye) scan. ScribblePrompt can also mark up a structure given a bounding box. Then, the tool can make corrections based on the user's feedback. If you wanted to highlight a kidney in an ultrasound, you could use a bounding box, and then scribble in additional parts of the structure if ScribblePrompt missed any edges. If you wanted to edit your segment, you could use a "negative scribble" to exclude certain regions.
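Taken together, these prompts form a simple input contract: an image plus optional positive scribbles, negative scribbles, clicks, and a bounding box. The sketch below shows one common way such prompts can be encoded as extra image channels and folded into an iterative correction loop; the channel layout, helper names, and the `model` placeholder are illustrative assumptions, not ScribblePrompt's actual interface.

```python
import numpy as np

def encode_prompts(image, pos=None, neg=None, bbox=None):
    """Stack user prompts as extra channels alongside a grayscale image.

    Encoding scribbles/clicks as binary masks and a bounding box as a
    filled rectangle is a common interactive-segmentation design; the
    exact layout here is an illustrative assumption.
    """
    h, w = image.shape
    pos = np.zeros((h, w), np.float32) if pos is None else pos
    neg = np.zeros((h, w), np.float32) if neg is None else neg
    box = np.zeros((h, w), np.float32)
    if bbox is not None:
        r0, c0, r1, c1 = bbox
        box[r0:r1, c0:c1] = 1.0
    return np.stack([image.astype(np.float32), pos, neg, box])  # (4, H, W)

def correction_loop(model, image, feedback_rounds, bbox=None):
    """Segment, show the result, fold in the user's corrective scribbles,
    and re-segment. `model` stands in for any network mapping a
    (4, H, W) input to per-pixel foreground probabilities."""
    pos = np.zeros(image.shape, np.float32)
    neg = np.zeros(image.shape, np.float32)
    mask = None
    for feedback in feedback_rounds:  # scribbles drawn after seeing `mask`
        pos = np.maximum(pos, feedback["positive"])  # add missed regions
        neg = np.maximum(neg, feedback["negative"])  # carve out mistakes
        mask = model(encode_prompts(image, pos, neg, bbox)) > 0.5
    return mask
```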
These self-correcting, interactive capabilities made ScribblePrompt the preferred tool among neuroimaging researchers at MGH in a user study: 93.8 percent of these users favored the MIT approach over the SAM baseline when refining segments in response to scribble corrections, and for click-based edits, 87.5 percent of the medical researchers preferred ScribblePrompt.

ScribblePrompt was trained on simulated scribbles and clicks on 54,000 images across 65 datasets, featuring scans of the eyes, thorax, spine, cells, skin, abdominal muscles, neck, brain, bones, teeth, and lesions. The model familiarized itself with 16 types of medical images, including microscopies, CT scans, X-rays, MRIs, ultrasounds, and photographs.

"Many existing methods don't respond well when users scribble across images because it's hard to simulate such interactions in training. For ScribblePrompt, we were able to force our model to pay attention to different inputs using our synthetic segmentation tasks," says Wong. "We wanted to train what's essentially a foundation model on a lot of diverse data so it would generalize to new types of images and tasks."

After taking in so much data, the team evaluated ScribblePrompt across 12 new datasets. Although it hadn't seen these images before, it outperformed four existing methods by segmenting more efficiently and giving more accurate predictions about the exact regions users wanted highlighted.

"Segmentation is the most prevalent biomedical image analysis task, performed widely both in routine clinical practice and in research -- which leads to it being both very diverse and a crucial, impactful step," says senior author Adrian Dalca SM '12, PhD '16, CSAIL research scientist and assistant professor at MGH and Harvard Medical School. "ScribblePrompt was carefully designed to be practically useful to clinicians and researchers, and hence to substantially make this step much, much faster."

"The majority of segmentation algorithms that have been developed in image analysis and machine learning are at least to some extent based on our ability to manually annotate images," says Harvard Medical School professor in radiology and MGH neuroscientist Bruce Fischl, who was not involved in the paper. "The problem is dramatically worse in medical imaging in which our 'images' are typically 3D volumes, as human beings have no evolutionary or phenomenological reason to have any competency in annotating 3D images. ScribblePrompt enables manual annotation to be carried out much, much faster and more accurately, by training a network on precisely the types of interactions a human would typically have with an image while manually annotating. The result is an intuitive interface that allows annotators to naturally interact with imaging data with far greater productivity than was previously possible."

Wong and Dalca wrote the paper with two other CSAIL affiliates: John Guttag, the Dugald C. Jackson Professor of EECS at MIT and CSAIL principal investigator; and MIT PhD student Marianne Rakic SM '22. Their work was supported, in part, by Quanta Computer Inc., the Eric and Wendy Schmidt Center at the Broad Institute, the Wistron Corp., and the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health, with hardware support from the Massachusetts Life Sciences Center.

Wong and her colleagues' work will be presented at the 2024 European Conference on Computer Vision and was presented as an oral talk at the DCAMI workshop at the Computer Vision and Pattern Recognition Conference earlier this year. They were awarded the Bench-to-Bedside Paper Award at the workshop for ScribblePrompt's potential clinical impact.
[2]
Interactive AI framework provides fast and flexible approach to help doctors annotate medical scans
The findings are published on the arXiv preprint server.
[3]
MIT's new AI tool cuts medical imaging annotation time by 28%
When AI systems are trained to understand the boundaries of biological structures, they can segment (or delineate) regions of interest that doctors and biomedical workers want to monitor for diseases and other abnormalities. Instead of wasting time manually tracing anatomy across multiple images, an artificial assistant could handle that task.

Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts General Hospital (MGH), and Harvard Medical School have created an interactive tool called the "ScribblePrompt" framework. This tool can quickly segment any medical image, even types it hasn't seen before, without tedious data collection. Instead of manually marking up each picture, the team simulated how users would annotate over 50,000 scans, including MRIs, ultrasounds, and photographs, across structures in the eyes, cells, brains, bones, skin, and more.

The team used algorithms to label all those scans, replicating how humans would scribble and click on various areas in medical images. In addition to commonly labeled regions, the team utilized superpixel algorithms, which group nearby pixels with similar values, to identify potential new regions of interest for medical researchers and train ScribblePrompt to segment them.
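For readers curious how superpixels can propose such regions, the snippet below sketches one plausible setup using the off-the-shelf SLIC algorithm from scikit-image; the parameter values, helper names, and the click simulator are illustrative assumptions rather than the team's actual pipeline.

```python
import numpy as np
from skimage.segmentation import slic

def propose_regions(image, n_segments=100):
    """Group nearby pixels with similar intensity into superpixels, and
    treat each one as a candidate target mask for a synthetic
    segmentation task."""
    labels = slic(image, n_segments=n_segments, compactness=0.1,
                  channel_axis=None, start_label=1)  # grayscale input
    return [labels == k for k in np.unique(labels)]

def simulate_click(mask, rng=None):
    """Simulate a user interaction: click a random pixel inside the target."""
    if rng is None:
        rng = np.random.default_rng()
    rows, cols = np.nonzero(mask)
    i = rng.integers(len(rows))
    return int(rows[i]), int(cols[i])

# Each (image, candidate mask, simulated prompts) triple would then
# become one synthetic training example.
```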
MIT researchers develop ScribblePrompt, an AI-powered tool that significantly speeds up medical image annotation. This interactive framework could transform diagnostic processes in healthcare.
Researchers at the Massachusetts Institute of Technology (MIT) have unveiled an AI-powered tool called ScribblePrompt, designed to change the way medical professionals annotate and interpret medical scans [1]. The framework promises to substantially reduce the time and effort required for image annotation, potentially streamlining diagnostic processes in healthcare.
Medical image annotation is a crucial yet time-consuming task in healthcare. Traditionally, doctors and radiologists spend hours meticulously outlining and labeling different areas of medical scans, such as tumors or organs. This process is not only labor-intensive but also prone to human error and inconsistency.
ScribblePrompt leverages advanced AI algorithms to assist medical professionals in the annotation process. The tool allows users to make rough outlines or "scribbles" on medical images, which the AI then refines into precise annotations [2]. This interactive approach combines human expertise with machine learning capabilities, resulting in a more efficient and accurate annotation process.
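As a concrete, classical stand-in for that refinement step, the sketch below grows each scribbled point into a region of similar intensity and smooths the result. ScribblePrompt instead learns this mapping with a neural network, so the flood-fill heuristic and all names here are purely illustrative.

```python
import numpy as np
from skimage.morphology import flood, binary_closing, disk

def refine_scribble(image, scribble_points, tolerance=0.1):
    """Grow each scribbled pixel into the connected region whose
    intensity stays within `tolerance` of the seed, then close small
    gaps along the boundary. A classical analogue of turning a rough
    scribble into a precise mask; the learned model replaces this
    heuristic."""
    mask = np.zeros(image.shape, dtype=bool)
    for r, c in scribble_points:
        mask |= flood(image, (r, c), tolerance=tolerance)
    return binary_closing(mask, disk(3))
```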
One of the most notable aspects of ScribblePrompt is its ability to cut annotation time. According to the MIT researchers, the tool reduces the time required for image annotation by 28 percent compared to Meta's Segment Anything Model (SAM) [3]. This time saving could allow medical professionals to focus more on patient care and complex decision-making tasks.
ScribblePrompt's design emphasizes flexibility, allowing it to be used across various medical imaging modalities, including MRI, CT scans, and X-rays. The tool can be easily adapted to different anatomical structures and pathologies, making it versatile for use in multiple medical specialties [1].

The introduction of ScribblePrompt could have far-reaching implications for healthcare. By streamlining the annotation process, it may lead to faster diagnoses, reduced workload for medical professionals, and potentially lower healthcare costs. Additionally, the tool's ability to maintain high accuracy while increasing speed could contribute to improved patient outcomes [2].

As ScribblePrompt continues to evolve, researchers are exploring ways to further enhance its capabilities and integrate it seamlessly into existing healthcare systems. The team at MIT is also investigating potential applications beyond medical imaging, such as in scientific research and industrial quality control [3].

While the potential benefits of ScribblePrompt are significant, the researchers emphasize the importance of addressing ethical considerations surrounding AI in healthcare. They stress that the tool is designed to assist, not replace, human expertise, and that proper validation and regulatory approval will be crucial for its widespread adoption [1].
References
[1] Massachusetts Institute of Technology | A fast and flexible approach to help doctors annotate medical scans
[2] Medical Xpress - Medical and Health News | Interactive AI framework provides fast and flexible approach to help doctors annotate medical scans
[3] MIT's new AI tool cuts medical imaging annotation time by 28%