Paper Digest

Latest paper summaries by research topic — PubMed, last 2 years, auto-updated daily at 9 AM

Medical Imaging AI (2 papers)
Ethical and Legal Concerns of Deepfake Technology in Biomedical Imaging: A Comprehensive Survey.
👤 Rajpar Suhail Ahmed, Chen Hongsong, Ashiru Aliyu 📰 Annals of biomedical engineering 📅 2026
📝 Abstract Summary
Deepfakes have posed severe challenges to healthcare systems, as fake medical images and videos can be used to disseminate false information about an organization or person. These challenges have opened room for further solutions. This study therefore provides a survey highlighting the considerable strides made in deepfake detection technologies, showcasing approaches from advanced machine learning techniques to multi-modal systems, so that detection tools can be built and applied responsibly.
Deepfakes have posed severe challenges to healthcare systems, as fake medical images and videos can be used to disseminate false information about an organization or person. These challenges have opened room for more solutions to address them. This study therefore provides a survey that highlights the considerable strides made in the development of deepfake detection technologies while showcasing various approaches, from advanced machine learning techniques to multi-modal systems. The progress made in identifying deepfakes, particularly with regard to deep learning and hybrid models, shows promise for detecting alterations in digital content and medical imaging. However, these technologies show differing degrees of efficacy, suggesting the need for customized detection tactics that account for the particular difficulties of certain domains, such as nuclear medicine and endoscopic videography. In addition, the application of these technologies raises significant ethical and legal questions, including those pertaining to data security, privacy, and possible abuses of artificial intelligence. It therefore becomes critical to survey these issues in order to build and apply deepfake detection tools responsibly.
Full text on PubMed →
Commercial AI Model Diagnostic Accuracy for Intracranial Large- and Medium-Vessel Occlusion in Emergency CT Angiography.
👤 Andersson Henrik, Hansen Björn, Wassélius Johan 📰 Radiology. Artificial intelligence 📅 2026
📝 Abstract Summary
The diagnostic accuracy of AIDOC-VO, the first commercial artificial intelligence tool for intracranial large- and medium-vessel occlusion (LVO/MeVO) detection on head-and-neck CT angiography (CTA), was evaluated in a multicenter emergency setting. A prospective diagnostic-accuracy study of 3,031 adult CT angiograms (mean age, 67.3 years ± 16.4 [SD]; 1,549 females) acquired March-July 2024 across a ten-hospital region was performed. The AI model was compared with clinical radiology reporting. Performance did not differ significantly from clinical radiology reporting.
The diagnostic accuracy of AIDOC-VO, the first commercial artificial intelligence tool for intracranial large- and medium-vessel occlusion (LVO/MeVO) detection on head-and-neck CT angiography (CTA), was evaluated in a multicenter emergency setting. A prospective diagnostic-accuracy study of 3,031 adult CT angiograms (mean age, 67.3 years ± 16.4 [SD]; 1,549 females) acquired March-July 2024 across a ten-hospital region was performed. The AI model was compared with clinical radiology reporting. Examinations flagged as positive or doubtful by either the AI model or the report underwent blinded rereading to establish the reference standard. Of 3,031 CT angiograms, valid AI model output was yielded for 2,804 (92.5%), of which 224/2,804 (8.0%) had vessel occlusion (VO) on reference-standard reading. For VO detection within intended use (218/224), sensitivity was 81.7% (178/218) (clinical report: 81.2% [177/218]; P = .91), and specificity was 99.6% (2,569/2,580) (clinical report: 99.3% [2,561/2,580]; P = .12). LVO sensitivity was 92.8% (64/69) (clinical report: 87.0% [60/69]; P = .42) and MeVO sensitivity was 76.1% (121/159) (clinical report: 79.2% [126/159]; P = .55). The AI model identified VOs missed by radiologists in 42 examinations, for an enhanced detection rate of 18.8% (42/224; 15 per 1,000 CT angiograms), and generated 11 false alerts (3.9 per 1,000 CT angiograms). Performance did not differ significantly from clinical radiology reporting. ©RSNA, 2026.
Full text on PubMed →
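The headline metrics in this abstract can be reproduced directly from the counts it quotes. A quick arithmetic check (the `pct` helper is ours, for illustration only, not part of the study):

```python
# Cross-check the diagnostic metrics reported for the AI model,
# using the raw counts quoted in the abstract.

def pct(numerator: int, denominator: int) -> float:
    """Return a percentage rounded to one decimal place."""
    return round(100 * numerator / denominator, 1)

sensitivity = pct(178, 218)        # VO detection within intended use
specificity = pct(2569, 2580)
lvo_sensitivity = pct(64, 69)
mevo_sensitivity = pct(121, 159)
enhanced_detection = pct(42, 224)  # VOs missed by radiologists

# False alerts per 1,000 CT angiograms with valid AI output
false_alert_rate = round(11 / 2804 * 1000, 1)

print(sensitivity, specificity, lvo_sensitivity, mevo_sensitivity,
      enhanced_detection, false_alert_rate)
# → 81.7 99.6 92.8 76.1 18.8 3.9
```

Every figure matches the abstract, including the 3.9 false alerts per 1,000 examinations.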
Bladder Cancer (2 papers)
A deep learning-driven automated treatment planning framework for cervical cancer patients treated with volumetric modulated arc therapy.
👤 Ning Boda, Liang Xiuyan, Cui Zhenguo et al. 📰 Radiation oncology (London, England) 📅 2026
📝 Abstract Summary
The rapid and efficient generation of high-quality, dose-consistent volumetric modulated arc therapy (VMAT) plans remains challenging in radiotherapy. This study proposes a deep learning (DL) end-to-end (E2E) auto-planning framework and validates its practicality and feasibility for clinical implementation. An E2E auto-planning framework with a two-stage cascaded DL network was developed: Stage 1 predicted a coarse dose from CT and structure masks, and Stage 2 refined it using four beam-band priors and a composite loss. The proposed DL method achieved the best performance, with Dose score, DVH score and snDVH score of 2.114 ± 0.218 Gy, 1.194 ± 0.295 Gy and 2.027 ± 0.586, respectively.
The rapid and efficient generation of high-quality, dose-consistent volumetric modulated arc therapy (VMAT) plans remains challenging in radiotherapy. This study proposes a deep learning (DL) end-to-end (E2E) auto-planning framework and validates its practicality and feasibility for clinical implementation. A total of 458 cervical cancer VMAT plans were enrolled and split into training, validation, and test cohorts. An E2E auto-planning framework with a two-stage cascaded DL network was developed: Stage 1 predicted a coarse dose from CT and structure masks, and Stage 2 refined it using four beam-band priors and a composite loss. Dose-volume histogram (DVH) endpoints from the refined predicted dose were converted into Monaco objectives via a scripting module for iterative optimization. Performance was evaluated with Dose, DVH, and snDVH scores, ablations, and comparisons with manual plans in terms of quality, clinical evaluation and deliverability. The proposed DL method achieved the best performance, with Dose score, DVH score and snDVH score of 2.114 ± 0.218 Gy, 1.194 ± 0.295 Gy and 2.027 ± 0.586, respectively. Compared with manual plans, E2E auto-plans preserved target volume coverage while reducing all DVH metrics for bladder, rectum, small intestine, and spinal cord by 2%-35% (all p < 0.05). The gamma passing rate of E2E auto-plans was higher than that of manual plans at the 3%/3 mm gamma criterion (98.1% vs. 97.9%). The proposed auto-planning framework demonstrated a high level of automation and clinical applicability, offering a reliable and promising tool to support radiotherapy workflows.
Full text on PubMed →
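The framework above converts dose-volume histogram (DVH) endpoints into optimizer objectives. As background, DVH endpoints such as V_x (fraction of a structure receiving at least x Gy) and D_x% (minimum dose to the hottest x% of voxels) can be read off a voxel dose array. The sketch below is a generic illustration with toy data, not the paper's Monaco scripting module:

```python
import numpy as np

# Illustrative DVH endpoints computed from a voxel dose array for one
# structure. Generic sketch only; the study's pipeline is not public here.

def v_gy(dose: np.ndarray, threshold_gy: float) -> float:
    """V_x: fraction of the structure receiving at least `threshold_gy`."""
    return float((dose >= threshold_gy).mean())

def d_pct(dose: np.ndarray, volume_pct: float) -> float:
    """D_x%: minimum dose received by the hottest `volume_pct`% of voxels."""
    return float(np.percentile(dose, 100 - volume_pct))

rng = np.random.default_rng(0)
bladder_dose = rng.normal(loc=35.0, scale=8.0, size=10_000)  # toy dose in Gy

print(f"V40Gy = {v_gy(bladder_dose, 40.0):.2%}")
print(f"D2%   = {d_pct(bladder_dose, 2.0):.1f} Gy")
```

Endpoints like these are what the scripting module would translate into per-structure optimization objectives.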
Urine-Based Spectroscopy/AI Platform for Early Detection of Multiple Cancers.
👤 Mertz Leslie 📰 IEEE pulse 📅 2026
📝 Abstract Summary
A new test uses artificial intelligence to identify cancer-indicative patterns of volatile organic compounds in urine. It has received U.S. FDA breakthrough device designation for early bladder cancer detection. The test is also designed for the early detection of other cancers, including colorectal, stomach, pancreatic, prostate, kidney, breast, ovarian, cervical, and lung cancers.
A new test uses artificial intelligence to identify cancer-indicative patterns of volatile organic compounds in urine. The test received U.S. FDA breakthrough device designation for early bladder cancer detection. It is also designed for the early detection of other cancers, including colorectal, stomach, pancreatic, prostate, kidney, breast, ovarian, cervical, and lung cancers.
Full text on PubMed →
Colorectal Cancer Staging (2 papers)
Integrating liquid biopsies and artificial intelligence for early cancer detection: A systematic review and meta-analysis.
👤 Filis Panagiotis, Markozannes Georgios, Salgkamis Dimitrios et al. 📰 European journal of cancer (Oxford, England : 1990) 📅 2026
📝 Abstract Summary
The latest generation of liquid biopsies incorporates multi-omic features, including genomics, methylomics, and fragmentomics. This study aims to evaluate the integration of machine learning (ML) with circulating cell-free DNA (cfDNA) analysis for early cancer detection. ML and cfDNA profiling show potential for early cancer detection, with ensemble methods, neural networks and random forests achieving the best overall performance. Fragmentomic features provide the highest sensitivity.
The latest generation of liquid biopsies incorporates multi-omic features, including genomics, methylomics, and fragmentomics. Machine learning (ML) approaches have been proposed to synthesize these complex biological data for the development of diagnostic classifiers. This study aims to evaluate the integration of ML with circulating cell-free DNA (cfDNA) analysis for early cancer detection. Medline, Embase, Cochrane, and Web of Science were searched in July 2025. Eligible studies combined ML and cfDNA features to distinguish cancer patients (stages I-III) from non-cancer controls. Summary diagnostic performance metrics and their 95% confidence intervals (CI) were calculated. The study included 109 articles permitting analyses for lung (n = 34), liver (n = 29), colorectal (n = 28), pancreatic (n = 16), breast (n = 17), esophageal (n = 12), ovarian (n = 13), gastric (n = 9), head and neck (n = 4), and mixed (n = 27) cancer types. Specificity was consistently high across all tumor types and stages (94%-99%). Sensitivity ranged from 72% to 92% for stages I-III, 44%-91% for stage I, 71%-98% for stage II, and 83%-99% for stage III. In the pooled study population, neural networks (90%, 95% CI: 81%-95%), random forest (86%, 95% CI: 77%-92%) and heterogeneous ensemble learning (85%, 95% CI: 79%-89%) demonstrated the highest sensitivity. The stratified analysis by classifier feature revealed 86% (95% CI: 80%-90%) sensitivity for fragmentation and 81% (95% CI: 76%-85%) for methylation, with 92%-96% specificity. ML and cfDNA profiling show potential for early cancer detection, with ensemble methods, neural networks and random forests achieving the best overall performance. Fragmentomic features provide the highest sensitivity.
Full text on PubMed →
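The abstract reports sensitivities with 95% confidence intervals. The review pools studies with meta-analytic models, but for intuition, a single study's sensitivity CI can be approximated with a Wilson score interval. The sketch below uses a hypothetical study (86 of 100 stage I-III cancers detected) and is not the authors' method:

```python
from math import sqrt

# Wilson score 95% CI for a single-study sensitivity -- a rough
# illustration only; the review pools studies with meta-analytic
# models, not this simple interval.

def wilson_ci(successes: int, n: int, z: float = 1.96):
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical study: 86 of 100 stage I-III cancers detected
lo, hi = wilson_ci(86, 100)
print(f"sensitivity 86% (95% CI {lo:.1%}-{hi:.1%})")
```

Note how even a mid-sized study yields a CI several points wide, which is why pooled intervals such as 81%-95% for neural networks are unsurprising.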
Exploring Machine Learning Approaches for Decision Support in Neoadjuvant Therapy of Locally Advanced Rectal Cancer.
👤 Dhar Eshita, Kabir Muhammad Ashad, Nadar Divyabharathy Ramesh et al. 📰 Oncology research 📅 2026
📝 Abstract Summary
Decisions regarding chemotherapy (CT) after neoadjuvant concurrent chemoradiotherapy (nCCRT) for locally advanced rectal cancer (LARC) are challenging due to limited evidence guiding treatment. This study evaluated the predictive performance of machine learning (ML) models in patients treated with nCCRT alone vs. those receiving nCCRT plus CT. This retrospective study included 409 patients with LARC treated at three affiliated hospitals of Taipei Medical University. ML-based analysis identified key predictors and demonstrated good model performance, supporting individualised post-nCCRT chemotherapy decisions.
Decisions regarding chemotherapy (CT) after neoadjuvant concurrent chemoradiotherapy (nCCRT) for locally advanced rectal cancer (LARC) are challenging due to limited evidence guiding treatment. This study aimed to (i) evaluate the predictive performance of machine learning (ML) models in patients treated with nCCRT alone vs. those receiving nCCRT plus CT, (ii) identify features associated with treatment improvement, and (iii) derive ML-based thresholds for treatment response. This retrospective study included 409 patients with LARC treated at three affiliated hospitals of Taipei Medical University. Patients were categorised into two groups: nCCRT alone followed by surgery (n = 182) and nCCRT plus additional CT (n = 227). Thirty-four baseline demographic, tumor, and laboratory variables were analysed. Four ML algorithms (K-Star, Random Forest, Multilayer Perceptron, and Random Committee) were evaluated, while five feature-ranking algorithms identified influential attributes among improved patients across both treatments. Decision Stump and AdaBoostM1 were applied to derive threshold-based patterns. K-Star achieved the highest accuracy for nCCRT alone (80.8%; AUC = 0.89), while Random Committee performed best for nCCRT plus CT (77.3%; AUC = 0.84). Clinical N stage (cN) ranked highest, followed by sodium (Na), glutamic pyruvic transaminase (GPT), estimated glomerular filtration rate (eGFR), body weight, red blood cell count, mean corpuscular hemoglobin concentration, and blood urea nitrogen. Threshold patterns suggested that CT-related improvement aligned with higher lymphocyte percentage and lower platelet distribution width, whereas nCCRT-only improvement aligned with elevated eGFR, GPT, and cN = 2. In conclusion, ML-based analysis identified key predictors and demonstrated good model performance, supporting individualised post-nCCRT chemotherapy decisions.
Full text on PubMed →
Abdominal X-ray Emergency (0 papers)

No results found

Confocal Microscopy / H&E Conversion (2 papers)
Cancer-selective photoimmunotherapy spares T cells and NK cells and promotes antitumor immunity in an allogeneic human 3D culture model.
👤 Harman Rebecca C, Lozano Ivonne, Ramos Kristiana et al. 📰 Photochemistry and photobiology 📅 2026
📝 Abstract Summary
Epithelial ovarian cancer (EOC) is a lethal disease typically diagnosed at a late stage. There is an urgent need for treatment modalities that eliminate microscopic metastatic deposits missed by standard therapies while simultaneously engaging antitumor immunity. Using a 3D Matrigel dome model incorporating human ovarian cancer spheroids and allogeneic immune cells, we establish a broadly accessible imaging and analysis pipeline based on fluorescent labeling and 3D confocal microscopy to quantify cancer and immune cell viability. Together, these findings suggest that targeted PIT may extend the immune-modulatory foundations established for PDT and PDP, offering a strategy to simultaneously eradicate residual tumor deposits and promote antitumor immune priming in EOC.
Epithelial ovarian cancer (EOC) is a lethal disease typically diagnosed at a late stage. There is an urgent need for treatment modalities that eliminate microscopic metastatic deposits missed by standard therapies while simultaneously engaging antitumor immunity. Photodynamic therapy (PDT) has demonstrated immune-enhancing effects, including photodynamic priming (PDP), wherein sublethal photodynamic stress remodels the tumor microenvironment to facilitate immune activation and infiltration. Here, we investigate cancer-targeted photoimmunotherapy (PIT), a molecularly targeted form of PDT, as a strategy to build upon and potentially enhance PDP by selectively depleting cancer cells while preserving immune effectors critical to antitumor responses. Using a 3D Matrigel dome model incorporating human ovarian cancer spheroids and allogeneic immune cells, we establish a broadly accessible imaging and analysis pipeline based on fluorescent labeling and 3D confocal microscopy to quantify cancer and immune cell viability. In this system, the presence of T cells or peripheral blood mononuclear cells enhances cancer depletion following PIT, consistent with stimulation of an antitumor immune response. Importantly, PIT spares significantly more T cells and NK cells compared to untargeted PDT and cetuximab at equivalent concentrations. PIT reduces spheroid size while preserving effector immune populations within the tumor microenvironment. Together, these findings suggest that targeted PIT may extend the immune-modulatory foundations established for PDT and PDP, offering a strategy to simultaneously eradicate residual tumor deposits and promote antitumor immune priming in EOC.
Full text on PubMed →
Generative AI for misalignment-resistant virtual staining to accelerate histopathology workflows.
👤 Ma Jiabo, Li Wenqiang, Li Jinbang et al. 📰 Nature communications 📅 2026
📝 Abstract Summary
Accurate histopathological diagnosis typically relies on multiple chemical stains, a process that is labor-intensive, tissue-consuming, and environmentally taxing. We present a robust virtual staining framework that mitigates spatial mismatches through a cascaded registration mechanism. By decoupling image generation from spatial alignment, our method enables high-fidelity staining even from imperfectly paired or misaligned datasets without altering existing model architectures. In blinded evaluations, experienced pathologists achieved 52% accuracy in distinguishing virtual from chemical stains, indicating that the two were indistinguishable.
Accurate histopathological diagnosis typically relies on multiple chemical stains, a process that is labor-intensive, tissue-consuming, and environmentally taxing. While virtual staining offers a faster, tissue-conserving alternative, its clinical adoption is hindered by the requirement for perfectly aligned paired data, which is difficult to obtain due to tissue distortion during chemical processing. We present a robust virtual staining framework that mitigates spatial mismatches through a cascaded registration mechanism. By decoupling image generation from spatial alignment, our method enables high-fidelity staining even from imperfectly paired or misaligned datasets without altering existing model architectures. Our approach significantly outperforms state-of-the-art models across five datasets, showing a remarkable 23.8% improvement in image quality for highly misaligned samples. In blinded evaluations, experienced pathologists achieved 52% accuracy in distinguishing virtual from chemical stains, indicating that the two were indistinguishable. This framework simplifies data acquisition and provides a scalable pathway for integrating virtual staining into routine clinical workflows.
Full text on PubMed →
Pediatric Brain Disease Ultrasound-to-MRI Conversion (2 papers)
GlioMODA: Robust glioma segmentation in clinical routine.
👤 Canisius Julian, Buchner Josef, Rosier Marcel et al. 📰 Neuro-oncology advances 📅 2026
📝 Abstract Summary
Precise glioma segmentation in magnetic resonance imaging (MRI) is essential for accurate diagnosis, optimal treatment planning, and advancing clinical research. This study presents and evaluates GlioMODA, a robust deep learning framework designed for automated glioma segmentation that delivers consistent high performance across varied and incomplete MRI protocols. GlioMODA was trained and validated on the BraTS 2021 dataset (1251 training, 219 testing cases), systematically assessing performance across 11 clinically relevant MRI protocol combinations. GlioMODA demonstrated state-of-the-art segmentation accuracy across tumor subregions, maintaining robust performance with incomplete or heterogeneous MRI protocols.
Precise glioma segmentation in magnetic resonance imaging (MRI) is essential for accurate diagnosis, optimal treatment planning, and advancing clinical research. However, most deep learning approaches require complete, standardized MRI protocols that are frequently unavailable in routine clinical practice. This study presents and evaluates GlioMODA, a robust deep learning framework designed for automated glioma segmentation that delivers consistent high performance across varied and incomplete MRI protocols. GlioMODA was trained and validated on the BraTS 2021 dataset (1251 training, 219 testing cases), systematically assessing performance across 11 clinically relevant MRI protocol combinations. Segmentation accuracy was evaluated using Dice similarity coefficients (DSC) and panoptic quality metrics. Volumetric accuracy was benchmarked against manual ground truth, and statistical significance was established via Wilcoxon signed‑rank tests with Benjamini-Yekutieli correction. GlioMODA demonstrated state-of-the-art segmentation accuracy across tumor subregions, maintaining robust performance with incomplete or heterogeneous MRI protocols. Protocols including both T1-weighted contrast-enhanced and T2-FLAIR sequences yielded volumetric differences vs manual ground truth that were not statistically significant for enhancing tumor (median difference 55 mm³, P = .157) and whole tumor (median difference -7 mm³, P = 1.0), and exhibited median DSC differences close to zero relative to the 4‑sequence reference protocol. Omitting either sequence led to substantial and significant volumetric errors. GlioMODA facilitates reliable, automated glioma segmentation using a streamlined 2‑sequence protocol (T1‑contrast + T2‑FLAIR), supporting clinical workflow optimization and broader implementation of quantitative volumetry compatible with RANO 2.0 criteria. GlioMODA is published as an open-source, easy-to-use Python package at https://github.com/BrainLesion/GlioMODA/.
Full text on PubMed →
Insights into Accelerated MRI Protocols for Pediatric Brain Assessment in Emergency Cases.
👤 Kendel Josef Gabriel, Bender Benjamin, Gohla Georg et al. 📰 Diagnostics (Basel, Switzerland) 📅 2026
📝 Abstract Summary
Two accelerated magnetic resonance imaging (MRI) protocols for pediatric brain imaging, GOBrain and Deep Resolve Swift Brain, developed by Siemens Healthineers (Erlangen, Germany), were evaluated in a series of clinically relevant pediatric cases at 3 Tesla. Pediatric patients are particularly prone to motion, may be uncooperative, and often require sedation, especially in emergency settings. Consequently, there is a persistent clinical demand for fast brain MRI protocols that provide diagnostically sufficient image quality while minimizing examination time. In parallel, echo-planar imaging (EPI) has emerged as a promising approach for ultrafast multi-contrast imaging.
Two accelerated magnetic resonance imaging (MRI) protocols for pediatric brain imaging, GOBrain and Deep Resolve Swift Brain, developed by Siemens Healthineers (Erlangen, Germany), were evaluated in a series of clinically relevant pediatric cases at 3 Tesla. Pediatric patients are particularly prone to motion, may be uncooperative, and often require sedation, especially in emergency settings. Consequently, there is a persistent clinical demand for fast brain MRI protocols that provide diagnostically sufficient image quality while minimizing examination time. Contemporary turbo spin-echo (TSE)-based clinical protocols commonly integrate parallel imaging (PI) and simultaneous multi-slice (SMS) techniques to achieve substantial reductions in scan time. Recent advances in three-dimensional volumetric encoding, compressed sensing, and deep learning (DL)-based reconstruction have further mitigated geometry-factor-related noise amplification, enabling higher acceleration factors (GOBrain). In parallel, echo-planar imaging (EPI) has emerged as a promising approach for ultrafast multi-contrast imaging. To overcome the limitations of single-shot EPI, a multi-shot EPI-based brain MRI protocol combined with the DL-based reconstruction method Deep Resolve Swift Brain has been developed. This approach leverages the efficiency of EPI while improving image quality. Using these accelerated protocols, a comprehensive diagnostic multi-contrast brain MRI examination, particularly suited to triage and emergency imaging, can be completed in minutes. This case overview, covering therapy-related leukoencephalopathy in acute lymphoblastic leukemia (ALL), a brain abscess, traumatic diffuse axonal injury (DAI), a posterior circulation infarction due to vertebral artery dissection, leukostasis syndrome, and a posterior fossa tumor with obstructive hydrocephalus, demonstrates the potential clinical feasibility of both protocols in pediatric neuroimaging. These results position both protocols as supplementary options alongside established imaging protocols, while dedicated high-resolution protocols may remain necessary for subtle pathological findings, such as focal cortical dysplasia, and for neuronavigation until larger comparative studies are available.
Full text on PubMed →
MRI Super-Resolution (2 papers)
Super-resolution deep learning reconstruction enhances visualization of cerebral aneurysms on magnetic resonance angiography.
👤 Kanzawa Jun, Yasaka Koichiro, Kato Masayoshi et al. 📰 Neuroradiology 📅 2026
📝 Abstract Summary
(No abstract available)
Full text on PubMed →
Ultrafast deep learning super-resolution single-shot T2-weighted imaging for robust edema visualization in cardiovascular magnetic resonance.
👤 Aziz-Safaie Taraneh, Katemann Christoph, Peeters Johannes M et al. 📰 Journal of cardiovascular magnetic resonance : official journal of the Society for Cardiovascular Magnetic Resonance 📅 2026
📝 Abstract Summary
To compare the diagnostic quality of deep learning (DL) super-resolution reconstructed breath-hold (BH) and free-breathing (FB) single-shot (SSH) black-blood T2-weighted short tau inversion recovery (STIR) imaging with standard BH T2-STIR in cardiovascular magnetic resonance (CMR). In this prospective study, short-axis BH and FB SSH T2-STIR were added to a standard cardiomyopathy CMR protocol at 1.5T, and DL super-resolution reconstruction was performed. Two readers evaluated diagnostic quality and certainty using a five-point Likert scale. Slice-level analysis showed that BH DL-SSH T2-STIR consistently provided superior image quality in apical slices compared to BH SSH and standard T2-STIR.
To compare the diagnostic quality of deep learning (DL) super-resolution reconstructed breath-hold (BH) and free-breathing (FB) single-shot (SSH) black-blood T2-weighted short tau inversion recovery (STIR) imaging with standard BH T2-STIR in cardiovascular magnetic resonance (CMR). In this prospective study, short-axis BH and FB SSH T2-STIR were added to a standard cardiomyopathy CMR protocol at 1.5T, and DL super-resolution reconstruction was performed. Two readers evaluated diagnostic quality and certainty using a five-point Likert scale. Presence of focal edema was assessed on T2-weighted sequences including standard T2-STIR and T2 mapping (both used for reference clinical assessment) as well as SSH T2-STIR and DL-SSH T2-STIR. Friedman test and one-way ANOVA were performed. A total of 81 participants (mean age: 54 ± 20 years; 50 men) were included. No difference was found in edema detection between reference assessment and DL-SSH T2-STIR (both 21/81 participants [26%]). Scan time was reduced by 63% for BH and 86% for FB DL-SSH T2-STIR compared to standard T2-STIR (90 ± 6 s vs. 35 ± 3 s vs. 243 ± 16 s; p < .0001). BH and FB DL-SSH T2-STIR achieved lower artifact burden (5 [IQR, 4-5] vs. 4 [IQR, 4-5] vs. 4 [IQR, 3-5]; p < .0001) and superior image contrast and sharpness compared to standard T2-STIR, especially in non-cooperative or arrhythmic participants. BH and FB DL-SSH T2-STIR imaging provided higher diagnostic certainty than standard T2-STIR (5 [IQR, 5-5] vs. 5 [IQR, 5-5] vs. 4 [IQR, 4-5]; p < .0001). Edema visibility was superior in BH DL-SSH compared to BH SSH and standard T2-STIR (5 [IQR, 4.8-5] vs. 4 [IQR, 3.3-5] vs. 4 [IQR, 3-4.8]; p < .0001). Inter-rater agreement was substantial to excellent in the rating of edema visibility (BH DL-SSH T2-STIR, κ: 0.73 [95% CI: 0.44-1.0]; BH SSH T2-STIR, κ: 0.79 [95% CI: 0.66-0.97]; standard T2-STIR, κ: 0.86 [95% CI: 0.71-1.0]). Slice-level analysis showed that BH DL-SSH T2-STIR consistently provided superior image quality in apical slices compared to BH SSH and standard T2-STIR (4 [IQR, 4-5] vs. 4 [IQR, 4-4] vs. 4 [IQR, 3-4]; p < .0001). DL-SSH imaging enabled ultrafast T2-STIR acquisition and robust edema assessment in routine clinical CMR.
Full text on PubMed →
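The quoted scan-time reductions follow directly from the mean acquisition times reported in the abstract (standard 243 s, BH 90 s, FB 35 s); a quick check:

```python
# Scan-time reductions quoted in the abstract, recomputed from the
# mean acquisition times (standard 243 s, BH 90 s, FB 35 s).

standard, bh, fb = 243, 90, 35

bh_reduction = round((1 - bh / standard) * 100)
fb_reduction = round((1 - fb / standard) * 100)

print(bh_reduction, fb_reduction)  # → 63 86
```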
Medical Image Segmentation & Detection (2 papers)
A multi-paradigm evaluation spanning pixels to voxels for deep learning-based kidney tumor segmentation.
👤 Lalwani Rahul, Telang Akshada, Tiwari Vibha 📰 Journal of medical engineering & technology 📅 2026
📝 Abstract Summary
Automated segmentation of kidney tumors from computed tomography (CT) scans is critical for diagnosis, treatment planning, and monitoring of renal cell carcinoma (RCC). Unlike existing studies that emphasise quantitative metrics, this work investigates the critical gap between high segmentation accuracy and clinical applicability. We systematically evaluate six diverse architectures spanning 2D CNNs (U-Net, MedSAM) to 3D volumetric models (nnU-Net, UNETR, TotalSegmentator, MIScnn) on the KiTS19 dataset, emphasising false positive analysis, boundary delineation accuracy, and computational feasibility. Key findings: (1) MONAI U-Net achieves a Dice score of 0.98 but exhibits excessive false positives, undermining clinical trust; (2) nnU-Net provides balanced performance (Dice: 0.82) with consistent results but demands 16 GB of VRAM; (3) MedSAM achieves state-of-the-art accuracy (Dice: 0.99) with minimal false positives but requires high-end GPUs; (4) computational constraints prevented full training of UNETR.
Automated segmentation of kidney tumors from computed tomography (CT) scans is critical for diagnosis, treatment planning, and monitoring of renal cell carcinoma (RCC). While recent deep learning models report high Dice scores (>0.97), their clinical utility remains questionable due to false positive predictions that misclassify healthy tissue as tumors and computational constraints limiting real-world deployment. Unlike existing studies that emphasise quantitative metrics, this work investigates the critical gap between high segmentation accuracy and clinical applicability. We systematically evaluate six diverse architectures spanning 2D CNNs (U-Net, MedSAM) to 3D volumetric models (nnU-Net, UNETR, TotalSegmentator, MIScnn) on the KiTS19 dataset, emphasising false positive analysis, boundary delineation accuracy, and computational feasibility. Key findings: (1) MONAI U-Net achieves a Dice score of 0.98 but exhibits excessive false positives, undermining clinical trust; (2) nnU-Net provides balanced performance (Dice: 0.82) with consistent results but demands 16 GB of VRAM; (3) MedSAM achieves state-of-the-art accuracy (Dice: 0.99) with minimal false positives but requires high-end GPUs; (4) computational constraints prevented full training of UNETR. This study identifies that high Dice scores do not guarantee clinical utility and provides actionable insights for developing clinically feasible segmentation tools for renal oncology applications, including treatment planning, longitudinal monitoring, and risk assessment.
Full text on PubMed →
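The comparisons above hinge on the Dice similarity coefficient, which for binary masks is 2|A∩B| / (|A| + |B|). A minimal sketch with toy masks (not KiTS19 data):

```python
import numpy as np

# Dice similarity coefficient for binary segmentation masks --
# the metric behind the scores quoted above (masks here are toy data).

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2 * intersection / total if total else 1.0

truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True            # 16-voxel "tumor"
pred = np.zeros_like(truth)
pred[3:7, 3:7] = True             # overlapping 16-voxel prediction

print(dice(pred, truth))  # → 0.5625 (overlap 9; 2*9 / 32)
```

Because Dice only measures overlap with the reference mask, a model can score near 0.98 on true tumors yet still raise false positives elsewhere, which is exactly the accuracy-vs-trust gap the paper examines.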
Deep Ensemble Learning to Detect Retinal Vascular Leakage on Ultrawide-Field Fundus Photographs of Patients With Uveitis.
👤 Kim Jongwoo, Nguyen Nam V, Soifer Matias A et al. 📰 Translational vision science & technology 📅 2026
📝 Abstract Summary
The purpose of this study was to develop a novel deep learning (DL) algorithm to detect retinal vascular leakage (RVL) on ultra-widefield fundus (UWF) images in patients with posterior segment uveitis. Several DL algorithms with different backbone architectures were trained and tested, and an ensemble learning (EL) method was adopted to enhance classification accuracy. EL based on 3 DL models showed superior performance with an accuracy of 0.7704, a sensitivity of 0.7699, a specificity of 0.7713, and an area under the curve (AUC) of 0.8018 for the wMildRVL dataset, and an accuracy of 0.7900, a sensitivity of 0.7819, a specificity of 0.8000, and an AUC of 0.8344 for the woMildRVL dataset. This model could serve as a screening tool to detect the presence of RVL on UWF images and thus determine the need for ultra-widefield fluorescein angiography (UWFFA), which would be especially helpful in resource-limited settings or in patients with known adverse reactions to fluorescein dye.
The purpose of this study was to develop a novel deep learning (DL) algorithm to detect retinal vascular leakage (RVL) on ultra-widefield fundus (UWF) images in patients with posterior segment uveitis. Ultra-widefield fluorescein angiography (UWFFA) and corresponding UWF images of patients who were evaluated at the uveitis clinic at the National Eye Institute, National Institutes of Health, were collected for this study. UWFFA images were used for the assessment of RVL, and the corresponding UWF images were used to train the algorithms. Several DL algorithms with different backbone architectures were trained and tested, and an ensemble learning (EL) method was adopted to enhance classification accuracy. A total of 405 eyes were included in the study. Two datasets were generated: wMildRVL (405 eyes) and woMildRVL (excluding eyes with mild RVL). EL based on 3 DL models showed superior performance with an accuracy of 0.7704, a sensitivity of 0.7699, a specificity of 0.7713, and an area under the curve (AUC) of 0.8018 for the wMildRVL dataset, and an accuracy of 0.7900, a sensitivity of 0.7819, a specificity of 0.8000, and an AUC of 0.8344 for the woMildRVL dataset. The proposed EL model demonstrated potential for distinguishing eyes with and without RVL on UWF images in posterior segment uveitis. This model could serve as a screening tool to detect the presence of RVL on UWF images and thus determine the need for UWFFA, which would be especially helpful in resource-limited settings or in patients with known adverse reactions to fluorescein dye.
Full text on PubMed →
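The abstract does not spell out how the three DL models are combined; one common ensembling scheme is soft voting, i.e., averaging each model's predicted probability per image and thresholding. A generic sketch with hypothetical scores, not the authors' implementation:

```python
import numpy as np

# Generic soft-voting ensemble: average the per-image probabilities of
# several classifiers and threshold the mean. The abstract does not
# specify the authors' exact ensembling scheme; this is one common approach.

def ensemble_predict(prob_matrix: np.ndarray, threshold: float = 0.5):
    """prob_matrix: shape (n_models, n_images) of P(leakage)."""
    mean_prob = prob_matrix.mean(axis=0)
    return mean_prob, (mean_prob >= threshold).astype(int)

# Three hypothetical models scoring four UWF images
probs = np.array([
    [0.9, 0.2, 0.6, 0.4],
    [0.8, 0.3, 0.4, 0.5],
    [0.7, 0.1, 0.7, 0.3],
])

mean_prob, labels = ensemble_predict(probs)
print(labels)  # → [1 0 1 0]
```

Averaging tends to cancel the uncorrelated errors of individual backbones, which is the usual rationale for the accuracy gain EL shows over single models.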