Transformative Research and Efficient AI Technologies
for Multimodal Management of Tuberculosis
Segment cavities. Predict severity. Publish together.
| What | When | Includes |
|---|---|---|
| Dataset & Evaluation Protocol | May 15, 2026 | Training/validation data, submission format, clinical metadata schema, evaluation metrics, ranking formula, scoring scripts, hardware specs |
| Docker Submission Environment | Before Aug 1, 2026 | sample Dockerfile, submission template, execution guidelines |
Pulmonary tuberculosis (TB) remains a critical global health challenge, characterized by high morbidity and mortality. Chest X-ray (CXR) examination is an essential tool for TB screening, triage, and diagnosis.
Yet the manual assessment of CXRs is slow, subjective, and constrained by the global shortage of trained radiologists. In regions where TB claims the most lives, the tools to fight it remain least accessible — underscoring the need for automated, scalable diagnostic solutions.
This challenge addresses these gaps by introducing two innovative tasks, described below.
From a medical domain perspective, this framework enhances diagnostic precision and enables consistent longitudinal monitoring. From a technical perspective, it leverages state-of-the-art machine learning to provide scalable, explainable tools for diverse clinical settings. The anticipated impact includes streamlining personalized treatment planning, reducing the burden on healthcare professionals, and bridging the gap in global TB care through automated, high-fidelity diagnostics.
Why cavities matter — The presence of a thick-walled cavitary lesion in the upper lobes strongly favors active pulmonary tuberculosis, particularly post-primary (reactivation) disease. Such cavitation typically reflects caseous necrosis with subsequent drainage into the bronchial tree, creating an air-filled space that serves as a surrogate marker of high mycobacterial burden.
Since these cavitary foci frequently communicate with airways, patients with cavitary TB are more likely to exhibit smear- and culture-positive sputum, making the lesion a critical indicator of increased infectiousness and heightened transmission risk. While these radiological features are crucial indicators of TB severity, visual interpretation of CXRs is time-intensive and dependent on radiologist expertise, which may not always be available.
- Deep learning-based detection and segmentation of cavitary lesions from CXRs, targeting high-risk indicators of TB severity and infectiousness.
- Timika score prediction through multimodal fusion of CXR imaging and clinical metadata for quantitative TB severity assessment.
- External validation across 5 international sites (Korea, Mongolia, Peru, and the Philippines), ensuring clinical generalizability.
Build a model that determines whether a cavitary lesion is present on a frontal chest X-ray and segments the lesion when present.
Cavities are a key imaging marker of pulmonary tuberculosis severity and are strongly associated with bacterial burden, transmission risk, and treatment outcome. Automated cavity analysis can support more standardized and scalable TB assessment.
Participants must address two linked tasks:

1. Detection: classify whether a cavitary lesion is present on the frontal CXR.
2. Segmentation: delineate the lesion when one is present.
Your method should be robust to variation in lesion appearance, size, location, and image quality.
For each test image, submit:

- a binary label indicating cavity presence
- a segmentation mask for cavity-positive cases
Detailed submission format specifications (file naming, directory structure, mask format) will be provided upon dataset release on May 15, 2026.
Note: This challenge focuses on frontal CXRs (AP/PA) as they represent the primary imaging modality for TB screening in resource-limited settings. Lateral views are planned for future iterations (e.g., 2027).
Build a model that predicts the Timika score, a validated radiographic severity measure, by integrating frontal chest X-ray imaging with clinical metadata.
Accurate assessment of TB severity is fundamental to treatment planning and disease management, yet current practice relies heavily on subjective radiological interpretation. The Timika score provides a standardized, quantitative measure of disease extent, but its manual computation is time-consuming and prone to inter-observer variability. A multimodal AI framework that combines imaging features with clinical context (symptoms, demographics, bacterial load, medical history) can emulate the integrative reasoning of expert clinicians and improve diagnostic consistency across diverse healthcare settings.
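For orientation, the Timika (Ralph) score as originally published combines the estimated percentage of lung fields affected with a fixed bonus for cavitation. The sketch below illustrates that published formula only; the challenge's exact target definition will come with the dataset release, so treat this as an assumption, not the official scoring code.

```python
def timika_score(percent_lung_affected: float, cavitation_present: bool) -> float:
    """Illustrative Timika (Ralph) CXR severity score.

    Published form: percentage of total lung fields affected (0-100),
    plus 40 points if any cavitation is present. The challenge's exact
    definition may differ and will be specified at dataset release.
    """
    if not 0.0 <= percent_lung_affected <= 100.0:
        raise ValueError("percent_lung_affected must be in [0, 100]")
    return percent_lung_affected + (40.0 if cavitation_present else 0.0)
```

For example, a patient with 25% of lung fields affected and a visible cavity would score 65 under this published formula.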
Participants must develop a multimodal prediction pipeline that takes two inputs simultaneously:

- a frontal chest X-ray image
- the accompanying clinical metadata
The model must accurately predict the Timika score for both TB-positive and normal cases. Your method should be robust to missing or incomplete clinical variables and generalizable across diverse patient populations.
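Handling missing clinical variables is an explicit requirement. One simple, commonly used baseline is mean imputation over the training records; the sketch below (pure Python, with hypothetical field names) shows the idea, though participants are free to use any strategy.

```python
def impute_missing(records: list[dict], fields: list[str]) -> list[dict]:
    """Fill missing (None or absent) clinical variables with per-field means.

    A baseline sketch only; field names and the imputation strategy are
    illustrative, not part of the official challenge pipeline.
    """
    means = {}
    for f in fields:
        vals = [r[f] for r in records if r.get(f) is not None]
        means[f] = sum(vals) / len(vals) if vals else 0.0
    return [
        {f: (r[f] if r.get(f) is not None else means[f]) for f in fields}
        for r in records
    ]
```

For instance, with hypothetical fields `["age", "smear_grade"]`, a record missing `smear_grade` would receive the training-set mean for that field.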
For each case, submit:

- the predicted Timika score
Detailed submission format specifications and clinical metadata schema will be provided upon dataset release on May 15, 2026.
| Date | Event |
|---|---|
| Apr 10 – May 15, 2026 | Application for Participation |
| May 15, 2026 | Internal Dataset Release |
| May 15 – July 31, 2026 | Internal Competition Phase |
| July 31, 2026 | Announcement of Qualified Teams for External Competition |
| Aug 1 – Aug 20, 2026 | External Competition Phase |
| Aug 20 – Aug 31, 2026 | Organizer Verification and Docker Evaluation |
| September 1, 2026 | Final Ranking Announcement and MICCAI Invitation |
| October 4 & 8, 2026 | Challenge at MICCAI 2026 and Award Ceremony |
| November 1, 2026 | Paper Submission Deadline |
Participants will develop and validate their models using the internal dataset provided by the organizers. Based on the ranking achieved in this stage, selected teams will advance to the next stage.
Submission deadline: July 31, 2026, 09:00 AM (KST)
Selected teams will participate in the external competition for final evaluation. During this stage, each team may submit up to two times per week and must provide a Dockerfile. All submitted Dockerfiles will be executed directly by the organizers for final ranking.
A sample Dockerfile and submission template will be provided via the GitHub repository before the external competition begins.
Submission deadline: August 20, 2026, 09:00 AM (KST)
Important: Submissions with Dockerfiles that fail to execute successfully will be considered invalid.
Part of the Joint Thematic Day — Building Inclusive and Efficient AI Technologies for Medical Imaging in Africa and Other Resource-Constrained Settings
Chairs: Namkung Kim & Sengjoo Park
| Time | Session |
|---|---|
| 8:00 – 8:05 | Opening Remarks |
| 8:05 – 8:20 | (Keynote) Hoon Sang Lee (Right Foundation) — Bridging the Diagnostic Gap: Practical AI Deployment for TB in the Global South |
| 8:20 – 8:40 | Radiologists at the Forefront of Tuberculosis Eradication |
| 8:40 – 9:00 | TREAT-MMTB 2026 Challenge Overview |
| 9:00 – 10:00 | Top-Performing Teams Presentations |
| 10:00 – 10:30 | Coffee Break & Networking |
This session is part of the MICCAI 2026 Joint Thematic Day. Full agenda and venue details will be available on the MICCAI 2026 website. Schedule is subject to change.
Evaluation is expected to be based on Dice Similarity Coefficient (DSC) for segmentation quality and Accuracy for cavity presence detection. The exact metrics, ranking formula, and evaluation scripts will be finalized and announced upon dataset release on May 15, 2026.
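As a reference while the official scripts are pending, the Dice Similarity Coefficient over two binary masks can be computed as below. This is an illustrative sketch on flat 0/1 lists, not the organizers' scoring code; in particular, the convention for two empty masks may differ in the final scripts.

```python
def dice_coefficient(pred: list[int], target: list[int]) -> float:
    """Dice Similarity Coefficient between two binary masks (flat lists of 0/1).

    DSC = 2 * |pred ∩ target| / (|pred| + |target|).
    Returns 1.0 when both masks are empty (one common convention; the
    official evaluation may handle this case differently).
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0
    return 2.0 * intersection / total
```

For example, a prediction covering one of three true foreground pixels with no false positives yields a DSC of 2/3.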
Evaluation is expected to be based on F1-Score, balancing precision and recall to reflect clinical diagnostic requirements. Detailed scoring criteria and evaluation scripts will be finalized and announced upon dataset release on May 15, 2026.
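For reference, the F1-Score is the harmonic mean of precision and recall. The sketch below computes it from binary labels; it is illustrative only, and edge-case handling (e.g., no positive predictions) may differ in the official scripts.

```python
def f1_score(y_true: list[int], y_pred: list[int]) -> float:
    """F1-Score for binary labels: harmonic mean of precision and recall.

    Returns 0.0 when there are no true positives (one common convention;
    the official evaluation scripts may define edge cases differently).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```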
Submission format and file structure will be provided upon dataset release on May 15, 2026.
Docker base image specifications, submission template, and detailed guidelines will be provided via the GitHub repository before the external competition begins.
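Until the official template is released, a submission Dockerfile will likely follow the usual containerized-inference pattern sketched below. Every detail here is an assumption: the base image, `/input` and `/output` paths, and the `predict.py` entrypoint are hypothetical placeholders, and the organizers' template should be followed once published.

```dockerfile
# Hypothetical sketch only — base image, paths, and entrypoint are
# assumptions; use the official template from the GitHub repository.
FROM pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Assumed contract: read test images from /input, write predictions to /output
ENTRYPOINT ["python", "predict.py", "--input", "/input", "--output", "/output"]
```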
Use of additional publicly available datasets and pre-trained models is permitted. Participants may leverage any open-source datasets, foundation models, or pre-trained weights to improve their methods. All data sources and models used must be fully disclosed in the Method Description. Failure to disclose will result in disqualification.
Participants are free to use any computational resources during training. However, submitted inference pipelines must run within the organizers' evaluation environment. Models that exceed available GPU memory or fail to complete inference within the allotted time limit will be considered invalid. Specific hardware specifications and time constraints will be announced upon dataset release.
Top-ranking teams must submit full source code and trained model weights for verification. Any form of data leakage, test set probing, or result manipulation will result in immediate disqualification.
Teams or individuals may enter one or both tasks. Winning prizes in both tasks simultaneously is allowed. Each individual may belong to only one team per task. Participating in multiple teams within the same task is strictly prohibited. Team roster changes are permitted only once and must be finalized before the application deadline (May 15, 2026).
2nd Stage: Up to two submissions per week. Submissions with Dockerfiles that fail to execute will be considered invalid.
The organizers will provide a subset of annotated cases for training and validation. The source images originate from publicly available repositories, which participants may independently access. However, the annotations and labels created by the organizers are proprietary to this challenge and must not be redistributed or made publicly available at any time, including after the competition concludes. All participants are required to sign a data security agreement upon registration.
Participating teams may publish their results independently. Please cite the challenge paper when applicable.
Top three performing methods will be announced publicly on the challenge website. Selected teams will be invited to present their methodology during an oral presentation at the MICCAI 2026 Challenge Day.
These teams will be listed as co-authors in the final TREAT-MMTB journal manuscript. At a minimum, the top-performing teams selected to present at the workshop will be included. Additionally, submissions that exhibit notable scientific value may also be invited as co-authors in the journal publication, regardless of final ranking.
Winners are determined solely by each task's respective metrics; there is no combined overall ranking across tasks. To be eligible for any challenge awards, top-ranking teams are strictly required to publicly release their complete inference pipelines — including Dockerfile, preprocessing, postprocessing — and model weights.
Registration is free. You may register for one or both tasks.
Register via Google Form (team info + data agreement)
Download instructions + baseline code
Submit predictions
Docker + Method Description
Academic & Clinical
Industry Partners
Society
For questions or collaboration inquiries: mi2rl.challenge@gmail.com