MICCAI 2026 Challenge

TREAT-MMTB 2026

Transformative Research and Efficient AI Technologies
for Multimodal Management of Tuberculosis

Segment cavities. Predict severity. Publish together.

Apr 10 – May 15, 2026 — Application for Participation
May 15 – Aug 20, 2026 — (Internal · External) Competition Phase
Oct 4 & 8, 2026 — MICCAI 2026 Awards, Abu Dhabi
📝
Journal Co-authorship
🎤
Oral Presentation at MICCAI
🏆
Prizes per Task

Upcoming Releases

What | When | Includes
Dataset & Evaluation Protocol | May 15, 2026 | Training/validation data, submission format, clinical metadata schema, evaluation metrics, ranking formula, scoring scripts, hardware specs
Docker Submission Environment | Before Aug 1, 2026 | Sample Dockerfile, submission template, execution guidelines

Challenge Background

Pulmonary tuberculosis (TB) remains a critical global health challenge, characterized by high morbidity and mortality. Chest X-ray (CXR) examination is an essential tool for TB screening, triage, and diagnosis.

Yet the manual assessment of CXRs is slow, subjective, and constrained by the global shortage of trained radiologists. In regions where TB claims the most lives, the tools to fight it remain least accessible — underscoring the need for automated, scalable diagnostic solutions.

Challenge Design

This challenge addresses these gaps by introducing two innovative tasks:

  • Task 1 focuses on cavity detection and segmentation, using advanced deep learning techniques to identify and quantify high-risk cavities in CXRs.
  • Task 2 implements a multimodal approach for tuberculosis diagnosis, integrating radiological features from CXRs with clinical metadata. By utilizing diverse datasets from various countries, this task evaluates diagnostic accuracy to ensure the models are robust, generalizable, and effective across different global populations.

Expected Impact

From a medical domain perspective, this framework enhances diagnostic precision and enables consistent longitudinal monitoring. From a technical perspective, it leverages state-of-the-art machine learning to provide scalable, explainable tools for diverse clinical settings. The anticipated impact includes streamlining personalized treatment planning, reducing the burden on healthcare professionals, and bridging the gap in global TB care through automated, high-fidelity diagnostics.

Clinical Background

Why cavities matter — The presence of a thick-walled cavitary lesion in the upper lobes strongly favors active pulmonary tuberculosis, particularly post-primary (reactivation) disease. Such cavitation typically reflects caseous necrosis with subsequent drainage into the bronchial tree, creating an air-filled space that serves as a surrogate marker of high mycobacterial burden.

Since these cavitary foci frequently communicate with airways, patients with cavitary TB are more likely to exhibit smear- and culture-positive sputum, making the lesion a critical indicator of increased infectiousness and heightened transmission risk. While these radiological features are crucial indicators of TB severity, visual interpretation of CXRs is time-intensive and dependent on radiologist expertise, which may not always be available.

🧩

Task 1: Cavity Detection & Segmentation

Deep learning–based detection and segmentation of cavitary lesions from CXR, targeting high-risk indicators of TB severity and infectiousness.

🔬

Task 2: Multimodal TB Diagnosis

Timika score prediction through multimodal fusion of CXR imaging and clinical metadata for quantitative TB severity assessment.

🌍

Multi-Center Validation

External validation across five international sites in Korea, Mongolia, Peru, and the Philippines, ensuring clinical generalizability.

Challenge Tasks

Two complementary tasks addressing different aspects of TB assessment. Participants may enter one or both. Dual-task winners are eligible for prizes in both tracks.
Task 1

Cavity Detection and Segmentation

Goal

Build a model that determines whether a cavitary lesion is present on a frontal chest X-ray and segments the lesion when present.

Why it matters

Cavities are a key imaging marker of pulmonary tuberculosis severity and are strongly associated with bacterial burden, transmission risk, and treatment outcome. Automated cavity analysis can support more standardized and scalable TB assessment.

What you need to solve

Participants must address two linked tasks:

  • Detection: decide whether a cavity is present or absent on the input CXR
  • Segmentation: if present, delineate the cavity boundary accurately

Your method should be robust to variation in lesion appearance, size, location, and image quality.

Expected output

For each test image, submit:

  • a cavity presence/absence prediction
  • a cavity segmentation mask when applicable

Detailed submission format specifications (file naming, directory structure, mask format) will be provided upon dataset release on May 15, 2026.
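As a placeholder while the official format is pending, the two required outputs could be packaged as a presence CSV plus one mask file per positive case. This is a minimal sketch only — the file names, directory layout, and mask encoding here are assumptions, not the official specification.

```python
import csv
import numpy as np
from pathlib import Path

def save_predictions(results, out_dir):
    """Write a cavity-presence CSV and one binary mask per positive case.

    `results` maps image ID -> (presence probability, HxW boolean mask or None).
    File names and layout are illustrative only; the official submission
    format arrives with the May 15, 2026 dataset release.
    """
    out = Path(out_dir)
    (out / "masks").mkdir(parents=True, exist_ok=True)
    with open(out / "presence.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image_id", "cavity_present"])
        for image_id, (prob, mask) in results.items():
            present = int(prob >= 0.5)  # assumed 0.5 decision threshold
            writer.writerow([image_id, present])
            if present and mask is not None:
                # Stored as 0/255 uint8 arrays here; swap in PNG export
                # if the official format requires image files.
                np.save(out / "masks" / f"{image_id}.npy",
                        mask.astype(np.uint8) * 255)
```

Keeping detection and segmentation outputs in one bundle makes it easy to validate that every positive prediction has a matching mask before submitting.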

Note: This challenge focuses on frontal CXRs (AP/PA) as they represent the primary imaging modality for TB screening in resource-limited settings. Lateral views are planned for future iterations (e.g., 2027).

Task 2

Multimodal Timika Score Prediction

Goal

Build a model that predicts the Timika score — a validated radiographic severity measure — by integrating frontal chest X-ray imaging with clinical metadata.

Why it matters

Accurate assessment of TB severity is fundamental to treatment planning and disease management, yet current practice relies heavily on subjective radiological interpretation. The Timika score provides a standardized, quantitative measure of disease extent, but its manual computation is time-consuming and prone to inter-observer variability. A multimodal AI framework that combines imaging features with clinical context — symptoms, demographics, bacterial load, medical history — can emulate the integrative reasoning of expert clinicians and improve diagnostic consistency across diverse healthcare settings.

What you need to solve

Participants must develop a multimodal prediction pipeline that takes two inputs simultaneously:

  • Imaging: frontal chest X-ray (AP/PA)
  • Clinical data: structured metadata including demographics, symptoms, sputum results, and medical history

The model must accurately predict the Timika score for both TB-positive and normal cases. Your method should be robust to missing or incomplete clinical variables and generalizable across diverse patient populations.
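One common way to meet the missing-data requirement is mean imputation plus an explicit missingness indicator per numeric field. The field names below are hypothetical stand-ins — the real clinical metadata schema ships with the dataset release.

```python
import math

# Hypothetical clinical fields for illustration only;
# the official schema arrives with the dataset.
NUMERIC_FIELDS = ["age", "bmi", "cough_weeks"]
BINARY_FIELDS = ["smear_positive", "prior_tb", "hiv_positive"]

def encode_clinical(record, numeric_means):
    """Encode one metadata record into a fixed-length feature vector.

    Missing numeric values are mean-imputed and flagged with an extra
    indicator feature, so a downstream model can learn from the
    missingness pattern itself rather than silently absorbing a fill value.
    """
    features = []
    for field in NUMERIC_FIELDS:
        value = record.get(field)
        missing = value is None or (isinstance(value, float) and math.isnan(value))
        features.append(numeric_means[field] if missing else float(value))
        features.append(1.0 if missing else 0.0)  # missingness indicator
    for field in BINARY_FIELDS:
        value = record.get(field)
        features.append(0.0 if value is None else float(value))
    return features
```

The resulting vector can then be concatenated with image features in a late-fusion head; other schemes (learned embeddings, attention over modalities) are equally valid.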

Expected output

For each case, submit:

  • a predicted Timika score

Detailed submission format specifications and clinical metadata schema will be provided upon dataset release on May 15, 2026.

Dataset

Overview of the challenge dataset, including sample images, annotation examples, and data distributions. Full data access is provided upon registration.

Sample Images & Annotations

🏷️
Task 1: CXR with Cavity Annotations
Sample chest X-rays showing cavity detection
and segmentation masks
📊
Task 2: Multimodal Data Overview
CXR samples paired with clinical metadata
and Timika score distributions

Data Distribution

📈
Task 1: Cavity Presence Distribution
Class balance (positive vs. negative),
cavity size distribution, and location heatmap
📉
Task 2: Timika Score Distribution
Timika score histogram, TB vs. Normal ratio,
and per-center data distribution

Annotation Details

🏷️
Labeling Protocol & Annotation Examples
Annotation guidelines, label format specification, weak vs. strong label examples,
and inter-annotator agreement samples

Schedule

All deadlines are based on Korea Standard Time (KST) unless otherwise noted.
Date | Event
Apr 10 – May 15, 2026 | Application for Participation
May 15, 2026 | Internal Dataset Release
May 15 – July 31, 2026 | Internal Competition Phase
July 31, 2026 | Announcement of Qualified Teams for External Competition
Aug 1 – Aug 20, 2026 | External Competition Phase
Aug 20 – Aug 31, 2026 | Organizer Verification and Docker Evaluation
September 1, 2026 | Final Ranking Announcement and MICCAI Invitation
October 4 & 8, 2026 | Challenge at MICCAI 2026 and Award Ceremony
November 1, 2026 | Paper Submission Deadline

1st Stage — Internal Competition

Participants will develop and validate their models using the internal dataset provided by the organizers. Based on the ranking achieved in this stage, selected teams will advance to the next stage.

Submission deadline: July 31, 2026, 09:00 AM (KST)

2nd Stage — External Competition

Selected teams will participate in the external competition for final evaluation. During this stage, each team may submit up to two times per week and must provide a Dockerfile. All submitted Dockerfiles will be executed directly by the organizers for final ranking.

A sample Dockerfile and submission template will be provided via the GitHub repository before the external competition begins.

Submission deadline: August 20, 2026, 09:00 AM (KST)

Important: Submissions with Dockerfiles that fail to execute successfully will be considered invalid.

Challenge Day at MICCAI 2026

Part of the Joint Thematic Day — Building Inclusive and Efficient AI Technologies for Medical Imaging in Africa and Other Resource-Constrained Settings

SE I: TB Challenges (8:00 – 10:30)

Chairs: Namkug Kim & Seungjoo Park

Time | Session
8:00 – 8:05 | Opening Remarks
8:05 – 8:20 | (Keynote) Hoon Sang Lee (Right Foundation) — Bridging the Diagnostic Gap: Practical AI Deployment for TB in the Global South
8:20 – 8:40 | Radiologists at the Forefront of Tuberculosis Eradication
8:40 – 9:00 | TREAT-MMTB 2026 Challenge Overview
9:00 – 10:00 | Top-Performing Teams Presentations
10:00 – 10:30 | Coffee Break & Networking

This session is part of the MICCAI 2026 Joint Thematic Day. Full agenda and venue details will be available on the MICCAI 2026 website. Schedule is subject to change.

Evaluation & Submission

Task 1 Metrics

Cavity Segmentation

DSC Accuracy

Evaluation is expected to be based on Dice Similarity Coefficient (DSC) for segmentation quality and Accuracy for cavity presence detection. The exact metrics, ranking formula, and evaluation scripts will be finalized and announced upon dataset release on May 15, 2026.
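Both expected metrics are simple to compute locally while awaiting the official scoring scripts. The sketch below is a plain NumPy version, not the organizers' implementation; the final formula (e.g. how empty masks or tie-breaking are handled) may differ.

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks.

    The eps term keeps the ratio defined when both masks are empty;
    the official handling of empty-mask cases may differ.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

def detection_accuracy(pred_labels, gt_labels):
    """Fraction of correct cavity presence/absence predictions."""
    pred_labels = np.asarray(pred_labels)
    gt_labels = np.asarray(gt_labels)
    return float((pred_labels == gt_labels).mean())
```

Running this on your validation split gives a rough self-check; only the released scripts are authoritative for ranking.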

Task 2 Metrics

Multimodal Diagnosis

F1-Score

Evaluation is expected to be based on F1-Score, balancing precision and recall to reflect clinical diagnostic requirements. Detailed scoring criteria and evaluation scripts will be finalized and announced upon dataset release on May 15, 2026.
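For reference, the standard F1 definition can be written in a few lines. This is a generic implementation for self-checking, not the challenge's official scoring code, and how F1 is derived from the (numeric) Timika predictions will only be fixed in the released criteria.

```python
def f1_score(y_true, y_pred, eps=1e-7):
    """Binary F1: harmonic mean of precision and recall.

    Labels are 0/1; eps guards against division by zero when a
    class is never predicted.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2 * precision * recall / (precision + recall + eps)
```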

Phase 1 — Internal Competition

Prediction Submission

  • Submit prediction result files
  • Evaluated on internal validation set

Submission format and file structure will be provided upon dataset release on May 15, 2026.

Phase 2 — External Competition

Docker Submission

  • Dockerized inference pipeline with model weights
  • README with execution instructions

Docker base image specifications, submission template, and detailed guidelines will be provided via the GitHub repository before the external competition begins.
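Until the official template lands, a Dockerized submission will likely follow the usual shape: a base image, dependency install, model weights baked in, and a fixed entrypoint. Everything below — base image, file names, and the `/input` / `/output` mount convention — is an assumption for illustration, not the organizers' specification.

```dockerfile
# Illustrative config only — the official base image, I/O paths, and
# entrypoint will be specified in the organizers' submission template.
FROM python:3.11-slim
WORKDIR /app

# Install pinned dependencies first so this layer caches across rebuilds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Hypothetical inference script and weights file names.
COPY inference.py model_weights.pt ./

# Assumed convention: test images mounted read-only at /input,
# predictions written to /output by the evaluation harness.
ENTRYPOINT ["python", "inference.py", "--input", "/input", "--output", "/output"]
```

Testing the built image end-to-end on a few local cases before submission is the cheapest way to avoid the "failed to execute = invalid" rule.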

Rules & Policies

01

Additional Data Policy

Use of additional publicly available datasets and pre-trained models is permitted. Participants may leverage any open-source datasets, foundation models, or pre-trained weights to improve their methods. All data sources and models used must be fully disclosed in the Method Description. Failure to disclose will result in disqualification.

02

Computational Constraints

Participants are free to use any computational resources during training. However, submitted inference pipelines must run within the organizers' evaluation environment. Models that exceed available GPU memory or fail to complete inference within the allotted time limit will be considered invalid. Specific hardware specifications and time constraints will be announced upon dataset release.

03

Anti-Cheating & Reproducibility

Top-ranking teams must submit full source code and trained model weights for verification. Any form of data leakage, test set probing, or result manipulation will result in immediate disqualification.

04

Team & Participation

Teams or individuals may enter one or both tasks. Winning prizes in both tasks simultaneously is allowed. Each individual may belong to only one team per task. Participating in multiple teams within the same task is strictly prohibited. Team roster changes are permitted only once and must be finalized before the application deadline (May 15, 2026).

05

Submission Limits

2nd Stage: Up to two submissions per week. Submissions with Dockerfiles that fail to execute will be considered invalid.

06

Data Policy

The organizers will provide a subset of annotated cases for training and validation. The source images originate from publicly available repositories, which participants may independently access. However, the annotations and labels created by the organizers are proprietary to this challenge and must not be redistributed or made publicly available at any time, including after the competition concludes. All participants are required to sign a data security agreement upon registration.

07

Independent Publication

Participating teams may publish their results independently. Please cite the challenge paper when applicable.

Prizes

Recognition & Publication

Top three performing methods will be announced publicly on the challenge website. Selected teams will be invited to present their methodology during an oral presentation at the MICCAI 2026 Challenge Day.

These teams will be listed as co-authors in the final TREAT-MMTB journal manuscript. At a minimum, the top-performing teams selected to present at the workshop will be included. Additionally, submissions that exhibit notable scientific value may also be invited as co-authors in the journal publication, regardless of final ranking.

Task 1: Cavity Presence Detection and Segmentation

🥇
$1,400
First Prize
🥈
$700
Second Prize
🥉
$350
Third Prize

Task 2: Multimodal Timika Score Prediction

🥇
$1,400
First Prize
🥈
$700
Second Prize
🥉
$350
Third Prize

Award Policy

Winners are determined solely by each task's respective metrics; there is no combined overall ranking across tasks. To be eligible for any challenge awards, top-ranking teams are strictly required to publicly release their complete inference pipelines — including Dockerfile, preprocessing, postprocessing — and model weights.

Registration

Ready to participate?

Registration is free. You may register for one or both tasks.

Register via Google Form ↗
STEP 01

Register

Google Form — team info + data agreement

STEP 02

Access Data

Download instructions + baseline code

STEP 03

Phase 1

Submit predictions

STEP 04

Phase 2

Docker + Method Description

Organizers

Academic & Clinical

AMC

Prof. Namkug Kim

Challenge Chair
Asan Medical Center, Seoul
Seungjoo Park, Yoojin Nam, Dongju Lee
AMC

Prof. Sei Won Lee

Clinical Support
Asan Medical Center, Seoul
SEV

Prof. Young Ae Kang

Clinical Support
Severance Hospital, Seoul
MGL

Prof. Bayarbaatar Bold

External Validation
Intermed Hospital, Mongolia
NIH

Alexander Rosenthal

TB Portal
NIH / NIAID, USA

Industry Partners

DN

DEEP NOID, Inc.

Industry Partner
Seoul, Korea
PM

Promedius, Inc.

Industry Partner
Seoul, Korea
HyunJin Bae

Society

KSTR

The Korean Society of Thoracic Radiology

Endorsing Society
Asan Medical Center
Severance Hospital
NIH / NIAID
Intermed
DEEP NOID
Promedius
KSTR

Get in Touch

For questions or collaboration inquiries: mi2rl.challenge@gmail.com