---
title: MR-RATE Dataset
license: cc-by-nc-sa-4.0
task_categories:
- image-to-text
- text-to-image
- image-classification
- question-answering
- visual-question-answering
- zero-shot-classification
language:
- en
tags:
- brain-mri
- radiology
- science
- huggingscience
- 3d-medical-imaging
- medical
- mr-rate
- multimodal
- vision-language
- healthcare
- diagnostic-imaging
- computer-vision
- foundation-model
size_categories:
- 10K<n<100K
pretty_name: 'MR-RATE: Brain and Spine MRI Volumes with Radiology Reports'
extra_gated_prompt: >
  ## Terms and Conditions for Using the MR-RATE Dataset

  **1. Acceptance of Terms**

  Accessing and using the MR-RATE dataset implies your agreement to these terms
  and conditions. If you disagree with any part, please refrain from using the
  dataset.

  **2. Permitted Use**

  - The dataset is intended solely for academic, research, and educational
  purposes.

  - Any commercial exploitation of the dataset without prior permission is
  strictly forbidden.

  - You must adhere to all relevant laws, regulations, and research ethics,
  including data privacy and protection standards.

  **3. Data Protection and Privacy**

  - Acknowledge the presence of sensitive information within the dataset and
  commit to maintaining data confidentiality.

  - Direct attempts to re-identify individuals from the dataset are prohibited.

  - Ensure compliance with data protection laws such as GDPR and HIPAA.

  **4. Attribution**

  - Cite the dataset and acknowledge the providers in any publications resulting
  from its use.

  - Claims of ownership or exclusive rights over the dataset or derivatives are
  not permitted.

  **5. Redistribution**

  - Redistribution of the dataset or any portion thereof is not allowed.

  - Sharing derived data must respect the privacy and confidentiality terms set
  forth.

  **6. Disclaimer**

  The dataset is provided "as is" without warranty of any kind, either expressed
  or implied, including but not limited to the accuracy or completeness of the
  data.

  **7. Limitation of Liability**

  Under no circumstances will the dataset providers be liable for any claims or
  damages resulting from your use of the dataset.

  **8. Access Revocation**

  Violation of these terms may result in the termination of your access to the
  dataset.

  **9. Amendments**

  The terms and conditions may be updated at any time; continued use of the
  dataset signifies acceptance of the new terms.

  **10. Governing Law**

  These terms are governed by the laws of the location of the dataset providers,
  excluding conflict of law rules.

  **Consent:**

  Accessing and using the MR-RATE dataset signifies your acknowledgment and
  agreement to these terms and conditions.
extra_gated_fields:
  First Name: text
  Last Name: text
  Institution: text
  Role: text
  Email: text
  I have read and agree with Terms and Conditions for using the MR-RATE dataset: checkbox
---
<p align="center">
  <span style="font-size: 24px; font-weight: 700;">
    MR-RATE: A Vision-Language Foundation Model and Dataset for Magnetic Resonance Imaging
  </span>
</p>
<p align="center">
  <a href="https://github.com/forithmus/MR-RATE">
    <img alt="Code" src="https://img.shields.io/badge/Code-GitHub-181717?logo=github&logoColor=white">
  </a>
  <a href="https://huggingface.co/datasets/Forithmus/MR-RATE#dataset-organization--getting-started">
    <img alt="Dataset Access" src="https://img.shields.io/badge/Dataset%20Getting%20Started-Hugging%20Face-FFD21E?logo=huggingface&logoColor=black">
  </a>
  <a href="https://mrrate.forithmus.com/">
    <img alt="Dataset Explorer" src="https://img.shields.io/badge/Dataset%20Explorer-Streamlit-FF4B4B?logo=streamlit&logoColor=white">
  </a>
  <br>
  <a href="">
    <img alt="Paper (Coming Soon)" src="https://img.shields.io/badge/Paper-(Coming%20Soon)-B31B1B?logo=arxiv&logoColor=white">
  </a>
  <a href="">
    <img alt="Model Weights (Coming Soon)" src="https://img.shields.io/badge/Model%20Weights-(Coming%20Soon)-FFD21E?logo=huggingface&logoColor=black">
  </a>
</p>
Welcome to the official page for **MR-RATE**, a pioneering vision-language model and 3D medical imaging dataset that pairs brain and spine MRI volumes with textual radiology reports. Following the approach of [CT-RATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE), the first 3D medical imaging dataset to pair images with textual reports, MR-RATE offers brain and spine MRI volumes matched with corresponding radiology reports and metadata, along with derived registrations, segmentations, and pathology labels, all freely accessible to researchers.
|
|
---

# News & Announcements

- **[Coming soon]** 🔄 Body Part Labels: Series-level labels distinguishing brain and spine MRI volumes will be added to the metadata.
- **[2026-04-03]** The old **Forithmus/MR-RATE-vista-seg** repository has been replaced by [Forithmus/MR-RATE-nvseg-ctmr](https://huggingface.co/datasets/Forithmus/MR-RATE-nvseg-ctmr). Brain and body segmentations generated with [NV-Segment-CTMR](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/tree/main/NV-Segment-CTMR) are now available for native-space volumes. See [multi-label brain and body segmentations](https://huggingface.co/datasets/Forithmus/MR-RATE#registration-segmentation-and-pathology-label-derivatives) for details.
- **[2026-03-31]** [Pathology labels](https://huggingface.co/datasets/Forithmus/MR-RATE/tree/main/pathology_labels) have been added to the dataset.
- **[2026-03-27]** New download script feature: a per-batch download status table is now printed after each run to help verify that everything completed successfully.
- **[2026-03-18]** Public release of the **MR-RATE** dataset and source code.

> This section will be updated with new releases, corrections, and derivatives.

---
|
|
# A Novel Dataset of Brain and Spine MRI Volumes with Corresponding Radiology Reports

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/65d077cf70354febcb8e1a09/sAqioeas5CZlpWGaxXAGu.png?raw=true" width="100%">
</p>

A major challenge in computational research on 3D medical imaging is the lack of comprehensive datasets. Addressing this issue, we present the MR-RATE dataset, consisting of **705,254** non-contrast and contrast-enhanced brain and spine MRI volumes from **98,334** imaging studies of **83,425** unique patients, paired with corresponding radiology reports and metadata. The examinations are organized into multiple imaging sequence categories, including **T1-weighted**, **T2-weighted**, **FLAIR**, **SWI**, and **MRA**. Each study is paired with associated metadata and a radiology report produced by the radiologist during clinical interpretation. Additionally, co-registrations, atlas registrations, brain and body segmentations, and classified pathology labels are provided as derivatives, allowing researchers to incorporate these resources directly into their own workflows. Together, these components make MR-RATE a comprehensive dataset for multimodal brain and spine MRI research.
|
|
### MRI Volumes & Metadata

All MRI volumes are provided in their native acquisition space. DICOM files are converted to NIfTI format using [dcm2niix](https://github.com/rordenlab/dcm2niix), as NIfTI is more convenient for research workflows and downstream processing. This conversion also allows DICOM metadata to be selectively curated, removing identifying information while preserving relevant acquisition and study details, which are then saved in CSV format. Using an adapted and parallelized version of the [BrainLesion Suite preprocessing module](https://github.com/BrainLesion/preprocessing), a brain mask is predicted for each brain volume with [HD-BET](https://github.com/MIC-DKFZ/HD-BET), and defacing is then applied with [Quickshear](https://github.com/nipy/quickshear) to remove identifiable facial features for patient anonymization. The predicted brain masks and defacing masks are released alongside the defaced volumes, enabling researchers to leverage these directly in their own workflows.
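Because the masks are released as separate volumes, applying them is an elementwise multiply. A minimal sketch with a synthetic array (with real data, the volume and mask would be loaded from the NIfTI files, e.g. with a library such as nibabel; the shapes and the path in the comment are purely illustrative):

```python
import numpy as np

# Synthetic stand-ins for a loaded MRI volume and its predicted brain mask.
# With real MR-RATE data these arrays would come from the NIfTI files, e.g.
# nibabel.load("study_folder/volume.nii.gz").get_fdata() (illustrative path).
volume = np.random.default_rng(0).normal(size=(4, 4, 4)).astype(np.float32)
brain_mask = np.zeros((4, 4, 4), dtype=np.uint8)
brain_mask[1:3, 1:3, 1:3] = 1  # a small cubic "brain" region

# Skull-stripping is an elementwise multiply: voxels outside the mask go to 0.
stripped = volume * brain_mask

print(int(brain_mask.sum()))           # number of brain voxels: 8
print(bool((stripped[0] == 0).all()))  # background slice zeroed: True
```

The same pattern applies to the defacing masks, since they share the volume's grid.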
|
|
### Radiology Reports

All radiology reports are anonymized to remove patient healthcare identifiers and then structured into clinical information, technique, findings, and impression sections through an iterative LLM-based pipeline using the [Qwen3.5-35B-A3B-FP8](https://huggingface.co/Qwen/Qwen3.5-35B-A3B-FP8) model via [vLLM](https://github.com/vllm-project/vllm). The resulting reports are stored in CSV format.
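A structured-reports CSV with one column per section can be read with the standard library alone. A small sketch on an in-memory sample; the column names here are assumptions for illustration, so check the Dataset Guide for the actual schema:

```python
import csv
import io

# Hypothetical rows mimicking the structured-report CSV; the real column
# names in the MR-RATE reports file may differ.
sample = io.StringIO(
    "StudyID,ClinicalInformation,Technique,Findings,Impression\n"
    "study_0001,Headache.,Brain MRI without contrast.,No acute abnormality.,Normal study.\n"
)

reports = list(csv.DictReader(sample))
first = reports[0]
print(first["StudyID"])     # study_0001
print(first["Impression"])  # Normal study.
```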
|
|
### Registration, Segmentation, and Pathology Label Derivatives

In addition to the native-space data, MR-RATE offers four sets of derivatives:
- **Co-registered volumes** — Within each study, a T1-weighted MRI is selected as the center modality, and all other MRI volumes (the moving modalities) are registered to it using [ANTs](https://github.com/antsx/ants), bringing all MRI sequences of a given study into a common anatomical reference frame. The brain mask and defacing mask of the center modality are shared across all modalities of a study.
- **Atlas-registered volumes** — Within each study, the center modality is registered to the [MNI152](https://nist.mni.mcgill.ca/icbm-152-nonlinear-atlases-2009/) atlas using [ANTs](https://github.com/antsx/ants), and the co-registered moving modalities are also transformed into atlas space, enabling group-level analyses and cross-patient comparisons in a standardized coordinate space. The brain mask and defacing mask of the center modality are likewise transformed to atlas space and shared across all modalities of a study.
- **Multi-label brain and body segmentations** — For **all** native-space T1-weighted center-modality MRI volumes, voxel-wise anatomical multi-label brain segmentations are predicted using the [NV-Segment-CTMR](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/tree/main/NV-Segment-CTMR) (MRI_BRAIN) model, supporting region-of-interest analysis and various downstream tasks. Additionally, for all native-space MRI volumes, voxel-wise anatomical multi-label body segmentations are predicted using the [NV-Segment-CTMR](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/tree/main/NV-Segment-CTMR) (MRI_BODY) model.
- **Pathology labels** — The findings sections of the structured reports are classified into 37 brain and spine MRI pathology categories, each grounded to SNOMED CT or RadLex, through a multi-step LLM-based pipeline using the [Qwen3.5-35B-A3B-FP8](https://huggingface.co/Qwen/Qwen3.5-35B-A3B-FP8) model via [vLLM](https://github.com/vllm-project/vllm); the resulting labels are stored in CSV format.
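As an illustration of how the pathology-label derivative might be consumed, here is a sketch that turns per-study label rows into binary multi-label vectors. The column layout and category names below are invented for the example; consult the `pathology_labels` files for the real schema:

```python
import csv
import io

# Invented example schema: one row per study, one 0/1 column per pathology.
sample = io.StringIO(
    "StudyID,Glioma,Meningioma,Disc herniation\n"
    "study_0001,0,1,0\n"
    "study_0002,1,0,1\n"
)

reader = csv.reader(sample)
header = next(reader)
categories = header[1:]

# Map each study to its binary label vector, then recover positive categories.
labels = {row[0]: [int(v) for v in row[1:]] for row in reader}
positives = [c for c, v in zip(categories, labels["study_0002"]) if v]

print(labels["study_0002"])  # [1, 0, 1]
print(positives)             # ['Glioma', 'Disc herniation']
```

Such vectors can feed multi-label classification heads directly, one output unit per pathology category.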
|
|
### Data Splits

To support reproducible research, we provide [patient-level data splits](https://huggingface.co/datasets/Forithmus/MR-RATE/blob/main/splits.csv). The splits are used internally for our own experiments and are shared openly so that the research community can benchmark and compare methods under consistent conditions.

| Split      | # Patients | # Studies  | # Series    |
|------------|------------|------------|-------------|
| Train      | 75,000     | 88,985     | 638,345     |
| Validation | 3,425      | 3,781      | 27,003      |
| Test       | 5,000      | 5,568      | 39,906      |
| **Total**  | **83,425** | **98,334** | **705,254** |
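Because the splits are patient-level, no patient may appear in more than one split. A small sanity-check sketch on a toy stand-in for `splits.csv` (the column names are assumed for illustration):

```python
import csv
import io

# Toy stand-in for splits.csv; the real file maps each patient to a split.
sample = io.StringIO(
    "PatientID,Split\n"
    "p1,train\n"
    "p2,train\n"
    "p3,validation\n"
    "p4,test\n"
)

# Collect the set of patients assigned to each split.
splits = {}
for row in csv.DictReader(sample):
    splits.setdefault(row["Split"], set()).add(row["PatientID"])

# Patient-level splitting: the three patient sets must be pairwise disjoint.
assert splits["train"].isdisjoint(splits["validation"] | splits["test"])
assert splits["validation"].isdisjoint(splits["test"])
print(sorted(splits))  # ['test', 'train', 'validation']
```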
|
|
|
|
### Dataset Organization & Getting Started

The MR-RATE dataset is released across four Hugging Face repositories. Study folders are zipped to comply with Hugging Face's per-repository file count limits. Since zipping prevents include/exclude pattern filtering during downloads, the data for each study is split across multiple zip archives per space and distributed across the repositories, allowing users to download only their preferred data subset while staying within the file count limits.

| Repo | Size | Content |
|------|------|---------|
| **[Forithmus/MR-RATE](https://huggingface.co/datasets/Forithmus/MR-RATE/tree/main)** (this repo) | 8.1 TB | Defaced native-space MRI volumes with brain masks and defacing masks, radiology reports, metadata, pathology labels, and data splits |
| **[Forithmus/MR-RATE-coreg](https://huggingface.co/datasets/Forithmus/MR-RATE-coreg)** | 17.6 TB | Co-registered MRI volumes in coreg-space, where moving modalities are registered to the T1-weighted center modality and center modalities are copied from native-space; registration transforms; and brain masks and defacing masks for the center modalities copied from native-space |
| **[Forithmus/MR-RATE-atlas](https://huggingface.co/datasets/Forithmus/MR-RATE-atlas)** | 12.3 TB | Atlas-registered MRI volumes in atlas-space, where all modalities are registered to the MNI152 atlas; registration transforms; and brain masks and defacing masks for the center modalities in atlas-space |
| **[Forithmus/MR-RATE-nvseg-ctmr](https://huggingface.co/datasets/Forithmus/MR-RATE-nvseg-ctmr)** | 415 GB | Native-space multi-label brain segmentations for center modalities and body segmentations for all modalities |

To explore, download, and work with the dataset:
- 📂 **[Dataset Guide](https://github.com/forithmus/MR-RATE/blob/main/data-preprocessing/docs/dataset_guide.md)** — A detailed guide explaining the structure and contents of each repo, folder, and file within the dataset.
- 📊 **[Dataset Explorer](https://mrrate.forithmus.com/)** — An interactive dashboard for data exploration, cohort building, and dynamic metadata visualization.
- ⬇️ **[Downloading Dataset](https://github.com/forithmus/MR-RATE/tree/main/data-preprocessing#downloading-dataset)** — Standalone Python scripts for downloading the dataset and merging study folders from multiple repos.
- 🛠️ **[Preprocessing Code](https://github.com/forithmus/MR-RATE/tree/main/data-preprocessing)** — The data preprocessing code, available open-source.
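The provided download scripts handle the merge for you; conceptually, though, merging amounts to extracting every downloaded archive into one dataset root, where identical study folder names combine on disk. A self-contained sketch using synthetic archives (all archive and member names below are invented, not the dataset's real layout):

```python
import tempfile
import zipfile
from pathlib import Path

# Sketch of merging one study's data after downloading zip archives from
# several MR-RATE repos; the archive and member names are invented.
with tempfile.TemporaryDirectory() as tmpdir:
    tmp = Path(tmpdir)

    # Simulate two downloaded batch archives holding parts of the same study.
    for zip_name, member in [
        ("native_batch_000.zip", "study_0001/volume_T1.nii.gz"),
        ("coreg_batch_000.zip", "study_0001/volume_T2_coreg.nii.gz"),
    ]:
        with zipfile.ZipFile(tmp / zip_name, "w") as zf:
            zf.writestr(member, b"placeholder bytes")

    # Extract every archive into one dataset root: identical study folder
    # names cause the per-repo contents to merge on disk.
    root = tmp / "MR-RATE"
    for archive in sorted(tmp.glob("*.zip")):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(root)

    merged = sorted(p.name for p in (root / "study_0001").iterdir())
    print(merged)  # ['volume_T1.nii.gz', 'volume_T2_coreg.nii.gz']
```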
|
|
---

# A Vision-Language Foundation Model for Magnetic Resonance Imaging

```
Coming soon
```
|
|
---

# Citing Us

When using this dataset, please consider citing the following related papers:

```
Coming soon
```
|
|
# Ethical Approval

This study was approved by the Clinical Research Ethics Committee at Istanbul Medipol University (E-10840098-772.02-6841, 27/10/2023). All MRI volumes, metadata, and radiology reports were fully anonymized prior to analysis to protect patient privacy.

# License

We are committed to fostering innovation and collaboration in the research community. To this end, all elements of the MR-RATE dataset are released under a **[Creative Commons Attribution–NonCommercial–ShareAlike (CC BY-NC-SA)](https://creativecommons.org/licenses/by-nc-sa/4.0/)** license.

This licensing framework ensures that our contributions can be freely used for non-commercial research purposes, while also encouraging contributions and modifications, provided that the original work is properly cited and any derivative works are shared under similar terms.

For commercial inquiries related to MR-RATE, please contact: contact@forithmus.com.

# Acknowledgements

This project is conducted by Forithmus and the University of Zurich, in collaboration with NVIDIA and Istanbul Medipol University.

We are grateful to NVIDIA for their support, which made this work possible. We also sincerely thank Istanbul Medipol University Mega Hospital for their support and for providing the data used in this project. High-performance computing resources were provided by NVIDIA and the University of Zurich ScienceCluster.

We would also like to thank the following individuals from NVIDIA for their contributions to the development of MR-RATE: Marc Edgar, Daguang Xu, Dong Yang, Yucheng Tang, Can Zhao, Andriy Myronenko, and Pengfei Guo.
|
|