
MedMax: Mixed-Modal Instruction Tuning for Training Biomedical Assistants

University of California, Los Angeles
MedMax Performance Comparison

Average performance of multimodal models on twelve VQA tasks. Our MedMax instruction-tuned mixed-modal foundation model outperforms both open multimodal models (Chameleon, LLaVA-Med-v1.5, and Huatuo) and closed multimodal models (GPT-4o, GPT-4o-mini). This underscores the effectiveness of the MedMax dataset in training capable multimodal biomedical assistants.

Introduction

Recent advancements in mixed-modal generative models have enabled flexible integration of information across image-text content. These models have opened new avenues for developing unified biomedical assistants capable of analyzing biomedical images, answering complex questions about them, and predicting the impact of medical procedures on a patient's health. However, existing resources face challenges such as limited data availability, narrow domain coverage, and restricted sources (e.g., medical papers).

To address these gaps, we present MedMax, the first large-scale multimodal biomedical instruction-tuning dataset for mixed-modal foundation models. With 1.47 million instances, MedMax encompasses a diverse range of tasks, including multimodal content generation (interleaved image-text data), biomedical image captioning and generation, visual chatting, and report understanding. These tasks span diverse medical domains such as radiology and histopathology.

Subsequently, we fine-tune a mixed-modal foundation model on the MedMax dataset, achieving significant performance improvements: a 26% gain over the Chameleon model and an 18.3% improvement over GPT-4o across 12 downstream biomedical visual question-answering tasks. Additionally, we introduce a unified evaluation suite for biomedical tasks, providing a robust framework to guide the development of next-generation mixed-modal biomedical AI assistants.

Performance on Biomedical VQA Tasks

Performance comparison of multimodal models on various biomedical visual question answering datasets.

| Models | Average | VQA-RAD (Closed) | SLAKE (Closed) | PathVQA (Closed) | QuiltVQA (Closed) | VQA-RAD (Open) | SLAKE (Open) | PathVQA (Open) | QuiltVQA (Open) | PMC-VQA | OmniMedVQA | PathMMU | ProbMed |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Chameleon (7B) | 39.4 | 48.6 | 59.1 | 58.9 | 71.4 | 32.0 | 5.3 | 18.0 | 15.3 | 31.0 | 45.7 | 34.5 | 52.8 |
| LLaVA-Med (v1.5-7B) | 36.6 | 61.0 | 48.7 | 62.7 | 63.0 | 23.0 | 25.1 | 6.2 | 17.2 | 18.9 | 28.7 | 29.8 | 58.5 |
| GPT-4o (mini) | 38.5 | 55.8 | 50.4 | 48.7 | 38.5 | 13.0 | 49.3 | 7.3 | 28.0 | 39.6 | 45.1 | 35.6 | 50.6 |
| GPT-4o | 42.0 | 54.2 | 50.1 | 59.2 | 44.6 | 17.6 | 63.7 | 9.1 | 36.1 | 40.8 | 40.9 | 39.1 | 48.3 |
| HuatuoGPT (Vision-7B) | 52.4 | 74.5 | 70.7 | 65.9 | 55.7 | 19.0 | 53.3 | 6.0 | 22.2 | 51.6 | 75.6 | 55.4 | 78.7 |
| MedMax (7B) | 65.5 | 75.3 | 88.4 | 91.8 | 61.2 | 46.5 | 82.2 | 40.6 | 26.0 | 49.0 | 99.5 | 49.3 | 75.8 |
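As a sanity check, the Average column matches the unweighted mean of the twelve per-task scores. Below is a minimal sketch of that aggregation, with scores copied from the table; the function and variable names are ours and are not part of the released evaluation suite.

```python
# Minimal sketch: macro-average over the 12 biomedical VQA tasks, matching the
# "Average" column above. Scores are copied from the table; helper names are
# illustrative, not taken from the MedMax evaluation code.
SCORES = {
    "Chameleon (7B)": [48.6, 59.1, 58.9, 71.4, 32.0, 5.3, 18.0, 15.3, 31.0, 45.7, 34.5, 52.8],
    "MedMax (7B)":    [75.3, 88.4, 91.8, 61.2, 46.5, 82.2, 40.6, 26.0, 49.0, 99.5, 49.3, 75.8],
}

def macro_average(scores: list[float]) -> float:
    """Unweighted mean over tasks, so each dataset counts equally."""
    return sum(scores) / len(scores)

for model, vals in SCORES.items():
    print(f"{model}: {macro_average(vals):.1f}")  # prints 39.4 and 65.5
```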

MedMax 7B Generation Examples on Diverse Tasks

MedMax Dataset

Examples of Diverse Multimodal Biomedical Tasks

MedMax Dataset Statistics

Experiment Results

Performance on image captioning and generation

Performance on the image captioning and image generation tasks. We find that the MedMax model consistently outperforms the base Chameleon mixed-modal model across diverse biomedical domains.

Performance on multimodal generation

Performance on the multimodal generation task, comparing the MedMax and Chameleon mixed-modal models. (a) We use an LLM score to assess the quality of the generated data against the reference answer. (b) We use an image-image similarity score (BioMedCLIPScore) to compare the generated image with the reference image. We find that MedMax fine-tuning improves multimodal content generation capabilities in the biomedical domain.
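As an illustration of the image-image similarity step, here is a minimal sketch that embeds two images with the publicly released BiomedCLIP checkpoint via open_clip and takes their cosine similarity. The checkpoint name, file names, and scoring details are assumptions on our part; the paper's exact BioMedCLIPScore pipeline may differ.

```python
# Minimal sketch: image-image similarity with BiomedCLIP (assumes the
# open_clip_torch package and the public BiomedCLIP checkpoint on the HF hub;
# the exact BioMedCLIPScore pipeline used in the paper may differ).
import torch
import open_clip
from PIL import Image

model, preprocess = open_clip.create_model_from_pretrained(
    "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
)
model.eval()

def biomedclip_image_similarity(path_a: str, path_b: str) -> float:
    """Cosine similarity between BiomedCLIP embeddings of two images."""
    images = torch.stack(
        [preprocess(Image.open(p).convert("RGB")) for p in (path_a, path_b)]
    )
    with torch.no_grad():
        feats = model.encode_image(images)
        feats = feats / feats.norm(dim=-1, keepdim=True)
    return float(feats[0] @ feats[1])

# Hypothetical usage: compare a generated image against its reference.
# sim = biomedclip_image_similarity("generated.png", "reference.png")
```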

Performance on visual chat

Performance on the visual chat task. We find that the chatting capabilities of our model are competitive, suggesting its ability to answer novel queries about biomedical images.

BibTeX

@misc{bansal2024medmaxmixedmodalinstructiontuning,
      title={MedMax: Mixed-Modal Instruction Tuning for Training Biomedical Assistants}, 
      author={Hritik Bansal and Daniel Israel and Siyan Zhao and Shufan Li and Tung Nguyen and Aditya Grover},
      year={2024},
      eprint={2412.12661},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2412.12661}, 
}