First, I would like to express my appreciation for your significant contributions to multimodal medical pre-training.
I have a question regarding the handling of 3D data, such as CT and MRI volumes, in models like *-CLIP and LLaVA-Med, which are designed to accept only 2D inputs. Specifically, how is the 3D vision data processed for these models? For instance, is each slice encoded independently by the 2D vision encoder, with the resulting embeddings then averaged across the slice (depth) dimension via mean pooling?
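To make the question concrete, here is a minimal sketch of the strategy I am asking about: encode each 2D slice separately, then mean-pool over the slice axis. The `encode_2d_slice` function is purely hypothetical, a toy stand-in for the actual CLIP/LLaVA-Med vision encoder:

```python
import numpy as np

def encode_2d_slice(slice_2d: np.ndarray, d: int = 4) -> np.ndarray:
    """Hypothetical stand-in for a 2D vision encoder (e.g., a CLIP image tower).

    Projects the flattened slice with a fixed random matrix to a d-dim embedding.
    """
    rng = np.random.default_rng(0)  # fixed seed so the "encoder" is deterministic
    w = rng.standard_normal((slice_2d.size, d))
    return slice_2d.reshape(-1) @ w

# A toy CT/MRI volume with shape (num_slices, H, W)
volume = np.ones((8, 16, 16))

# Encode each 2D slice independently...
slice_embeddings = np.stack([encode_2d_slice(s) for s in volume])  # shape (8, 4)

# ...then mean-pool across the slice (depth) dimension to get one volume embedding.
volume_embedding = slice_embeddings.mean(axis=0)  # shape (4,)
```

Is this (or something similar, such as sampling a subset of slices) what is done in practice?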
Thank you for your time, and I look forward to your insights!