MinD-Video is a framework for high-quality video reconstruction from brain recordings.
Cinematic Mindscapes: High-quality Video Reconstruction from Brain Activity.
Zijiao Chen*,
Jiaxin Qing*,
Juan Helen Zhou
* equal contribution
- Sep. 22, 2023. Accepted by NeurIPS 2023 as an oral presentation.
- May. 20, 2023. Preprint release.
Reconstructing human vision from brain activity has been an appealing task that helps us understand our cognitive processes. Although recent research has seen great success in reconstructing static images from non-invasive brain recordings, work on recovering continuous visual experiences in the form of videos remains limited. In this work, we propose MinD-Video, which learns spatiotemporal information from continuous fMRI data of the cerebral cortex progressively through masked brain modeling, multimodal contrastive learning with spatiotemporal attention, and co-training with an augmented Stable Diffusion model that incorporates network temporal inflation. We show that MinD-Video can reconstruct high-quality videos of arbitrary frame rates using adversarial guidance. The recovered videos were evaluated with various semantic- and pixel-level metrics. We achieved an average accuracy of 85% in semantic classification tasks and a structural similarity index (SSIM) of 0.19, outperforming the previous state of the art by 45%. We also show that our model is biologically plausible and interpretable, reflecting established physiological processes.
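As a rough illustration of the multimodal contrastive-learning stage mentioned above, the sketch below aligns fMRI window embeddings with CLIP-style text/frame embeddings under a symmetric InfoNCE loss. This is a minimal, hypothetical sketch, not the released MinD-Video implementation: the `FMRIEncoder` architecture, the voxel count, and the embedding dimension are placeholder assumptions.

```python
# Minimal sketch (assumptions, not the released MinD-Video code) of CLIP-style
# contrastive alignment between fMRI window embeddings and caption/frame embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FMRIEncoder(nn.Module):
    """Toy stand-in for the fMRI encoder pretrained with masked brain modeling."""

    def __init__(self, n_voxels: int, embed_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_voxels, 1024), nn.GELU(), nn.Linear(1024, embed_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so dot products below are cosine similarities.
        return F.normalize(self.net(x), dim=-1)


def contrastive_loss(fmri_emb: torch.Tensor, cond_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss pairing each fMRI window with its own caption/frame."""
    logits = fmri_emb @ cond_emb.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(len(logits), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    enc = FMRIEncoder(n_voxels=4500)                          # voxel count is arbitrary here
    fmri = torch.randn(8, 4500)                               # batch of fMRI windows
    cond = F.normalize(torch.randn(8, 512), dim=-1)           # e.g. CLIP text embeddings
    print(contrastive_loss(enc(fmri), cond).item())
```

In a pipeline like the one described in the abstract, fMRI embeddings aligned this way would then condition the augmented Stable Diffusion generator in place of text embeddings.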
- Some samples are shown below. Our method can reconstruct various objects, animals, motions, and scenes. The reconstructed videos are of high quality and consistent with the ground truth. For more samples, please refer to our website or download them from Google Drive.
- The samples below were generated on a single RTX 3090. Due to GPU memory limitations, they are 2-second clips at 3 FPS and a resolution of 256 × 256. Our method can, however, work with longer brain recordings and reconstruct longer videos at the full frame rate (30 FPS) and higher resolution if more GPU memory is available; a rough scaling sketch follows the samples below.
Sample reconstructions (GT vs. Ours): to be updated.
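To see why memory is the binding constraint, here is a back-of-envelope sketch of how the video latent in a Stable-Diffusion-style generator scales with frame rate and resolution (the standard Stable Diffusion VAE downsamples each frame spatially by 8×). The numbers are illustrative assumptions only; they ignore UNet activations and attention buffers, which dominate actual GPU usage.

```python
# Back-of-envelope sketch (illustrative, not a profiled measurement): the UNet of a
# Stable-Diffusion-style generator works on frames * (H/8) * (W/8) latent positions.
# Per-layer activation memory grows roughly with this count; attention grows faster.
def latent_positions(seconds: float, fps: int, height: int, width: int) -> int:
    """Spatiotemporal latent positions the diffusion UNet has to process."""
    frames = int(seconds * fps)
    return frames * (height // 8) * (width // 8)


if __name__ == "__main__":
    low = latent_positions(2, 3, 256, 256)      # setting used for the samples above
    high = latent_positions(2, 30, 512, 512)    # full frame rate, higher resolution
    print(f"3 FPS @ 256x256:  {low} positions")
    print(f"30 FPS @ 512x512: {high} positions (~{high / low:.0f}x more)")
```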
- Code will be released soon.
@article{chen2023cinematic,
title={Cinematic Mindscapes: High-quality Video Reconstruction from Brain Activity},
author={Chen, Zijiao and Qing, Jiaxin and Zhou, Juan Helen},
journal={arXiv preprint arXiv:2305.11675},
year={2023}
}