Accepted by CVPR-2024
Jiateng Shou, Zeyu Xiao, Shiyu Deng, Wei Huang, Peiyao Shi, Ruobing Zhang, Zhiwei Xiong, Feng Wu*
MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences
*Corresponding Author
We provide the code for the two datasets separately.
Code for the AC4 dataset is coming soon.
The three stages of our model are trained separately.
We provide all the training code and configuration files.
Please put all the trained models in the 'pre-train_model' folder.
python output_GPEMSR.py -opt option/output_GPEMSR_x8.yml # for 8x EMSR
python output_GPEMSR.py -opt option/output_GPEMSR_x16.yml # for 16x EMSR
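The `-opt` flag points to a YAML option file. As a rough illustration, such a file can be loaded with standard PyYAML as sketched below; the field names shown are hypothetical, and the actual schema is defined by the files in the `option` folder (e.g. `option/output_GPEMSR_x8.yml`).

```python
# Minimal sketch of reading a `-opt` YAML option file with PyYAML.
# The field names below are illustrative only, not the repository's schema.
import argparse
import yaml

parser = argparse.ArgumentParser()
parser.add_argument('-opt', type=str, required=True, help='path to the YAML option file')
args = parser.parse_args()

with open(args.opt, 'r') as f:
    opt = yaml.safe_load(f)

# Hypothetical fields: the SR scale (8 or 16) and a pre-trained model path.
print(opt.get('scale'))
print(opt.get('pretrain_model_path'))
```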
Coming soon.
Please put the pre-trained segmentation models in the 'pre-train_model' folder.
| Superhuman | MALA |
| --- | --- |
| GoogleDrive | GoogleDrive |
cd ./inference_code
python inference_seg.py -c config/seg_x8_superhuman.yaml # for 8x EMSR segmentation using superhuman model
python inference_seg.py -c config/seg_x8_MALA.yaml # for 8x EMSR segmentation using MALA model
python inference_seg.py -c config/seg_x16_superhuman.yaml # for 16x EMSR segmentation using superhuman model
python inference_seg.py -c config/seg_x16_MALA.yaml # for 16x EMSR segmentation using MALA model
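After inference, you can sanity-check the predicted segmentation volume. The snippet below is only a sketch: it assumes the output is stored as an HDF5 volume, and the file path and dataset key are hypothetical; adapt them to the actual output of `inference_seg.py`.

```python
# Sketch for inspecting a predicted segmentation volume with h5py.
# The file path and dataset key are hypothetical placeholders.
import h5py
import numpy as np

with h5py.File('results/seg_x8_superhuman.hdf', 'r') as f:
    seg = f['main'][:]  # 3D label volume (depth, height, width)

print('shape:', seg.shape, 'dtype:', seg.dtype)
print('number of segments:', len(np.unique(seg)))
```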
We provide pre-trained models for all three stages.
We also provide pre-trained VGG and SPyNet models.
| CREMI | AC4 |
| --- | --- |
| GoogleDrive | Coming Soon |
If you have any problems with the released code, please contact me by email (shoujt@mail.ustc.edu.cn).