Publications
Conference & Journal Papers
2024
Explanatory Instructions: Towards Unified Vision Tasks Understanding and Zero-shot Generalization.
Y. Shen, X.-S. Wei†, Y. Sun†, Y. Song, T. Yuan, J. Jin, H. Xu, Y. Yao, and E. Ding.
arXiv preprint arXiv:2412.18525.
UniCanvas: Unified Real Image Editing via Customized Text-to-Image Generation.
J. Jin, Y. Shen, X. Zhao, Z. Fu†, and J. Yang†.
International Journal of Computer Vision (IJCV), in press.
Delving Deep into Simplicity Bias for Long-Tailed Image Recognition.
X.-S. Wei†*, X. Sun*, Y. Shen, A. Xu, P. Wang, F. Zhang.
International Journal of Computer Vision (IJCV), in press.
Prune and Merge: Efficient Token Compression for Vision Transformer with Spatial Information Preserved.
J. Mao, Y. Shen, J. Guo, Y. Yao†, X. Hua, and H. Shen.
IEEE Transactions on Multimedia, 2024, in press.
Equiangular Basis Vectors: A Novel Paradigm for Classification Tasks.
Y. Shen, X. Sun, X.-S. Wei†, A. Xu, and L. Gao.
International Journal of Computer Vision (IJCV), 2024, Vol. 133, pp. 372–397.
Customized Generation Reimagined: Fidelity and Editability Harmonized.
J. Jin, Y. Shen, Z. Fu†, and J. Yang†.
European Conference on Computer Vision (ECCV’24), Milan, Italy, 2024, pp. 410–426.
Few-Shot Open-Set Recognition via Pairwise Discriminant Aggregation.
J. Jin, Y. Shen, Z. Fu†, and J. Yang†.
Neurocomputing, 2024, Article No. 128214.
2023
Equiangular Basis Vectors.
(This work was the winning solution of the 2022 DIGIX Global AI Challenge.)
Y. Shen, X. Sun, and X.-S. Wei†.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR’23), Vancouver, Canada, 2023, pp. 11755–11765. (Acceptance Rate: 2360/9155=25.8%)
Attribute-Aware Deep Hashing with Self-Consistency for Large-Scale Fine-Grained Image Retrieval.
X.-S. Wei, Y. Shen, X. Sun, P. Wang, Y. Peng†.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023. DOI: 10.1109/TPAMI.2023.3299563.
Attribute-Aware Knowledge-Based Self-Abduction for Semi-Supervised Representation Learning, in Chinese.
Y. Shen, X. Sun, H. Yang, X.-S. Wei†.
Science China Information Sciences (中国科学:信息科学), 2023. DOI: 10.1360/SSI-2023-0252.
Hawkeye: A PyTorch-based Library for Fine-Grained Image Recognition with Deep Learning.
J. He, Y. Shen, X.-S. Wei†, Y. Wu.
ACM International Conference on Multimedia (ACM MM’23), Open Source Competition, 2023, pp. 9656–9659.
2022
SEMICON: A Learning-to-Hash Solution for Large-Scale Fine-Grained Image Retrieval.
Y. Shen, X. Sun, X.-S. Wei†, Q.-Y. Jiang, and J. Yang.
European Conference on Computer Vision (ECCV’22), Tel Aviv, Israel, 2022, pp. 531–548. (Acceptance Rate: 1650/5803=28%)
Open-Set Object Detection Based on Annular Prototype Space Optimization, in Chinese.
X. Sun, Y. Shen, X.-S. Wei†, P. An.
Journal of Image and Graphics (中国图象图形学报), 2023. DOI: 10.11834/jig.220992.
When Large Kernel Meets Vision Transformer: A Solution for SnakeCLEF & FungiCLEF.
Y. Shen†, X. Sun, Z. Zhu.
Working Notes of CVPR 2022–FGVC9–CLEF 2022.
A Channel Mix Method for Fine-Grained Cross-Modal Retrieval.
Y. Shen, X. Sun, X.-S. Wei†, H. Hu, and Z. Chen.
IEEE International Conference on Multimedia and Expo (ICME’22), Taipei, Taiwan, 2022. DOI: 10.1109/ICME52920.2022.9859609.
Webly-Supervised Fine-Grained Recognition with Partial Label Learning.
Y.-Y. Xu*, Y. Shen*, X.-S. Wei†, and J. Yang.
International Joint Conference on Artificial Intelligence (IJCAI’22), Vienna, Austria, 2022, pp. 1502–1508. (Acceptance Rate: 681/4535=15.02%)
Automatic Check-Out via Prototype-Based Classifier Learning from Single-Product Exemplars.
H. Chen, X.-S. Wei†, F. Zhang, Y. Shen, H. Xu, and L. Xiao.
European Conference on Computer Vision (ECCV’22), Tel Aviv, Israel, 2022, pp. 277–293. (Acceptance Rate: 1650/5803=28%)
2021
A^2-Net: Learning Attribute-Aware Hash Codes for Large-Scale Fine-Grained Image Retrieval.
X.-S. Wei*, Y. Shen*, X. Sun, H.-J. Ye, and J. Yang†.
Neural Information Processing Systems (NeurIPS’21), Virtual, 2021, pp. 5720–5730. (Spotlight Presentation; Acceptance Rate: 282/2334≈12.1%)
Other Papers
Leaders Matter: Knowledge Sharing in a Medical Discussion Forum in China.
Y. Fu, Y. Shen.
Large-Scale Fine-Grained Image Retrieval Based on Deep Hashing, in Chinese.
X.-S. Wei, Y. Shen.
Newsletter of the CCF Technical Committee on Computer Vision (中国计算机学会计算机视觉专委会简报), 2021, 1, pp. 14–15.
Note: † denotes corresponding author; * denotes equal contribution.