
Belinda Zeng
Verified email at amazon.com
Title · Cited by · Year
Vision-language pre-training with triple contrastive learning
J Yang, J Duan, S Tran, Y Xu, S Chanda, L Chen, B Zeng, T Chilimbi, ...
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2022
Cited by 440 · 2022
Multi-modal alignment using representation codebook
J Duan, L Chen, S Tran, J Yang, Y Xu, B Zeng, T Chilimbi
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 97 · 2022
Understanding and constructing latent modality structures in multi-modal representation learning
Q Jiang, C Chen, H Zhao, L Chen, Q Ping, SD Tran, Y Xu, B Zeng, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 86 · 2023
Why do we need large batchsizes in contrastive learning? a gradient-bias perspective
C Chen, J Zhang, Y Xu, L Chen, J Duan, Y Chen, S Tran, B Zeng, ...
Advances in Neural Information Processing Systems 35, 33860-33875, 2022
Cited by 60 · 2022
CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models
A Muhamed, I Keivanloo, S Perera, J Mracek, Y Xu, Q Cui, S Rajagopalan, ...
NeurIPS Efficient Natural Language and Speech Processing Workshop, 2021
Cited by 60 · 2021
Graph-aware language model pre-training on a large graph corpus can help multiple graph applications
H Xie, D Zheng, J Ma, H Zhang, VN Ioannidis, X Song, Q Ping, S Wang, ...
Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and …, 2023
Cited by 56 · 2023
Reaugkd: Retrieval-augmented knowledge distillation for pre-trained language models
J Zhang, A Muhamed, A Anantharaman, G Wang, C Chen, K Zhong, ...
The 61st Annual Meeting of the Association for Computational Linguistics 2, 2023
Cited by 33 · 2023
Simpler, faster, stronger: Breaking the log-k curse on contrastive learners with flatnce
J Chen, Z Gan, X Li, Q Guo, L Chen, S Gao, T Chung, Y Xu, B Zeng, W Lu, ...
arXiv preprint arXiv:2107.01152, 2021
Cited by 28 · 2021
Efficient and effective training of language and graph neural network models
VN Ioannidis, X Song, D Zheng, H Zhang, J Ma, Y Xu, B Zeng, T Chilimbi, ...
arXiv preprint arXiv:2206.10781, 2022
Cited by 23 · 2022
Robust multi-task learning with excess risks
Y He, S Zhou, G Zhang, H Yun, Y Xu, B Zeng, T Chilimbi, H Zhao
arXiv preprint arXiv:2402.02009, 2024
Cited by 21 · 2024
Magic pyramid: Accelerating inference with early exiting and token pruning
X He, I Keivanloo, Y Xu, X He, B Zeng, S Rajagopalan, T Chilimbi
arXiv preprint arXiv:2111.00230, 2021
Cited by 19 · 2021
Diffusion models for multi-task generative modeling
C Chen, H Ding, B Sisman, Y Xu, O Xie, BZ Yao, SD Tran, B Zeng
The Twelfth International Conference on Learning Representations, 2024
Cited by 14 · 2024
Web-scale semantic product search with large language models
A Muhamed, S Srinivasan, CH Teo, Q Cui, B Zeng, T Chilimbi, ...
Pacific-Asia Conference on Knowledge Discovery and Data Mining, 73-85, 2023
Cited by 13 · 2023
Top-down attention in end-to-end spoken language understanding
Y Chen, W Lu, A Mottini, LE Li, J Droppo, Z Du, B Zeng
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 12 · 2021
Vidla: Video-language alignment at scale
MN Rizve, F Fei, J Unnikrishnan, S Tran, BZ Yao, B Zeng, M Shah, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
Cited by 10 · 2024
Diffusion models for multi-modal generative modeling
C Chen, H Ding, B Sisman, Y Xu, O Xie, BZ Yao, SD Tran, B Zeng
arXiv preprint arXiv:2407.17571, 2024
Cited by 8 · 2024
Mlim: Vision-and-language model pre-training with masked language and image modeling
T Arici, MS Seyfioglu, T Neiman, Y Xu, S Tran, T Chilimbi, B Zeng, I Tutar
arXiv preprint arXiv:2109.12178, 2021
Cited by 8 · 2021
Vision-language pre-training with triple contrastive learning
J Yang, J Duan, S Tran, Y Xu, S Chanda, L Chen, B Zeng, TM Chilimbi, ...
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 15650-15659, 2022
Cited by 6 · 2022
Osscse: Overcoming surface structure bias in contrastive learning for unsupervised sentence embedding
Z Shi, G Wang, K Bai, J Li, X Li, Q Cui, B Zeng, T Chilimbi, X Zhu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023
Cited by 5 · 2023
Asynchronous convergence in multi-task learning via knowledge distillation from converged tasks
W Liu, S Rajagopalan, P Nigam, J Singh, X Sun, Y Xu, B Zeng, T Chilimbi
Proceedings of the 2022 conference of the North American chapter of the …, 2022
Cited by 4 · 2022
Articles 1–20