| Title | Authors | Venue | Citations | Year |
| --- | --- | --- | --- | --- |
| Few-shot generative conversational query rewriting | S Yu, J Liu, J Yang, C Xiong, P Bennett, J Gao, Z Liu | Proceedings of the 43rd International ACM SIGIR Conference on Research and … | 213 | 2020 |
| Few-shot conversational dense retrieval | S Yu, Z Liu, C Xiong, T Feng, Z Liu | Proceedings of the 44th International ACM SIGIR Conference on Research and … | 164 | 2021 |
| VisRAG: Vision-based retrieval-augmented generation on multi-modality documents | S Yu, C Tang, B Xu, J Cui, J Ran, Y Yan, Z Liu, S Wang, X Han, Z Liu, … | arXiv preprint arXiv:2410.10594 | 142 | 2024 |
| Augmentation-adapted retriever improves generalization of language models as generic plug-in | Z Yu, C Xiong, S Yu, Z Liu | arXiv preprint arXiv:2305.17331 | 90 | 2023 |
| RAGEval: Scenario-specific RAG evaluation dataset generation framework | K Zhu, Y Luo, D Xu, Y Yan, Z Liu, S Yu, R Wang, S Wang, Y Li, N Zhang, … | Proceedings of the 63rd Annual Meeting of the Association for Computational … | 58 | 2025 |
| Text matching improves sequential recommendation by reducing popularity biases | Z Liu, S Mei, C Xiong, X Li, S Yu, Z Liu, Y Gu, G Yu | Proceedings of the 32nd ACM International Conference on Information and … | 42 | 2023 |
| Structure-aware language model pretraining improves dense retrieval on structured data | X Li, Z Liu, C Xiong, S Yu, Y Gu, Z Liu, G Yu | arXiv preprint arXiv:2305.19912 | 41 | 2023 |
| RAG-DDR: Optimizing retrieval-augmented generation using differentiable data rewards | X Li, S Mei, Z Liu, Y Yan, S Wang, S Yu, Z Zeng, H Chen, G Yu, Z Liu, … | arXiv preprint arXiv:2410.13509 | 29 | 2024 |
| ActiveRAG: Autonomously knowledge assimilation and accommodation through retrieval-augmented agents | Z Xu, Z Liu, Y Yan, S Wang, S Yu, Z Zeng, C Xiao, Z Liu, G Yu, C Xiong | arXiv preprint arXiv:2402.13547 | 29* | 2024 |
| P3 Ranker: Mitigating the gaps between pre-training and ranking fine-tuning with prompt-based learning and pre-finetuning | X Hu, S Yu, C Xiong, Z Liu, Z Liu, G Yu | Proceedings of the 45th International ACM SIGIR Conference on Research and … | 17 | 2022 |
| CMT in TREC-COVID round 2: Mitigating the generalization gaps from web to special domain search | C Xiong, Z Liu, S Sun, Z Dai, K Zhang, S Yu, Z Liu, H Poon, J Gao, … | arXiv preprint arXiv:2011.01580 | 15 | 2020 |
| OpenMatch-v2: An all-in-one multi-modality PLM-based information retrieval toolkit | S Yu, Z Liu, C Xiong, Z Liu | Proceedings of the 46th International ACM SIGIR Conference on Research and … | 14 | 2023 |
| RankCoT: Refining knowledge for retrieval-augmented generation through ranking chain-of-thoughts | M Wu, Z Liu, Y Yan, X Li, S Yu, Z Zeng, Y Gu, G Yu | arXiv preprint arXiv:2502.17888 | 13 | 2025 |
| Retriever-and-memory: Towards adaptive note-enhanced retrieval-augmented generation | R Wang, D Zha, S Yu, Q Zhao, Y Chen, Y Wang, S Wang, Y Yan, Z Liu, … | arXiv preprint arXiv:2410.08821 | 13 | 2024 |
| Building a coding assistant via the retrieval-augmented language model | X Li, H Wang, Z Liu, S Yu, S Wang, Y Yan, Y Fu, Y Gu, G Yu | ACM Transactions on Information Systems 43 (2), 1-25 | 11 | 2025 |
| Say more with less: Understanding prompt learning behaviors through gist compression | X Li, Z Liu, C Xiong, S Yu, Y Yan, S Wang, G Yu | arXiv preprint arXiv:2402.16058 | 11 | 2024 |
| Learning more effective representations for dense retrieval through deliberate thinking before search | Y Ji, Z Xu, Z Liu, Y Yan, S Yu, Y Li, Z Liu, Y Gu, G Yu, M Sun | arXiv preprint arXiv:2502.12974 | 8 | 2025 |
| Craw4LLM: Efficient web crawling for LLM pretraining | S Yu, Z Liu, C Xiong | arXiv preprint arXiv:2502.13347 | 6 | 2025 |
| Fusion-in-T5: Unifying variant signals for simple and effective document ranking with attention fusion | S Yu, C Fan, C Xiong, D Jin, Z Liu, Z Liu | Proceedings of the 2024 Joint International Conference on Computational … | 6* | 2024 |
| LLM-QE: Improving query expansion by aligning large language models with ranking preferences | S Yao, P Huang, Z Liu, Y Gu, Y Yan, S Yu, G Yu | arXiv preprint arXiv:2502.17057 | 4 | 2025 |