| Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity C Wang*, X Liu*, Y Yue*, X Tang, T Zhang, C Jiayang, Y Yao, W Gao, ... ACM Computing Surveys (*equal contribution), 2023 | 343 | 2023 |
| Evaluating Open-QA Evaluation C Wang, S Cheng, Q Guo, Y Yue, B Ding, Z Xu, Y Wang, X Hu, Z Zhang, ... NeurIPS 2023, 2023 | 113 | 2023 |
| SuperGPQA: Scaling LLM Evaluation Across 285 Graduate Disciplines X Du, Y Yao, K Ma, B Wang, T Zheng, K Zhu, M Liu, Y Liang, X Jin, Z Wei, ... NeurIPS 2025, 2025 | 88 | 2025 |
| Distilling Instruction-Following Abilities of Large Language Models with Task-Aware Curriculum Planning Y Yue, C Wang, J Huang, P Wang EMNLP 2024 Findings, 2024 | 13 | 2024 |
| Building a Family of Data Augmentation Models for Low-Cost LLM Fine-Tuning on the Cloud Y Yue*, C Wang*, J Huang, P Wang COLING 2025 Oral, 2024 | 3 | 2024 |
| EasyDistill: A Comprehensive Toolkit for Effective Knowledge Distillation of Large Language Models C Wang, J Yan, W Cai, Y Yue, J Huang EMNLP 2025, 2025 | 2 | 2025 |
| DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models C Wang, J Yan, Y Yue, J Huang ACL 2025, 2025 | 2 | 2025 |