| Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate C Jin, T Che, H Peng, Y Li, DN Metaxas, M Pavone NeurIPS2024, 2024 | 59 | 2024 |
| APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking C Jin, H Peng, S Zhao, Z Wang, W Xu, L Han, J Zhao, K Zhong, ... WWW2025-RelWeb (Best Paper Award), 2025 | 53 | 2025 |
| Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective C Jin, T Huang, Y Zhang, M Pechenizkiy, S Liu, S Liu, T Chen AAAI2025, 2025 | 29 | 2025 |
| Two heads are better than one: Test-time scaling of multi-agent collaborative reasoning C Jin, H Peng, Q Zhang, Y Tang, DN Metaxas, T Che NeurIPS2025-SEA, 2025 | 23 | 2025 |
| RankFlow: A Multi-Role Collaborative Reranking Workflow Utilizing Large Language Models C Jin, H Peng, A Zhang, N Chen, J Zhao, X Xie, K Li, S Feng, K Zhong, ... WWW2025-RelWeb, 2025 | 17 | 2025 |
| LoR-VP: Low-Rank Visual Prompting for Efficient Vision Model Adaptation C Jin, Y Li, M Zhao, S Zhao, Z Wang, X He, L Han, T Che, DN Metaxas ICLR2025, 2025 | 14 | 2025 |
| Graph Canvas for Controllable 3D Scene Generation L Liu, S Chen, S Jia, J Shi, Z Jiang, C Jin, W Zongkai, JN Hwang, L Li ACM MM2025, 2025 | 8 | 2025 |
| LED: LLM Enhanced Open-Vocabulary Object Detection without Human Curated Data Generation Y Zhou, S Zhao, Y Chen, Z Wang, C Jin, DN Metaxas arXiv preprint arXiv:2503.13794, 2025 | 7 | 2025 |
| Your Reward Function for RL is Your Best PRM for Search: Unifying RL and Search-Based TTS C Jin, Y Zhou, Q Zhang, H Peng, D Zhang, M Pavone, L Han, ZW Hong, ... arXiv preprint arXiv:2508.14313, 2025 | 6 | 2025 |
| M^3-Bench: Multi-Modal, Multi-Hop, Multi-Threaded Tool-Using MLLM Agent Benchmark Y Zhou, M Zhao, Z Wang, D Gu, B Guo, R Ye, L Han, C Jin, DN Metaxas arXiv preprint arXiv:2511.17729, 2025 | 1 | 2025 |
| EPO: Entropy-Regularized Policy Optimization for LLM Agents Reinforcement Learning W Xu, W Zhao, Z Wang, YJ Li, C Jin, M Jin, K Mei, K Wan, DN Metaxas arXiv preprint arXiv:2509.22576, 2025 | 1 | 2025 |
| Reasoning over Precedents Alongside Statutes: Case-Augmented Deliberative Alignment for LLM Safety C Jin, R Wu, T Che, Q Zhang, H Peng, J Zhao, Z Wang, W Wei, L Han, ... arXiv preprint arXiv:2601.08000, 2026 | | 2026 |
| Sparsity-Controllable Dynamic Top-p MoE for Large Foundation Model Pre-training C Jin, H Peng, M Xiang, Q Zhang, X Yuan, A Hasan, O Dibua, Y Gong, ... arXiv preprint arXiv:2512.13996, 2025 | | 2025 |
| MHB: Multimodal Handshape-aware Boundary Detection for Continuous Sign Language Recognition M Zhao, Z Yang, Y Zhou, Z Xia, C Jin, X He, DN Metaxas arXiv preprint arXiv:2511.19907, 2025 | | 2025 |
| Mitigating Forgetting Between Supervised and Reinforcement Learning Yields Stronger Reasoners X Yuan, X Chen, T Yu, D Shi, C Jin, W Lee, S Mitra arXiv preprint arXiv:2510.04454, 2025 | | 2025 |
| Effective Policy Learning for Multi-Agent Online Coordination Beyond Submodular Objectives Q Zhang, Y Sun, C Jin, X Zhang, Y Shu, P Zhao, L Shen, D Tao NeurIPS2025, 2025 | | 2025 |
| Multinoulli Extension: A Lossless Yet Effective Probabilistic Framework for Subset Selection over Partition Constraints Q Zhang, W Huang, C Jin, P Zhao, Y Shu, L Shen, D Tao ICML2025, 2025 | | 2025 |