
Runxin Xu
DeepSeek AI | Peking University
Verified email at stu.pku.edu.cn - Homepage
Title · Cited by · Year
Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning
D Guo, D Yang, H Zhang, J Song, R Zhang, R Xu, Q Zhu, S Ma, P Wang, ...
arXiv preprint arXiv:2501.12948, 2025
7770* · 2025
Deepseekmath: Pushing the limits of mathematical reasoning in open language models
Z Shao, P Wang, Q Zhu, R Xu, J Song, X Bi, H Zhang, M Zhang, YK Li, ...
arXiv preprint arXiv:2402.03300, 2024
3978* · 2024
Deepseek-v3 technical report
A Liu, B Feng, B Xue, B Wang, B Wu, C Lu, C Zhao, C Deng, C Zhang, ...
arXiv preprint arXiv:2412.19437, 2024
3264 · 2024
Deepseek llm: Scaling open-source language models with longtermism
X Bi, D Chen, G Chen, S Chen, D Dai, C Deng, H Ding, K Dong, Q Du, ...
arXiv preprint arXiv:2401.02954, 2024
752 · 2024
Math-shepherd: Verify and reinforce llms step-by-step without human annotations
P Wang, L Li, Z Shao, R Xu, D Dai, Y Li, D Chen, Y Wu, Z Sui
Proceedings of the 62nd Annual Meeting of the Association for Computational …, 2024
718* · 2024
Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models
D Dai, C Deng, C Zhao, RX Xu, H Gao, D Chen, J Li, W Zeng, X Yu, Y Wu, ...
arXiv preprint arXiv:2401.06066, 2024
710 · 2024
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
DS AI
678* · 2024
Deepseek-coder-v2: Breaking the barrier of closed-source models in code intelligence
Q Zhu, D Guo, Z Shao, D Yang, P Wang, R Xu, Y Wu, Y Li, H Gao, S Ma, ...
arXiv preprint arXiv:2406.11931, 2024
404 · 2024
Double graph based reasoning for document-level relation extraction
S Zeng, R Xu, B Chang, L Li
arXiv preprint arXiv:2009.13752, 2020
317 · 2020
Raise a child in large language model: Towards effective and generalizable fine-tuning
R Xu, F Luo, Z Zhang, C Tan, B Chang, S Huang, F Huang
arXiv preprint arXiv:2109.05687, 2021
260 · 2021
Omni-math: A universal olympiad level mathematic benchmark for large language models
B Gao, F Song, Z Yang, Z Cai, Y Miao, Q Dong, L Li, C Ma, L Chen, R Xu, ...
arXiv preprint arXiv:2410.07985, 2024
236 · 2024
Multimodal arxiv: A dataset for improving scientific comprehension of large vision-language models
L Li, Y Wang, R Xu, P Wang, X Feng, L Kong, Q Liu
arXiv preprint arXiv:2403.00231, 2024
192 · 2024
Document-level event extraction via heterogeneous graph-based interaction model with a tracker
R Xu, T Liu, L Li, B Chang
arXiv preprint arXiv:2105.14924, 2021
149 · 2021
Inference-time scaling for generalist reward modeling
Z Liu, P Wang, R Xu, S Ma, C Ruan, P Li, Y Liu, Y Wu
arXiv preprint arXiv:2504.02495, 2025
131* · 2025
A two-stream AMR-enhanced model for document-level event argument extraction
R Xu, P Wang, T Liu, S Zeng, B Chang, Z Sui
arXiv preprint arXiv:2205.00241, 2022
87 · 2022
An enhanced span-based decomposition method for few-shot sequence labeling
P Wang, R Xu, T Liu, Q Zhou, Y Cao, B Chang, Z Sui
Proceedings of the 2022 Conference of the North American Chapter of the …, 2022
73 · 2022
Llm critics help catch bugs in mathematics: Towards a better mathematical verifier with natural language feedback
B Gao, Z Cai, R Xu, P Wang, C Zheng, R Lin, K Lu, D Liu, C Zhou, W Xiao, ...
Findings of the Association for Computational Linguistics: ACL 2025, 14588-14604, 2025
48* · 2025
Making pre-trained language models end-to-end few-shot learners with contrastive prompt tuning
Z Xu, C Wang, M Qiu, F Luo, R Xu, S Huang, J Huang
Proceedings of the sixteenth ACM international conference on web search and …, 2023
43 · 2023
From dense to sparse: Contrastive pruning for better pre-trained language model compression
R Xu, F Luo, C Wang, B Chang, J Huang, S Huang, F Huang
Proceedings of the AAAI Conference on Artificial Intelligence 36 (10), 11547 …, 2022
40 · 2022
Codei/o: Condensing reasoning patterns via code input-output prediction
J Li, D Guo, D Yang, R Xu, Y Wu, J He
arXiv preprint arXiv:2502.07316, 2025
33 · 2025
Articles 1–20