Hongcheng Gao
Incoming PhD at Tsinghua University
Verified email at microsoft.com - Homepage
Title · Cited by · Year
Kimi k1.5: Scaling Reinforcement Learning with LLMs
K Team, A Du, B Gao, B Xing, C Jiang, C Chen, C Li, C Xiao, C Du, C Liao, ...
arXiv preprint arXiv:2501.12599, 2025
Cited by 819* · 2025
Generative pretraining in multimodality
Q Sun*, Q Yu*, Y Cui*, F Zhang*, X Zhang*, Y Wang, H Gao, J Liu, ...
ICLR, 2024
Cited by 415* · 2024
Kimi K2: Open Agentic Intelligence
K Team, Y Bai, Y Bao, G Chen, J Chen, N Chen, R Chen, Y Chen, Y Chen, ...
arXiv preprint arXiv:2507.20534, 2025
Cited by 351 · 2025
Kimi-VL Technical Report
K Team, A Du, B Yin, B Xing, B Qu, B Wang, C Chen, C Zhang, C Du, ...
arXiv preprint arXiv:2504.07491, 2025
Cited by 210* · 2025
Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evaluations
L Yuan, Y Chen, G Cui, H Gao, F Zou, X Cheng, H Ji, Z Liu, M Sun
NeurIPS (Dataset and Benchmark Track) 36, 2023
Cited by 170 · 2023
Spider 2.0: Evaluating Language Models on Real-World Enterprise Text-to-SQL Workflows
F Lei, J Chen, Y Ye, R Cao, D Shin, H Su, Z Suo, H Gao, W Hu, P Yin, ...
ICLR, 2025
Cited by 144 · 2025
Exploring the universal vulnerability of prompt-based learning paradigm
L Xu, Y Chen, G Cui, H Gao, Z Liu
Findings of NAACL, 2022
Cited by 110 · 2022
Why Should Adversarial Perturbations Be Imperceptible? Rethink the Research Paradigm in Adversarial NLP
Y Chen*, H Gao*, G Cui, F Qi, L Huang, Z Liu, M Sun
EMNLP, 2022
Cited by 94 · 2022
GuardReasoner: Towards Reasoning-based LLM Safeguards
Y Liu, H Gao, S Zhai, J Xia, T Wu, Z Xue, Y Chen, K Kawaguchi, J Zhang, ...
arXiv preprint arXiv:2501.18492, 2025
Cited by 60 · 2025
Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows?
R Cao, F Lei, H Wu, J Chen, Y Fu, H Gao, X Xiong, H Zhang, Y Mao, W Hu, ...
NeurIPS (Dataset and Benchmark Track), 2024
Cited by 60 · 2024
Efficient inference for large reasoning models: A survey
Y Liu, J Wu, Y He, H Gao, H Chen, B Bi, J Zhang, Z Huang, B Hooi
arXiv preprint arXiv:2503.23077, 2025
Cited by 42 · 2025
Evaluating the robustness of text-to-image diffusion models against real-world attacks
H Gao, H Zhang, Y Dong, Z Deng
arXiv preprint arXiv:2306.13103, 2023
Cited by 41 · 2023
Universal Prompt Optimizer for Safe Text-to-Image Generation
Z Wu*, H Gao*, Y Wang, X Zhang, S Wang
NAACL, 2024
Cited by 38 · 2024
Efficient detection of LLM-generated texts with a Bayesian surrogate model
Y Miao*, H Gao*, H Zhang, Z Deng
Findings of ACL, 2024
Cited by 32 · 2024
AdaMoE: Token-Adaptive Routing with Null Experts for Mixture-of-Experts Language Models
Z Zeng*, Y Miao*, H Gao, H Zhang, Z Deng
Findings of EMNLP, 2024
Cited by 30 · 2024
GuardReasoner-VL: Safeguarding VLMs via Reinforced Reasoning
Y Liu, S Zhai, M Du, Y Chen, T Cao, H Gao, C Wang, X Li, K Wang, J Fang, ...
NeurIPS, 2025
Cited by 28 · 2025
Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks
Y Chen*, F Qi*, H Gao, Z Liu, M Sun
EMNLP, 2022
Cited by 28 · 2022
Meta-Unlearning on Diffusion Models: Preventing Relearning Unlearned Concepts
H Gao*, T Pang*, C Du, T Hu, Z Deng, M Lin
ICCV, 2025
Cited by 25 · 2025
Is Factuality Decoding a Free Lunch for LLMs? Evaluation on Knowledge Editing Benchmark
B Bi, S Liu, Y Wang, L Mei, J Fang, H Gao, S Ni, X Cheng
ICLR, 2025
Cited by 23* · 2025
Token Merging for Training-Free Semantic Binding in Text-to-Image Synthesis
T Hu, L Li, J van de Weijer, H Gao, FS Khan, J Yang, MM Cheng, K Wang, ...
NeurIPS, 2024
Cited by 22 · 2024
Articles 1–20