| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Fast global convergence of natural policy gradient methods with entropy regularization | S Cen, C Cheng, Y Chen, Y Wei, Y Chi | Operations Research 70 (4), 2563-2578 | 292 | 2022 |
| Breaking the sample size barrier in model-based reinforcement learning with a generative model | G Li, Y Wei, Y Chi, Y Chen | Operations Research 72 (1), 203-221 | 201* | 2024 |
| Sample complexity of asynchronous Q-learning: Sharper analysis and variance reduction | G Li, Y Wei, Y Chi, Y Gu, Y Chen | IEEE Transactions on Information Theory 68 (1), 448-473 | 167 | 2021 |
| Towards faster non-asymptotic convergence for diffusion-based generative models | G Li, Y Wei, Y Chen, Y Chi | arXiv preprint arXiv:2306.09251 | 150* | 2023 |
| Settling the sample complexity of model-based offline reinforcement learning | G Li, L Shi, Y Chen, Y Chi, Y Wei | The Annals of Statistics 52 (1), 233-260 | 143 | 2024 |
| The Lasso with general Gaussian designs with applications to hypothesis testing | M Celentano, A Montanari, Y Wei | The Annals of Statistics 51 (5), 2194-2220 | 142 | 2023 |
| Is Q-learning minimax optimal? A tight sample complexity analysis | G Li, C Cai, Y Chen, Y Wei, Y Chi | Operations Research 72 (1), 222-236 | 140 | 2024 |
| Pessimistic Q-learning for offline reinforcement learning: Towards optimal sample complexity | L Shi, G Li, Y Wei, Y Chen, Y Chi | International Conference on Machine Learning, 19967-20025 | 140 | 2022 |
| Fast policy extragradient methods for competitive games with entropy regularization | S Cen, Y Wei, Y Chi | Advances in Neural Information Processing Systems 34, 27952-27964 | 119 | 2021 |
| Early stopping for kernel boosting algorithms: A general analysis with localized complexities | Y Wei, F Yang, MJ Wainwright | IEEE Transactions on Information Theory 65 (10), 6685-6703 | 107 | 2019 |
| Accelerating convergence of score-based diffusion models, provably | G Li, Y Huang, T Efimov, Y Wei, Y Chi, Y Chen | arXiv preprint arXiv:2403.03852 | 82 | 2024 |
| Softmax Policy Gradient Methods Can Take Exponential Time to Converge | G Li, Y Wei, Y Chi, Y Chen | Mathematical Programming | 79 | 2021 |
| Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification | C Dan, Y Wei, P Ravikumar | International Conference on Machine Learning, 2345-2355 | 77 | 2020 |
| The curious price of distributional robustness in reinforcement learning with a generative model | L Shi, G Li, Y Wei, Y Chen, M Geist, Y Chi | Advances in Neural Information Processing Systems 36, 79903-79917 | 76 | 2023 |
| Uniform Consistency of Cross-Validation Estimators for High-Dimensional Ridge Regression | P Patil, Y Wei, A Rinaldo, R Tibshirani | International Conference on Artificial Intelligence and Statistics, 3178-3186 | 72 | 2021 |
| Derandomizing knockoffs | Z Ren, Y Wei, E Candès | Journal of the American Statistical Association 118 (542), 948-958 | 67 | 2023 |
| A sharp convergence theory for the probability flow ODEs of diffusion models | G Li, Y Wei, Y Chi, Y Chen | arXiv preprint arXiv:2408.02320 | 62 | 2024 |
| Theoretical insights for diffusion guidance: A case study for Gaussian mixture models | Y Wu, M Chen, Z Li, M Wang, Y Wei | arXiv preprint arXiv:2403.01639 | 48 | 2024 |
| Minimum ℓ1-norm interpolators: Precise asymptotics and multiple descent | Y Li, Y Wei | arXiv preprint arXiv:2110.09502 | 47 | 2021 |
| Minimax-optimal multi-agent RL in zero-sum Markov games with a generative model | G Li, Y Chi, Y Wei, Y Chen | Advances in Neural Information Processing Systems | 46* | 2022 |