
Mingrui Liu
Title
Cited by
Year
Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning
H Rafique, M Liu, Q Lin, T Yang
Optimization Methods and Software 37 (3), 1087-1121, 2022
Cited by 315 (2022)
First-order convergence theory for weakly-convex-weakly-concave min-max problems
M Liu, H Rafique, Q Lin, T Yang
Journal of Machine Learning Research 22 (169), 1-34, 2021
Cited by 144* (2021)
Understanding AdamW through proximal methods and scale-freeness
Z Zhuang, M Liu, A Cutkosky, F Orabona
arXiv preprint arXiv:2202.00089, 2022
Cited by 139 (2022)
Stochastic AUC Maximization with Deep Neural Networks
M Liu, Z Yuan, Y Ying, T Yang
International Conference on Learning Representations 2020, 2019
Cited by 125 (2019)
Improved Schemes for Episodic Memory-based Lifelong Learning
Y Guo*, M Liu*, T Yang, T Rosing
Advances in Neural Information Processing Systems 33, 2020
Cited by 120 (2020)
Robustness to unbounded smoothness of generalized SignSGD
M Crawshaw, M Liu, F Orabona, W Zhang, Z Zhuang
Advances in neural information processing systems 35, 9955-9968, 2022
Cited by 117 (2022)
A decentralized parallel algorithm for training generative adversarial nets
M Liu, W Zhang, Y Mroueh, X Cui, J Ross, T Yang, P Das
Advances in Neural Information Processing Systems 33, 11056-11070, 2020
Cited by 99 (2020)
Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets
M Liu, Y Mroueh, J Ross, W Zhang, X Cui, P Das, T Yang
International Conference on Learning Representations 2020, 2019
Cited by 87 (2019)
Fast Stochastic AUC Maximization with O(1/n)-Convergence Rate
M Liu, X Zhang, Z Chen, X Wang, T Yang
International Conference on Machine Learning, 3189-3197, 2018
Cited by 76 (2018)
ADMM without a fixed penalty parameter: Faster convergence with new adaptive penalization
Y Xu, M Liu, Q Lin, T Yang
Advances in neural information processing systems 30, 2017
Cited by 73 (2017)
Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks
Z Guo, M Liu, Z Yuan, L Shen, W Liu, T Yang
International Conference on Machine Learning 2020, 2020
Cited by 54 (2020)
Will bilevel optimizers benefit from loops
K Ji, M Liu, Y Liang, L Ying
Advances in Neural Information Processing Systems 35, 3011-3023, 2022
Cited by 52 (2022)
Adam+: A Stochastic Method with Adaptive Variance Reduction
M Liu, W Zhang, F Orabona, T Yang
arXiv preprint arXiv:2011.11985, 2020
Cited by 47 (2020)
A communication-efficient distributed gradient clipping algorithm for training deep neural networks
M Liu, Z Zhuang, Y Lei, C Liao
Advances in Neural Information Processing Systems 35, 26204-26217, 2022
Cited by 46 (2022)
Generalization guarantee of SGD for pairwise learning
Y Lei, M Liu, Y Ying
Advances in neural information processing systems 34, 21216-21228, 2021
Cited by 45 (2021)
Adaptive negative curvature descent with applications in non-convex optimization
M Liu, Z Li, X Wang, J Yi, T Yang
Advances in Neural Information Processing Systems, 4853-4862, 2018
Cited by 43* (2018)
Bilevel coreset selection in continual learning: A new formulation and algorithm
J Hao, K Ji, M Liu
Advances in Neural Information Processing Systems 36, 51026-51049, 2023
Cited by 42 (2023)
Adaptive accelerated gradient converging methods under Hölderian error bound condition
M Liu, T Yang
Advances in Neural Information Processing Systems 30, 2016
Cited by 31 (2016)
Spatiotemporal dynamics in a network composed of neurons with different excitabilities and excitatory coupling
WW Xiao, HG Gu, MR Liu
Science China Technological Sciences 59 (12), 1943-1952, 2016
Cited by 30 (2016)
Fast rates of ERM and stochastic approximation: Adaptive to error bound conditions
M Liu, X Zhang, L Zhang, R Jin, T Yang
Advances in Neural Information Processing Systems 30, 2018
Cited by 29 (2018)