| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Lower bounds and optimal algorithms for personalized federated learning | F Hanzely, S Hanzely, S Horváth, P Richtárik | Advances in Neural Information Processing Systems 33, 2304–2315 | 245 | 2020 |
| ZeroSARAH: Efficient nonconvex finite-sum optimization with zero full gradient computation | Z Li, S Hanzely, P Richtárik | arXiv preprint arXiv:2103.01447 | 43 | 2021 |
| A Damped Newton Method Achieves Global and Local Quadratic Convergence Rate | S Hanzely, D Kamzolov, D Pasechnyuk, A Gasnikov, P Richtárik, M Takáč | Advances in Neural Information Processing Systems 35, 25320–25334 | 36 | 2022 |
| Distributed Newton-type methods with communication compression and Bernoulli aggregation | R Islamov, X Qian, S Hanzely, M Safaryan, P Richtárik | arXiv preprint arXiv:2206.03588 | 22 | 2022 |
| Adaptive learning of the optimal mini-batch size of SGD | M Alfarra, S Hanzely, A Albasyoni, B Ghanem, P Richtárik | Workshop on Optimization for Machine Learning, NeurIPS 2020 | 13* | 2020 |
| Sketch-and-Project Meets Newton Method: Global Convergence with Low-Rank Updates | S Hanzely | arXiv preprint arXiv:2305.13082 | 10 | 2023 |
| Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes | K Mishchenko, S Hanzely, P Richtárik | arXiv preprint arXiv:2301.06806 | 8 | 2023 |
| Adaptive Optimization Algorithms for Machine Learning | S Hanzely | arXiv preprint arXiv:2311.10203 | 6 | 2023 |
| ψDAG: Projected Stochastic Approximation Iteration for DAG Structure Learning | K Ziu, S Hanzely, L Li, K Zhang, M Takáč, D Kamzolov | arXiv preprint arXiv:2410.23862 | 4 | 2024 |
| Simple Stepsize for Quasi-Newton Methods with Global Convergence Guarantees | A Agafonov, V Ryspayev, S Horváth, A Gasnikov, M Takáč, S Hanzely | arXiv preprint arXiv:2508.19712 | 1 | 2025 |
| Preconditioned Norms: A Unified Framework for Steepest Descent, Quasi-Newton and Adaptive Methods | A Veprikov, A Bolatov, S Horváth, A Beznosikov, M Takáč, S Hanzely | arXiv preprint arXiv:2510.10777 | | 2025 |
| Loss-Transformation Invariance in the Damped Newton Method | A Shestakov, S Bohara, S Horváth, M Takáč, S Hanzely | arXiv preprint arXiv:2509.25782 | | 2025 |
| Polyak Stepsize: Estimating Optimal Functional Values Without Parameters or Prior Knowledge | F Abdukhakimov, CA Pham, S Horváth, M Takáč, S Hanzely | arXiv preprint arXiv:2508.17288 | | 2025 |
| Newton Method Revisited: Global Convergence Rates up to $\mathcal{O}(k^{-3})$ for Stepsize Schedules and Linesearch Procedures | S Hanzely, F Abdukhakimov, M Takáč | arXiv preprint arXiv:2405.18926 | | 2024 |