
Mert Pilanci
Title · Cited by · Year
Newton sketch: A near linear-time optimization algorithm with linear-quadratic convergence
M Pilanci, MJ Wainwright
SIAM Journal on Optimization 27 (1), 205-245, 2017
Cited by 417 · 2017
Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares
M Pilanci, MJ Wainwright
Journal of Machine Learning Research 17 (53), 1-38, 2016
Cited by 267 · 2016
Randomized sketches for kernels: Fast and optimal nonparametric regression
Y Yang, M Pilanci, MJ Wainwright
The Annals of Statistics 45 (3), 991-1023, 2017
Cited by 228 · 2017
Randomized sketches of convex programs with sharp guarantees
M Pilanci, MJ Wainwright
IEEE Transactions on Information Theory 61 (9), 5096-5115, 2015
Cited by 219 · 2015
Neural networks are convex regularizers: Exact polynomial-time convex optimization formulations for two-layer networks
M Pilanci, T Ergen
International Conference on Machine Learning, 7695-7705, 2020
Cited by 144 · 2020
Sparse learning via Boolean relaxations
M Pilanci, MJ Wainwright, L El Ghaoui
Mathematical Programming 151 (1), 63-87, 2015
Cited by 107 · 2015
Revealing the structure of deep neural networks via convex duality
T Ergen, M Pilanci
International Conference on Machine Learning, 3004-3014, 2021
Cited by 85 · 2021
Convex geometry and duality of over-parameterized neural networks
T Ergen, M Pilanci
Journal of Machine Learning Research 22 (212), 1-63, 2021
Cited by 75 · 2021
Recovery of sparse probability measures via convex programming
M Pilanci, L El Ghaoui, V Chandrasekaran
Advances in Neural Information Processing Systems 25, 2012
Cited by 68 · 2012
The hidden convex optimization landscape of two-layer ReLU neural networks: An exact characterization of the optimal solutions
Y Wang, J Lacotte, M Pilanci
arXiv preprint arXiv:2006.05900, 2020
Cited by 58 · 2020
Implicit convex regularizers of CNN architectures: Convex optimization of two- and three-layer networks in polynomial time
T Ergen, M Pilanci
arXiv preprint arXiv:2006.14798, 2020
Cited by 57 · 2020
Vector-output ReLU neural network problems are copositive programs: Convex analysis of two-layer networks and polynomial-time algorithms
A Sahiner, T Ergen, J Pauly, M Pilanci
arXiv preprint arXiv:2012.13329, 2020
Cited by 52 · 2020
Global optimality beyond two layers: Training deep ReLU networks via convex programs
T Ergen, M Pilanci
International Conference on Machine Learning, 2993-3003, 2021
Cited by 49 · 2021
Demystifying batch normalization in ReLU networks: Equivalent convex optimization models and implicit regularization
T Ergen, A Sahiner, B Ozturkler, J Pauly, M Mardani, M Pilanci
arXiv preprint arXiv:2103.01499, 2021
Cited by 46 · 2021
Compressing large language models using low rank and low precision decomposition
R Saha, N Sagan, V Srivastava, A Goldsmith, M Pilanci
Advances in Neural Information Processing Systems 37, 88981-89018, 2024
Cited by 44 · 2024
Unraveling attention via convex duality: Analysis and interpretations of vision transformers
A Sahiner, T Ergen, B Ozturkler, J Pauly, M Mardani, M Pilanci
International Conference on Machine Learning, 19050-19088, 2022
Cited by 44 · 2022
Fast convex optimization for two-layer ReLU networks: Equivalent model classes and cone decompositions
A Mishkin, A Sahiner, M Pilanci
International Conference on Machine Learning, 15770-15816, 2022
Cited by 44 · 2022
Riemannian preconditioned LoRA for fine-tuning foundation models
F Zhang, M Pilanci
arXiv preprint arXiv:2402.02347, 2024
Cited by 43 · 2024
Matrix compression via randomized low rank and low precision factorization
R Saha, V Srivastava, M Pilanci
Advances in Neural Information Processing Systems 36, 18828-18872, 2023
Cited by 41 · 2023
Newton-LESS: Sparsification without trade-offs for the sketched Newton update
M Derezinski, J Lacotte, M Pilanci, MW Mahoney
Advances in Neural Information Processing Systems 34, 2835-2847, 2021
Cited by 41 · 2021
Articles 1–20