| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| What algorithms can transformers learn? A study in length generalization | H Zhou, A Bradley, E Littwin, N Razin, O Saremi, J Susskind, S Bengio, ... | arXiv preprint arXiv:2310.16028 | 213 | 2023 |
| Stabilizing transformer training by preventing attention entropy collapse | S Zhai, T Likhomanenko, E Littwin, D Busbridge, J Ramapuram, Y Zhang, ... | International Conference on Machine Learning, 40770-40803 | 138 | 2023 |
| Tensor Programs IIb: Architectural universality of neural tangent kernel training dynamics | G Yang, E Littwin | International Conference on Machine Learning, 11762-11772 | 91 | 2021 |
| The slingshot mechanism: An empirical study of adaptive optimizers and the grokking phenomenon | V Thilak, E Littwin, S Zhai, O Saremi, R Paiss, J Susskind | arXiv preprint arXiv:2206.04817 | 69 | 2022 |
| Transformers learn through gradual rank increase | E Boix-Adsera, E Littwin, E Abbe, S Bengio, J Susskind | Advances in Neural Information Processing Systems 36, 24519-24551 | 64 | 2023 |
| Biometric authentication techniques | DS Prakash, LE Ballard, JV Hauck, F Tang, E Littwin, PKA Vasu, ... | US Patent 10,929,515 | 57 | 2021 |
| Distillation scaling laws | D Busbridge, A Shidani, F Weers, J Ramapuram, E Littwin, R Webb | arXiv preprint arXiv:2502.08606 | 36 | 2025 |
| Tensor Programs IVb: Adaptive optimization in the infinite-width limit | G Yang, E Littwin | arXiv preprint arXiv:2308.01814 | 36 | 2023 |
| On infinite-width hypernetworks | E Littwin, T Galanti, L Wolf, G Yang | Advances in Neural Information Processing Systems 33, 13226-13237 | 35* | 2020 |
| The multiverse loss for robust transfer learning | E Littwin, L Wolf | Proceedings of the IEEE Conference on Computer Vision and Pattern … | 35 | 2016 |
| Vanishing gradients in reinforcement finetuning of language models | N Razin, H Zhou, O Saremi, V Thilak, A Bradley, P Nakkiran, J Susskind, ... | arXiv preprint arXiv:2310.20703 | 25 | 2023 |
| When can transformers reason with abstract symbols? | E Boix-Adsera, O Saremi, E Abbe, S Bengio, E Littwin, J Susskind | arXiv preprint arXiv:2310.09753 | 24 | 2023 |
| LiDAR: Sensing linear probing performance in joint embedding SSL architectures | V Thilak, C Huang, O Saremi, L Dinh, H Goh, P Nakkiran, JM Susskind, ... | arXiv preprint arXiv:2312.04000 | 20 | 2023 |
| How JEPA avoids noisy features: The implicit bias of deep linear self-distillation networks | E Littwin, O Saremi, M Advani, V Thilak, P Nakkiran, C Huang, J Susskind | Advances in Neural Information Processing Systems 37, 91300-91336 | 19 | 2024 |
| Regularizing by the variance of the activations' sample-variances | E Littwin, L Wolf | Advances in Neural Information Processing Systems 31 | 16 | 2018 |
| The loss surface of residual networks: Ensembles and the role of batch normalization | E Littwin, L Wolf | arXiv preprint arXiv:1611.02525 | 14 | 2016 |
| Collegial ensembles | E Littwin, B Myara, S Sabah, J Susskind, S Zhai, O Golan | Advances in Neural Information Processing Systems 33, 18738-18748 | 12 | 2020 |
| Adaptive optimization in the ∞-width limit | E Littwin, G Yang | The Eleventh International Conference on Learning Representations | 10 | 2023 |
| Learning representation from neural Fisher kernel with low-rank approximation | R Zhang, S Zhai, E Littwin, J Susskind | arXiv preprint arXiv:2202.01944 | 8 | 2022 |
| On random kernels of residual architectures | E Littwin, T Galanti, L Wolf | Uncertainty in Artificial Intelligence, 897-907 | 8 | 2021 |