| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Decision Transformer: Reinforcement Learning via Sequence Modeling | L Chen, K Lu, A Rajeswaran, K Lee, A Grover, M Laskin, P Abbeel, ... | Advances in Neural Information Processing Systems (NeurIPS) 34, 15084-15097 | 2621 | 2021 |
| Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations | A Rajeswaran, V Kumar, A Gupta, G Vezzani, J Schulman, E Todorov, ... | Robotics: Science and Systems (RSS) | 1515 | 2018 |
| Meta-Learning with Implicit Gradients | A Rajeswaran, C Finn, S Kakade, S Levine | Advances in Neural Information Processing Systems (NeurIPS) | 1152 | 2019 |
| MOReL: Model-Based Offline Reinforcement Learning | R Kidambi, A Rajeswaran, P Netrapalli, T Joachims | Advances in Neural Information Processing Systems (NeurIPS) | 989 | 2020 |
| R3M: A Universal Visual Representation for Robot Manipulation | S Nair, A Rajeswaran, V Kumar, C Finn, A Gupta | arXiv preprint arXiv:2203.12601 | 833 | 2022 |
| Online Meta-Learning | C Finn, A Rajeswaran, S Kakade, S Levine | International Conference on Machine Learning (ICML) | 612 | 2019 |
| COMBO: Conservative Offline Model-Based Policy Optimization | T Yu, A Kumar, R Rafailov, A Rajeswaran, S Levine, C Finn | Advances in Neural Information Processing Systems (NeurIPS) 34, 28954-28967 | 583 | 2021 |
| EPOpt: Learning Robust Neural Network Policies Using Model Ensembles | A Rajeswaran, S Ghotra, B Ravindran, S Levine | International Conference on Learning Representations (ICLR) | 478 | 2017 |
| Towards Generalization and Simplicity in Continuous Control | A Rajeswaran, K Lowrey, EV Todorov, SM Kakade | Advances in Neural Information Processing Systems (NeurIPS) 30 | 398 | 2017 |
| Identifying Topology of Low Voltage Distribution Networks Based on Smart Meter Data | SJ Pappu, N Bhatt, R Pasumarthy, A Rajeswaran | IEEE Transactions on Smart Grid 9 (5), 5113-5122 | 322 | 2017 |
| Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control | K Lowrey, A Rajeswaran, S Kakade, E Todorov, I Mordatch | International Conference on Learning Representations (ICLR) | 318 | 2019 |
| Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low-Cost | H Zhu, A Gupta, A Rajeswaran, S Levine, V Kumar | International Conference on Robotics and Automation (ICRA) | 312 | 2019 |
| OpenEQA: Embodied Question Answering in the Era of Foundation Models | A Majumdar, A Ajay, X Zhang, P Putta, S Yenamandra, M Henaff, S Silwal, ... | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern … | 255 | 2024 |
| The Unsurprising Effectiveness of Pre-Trained Vision Models for Control | S Parisi, A Rajeswaran, S Purushwalkam, A Gupta | International Conference on Machine Learning (ICML), 17359-17371 | 250 | 2022 |
| Where Are We in the Search for an Artificial Visual Cortex for Embodied Intelligence? | A Majumdar, K Yadav, S Arnaud, J Ma, C Chen, S Silwal, A Jain, ... | Advances in Neural Information Processing Systems (NeurIPS) 36, 655-677 | 240 | 2023 |
| Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines | C Wu, A Rajeswaran, Y Duan, V Kumar, AM Bayen, S Kakade, I Mordatch, ... | International Conference on Learning Representations (ICLR) | 201 | 2018 |
| Offline Reinforcement Learning from Images with Latent Space Models | R Rafailov, T Yu, A Rajeswaran, C Finn | Learning for Dynamics and Control (L4DC), 1154-1168 | 179 | 2021 |
| A Game Theoretic Framework for Model Based Reinforcement Learning | A Rajeswaran, I Mordatch, V Kumar | International Conference on Machine Learning (ICML), 7953-7963 | 171 | 2020 |
| Unsupervised Reinforcement Learning with Contrastive Intrinsic Control | M Laskin, H Liu, XB Peng, D Yarats, A Rajeswaran, P Abbeel | Advances in Neural Information Processing Systems (NeurIPS) 35, 34478-34491 | 145* | 2022 |
| Divide-and-Conquer Reinforcement Learning | D Ghosh, A Singh, A Rajeswaran, V Kumar, S Levine | International Conference on Learning Representations (ICLR) | 145 | 2018 |