| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Multi-task reinforcement learning: a hierarchical Bayesian approach | A Wilson, A Fern, S Ray, P Tadepalli | Proceedings of the 24th International Conference on Machine Learning, 1015-1022 | 436 | 2007 |
| A Bayesian approach for policy learning from trajectory preference queries | A Wilson, A Fern, P Tadepalli | Advances in Neural Information Processing Systems 25 | 247 | 2012 |
| Active learning with committees for text categorization | R Liere, P Tadepalli | AAAI/IAAI, 591-596 | 237 | 1997 |
| Dynamic preferences in multi-criteria reinforcement learning | S Natarajan, P Tadepalli | Proceedings of the 22nd International Conference on Machine Learning, 601-608 | 231 | 2005 |
| Relational reinforcement learning: An overview | P Tadepalli, R Givan, K Driessens | Proceedings of the ICML-2004 Workshop on Relational Reinforcement Learning, 1-9 | 169 | 2004 |
| Structured machine learning: the next ten years | TG Dietterich, P Domingos, L Getoor, S Muggleton, P Tadepalli | Machine Learning 73 (1), 3-23 | 163 | 2008 |
| A decision-theoretic model of assistance | A Fern, S Natarajan, K Judah, P Tadepalli | Journal of Artificial Intelligence Research 50, 71-104 | 145 | 2014 |
| Model-based average reward reinforcement learning | P Tadepalli, DK Ok | Artificial Intelligence 100 (1-2), 177-224 | 141 | 1998 |
| Multi-agent inverse reinforcement learning | S Natarajan, G Kunapuli, K Judah, P Tadepalli, K Kersting, J Shavlik | 2010 Ninth International Conference on Machine Learning and Applications … | 137 | 2010 |
| Transfer in variable-reward hierarchical reinforcement learning | N Mehta, S Natarajan, P Tadepalli, A Fern | Machine Learning 73 (3), 289-312 | 136 | 2008 |
| Optimal policies tend to seek power | AM Turner, L Smith, R Shah, A Critch, P Tadepalli | arXiv preprint arXiv:1912.01683 | 131 | 2019 |
| Lower bounding Klondike solitaire with Monte-Carlo planning | R Bjarnason, A Fern, P Tadepalli | Proceedings of the International Conference on Automated Planning and … | 129 | 2009 |
| Interpreting recurrent and attention-based neural models: a case study on natural language inference | R Ghaeini, X Fern, P Tadepalli | Proceedings of the 2018 Conference on Empirical Methods in Natural Language … | 126 | 2018 |
| Using trajectory data to improve Bayesian optimization for reinforcement learning | A Wilson, A Fern, P Tadepalli | The Journal of Machine Learning Research 15 (1), 253-282 | 126 | 2014 |
| Automatic discovery and transfer of MAXQ hierarchies | N Mehta, S Ray, P Tadepalli, T Dietterich | Proceedings of the 25th International Conference on Machine Learning, 648-655 | 119 | 2008 |
| Learning first-order probabilistic models with combining rules | S Natarajan, P Tadepalli, E Altendorf, TG Dietterich, A Fern, A Restificar | Proceedings of the 22nd International Conference on Machine Learning, 609-616 | 110 | 2005 |
| Maximizing the predictive value of production rules | SM Weiss, RS Galen, PV Tadepalli | Artificial Intelligence 45 (1-2), 47-71 | 102 | 1990 |
| Lazy Explanation-Based Learning: A Solution to the Intractable Theory Problem | P Tadepalli | IJCAI, 694-700 | 95 | 1989 |
| Event nugget detection with forward-backward recurrent neural networks | R Ghaeini, X Fern, L Huang, P Tadepalli | Proceedings of the 54th Annual Meeting of the Association for Computational … | 93 | 2016 |
| Conservative agency via attainable utility preservation | AM Turner, D Hadfield-Menell, P Tadepalli | Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 385-391 | 80 | 2020 |