| Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems S Kambhampati, S Sreedharan, M Verma, Y Zha, L Guan AAAI 2022 Blue Sky Paper, 2021 | 72 | 2021 |
| Explicable planning as minimizing distance from expected behavior A Kulkarni, Y Zha, T Chakraborti, SG Vadlamudi, Y Zhang, ... AAMAS Conference proceedings, 2019 | 57 | 2019 |
| Explicability as minimizing distance from expected behavior A Kulkarni, Y Zha, T Chakraborti, SG Vadlamudi, Y Zhang, ... arXiv preprint arXiv:1611.05497, 2016 | 48 | 2016 |
| Discovering underlying plans based on shallow models HH Zhuo, Y Zha, S Kambhampati, X Tian ACM Transactions on Intelligent Systems and Technology (TIST) 11 (2), 1-30, 2020 | 31 | 2020 |
| NatSGD: A Dataset with Speech, Gestures, and Demonstrations for Robot Learning in Natural Human-Robot Interaction S Shrestha, Y Zha, G Gao, C Fermuller, Y Aloimonos ACM/IEEE International Conference on Human-Robot Interaction, 2023 | 21 | 2023 |
| " Task Success" is not Enough: Investigating the Use of Video-Language Models as Behavior Critics for Catching Undesirable Agent Behaviors L Guan, Y Zhou, D Liu, Y Zha, HB Amor, S Kambhampati Conference on Language Modeling, 2024 | 20 | 2024 |
| Contrastively Learning Visual Attention as Affordance Cues from Demonstrations for Robotic Grasping Y Zha, S Bhambri, L Guan The IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2021 | 11 | 2021 |
| Recognizing plans by learning embeddings from observed action distributions Y Zha, Y Li, S Gopalakrishnan, B Li, S Kambhampati AAMAS Conference proceedings, 2017 | 11 | 2017 |
| Learning from ambiguous demonstrations with self-explanation guided reinforcement learning Y Zha, L Guan, S Kambhampati AAAI-24 Main Track & AAAI-22 Workshop on Reinforcement Learning in Games, 2021 | 9 | 2021 |
| AAM-SEALS: Developing Aerial-Aquatic Manipulators in SEa, Air, and Land Simulator WW Yang, K Kona, Y Jain, T Atzili, A Bhamidipati, X Lin, Y Zha arXiv preprint arXiv:2412.19744; ICRA 2025 Amphibious Robotics Workshop, 2024 | 3 | 2024 |
| NatSGLD: A Dataset with Speech, Gesture, Logic, and Demonstration for Robot Learning in Natural Human-Robot Interaction S Shrestha, Y Zha, S Banagiri, G Gao, Y Aloimonos, C Fermüller 20th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2025 | 2 | 2025 |
| Plan-recognition-driven attention modeling for visual recognition Y Zha, Y Li, T Yu, S Kambhampati, B Li AAAI 2019 Workshop on Plan, Activity, and Intent Recognition (PAIR), 2018 | 1 | 2018 |
| Perceiving, Planning, Acting, and Self-Explaining: A Cognitive Quartet with Four Neural Networks Y Zha Arizona State University, 2022 | | 2022 |
| Discovering Underlying Plans Based on Shallow Models HH Zhuo, Y Zha, S Kambhampati arXiv preprint arXiv:1803.02208, 2018 | | 2018 |