| Publication | Cited by | Year |
| --- | --- | --- |
| PAC-Bayesian soft actor-critic learning. B. Tasdighi, A. Akgül, M. Haussmann, K. K. Brink, M. Kandemir. arXiv preprint arXiv:2301.12776. | 11 | 2023 |
| Exploring pessimism and optimism dynamics in deep reinforcement learning. B. Tasdighi, N. Werge, Y. S. Wu, M. Kandemir. arXiv preprint arXiv:2406.03890. | 2 | 2024 |
| Probabilistic Actor-Critic: Learning to Explore with PAC-Bayes Uncertainty. B. Tasdighi, N. Werge, Y. S. Wu, M. Kandemir. | 2 | 2024 |
| Deep exploration with PAC-Bayes. B. Tasdighi, M. Haussmann, N. Werge, Y. S. Wu, M. Kandemir. arXiv preprint arXiv:2402.03055. | 1 | 2024 |
| Uncertainty Aware Deep Reinforcement Learning Agents. B. Tasdighi. | | 2025 |
| Directional Ensemble Aggregation for Actor-Critics. N. Werge, Y. S. Wu, B. Tasdighi, M. Kandemir. arXiv preprint arXiv:2507.23501. | | 2025 |
| ObjectRL: An Object-Oriented Reinforcement Learning Codebase. G. Baykal, A. Akgül, M. Haussmann, B. Tasdighi, N. Werge, Y. S. Wu, et al. arXiv preprint arXiv:2507.03487. | | 2025 |
| Deep Actor-Critics with Tight Risk Certificates. B. Tasdighi, M. Haussmann, Y. S. Wu, A. R. Masegosa, M. Kandemir. arXiv preprint arXiv:2505.19682. | | 2025 |
| Improving Actor-Critic Training with Steerable Action-Value Approximation Errors. B. Tasdighi, N. Werge, Y. S. Wu, M. Kandemir. ECAI 2025, pp. 2770–2777. | | 2025 |