
Victoria Krakovna
Other names: Viktoriya Krakovna
Research Scientist at Google DeepMind
Verified email at google.com - Homepage
Title · Cited by · Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 6992 · 2023
Gemini 2.5: Pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities
G Comanici, E Bieber, M Schaekermann, I Pasupat, N Sachdeva, I Dhillon, ...
arXiv preprint arXiv:2507.06261, 2025
Cited by 1337 · 2025
AI safety gridworlds
J Leike, M Martic, V Krakovna, PA Ortega, T Everitt, A Lefrancq, L Orseau, ...
arXiv preprint arXiv:1711.09883, 2017
Cited by 430 · 2017
Specification gaming: the flip side of AI ingenuity
V Krakovna, J Uesato, V Mikulik, M Rahtz, T Everitt, R Kumar, Z Kenton, ...
https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI …, 2020
Cited by 218* · 2020
Reinforcement Learning with a Corrupted Reward Channel
T Everitt, V Krakovna, L Orseau, M Hutter, S Legg
IJCAI AI & Autonomy, 2017
Cited by 171 · 2017
Reward tampering problems and solutions in reinforcement learning: A causal influence diagram perspective
T Everitt, M Hutter, R Kumar, V Krakovna
Synthese 198 (Suppl 27), 6435-6467, 2021
Cited by 164 · 2021
Evaluating Frontier Models for Dangerous Capabilities
M Phuong, M Aitchison, E Catt, S Cogan, A Kaskasoli, V Krakovna, ...
arXiv preprint arXiv:2403.13793, 2024
Cited by 151* · 2024
The ethics of advanced AI assistants
I Gabriel, A Manzini, G Keeling, LA Hendricks, V Rieser, H Iqbal, ...
arXiv preprint arXiv:2404.16244, 2024
Cited by 145 · 2024
Goal misgeneralization: Why correct specifications aren't enough for correct goals
R Shah, V Varma, R Kumar, M Phuong, V Krakovna, J Uesato, Z Kenton
arXiv preprint arXiv:2210.01790, 2022
Cited by 138 · 2022
Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models
V Krakovna, F Doshi-Velez
ICML Workshop on Human Interpretability (WHI 2016), arXiv preprint arXiv …, 2016
Cited by 98 · 2016
Penalizing side effects using stepwise relative reachability
V Krakovna, L Orseau, R Kumar, M Martic, S Legg
arXiv preprint arXiv:1806.01186, 2018
Cited by 79* · 2018
Chain of thought monitorability: A new and fragile opportunity for AI safety
T Korbak, M Balesni, E Barnes, Y Bengio, J Benton, J Bloom, M Chen, ...
arXiv preprint arXiv:2507.11473, 2025
Cited by 73 · 2025
Avoiding Side Effects By Considering Future Tasks
V Krakovna, L Orseau, R Ngo, M Martic, S Legg
NeurIPS 2020, arXiv preprint arXiv:2010.07877, 2020
Cited by 67 · 2020
Specification gaming examples in AI
V Krakovna
tinyurl.com/specification-gaming, 2018
Cited by 53* · 2018
An approach to technical AGI safety and security
R Shah, A Irpan, AM Turner, A Wang, A Conmy, D Lindner, ...
arXiv preprint arXiv:2504.01849, 2025
Cited by 49* · 2025
Modeling AGI safety frameworks with causal influence diagrams
T Everitt, R Kumar, V Krakovna, S Legg
arXiv preprint arXiv:1906.08663, 2019
Cited by 30 · 2019
Power-seeking can be probable and predictive for trained agents
V Krakovna, J Kramar
arXiv preprint arXiv:2304.06528, 2023
Cited by 24* · 2023
Measuring and avoiding side effects using relative reachability
V Krakovna, L Orseau, M Martic, S Legg
arXiv preprint arXiv:1806.01186, 2018
Cited by 23 · 2018
Evaluating Frontier Models for Stealth and Situational Awareness
M Phuong, RS Zimmermann, Z Wang, D Lindner, V Krakovna, S Cogan, ...
arXiv preprint arXiv:2505.01420, 2025
Cited by 20 · 2025
Avoiding tampering incentives in deep RL via decoupled approval
J Uesato, R Kumar, V Krakovna, T Everitt, R Ngo, S Legg
arXiv preprint arXiv:2011.08827, 2020
Cited by 17 · 2020
Articles 1–20