
Jamie Hayes
Google DeepMind
Verified email at google.com - Homepage
Title · Cited by · Year
Gemini 2.5: Pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities
G Comanici, E Bieber, M Schaekermann, I Pasupat, N Sachdeva, I Dhillon, ...
arXiv preprint arXiv:2507.06261, 2025
Cited by 1337 · 2025
Extracting training data from diffusion models
N Carlini, J Hayes, M Nasr, M Jagielski, V Sehwag, F Tramer, B Balle, ...
32nd USENIX Security Symposium (USENIX Security 23), 5253-5270, 2023
Cited by 1084 · 2023
LOGAN: Membership inference attacks against generative models
J Hayes, L Melis, G Danezis, E De Cristofaro
arXiv preprint arXiv:1705.07663, 2017
Cited by 1035* · 2017
k-fingerprinting: A robust scalable website fingerprinting technique
J Hayes, G Danezis
25th USENIX Security Symposium (USENIX Security 16), 1187-1203, 2016
Cited by 615 · 2016
Generating steganographic images via adversarial training
J Hayes, G Danezis
Advances in Neural Information Processing Systems 30, 2017
Cited by 446 · 2017
Towards unbounded machine unlearning
M Kurmanji, P Triantafillou, J Hayes, E Triantafillou
Advances in Neural Information Processing Systems 36, 1957-1987, 2023
Cited by 373 · 2023
Local and central differential privacy for robustness and privacy in federated learning
M Naseri, J Hayes, E De Cristofaro
arXiv preprint arXiv:2009.03561, 2020
Cited by 371* · 2020
Unlocking high-accuracy differentially private image classification through scale
S De, L Berrada, J Hayes, SL Smith, B Balle
arXiv preprint arXiv:2204.13650, 2022
Cited by 314 · 2022
The Loopix anonymity system
AM Piotrowska, J Hayes, T Elahi, S Meiser, G Danezis
26th USENIX Security Symposium (USENIX Security 17), 1199-1216, 2017
Cited by 311 · 2017
Reconstructing training data with informed adversaries
B Balle, G Cherubin, J Hayes
2022 IEEE Symposium on Security and Privacy (SP), 1138-1156, 2022
Cited by 250 · 2022
Learning universal adversarial perturbations with generative models
J Hayes, G Danezis
2018 IEEE Security and Privacy Workshops (SPW), 43-49, 2018
Cited by 212 · 2018
Scalable watermarking for identifying large language model outputs
S Dathathri, A See, S Ghaisas, PS Huang, R McAdam, J Welbl, V Bachani, ...
Nature 634 (8035), 818-823, 2024
Cited by 204 · 2024
On visible adversarial perturbations & digital watermarking
J Hayes
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018
Cited by 169 · 2018
Tight auditing of differentially private machine learning
M Nasr, J Hayes, T Steinke, B Balle, F Tramèr, M Jagielski, N Carlini, ...
32nd USENIX Security Symposium (USENIX Security 23), 1631-1648, 2023
Cited by 147 · 2023
Website fingerprinting defenses at the application layer
G Cherubin, J Hayes, M Juárez
Proceedings on Privacy Enhancing Technologies 2017 (2), 168-185, 2017
Cited by 124 · 2017
Differentially private diffusion models generate useful synthetic images
S Ghalebikesabi, L Berrada, S Gowal, I Ktena, R Stanforth, J Hayes, S De, ...
arXiv preprint arXiv:2302.13861, 2023
Cited by 108* · 2023
Contamination attacks and mitigation in multi-party machine learning
J Hayes, O Ohrimenko
Advances in Neural Information Processing Systems 31, 2018
Cited by 106 · 2018
Imagen 3
J Baldridge, J Bauer, M Bhutani, N Brichtova, A Bunner, L Castrejon, ...
arXiv preprint arXiv:2408.07009, 2024
Cited by 90 · 2024
Defeating prompt injections by design
E Debenedetti, I Shumailov, T Fan, J Hayes, N Carlini, D Fabian, C Kern, ...
arXiv preprint arXiv:2503.18813, 2025
Cited by 76 · 2025
A framework for robustness certification of smoothed classifiers using f-divergences
KD Dvijotham, J Hayes, B Balle, Z Kolter, C Qin, A Gyorgy, K Xiao, ...
International Conference on Learning Representations, 2020
Cited by 71 · 2020
Articles 1–20