Sara Hooker
Adaption Labs
Verified email at adaptionlabs.ai
Title · Cited by · Year
A benchmark for interpretability methods in deep neural networks
S Hooker, D Erhan, PJ Kindermans, B Kim
Advances in neural information processing systems 32, 2019
Cited by 1153* · 2019
The state of sparsity in deep neural networks
T Gale, E Elsen, S Hooker
arXiv preprint arXiv:1902.09574, 2019
Cited by 1086 · 2019
The (un)reliability of saliency methods
PJ Kindermans, S Hooker, J Adebayo, M Alber, KT Schütt, S Dähne, ...
Explainable AI: Interpreting, explaining and visualizing deep learning, 267-280, 2019
Cited by 989 · 2019
Toward trustworthy AI development: mechanisms for supporting verifiable claims
M Brundage, S Avin, J Wang, H Belfield, G Krueger, G Hadfield, H Khlaaf, ...
arXiv preprint arXiv:2004.07213, 2020
Cited by 767 · 2020
Back to basics: Revisiting REINFORCE style optimization for learning from human feedback in LLMs
A Ahmadian, C Cremer, M Gallé, M Fadaee, J Kreutzer, O Pietquin, ...
arXiv preprint arXiv:2402.14740, 2024
Cited by 542 · 2024
The hardware lottery
S Hooker
Communications of the ACM 64 (12), 58-65, 2021
Cited by 348 · 2021
Aya model: An instruction finetuned open-access multilingual language model
A Üstün, V Aryabumi, Z Yong, WY Ko, D D’souza, G Onilude, N Bhandari, ...
Proceedings of the 62nd Annual Meeting of the Association for Computational …, 2024
Cited by 329 · 2024
What do compressed deep neural networks forget?
S Hooker, A Courville, G Clark, Y Dauphin, A Frome
WHI ICML 2019, 2019
Cited by 284* · 2019
Moving beyond “algorithmic bias is a data problem”
S Hooker
Patterns 2 (4), 2021
Cited by 278 · 2021
Frontier AI regulation: Managing emerging risks to public safety
M Anderljung, J Barnhart, A Korinek, J Leung, C O'Keefe, J Whittlestone, ...
arXiv preprint arXiv:2307.03718, 2023
Cited by 250 · 2023
Characterising bias in compressed models
S Hooker, N Moorosi, G Clark, S Bengio, E Denton
arXiv preprint arXiv:2010.03058, 2020
Cited by 243 · 2020
Evaluating the social impact of generative AI systems in systems and society
I Solaiman, Z Talat, W Agnew, L Ahmad, D Baker, SL Blodgett, C Chen, ...
arXiv preprint arXiv:2306.05949, 2023
Cited by 208 · 2023
Efficient methods for natural language processing: A survey
M Treviso, JU Lee, T Ji, B Van Aken, Q Cao, MR Ciosici, M Hassid, ...
Transactions of the Association for Computational Linguistics 11, 826-860, 2023
Cited by 190 · 2023
Aya dataset: An open-access collection for multilingual instruction tuning
S Singh, F Vargus, D D’souza, BF Karlsson, A Mahendiran, WY Ko, ...
Proceedings of the 62nd Annual Meeting of the Association for Computational …, 2024
Cited by 181 · 2024
Pushing mixture of experts to the limit: Extremely parameter-efficient MoE for instruction tuning
T Zadouri, A Üstün, A Ahmadian, B Ermiş, A Locatelli, S Hooker
arXiv preprint arXiv:2309.05444, 2023
Cited by 175 · 2023
When less is more: Investigating data pruning for pretraining LLMs at scale
M Marion, A Üstün, L Pozzobon, A Wang, M Fadaee, S Hooker
arXiv preprint arXiv:2309.04564, 2023
Cited by 171 · 2023
Estimating example difficulty using variance of gradients
C Agarwal, D D'souza, S Hooker
IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) 2022, 2020
Cited by 163 · 2020
Aya 23: Open weight releases to further multilingual progress
V Aryabumi, J Dang, D Talupuru, S Dash, D Cairuz, H Lin, B Venkitesh, ...
arXiv preprint arXiv:2405.15032, 2024
Cited by 153 · 2024
Randomness in neural network training: Characterizing the impact of tooling
D Zhuang, X Zhang, S Song, S Hooker
Proceedings of Machine Learning and Systems 4, 316-336, 2022
Cited by 133 · 2022
The data provenance initiative: A large scale audit of dataset licensing & attribution in AI
S Longpre, R Mahari, A Chen, N Obeng-Marnu, D Sileo, W Brannon, ...
Cited by 126* · 2023
Articles 1–20