| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models | A Ettinger | Transactions of the Association for Computational Linguistics 8, 34-48, 2020 | 837 | 2020 |
| Faith and fate: Limits of transformers on compositionality | N Dziri, X Lu, M Sclar, XL Li, L Jiang, BY Lin, S Welleck, P West, ... | Advances in Neural Information Processing Systems 36, 70293-70332, 2023 | 638 | 2023 |
| Wildguard: Open one-stop moderation tools for safety risks, jailbreaks, and refusals of LLMs | S Han, K Rao, A Ettinger, L Jiang, BY Lin, N Lambert, Y Choi, N Dziri | Advances in Neural Information Processing Systems 37, 8093-8131, 2024 | 258 | 2024 |
| Probing for semantic evidence of composition by means of simple classification tasks | A Ettinger, A Elgohary, P Resnik | Proceedings of the 1st workshop on evaluating vector-space representations …, 2016 | 201 | 2016 |
| 2 OLMo 2 Furious | T OLMo, P Walsh, L Soldaini, D Groeneveld, K Lo, S Arora, A Bhagia, ... | arXiv preprint arXiv:2501.00656, 2024 | 142 | 2024 |
| Assessing Composition in Sentence Vector Representations | A Ettinger, A Elgohary, C Phillips, P Resnik | Proceedings of the 27th International Conference on Computational …, 2018 | 107 | 2018 |
| Towards linguistically generalizable NLP systems: A workshop and shared task | A Ettinger, S Rao, H Daumé III, EM Bender | arXiv preprint arXiv:1711.01505, 2017 | 104 | 2017 |
| Assessing phrasal representation and composition in transformers | L Yu, A Ettinger | arXiv preprint arXiv:2010.03763, 2020 | 100 | 2020 |
| Wildteaming at scale: From in-the-wild jailbreaks to (adversarially) safer language models | L Jiang, K Rao, S Han, A Ettinger, F Brahman, S Kumar, N Mireshghallah, X Lu, M Sap, Y Choi, ... | arXiv preprint arXiv:2406.18510, 2024 | 84 | 2024 |
| The role of morphology in phoneme prediction: Evidence from MEG | A Ettinger, T Linzen, A Marantz | Brain and Language 129, 14-23, 2014 | 75 | 2014 |
| Exploring BERT’s sensitivity to lexical cues using tests from semantic priming | K Misra, A Ettinger, J Rayz | Findings of the Association for Computational Linguistics: EMNLP 2020, 4625-4635, 2020 | 71 | 2020 |
| Learning to ignore: Long document coreference with bounded memory neural networks | S Toshniwal, S Wiseman, A Ettinger, K Livescu, K Gimpel | arXiv preprint arXiv:2010.02807, 2020 | 70 | 2020 |
| COMPS: Conceptual minimal pair sentences for testing robust property knowledge and its inheritance in pre-trained language models | K Misra, J Rayz, A Ettinger | Proceedings of the 17th Conference of the European Chapter of the …, 2023 | 64 | 2023 |
| Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words | J Klafka, A Ettinger | arXiv preprint arXiv:2005.01810, 2020 | 62 | 2020 |
| The Generative AI paradox: "What it can create, it may not understand" | P West, X Lu, N Dziri, F Brahman, L Li, JD Hwang, L Jiang, J Fisher, ... | arXiv preprint arXiv:2311.00059, 2023 | 58 | 2023 |
| Wildteaming at scale: From in-the-wild jailbreaks to (adversarially) safer language models | L Jiang, K Rao, S Han, A Ettinger, F Brahman, S Kumar, N Mireshghallah, ... | Advances in Neural Information Processing Systems 37, 47094-47165, 2024 | 52 | 2024 |
| Modeling N400 amplitude using vector space models of word representation | A Ettinger, NH Feldman, P Resnik, C Phillips | Proceedings of the 38th annual conference of the Cognitive Science Society …, 2016 | 52 | 2016 |
| Do language models learn typicality judgments from text? | K Misra, A Ettinger, JT Rayz | arXiv preprint arXiv:2105.02987, 2021 | 44 | 2021 |
| Pragmatic competence of pre-trained language models through the lens of discourse connectives | L Pandia, Y Cong, A Ettinger | arXiv preprint arXiv:2109.12951, 2021 | 39 | 2021 |
| Sorting through the noise: Testing robustness of information processing in pre-trained language models | L Pandia, A Ettinger | arXiv preprint arXiv:2109.12393, 2021 | 39 | 2021 |