| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| VisualBERT: A simple and performant baseline for vision and language | LH Li, M Yatskar, D Yin, CJ Hsieh, KW Chang | arXiv preprint arXiv:1908.03557 | 2638 | 2019 |
| Men also like shopping: Reducing gender bias amplification using corpus-level constraints | J Zhao, T Wang, M Yatskar, V Ordonez, KW Chang | arXiv preprint arXiv:1707.09457 | 1426 | 2017 |
| Gender bias in coreference resolution: Evaluation and debiasing methods | J Zhao, T Wang, M Yatskar, V Ordonez, KW Chang | arXiv preprint arXiv:1804.06876 | 1410 | 2018 |
| Neural Motifs: Scene graph parsing with global context | R Zellers, M Yatskar, S Thomson, Y Choi | Proceedings of the IEEE conference on computer vision and pattern … | 1367 | 2018 |
| QuAC: Question Answering in Context | E Choi, H He, M Iyyer, M Yatskar, W Yih, Y Choi, P Liang, L Zettlemoyer | arXiv preprint arXiv:1808.07036 | 1087 | 2018 |
| Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations | T Wang, J Zhao, M Yatskar, KW Chang, V Ordonez | Proceedings of the IEEE/CVF international conference on computer vision … | 601 | 2019 |
| Don't take the easy way out: Ensemble based methods for avoiding known dataset biases | C Clark, M Yatskar, L Zettlemoyer | arXiv preprint arXiv:1909.03683 | 599 | 2019 |
| Gender bias in contextualized word embeddings | J Zhao, T Wang, M Yatskar, R Cotterell, V Ordonez, KW Chang | arXiv preprint arXiv:1904.03310 | 568 | 2019 |
| Neural AMR: Sequence-to-sequence models for parsing and generation | I Konstas, S Iyer, M Yatskar, Y Choi, L Zettlemoyer | arXiv preprint arXiv:1704.08381 | 396 | 2017 |
| RoboTHOR: An open simulation-to-real embodied AI platform | M Deitke, W Han, A Herrasti, A Kembhavi, E Kolve, R Mottaghi, J Salvador, ... | Proceedings of the IEEE/CVF conference on computer vision and pattern … | 377 | 2020 |
| Language in a Bottle: Language model guided concept bottlenecks for interpretable image classification | Y Yang, A Panagopoulou, S Zhou, D Jin, C Callison-Burch, M Yatskar | Proceedings of the IEEE/CVF conference on computer vision and pattern … | 371 | 2023 |
| Situation Recognition: Visual Semantic Role Labeling for Image Understanding | M Yatskar, L Zettlemoyer, A Farhadi | Conference on Computer Vision and Pattern Recognition | 344 | 2016 |
| Molmo and PixMo: Open weights and open data for state-of-the-art multimodal models | M Deitke, C Clark, S Lee, R Tripathi, Y Yang, JS Park, M Salehi, ... | arXiv preprint arXiv:2409.17146 | 289 | 2024 |
| For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia | M Yatskar, B Pang, C Danescu-Niculescu-Mizil, L Lee | arXiv preprint arXiv:1008.1986 | 229 | 2010 |
| What does BERT with vision look at? | LH Li, M Yatskar, D Yin, CJ Hsieh, KW Chang | Proceedings of the 58th annual meeting of the association for computational … | 206 | 2020 |
| Holodeck: Language guided generation of 3D embodied AI environments | Y Yang, FY Sun, L Weihs, E VanderBilt, A Herrasti, W Han, J Wu, N Haber, ... | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern … | 205 | 2024 |
| Grounded Situation Recognition | S Pratt, M Yatskar, L Weihs, A Farhadi, A Kembhavi | European Conference on Computer Vision, 314-332 | 141 | 2020 |
| ExpertQA: Expert-curated questions and attributed answers | C Malaviya, S Lee, S Chen, E Sieber, M Yatskar, D Roth | Proceedings of the 2024 Conference of the North American Chapter of the … | 128 | 2024 |
| A qualitative comparison of CoQA, SQuAD 2.0 and QuAC | M Yatskar | Proceedings of the 2019 conference of the North American chapter of the … | 121 | 2019 |
| Visual semantic role labeling for video understanding | A Sadhu, T Gupta, M Yatskar, R Nevatia, A Kembhavi | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern … | 95 | 2021 |