| Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation H Xu, A Sharaf, Y Chen, W Tan, L Shen, B Van Durme, K Murray, YJ Kim ICML 2024, 2024 | 405 | 2024 |
| A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models H Xu, YJ Kim, A Sharaf, HH Awadalla ICLR 2024, 2023 | 267 | 2023 |
| Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs A Abouelenin, A Ashfaq, A Atkinson, H Awadalla, N Bach, J Bao, ... arXiv preprint arXiv:2503.01743, 2025 | 219 | 2025 |
| BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation H Xu, B Van Durme, K Murray EMNLP 2021, 2021 | 114 | 2021 |
| The Language Barrier: Dissecting Safety Challenges of LLMs in Multilingual Contexts L Shen, W Tan, S Chen, Y Chen, J Zhang, H Xu, B Zheng, P Koehn, ... ACL 2024 Findings, 2024 | 86 | 2024 |
| Gradual Fine-Tuning for Low-Resource Domain Adaptation H Xu, S Ebner, M Yarmohammadi, AS White, B Van Durme, K Murray Adapt-NLP, EACL 2021, 2021 | 55 | 2021 |
| Everything Is All It Takes: A Multipronged Strategy for Zero-Shot Cross-Lingual Information Extraction M Yarmohammadi, S Wu, M Marone, H Xu, S Ebner, G Qin, Y Chen, ... EMNLP 2021, 2021 | 41 | 2021 |
| X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality Translation at Scale H Xu, K Murray, P Koehn, H Hoang, A Eriguchi, H Khayrallah ICLR 2025 Spotlight, 2024 | 26 | 2024 |
| Phi-4-Mini-Reasoning: Exploring the Limits of Small Reasoning Language Models in Math H Xu, B Peng, H Awadalla, D Chen, YC Chen, M Gao, YJ Kim, Y Li, L Ren, ... arXiv preprint arXiv:2504.21233, 2025 | 24 | 2025 |
| Cross-Lingual BERT Contextual Embedding Space Mapping with Isotropic and Isometric Conditions H Xu, P Koehn arXiv preprint arXiv:2107.09186, 2021 | 16 | 2021 |
| Adapters for Altering LLM Vocabularies: What Languages Benefit the Most? HJ Han, A Eriguchi, H Xu, H Hoang, M Carpuat, H Khayrallah ICLR 2025, 2024 | 10 | 2024 |
| The Importance of Being Parameters: An Intra-Distillation Method for Serious Gains H Xu, P Koehn, K Murray EMNLP 2022, 2022 | 10 | 2022 |
| VAE-based Text Style Transfer with Pivot Words Enhancement Learning H Xu, S Lu, Z Sun, C Ma, C Guo The Eighteenth International Conference on Natural Language Processing, 2021 | 9 | 2021 |
| Por Qué Não Utiliser Alla Språk? Mixed Training with Gradient Optimization in Few-Shot Cross-Lingual Transfer H Xu, K Murray NAACL 2022 Findings, 2022 | 8 | 2022 |
| Error Norm Truncation: Robust Training in the Presence of Data Noise for Text Generation Models T Li, H Xu, P Koehn, D Khashabi, K Murray ICLR 2024 Spotlight, 2023 | 6 | 2023 |
| Condensing Multilingual Knowledge with Lightweight Language-Specific Modules H Xu, W Tan, SS Li, Y Chen, B Van Durme, P Koehn, K Murray EMNLP 2023, 2023 | 6 | 2023 |
| Zero-Shot Cross-Lingual Dependency Parsing through Contextual Embedding Transformation H Xu, P Koehn Adapt-NLP, EACL 2021, 2021 | 6 | 2021 |
| Decoder-Hybrid-Decoder Architecture for Efficient Reasoning with Long Generation L Ren, C Chen, H Xu, YJ Kim, A Atkinson, Z Zhan, J Sun, B Peng, L Liu, ... NeurIPS 2025, 2025 | 5 | 2025 |
| Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation H Xu, A Sharaf, Y Chen, W Tan, L Shen, B Van Durme, YJ Kim arXiv preprint arXiv:2401.08417, 2024 | 5 | 2024 |
| Narrowing the Gap between Zero- and Few-Shot Machine Translation by Matching Styles W Tan, H Xu, L Shen, SS Li, K Murray, P Koehn, B Van Durme, Y Chen NAACL 2024 Findings, 2023 | 5 | 2023 |