| TinyBERT: Distilling BERT for Natural Language Understanding X Jiao, Y Yin, L Shang, X Jiang, X Chen, L Li, F Wang, Q Liu Findings of EMNLP 2020 (most influential paper of EMNLP 2020), 2019 | 2650 | 2019 |
| Unsupervised word and dependency path embeddings for aspect term extraction Y Yin, F Wei, L Dong, K Xu, M Zhang, M Zhou IJCAI 2016, 2016 | 266 | 2016 |
| TernaryBERT: Distillation-aware Ultra-low Bit BERT W Zhang, L Hou, Y Yin, L Shang, X Chen, X Jiang, Q Liu EMNLP 2020, 2020 | 255 | 2020 |
| Generate & Rank: A Multi-task Framework for Math Word Problems J Shen, Y Yin, L Li, L Shang, X Jiang, M Zhang, Q Liu EMNLP 2021 findings, 2021 | 161 | 2021 |
| bert2BERT: Towards Reusable Pretrained Language Models C Chen, Y Yin, L Shang, X Jiang, Y Qin, F Wang, Z Wang, X Chen, Z Liu, ... ACL 2022, 2021 | 108 | 2021 |
| Document-level multi-aspect sentiment classification as machine comprehension Y Yin, Y Song, M Zhang EMNLP 2017, 2044-2054, 2017 | 96 | 2017 |
| AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models Y Yin, C Chen, L Shang, X Jiang, X Chen, Q Liu ACL 2021, 2021 | 64 | 2021 |
| DT-Solver: Automated theorem proving with dynamic-tree sampling guided by proof-level value function H Wang, Y Yuan, Z Liu, J Shen, Y Yin, J Xiong, E Xie, H Shi, Y Li, L Li, ... ACL 2023, 12632-12646, 2023 | 63 | 2023 |
| TinyBERT: Distilling BERT for natural language understanding X Jiao, Y Yin, L Shang, X Jiang, X Chen, L Li, F Wang, Q Liu arXiv preprint arXiv:1909.10351, 2019 | 59 | 2019 |
| FIMO: A challenge formal dataset for automated theorem proving C Liu, J Shen, H Xin, Z Liu, Y Yuan, H Wang, W Ju, C Zheng, Y Yin, L Li, ... arXiv preprint arXiv:2309.04295, 2023 | 47 | 2023 |
| DQ-LoRe: Dual queries with low rank approximation re-ranking for in-context learning J Xiong, Z Li, C Zheng, Z Guo, Y Yin, E Xie, Z Yang, Q Cao, H Wang, ... ICLR 2024, 2023 | 38 | 2023 |
| Dialog State Tracking with Reinforced Data Augmentation Y Yin, L Shang, X Jiang, X Chen, Q Liu AAAI 2020, 2019 | 33 | 2019 |
| PoD: Positional Dependency-Based Word Embedding for Aspect Term Extraction Y Yin, C Wang, M Zhang COLING 2020, 2019 | 30 | 2019 |
| NNEMBs at SemEval-2017 Task 4: Neural Twitter sentiment classification: a simple ensemble method with different embeddings Y Yin, Y Song, M Zhang Proceedings of the 11th International Workshop on Semantic Evaluation …, 2017 | 26 | 2017 |
| Socialized word embeddings Z Zeng, Y Yin, Y Song, M Zhang IJCAI, 3915-3921, 2017 | 23 | 2017 |
| TRIGO: Benchmarking Formal Mathematical Proof Reduction for Generative Language Models J Xiong, J Shen, Y Yuan, H Wang, Y Yin, Z Liu, L Li, Z Guo, Q Cao, ... EMNLP 2023, 2023 | 22 | 2023 |
| One Cannot Stand for Everyone! Leveraging Multiple User Simulators to train Task-oriented Dialogue Systems Y Liu, X Jiang, Y Yin, Y Wang, F Mi, Q Liu, X Wan, B Wang ACL 2023, 1-21, 2023 | 21 | 2023 |
| Reusing Pretrained Models by Multi-linear Operators for Efficient Training Y Pan, Y Yuan, Y Yin, Z Xu, L Shang, X Jiang, Q Liu NeurIPS 2023, 2023 | 19 | 2023 |
| AutoConv: Automatically Generating Information-seeking Conversations with Large Language Models S Li, C Yang, Y Yin, X Zhu, Z Cheng, L Shang, X Jiang, Q Liu, Y Yang ACL 2023, 2023 | 19 | 2023 |
| G-MAP: General Memory-Augmented Pre-trained Language Model for Domain Tasks Z Wan, Y Yin, W Zhang, J Shi, L Shang, G Chen, X Jiang, Q Liu EMNLP 2022, 2022 | 19 | 2022 |