Sheng Zhang
Microsoft Research
Verified email at microsoft.com - Homepage
Title · Cited by · Year
LLaVA-Med: Training a large language-and-vision assistant for biomedicine in one day
C Li, C Wong, S Zhang, N Usuyama, H Liu, J Yang, T Naumann, H Poon, ...
arXiv preprint arXiv:2306.00890, 2023
Cited by 1584 · 2023
BioGPT: generative pre-trained transformer for biomedical text generation and mining
R Luo, L Sun, Y Xia, T Qin, S Zhang, H Poon, TY Liu
Briefings in bioinformatics 23 (6), bbac409, 2022
Cited by 1496 · 2022
A whole-slide foundation model for digital pathology from real-world data
H Xu, N Usuyama, J Bagga, S Zhang, R Rao, T Naumann, C Wong, ...
Nature 630 (8015), 181-188, 2024
Cited by 748 · 2024
BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs
S Zhang, Y Xu, N Usuyama, H Xu, J Bagga, R Tinn, S Preston, R Rao, ...
arXiv preprint arXiv:2303.00915, 2023
Cited by 550 · 2023
Can generalist foundation models outcompete special-purpose tuning? Case study in medicine
H Nori, YT Lee, S Zhang, D Carignan, R Edgar, N Fusi, N King, J Larson, ...
arXiv preprint arXiv:2311.16452, 2023
Cited by 535 · 2023
ReCoRD: Bridging the gap between human and machine commonsense reading comprehension
S Zhang, X Liu, J Liu, J Gao, K Duh, B Van Durme
arXiv preprint arXiv:1810.12885, 2018
Cited by 318 · 2018
Large-scale domain-specific pretraining for biomedical vision-language processing
S Zhang, Y Xu, N Usuyama, J Bagga, R Tinn, S Preston, R Rao, M Wei, ...
arXiv preprint arXiv:2303.00915 2 (3), 6, 2023
Cited by 238 · 2023
UniversalNER: Targeted distillation from large language models for open named entity recognition
W Zhou, S Zhang, Y Gu, M Chen, H Poon
arXiv preprint arXiv:2308.03279, 2023
Cited by 225 · 2023
Deep generalized canonical correlation analysis
A Benton, H Khayrallah, B Gujral, DA Reisinger, S Zhang, R Arora
Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP …, 2019
Cited by 214 · 2019
AMR parsing as sequence-to-graph transduction
S Zhang, X Ma, K Duh, B Van Durme
arXiv preprint arXiv:1905.08704, 2019
Cited by 212 · 2019
Universal decompositional semantics on universal dependencies
AS White, D Reisinger, K Sakaguchi, T Vieira, S Zhang, R Rudinger, ...
Proceedings of the 2016 Conference on Empirical Methods in Natural Language …, 2016
Cited by 208 · 2016
Context-faithful prompting for large language models
W Zhou, S Zhang, H Poon, M Chen
arXiv preprint arXiv:2303.11315, 2023
Cited by 186 · 2023
BiomedCLIP: A multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs. arXiv 2023
S Zhang, Y Xu, N Usuyama, H Xu, J Bagga, R Tinn, S Preston, R Rao, ...
arXiv preprint arXiv:2303.00915, 2023
Cited by 167* · 2023
Ordinal common-sense inference
S Zhang, R Rudinger, K Duh, B Van Durme
Transactions of the Association for Computational Linguistics, 2017
Cited by 145 · 2017
Answering natural language questions via phrasal semantic parsing
K Xu, S Zhang, Y Feng, D Zhao
CCF International Conference on Natural Language Processing and Chinese …, 2014
Cited by 129 · 2014
MuirBench: A comprehensive benchmark for robust multi-image understanding
F Wang, X Fu, JY Huang, Z Li, Q Liu, X Liu, MD Ma, N Xu, W Zhou, ...
arXiv preprint arXiv:2406.09411, 2024
Cited by 121 · 2024
Optimizing bi-encoder for named entity recognition via contrastive learning
S Zhang, H Cheng, J Gao, H Poon
arXiv preprint arXiv:2208.14565, 2022
Cited by 94 · 2022
mDPO: Conditional preference optimization for multimodal large language models
F Wang, W Zhou, JY Huang, N Xu, S Zhang, H Poon, M Chen
arXiv preprint arXiv:2406.11839, 2024
Cited by 93 · 2024
Broad-coverage semantic parsing as transduction
S Zhang, X Ma, K Duh, B Van Durme
arXiv preprint arXiv:1909.02607, 2019
Cited by 86 · 2019
Knowledge-rich self-supervision for biomedical entity linking
S Zhang, H Cheng, S Vashishth, C Wong, J Xiao, X Liu, T Naumann, ...
Findings of the Association for Computational Linguistics: EMNLP 2022, 868-880, 2022
Cited by 60 · 2022
Articles 1–20