| Title | Authors | Venue | Citations | Year |
|---|---|---|---|---|
| Wat zei je? Detecting Out-of-Distribution Translations with Variational Transformers | TZ Xiao, AN Gomez, Y Gal | NeurIPS 2019 Workshop on Bayesian Deep Learning (arXiv preprint arXiv:2006 …) | 42* | 2019 |
| Can Large Language Models Understand Symbolic Graphics Programs? | Z Qiu, W Liu, H Feng, Z Liu, TZ Xiao, KM Collins, JB Tenenbaum, A Weller, ... | ICLR 2025 (arXiv preprint arXiv:2408.08313) | 32 | 2024 |
| Verbalized Machine Learning: Revisiting Machine Learning with Language Models | TZ Xiao, R Bamler, B Schölkopf, W Liu | TMLR 01/2025 (arXiv preprint arXiv:2406.04344) | 23 | 2024 |
| Improving VAE-based Representation Learning | M Zhang, TZ Xiao, B Paige, D Barber | arXiv preprint arXiv:2205.14539 | 17 | 2022 |
| Efficient Diversity-Preserving Diffusion Alignment via Gradient-Informed GFlowNets | Z Liu, TZ Xiao, W Liu, Y Bengio, D Zhang | ICLR 2025 (arXiv preprint arXiv:2412.07775) | 16 | 2024 |
| Iterative Teaching by Data Hallucination | Z Qiu, W Liu, TZ Xiao, Z Liu, U Bhatt, Y Luo, A Weller, B Schölkopf | AISTATS 2023 (arXiv preprint arXiv:2210.17467) | 14 | 2022 |
| Out-of-Distribution Detection with Class Ratio Estimation | M Zhang, A Zhang, TZ Xiao, Y Sun, S McDonagh | NeurIPS 2022 Workshop on Machine Learning Safety (arXiv preprint arXiv:2206 …) | 11* | 2022 |
| You Need Only Uncertain Answers: Data Efficient Multilingual Question Answering | Z Lyu, D Duolikun, B Dai, Y Yao, P Minervini, TZ Xiao, Y Gal | ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning | 11 | 2020 |
| A Partial Reconfiguration Controller for Altera Stratix V FPGAs | Z Xiao, D Koch, M Lujan | International Conference on Field Programmable Logic and Applications (FPL …) | 10 | 2016 |
| Generating Symbolic World Models via Test-time Scaling of Large Language Models | Z Yu, Y Yuan, TZ Xiao, FF Xia, J Fu, G Zhang, G Lin, W Liu | TMLR 05/2025 (arXiv preprint arXiv:2502.04728) | 9 | 2025 |
| Improving Probabilistic Diffusion Models With Optimal Covariance Matching | Z Ou, M Zhang, A Zhang, TZ Xiao, Y Li, D Barber | ICLR 2025 (arXiv preprint arXiv:2406.10808) | 9* | 2024 |
| Your Finetuned Large Language Model is Already a Powerful Out-of-distribution Detector | A Zhang, TZ Xiao, W Liu, R Bamler, D Wischik | AISTATS 2025 (arXiv preprint arXiv:2404.08679) | 9 | 2024 |
| Trading Information between Latents in Hierarchical Variational Autoencoders | TZ Xiao, R Bamler | ICLR 2023 (arXiv preprint arXiv:2302.04855) | 8 | 2023 |
| Locally-Contextual Nonlinear CRFs for Sequence Labeling | H Shah, T Xiao, D Barber | arXiv preprint arXiv:2103.16210 | 6 | 2021 |
| A Compact Representation for Bayesian Neural Networks By Removing Permutation Symmetry | TZ Xiao, W Liu, R Bamler | NeurIPS 2023 Workshop on Unifying Representations (arXiv preprint arXiv:2401 …) | 5 | 2023 |
| The SVHN Dataset Is Deceptive for Probabilistic Generative Models Due to a Distribution Mismatch | TZ Xiao, J Zenn, R Bamler | NeurIPS 2023 Workshop on Distribution Shifts (arXiv preprint arXiv:2312.02168) | 4 | 2023 |
| Large Language Models Are Zero-Shot Problem Solvers — Just Like Modern Computers | TZ Xiao, W Liu, R Bamler | Harvard Data Science Review, 7(3) | 3 | 2025 |
| Reparameterized LLM Training via Orthogonal Equivalence Transformation | Z Qiu, S Buchholz, TZ Xiao, M Dax, B Schölkopf, W Liu | NeurIPS 2025 (arXiv preprint arXiv:2506.08001) | 3 | 2025 |
| A Note on Generalization in Variational Autoencoders: How Effective Is Synthetic Data & Overparameterization? | TZ Xiao, J Zenn, R Bamler | TMLR 12/2024 (arXiv preprint arXiv:2310.19653) | 3* | 2024 |
| Flipping Against All Odds: Reducing LLM Coin Flip Bias via Verbalized Rejection Sampling | TZ Xiao, J Zenn, Z Liu, W Liu, R Bamler, B Schölkopf | arXiv preprint arXiv:2506.09998 | 2 | 2025 |