| Title | Authors | Venue | Citations | Year |
|---|---|---|---|---|
| Local Model Poisoning Attacks to Byzantine-Robust Federated Learning | M Fang, X Cao, J Jia, NZ Gong | USENIX Security Symposium | 1930 | 2020 |
| MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples | J Jia, A Salem, M Backes, Y Zhang, NZ Gong | ACM Conference on Computer and Communications Security (CCS) | 552 | 2019 |
| FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients | Z Zhang, X Cao, J Jia, NZ Gong | ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) | 413 | 2022 |
| Formalizing and Benchmarking Prompt Injection Attacks and Defenses | Y Liu, Y Jia, R Geng, J Jia, NZ Gong | USENIX Security Symposium | 335 | 2024 |
| Backdoor Attacks to Graph Neural Networks | Z Zhang, J Jia, B Wang, NZ Gong | ACM Symposium on Access Control Models and Technologies (SACMAT) | 321 | 2021 |
| PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models | W Zou, R Geng, B Wang, J Jia | USENIX Security Symposium | 293 | 2025 |
| BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | J Jia, Y Liu, NZ Gong | IEEE Symposium on Security and Privacy (IEEE S&P) | 272 | 2022 |
| Stealing Links from Graph Neural Networks | X He, J Jia, M Backes, NZ Gong, Y Zhang | USENIX Security Symposium | 258 | 2021 |
| AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning | J Jia, NZ Gong | USENIX Security Symposium | 254 | 2018 |
| IPGuard: Protecting the Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary | X Cao, J Jia, NZ Gong | ACM ASIA Conference on Computer and Communications Security (ASIACCS) | 234 | 2021 |
| SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding | Z Xu, F Jiang, L Niu, J Jia, BY Lin, R Poovendran | Annual Meeting of the Association for Computational Linguistics (ACL) | 215 | 2024 |
| Provably Secure Federated Learning against Malicious Clients | X Cao, J Jia, NZ Gong | AAAI Conference on Artificial Intelligence (AAAI) | 197 | 2021 |
| Random Walk Based Fake Account Detection in Online Social Networks | J Jia, B Wang, NZ Gong | IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) | 174 | 2017 |
| Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks | J Jia, X Cao, NZ Gong | AAAI Conference on Artificial Intelligence (AAAI) | 160 | 2021 |
| FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information | X Cao, J Jia, Z Zhang, NZ Gong | IEEE Symposium on Security and Privacy (IEEE S&P) | 149 | 2023 |
| Data Poisoning Attacks to Local Differential Privacy Protocols | X Cao, J Jia, NZ Gong | USENIX Security Symposium | 139 | 2021 |
| On Certifying Robustness against Backdoor Attacks via Randomized Smoothing | B Wang, X Cao, J Jia, NZ Gong | CVPR Workshop on Adversarial Machine Learning in Computer Vision | 138 | 2020 |
| EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning | H Liu, J Jia, W Qu, NZ Gong | ACM Conference on Computer and Communications Security (CCS) | 130 | 2021 |
| AttriInfer: Inferring User Attributes in Online Social Networks Using Markov Random Fields | J Jia, B Wang, L Zhang, NZ Gong | International World Wide Web Conference (WWW) | 129 | 2017 |
| Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing | J Jia, X Cao, B Wang, NZ Gong | International Conference on Learning Representations (ICLR) | 120 | 2020 |