Tiansheng Huang
Title · Cited by · Year
An efficiency-boosting client selection scheme for federated learning with fairness guarantee
T Huang, W Lin, W Wu, L He, K Li, AY Zomaya
IEEE Transactions on Parallel and Distributed Systems 32 (7), 1552-1564, 2020
Cited by 343 · 2020
Stochastic client selection for federated learning with volatile clients
T Huang, W Lin, L Shen, K Li, AY Zomaya
IEEE Internet of Things Journal 9 (20), 20055-20070, 2022
Cited by 166 · 2022
An ant colony optimization-based multiobjective service replicas placement strategy for fog computing
T Huang, W Lin, C Xiong, R Pan, J Huang
IEEE Transactions on Cybernetics 51 (11), 5595-5608, 2020
Cited by 120 · 2020
A survey on large language model-based game agents
S Hu, T Huang, G Liu, RR Kompella, F Ilhan, SF Tekin, Y Xu, Z Yahn, ...
arXiv preprint arXiv:2404.02039, 2024
Cited by 116 · 2024
Large Language Model-Powered Smart Contract Vulnerability Detection: New Perspectives
S Hu, T Huang, F İlhan, SF Tekin, L Liu
IEEE Trust, Privacy and Security 2023, 2023
Cited by 112 · 2023
Vaccine: Perturbation-aware alignment for large language models against harmful fine-tuning attack
T Huang, S Hu, L Liu
NeurIPS 2024, 2024
Cited by 103* · 2024
FedSpeed: Larger local interval, less communication round, and higher generalization accuracy
Y Sun, L Shen, T Huang, L Ding, D Tao
ICLR 2023, 2023
Cited by 87 · 2023
Harmful fine-tuning attacks and defenses for large language models: A survey
T Huang, S Hu, F Ilhan, SF Tekin, L Liu
arXiv preprint arXiv:2409.18169, 2024
Cited by 76 · 2024
Lisa: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning Attack
T Huang, S Hu, F Ilhan, SF Tekin, L Liu
NeurIPS 2024, 2024
Cited by 72* · 2024
Booster: Tackling harmful fine-tuning for large language models via attenuating harmful perturbation
T Huang, S Hu, F Ilhan, SF Tekin, L Liu
ICLR 2025, 2024
Cited by 58 · 2024
Achieving personalized federated learning with sparse local models
T Huang, S Liu, L Shen, F He, W Lin, D Tao
arXiv preprint arXiv:2201.11380, 2022
Cited by 48 · 2022
Antidote: Post-fine-tuning safety alignment for large language models against harmful fine-tuning
T Huang, G Bhattacharya, P Joshi, J Kimball, L Liu
ICML 2025, 2024
Cited by 45* · 2024
Safety tax: Safety alignment makes your large reasoning models less reasonable
T Huang, S Hu, F Ilhan, SF Tekin, Z Yahn, Y Xu, L Liu
arXiv preprint arXiv:2503.00555, 2025
Cited by 42 · 2025
PokéLLMon: A human-parity agent for Pokémon battles with large language models
S Hu, T Huang, L Liu
arXiv preprint arXiv:2402.01118, 2024
Cited by 39 · 2024
ZipZap: Efficient training of language models for large-scale fraud detection on blockchain
S Hu, T Huang, KH Chow, W Wei, Y Wu, L Liu
Proceedings of the ACM Web Conference 2024, 2807-2816, 2024
Cited by 38 · 2024
Lockdown: Backdoor Defense for Federated Learning with Isolated Subspace Training
T Huang, S Hu, KH Chow, F Ilhan, S Tekin, L Liu
NeurIPS 2023, 2023
Cited by 37 · 2023
Probe before you talk: Towards black-box defense against backdoor unalignment for large language models
B Yi, T Huang, S Chen, T Li, Z Liu, Z Chu, Y Li
ICLR 2025, 2025
Cited by 35 · 2025
LLM-TOPLA: Efficient LLM Ensemble by Maximising Diversity
SF Tekin, F Ilhan, T Huang, S Hu, L Liu
arXiv preprint arXiv:2410.03953, 2024
Cited by 34* · 2024
Targeted vaccine: Safety alignment for large language models against harmful fine-tuning via layer-wise perturbation
G Liu, W Lin, Q Mu, T Huang, R Mo, Y Tao, L Shen
IEEE Transactions on Information Forensics and Security, 2025
Cited by 26 · 2025
Adaptive deep neural network inference optimization with EENet
F Ilhan, KH Chow, S Hu, T Huang, S Tekin, W Wei, Y Wu, M Lee, ...
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2024
Cited by 25 · 2024