Tao Fan (范涛)
Tech Leader @ WeBank
Verified email at webank.com
Title · Cited by · Year
Secureboost: A lossless federated learning framework
K Cheng, T Fan, Y Jin, Y Liu, T Chen, D Papadopoulos, Q Yang
IEEE intelligent systems 36 (6), 87-98, 2021
Cited by 911 · 2021
Fate: An industrial grade platform for collaborative learning with data protection
Y Liu, T Fan, T Chen, Q Xu, Q Yang
Journal of Machine Learning Research 22 (226), 1-6, 2021
Cited by 358 · 2021
Fate-llm: A industrial grade federated learning framework for large language models
T Fan, Y Kang, G Ma, W Chen, W Wei, L Fan, Q Yang
arXiv preprint arXiv:2310.10049, 2023
Cited by 165 · 2023
A quasi-newton method based vertical federated learning framework for logistic regression
K Yang, T Fan, T Chen, Y Shi, Q Yang
FL-NeurIPS 19, 2019
Cited by 133 · 2019
SecureBoost: Large Scale and High-Performance Vertical Federated Gradient Boosting Decision Tree
T Fan, W Chen, G Ma, Y Kang, L Fan, Q Yang
Pacific-Asia Conference on Knowledge Discovery and Data Mining, 237-249, 2024
Cited by 72* · 2024
Privacy-preserving federated adversarial domain adaptation over feature groups for interpretability
Y Kang, Y He, J Luo, T Fan, Y Liu, Q Yang
IEEE Transactions on Big Data 10 (6), 879-890, 2022
Cited by 60 · 2022
Grounding foundation models through federated transfer learning: A general framework
Y Kang, T Fan, H Gu, X Zhang, L Fan, Q Yang
ACM Transactions on Intelligent Systems and Technology 16 (4), 1-54, 2025
Cited by 40 · 2025
Ten challenging problems in federated foundation models
T Fan, H Gu, X Cao, CS Chan, Q Chen, Y Chen, Y Feng, Y Gu, J Geng, ...
IEEE Transactions on Knowledge and Data Engineering, 2025
Cited by 40 · 2025
Fedmkt: Federated mutual knowledge transfer for large and small language models
T Fan, G Ma, Y Kang, H Gu, Y Song, L Fan, K Chen, Q Yang
Proceedings of the 31st International Conference on Computational …, 2025
Cited by 28 · 2025
Accelerating vertical federated learning
D Cai, T Fan, Y Kang, L Fan, M Xu, S Wang, Q Yang
IEEE Transactions on Big Data 10 (6), 752-760, 2022
Cited by 22 · 2022
Unveiling the vulnerability of private fine-tuning in split-based frameworks for large language models: A bidirectionally enhanced attack
G Chen, Z Qin, M Yang, Y Zhou, T Fan, T Du, Z Xu
Proceedings of the 2024 on ACM SIGSAC Conference on Computer and …, 2024
Cited by 8 · 2024
Fedcollm: A parameter-efficient federated co-tuning framework for large and small language models
T Fan, Y Kang, G Ma, L Fan, K Chen, Q Yang
arXiv preprint arXiv:2411.11707, 2024
Cited by 8 · 2024
PDSS: A Privacy-Preserving Framework for Step-by-Step Distillation of Large Language Models
T Fan, Y Kang, W Chen, H Gu, Y Song, L Fan, K Chen, Q Yang
arXiv preprint arXiv:2406.12403, 2024
Cited by 8 · 2024
Towards multi-agent reasoning systems for collaborative expertise delegation: An exploratory design study
B Xu, C Li, W Wang, W Fan, T Zheng, H Shi, T Fan, Y Song, Q Yang
arXiv preprint arXiv:2505.07313, 2025
Cited by 5 · 2025
PPC-GPT: federated task-specific compression of large language models via pruning and chain-of-thought distillation
T Fan, G Ma, Y Song, L Fan, Q Yang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language …, 2025
Cited by 3 · 2025
INFERENCEDYNAMICS: Efficient Routing Across LLMs through Structured Capability and Knowledge Profiling
H Shi, T Zheng, W Wang, B Xu, C Li, C Chan, T Fan, Y Song, Q Yang
arXiv preprint arXiv:2505.16303, 2025
Cited by 3 · 2025
Federated-learning based method of acquiring model parameters, system and readable storage medium
T Fan, MA Guoqiang, T Chen, Q Yang, Y Liu
US Patent App. 17/231,314, 2021
Cited by 3 · 2021
Navigation in Complex Networks Using Random Walk Theory and Principal Component Analysis
Y Gao, T Fan, S Cai
2019 16th International Computer Conference on Wavelet Active Media …, 2019
Cited by 2 · 2019
H2Tune: Federated Foundation Model Fine-Tuning with Hybrid Heterogeneity
W Guo, S Lu, Y Tong, Z Hu, F Zhuang, X Zhang, T Fan, J Dong
ECAI, 2025, 2025
Cited by 1 · 2025
Text-to-TrajVis: Enabling Trajectory Data Visualizations from Natural Language Questions
T Bai, H Ying, K Suo, J Wei, T Fan, Y Song
arXiv preprint arXiv:2504.16358, 2025
Cited by 1 · 2025