MingYu Yan
Title · Cited by · Year
HyGCN: A GCN accelerator with hybrid architecture
M Yan, L Deng, X Hu, L Liang, Y Feng, X Ye, Z Zhang, D Fan, Y Xie
2020 IEEE International Symposium on High Performance Computer Architecture …, 2020
Cited by 447, 2020
Simple and efficient heterogeneous graph neural network
X Yang, M Yan, S Pan, X Ye, D Fan
Proceedings of the AAAI conference on artificial intelligence 37 (9), 10816 …, 2023
Cited by 288, 2023
Sampling methods for efficient training of graph convolutional networks: A survey
X Liu, M Yan, L Deng, G Li, X Ye, D Fan
IEEE/CAA Journal of Automatica Sinica 9 (2), 205-234, 2021
Cited by 164, 2021
Alleviating irregularity in graph analytics acceleration: A hardware/software co-design approach
M Yan, X Hu, S Li, A Basak, H Li, X Ma, I Akgun, Y Feng, P Gu, L Deng, ...
Proceedings of the 52nd Annual IEEE/ACM International Symposium on …, 2019
Cited by 109*, 2019
Characterizing and understanding GCNs on GPU
M Yan, Z Chen, L Deng, X Ye, Z Zhang, D Fan, Y Xie
IEEE Computer Architecture Letters 19 (1), 22-25, 2020
Cited by 90, 2020
Rubik: A hierarchical architecture for efficient graph neural network training
X Chen, Y Wang, X Xie, X Hu, A Basak, L Liang, M Yan, L Deng, Y Ding, ...
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2021
Cited by 76*, 2021
Survey on graph neural network acceleration: An algorithmic perspective
X Liu, M Yan, L Deng, G Li, X Ye, D Fan, S Pan, Y Xie
Proceedings of the Thirty-First International Joint Conference on Artificial …, 2022
Cited by 70, 2022
A comprehensive survey on distributed training of graph neural networks
H Lin, M Yan, X Ye, D Fan, S Pan, W Chen, Y Xie
Proceedings of the IEEE 111 (12), 1572-1606, 2023
Cited by 68, 2023
fuseGNN: Accelerating graph convolutional neural network training on GPGPU
Z Chen, M Yan, M Zhu, L Deng, G Li, S Li, Y Xie
Proceedings of the 39th International Conference on Computer-Aided Design, 1-9, 2020
Cited by 33, 2020
Characterizing and understanding HGNNs on GPUs
M Yan, M Zou, X Yang, W Li, X Ye, D Fan, Y Xie
IEEE Computer Architecture Letters 21 (2), 69-72, 2022
Cited by 21, 2022
Characterizing and understanding distributed GNN training on GPUs
H Lin, M Yan, X Yang, M Zou, W Li, X Ye, D Fan
IEEE Computer Architecture Letters 21 (1), 21-24, 2022
Cited by 18, 2022
Fast search of the optimal contraction sequence in tensor networks
L Liang, J Xu, L Deng, M Yan, X Hu, Z Zhang, G Li, Y Xie
IEEE Journal of Selected Topics in Signal Processing 15 (3), 574-586, 2021
Cited by 16, 2021
GNNSampler: Bridging the gap between sampling algorithms of GNN and hardware
X Liu, M Yan, S Song, Z Lv, W Li, G Sun, X Ye, D Fan
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2022
Cited by 15, 2022
Multi-node acceleration for large-scale GCNs
G Sun, M Yan, D Wang, H Li, W Li, X Ye, D Fan, Y Xie
IEEE Transactions on Computers 71 (12), 3140-3152, 2022
Cited by 15, 2022
A comprehensive survey on GNN characterization
M Wu, M Yan, W Li, X Ye, D Fan, N Sun, Y Xie
arXiv preprint arXiv:2408.01902, 2024
Cited by 14*, 2024
HiHGNN: Accelerating HGNNs through parallelism and data reusability exploitation
R Xue, D Han, M Yan, M Zou, X Yang, D Wang, W Li, Z Tang, J Kim, X Ye, ...
IEEE Transactions on Parallel and Distributed Systems 35 (7), 1122-1138, 2024
Cited by 13, 2024
Revisiting edge perturbation for graph neural network in graph data augmentation and attack
X Liu, Y Zhang, M Wu, M Yan, K He, W Yan, S Pan, X Ye, D Fan
IEEE Transactions on Knowledge and Data Engineering, 2025
Cited by 11, 2025
A high-accurate multi-objective ensemble exploration framework for design space of CPU microarchitecture
D Wang, M Yan, Y Teng, D Han, X Ye, D Fan
Proceedings of the Great Lakes Symposium on VLSI 2023, 379-383, 2023
Cited by 10, 2023
General spiking neural network framework for the learning trajectory from a noisy mmWave radar
X Liu, M Yan, L Deng, Y Wu, D Han, G Li, X Ye, D Fan
Neuromorphic Computing and Engineering 2 (3), 034013, 2022
Cited by 10, 2022
A high-accurate multi-objective exploration framework for design space of CPU
D Wang, M Yan, X Liu, M Zou, T Liu, W Li, X Ye, D Fan
2023 60th ACM/IEEE Design Automation Conference (DAC), 1-6, 2023
Cited by 9, 2023
Articles 1–20