Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication [ICML 2023]

Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, and Zhangyang Wang


New Update

Our new arXiv release includes a comparison with a concurrent work (Jiong et al., 2023, https://arxiv.org/pdf/2305.09887.pdf), which independently presents similar ideas, alongside comparisons with other state-of-the-art distributed GNN training works.

More specifically, we summarize our key differences from Jiong et al. as follows:

  1. Our work optimizes model performance, with or without distributed data parallelism, by interpolating the weights of candidate (soup ingredient) GNNs. In contrast, Jiong et al. (2023) improve performance for data-parallel GNN training via model averaging over randomized partitions of the graph.
  2. Our candidate models are interpolated only after training, which preserves the diversity required for a good soup, whereas the weights in Jiong et al. are periodically averaged during training at a fixed time interval; a minimal sketch of this post-training interpolation is given after this list.
  3. Our soup ingredients are trained by sampling different clusters per epoch over the full graph, whereas the individual trainers in Jiong et al. use localized subgraphs assigned by randomized node/super-node partitions.

For a more detailed discussion, please refer to Section 4.1 of our updated arXiv version (https://arxiv.org/abs/2306.10466).
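The core mechanism is simple: each soup ingredient is a GNN trained independently, with no intermediate communication, and the ingredients are merged once at the end by interpolating their weights. Below is a minimal sketch of such post-training interpolation, assuming PyTorch-style state dicts; the checkpoint paths, the `interpolate_soup` helper, and the uniform weighting are illustrative assumptions, not the exact interface of this repository.

```python
# Minimal sketch of post-training weight interpolation ("souping"),
# assuming PyTorch-style state_dicts saved by independently trained
# candidate GNNs. File names and uniform weights are hypothetical.
import torch

def interpolate_soup(state_dicts, coeffs=None):
    """Linearly interpolate a list of architecturally identical state_dicts."""
    if coeffs is None:
        # Uniform soup: equal weight for every ingredient.
        coeffs = [1.0 / len(state_dicts)] * len(state_dicts)
    soup = {}
    for key in state_dicts[0]:
        soup[key] = sum(c * sd[key].float() for c, sd in zip(coeffs, state_dicts))
    return soup

# Hypothetical usage: each ingredient was trained in isolation,
# then merged exactly once after training.
paths = ["ingredient_0.pt", "ingredient_1.pt", "ingredient_2.pt"]
ingredients = [torch.load(p, map_location="cpu") for p in paths]
soup_state_dict = interpolate_soup(ingredients)
# model.load_state_dict(soup_state_dict)  # evaluate the souped GNN
```

Because the merge happens only once, after all ingredients have finished training, the trainers never need to exchange gradients or parameters while they run.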

Abstract

(The abstract is provided as an image in the repository; see the arXiv version linked above for the full text.)
