GraphOOD-EERM

Code and datasets for the ICLR 2022 paper Handling Distribution Shifts on Graphs: An Invariance Perspective. For a tutorial on this work, see this Chinese blog post; an English version will be released soon.

This work focuses on distribution shifts on graph data, especially in node-level prediction tasks (where samples are inter-dependent through a large graph), and proposes a new approach, Explore-to-Extrapolate Risk Minimization (EERM), for out-of-distribution generalization.

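As a rough guide to what the method optimizes, the minimal PyTorch sketch below (not the repo's implementation; the function names are illustrative and the exact weighting follows the paper's objective) shows the two sides of EERM's training: the GNN minimizes a combination of the mean and variance of its risks over K generated graph views, while the context generators are updated to maximize that variance.

      # Illustrative sketch only -- see main.py for the actual implementation.
      # `risks` holds the GNN's per-environment losses on K generated graph views.
      import torch

      def gnn_objective(risks: torch.Tensor, beta: float) -> torch.Tensor:
          # GNN update: minimize the variance of risks across views plus a
          # beta-weighted mean risk.
          return risks.var() + beta * risks.mean()

      def generator_objective(risks: torch.Tensor) -> torch.Tensor:
          # Context generators: maximize the variance (minimize its negative) to
          # "explore" environments where the current GNN behaves unstably.
          return -risks.var()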

Dependency

Python 3.7, PyTorch 1.9.0, PyTorch Geometric 1.7.2
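
A quick way to confirm your environment matches these versions (this snippet is not part of the repo):

      # Sanity check that the installed versions match the ones listed above.
      import sys
      import torch
      import torch_geometric

      print(sys.version)                   # expect Python 3.7.x
      print(torch.__version__)             # expect 1.9.0
      print(torch_geometric.__version__)   # expect 1.7.2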

Datasets

In our experiments, we consider three types of distribution shifts across six real-world datasets, summarized in the following table.

(Table: summary of the experimental datasets)

You can create a directory ./data and download all the datasets from Google Drive:

  https://drive.google.com/drive/folders/15YgnsfSV_vHYTXe7I4e_hhGMcx0gKrO8?usp=sharing
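
Downloading through the browser works fine; if you prefer to script it, something like the following should work, assuming the third-party gdown package is installed (pip install gdown):

      # Fetch the shared Google Drive folder into ./data (assumes `gdown` is installed).
      import gdown

      url = "https://drive.google.com/drive/folders/15YgnsfSV_vHYTXe7I4e_hhGMcx0gKrO8?usp=sharing"
      gdown.download_folder(url, output="./data", quiet=False)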

Here is a brief introduction to the three types of distribution shifts and the corresponding datasets:

  • Artificial Transformation: We use the Cora and Amazon-Photo datasets and construct spurious node features on top of them. The data construction script is provided in ./synthetic/synthetic.py (see also the sketch after this list). The original datasets can be easily accessed via the PyTorch Geometric package.

  • Cross-Domain Transfer: We use the Twitch-Explicit and Facebook-100 datasets, each of which contains multiple graphs; we use different graphs for training/validation/testing. The original Twitch data is from the Non-Homophily Benchmark. For Facebook-100, we use a subset of the graphs in our experiments; the complete version can be obtained from the Facebook dataset.

  • Temporal Evolution: We use the Elliptic and OGBN-Arxiv datasets. The raw Elliptic data is from the Kaggle dataset; for OGBN-Arxiv, see the OGB website for more details.
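
For the artificial transformation, the actual construction is in ./synthetic/synthetic.py; the sketch below (with hypothetical function and argument names) only illustrates the idea: append node features that are spuriously correlated with the labels through an environment-dependent, randomly initialized GCN, so that the correlation changes across environments.

      # Illustrative sketch of the spurious-feature idea; the real construction is
      # in ./synthetic/synthetic.py and may differ in details.
      import torch
      import torch.nn.functional as F
      from torch_geometric.nn import GCNConv

      def make_spurious_features(y, edge_index, num_classes, env_id, feat_dim=10):
          torch.manual_seed(env_id)                   # environment-specific randomness
          gen = GCNConv(num_classes + 1, feat_dim)    # randomly initialized, never trained
          inp = torch.cat([F.one_hot(y, num_classes).float(),
                           torch.full((y.size(0), 1), float(env_id))], dim=1)
          with torch.no_grad():
              # Features depend on labels and environment id -> spurious correlation.
              return gen(inp, edge_index)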

Running the code

We do not provide trained models since the training cost of each experiment is modest. To run the code, please refer to the bash script run.sh in each folder. For example, the training commands for Cora and Amazon-Photo (with a GCN generating the synthetic data) are:

      # cora
      python main.py --method erm --dataset cora --gnn_gen gcn --gnn gcn --run 20 --lr 0.001 --device 0

      python main.py --method eerm --dataset cora --gnn_gen gcn --gnn gcn --lr 0.005 --K 10 --T 1 --num_sample 1 --beta 1.0 --lr_a 0.001 --run 20 --device 0

      # amazon-photo
      python main.py --method erm --dataset amazon-photo --gnn_gen gcn --gnn gcn --run 20 --lr 0.001 --device 0

      python main.py --method eerm --dataset amazon-photo --gnn_gen gcn --gnn gcn --lr 0.01 --K 5 --T 1 --num_sample 1 --beta 1.0 --lr_a 0.005 --run 20 --device 0
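
If you prefer launching runs from Python rather than run.sh, a small wrapper like the one below (not part of the repo) replays the Cora commands above verbatim:

      # Hypothetical helper that replays the Cora commands above from Python.
      import subprocess

      COMMANDS = [
          "python main.py --method erm --dataset cora --gnn_gen gcn --gnn gcn"
          " --run 20 --lr 0.001 --device 0",
          "python main.py --method eerm --dataset cora --gnn_gen gcn --gnn gcn"
          " --lr 0.005 --K 10 --T 1 --num_sample 1 --beta 1.0 --lr_a 0.001"
          " --run 20 --device 0",
      ]

      for cmd in COMMANDS:
          subprocess.run(cmd.split(), check=True)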

More information will be added over time. Feel free to contact me at [email protected] with any questions.

If you find the code and datasets useful, please cite our paper:

      @inproceedings{wu2022eerm,
        title     = {Handling Distribution Shifts on Graphs: An Invariance Perspective},
        author    = {Qitian Wu and Hengrui Zhang and Junchi Yan and David Wipf},
        booktitle = {International Conference on Learning Representations (ICLR)},
        year      = {2022}
      }
