Paper: Recurrent Event Network: Global Structure Inference over Temporal Knowledge Graph
TL;DR: We propose an autoregressive model to infer graph structures at unobserved times on temporal knowledge graphs (extrapolation problem).
This repository contains the implementation of the RE-Net architectures described in the paper.
Modeling dynamically-evolving, multi-relational graph data has received a surge of interest with the rapid growth of heterogeneous event data. However, predicting future events on such data requires global structure inference over time and the ability to integrate temporal and structural information, which are not yet well understood. We present Recurrent Event Network (RE-Net), a novel autoregressive architecture for modeling temporal sequences of multi-relational graphs (e.g., temporal knowledge graphs), which can perform sequential, global structure inference over future time stamps to predict new events. RE-Net employs a recurrent event encoder to model the temporally conditioned joint probability distribution of the event sequences, and equips the event encoder with a neighborhood aggregator for modeling the concurrent events within a time window associated with each entity. We apply teacher forcing for model training over historical data, and infer graph sequences over future time stamps by sampling from the learned joint distribution in a sequential manner.
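Schematically, RE-Net is autoregressive: events at each time step are scored conditioned on a recurrent summary of the preceding graph snapshots. The toy module below is a minimal sketch of that recurrent-encoder idea only, not the RGCN-based architecture implemented in this repository; all class and argument names are illustrative.

```python
# Toy sketch of a recurrent event encoder: a GRU summarizes aggregated
# neighborhood embeddings from a window of past time steps, and candidate
# object entities are scored for a (subject, relation) query at the next step.
import torch
import torch.nn as nn

class ToyRecurrentEventEncoder(nn.Module):
    def __init__(self, num_entities, num_relations, hidden=200):
        super().__init__()
        self.ent = nn.Embedding(num_entities, hidden)
        self.rel = nn.Embedding(num_relations, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)  # history encoder
        self.out = nn.Linear(3 * hidden, num_entities)       # object scorer

    def forward(self, s, r, neighbor_emb_seq):
        # neighbor_emb_seq: (batch, window, hidden); one aggregated
        # neighborhood embedding per past time step (RE-Net builds these
        # with a neighborhood aggregator such as RGCN).
        _, h = self.rnn(neighbor_emb_seq)
        feats = torch.cat([self.ent(s), self.rel(r), h.squeeze(0)], dim=-1)
        return self.out(feats)  # logits over object entities
```

At inference time, sampled events extend the history window, which is how sequential, multi-step prediction over future time stamps proceeds.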
If you make use of this code or the RE-Net algorithm in your work, please cite the following paper:
@article{jin2019recurrent,
  title={Recurrent Event Network: Global Structure Inference over Temporal Knowledge Graph},
  author={Jin, Woojeong and Jiang, He and Qu, Meng and Chen, Tong and Zhang, Changlin and Szekely, Pedro and Ren, Xiang},
  journal={ICLR-RLGM},
  year={2019}
}
Install PyTorch (>= 0.4.0) and DGL by following the instructions on the PyTorch and DGL websites. Our code is written in Python 3.
Run the following commands to create a conda environment (assuming CUDA 10.0):
conda create -n renet python=3.6 numpy
source activate renet
pip install torch torchvision
pip install dgl-cu100
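A quick way to verify the environment before preprocessing, assuming a CUDA 10.0 build of both libraries:

```python
# Sanity check: both libraries import and the GPU is visible.
import torch
import dgl

print(torch.__version__)          # expect >= 0.4.0
print(dgl.__version__)
print(torch.cuda.is_available())  # expect True with the dgl-cu100 setup
```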
This code includes RE-Net with the RGCN aggregator. Before running, preprocess the datasets:
cd data/DATA_NAME
python3 get_history_graph.py
We first pretrain the global model.
python3 pretrain.py -d DATA_NAME --gpu 0 --dropout 0.5 --n-hidden 200 --lr 1e-3 --max-epochs 20 --batch-size 1024
Then, train the model.
python3 train.py -d DATA_NAME --gpu 0 --dropout 0.5 --n-hidden 200 --lr 1e-3 --max-epochs 20 --batch-size 1024
We are ready to test!
python3 test.py -d DATA_NAME --gpu 0 --n-hidden 200
The default hyperparameters give the best performance.
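For convenience, the steps above can be scripted. The following is a minimal driver sketch that simply replays the commands in order, using ICEWS18 as an example DATA_NAME:

```python
# Hypothetical driver that runs preprocessing, pretraining, training, and
# testing for one dataset with the hyperparameters shown above.
import subprocess

DATA = "ICEWS18"  # substitute any DATA_NAME
common = ["--gpu", "0", "--n-hidden", "200"]
train_args = ["--dropout", "0.5", "--lr", "1e-3",
              "--max-epochs", "20", "--batch-size", "1024"]

subprocess.run(["python3", "get_history_graph.py"], cwd=f"data/{DATA}", check=True)
subprocess.run(["python3", "pretrain.py", "-d", DATA] + train_args + common, check=True)
subprocess.run(["python3", "train.py", "-d", DATA] + train_args + common, check=True)
subprocess.run(["python3", "test.py", "-d", DATA] + common, check=True)
```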
Our work addresses an extrapolation problem, on which there is only limited prior work: the model must predict events at future, unobserved times (e.g., train on events up to time T and predict events after T). Most studies on temporal knowledge graphs instead focus on the interpolation problem, i.e., inferring missing facts within the observed time range. We organized the related work into the following categories: Temporal Knowledge Graph Reasoning, Dynamic Graph Embedding, Knowledge Graph Embedding, and Static Graph Embedding.
There are five datasets: ICEWS18, ICEWS14 (from Know-Evolve), GDELT, WIKI, and YAGO. All of them are set up for the extrapolation problem: times in the test set must be later than times in the train and validation sets, and times in the validation set must be later than times in the train set. Each data folder contains 'stat.txt', 'train.txt', 'valid.txt', 'test.txt', 'get_history.py', and 'get_history_graph.py'.
- 'get_history.py': generates the history used by models 0, 1, and 2.
- 'get_history_graph.py': generates the history and the graph used by model 3.
- 'stat.txt': the first value is the number of entities and the second is the number of relations.
- 'train.txt', 'valid.txt', 'test.txt': the first column is the subject entity, the second is the relation, the third is the object entity, and the fourth is the time. The fifth column follows Know-Evolve's data format and is ignored by RE-Net (a minimal loader sketch follows this list).
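To make the format concrete, here is a minimal, hypothetical loader for these files; the function name and the ICEWS18 paths are illustrative only:

```python
# Minimal loader for the quadruple files described above. The optional
# fifth column is Know-Evolve specific and is skipped, as in RE-Net.
def load_quadruples(path):
    quadruples = []
    with open(path) as f:
        for line in f:
            s, r, o, t = (int(x) for x in line.split()[:4])
            quadruples.append((s, r, o, t))
    return quadruples

# stat.txt: number of entities, then number of relations
with open("data/ICEWS18/stat.txt") as f:
    num_entities, num_relations = (int(x) for x in f.readline().split()[:2])

train = load_quadruples("data/ICEWS18/train.txt")
```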
We use the following public code for the baselines and their hyperparameters. We selected the embedding size from the values listed below.
Baselines | Code | Embedding size | Batch size |
---|---|---|---|
TransE (Bordes et al., 2013) | Link | 100, 200 | 1024 |
DistMult (Yang et al., 2015) | Link | 100, 200 | 1024 |
ComplEx (Trouillon et al., 2016) | Link | 50, 100, 200 | 100 |
RGCN (Schlichtkrull et al., 2018) | Link | 200 | Default |
ConvE (Dettmers et al., 2018) | Link | 200 | 128 |
Know-Evolve (Trivedi et al., 2017) | Link | Default | Default |
HyTE (Dasgupta et al., 2018) | Link | 128 | Default |
We also implemented TA-TransE, TA-DistMult, and TTransE. Run these baselines with the following command:
cd ./baselines
CUDA_VISIBLE_DEVICES=0 python3 TA-TransE.py -f 1 -d ICEWS18 -L 1 -bs 1024 -n 1000
Their implementations are in the 'baselines' folder.