[Release] update version (#297)
* update version; add news.md; modify contributing.md

* change urls to dmlc
jermainewang authored Dec 11, 2018
1 parent f896e49 commit af23c45
Showing 18 changed files with 88 additions and 42 deletions.
36 changes: 24 additions & 12 deletions CONTRIBUTING.md
@@ -1,15 +1,27 @@
## Contributing to DGL

If you are interested in contributing to DGL, your contributions will fall
into two categories:
1. You want to propose a new Feature and implement it
- post about your intended feature, and we shall discuss the design and
implementation. Once we agree that the plan looks good, go ahead and implement it.
2. You want to implement a feature or bug-fix for an outstanding issue
- Look at the outstanding issues
- Especially look at the Low Priority and Medium Priority issues
- Pick an issue and comment on the task that you want to work on this feature
- If you need more context on a particular issue, please ask and we shall provide.

Once you finish implementing a feature or bugfix, please send a Pull Request.
Contributions are always welcome. A good starting point is the roadmap issue, where
you can find our current milestones. All contributions must go through pull requests
and be reviewed by the committers.

For documentation improvements, simply submit a PR with the change and prepend the title with `[Doc]`.

For new features, we suggest first creating an issue using the feature request template.
Follow the template to describe the feature you want to implement and your plans.
We also suggest picking features from the roadmap issue, because they are more likely
to be incorporated in the next release.

For bug fixes, we suggest first creating an issue using the bug report template if the
bug has not been reported yet. Please reply to the issue if you'd like to help. Once
the task is assigned, make the change in your fork and submit a PR with the code. Remember to
also reference the issue where the bug is reported.

Once your PR is merged, congratulations, you are now a contributor to the DGL project.
We will put your name in the list below and also on our [website](https://www.dgl.ai/ack).

Contributors
------------
[Yizhi Liu](https://github.com/yzhliu)
[Yifei Ma](https://github.com/yifeim)
Hao Jin
[Sheng Zha](https://github.com/szha)
34 changes: 34 additions & 0 deletions NEWS.md
@@ -0,0 +1,34 @@
DGL release and change logs
==========

Refer to the roadmap issue for upcoming versions and features.

0.1.3
-----
Bug fixes
* Compatible with PyTorch v1.0.
* Bug fix in networkx graph conversion.

0.1.2
-----
First open release.
* Basic graph APIs.
* Basic message passing APIs.
* PyTorch backend.
* MXNet backend.
* Optimization using SPMV.
* Model examples w/ PyTorch:
- GCN
- GAT
- JTNN
- DGMG
- Capsule
- LGNN
- RGCN
- Transformer
- TreeLSTM
* Model examples w/ MXNet:
- GCN
- GAT
- RGCN
- SSE
8 changes: 4 additions & 4 deletions conda/dgl/meta.yaml
@@ -1,10 +1,10 @@
package:
name: dgl
version: "0.1.2"
version: "0.1.3"

source:
git_rev: 0.1.2
git_url: https://github.com/jermainewang/dgl.git
git_rev: 0.1.x
git_url: https://github.com/dmlc/dgl.git

requirements:
build:
@@ -21,5 +21,5 @@ requirements:
- networkx

about:
home: https://github.com/jermainewang/dgl.git
home: https://github.com/dmlc/dgl.git
license_file: ../../LICENSE
2 changes: 1 addition & 1 deletion include/dgl/runtime/c_runtime_api.h
@@ -33,7 +33,7 @@
#endif

// DGL version
#define DGL_VERSION "0.1.2"
#define DGL_VERSION "0.1.3"


// DGL Runtime is DLPack compatible.
2 changes: 1 addition & 1 deletion python/dgl/_ffi/libinfo.py
@@ -87,4 +87,4 @@ def find_lib_path(name=None, search_path=None, optional=False):
# We use the version of the incoming release for code
# that is under development.
# The following line is set by dgl/python/update_version.py
__version__ = "0.1.2"
__version__ = "0.1.3"
2 changes: 1 addition & 1 deletion python/setup.py
@@ -72,7 +72,7 @@ def get_lib_path():
'scipy>=1.1.0',
'networkx>=2.1',
],
url='https://github.com/jermainewang/dgl',
url='https://github.com/dmlc/dgl',
distclass=BinaryDistribution,
classifiers=[
'Development Status :: 3 - Alpha',
2 changes: 1 addition & 1 deletion python/update_version.py
@@ -11,7 +11,7 @@
# current version
# We use the version of the incoming release for code
# that is under development
__version__ = "0.1.2"
__version__ = "0.1.3"

# Implementations
def update(file_name, pattern, repl):
4 changes: 2 additions & 2 deletions tutorials/models/1_gnn/4_rgcn.py
@@ -56,7 +56,7 @@
#
# This tutorial will focus on the first task to show how to generate entity
# representation. `Complete
# code <https://github.com/jermainewang/dgl/tree/rgcn/examples/pytorch/rgcn>`_
# code <https://github.com/dmlc/dgl/tree/rgcn/examples/pytorch/rgcn>`_
# for both tasks can be found in DGL's github repository.
#
# Key ideas of R-GCN
@@ -356,4 +356,4 @@ def forward(self, g):
# The implementation is similar to the above but with an extra DistMult layer
# stacked on top of the R-GCN layers. You may find the complete
# implementation of link prediction with R-GCN in our `example
# code <https://github.com/jermainewang/dgl/blob/master/examples/pytorch/rgcn/link_predict.py>`_.
# code <https://github.com/dmlc/dgl/blob/master/examples/pytorch/rgcn/link_predict.py>`_.
2 changes: 1 addition & 1 deletion tutorials/models/1_gnn/6_line_graph.py
@@ -610,7 +610,7 @@ def collate_fn(batch):

######################################################################################
# You can check out the complete code
# `here <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/line_graph>`_.
# `here <https://github.com/dmlc/dgl/tree/master/examples/pytorch/line_graph>`_.
#
# What's the business with :math:`\{Pm, Pd\}`?
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2 changes: 1 addition & 1 deletion tutorials/models/1_gnn/8_sse_mx.py
@@ -540,7 +540,7 @@ def test(g, test_nodes, steady_state_operator, predictor):
# scaled SSE to a graph with 50 million nodes and 150 million edges on a
# single P3.8xlarge instance, and one epoch takes only about 160 seconds.
#
# See full examples `here <https://github.com/jermainewang/dgl/tree/master/examples/mxnet/sse>`_.
# See full examples `here <https://github.com/dmlc/dgl/tree/master/examples/mxnet/sse>`_.
#
# .. |image0| image:: https://s3.us-east-2.amazonaws.com/dgl.ai/tutorial/img/floodfill-paths.gif
# .. |image1| image:: https://s3.us-east-2.amazonaws.com/dgl.ai/tutorial/img/neighbor-sampling.gif
10 changes: 5 additions & 5 deletions tutorials/models/1_gnn/README.txt
@@ -5,26 +5,26 @@ Graph Neural Network and its variant

* **GCN** `[paper] <https://arxiv.org/abs/1609.02907>`__ `[tutorial]
<1_gnn/1_gcn.html>`__ `[code]
<https://github.com/jermainewang/dgl/blob/master/examples/pytorch/gcn>`__:
<https://github.com/dmlc/dgl/blob/master/examples/pytorch/gcn>`__:
this is the vanilla GCN. The tutorial covers the basic uses of DGL APIs.

* **GAT** `[paper] <https://arxiv.org/abs/1710.10903>`__ `[code]
<https://github.com/jermainewang/dgl/blob/master/examples/pytorch/gat>`__:
<https://github.com/dmlc/dgl/blob/master/examples/pytorch/gat>`__:
the key extension of GAT over the vanilla GCN is deploying multi-head attention
over the neighborhood of a node, which greatly enhances the capacity and
expressiveness of the model.

* **R-GCN** `[paper] <https://arxiv.org/abs/1703.06103>`__ `[tutorial]
<1_gnn/4_rgcn.html>`__ `[code]
<https://github.com/jermainewang/dgl/tree/master/examples/pytorch/rgcn>`__:
<https://github.com/dmlc/dgl/tree/master/examples/pytorch/rgcn>`__:
the key difference of R-GCN is that it allows multiple edges between two entities
of a graph, with edges of distinct relation types encoded differently. This
is an interesting extension of GCN that can have many applications of
its own.

* **LGNN** `[paper] <https://arxiv.org/abs/1705.08415>`__ `[tutorial]
<1_gnn/6_line_graph.html>`__ `[code]
<https://github.com/jermainewang/dgl/tree/master/examples/pytorch/line_graph>`__:
<https://github.com/dmlc/dgl/tree/master/examples/pytorch/line_graph>`__:
this model focuses on community detection by inspecting graph structures. It
uses representations of both the original graph and its line-graph
companion. In addition to demonstrating how an algorithm can harness multiple
@@ -34,7 +34,7 @@ Graph Neural Network and its variant

* **SSE** `[paper] <http://proceedings.mlr.press/v80/dai18a/dai18a.pdf>`__ `[tutorial]
<1_gnn/8_sse_mx.html>`__ `[code]
<https://github.com/jermainewang/dgl/blob/master/examples/mxnet/sse>`__:
<https://github.com/dmlc/dgl/blob/master/examples/mxnet/sse>`__:
the emphasis here is on a *giant* graph that cannot fit comfortably on one GPU
card. SSE is an example that illustrates the co-design of both algorithm and
system: sampling to guarantee asymptotic convergence while lowering the
2 changes: 1 addition & 1 deletion tutorials/models/2_small_graph/3_tree-lstm.py
@@ -372,5 +372,5 @@ def batcher_dev(batch):
##############################################################################
# To train the model on the full dataset with different settings (CPU/GPU,
# etc.), please refer to our repo's
# `example <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/tree_lstm>`__.
# `example <https://github.com/dmlc/dgl/tree/master/examples/pytorch/tree_lstm>`__.
# Besides, we also provide an implementation of the Child-Sum Tree LSTM.
2 changes: 1 addition & 1 deletion tutorials/models/2_small_graph/README.txt
@@ -6,7 +6,7 @@ Dealing with many small graphs

* **Tree-LSTM** `[paper] <https://arxiv.org/abs/1503.00075>`__ `[tutorial]
<2_small_graph/3_tree-lstm.html>`__ `[code]
<https://github.com/jermainewang/dgl/blob/master/examples/pytorch/tree_lstm>`__:
<https://github.com/dmlc/dgl/blob/master/examples/pytorch/tree_lstm>`__:
sentences of natural languages have inherent structures, which are thrown
away by treating them simply as sequences. Tree-LSTM is a powerful model
that learns the representation by leveraging prior syntactic structures
2 changes: 1 addition & 1 deletion tutorials/models/3_generative_model/5_dgmg.py
@@ -762,7 +762,7 @@ def _get_next(i, v_max):

#######################################################################################
# For the complete implementation, see `dgl DGMG example
# <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/dgmg>`__.
# <https://github.com/dmlc/dgl/tree/master/examples/pytorch/dgmg>`__.
#
# Batched Graph Generation
# ---------------------------
4 changes: 2 additions & 2 deletions tutorials/models/3_generative_model/README.txt
@@ -5,7 +5,7 @@ Generative models

* **DGMG** `[paper] <https://arxiv.org/abs/1803.03324>`__ `[tutorial]
<3_generative_model/5_dgmg.html>`__ `[code]
<https://github.com/jermainewang/dgl/tree/master/examples/pytorch/dgmg>`__:
<https://github.com/dmlc/dgl/tree/master/examples/pytorch/dgmg>`__:
this model belongs to the important family of models for structure
generation. DGMG is interesting because its state-machine approach is the
most general. It is also very challenging because, unlike Tree-LSTM, every
@@ -14,7 +14,7 @@ Generative models
inter-graph parallelism to steadily improve the performance.

* **JTNN** `[paper] <https://arxiv.org/abs/1802.04364>`__ `[code]
<https://github.com/jermainewang/dgl/tree/master/examples/pytorch/jtnn>`__:
<https://github.com/dmlc/dgl/tree/master/examples/pytorch/jtnn>`__:
unlike DGMG, this paper generates molecular graphs using the framework of
variational auto-encoders. Perhaps more interesting is its approach to building
structure hierarchically, in the case of molecules, with a junction tree as
4 changes: 2 additions & 2 deletions tutorials/models/4_old_wines/2_capsule.py
@@ -257,8 +257,8 @@ def weight_animate(i):
# |image5|
#
# The full code of this visualization is provided at
# `link <https://github.com/jermainewang/dgl/blob/master/examples/pytorch/capsule/simple_routing.py>`__; the complete
# code that trains on MNIST is at `link <https://github.com/jermainewang/dgl/tree/tutorial/examples/pytorch/capsule>`__.
# `link <https://github.com/dmlc/dgl/blob/master/examples/pytorch/capsule/simple_routing.py>`__; the complete
# code that trains on MNIST is at `link <https://github.com/dmlc/dgl/tree/tutorial/examples/pytorch/capsule>`__.
#
# .. |image0| image:: https://i.imgur.com/55Ovkdh.png
# .. |image1| image:: https://i.imgur.com/9tc6GLl.png
6 changes: 3 additions & 3 deletions tutorials/models/4_old_wines/7_transformer.py
@@ -120,7 +120,7 @@
# In this tutorial, we show a simplified version of the implementation in
# order to highlight the most important design points (for instance we
# only show single-head attention); the complete code can be found
# `here <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/transformer>`__.
# `here <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer>`__.
# The overall structure is similar to the one from `The Annotated
# Transformer <http://nlp.seas.harvard.edu/2018/04/03/attention.html>`__.
#
@@ -576,7 +576,7 @@
#
# Note that we do not cover the inference module in this tutorial (which
# requires beam search); please refer to the `Github
# Repo <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/transformer>`__
# Repo <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer>`__
# for full implementation.
#
# .. code:: python
@@ -851,7 +851,7 @@
# that satisfy the given predicate.
#
# For the full implementation, please refer to our `Github
# Repo <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/transformer/modules/act.py>`__.
# Repo <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer/modules/act.py>`__.
#
# The figure below shows the effect of Adaptive Computation
# Time (different positions of a sentence are revised a different number of times):
6 changes: 3 additions & 3 deletions tutorials/models/4_old_wines/README.txt
@@ -5,7 +5,7 @@ Old (new) wines in new bottle
-----------------------------
* **Capsule** `[paper] <https://arxiv.org/abs/1710.09829>`__ `[tutorial]
<4_old_wines/2_capsule.html>`__ `[code]
<https://github.com/jermainewang/dgl/tree/master/examples/pytorch/capsule>`__:
<https://github.com/dmlc/dgl/tree/master/examples/pytorch/capsule>`__:
this new computer vision model has two key ideas -- enhancing the feature
representation in a vector form (instead of a scalar) called *capsule*, and
replacing max-pooling with dynamic routing. The idea of dynamic routing is to
@@ -15,9 +15,9 @@ Old (new) wines in new bottle


* **Transformer** `[paper] <https://arxiv.org/abs/1706.03762>`__ `[tutorial] <4_old_wines/7_transformer.html>`__
`[code] <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/transformer>`__ and **Universal Transformer**
`[code] <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer>`__ and **Universal Transformer**
`[paper] <https://arxiv.org/abs/1807.03819>`__ `[tutorial] <4_old_wines/7_transformer.html>`__
`[code] <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/transformer/modules/act.py>`__:
`[code] <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer/modules/act.py>`__:
these two models replace RNN with several layers of multi-head attention to
encode and discover structures among tokens of a sentence. These attention
mechanisms can similarly be formulated as graph operations with
