
Environment issues #60

Open
Harley-ZP opened this issue Mar 26, 2022 · 4 comments

Comments

@Harley-ZP

I am using the Windows 10 OS, and I followed the markdown instructions and installed the packages, including dgl-cuda 10.1, but "No module named dgl" shows up when executing `import dgl`. It seems that dgl-cuda is not being imported correctly.

Is anyone facing the same problem?

Chinese version (translated): After installing the packages according to the .md, dgl-cuda cannot be imported normally; `import dgl` raises an error.

@MrLiuCC

MrLiuCC commented Mar 31, 2022

(Translated:) The environment is probably not set up correctly.

@MrLiuCC

MrLiuCC commented Mar 31, 2022

I used the following commands to create the environment successfully (note that the PyTorch conda channel is `pytorch`, not `python`):

```shell
conda create -n renet python=3.6 numpy
conda activate renet
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch
conda install -c dglteam dgl-cuda10.1
conda install scikit-learn
```
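After activating the `renet` environment, a quick sanity check like the following (a sketch, not part of the original instructions) reports which of the required packages actually import, which helps distinguish a missing package from a broken CUDA build:

```python
import importlib

# Try importing each package the RE-Net setup needs and report its version.
# Packages that fail to import print the ImportError instead of crashing.
for name in ("torch", "torchvision", "dgl", "sklearn"):
    try:
        mod = importlib.import_module(name)
        print(name, getattr(mod, "__version__", "unknown"))
    except ImportError as exc:
        print(name, "FAILED:", exc)
```

If `dgl` prints `FAILED` here, the "No module named dgl" error above is an installation problem in the active environment (e.g. the package was installed into a different conda env) rather than a code problem.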

@Harley-ZP
Author

Harley-ZP commented Mar 31, 2022 via email

@ZhengJialin1000

I get the following error:

```
E:\anaconda3\lib\site-packages\torch\_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at ..\aten\src\ATen\native\BinaryOps.cpp:467.)
  return torch.floor_divide(self, other)
Traceback (most recent call last):
  File "pretrain.py", line 139, in <module>
    train(args)
  File "pretrain.py", line 83, in train
    loss = model(batch_data, true_s, true_o, graph_dict)
  File "E:\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\code\RE-Net-master\global_model.py", line 47, in forward
    packed_input = self.aggregator(sorted_t, self.ent_embeds, graph_dict, reverse=reverse)
  File "E:\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\code\RE-Net-master\Aggregator.py", line 55, in forward
    batched_graph.ndata['h'] = ent_embeds[batched_graph.ndata['id']].view(-1, ent_embeds.shape[1])
  File "E:\anaconda3\lib\site-packages\dgl\view.py", line 84, in __setitem__
    self._graph._set_n_repr(self._ntid, self._nodes, {key : val})
  File "E:\anaconda3\lib\site-packages\dgl\heterograph.py", line 4124, in _set_n_repr
    ' same device.'.format(key, F.context(val), self.device))
dgl._ffi.base.DGLError: Cannot assign node feature "h" on device cuda:0 to a graph on device cpu. Call DGLGraph.to() to copy the graph to the same device.
```

(Translated:) I added `batched_graph = dgl.batch(g_list)` before line 55 of Aggregator, and the error changed to `local variable 'batch_data' referenced before assignment`. How should this be fixed? It is probably a dgl version issue.
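The DGLError at the bottom of the traceback says the graph lives on the CPU while the node features live on cuda:0. A minimal sketch of the general fix, assuming a CPU-built batched graph and GPU embeddings (the variable names `batched_graph` and `ent_embeds` are taken from the traceback; everything else here is illustrative):

```python
import torch

# Sketch of the fix for the DGLError above: move the batched graph to the
# embeddings' device before assigning node features, e.g.
#     batched_graph = batched_graph.to(ent_embeds.device)
# as the error message itself suggests. The same device-matching rule
# applies to plain tensors, demonstrated here without dgl:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
ent_embeds = torch.randn(4, 8, device=device)  # hypothetical embedding table
node_ids = torch.tensor([0, 2, 1, 3])          # index tensor created on CPU

# Moving the indices to the embeddings' device avoids the mismatch:
feats = ent_embeds[node_ids.to(ent_embeds.device)].view(-1, ent_embeds.shape[1])
print(feats.shape)
```

If that diagnosis applies here, the likely fix is moving the existing batched graph with `DGLGraph.to()` rather than rebuilding it with `dgl.batch` (rebuilding explains the `referenced before assignment` error, since the surrounding variables were not in scope at that point).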
