-
When I use

    import torch
    import torch_geometric.transforms as T
    from torch_geometric.datasets import Planetoid

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    transform = T.Compose([
        T.NormalizeFeatures(),
        T.ToDevice(device),
        T.RandomLinkSplit(num_val=0.1, num_test=0.1, is_undirected=True,
                          add_negative_train_samples=False),
    ])
    dataset = Planetoid('data', name='CiteSeer', transform=transform)
    train_data, val_data, test_data = dataset[0]

I get the following results:

    Data(x=[3327, 3703], edge_index=[2, 7284], y=[3327], train_mask=[3327], val_mask=[3327], test_mask=[3327], edge_label=[3642], edge_label_index=[2, 3642])
    Data(x=[3327, 3703], edge_index=[2, 7284], y=[3327], train_mask=[3327], val_mask=[3327], test_mask=[3327], edge_label=[910], edge_label_index=[2, 910])
    Data(x=[3327, 3703], edge_index=[2, 8194], y=[3327], train_mask=[3327], val_mask=[3327], test_mask=[3327], edge_label=[910], edge_label_index=[2, 910])

Now, when I additionally set disjoint_train_ratio=0.5 in RandomLinkSplit, I get:

    Data(x=[3327, 3703], edge_index=[2, 3642], y=[3327], train_mask=[3327], val_mask=[3327], test_mask=[3327], edge_label=[1821], edge_label_index=[2, 1821])
    Data(x=[3327, 3703], edge_index=[2, 7284], y=[3327], train_mask=[3327], val_mask=[3327], test_mask=[3327], edge_label=[910], edge_label_index=[2, 910])
    Data(x=[3327, 3703], edge_index=[2, 8194], y=[3327], train_mask=[3327], val_mask=[3327], test_mask=[3327], edge_label=[910], edge_label_index=[2, 910])

It can be seen that both edge_index and edge_label of train_data are reduced by half. The documentation describes disjoint_train_ratio as follows: "If set to a value greater than 0.0, training edges will not be shared for message passing and supervision. Instead, disjoint_train_ratio edges are used as ground-truth labels for supervision during training. (default: 0.0)" Based on this explanation, I am confused by the output above: why are both edge_index and edge_label reduced by half?
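For reference, here is a minimal, CPU-only sketch of the second configuration (assuming disjoint_train_ratio=0.5 is the only change from the code above, with ToDevice omitted for brevity):

    import torch_geometric.transforms as T
    from torch_geometric.datasets import Planetoid

    transform = T.Compose([
        T.NormalizeFeatures(),
        T.RandomLinkSplit(num_val=0.1, num_test=0.1, is_undirected=True,
                          add_negative_train_samples=False,
                          disjoint_train_ratio=0.5),
    ])
    dataset = Planetoid('data', name='CiteSeer', transform=transform)
    train_data, val_data, test_data = dataset[0]

    # CiteSeer has 4552 undirected edges; num_val=0.1 and num_test=0.1 leave
    # 3642 undirected edges for training. With disjoint_train_ratio=0.5, half
    # of them (1821) are held out as supervision labels, and the remaining
    # 1821 (stored in both directions) are kept for message passing.
    print(train_data.edge_index.size(1))        # 3642
    print(train_data.edge_label_index.size(1))  # 1821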
-
IMO, it makes sense that both are reduced by half. Previously, message passing edges and supervision edges were shared; now they are disjoint, in a 50:50 ratio. Does that make sense?
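If it helps, here is one way to check this empirically (a minimal sketch, assuming the same CiteSeer setup as in the question, with disjoint_train_ratio=0.5):

    import torch_geometric.transforms as T
    from torch_geometric.datasets import Planetoid

    transform = T.RandomLinkSplit(num_val=0.1, num_test=0.1, is_undirected=True,
                                  add_negative_train_samples=False,
                                  disjoint_train_ratio=0.5)
    train_data, _, _ = transform(Planetoid('data', name='CiteSeer')[0])

    # Treat each edge as an unordered pair so both directions compare equal.
    mp_edges  = {frozenset(e) for e in train_data.edge_index.t().tolist()}
    sup_edges = {frozenset(e) for e in train_data.edge_label_index.t().tolist()}

    print(len(mp_edges), len(sup_edges))  # 1821 1821
    print(len(mp_edges & sup_edges))      # 0 -> the two edge sets are disjoint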
-
Hi, can you please explain to me the terms training edges, supervision edges, and message passing edges, and how they relate?