I've tried to use GCNConv for a vanilla GCN model (implemented below, since no such model is available in the repo).
As a result, the evaluation results do not seem reproducible, and the vanilla model's results are even higher in some cases. For example, when I test the vanilla GCN model on REDDIT-BINARY over multiple runs, it gives an average accuracy above 90% (I report the test accuracy of the model parameters that achieved the best validation accuracy).
Do you have any idea why this happens?
import torch
import torch.nn.functional as F
from torch.nn import Linear
from torch_geometric.nn import GCNConv, global_add_pool, global_mean_pool


class GCN(torch.nn.Module):
    def __init__(self, num_features=1, num_classes=1, num_hidden=32):
        super(GCN, self).__init__()
        dim = num_hidden
        # Five GCNConv layers, each followed by batch normalization.
        self.conv1 = GCNConv(num_features, dim)
        self.bn1 = torch.nn.BatchNorm1d(dim)
        self.conv2 = GCNConv(dim, dim)
        self.bn2 = torch.nn.BatchNorm1d(dim)
        self.conv3 = GCNConv(dim, dim)
        self.bn3 = torch.nn.BatchNorm1d(dim)
        self.conv4 = GCNConv(dim, dim)
        self.bn4 = torch.nn.BatchNorm1d(dim)
        self.conv5 = GCNConv(dim, dim)
        self.bn5 = torch.nn.BatchNorm1d(dim)
        # Two-layer classifier on the pooled graph embedding.
        self.fc1 = Linear(dim, dim)
        self.fc2 = Linear(dim, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = self.bn1(x)
        x = F.relu(self.conv2(x, edge_index))
        x = self.bn2(x)
        x = F.relu(self.conv3(x, edge_index))
        x = self.bn3(x)
        x = F.relu(self.conv4(x, edge_index))
        x = self.bn4(x)
        x = F.relu(self.conv5(x, edge_index))
        x = self.bn5(x)
        # Readout: pool node embeddings into one vector per graph.
        # x = global_add_pool(x, batch)
        x = global_mean_pool(x, batch)
        x = F.relu(self.fc1(x))
        # x = F.dropout(x, p=0.5, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=-1)
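For reference, here is a minimal smoke test of the forward pass; the toy graphs and constant node features are my own assumptions (REDDIT-BINARY graphs have no node attributes, so a constant feature of dimension 1 is a common stand-in), not something from the repo:

import torch
from torch_geometric.data import Batch, Data

# Two tiny graphs with constant scalar node features.
g1 = Data(x=torch.ones(4, 1), edge_index=torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]]))
g2 = Data(x=torch.ones(3, 1), edge_index=torch.tensor([[0, 1, 2], [1, 2, 0]]))
batch = Batch.from_data_list([g1, g2])

model = GCN(num_features=1, num_classes=2, num_hidden=32)
out = model(batch.x, batch.edge_index, batch.batch)
print(out.shape)  # torch.Size([2, 2]) -- log-probabilities, one row per graph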
I also attempted to implement the GCN network and came up with something slightly different (I am not one of the original authors). According to the G-Mixup paper, the GCN should be implemented as follows:
"Four GNN layers and global mean pooling are applied. All the hidden units [are] set to 64. The activation function is ReLU."
Here is my implementation:
import torch
import torch.nn.functional as F
from torch.nn import Linear
from torch_geometric.nn import GCNConv, global_mean_pool


class GCN(torch.nn.Module):
    def __init__(self, num_features=1, num_classes=1, num_hidden=32):
        super(GCN, self).__init__()
        dim = num_hidden  # the paper sets the hidden units to 64, so pass num_hidden=64
        # Four GCNConv layers, per the paper, with no batch normalization.
        self.conv1 = GCNConv(in_channels=num_features, out_channels=dim)
        self.conv2 = GCNConv(in_channels=dim, out_channels=dim)
        self.conv3 = GCNConv(in_channels=dim, out_channels=dim)
        self.conv4 = GCNConv(in_channels=dim, out_channels=dim)
        self.fc1 = Linear(dim, dim)
        self.fc2 = Linear(dim, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = F.relu(self.conv3(x, edge_index))
        x = F.relu(self.conv4(x, edge_index))
        # Global mean pooling, as described in the paper.
        x = global_mean_pool(x, batch)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.log_softmax(x, dim=-1)
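As a usage note, this is roughly how I would instantiate it to match the paper's setting (the dataset root path is my assumption):

from torch_geometric.datasets import TUDataset

dataset = TUDataset(root='data/TUDataset', name='PROTEINS')
model = GCN(num_features=dataset.num_features,
            num_classes=dataset.num_classes,
            num_hidden=64)  # the paper sets all hidden units to 64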
Basically, I copied the GIN implementation the authors provided in models.py and then made the following changes:
replaced “GIN” with “GCN”, and
removed the batch normalization.
With this network, G-Mixup seems to improve performance (although we've only tested on some small datasets like PROTEINS). It would be great if the authors could comment on this issue!
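For context, the swap from GIN to GCN amounts to replacing the MLP-based GINConv layers with GCNConv layers; a rough sketch of the difference (the exact MLP in the authors' models.py may differ):

from torch.nn import Linear, ReLU, Sequential
from torch_geometric.nn import GCNConv, GINConv

dim = 64
# GIN: sum-aggregates neighbor features, then applies a learnable MLP.
gin_layer = GINConv(Sequential(Linear(dim, dim), ReLU(), Linear(dim, dim)))
# GCN: symmetric-normalized neighbor averaging followed by a linear transform.
gcn_layer = GCNConv(dim, dim)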