cogdl.models.nn.gat

Module Contents

Classes

SpecialSpmmFunction

Special autograd function that backpropagates only through the sparse region.

SpecialSpmm

SpGraphAttentionLayer

Sparse version of the GAT layer, similar to https://arxiv.org/abs/1710.10903

PetarVSpGAT

The GAT model from the “Graph Attention Networks” paper.

class cogdl.models.nn.gat.SpecialSpmmFunction[source]

Bases: torch.autograd.Function

Special autograd function that backpropagates only through the sparse region.

static forward(ctx, indices, values, shape, b)[source]
static backward(ctx, grad_output)[source]
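Given the forward signature, this function multiplies a sparse COO matrix (described by indices, values, and shape) with a dense matrix b. The pure-Python sketch below illustrates that computation only; the actual implementation uses torch sparse tensors and a custom backward pass, and the function name here is a stand-in, not cogdl's internals.

```python
def spmm(indices, values, shape, b):
    """Sparse-dense matrix product, COO format.

    indices: list of (row, col) pairs for the nonzero entries
    values:  nonzero values, aligned with indices
    shape:   (n_rows, n_cols) of the sparse matrix
    b:       dense matrix as a list of rows
    """
    n_rows, _ = shape
    n_out = len(b[0])
    out = [[0.0] * n_out for _ in range(n_rows)]
    # Each nonzero (i, j) scatters v * b[j] into output row i.
    for (i, j), v in zip(indices, values):
        for k in range(n_out):
            out[i][k] += v * b[j][k]
    return out

# A 2x2 sparse identity times b returns b unchanged:
b = [[1.0, 2.0], [3.0, 4.0]]
print(spmm([(0, 0), (1, 1)], [1.0, 1.0], (2, 2), b))  # [[1.0, 2.0], [3.0, 4.0]]
```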
class cogdl.models.nn.gat.SpecialSpmm[source]

Bases: torch.nn.Module

forward(self, indices, values, shape, b)[source]
class cogdl.models.nn.gat.SpGraphAttentionLayer(in_features, out_features, dropout, alpha, concat=True)[source]

Bases: torch.nn.Module

Sparse version of the GAT layer, similar to https://arxiv.org/abs/1710.10903

forward(self, input, edge)[source]
__repr__(self)[source]
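The referenced paper computes, for each edge (i, j), a raw score e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) and normalizes scores with a softmax over each node's neighbors. The sketch below shows only that normalization step, assuming the per-edge scores are already computed; names are illustrative and do not reflect cogdl's internals.

```python
import math

def leaky_relu(x, alpha):
    return x if x >= 0 else alpha * x

def attention_coeffs(edges, scores, alpha=0.2):
    """Softmax-normalize raw attention scores per destination node.

    edges:  list of (dst, src) pairs
    scores: raw a^T [Wh_i || Wh_j] score for each edge
    alpha:  LeakyReLU negative slope
    """
    e = [math.exp(leaky_relu(s, alpha)) for s in scores]
    # Sum exponentiated scores over each destination's incoming edges.
    denom = {}
    for (dst, _), w in zip(edges, e):
        denom[dst] = denom.get(dst, 0.0) + w
    return [w / denom[dst] for (dst, _), w in zip(edges, e)]

edges = [(0, 1), (0, 2)]  # node 0 attends to nodes 1 and 2
print(attention_coeffs(edges, [1.0, 1.0]))  # equal scores -> [0.5, 0.5]
```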
class cogdl.models.nn.gat.PetarVSpGAT(nfeat, nhid, nclass, dropout, alpha, nheads)[source]

Bases: cogdl.models.BaseModel

The GAT model from the “Graph Attention Networks” paper (https://arxiv.org/abs/1710.10903).

Args:

num_features (int) : Number of input features.
num_classes (int) : Number of classes.
hidden_size (int) : The dimension of node representations.
dropout (float) : Dropout rate for model training.
alpha (float) : Coefficient of leaky_relu.
nheads (int) : Number of attention heads.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(cls, args)[source]

Build a new model instance.

forward(self, x, edge_index)[source]
loss(self, data)[source]
predict(self, data)[source]
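In GAT-style models, the hidden layer typically runs nheads attention heads in parallel and concatenates their outputs (the concat=True case of the layer above), so the output layer sees nheads * nhid features per node. A minimal sketch of that combination step, with illustrative names and shapes that are not cogdl's actual internals:

```python
def concat_heads(head_outputs):
    """Concatenate per-node features from each attention head.

    head_outputs: list of nheads matrices, each [n_nodes][nhid]
    Returns a single matrix of shape [n_nodes][nheads * nhid].
    """
    n_nodes = len(head_outputs[0])
    return [
        [x for head in head_outputs for x in head[node]]
        for node in range(n_nodes)
    ]

# 2 heads, 1 node, nhid = 2 -> one node with 4 features:
heads = [[[1.0, 2.0]], [[3.0, 4.0]]]
print(concat_heads(heads))  # [[1.0, 2.0, 3.0, 4.0]]
```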