models

BaseModel

class cogdl.models.base_model.BaseModel[source]

Bases: torch.nn.modules.module.Module

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

property device
forward(*args)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(data)[source]
set_loss_fn(loss_fn)[source]
training: bool
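
A typical subclass wires these pieces together: add_args declares hyperparameters, build_model_from_args constructs the instance from the parsed namespace, and forward implements the computation. A minimal sketch with a hypothetical TwoLayerNet (the class and argument names are illustrative, not part of cogdl), assuming the input graph exposes node features as graph.x:

import torch.nn as nn
from cogdl.models.base_model import BaseModel


class TwoLayerNet(BaseModel):
    @staticmethod
    def add_args(parser):
        # Declare model-specific hyperparameters.
        parser.add_argument("--hidden-size", type=int, default=64)

    @classmethod
    def build_model_from_args(cls, args):
        # Construct the model from the parsed argument namespace.
        return cls(args.num_features, args.hidden_size, args.num_classes)

    def __init__(self, in_feats, hidden_size, out_feats):
        super().__init__()
        self.fc1 = nn.Linear(in_feats, hidden_size)
        self.fc2 = nn.Linear(hidden_size, out_feats)

    def forward(self, graph):
        # Call the module (model(graph)), not forward() directly, so hooks run.
        h = self.fc1(graph.x).relu()
        return self.fc2(h)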

Embedding Model

class cogdl.models.emb.hope.HOPE(dimension, beta)[source]

Bases: cogdl.models.base_model.BaseModel

The HOPE model from the “Asymmetric Transitivity Preserving Graph Embedding” paper.

Parameters
  • dimension (int) – The dimension of node representation.

  • beta (float) – Decay parameter in the Katz proximity.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph, return_dict=False)[source]

The authors claim that Katz proximity has superior performance on related tasks (the identity below is checked numerically in the sketch after this entry):

S_katz = (M_g)^-1 * M_l = (I - beta*A)^-1 * (beta*A) = (I - beta*A)^-1 * (I - (I - beta*A)) = (I - beta*A)^-1 - I

training: bool
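
The closed form above can be verified directly with dense linear algebra. A minimal numerical sketch, assuming a small dense adjacency matrix and a beta below the reciprocal of its spectral radius:

import numpy as np

# Toy directed 3-cycle; Katz requires beta < 1 / spectral_radius(A).
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
beta = 0.1

I = np.eye(3)
S_katz = np.linalg.inv(I - beta * A) @ (beta * A)  # (M_g)^-1 * M_l
S_closed = np.linalg.inv(I - beta * A) - I         # equivalent closed form
assert np.allclose(S_katz, S_closed)

# HOPE then factorizes S_katz (e.g. via a truncated SVD) into source/target embeddings.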
class cogdl.models.emb.spectral.Spectral(hidden_size)[source]

Bases: cogdl.models.base_model.BaseModel

The Spectral clustering model from the “Leveraging social media networks for classification” paper

Parameters

hidden_size (int) – The dimension of node representation.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph, return_dict=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.models.emb.hin2vec.Hin2vec(hidden_dim, walk_length, walk_num, batch_size, hop, negative, epochs, lr, cpu=True)[source]

Bases: cogdl.models.base_model.BaseModel

The Hin2vec model from the “HIN2Vec: Explore Meta-paths in Heterogeneous Information Networks for Representation Learning” paper.

Parameters
  • hidden_dim (int) – The dimension of node representation.

  • walk_length (int) – The walk length.

  • walk_num (int) – The number of walks to sample for each node.

  • batch_size (int) – The batch size of training in Hin2vec.

  • hop (int) – The number of hops used to construct training samples in Hin2vec.

  • negative (int) – The number of negative samples for each meta-path pair.

  • epochs (int) – The number of training epochs.

  • lr (float) – The initial learning rate of SGD.

  • cpu (bool) – If True, train Hin2vec on CPU instead of GPU.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(data)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.models.emb.netmf.NetMF(dimension, window_size, rank, negative, is_large=False)[source]

Bases: cogdl.models.base_model.BaseModel

The NetMF model from the “Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec” paper.

Parameters
  • dimension (int) – The dimension of node representation.

  • window_size (int) – The context window size used in the language model.

  • rank (int) – The rank used in the approximation of the normalized Laplacian.

  • negative (int) – The number of negative samples in negative sampling.

  • is_large (bool) – If True (for large window sizes), decompose the approximated DeepWalk matrix.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph, return_dict=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
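
For small windows, NetMF factorizes an explicit DeepWalk matrix with a truncated logarithm and an SVD. A dense sketch of that small-window computation, following the paper's formulation (volume vol(G), window size T, negative samples b); cogdl's implementation works with sparse matrices and switches to the approximate variant when is_large is set:

import numpy as np

def netmf_small(A, T=2, b=1, dim=2):
    vol = A.sum()
    d_inv = np.diag(1.0 / A.sum(axis=1))
    P = d_inv @ A  # random-walk transition matrix D^-1 A
    P_sum = sum(np.linalg.matrix_power(P, r) for r in range(1, T + 1))
    M = (vol / (b * T)) * P_sum @ d_inv
    log_M = np.log(np.maximum(M, 1.0))  # element-wise truncated logarithm
    U, s, _ = np.linalg.svd(log_M)
    return U[:, :dim] * np.sqrt(s[:dim])

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
embeddings = netmf_small(A)  # one row per node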
class cogdl.models.emb.deepwalk.DeepWalk(dimension, walk_length, walk_num, window_size, worker, iteration)[source]

Bases: cogdl.models.base_model.BaseModel

The DeepWalk model from the “DeepWalk: Online Learning of Social Representations” paper

Parameters
  • dimension (int) – The dimension of node representation.

  • walk_length (int) – The walk length.

  • walk_num (int) – The number of walks to sample for each node.

  • window_size (int) – The context window size used in the language model.

  • worker (int) – The number of workers for word2vec.

  • iteration (int) – The number of training iterations in word2vec.

static add_args(parser: argparse.ArgumentParser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args) → cogdl.models.emb.deepwalk.DeepWalk[source]
forward(graph, embedding_model_creator=<class 'gensim.models.word2vec.Word2Vec'>, return_dict=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
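
As the forward signature shows, cogdl trains the embeddings with gensim's Word2Vec over truncated random walks. A compact sketch of that core, assuming networkx for the toy graph and gensim >= 4 (where the embedding size argument is vector_size):

import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(G, walk_num=10, walk_length=40):
    walks = []
    for _ in range(walk_num):
        for node in G.nodes():
            walk = [node]
            while len(walk) < walk_length:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(random.choice(nbrs))
            walks.append([str(n) for n in walk])  # word2vec expects tokens
    return walks

G = nx.karate_club_graph()
model = Word2Vec(random_walks(G), vector_size=64, window=5,
                 min_count=0, sg=1, workers=4, epochs=10)  # sg=1: skip-gram
embedding = model.wv[str(0)]  # representation of node 0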
class cogdl.models.emb.gatne.GATNE(dimension, walk_length, walk_num, window_size, worker, epochs, batch_size, edge_dim, att_dim, negative_samples, neighbor_samples, schema)[source]

Bases: cogdl.models.base_model.BaseModel

The GATNE model from the “Representation Learning for Attributed Multiplex Heterogeneous Network” paper

Parameters
  • walk_length (int) – The walk length.

  • walk_num (int) – The number of walks to sample for each node.

  • window_size (int) – The context window size used in the language model.

  • worker (int) – The number of workers for word2vec.

  • epochs (int) – The number of training epochs.

  • batch_size (int) – The size of each training batch.

  • edge_dim (int) – Number of edge embedding dimensions.

  • att_dim (int) – Number of attention dimensions.

  • negative_samples (int) – Negative samples for optimization.

  • neighbor_samples (int) – Neighbor samples for aggregation.

  • schema (str) – The metapath schema used in the model. Metapaths are separated by “,”, and node types within each metapath are connected by “-”. For example: “0-1-0,0-1-2-1-0”.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(network_data)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.models.emb.dgk.DeepGraphKernel(hidden_dim, min_count, window_size, sampling_rate, rounds, epochs, alpha, n_workers=4)[source]

Bases: cogdl.models.base_model.BaseModel

The DeepGraphKernel model from the “Deep Graph Kernels” paper.

Parameters
  • hidden_dim (int) – The dimension of node representation.

  • min_count (int) – Parameter in word2vec.

  • window_size (int) – The context window size used in the language model.

  • sampling_rate (float) – Parameter in word2vec.

  • rounds (int) – The number of iterations of the WL method.

  • epochs (int) – The number of training epochs.

  • alpha (float) – The learning rate of word2vec.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
static feature_extractor(data, rounds, name)[source]
forward(graphs, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

save_embedding(output_path)[source]
training: bool
static wl_iterations(graph, features, rounds)[source]
class cogdl.models.emb.grarep.GraRep(dimension, step)[source]

Bases: cogdl.models.base_model.BaseModel

The GraRep model from the “Grarep: Learning graph representations with global structural information” paper.

Parameters
  • dimension (int) – The dimension of node representation.

  • step (int) – The maximum order of the transition probability matrix.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph, return_dict=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.models.emb.dngr.DNGR(hidden_size1, hidden_size2, noise, alpha, step, epochs, lr, cpu)[source]

Bases: cogdl.models.base_model.BaseModel

The DNGR model from the “Deep Neural Networks for Learning Graph Representations” paper

Parameters
  • hidden_size1 (int) – The size of the first hidden layer.

  • hidden_size2 (int) – The size of the second hidden layer.

  • noise (float) – Denoise rate of DAE.

  • alpha (float) – Parameter in DNGR.

  • step (int) – The max step in random surfing.

  • epochs (int) – The maximum number of training epochs.

  • lr (float) – Learning rate in DNGR.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph, return_dict=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_denoised_matrix(mat)[source]
get_emb(matrix)[source]
get_ppmi_matrix(mat)[source]
random_surfing(adj_matrix)[source]
scale_matrix(mat)[source]
training: bool
class cogdl.models.emb.pronepp.ProNEPP(filter_types, svd, search, max_evals=None, loss_type=None, n_workers=None)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
training: bool
class cogdl.models.emb.graph2vec.Graph2Vec(dimension, min_count, window_size, dm, sampling_rate, rounds, epochs, lr, worker=4)[source]

Bases: cogdl.models.base_model.BaseModel

The Graph2Vec model from the “graph2vec: Learning Distributed Representations of Graphs” paper

Parameters
  • dimension (int) – The dimension of node representation.

  • min_count (int) – Parameter in doc2vec.

  • window_size (int) – The context window size used in the language model.

  • sampling_rate (float) – Parameter in doc2vec.

  • dm (int) – Parameter in doc2vec.

  • rounds (int) – The number of iterations of the WL method.

  • lr (float) – Learning rate in doc2vec.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
static feature_extractor(data, rounds, name)[source]
forward(graphs, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

save_embedding(output_path)[source]
training: bool
static wl_iterations(graph, features, rounds)[source]
class cogdl.models.emb.metapath2vec.Metapath2vec(dimension, walk_length, walk_num, window_size, worker, iteration, schema)[source]

Bases: cogdl.models.base_model.BaseModel

The Metapath2vec model from the “metapath2vec: Scalable Representation Learning for Heterogeneous Networks” paper

Parameters
  • dimension (int) – The dimension of node representation.

  • walk_length (int) – The walk length.

  • walk_num (int) – The number of walks to sample for each node.

  • window_size (int) – The context window size used in the language model.

  • worker (int) – The number of workers for word2vec.

  • iteration (int) – The number of training iterations in word2vec.

  • schema (str) – The metapath schema used in the model. Metapaths are separated by “,”, and node types within each metapath are connected by “-”. For example: “0-1-0,0-2-0,1-0-2-0-1”.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(data)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.models.emb.node2vec.Node2vec(dimension, walk_length, walk_num, window_size, worker, iteration, p, q)[source]

Bases: cogdl.models.base_model.BaseModel

The node2vec model from the “node2vec: Scalable feature learning for networks” paper

Parameters
  • dimension (int) – The dimension of node representation.

  • walk_length (int) – The walk length.

  • walk_num (int) – The number of walks to sample for each node.

  • window_size (int) – The context window size used in the language model.

  • worker (int) – The number of workers for word2vec.

  • iteration (int) – The number of training iterations in word2vec.

  • p (float) – Return parameter of the biased random walk (see the sketch after this entry).

  • q (float) – In-out parameter of the biased random walk.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph, return_dict=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
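
The p and q parameters bias node2vec's second-order random walk: the probability of stepping from the current node v to a candidate x depends on the previous node t. A minimal sketch of the unnormalized transition bias:

def node2vec_bias(t, x, neighbors_of_t, p, q):
    # Unnormalized probability of moving from v to x, having arrived from t.
    if x == t:
        return 1.0 / p  # return to the previous node
    if x in neighbors_of_t:
        return 1.0      # stay in t's neighborhood (BFS-like)
    return 1.0 / q      # move outward (DFS-like)

Multiplying this bias by the edge weight and normalizing over v's neighbors gives the walk distribution; a low q favors exploration, a low p favors backtracking.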
class cogdl.models.emb.pte.PTE(dimension, walk_length, walk_num, negative, batch_size, alpha)[source]

Bases: cogdl.models.base_model.BaseModel

The PTE model from the “PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks” paper.

Parameters
  • dimension (int) – The dimension of node representation.

  • walk_length (int) – The walk length.

  • walk_num (int) – The number of walks to sample for each node.

  • negative (int) – The number of negative samples for each edge.

  • batch_size (int) – The batch size of training in PTE.

  • alpha (float) – The initial learning rate of SGD.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(data)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.models.emb.netsmf.NetSMF(dimension, window_size, negative, num_round, worker)[source]

Bases: cogdl.models.base_model.BaseModel

The NetSMF model from the “NetSMF: Large-Scale Network Embedding as Sparse Matrix Factorization” paper.

Parameters
  • dimension (int) – The dimension of node representation.

  • window_size (int) – The context window size used in the language model.

  • negative (int) – The number of negative samples in negative sampling.

  • num_round (int) – The number of rounds in NetSMF.

  • worker (int) – The number of workers for NetSMF.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph, return_dict=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.models.emb.line.LINE(dimension, walk_length, walk_num, negative, batch_size, alpha, order)[source]

Bases: cogdl.models.base_model.BaseModel

The LINE model from the “Line: Large-scale information network embedding” paper.

Parameters
  • dimension (int) – The dimension of node representation.

  • walk_length (int) – The walk length.

  • walk_num (int) – The number of walks to sample for each node.

  • negative (int) – The number of negative samples for each edge.

  • batch_size (int) – The batch size of training in LINE.

  • alpha (float) – The initial learning rate of SGD.

  • order (int) – 1 preserves first-order proximity, 2 preserves second-order proximity, and 3 uses both.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph, return_dict=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.models.emb.sdne.SDNE(hidden_size1, hidden_size2, droput, alpha, beta, nu1, nu2, epochs, lr, cpu)[source]

Bases: cogdl.models.base_model.BaseModel

The SDNE model from the “Structural Deep Network Embedding” paper

Parameters
  • hidden_size1 (int) – The size of the first hidden layer.

  • hidden_size2 (int) – The size of the second hidden layer.

  • droput (float) – Dropout rate.

  • alpha (float) – Trade-off parameter between the 1-st and 2-nd order objectives in SDNE (see the loss sketch after this entry).

  • beta (float) – Parameter of the 2-nd order objective in SDNE.

  • nu1 (float) – Coefficient of the L1 regularization in SDNE.

  • nu2 (float) – Coefficient of the L2 regularization in SDNE.

  • epochs (int) – The maximum number of training epochs.

  • lr (float) – Learning rate in SDNE.

  • cpu (bool) – If True, train SDNE on CPU instead of GPU.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph, return_dict=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
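
SDNE's objective combines a beta-weighted autoencoder reconstruction of each adjacency row (second-order proximity) with a penalty pulling the embeddings of connected nodes together (first-order proximity), plus the nu1/nu2 regularizers. A sketch of the two data terms, assuming X is the dense adjacency matrix, X_hat its reconstruction, and Y the bottleneck embeddings:

import torch

def sdne_data_terms(X, X_hat, Y, alpha, beta):
    # Second-order term: reconstruction errors on observed edges are weighted by beta.
    B = torch.ones_like(X)
    B[X > 0] = beta
    loss_2nd = (((X_hat - X) * B) ** 2).sum()

    # First-order term: connected nodes should have nearby embeddings.
    src, dst = (X > 0).nonzero(as_tuple=True)
    loss_1st = alpha * ((Y[src] - Y[dst]) ** 2).sum()
    return loss_1st + loss_2nd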
class cogdl.models.emb.prone.ProNE(dimension, step, mu, theta)[source]

Bases: cogdl.models.base_model.BaseModel

The ProNE model from the “ProNE: Fast and Scalable Network Representation Learning” paper.

Parameters
  • dimension (int) – The dimension of node representation.

  • step (int) – The number of terms in the Chebyshev expansion.

  • mu (float) – Parameter in ProNE.

  • theta (float) – Parameter in ProNE.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph: cogdl.data.data.Graph, return_dict=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

GNN Model

class cogdl.models.nn.dgi.DGIModel(in_feats, hidden_size, activation)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
embed(data)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.models.nn.mvgrl.MVGRL(in_feats, hidden_size, sample_size=2000, batch_size=4, alpha=0.2, dataset='cora')[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

augment(graph)[source]
classmethod build_model_from_args(args)[source]
embed(data, msk=None)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

loss(data)[source]
preprocess(graph)[source]
training: bool
class cogdl.models.nn.patchy_san.PatchySAN(num_features, num_classes, num_sample, num_neighbor, iteration)[source]

Bases: cogdl.models.base_model.BaseModel

The Patchy-SAN model from the “Learning Convolutional Neural Networks for Graphs” paper.

Parameters
  • batch_size (int) – The batch size of training.

  • sample (int) – Number of chosen vertices.

  • stride (int) – Node selection stride.

  • neighbor (int) – The number of neighbors for each node.

  • iteration (int) – The number of training iterations.

static add_args(parser)[source]

Add model-specific arguments to the parser.

build_model(num_channel, num_sample, num_neighbor, num_class)[source]
classmethod build_model_from_args(args)[source]
forward(batch)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

classmethod split_dataset(dataset, args)[source]
training: bool
class cogdl.models.nn.gcn.GCN(in_feats, hidden_size, out_feats, num_layers, dropout, activation='relu', residual=False, norm=None, actnn=False, rp_ratio=1)[source]

Bases: cogdl.models.base_model.BaseModel

The GCN model from the “Semi-Supervised Classification with Graph Convolutional Networks” paper

Parameters
  • in_feats (int) – Number of input features.

  • out_feats (int) – Number of classes.

  • hidden_size (int) – The dimension of node representation.

  • dropout (float) – Dropout rate for model training.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
embed(graph)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(data)[source]
training: bool
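
A minimal usage sketch; the Graph construction below assumes the keyword x/edge_index interface of cogdl.data.Graph:

import torch
from cogdl.data import Graph
from cogdl.models.nn.gcn import GCN

x = torch.randn(4, 8)                                    # 4 nodes, 8 features
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])  # a directed 4-cycle
graph = Graph(x=x, edge_index=edge_index)

model = GCN(in_feats=8, hidden_size=16, out_feats=3, num_layers=2, dropout=0.5)
model.eval()           # disable dropout for a deterministic pass
logits = model(graph)  # shape: [4, 3]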
class cogdl.models.nn.gdc_gcn.GDC_GCN(nfeat, nhid, nclass, dropout, alpha, t, k, eps, gdctype)[source]

Bases: cogdl.models.base_model.BaseModel

The GDC model from the “Diffusion Improves Graph Learning” paper, with the PPR and heat matrix variants combined with GCN

Parameters
  • num_features (int) – Number of input features in ppr-preprocessed dataset.

  • num_classes (int) – Number of classes.

  • hidden_size (int) – The dimension of node representation.

  • dropout (float) – Dropout rate for model training.

  • alpha (float) – PPR polynomial filter parameter, between 0 and 1.

  • t (float) – Heat polynomial filter parameter.

  • k (int) – Top k nodes retained during sparsification.

  • eps (float) – Threshold for clipping.

  • gdctype (str) – Diffusion type: “none”, “ppr”, or “heat”.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(data=None)[source]
preprocessing(data, gdc_type='ppr')[source]
reset_data(data)[source]
training: bool
class cogdl.models.nn.graphsage.Graphsage(num_features, num_classes, hidden_size, num_layers, sample_size, dropout, aggr)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(*args)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

inference(x_all, data_loader)[source]
mini_forward(graph)[source]
sampling(edge_index, num_sample)[source]
training: bool
class cogdl.models.nn.compgcn.LinkPredictCompGCN(num_entities, num_rels, hidden_size, num_bases=0, layers=1, sampling_rate=0.01, penalty=0.001, dropout=0.0, lbl_smooth=0.1, opn='sub')[source]

Bases: cogdl.utils.link_prediction_utils.GNNLinkPredict, cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

add_reverse_edges(edge_index, edge_types)[source]
classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

loss(data: cogdl.data.data.Graph, scoring)[source]
predict(graph)[source]
training: bool
class cogdl.models.nn.drgcn.DrGCN(num_features, num_classes, hidden_size, num_layers, dropout, norm=None, activation='relu')[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(graph)[source]
training: bool
class cogdl.models.nn.graph_unet.GraphUnet(in_feats: int, hidden_size: int, out_feats: int, pooling_layer: int, pooling_rates: List[float], n_dropout: float = 0.5, adj_dropout: float = 0.3, activation: str = 'elu', improved: bool = False, aug_adj: bool = False)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph: cogdl.data.data.Graph) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.models.nn.gcnmix.GCNMix(in_feat, hidden_size, num_classes, k, temperature, alpha, dropout)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

forward_aux(x, label, train_index, mix_hidden=True, layer_mix=1)[source]
predict_noise(data, tau=1)[source]
training: bool
class cogdl.models.nn.diffpool.DiffPool(in_feats, hidden_dim, embed_dim, num_classes, num_layers, num_pool_layers, assign_dim, pooling_ratio, batch_size, dropout=0.5, no_link_pred=True, concat=False, use_bn=False)[source]

Bases: cogdl.models.base_model.BaseModel

DIFFPOOL from paper Hierarchical Graph Representation Learning with Differentiable Pooling.

Parameters
  • in_feats (int) – Size of each input sample.

  • hidden_dim (int) – Size of hidden layer dimension of GNN.

  • embed_dim (int) – Size of embeded node feature, output size of GNN.

  • num_classes (int) – Number of target classes.

  • num_layers (int) – Number of GNN layers.

  • num_pool_layers (int) – Number of pooling layers.

  • assign_dim (int) – Embedding size after the first pooling.

  • pooling_ratio (float) – Pooling ratio of each pooling layer.

  • batch_size (int) – Size of each mini-batch.

  • dropout (float, optional) – Dropout rate, default: 0.5.

  • no_link_pred (bool, optional) – If True, disable the link prediction auxiliary loss, default: True.

static add_args(parser)[source]

Add model-specific arguments to the parser.

after_pooling_forward(gnn_layers, adj, x, concat=False)[source]
classmethod build_model_from_args(args)[source]
forward(batch)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

graph_classificatoin_loss(batch)[source]
reset_parameters()[source]
classmethod split_dataset(dataset, args)[source]
training: bool
class cogdl.models.nn.gcnii.GCNII(in_feats, hidden_size, out_feats, num_layers, dropout=0.5, alpha=0.1, lmbda=1, wd1=0.0, wd2=0.0, residual=False, actnn=False)[source]

Bases: cogdl.models.base_model.BaseModel

Implementation of GCNII in paper “Simple and Deep Graph Convolutional Networks”.

Parameters
  • in_feats (int) – Size of each input sample

  • hidden_size (int) – Size of each hidden unit

  • out_feats (int) – Size of each output sample.

  • num_layers (int) – Number of layers.

  • dropout (float) – Dropout rate.

  • alpha (float) – Parameter of initial residual connection

  • lmbda (float) – Parameter of identity mapping

  • wd1 (float) – Weight-decay for Fully-connected layers

  • wd2 (float) – Weight-decay for convolutional layers

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_optimizer(args)[source]
predict(graph)[source]
training: bool
class cogdl.models.nn.sign.MLP(in_feats, out_feats, hidden_size, num_layers, dropout=0.0, activation='relu', norm=None, act_first=False, bias=True)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(data)[source]
training: bool
class cogdl.models.nn.mixhop.MixHop(num_features, num_classes, dropout, layer1_pows, layer2_pows)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(data)[source]
training: bool
class cogdl.models.nn.gat.GAT(in_feats, hidden_size, out_features, num_layers, dropout, attn_drop, alpha, nhead, residual, last_nhead, norm=None)[source]

Bases: cogdl.models.base_model.BaseModel

The GAT model from the “Graph Attention Networks” paper

Parameters
  • in_feats (int) – Number of input features.

  • out_features (int) – Number of classes.

  • hidden_size (int) – The dimension of node representation.

  • dropout (float) – Dropout rate for model training.

  • alpha (float) – Negative slope of the leaky_relu activation.

  • nhead (int) – Number of attention heads.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(graph)[source]
training: bool
class cogdl.models.nn.han.HAN(num_edge, w_in, w_out, num_class, num_nodes, num_layers)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.models.nn.ppnp.PPNP(nfeat, nhid, nclass, num_layers, dropout, propagation, alpha, niter, cache=True)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(graph)[source]
training: bool
class cogdl.models.nn.grace.GRACE(in_feats: int, hidden_size: int, proj_hidden_size: int, num_layers: int, drop_feature_rates: List[float], drop_edge_rates: List[float], tau: float = 0.5, activation: str = 'relu', batch_size: int = -1)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

augment(graph)[source]
batched_loss(z1: torch.Tensor, z2: torch.Tensor, batch_size: int)[source]
classmethod build_model_from_args(args)[source]
contrastive_loss(z1: torch.Tensor, z2: torch.Tensor)[source]
drop_adj(graph: cogdl.data.data.Graph, drop_rate: float = 0.5)[source]
drop_feature(x: torch.Tensor, droprate: float)[source]
embed(data)[source]
forward(graph: cogdl.data.data.Graph, x: Optional[torch.Tensor] = None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

prop(graph: cogdl.data.data.Graph, x: torch.Tensor, drop_feature_rate: float = 0.0, drop_edge_rate: float = 0.0)[source]
training: bool
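
contrastive_loss follows the InfoNCE pattern: for each node, the embeddings of the two augmented views form the positive pair, and every other node in either view acts as a negative. A simplified sketch of that loss under cosine similarity with temperature tau:

import torch
import torch.nn.functional as F

def grace_loss(z1, z2, tau=0.5):
    z1, z2 = F.normalize(z1), F.normalize(z2)
    between = torch.exp(z1 @ z2.t() / tau)  # inter-view similarities [N, N]
    within = torch.exp(z1 @ z1.t() / tau)   # intra-view similarities [N, N]
    pos = between.diag()
    # Denominator: all inter-view pairs plus intra-view negatives (self excluded).
    denom = between.sum(dim=1) + within.sum(dim=1) - within.diag()
    return -torch.log(pos / denom).mean()

# In practice the loss is symmetrized: (grace_loss(z1, z2) + grace_loss(z2, z1)) / 2;
# batched_loss evaluates the same quantity in chunks when the graph is large.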
class cogdl.models.nn.pprgo.PPRGo(in_feats, hidden_size, out_feats, num_layers, alpha, dropout, activation='relu', nprop=2, norm='sym')[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(x, targets, ppr_scores)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(graph, batch_size=10000)[source]
training: bool
class cogdl.models.nn.gin.GIN(num_layers, in_feats, out_feats, hidden_dim, num_mlp_layers, eps=0, pooling='sum', train_eps=False, dropout=0.5)[source]

Bases: cogdl.models.base_model.BaseModel

Graph Isomorphism Network from paper “How Powerful are Graph Neural Networks?”.

Parameters
  • num_layers (int) – Number of GIN layers.

  • in_feats (int) – Size of each input sample.

  • out_feats (int) – Size of each output sample.

  • hidden_dim (int) – Size of each hidden layer dimension.

  • num_mlp_layers (int) – Number of MLP layers.

  • eps (float, optional) – Initial epsilon value, default: 0 (see the layer sketch after this entry).

  • pooling (str, optional) – Aggregator type to use, default: sum.

  • train_eps (bool, optional) – If True, epsilon is a learnable parameter, default: False.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(batch)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

classmethod split_dataset(dataset, args)[source]
training: bool
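
Each GIN layer updates a node as MLP((1 + eps) * h_v + sum of neighbor features), with eps optionally learnable via train_eps. A dense-adjacency sketch of one such layer (not cogdl's sparse implementation):

import torch
import torch.nn as nn

class DenseGINLayer(nn.Module):
    def __init__(self, in_feats, out_feats, eps=0.0, train_eps=False):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_feats, out_feats), nn.ReLU(),
                                 nn.Linear(out_feats, out_feats))
        # eps weights the node's own features against the neighbor sum.
        self.eps = nn.Parameter(torch.tensor(eps)) if train_eps else eps

    def forward(self, adj, h):
        # adj: [N, N] dense adjacency; adj @ h sums each node's neighbor features.
        return self.mlp((1 + self.eps) * h + adj @ h)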
class cogdl.models.nn.grand.Grand(nfeat, nhid, nclass, input_droprate, hidden_droprate, use_bn, dropnode_rate, order, alpha)[source]

Bases: cogdl.models.base_model.BaseModel

Implementation of GRAND in paper “Graph Random Neural Networks for Semi-Supervised Learning on Graphs” <https://arxiv.org/abs/2005.11079>

Parameters
  • nfeat (int) – Size of each input features.

  • nhid (int) – Size of hidden features.

  • nclass (int) – Number of output classes.

  • input_droprate (float) – Dropout rate of input features.

  • hidden_droprate (float) – Dropout rate of hidden features.

  • use_bn (bool) – Using batch normalization.

  • dropnode_rate (float) – Rate of dropping elements of input features (see the sketch after this entry).

  • tem (float) – Temperature to sharpen predictions.

  • lam (float) – Proportion of the consistency loss on unlabelled data.

  • order (int) – Order of the adjacency matrix.

  • sample (int) – Number of augmentations for the consistency loss.

  • alpha (float) –

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
drop_node(x)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

normalize_x(x)[source]
predict(data)[source]
rand_prop(graph, x)[source]
training: bool
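
drop_node zeroes entire feature rows with probability dropnode_rate, and rand_prop then averages powers of the normalized adjacency up to order before the MLP. A sketch of the drop step in its inverted-dropout form, which rescales survivors so the expectation is unchanged (GRAND equivalently scales by 1 - dropnode_rate at inference):

import torch

def drop_node(x, dropnode_rate=0.5):
    # Keep each node's feature row with probability 1 - dropnode_rate.
    keep = torch.bernoulli(torch.full((x.size(0), 1), 1.0 - dropnode_rate))
    return x * keep / (1.0 - dropnode_rate)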
class cogdl.models.nn.gtn.GTN(num_edge, num_channels, w_in, w_out, num_class, num_nodes, num_layers)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

norm(edge_index, num_nodes, edge_weight, improved=False, dtype=None)[source]
normalization(H)[source]
training: bool
class cogdl.models.nn.rgcn.LinkPredictRGCN(num_entities, num_rels, hidden_size, num_layers, regularizer='basis', num_bases=None, self_loop=True, sampling_rate=0.01, penalty=0, dropout=0.0, self_dropout=0.0)[source]

Bases: cogdl.utils.link_prediction_utils.GNNLinkPredict, cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

loss(graph, scoring)[source]
predict(graph)[source]
training: bool
class cogdl.models.nn.deepergcn.DeeperGCN(in_feat, hidden_size, out_feat, num_layers, activation='relu', dropout=0.0, aggr='max', beta=1.0, p=1.0, learn_beta=False, learn_p=False, learn_msg_scale=True, use_msg_norm=False, edge_attr_size=None)[source]

Bases: cogdl.models.base_model.BaseModel

Implementation of DeeperGCN in paper “DeeperGCN: All You Need to Train Deeper GCNs”

Parameters
  • in_feat (int) – the dimension of input features

  • hidden_size (int) – the dimension of hidden representation

  • out_feat (int) – the dimension of output features

  • num_layers (int) – the number of layers

  • activation (str, optional) – activation function. Defaults to “relu”.

  • dropout (float, optional) – dropout rate. Defaults to 0.0.

  • aggr (str, optional) – aggregation function. Defaults to “max”.

  • beta (float, optional) – Coefficient of the softmax aggregation function (see the sketch after this entry). Defaults to 1.0.

  • p (float, optional) – Power of the power-mean aggregation function. Defaults to 1.0.

  • learn_beta (bool, optional) – whether beta is learnable. Defaults to False.

  • learn_p (bool, optional) – whether p is learnable. Defaults to False.

  • learn_msg_scale (bool, optional) – whether message scale is learnable. Defaults to True.

  • use_msg_norm (bool, optional) – use message norm or not. Defaults to False.

  • edge_attr_size (int, optional) – the dimension of edge features. Defaults to None.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(graph)[source]
training: bool
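
The beta-controlled aggregator referenced above is a softmax over each node's incoming messages, approaching max aggregation as beta grows and mean aggregation as beta approaches zero; p analogously parameterizes a power-mean aggregator. A sketch of softmax aggregation over one node's stacked messages:

import torch

def softmax_aggregate(messages, beta=1.0):
    # messages: [num_neighbors, feat]; weights are a per-feature softmax over neighbors.
    weights = torch.softmax(beta * messages, dim=0)
    return (weights * messages).sum(dim=0)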
class cogdl.models.nn.drgat.DrGAT(num_features, num_classes, hidden_size, num_heads, dropout)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.models.nn.infograph.InfoGraph(in_feats, hidden_dim, out_feats, num_layers=3, sup=False)[source]

Bases: cogdl.models.base_model.BaseModel

Implementation of InfoGraph in paper “InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization” <https://openreview.net/forum?id=r1lfF2NYvH>.

Parameters
  • in_feats (int) – Size of each input sample.

  • out_feats (int) – Size of each output sample.

  • num_layers (int, optional) – Number of MLP layers in encoder, default: 3.

  • sup (bool, optional) – Use the supervised variant if True, default: False.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(batch)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reset_parameters()[source]
classmethod split_dataset(dataset, args)[source]
sup_forward(batch, x)[source]
training: bool
unsup_forward(batch, x)[source]
class cogdl.models.nn.dropedge_gcn.DropEdge_GCN(nfeat, nhid, nclass, nhidlayer, dropout, baseblock, inputlayer, outputlayer, nbaselayer, activation, withbn, withloop, aggrmethod)[source]

Bases: cogdl.models.base_model.BaseModel

Applying DropEdge to GCN, from the “DropEdge: Towards Deep Graph Convolutional Networks on Node Classification” paper <https://arxiv.org/pdf/1907.10903.pdf>.

The model for a single kind of DeepGCN block. The architecture is: inputlayer(nfeat)–block(nbaselayer, nhid)–…–outputlayer(nclass)–softmax(nclass).

The total number of layers is nhidlayer*nbaselayer + 2. All options are configurable.

Parameters
  • nfeat (int) – The input feature dimension.

  • nhid (int) – The hidden feature dimension.

  • nclass (int) – The output feature dimension.

  • nhidlayer (int) – The number of hidden blocks.

  • dropout (float) – The dropout ratio.

  • baseblock (str) – The base block type, one of “mutigcn”, “resgcn”, “densegcn” and “inceptiongcn”.

  • inputlayer (str) – The input layer type, one of “gcn”, “dense”, “none”.

  • outputlayer (str) – The output layer type, one of “gcn”, “dense”.

  • nbaselayer (int) – The number of layers in one hidden block.

  • activation – The activation function, default is ReLU.

  • withbn (bool) – Use batch normalization in graph convolution.

  • withloop (bool) – Use self feature modeling in graph convolution.

  • aggrmethod (str) – The aggregation function for the base block, “concat” or “add”. For “resgcn” the default is “add”; for others it is “concat”.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(data)[source]
reset_parameters()[source]
training: bool
class cogdl.models.nn.disengcn.DisenGCN(in_feats, hidden_size, num_classes, K, iterations, tau, dropout, activation)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(data)[source]
reset_parameters()[source]
training: bool
class cogdl.models.nn.mlp.MLP(in_feats, out_feats, hidden_size, num_layers, dropout=0.0, activation='relu', norm=None, act_first=False, bias=True)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(data)[source]
training: bool
class cogdl.models.nn.sgc.sgc(in_feats, out_feats)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(data)[source]
training: bool
class cogdl.models.nn.sortpool.SortPool(in_feats, hidden_dim, num_classes, num_layers, out_channel, kernel_size, k=30, dropout=0.5)[source]

Bases: cogdl.models.base_model.BaseModel

Implementation of SortPool in paper “An End-to-End Deep Learning Architecture for Graph Classification” <https://www.cse.wustl.edu/~muhan/papers/AAAI_2018_DGCNN.pdf>.

Parameters
  • in_feats (int) – Size of each input sample.

  • out_feats (int) – Size of each output sample.

  • hidden_dim (int) – Dimension of hidden layer embedding.

  • num_classes (int) – Number of target classes.

  • num_layers (int) – Number of graph neural network layers before pooling.

  • k (int, optional) – The number of nodes kept after sorting, default: 30.

  • out_channel (int) – Number of output channels of the first convolution.

  • kernel_size (int) – Kernel size of the first convolution.

  • dropout (float, optional) – Dropout rate, default: 0.5.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(batch)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

classmethod split_dataset(dataset, args)[source]
training: bool
class cogdl.models.nn.srgcn.SRGCN(in_feats, hidden_size, out_feats, attention, activation, nhop, normalization, dropout, node_dropout, alpha, nhead, subheads)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict(data)[source]
training: bool
class cogdl.models.nn.unsup_graphsage.SAGE(num_features, hidden_size, num_layers, sample_size, dropout)[source]

Bases: cogdl.models.base_model.BaseModel

Implementation of unsupervised GraphSAGE in paper “Inductive Representation Learning on Large Graphs” <https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf>

Parameters
  • num_features (int) – Size of each input sample

  • hidden_size (int) – The dimension of hidden representation.

  • num_layers (int) – The number of GNN layers.

  • sample_size (list) – The number of sampled neighbors at each order.

  • dropout (float) – Dropout rate.

  • walk_length (int) – The length of the random walk.

  • negative_samples (int) –

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
embed(data)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

sampling(edge_index, num_sample)[source]
training: bool
class cogdl.models.nn.daegc.DAEGC(num_features, hidden_size, embedding_size, num_heads, dropout, num_clusters)[source]

Bases: cogdl.models.base_model.BaseModel

The DAEGC model from the “Attributed Graph Clustering: A Deep Attentional Embedding Approach” paper

Parameters
  • num_clusters (int) – Number of clusters.

  • T (int) – Number of iterations to recalculate P and Q.

  • gamma (float) – Hyperparameter that controls two parts of the loss.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
forward(graph)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_2hop(edge_index)[source]

Add 2-hop neighbors as new edges.

get_cluster_center()[source]
get_features(data)[source]
recon_loss(z, adj)[source]
set_cluster_center(center)[source]
training: bool
class cogdl.models.nn.agc.AGC(num_clusters, max_iter, cpu)[source]

Bases: cogdl.models.base_model.BaseModel

The AGC model from the “Attributed Graph Clustering via Adaptive Graph Convolution” paper

Parameters
  • num_clusters (int) – Number of clusters.

  • max_iter (int) – Maximum number of iterations for increasing k.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]
compute_intra(x, clusters)[source]
forward(data)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

Model Module

cogdl.models.build_model(args)[source]
cogdl.models.register_model(name)[source]

New model types can be added to cogdl with the register_model() function decorator. For example:

@register_model('gat')
class GAT(BaseModel):
    (...)
Parameters

name (str) – The name of the model.

cogdl.models.try_adding_model_args(model, parser)[source]
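
A sketch of the intended flow, assuming args.model names a registered model; dataset-derived arguments (e.g. num_features) that a model's build_model_from_args reads must also be present on the namespace:

import argparse
from cogdl.models import build_model, try_adding_model_args

parser = argparse.ArgumentParser()
parser.add_argument("--model", default="gcn")
args, _ = parser.parse_known_args()

# Let the chosen model register its own hyperparameters, then re-parse.
try_adding_model_args(args.model, parser)
args = parser.parse_args()

model = build_model(args)  # dispatches on args.model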