layers

GCC module

class cogdl.layers.gcc_module.ApplyNodeFunc(mlp, use_selayer)[source]

Bases: torch.nn.modules.module.Module

Update the node feature hv with MLP, BN and ReLU.

forward(h)[source]
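The MLP → BatchNorm → ReLU update described above can be sketched as follows; this is a minimal stand-in (a plain `nn.Sequential` replaces the cogdl `MLP`, and the layer sizes are illustrative assumptions), not the exact implementation.

```python
import torch
import torch.nn as nn

# Hedged sketch of ApplyNodeFunc's node update: MLP -> BatchNorm -> ReLU.
class ApplyNodeFuncSketch(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # stand-in for the cogdl MLP (illustrative sizes)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim)
        )
        self.bn = nn.BatchNorm1d(out_dim)

    def forward(self, h):
        h = self.mlp(h)   # transform node features
        h = self.bn(h)    # normalize over the batch of nodes
        return torch.relu(h)

h = torch.randn(4, 8)
out = ApplyNodeFuncSketch(8, 16)(h)
```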
class cogdl.layers.gcc_module.GATLayer(g, in_dim, out_dim)[source]

Bases: torch.nn.modules.module.Module

edge_attention(edges)[source]
forward(h)[source]
message_func(edges)[source]
reduce_func(nodes)[source]
class cogdl.layers.gcc_module.GraphEncoder(positional_embedding_size=32, max_node_freq=8, max_edge_freq=8, max_degree=128, freq_embedding_size=32, degree_embedding_size=32, output_dim=32, node_hidden_dim=32, edge_hidden_dim=32, num_layers=6, num_heads=4, num_step_set2set=6, num_layer_set2set=3, norm=False, gnn_model='mpnn', degree_input=False, lstm_as_gate=False)[source]

Bases: torch.nn.modules.module.Module

MPNN from Neural Message Passing for Quantum Chemistry

node_input_dim : int
Dimension of input node features. Default: 15.
edge_input_dim : int
Dimension of input edge features. Default: 15.
output_dim : int
Dimension of the prediction. Default: 12.
node_hidden_dim : int
Dimension of node features in hidden layers. Default: 64.
edge_hidden_dim : int
Dimension of edge features in hidden layers. Default: 128.
num_step_message_passing : int
Number of message passing steps. Default: 6.
num_step_set2set : int
Number of set2set steps.
num_layer_set2set : int
Number of set2set layers.
forward(g, return_all_outputs=False)[source]

Predict molecule labels

g : DGLGraph
Input DGLGraph for molecule(s)
n_feat : tensor of dtype float32 and shape (B1, D1)
Node features. B1 for number of nodes and D1 for the node feature size.
e_feat : tensor of dtype float32 and shape (B2, D2)
Edge features. B2 for number of edges and D2 for the edge feature size.

res : Predicted labels

class cogdl.layers.gcc_module.MLP(num_layers, input_dim, hidden_dim, output_dim, use_selayer)[source]

Bases: torch.nn.modules.module.Module

MLP with linear output

forward(x)[source]
class cogdl.layers.gcc_module.SELayer(in_channels, se_channels)[source]

Bases: torch.nn.modules.module.Module

Squeeze-and-excitation networks

forward(x)[source]
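A squeeze-and-excitation gate over node features can be sketched as below. This assumes `SELayer` squeezes with a global mean over nodes and excites through an `in_channels → se_channels → in_channels` bottleneck with a sigmoid gate; treat the exact pooling and activation choices as assumptions.

```python
import torch
import torch.nn as nn

# Minimal squeeze-and-excitation sketch for node feature matrices.
class SELayerSketch(nn.Module):
    def __init__(self, in_channels, se_channels):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_channels, se_channels),
            nn.ReLU(),
            nn.Linear(se_channels, in_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        s = x.mean(dim=0)           # squeeze: one descriptor per channel
        return x * self.encoder(s)  # excite: per-channel gates in (0, 1)

x = torch.randn(10, 16)
out = SELayerSketch(16, 4)(x)
```

Because each gate lies in (0, 1), the layer can only damp channels, never amplify them.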
class cogdl.layers.gcc_module.UnsupervisedGAT(node_input_dim, node_hidden_dim, edge_input_dim, num_layers, num_heads)[source]

Bases: torch.nn.modules.module.Module

forward(g, n_feat, e_feat)[source]
class cogdl.layers.gcc_module.UnsupervisedGIN(num_layers, num_mlp_layers, input_dim, hidden_dim, output_dim, final_dropout, learn_eps, graph_pooling_type, neighbor_pooling_type, use_selayer)[source]

Bases: torch.nn.modules.module.Module

GIN model

forward(g, h, efeat)[source]
class cogdl.layers.gcc_module.UnsupervisedMPNN(output_dim=32, node_input_dim=32, node_hidden_dim=32, edge_input_dim=32, edge_hidden_dim=32, num_step_message_passing=6, lstm_as_gate=False)[source]

Bases: torch.nn.modules.module.Module

MPNN from Neural Message Passing for Quantum Chemistry

node_input_dim : int
Dimension of input node features. Default: 15.
edge_input_dim : int
Dimension of input edge features. Default: 15.
output_dim : int
Dimension of the prediction. Default: 12.
node_hidden_dim : int
Dimension of node features in hidden layers. Default: 64.
edge_hidden_dim : int
Dimension of edge features in hidden layers. Default: 128.
num_step_message_passing : int
Number of message passing steps. Default: 6.
num_step_set2set : int
Number of set2set steps.
num_layer_set2set : int
Number of set2set layers.
forward(g, n_feat, e_feat)[source]

Predict molecule labels

g : DGLGraph
Input DGLGraph for molecule(s)
n_feat : tensor of dtype float32 and shape (B1, D1)
Node features. B1 for number of nodes and D1 for the node feature size.
e_feat : tensor of dtype float32 and shape (B2, D2)
Edge features. B2 for number of edges and D2 for the edge feature size.

res : Predicted labels

GPT-GNN module

class cogdl.layers.gpt_gnn_module.Classifier(n_hid, n_out)[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
class cogdl.layers.gpt_gnn_module.GNN(in_dim, n_hid, num_types, num_relations, n_heads, n_layers, dropout=0.2, conv_name='hgt', prev_norm=False, last_norm=False, use_RTE=True)[source]

Bases: torch.nn.modules.module.Module

forward(node_feature, node_type, edge_time, edge_index, edge_type)[source]
class cogdl.layers.gpt_gnn_module.GPT_GNN(gnn, rem_edge_list, attr_decoder, types, neg_samp_num, device, neg_queue_size=0)[source]

Bases: torch.nn.modules.module.Module

feat_loss(reps, out)[source]
forward(node_feature, node_type, edge_time, edge_index, edge_type)[source]
neg_sample(souce_node_list, pos_node_list)[source]
text_loss(reps, texts, w2v_model, device)[source]
class cogdl.layers.gpt_gnn_module.GeneralConv(conv_name, in_hid, out_hid, num_types, num_relations, n_heads, dropout, use_norm=True, use_RTE=True)[source]

Bases: torch.nn.modules.module.Module

forward(meta_xs, node_type, edge_index, edge_type, edge_time)[source]
class cogdl.layers.gpt_gnn_module.Graph[source]

Bases: object

add_edge(source_node, target_node, time=None, relation_type=None, directed=True)[source]
add_node(node)[source]
get_meta_graph()[source]
get_types()[source]
node_feature = None

edge_list: index the adjacency matrix (time) by <target_type, source_type, relation_type, target_id, source_id>

update_node(node)[source]
class cogdl.layers.gpt_gnn_module.HGTConv(in_dim, out_dim, num_types, num_relations, n_heads, dropout=0.2, use_norm=True, use_RTE=True, **kwargs)[source]

Bases: torch_geometric.nn.conv.message_passing.MessagePassing

forward(node_inp, node_type, edge_index, edge_type, edge_time)[source]
message(edge_index_i, node_inp_i, node_inp_j, node_type_i, node_type_j, edge_type, edge_time)[source]

j: source node, i: target node; messages flow along the edge <j, i>

update(aggr_out, node_inp, node_type)[source]

Step 3: Target-specific Aggregation x = W[node_type] * gelu(Agg(x)) + x

class cogdl.layers.gpt_gnn_module.Matcher(n_hid, n_out, temperature=0.1)[source]

Bases: torch.nn.modules.module.Module

Matches a pair of nodes for link prediction, using multi-head attention as the matching model.

forward(x, ty, use_norm=True)[source]
class cogdl.layers.gpt_gnn_module.RNNModel(n_word, ninp, nhid, nlayers, dropout=0.2)[source]

Bases: torch.nn.modules.module.Module

Container module with an encoder, a recurrent module, and a decoder.

forward(inp, hidden=None)[source]
from_w2v(w2v)[source]
class cogdl.layers.gpt_gnn_module.RelTemporalEncoding(n_hid, max_len=240, dropout=0.2)[source]

Bases: torch.nn.modules.module.Module

Implements the sinusoidal temporal encoding function.

forward(x, t)[source]
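Sinusoidal temporal encoding can be sketched as below: build a Transformer-style sin/cos table over time gaps and add a linear projection of the encoding for gap `t` to the representation `x`. The frozen table and the extra linear layer follow the usual GPT-GNN recipe, but treat the exact projection as an assumption.

```python
import math
import torch
import torch.nn as nn

# Hedged sketch of relative temporal encoding (sinusoid).
class RelTemporalEncodingSketch(nn.Module):
    def __init__(self, n_hid, max_len=240):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, n_hid, 2).float() * (-math.log(10000.0) / n_hid))
        emb = torch.zeros(max_len, n_hid)
        emb[:, 0::2] = torch.sin(position * div)   # even dims: sine
        emb[:, 1::2] = torch.cos(position * div)   # odd dims: cosine
        self.emb = nn.Embedding.from_pretrained(emb, freeze=True)
        self.lin = nn.Linear(n_hid, n_hid)

    def forward(self, x, t):
        # t holds integer time gaps in [0, max_len)
        return x + self.lin(self.emb(t))

x = torch.zeros(3, 8)
t = torch.tensor([0, 5, 100])
out = RelTemporalEncodingSketch(8)(x, t)
```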
cogdl.layers.gpt_gnn_module.args_print(args)[source]
cogdl.layers.gpt_gnn_module.dcg_at_k(r, k)[source]
cogdl.layers.gpt_gnn_module.defaultDictDict()[source]
cogdl.layers.gpt_gnn_module.defaultDictDictDictDictDictInt()[source]
cogdl.layers.gpt_gnn_module.defaultDictDictDictDictInt()[source]
cogdl.layers.gpt_gnn_module.defaultDictDictDictInt()[source]
cogdl.layers.gpt_gnn_module.defaultDictDictInt()[source]
cogdl.layers.gpt_gnn_module.defaultDictInt()[source]
cogdl.layers.gpt_gnn_module.defaultDictList()[source]
cogdl.layers.gpt_gnn_module.feature_OAG(layer_data, graph)[source]
cogdl.layers.gpt_gnn_module.feature_reddit(layer_data, graph)[source]
cogdl.layers.gpt_gnn_module.load_gnn(_dict)[source]
cogdl.layers.gpt_gnn_module.mean_reciprocal_rank(rs)[source]
cogdl.layers.gpt_gnn_module.ndcg_at_k(r, k)[source]
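The ranking metrics above follow their conventional definitions for binary relevance lists (1 = relevant hit); the exact discount variant used inside cogdl is an assumption, but a standard sketch looks like this:

```python
import numpy as np

# Conventional DCG / NDCG / MRR over binary relevance vectors.
def dcg_at_k(r, k):
    r = np.asarray(r, dtype=float)[:k]
    # discount hit at rank i (0-based) by log2(i + 2)
    return float(np.sum(r / np.log2(np.arange(2, r.size + 2))))

def ndcg_at_k(r, k):
    ideal = dcg_at_k(sorted(r, reverse=True), k)
    return dcg_at_k(r, k) / ideal if ideal > 0 else 0.0

def mean_reciprocal_rank(rs):
    ranks = []
    for r in rs:
        hits = np.nonzero(np.asarray(r))[0]
        ranks.append(1.0 / (hits[0] + 1) if hits.size else 0.0)
    return float(np.mean(ranks))

score = ndcg_at_k([1, 0, 1], 3)                      # DCG 1.5 vs ideal [1, 1, 0]
mrr = mean_reciprocal_rank([[0, 1, 0], [1, 0, 0]])   # (1/2 + 1/1) / 2 = 0.75
```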
cogdl.layers.gpt_gnn_module.normalize(mx)[source]

Row-normalize a sparse matrix.
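Row normalization in GNN preprocessing typically divides each row by its row sum, leaving all-zero rows untouched; a self-contained sketch:

```python
import numpy as np
import scipy.sparse as sp

# Divide each row of a sparse matrix by its row sum (zero rows stay zero).
def row_normalize(mx):
    rowsum = np.asarray(mx.sum(axis=1)).flatten()
    inv = np.divide(1.0, rowsum, out=np.zeros_like(rowsum, dtype=float),
                    where=rowsum != 0)
    return sp.diags(inv) @ mx

mx = sp.csr_matrix(np.array([[1.0, 3.0], [0.0, 0.0]]))
out = row_normalize(mx).toarray()
# each non-zero row now sums to 1
```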

cogdl.layers.gpt_gnn_module.preprocess_dataset(dataset) → cogdl.layers.gpt_gnn_module.Graph[source]
cogdl.layers.gpt_gnn_module.randint()[source]
cogdl.layers.gpt_gnn_module.sample_subgraph(graph, time_range, sampled_depth=2, sampled_number=8, inp=None, feature_extractor=<function feature_OAG>)[source]

Sample a sub-graph based on the connections between other nodes and the currently sampled nodes. We maintain budgets for each node type, indexed by <node_id, time>. Currently sampled nodes are stored in layer_data. After nodes are sampled, we construct the sampled adjacency matrix.

cogdl.layers.gpt_gnn_module.sparse_mx_to_torch_sparse_tensor(sparse_mx)[source]

Convert a scipy sparse matrix to a torch sparse tensor.
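This conversion is the standard scipy-COO → torch-sparse recipe; a sketch (under a hypothetical name, to avoid implying it is the exact cogdl code):

```python
import numpy as np
import scipy.sparse as sp
import torch

# Convert any scipy sparse matrix to a torch sparse COO tensor.
def to_torch_sparse(mx):
    mx = mx.tocoo().astype(np.float32)
    indices = torch.from_numpy(np.vstack((mx.row, mx.col)).astype(np.int64))
    values = torch.from_numpy(mx.data)
    return torch.sparse_coo_tensor(indices, values, mx.shape)

dense = np.array([[0.0, 2.0], [3.0, 0.0]])
t = to_torch_sparse(sp.csr_matrix(dense))
# t.to_dense() equals the original dense matrix
```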

cogdl.layers.gpt_gnn_module.to_torch(feature, time, edge_list, graph)[source]

Transform a sampled sub-graph into PyTorch tensors.
node_dict: {node_type: <node_number, node_type_ID>}; node_number is used to trace nodes back to the original graph.
edge_dict: {edge_type: edge_type_ID}

Mean Aggregator module

class cogdl.layers.maggregator.MeanAggregator(in_channels, out_channels, bias=True)[source]

Bases: torch.nn.modules.module.Module

forward(x, adj_sp)[source]
static norm(x, adj_sp)[source]
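Mean aggregation amounts to averaging each node's neighbor features, i.e. x' = D⁻¹ A x for adjacency A. A plausible dense sketch (the real layer additionally applies a learned linear transform):

```python
import numpy as np
import scipy.sparse as sp

# x' = D^{-1} A x: average neighbor features per node (zero-degree rows stay zero).
def mean_aggregate(x, adj_sp):
    deg = np.asarray(adj_sp.sum(axis=1)).flatten()
    inv = np.divide(1.0, deg, out=np.zeros_like(deg, dtype=float), where=deg != 0)
    return sp.diags(inv) @ adj_sp @ x

# star graph: node 0 connected to nodes 1 and 2
adj = sp.csr_matrix(np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float))
x = np.array([[1.0], [2.0], [4.0]])
out = mean_aggregate(x, adj)
# node 0 receives the mean of its neighbors: (2 + 4) / 2 = 3
```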
class cogdl.layers.maggregator.SumAggregator(in_channels, out_channels, bias=True)[source]

Bases: torch.nn.modules.module.Module

static aggr(x, adj)[source]
forward(x, adj)[source]

MixHop module

class cogdl.layers.mixhop_layer.MixHopLayer(num_features, adj_pows, dim_per_pow)[source]

Bases: torch.nn.modules.module.Module

adj_pow_x(x, adj, p)[source]
forward(x, edge_index)[source]
reset_parameters()[source]
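MixHop mixes different powers of the adjacency: `adj_pow_x` propagates features through the adjacency p times, and the layer concatenates a linear map of each chosen power. A minimal sketch of the propagation step (the real layer works on a normalized adjacency built from `edge_index`; the dense matrix here is an illustrative assumption):

```python
import numpy as np

# Propagate x through the adjacency p times: x' = A^p x.
def adj_pow_x(x, adj, p):
    for _ in range(p):
        x = adj @ x
    return x

adj = np.array([[0.0, 1.0], [1.0, 0.0]])  # 2-cycle: A^2 = I
x = np.array([[1.0], [2.0]])
out = adj_pow_x(x, adj, 2)
# two hops on a 2-cycle return the original features
```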

PPRGo module

class cogdl.layers.pprgo_modules.PPRGoDataset(features: torch.Tensor, ppr_matrix: scipy.sparse.csr.csr_matrix, node_indices: torch.Tensor, labels_all: torch.Tensor = None)[source]

Bases: torch.utils.data.dataset.Dataset

cogdl.layers.pprgo_modules.build_topk_ppr_matrix_from_data(edge_index, *args, **kwargs)[source]
cogdl.layers.pprgo_modules.calc_ppr_topk_parallel[source]
cogdl.layers.pprgo_modules.construct_sparse(neighbors, weights, shape)[source]
cogdl.layers.pprgo_modules.ppr_topk(adj_matrix, alpha, epsilon, nodes, topk)[source]

Calculate the PPR matrix approximately using the Andersen push method.

cogdl.layers.pprgo_modules.topk_ppr_matrix(adj_matrix, alpha, eps, idx, topk, normalization='row')[source]

Create a sparse matrix where each node has up to the topk PPR neighbors and their weights.
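Personalized PageRank solves π = α·e + (1-α)·Pᵀπ for a restart vector e; `topk_ppr_matrix` computes such vectors approximately (push-based) and keeps only the topk entries per node as a sparse matrix. A dense power-iteration sketch of the underlying fixed point:

```python
import numpy as np

# Power iteration for personalized PageRank from a single source node.
def ppr_vector(adj, alpha, source, iters=100):
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    # row-stochastic transition matrix (guard zero-degree rows)
    trans = adj / np.where(deg[:, None] > 0, deg[:, None], 1.0)
    e = np.zeros(n)
    e[source] = 1.0
    pi = e.copy()
    for _ in range(iters):
        pi = alpha * e + (1 - alpha) * trans.T @ pi
    return pi

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
pi = ppr_vector(adj, alpha=0.15, source=0)
# pi sums to 1, with the most mass on the source node
```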

ProNE module

class cogdl.layers.prone_module.Gaussian(mu=0.5, theta=1, rescale=False, k=3)[source]

Bases: object

prop(mx, emb)[source]
class cogdl.layers.prone_module.HeatKernel(t=0.5, theta0=0.6, theta1=0.4)[source]

Bases: object

prop(mx, emb)[source]
prop_adjacency(mx)[source]
class cogdl.layers.prone_module.HeatKernelApproximation(t=0.2, k=5)[source]

Bases: object

chebyshev(mx, emb)[source]
prop(mx, emb)[source]
taylor(mx, emb)[source]
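The Taylor variant truncates exp(-t·(I - A_norm)) @ emb = exp(-t)·exp(t·A_norm) @ emb at k terms. The exact normalization cogdl uses for A_norm is an assumption; a sketch of the truncated series:

```python
import numpy as np

# Truncated Taylor series for heat-kernel propagation.
def heat_taylor(a_norm, emb, t=0.2, k=5):
    out = np.zeros_like(emb)
    term = emb.copy()
    coef = np.exp(-t)
    for i in range(k):
        out = out + coef * term   # add exp(-t) * t^i / i! * A^i emb
        term = a_norm @ term
        coef *= t / (i + 1)
    return out

emb = np.array([[1.0], [2.0]])
out = heat_taylor(np.eye(2), emb)  # with A_norm = I the kernel is the identity
```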
class cogdl.layers.prone_module.NodeAdaptiveEncoder[source]

Bases: object

  • Shrinks negative values in the signal/feature matrix.
  • Has no learnable parameters.
static prop(signal)[source]
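One plausible reading of "shrink negative values with no learning" is to keep positive entries and damp negative ones with a sigmoid gate; the exact rule cogdl applies is an assumption:

```python
import numpy as np

# Hedged sketch: damp negative signal entries toward zero, leave positives alone.
def shrink_negative(signal):
    gate = 1.0 / (1.0 + np.exp(-signal))        # sigmoid, in (0, 1)
    return np.where(signal > 0, signal, signal * gate)

s = np.array([-2.0, 0.5, 3.0])
out = shrink_negative(s)
# negative entries shrink in magnitude; positive entries pass through
```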
class cogdl.layers.prone_module.PPR(alpha=0.5, k=10)[source]

Bases: object

Applies sparsification to accelerate computation.

prop(mx, emb)[source]
class cogdl.layers.prone_module.ProNE[source]

Bases: object

class cogdl.layers.prone_module.SignalRescaling[source]

Bases: object

  • Rescales each node's signal according to its degree, using either:
    • sigmoid(degree)
    • sigmoid(1/degree)
prop(mx, emb)[source]
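The degree-based rescaling above can be sketched as follows, using the sigmoid(1/degree) variant (which variant cogdl actually applies is an assumption):

```python
import numpy as np
import scipy.sparse as sp

# Scale each node's embedding row by sigmoid(1/degree).
def rescale_signal(mx, emb):
    deg = np.asarray(mx.sum(axis=1)).flatten()
    inv_deg = np.divide(1.0, deg, out=np.zeros_like(deg, dtype=float),
                        where=deg != 0)
    scale = 1.0 / (1.0 + np.exp(-inv_deg))   # per-node gate
    return emb * scale[:, None]

mx = sp.csr_matrix(np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float))
emb = np.ones((3, 2))
out = rescale_signal(mx, emb)
# under sigmoid(1/degree), lower-degree nodes keep more of their signal
```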
cogdl.layers.prone_module.get_embedding_dense(matrix, dimension)[source]
cogdl.layers.prone_module.propagate(mx, emb, stype, space=None)[source]

SELayer module

class cogdl.layers.se_layer.SELayer(in_channels, se_channels)[source]

Bases: torch.nn.modules.module.Module

Squeeze-and-excitation networks

forward(x)[source]

SRGCN module

class cogdl.layers.srgcn_module.ColumnUniform[source]

Bases: torch.nn.modules.module.Module

forward(edge_index, edge_attr, N)[source]
class cogdl.layers.srgcn_module.EdgeAttention(in_feat)[source]

Bases: torch.nn.modules.module.Module

forward(x, edge_index, edge_attr)[source]
class cogdl.layers.srgcn_module.HeatKernel(in_feat)[source]

Bases: torch.nn.modules.module.Module

forward(x, edge_index, edge_attr)[source]
class cogdl.layers.srgcn_module.Identity(in_feat)[source]

Bases: torch.nn.modules.module.Module

forward(x, edge_index, edge_attr)[source]
class cogdl.layers.srgcn_module.NodeAttention(in_feat)[source]

Bases: torch.nn.modules.module.Module

forward(x, edge_index, edge_attr)[source]
class cogdl.layers.srgcn_module.NormIdentity[source]

Bases: torch.nn.modules.module.Module

forward(edge_index, edge_attr, N)[source]
class cogdl.layers.srgcn_module.PPR(in_feat)[source]

Bases: torch.nn.modules.module.Module

forward(x, edge_index, edge_attr)[source]
class cogdl.layers.srgcn_module.RowSoftmax[source]

Bases: torch.nn.modules.module.Module

forward(edge_index, edge_attr, N)[source]
class cogdl.layers.srgcn_module.RowUniform[source]

Bases: torch.nn.modules.module.Module

forward(edge_index, edge_attr, N)[source]
class cogdl.layers.srgcn_module.SymmetryNorm[source]

Bases: torch.nn.modules.module.Module

forward(edge_index, edge_attr, N)[source]
cogdl.layers.srgcn_module.act_attention(attn_type)[source]
cogdl.layers.srgcn_module.act_map(act)[source]
cogdl.layers.srgcn_module.act_normalization(norm_type)[source]

Strategies module

class cogdl.layers.strategies_layers.ContextPredictTrainer(args)[source]

Bases: cogdl.layers.strategies_layers.Pretrainer

static add_args(parser)[source]
get_cbow_pred(overlapped_rep, overlapped_context, neighbor_rep)[source]
get_skipgram_pred(overlapped_rep, overlapped_context_size, neighbor_rep)[source]
class cogdl.layers.strategies_layers.Discriminator(hidden_size)[source]

Bases: torch.nn.modules.module.Module

forward(x, summary)[source]
reset_parameters()[source]
class cogdl.layers.strategies_layers.Finetuner(args)[source]

Bases: cogdl.layers.strategies_layers.Pretrainer

static add_args(parser)[source]
build_model(args)[source]
fit()[source]
split_data()[source]
class cogdl.layers.strategies_layers.GINConv(hidden_size, input_layer=None, edge_emb=None, edge_encode=None, pooling='sum', feature_concat=False)[source]

Bases: torch.nn.modules.module.Module

Implementation of the Graph Isomorphism Network (GIN) used in the paper “Strategies for Pre-training Graph Neural Networks” <https://arxiv.org/abs/1905.12265>.

hidden_size : int
Dimension of each hidden unit.
input_layer : int, optional
Size of input node features; used if not None.
edge_emb : list, optional
Numbers of edge types; used if not None.
edge_encode : int, optional
Size of each edge feature; used if not None.
pooling : str
Pooling method.
aggr(x, edge_index, num_nodes)[source]
forward(x, edge_index, edge_attr, self_loop_index=None, self_loop_type=None)[source]
class cogdl.layers.strategies_layers.GNN(num_layers, hidden_size, JK='last', dropout=0.5, input_layer=None, edge_encode=None, edge_emb=None, num_atom_type=None, num_chirality_tag=None, concat=False)[source]

Bases: torch.nn.modules.module.Module

forward(x, edge_index, edge_attr, self_loop_index=None, self_loop_type=None)[source]
class cogdl.layers.strategies_layers.GNNPred(num_layers, hidden_size, num_tasks, JK='last', dropout=0, graph_pooling='mean', input_layer=None, edge_encode=None, edge_emb=None, num_atom_type=None, num_chirality_tag=None, concat=True)[source]

Bases: torch.nn.modules.module.Module

forward(data, self_loop_index, self_loop_type)[source]
load_from_pretrained(path)[source]
pool(x, batch)[source]
class cogdl.layers.strategies_layers.InfoMaxTrainer(args)[source]

Bases: cogdl.layers.strategies_layers.Pretrainer

static add_args(parser)[source]
class cogdl.layers.strategies_layers.MaskTrainer(args)[source]

Bases: cogdl.layers.strategies_layers.Pretrainer

static add_args(parser)[source]
class cogdl.layers.strategies_layers.Pretrainer(args, transform=None)[source]

Bases: torch.nn.modules.module.Module

Base class for the pre-training models from the paper “Strategies for Pre-training Graph Neural Networks” <https://arxiv.org/abs/1905.12265>.

fit()[source]
get_dataset(dataset_name, transform=None)[source]
class cogdl.layers.strategies_layers.SupervisedTrainer(args)[source]

Bases: cogdl.layers.strategies_layers.Pretrainer

static add_args(parser)[source]
split_data()[source]