src.algorithms.deep.models package
Submodules
src.algorithms.deep.models.FCNet module
- class src.algorithms.deep.models.FCNet.FCNet(in_channels: int, out_channels: int)[source]
Bases: Module
Fully connected three-layer neural network
- Parameters:
in_channels (int) – Number of input channels
out_channels (int) – Number of output channels
- forward(x: tensor) → tensor [source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
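As a usage sketch, a three-layer fully connected network in PyTorch could look like the following (the hidden width of 64 is an assumption for illustration; the actual FCNet architecture may differ):

```python
import torch
import torch.nn as nn

class FCNet(nn.Module):
    """Sketch of a three-layer fully connected network.

    The hidden width (64) is an illustrative assumption.
    """
    def __init__(self, in_channels: int, out_channels: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_channels, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Call the module instance (not .forward directly) so registered hooks run.
model = FCNet(in_channels=16, out_channels=4)
out = model(torch.randn(8, 16))
```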
src.algorithms.deep.models.GCNEncoder module
- class src.algorithms.deep.models.GCNEncoder.GCNEncoder(in_channels: int, latent_dim: int, activation: ~torch.nn.modules.module.Module = <class 'torch.nn.modules.activation.ReLU'>)[source]
Bases: Module
Graph Convolutional Network Encoder for Graph Autoencoder
- Parameters:
in_channels (int) – Number of input channels
latent_dim (int) – Latent dimension
activation (nn.Module) – Activation function
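A minimal sketch of such an encoder, assuming a two-layer design with hidden width 2 * latent_dim and a dense, pre-normalized adjacency matrix (the real implementation presumably uses sparse graph convolutions such as torch_geometric's GCNConv; those details are not shown in this documentation):

```python
import torch
import torch.nn as nn

class GCNEncoder(nn.Module):
    """Sketch of a GCN encoder for a graph autoencoder.

    Assumptions: two layers, hidden width 2 * latent_dim, and a dense
    adjacency matrix `adj` already normalized with self-loops added.
    """
    def __init__(self, in_channels: int, latent_dim: int, activation=nn.ReLU):
        super().__init__()
        self.lin1 = nn.Linear(in_channels, 2 * latent_dim)
        self.lin2 = nn.Linear(2 * latent_dim, latent_dim)
        self.act = activation()

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.act(adj @ self.lin1(x))  # first graph convolution
        return adj @ self.lin2(h)         # latent node embeddings

n = 5
adj = torch.eye(n)  # trivial graph: self-loops only, for demonstration
z = GCNEncoder(in_channels=8, latent_dim=3)(torch.randn(n, 8), adj)
```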
src.algorithms.deep.models.MVGRLModel module
Adapted from https://github.com/kavehhassani/mvgrl/blob/master/node/train.py
- class src.algorithms.deep.models.MVGRLModel.Discriminator(in_channels: int)[source]
Bases: Module
Discriminator module
- Parameters:
in_channels (int) – Number of features in the hidden GCN layers
- forward(ha: tensor, hb: tensor, Ha: tensor, Hb: tensor, Ha_corrupted: tensor, Hb_corrupted: tensor) → tensor [source]
Forward pass of the discriminator, computing the mutual information (MI) between the representations of the two views
- Parameters:
ha (torch.tensor) – Graph embedding of the original view
hb (torch.tensor) – Graph embedding of the diffused view
Ha (torch.tensor) – Node embedding of the original view
Hb (torch.tensor) – Node embedding of the diffused view
Ha_corrupted (torch.tensor) – Node embedding of the corrupted original view
Hb_corrupted (torch.tensor) – Node embedding of the corrupted diffused view
- Returns:
Discriminator output
- Return type:
torch.tensor
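A sketch modeled on the upstream MVGRL code linked above, which scores node embeddings against the graph summary of the opposite view through a shared bilinear layer (the exact concatenation order of the logits here is an assumption):

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Sketch of the MVGRL discriminator (a bilinear MI critic)."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.score = nn.Bilinear(in_channels, in_channels, 1)

    def forward(self, ha, hb, Ha, Hb, Ha_corrupted, Hb_corrupted):
        n = Ha.size(0)
        ha = ha.expand(n, -1)  # broadcast graph summaries to all nodes
        hb = hb.expand(n, -1)
        # positives: true node embeddings vs. the other view's summary
        pos = torch.cat([self.score(Hb, ha), self.score(Ha, hb)], dim=1)
        # negatives: corrupted node embeddings vs. the same summaries
        neg = torch.cat([self.score(Hb_corrupted, ha),
                         self.score(Ha_corrupted, hb)], dim=1)
        return torch.cat([pos, neg], dim=1)  # (n, 4) logits

n, d = 6, 8
disc = Discriminator(d)
logits = disc(torch.randn(1, d), torch.randn(1, d),
              torch.randn(n, d), torch.randn(n, d),
              torch.randn(n, d), torch.randn(n, d))
```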
- class src.algorithms.deep.models.MVGRLModel.GCN(in_channels: int, out_channels: int)[source]
Bases: Module
Graph Convolutional Network (GCN) with a single layer
- Parameters:
in_channels (int) – Number of input features
out_channels (int) – Number of output features
- forward(x: tensor, edge_index: tensor, edge_weight: tensor = None) → tensor [source]
Forward pass
- Parameters:
x (torch.tensor) – Input features
edge_index (torch.tensor) – Edge index tensor
edge_weight (torch.tensor) – Edge weight tensor (if any)
- Returns:
Embeddings of the nodes at each GCN layer
- Return type:
torch.tensor
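A pure-PyTorch sketch of a single-layer GCN aggregating over an edge_index (symmetric adjacency normalization is omitted for brevity and assumed handled upstream; the real code likely uses a library convolution):

```python
import torch
import torch.nn as nn

class GCN(nn.Module):
    """Sketch of a single-layer GCN using edge_index aggregation.

    Normalization of the adjacency is omitted here.
    """
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.lin = nn.Linear(in_channels, out_channels)
        self.act = nn.PReLU()

    def forward(self, x, edge_index, edge_weight=None):
        src, dst = edge_index
        if edge_weight is None:
            edge_weight = torch.ones(src.size(0))
        msg = self.lin(x)[src] * edge_weight.unsqueeze(-1)  # weighted messages
        out = torch.zeros(x.size(0), msg.size(1)).index_add_(0, dst, msg)
        return self.act(out)

x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])  # a 4-node cycle
h = GCN(8, 16)(x, edge_index)
```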
- class src.algorithms.deep.models.MVGRLModel.MVGRLModel(in_channels: int, latent_dim: int)[source]
Bases: Module
Multi-View Graph Representation Learning (MVGRL) model
- Parameters:
in_channels (int) – Number of input features
latent_dim (int) – Dimension of the latent space
- encode(x: tensor, edge_index: tensor, diff_edge_index: tensor, diff_edge_weight: tensor) → tensor [source]
Embedding function
- Parameters:
x (torch.tensor) – Input features
edge_index (torch.tensor) – Edge index tensor
diff_edge_index (torch.tensor) – Diffused edge index tensor
diff_edge_weight (torch.tensor) – Diffused edge weight tensor
- Returns:
Node embeddings
- Return type:
torch.tensor
- forward(x: tensor, edge_index: tensor, diff_edge_index: tensor, diff_edge_weight: tensor, corrupted_idx: tensor = None)[source]
Forward pass, where a = alpha (the original view) and b = beta (the diffused view)
- Parameters:
x (torch.tensor) – Input features
edge_index (torch.tensor) – Edge index tensor
diff_edge_index (torch.tensor) – Diffused edge index tensor
diff_edge_weight (torch.tensor) – Diffused edge weight tensor
corrupted_idx (torch.tensor) – Corrupted index tensor
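The overall structure can be sketched as one encoder per view, a mean readout producing the graph summaries, and row-shuffled features for the corrupted views. In this sketch the encoders are stand-in linear layers rather than graph convolutions over edge_index / diff_edge_index, and all names beyond the documented parameters are assumptions:

```python
import torch
import torch.nn as nn

class MVGRLModel(nn.Module):
    """Structural sketch of MVGRL: one encoder per view plus readout.

    The linear encoders stand in for the actual GCNs applied to the
    original and diffused graphs.
    """
    def __init__(self, in_channels: int, latent_dim: int):
        super().__init__()
        self.enc_a = nn.Linear(in_channels, latent_dim)  # original view
        self.enc_b = nn.Linear(in_channels, latent_dim)  # diffused view

    def encode(self, x):
        # combine both views' node embeddings into one representation
        return self.enc_a(x) + self.enc_b(x)

    def forward(self, x, corrupted_idx=None):
        Ha, Hb = self.enc_a(x), self.enc_b(x)  # node embeddings per view
        ha = Ha.mean(0, keepdim=True)          # graph summary, original view
        hb = Hb.mean(0, keepdim=True)          # graph summary, diffused view
        if corrupted_idx is None:
            corrupted_idx = torch.randperm(x.size(0))
        x_c = x[corrupted_idx]                 # row-shuffled (corrupted) input
        return ha, hb, Ha, Hb, self.enc_a(x_c), self.enc_b(x_c)

x = torch.randn(7, 8)
ha, hb, Ha, Hb, Ha_c, Hb_c = MVGRLModel(8, 5)(x)
```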
- class src.algorithms.deep.models.MVGRLModel.Projection(latent_dim: int)[source]
Bases: Module
Projection layer
- Parameters:
latent_dim (int) – Dimension of the latent space