Network Machine Learning
Foundations
2 End-to-end Biology Network Machine Learning Project
2.2 Get the Data
2.3 Prepare the Data
2.4 Select and Train
2.5 and 2.6 Fine-tune, discover, and visualize
Code Reproducibility
Representations
3 Characterizing and preparing network data
3.1 Properties of Networks
3.2 Node Properties and Relationships
3.3 Network summary statistics
3.4 Degree Matrices and Laplacians
3.5 Subnetworks and Connected Components
3.6 Regularization and node pruning
3.7 Edge Regularization
3.8 Edge-weight global rescaling
Code Reproducibility
4 Statistical Models of Random Networks
4.1 Inhomogeneous Erdős–Rényi Random Networks
4.2 Erdős–Rényi Random Networks
4.3 Stochastic Block Models
4.4 Random Dot Product Graphs
4.5 Positive Semi-Definite Matrices
4.7 Degree-Corrected Stochastic Block Models
4.8 Structured Independent-Edge Random Networks
4.9 Multiple Network Models
4.10 Models with Covariates
Code Reproducibility
5 Learning Network Representations
5.1 Maximum likelihood estimation
5.2 Why do we embed networks?
5.3 Adjacency spectral embedding
5.4 Laplacian spectral embedding
5.5 Multiple network representation learning
5.6 Joint representation learning
5.7 Estimating latent dimensionality and non-positive semidefiniteness
Code Reproducibility
Applications
6 Applications for a single network
6.1 The community detection problem
6.2 Sparsity and Storage
6.3 Testing for differences between groups of edges
6.4 Model selection with stochastic block models
6.5 The vertex nomination problem
6.6 Out-of-sample embedding
Code Reproducibility
7 Applications for two networks
7.1 Two-sample testing for networks
7.2 Two-sample testing for SBMs
7.3 The graph matching problem
Code Reproducibility
8 Applications for multiple networks
8.1 Anomaly detection in time series of networks
8.2 Testing for significant edges in incoherent signal subnetworks
8.3 Building coherent signal subnetworks
Code Reproducibility
9 Deep Learning Methods
9.1 Graph neural networks
9.2 Random walks and diffusion-based methods
Code Reproducibility
Appendices
A: Network Model Theory
A.4 Stochastic block models
A.6 Random Dot Product Graphs
Code Reproducibility
B: Learning representations theory
B.1 The basics of maximum likelihood estimation
B.2 Theoretical considerations for spectral embeddings
Code Reproducibility
C: Overview of machine learning techniques
C.1 Unsupervised machine learning
Code Reproducibility
Index