Graph Review

Features in graph neural networks can be defined through several methods:
- Node Features: Can include intrinsic properties like node attributes, labels, or characteristics specific to each vertex
- Edge Features: Represent relationships between nodes, including weights, types of connections, or directional information
- Structural Features: Derived from the graph topology, such as degree centrality, clustering coefficients, or local network statistics
- Learned Features: Generated through the neural network's hidden layers as it processes the graph structure
A graph can further be represented in memory as an adjacency matrix, which is symmetric for an undirected graph and generally asymmetric for a directed one.
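As a small illustration (the edge list here is made up for the example), an undirected graph can be stored as a symmetric adjacency matrix, and a structural feature like degree centrality falls out as a row sum:

```python
import numpy as np

# Hypothetical 4-node undirected graph given as an edge list.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # mirror entry: undirected edges make A symmetric

assert (A == A.T).all()  # symmetric, as expected for an undirected graph

# Degree centrality, a simple structural feature, is a row sum.
degrees = A.sum(axis=1)  # → [2, 2, 3, 1]
```

For a directed graph one would drop the mirrored assignment, and the matrix would in general no longer equal its transpose.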
CNN Invariances
<aside>
❗
Graph Isomorphism and why it matters
- Function Consistency: GNNs should produce the same output embeddings for two isomorphic graphs.
- Expressive Power: The ability of a GNN to distinguish different graphs is closely related to its capacity to detect graph isomorphisms.
- Practical Impact: The expressive power of many message-passing GNNs is bounded by the Weisfeiler-Lehman (WL) isomorphism test, and some architectures (e.g., GIN) are explicitly designed to match it.
</aside>
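To make the WL connection concrete, here is a minimal sketch (the graphs and helper name are illustrative, not from the notes) of 1-dimensional WL color refinement. It also shows the test's limitation: a 6-cycle and two disjoint triangles get identical color multisets, so 1-WL, and hence any GNN bounded by it, cannot tell them apart:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-dimensional Weisfeiler-Lehman color refinement.

    adj: dict mapping each node to a list of its neighbors.
    Returns the multiset of final node colors; graphs with
    different multisets are certainly non-isomorphic.
    """
    colors = {v: 0 for v in adj}  # start with a uniform coloring
    for _ in range(rounds):
        # New signature = own color plus sorted multiset of neighbor colors.
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # Relabel signatures to compact integer colors.
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}
    return Counter(colors.values())

cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}

# Every node in both graphs always sees two identically colored
# neighbors, so refinement never separates them: 1-WL fails here.
assert wl_colors(cycle6) == wl_colors(two_triangles)
```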
Set of Vertices
Let's begin by considering a graph with no edges - just a set of nodes V:
- Each node i has features x_i ∈ ℝᵏ
- These features can be combined into an n × k node feature matrix X:
X = [x₁, ..., xₙ]ᵀ
- Each row i in matrix X corresponds to the features x_i of node i
- While this representation requires choosing a specific node ordering, we want our neural network operations to be independent of this ordering
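The ordering-independence requirement above can be checked numerically. In this sketch (random data, made up for the example), a sum readout is permutation-invariant, while a row-wise transformation is permutation-equivariant, i.e., reordering the rows of X reorders the output the same way:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3
X = rng.normal(size=(n, k))        # node feature matrix: row i holds x_i

P = np.eye(n)[rng.permutation(n)]  # random n x n permutation matrix

# Permutation-INVARIANT readout: summing over nodes ignores ordering.
def readout(X):
    return X.sum(axis=0)

assert np.allclose(readout(P @ X), readout(X))

# A shared row-wise map is permutation-EQUIVARIANT:
# applying it after reordering equals reordering after applying it.
W = rng.normal(size=(k, k))
def f(X):
    return np.tanh(X @ W)

assert np.allclose(f(P @ X), P @ f(X))
```

These are exactly the two properties GNN layers (equivariant) and graph-level readouts (invariant) are built to satisfy.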