SE-HTGNN [NeurIPS 2025]
How to run
If you do not have a GPU, set -gpu -1 to run on the CPU.
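A typical invocation follows the OpenHGNN command-line convention; the exact model/dataset identifiers and flag spellings below are assumptions and may differ in your checkout, so check openhgnn/config.ini and the repository's main entry point.

```shell
# Hypothetical run on GPU 0 (model, dataset, and task names are illustrative):
python main.py -m SE-HTGNN -t link_prediction -d aminer -gpu 0

# CPU-only run:
python main.py -m SE-HTGNN -t link_prediction -d aminer -gpu -1
```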
Performance
We compare the performance reported in the original paper with the reproduction results using this framework.
Link Prediction
Device: GPU, GeForce RTX 3090
Datasets: OGBN-MAG, Aminer
Metrics: AUC (Area Under Curve) and AP (Average Precision).
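For reference, the two metrics can be sketched in a few lines of plain Python. These are illustrative implementations, not the framework's evaluator; real runs would typically use sklearn.metrics.roc_auc_score and average_precision_score.

```python
def auc(labels, scores):
    """AUC = fraction of (positive, negative) pairs ranked correctly (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    hits = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return hits / (len(pos) * len(neg))

def average_precision(labels, scores):
    """AP = mean of the precision values at each true-positive rank."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    tp, precisions = 0, []
    for i, (_, l) in enumerate(ranked, start=1):
        if l == 1:
            tp += 1
            precisions.append(tp / i)
    return sum(precisions) / len(precisions)

labels = [1, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6]
print(auc(labels, scores))                # 0.75
print(average_precision(labels, scores))  # 0.8333...
```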
Node Classification
Node Regression
Dataset
We utilize the pre-processed heterogeneous temporal graph datasets described in the SE-HTGNN paper.
Description
OGBN-MAG
Task: Link Prediction (Author collaboration prediction).
Time Span: 2010-2019 (Granularity: Year).
Nodes: Author (17k), Paper (282k), Field (34k), Institution (2k).
Relations: 4 types, including author-writes-paper, paper-cites-paper, and author-affiliated-with-institution.
Snapshot: 10 graph snapshots.
Aminer
Task: Link Prediction (Predict whether a pair of authors will coauthor).
Time Span: 1990-2005 (Granularity: Year).
Nodes: Paper (18k), Author (23k), Venue (22).
Relations: paper-publish-venue, author-write-paper.
Snapshot: 16 graph snapshots.
YELP
Task: Node Classification (Business Category: "American (New) Food", "Fast Food", "Sushi").
Time Span: 01/2012 - 12/2021 (Granularity: Month).
Nodes: User (55k), Business (12k).
Snapshot: 12 graph snapshots.
COVID-19
Task: Node Regression (Predict new daily cases).
Time Span: 05/01/2020 - 02/28/2021 (Granularity: Day).
Nodes: State (54), County (3223).
Relations: state-includes-county, state-near-state, county-near-county.
Snapshot: 304 graph snapshots.
Model Details: SE-HTGNN
SE-HTGNN (Simple and Efficient Heterogeneous Temporal Graph Neural Network) proposes a novel learning paradigm to unify spatial and temporal modeling.
LLM-enhanced Prompt
It uses Large Language Models (e.g., LLaMA3, GPT) to generate semantic representations for node types as prior knowledge.
These embeddings initialize the hidden states of the temporal module, enhancing the model's understanding of node type properties.
Simplified Spatial Aggregation: Unlike traditional HGNNs using heavy node-level attention, SE-HTGNN employs a simplified neighbor aggregation (similar to GCN/Average) to reduce complexity, observing that intra-type neighbor variance is often low.
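The simplified aggregation can be sketched as a plain per-relation mean over neighbors. This is an illustrative NumPy sketch of the GCN/average idea described above, not the model's actual implementation; the function name and edge-list format are hypothetical.

```python
import numpy as np

def mean_aggregate(src_feat, edges, num_dst):
    """Average neighbor features per destination node under one relation.

    src_feat: (N_src, d) source-node feature matrix.
    edges: list of (src, dst) index pairs for this relation.
    """
    d = src_feat.shape[1]
    out = np.zeros((num_dst, d))
    count = np.zeros(num_dst)
    for s, t in edges:
        out[t] += src_feat[s]
        count[t] += 1
    count[count == 0] = 1.0  # leave isolated destinations at zero
    return out / count[:, None]

feat = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
edges = [(0, 0), (1, 0), (2, 1)]  # dst 0 has two neighbors, dst 1 has one
print(mean_aggregate(feat, edges, 2))
# dst 0 -> mean of rows 0 and 1 = [2., 3.]; dst 1 -> row 2 = [5., 6.]
```

No per-neighbor attention weights are computed, which is exactly the complexity saving the paragraph above describes.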
Dynamic-Attention-based Fusion
Temporal-Spatial Unification: Instead of decoupled steps, it integrates temporal modeling directly into the spatial fusion stage.
Mechanism: It uses a GRU-based dynamic attention mechanism where historical attention coefficients guide the calculation of current attention weights for fusing different relations.
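The idea of history-guided attention can be sketched as follows. This is a loose illustration, not the paper's equations: a tiny hand-rolled GRU-style cell (bias-free, with an identity recurrent path for brevity) carries a hidden state across snapshots, so the attention weights used to fuse relation messages at snapshot t depend on the coefficients from earlier snapshots. All class and variable names here are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DynamicAttentionFusion:
    """Fuse per-relation messages with attention whose logits evolve via a GRU-style update."""

    def __init__(self, num_rel, rng):
        self.Wz = rng.standard_normal((num_rel, num_rel)) * 0.1
        self.Wr = rng.standard_normal((num_rel, num_rel)) * 0.1
        self.Wh = rng.standard_normal((num_rel, num_rel)) * 0.1
        self.h = np.zeros(num_rel)  # historical attention state

    def step(self, rel_logits):
        z = sigmoid(self.Wz @ rel_logits + self.h)        # update gate
        r = sigmoid(self.Wr @ rel_logits + self.h)        # reset gate
        h_tilde = np.tanh(self.Wh @ rel_logits + r * self.h)
        self.h = (1 - z) * self.h + z * h_tilde           # carry history forward
        return softmax(self.h)                            # attention over relations

rng = np.random.default_rng(0)
fusion = DynamicAttentionFusion(num_rel=3, rng=rng)
messages = rng.standard_normal((3, 4))   # one d=4 message per relation
for t in range(2):                       # two snapshots
    att = fusion.step(rel_logits=messages.mean(axis=1))
    fused = att @ messages               # attention-weighted sum over relations
```

Because self.h persists between step calls, the second snapshot's attention is conditioned on the first's, which is the "historical coefficients guide current weights" behavior described above.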
Hyper-parameter specific to the model
You can modify the parameters in openhgnn/config.ini or pass them via the command line.