Official Repository for "Are Large Language Models Good Temporal Graph Learners?"
Set up a Python environment with vLLM:

module load python/3.10
python -m venv vllm_env
source vllm_env/bin/activate
pip install vllm
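
As a quick sanity check that the vLLM install works, a generation call can be run along these lines (a minimal sketch; the checkpoint name is only an example and not prescribed by this repo):

```python
# Minimal sanity check that vLLM can load a model and generate text.
# The checkpoint below is only an example; substitute the model you plan to use.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-1.7B")  # example checkpoint, not prescribed by this repo
params = SamplingParams(temperature=0.0, max_tokens=32)
outputs = llm.generate(
    ["Will user A post in subreddit B at the next timestamp? Answer Yes or No."],
    params,
)
print(outputs[0].outputs[0].text)
```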
Install TGB from source to download the datasets and run the evaluation:
git clone [email protected]:shenyangHuang/TGB.git
cd TGB
pip install -e .
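
Once installed, a TGB dataset and its evaluator can be loaded through the standard TGB interface, roughly as follows (a sketch assuming `tgbl-subreddit` is available via the fork above; dataset name and root directory are just examples):

```python
# Sketch: load a TGB link-prediction dataset and its evaluator.
# Dataset name and root directory are assumptions; adjust to your setup.
from tgb.linkproppred.dataset import LinkPropPredDataset
from tgb.linkproppred.evaluate import Evaluator

dataset = LinkPropPredDataset(name="tgbl-subreddit", root="datasets", preprocess=True)
data = dataset.full_data  # dict of arrays: 'sources', 'destinations', 'timestamps', ...
evaluator = Evaluator(name="tgbl-subreddit")
print(f"{len(data['sources'])} edges loaded")
```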
Run the LLM experiments, for example on `tgbl-subreddit`:

#* base model
CUDA_VISIBLE_DEVICES=0 python -u reasoning_main.py --batch 200 --model qwen1.7b --in_size 5 --bg_size 300 --data tgbl-subreddit --nbr 2
#* base model + icl
CUDA_VISIBLE_DEVICES=0 python -u reasoning_main.py --batch 200 --model qwen1.7b --in_size 5 --bg_size 300 --data tgbl-subreddit --nbr 2 --icl
#* base model + cot
CUDA_VISIBLE_DEVICES=0 python -u reasoning_main.py --batch 200 --model qwen1.7b --in_size 5 --bg_size 300 --data tgbl-subreddit --nbr 2 --cot --logfile reddit_log.json
#* base model + cot + icl
CUDA_VISIBLE_DEVICES=0 python -u reasoning_main.py --batch 200 --model qwen1.7b --in_size 5 --bg_size 300 --data tgbl-subreddit --nbr 2 --cot --icl --logfile reddit_log.json
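
The flags above correspond to a command-line interface roughly like the sketch below; the authoritative definitions live in `reasoning_main.py`, and the help strings here are assumptions inferred from the commands:

```python
# Sketch of the CLI surface implied by the commands above; see reasoning_main.py
# for the authoritative definitions (help texts are assumptions).
import argparse

parser = argparse.ArgumentParser(description="LLM temporal graph reasoning runner (sketch)")
parser.add_argument("--batch", type=int, default=200, help="number of queries per batch")
parser.add_argument("--model", type=str, default="qwen1.7b", help="LLM identifier, e.g. qwen1.7b or llama3")
parser.add_argument("--in_size", type=int, default=5)
parser.add_argument("--bg_size", type=int, default=300)
parser.add_argument("--data", type=str, default="tgbl-subreddit", help="TGB dataset name")
parser.add_argument("--nbr", type=int, default=2)
parser.add_argument("--icl", action="store_true", help="enable in-context learning examples")
parser.add_argument("--cot", action="store_true", help="enable chain-of-thought prompting")
parser.add_argument("--logfile", type=str, default=None, help="optional JSON log of model outputs")
args = parser.parse_args()
print(vars(args))
```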
For the post-hoc explanation pipeline, we assume that:
- We have an answer cache
- We have the prompt cache
**Run the first 5,000 lines:** This is the model you want the answers to come from.

- Get the answers (e.g.):
  CUDA_VISIBLE_DEVICES=0 python -u reasoning_main.py --batch 200 --model llama3 --in_size 5 --bg_size 300 --data tgbl-subreddit --nbr 2 --icl --cache_dst
- Create a folder called `answer_cache` under `posthoc_explanations`.
- Copy `{{answer_cache}}` to `answer_cache` (see the sketch below).
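
Staging the cache can be scripted, for instance as follows (a sketch; the source path is a placeholder for the file produced by `--cache_dst`):

```python
# Sketch: stage the cached answers under posthoc_explanations/answer_cache.
# The source path is a placeholder; point it at the cache produced by --cache_dst.
from pathlib import Path
import shutil

cache_dir = Path("posthoc_explanations") / "answer_cache"
cache_dir.mkdir(parents=True, exist_ok=True)

answer_cache = Path("path/to/your/answer_cache.json")  # placeholder, not a real path
shutil.copy(answer_cache, cache_dir / answer_cache.name)
```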
**Run the prompt generation:** Keep `gpt-4.1-mini-2025-04-14`; the path is currently hard-coded in `main.py`.

- Get the prompts (e.g.):
  python3 ./generate_reasoning_main.py --batch 100 --model gpt-4.1-mini-2025-04-14 --in_size 5 --bg_size 300 --data tgbl-subreddit --nbr 2 --icl --max_no_of_prompts 5000
- Copy the `output` folder from `gpt-batch` into `posthoc_explanations`.
**Run the explanation generation and categorization:** This is the model you want the explanation to be generated from.

python3 ./main.py --data tgbl-subreddit --model llama3

See `gpt-batch/README.md` for details.
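
For orientation, batch pipelines of this kind typically submit prompts through the OpenAI Batch API, where each request is one JSONL line; the sketch below shows what such a line could look like (the `custom_id` scheme and prompt text are assumptions, so defer to `gpt-batch/README.md` for the format actually used here):

```python
# Sketch: build one OpenAI Batch API request line for an explanation prompt.
# The custom_id scheme and prompt content are assumptions; see gpt-batch/README.md
# for the repo's actual format.
import json

request = {
    "custom_id": "tgbl-subreddit-prompt-0",  # hypothetical identifier
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "gpt-4.1-mini-2025-04-14",
        "messages": [{"role": "user", "content": "Explain why the model predicted this link."}],
    },
}
print(json.dumps(request))  # one line of the batch .jsonl file
```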
If this repo is useful for your project, please consider citing our paper:
@article{huang2025large,
title={Are Large Language Models Good Temporal Graph Learners?},
author={Huang, Shenyang and Parviz, Ali and Kondrup, Emma and Yang, Zachary and Ding, Zifeng and Bronstein, Michael and Rabbany, Reihaneh and Rabusseau, Guillaume},
journal={arXiv preprint arXiv:2506.05393},
year={2025}
}
