This reproducibility package accompanies the paper "Revisiting the Performance of Graph Neural Networks for Session-based Recommendation", submitted to RecSys 2025. The results reported in the paper were obtained with the SessionRecGraphFusion framework, which builds on the session-rec framework. Session-rec is a Python-based framework for building and evaluating recommender systems; it implements a suite of state-of-the-art algorithms and baselines for session-based and session-aware recommendation. More information about the session-rec framework can be found here.
The following algorithms are compared in this study:
- Session-based Recommendations with Recurrent Neural Networks (ICLR 2016)
- STAMP: Short-Term Attention/Memory Priority Model for Session-based Recommendation (KDD 2018)
- Neural Attentive Session-based Recommendation (CIKM 2017)
- Session-based Recommendation with Graph Neural Networks (AAAI 2019)
- Global Context Enhanced Graph Neural Networks for Session-based Recommendation (SIGIR 2020)
- Target Attentive Graph Neural Networks for Session-based Recommendation (SIGIR 2020)
- Graph Neighborhood Routing and Random Walk for Session-based Recommendation (ICDM 2021)
- COTREC: Self-Supervised Graph Co-Training for Session-based Recommendation (CIKM 2021)
- Fusion of Latent Categorical Prediction and Sequential Prediction for Session-based Recommendation (Information Sciences 2021, IF: 8.10)
The framework requires the following environment and libraries:
- Anaconda 4.X (Python 3.5 or higher)
- numpy=1.23.5
- pandas=1.5.3
- torch=1.13.1
- scipy=1.10.1
- python-dateutil=2.8.1
- pytz=2021.1
- certifi=2020.12.5
- pyyaml=5.4.1
- networkx=2.5.1
- scikit-learn=0.24.2
- keras=2.11.0
- six=1.15.0
- theano=1.0.3
- psutil=5.8.0
- pympler=0.9
- tensorflow=2.11.0
- tables=3.8.0
- scikit-optimize=0.8.1
- python-telegram-bot=13.5
- tqdm=4.64.1
- dill=0.3.6
- numba
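These pinned versions are expected to be provided by the Docker image or installed via the requirements files described in the setup instructions below. As a rough post-installation sanity check, the core versions can be printed with a one-liner such as the following (the selection of packages shown here is only illustrative):
python -c "import numpy, pandas, torch, tensorflow; print(numpy.__version__, pandas.__version__, torch.__version__, tensorflow.__version__)"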
The framework can be downloaded and configured to run the experiments either with Docker or with Anaconda. To run the experiments with Docker:
- Download and install Docker from https://www.docker.com/
- Run the following command to pull the Docker image from Docker Hub:
docker pull shefai/session_rec_graph_fusion:latest
- Clone the GitHub repository by using the link:
https://github.com/RecSysEvaluation/RecSys_Evaluation.git
- Move into the RecSys_Evaluation directory
- Run the following command to mount the current RecSys_Evaluation directory into a Docker container named session_rec_container:
docker run --name session_rec_container -it -v "$(pwd):/RecSys_Evaluation" shefai/session_rec_graph_fusion:latest
- If you have CUDA-capable GPU support, run the following command instead to attach the GPUs to the container:
docker run --name session_rec_container -it --gpus all -v "$(pwd):/RecSys_Evaluation" shefai/session_rec_graph_fusion:latest
- If you are already inside the running container, run this command to navigate to the mounted RecSys_Evaluation directory:
cd /RecSys_Evaluation
otherwise, start the "session_rec_container" container first and then run the command above.
- Copy the config file of any model from the "conf" folder and paste it into the "conf/in" folder, then run this command to reproduce the results reported in all tables (a concrete copy-and-run example follows below):
python run_config.py conf/in conf/out
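For example, assuming the "conf" folder contains a configuration file named sr_gnn_diginetica.yml (the actual file names depend on the repository's conf folder), the copy-and-run step would look like this:
cp conf/sr_gnn_diginetica.yml conf/in/
python run_config.py conf/in conf/out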
To run the experiments with Anaconda instead:
- Download Anaconda from https://www.anaconda.com/ and install it
- Clone the GitHub repository by using this link:
https://github.com/RecSysEvaluation/RecSys_Evaluation.git
- Open the Anaconda command prompt
- Move into the RecSys_Evaluation directory
- Run this command to create a virtual environment:
conda create --name RecSys_Evaluation python=3.8
- Run this command to activate the virtual environment:
conda activate RecSys_Evaluation
- Run this command to install the required libraries for CPU:
pip install -r requirements_cpu.txt
- However, if you have CUDA-capable GPU support, run this command instead to install the libraries required to run the experiments on a GPU (a quick GPU check is sketched after this list):
pip install -r requirements_gpu.txt
- Copy the config file of any model from the "conf" folder and paste it into the "conf/in" folder, then run this command to reproduce the results reported in all tables:
python run_config.py conf/in conf/out
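If the GPU requirements were installed, a quick way to confirm that PyTorch can actually see a CUDA device before launching the experiments is:
python -c "import torch; print(torch.cuda.is_available())"
This should print True when a usable GPU is attached.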
To ensure the reproducibility of the results reported in the main table, we saved the recommendation files for each considered dataset. Run the following commands to reproduce the results.
- For the Diginetica dataset, run this command, and it will take approximately 160 seconds to reproduce the results.
python run_experiments_with_limited_resources.py --dataset diginetica
- For the Retailrocket dataset, run this command, and it will take approximately 70 seconds to reproduce the results.
python run_experiments_with_limited_resources.py --dataset retailrocket
- For the RSC15 dataset, run this command, and it will take approximately 30 seconds to reproduce the results.
python run_experiments_with_limited_resources.py --dataset rsc15
Run the statistical analysis for each dataset:
- For the Diginetica dataset, run this command, and it will take approximately ten seconds to conduct the statistical analysis.
python run_statistics_analysis.py --dataset diginetica
- For the Retailrocket dataset, run this command, and it will take approximately five seconds to conduct statistical analysis.
python run_statistics_analysis.py --dataset retailrocket
- For the RSC15 dataset, run this command, and it will take approximately two seconds to conduct statistical analysis.
python run_statistics_analysis.py --dataset rsc15
Run experiments for short, medium, and long sessions using the two best-performing GNN and sequential models (in terms of MRR@20):
- For the Diginetica dataset, run this command, and it will take approximately three seconds to reproduce the results.
python run_experiments_with_different_session_length.py --dataset diginetica
- For the Retailrocket dataset, run this command, and it will take approximately one second to reproduce the results.
python run_experiments_with_different_session_length.py --dataset retailrocket
- For the RSC15 dataset, run this command, and it will take approximately one second to reproduce the results.
python run_experiments_with_different_session_length.py --dataset rsc15