1D Heat Equation Solver

A parallel explicit finite-difference solver for the one-dimensional heat equation, provided in 13 different implementations (illustrative sketches of a stencil step and a halo exchange follow the list below):

  • 3 serial kernels
  • 5 OpenMP kernels (half-domain, loop unrolling, SIMD-aligned, collapse, swap)
  • 5 MPI kernels (including blocking point-to-point, non-blocking halo exchange, broadcast, and scatter variants)
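
For reference, each serial kernel advances the explicit (FTCS) update u_new[i] = u[i] + r*(u[i+1] - 2*u[i] + u[i-1]) with r = alpha*dt/dx^2, which is stable for r ≤ 0.5. The snippet below is a minimal sketch of that step, not the repository's code; the names (ftcs_step, u, u_new, r) are illustrative, and the pragma only indicates how the OpenMP variants distribute the same loop.

    /* Minimal FTCS sketch (illustrative, not the repository's code). */
    void ftcs_step(const double *u, double *u_new, int n, double r)
    {
        /* The serial kernels run this loop without the pragma; the OpenMP
           variants add directives such as the one below. */
        #pragma omp parallel for simd schedule(static)
        for (int i = 1; i < n - 1; i++)
            u_new[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1]);

        /* Dirichlet boundary values are carried over unchanged. */
        u_new[0]     = u[0];
        u_new[n - 1] = u[n - 1];
    }

Swapping the u / u_new pointers between steps avoids copying the full array.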

This repository includes a reproducible test suite, a performance driver, and post-processing scripts. You can compare speed-up and efficiency on both a local multicore machine and a PBS-managed HPC cluster.
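
The MPI kernels listed above decompose the grid across ranks, so each rank must refresh one ghost cell at either end of its sub-domain before every step. The sketch below shows a non-blocking halo exchange under assumed names (u holds local_n interior points plus two ghost cells); it is illustrative only, not the repository's implementation.

    #include <mpi.h>

    /* Illustrative non-blocking halo exchange (assumed layout: u[0] and
       u[local_n+1] are ghost cells, u[1..local_n] are interior points). */
    void exchange_halos(double *u, int local_n, int rank, int size)
    {
        int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
        int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;
        MPI_Request req[4];

        MPI_Irecv(&u[0],           1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
        MPI_Irecv(&u[local_n + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &req[1]);
        MPI_Isend(&u[1],           1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &req[2]);
        MPI_Isend(&u[local_n],     1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[3]);

        MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    }

A blocking variant can perform the same exchange with MPI_Sendrecv; the non-blocking form allows a kernel to update interior points while the messages are in flight.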


Table of Contents

  • Prerequisites
  • Quick Start
  • Quick Start Cluster
  • Usage
  • Output Files
  • Project Structure

Prerequisites

  • Compiler & MPI
    • mpicc from GCC ≥ 9 with OpenMP support
    • MPICH ≥ 3.2 or equivalent MPI library
  • PBS job scheduler (for cluster runs)

Quick Start

  1. Clone the repository
    git clone https://github.com/SaraFrancavilla/1D_Heat_Equation.git
    cd 1D_Heat_Equation
  2. Compile
    mpicc -O2 -march=native -fopenmp grouping_functions.c main.c matrix_config.c mpi.c parallel.c sequential.c write_to_file.c -lm
  3. Run (with at most 64 processes)
    mpirun -np <num_processes> ./a.out

Quick Start Cluster

  1. Access and clone
    ssh [email protected]
    git clone https://github.com/SaraFrancavilla/1D_Heat_Equation.git
    cd 1D_Heat_Equation
  2. Start an interactive session and load the modules
    qsub -I -q short_cpuQ -l select=1:ncpus=64:mpiprocs=64
    module load gcc91 mpich-3.2.1--gcc-9.1.0
  3. Compile and launch the program (with up to the 64 reserved processes)
    mpicc -O2 -march=native -fopenmp grouping_functions.c main.c matrix_config.c mpi.c parallel.c sequential.c write_to_file.c -lm
    mpirun -np <num_processes> ./a.out

Usage

After launching ./a.out, select one of the available menu options:

  1. Correctness
    Validate all 13 kernels and write the final temperature profile to Results_comparison.txt.
  2. Strong Scaling
    Keep the problem sizes from the configuration matrix fixed, test 2–64 OpenMP threads and 2–32 MPI ranks, and record wall-time, speed-up, and efficiency (see the sketch after this list) in Execution_times.txt.
  3. Weak Scaling & Best Configuration
    Double the mesh size with proportionally more resources, then log execution time, weak-scaling ratio, and the optimal configuration in Optimal_configurations_study.txt.
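
For reference, the reported metrics follow the usual definitions; the helper below is a minimal sketch of how they can be computed from measured wall-times (function names are illustrative, not the repository's API).

    /* Standard scaling metrics from measured wall-times (illustrative).
       p is the number of OpenMP threads or MPI ranks. */
    double speedup(double t_serial, double t_parallel)
    {
        return t_serial / t_parallel;
    }

    double efficiency(double t_serial, double t_parallel, int p)
    {
        return (t_serial / t_parallel) / (double)p;   /* ideal value: 1.0 */
    }

    /* Weak scaling: one common convention is the ratio of the baseline time
       to the scaled-up run's time (ideal value: 1.0). */
    double weak_scaling_ratio(double t_base, double t_scaled)
    {
        return t_base / t_scaled;
    }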

Output Files

  1. Results_comparison.txt — Final temperature columns for correctness checks

  2. Execution_times.txt — Timings, speed-up, and efficiency for strong-scaling studies

  3. Optimal_configurations_study.txt — Times, weak-scaling ratios, and best thread/rank settings

Project Structure

 .
├── all_implementation.h     # Prototypes and common definitions  
├── grouping_functions.c     # Utility routines  
├── main.c                   # Driver and menu handling  
├── matrix_config.c          # Configuration matrix loader  
├── mpi.c                    # MPI-specific kernels and wrappers  
├── parallel.c               # OpenMP and hybrid implementations  
├── sequential.c             # Baseline serial implementation  
├── write_to_file.c          # Logging and output routines  
└── README.md                # This file  
