This project contains tools to test compute nodes before the delivery of CPP nodes and new clusters, and after maintenance. These tests help both the CS and System teams ensure a continuously high quality of service.
Given a set of nodes in a partition, there are typical tests that need to be performed to verify that:
- the CPUs in a node are functional (giving accurate results and performing as expected)
- the memory in a node is accessible and functional (allocatable memory as advertised)
- the communication between the nodes works (e.g. through MPI runs)
- the GPUs are functional (performing as advertised)
- the data transfers between the node(s) and storage spaces run at the expected rates
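As an illustration of the memory category above, a basic check along these lines can be sketched in Python (an illustrative example only, not one of the repo's tests):

```python
# Illustrative sketch (not part of this repo): verify that a node can
# actually allocate and touch a given amount of memory.
def can_allocate(n_bytes: int) -> bool:
    try:
        buf = bytearray(n_bytes)                   # allocate the buffer
        pages = buf[::4096]                        # one byte per 4 KiB page
        buf[::4096] = b"\x01" * len(pages)         # touch each page
        return True
    except MemoryError:
        return False

print(can_allocate(64 * 1024 * 1024))  # 64 MiB should succeed on a healthy node
```

The real tests in this repo exercise memory through full application runs rather than a direct allocation probe like this.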
The tools should serve the following 2 primary use cases:
- Manual testing: the CS/System team members specify the node(s) to test.
  - the GUI app: nodes-testing.py
  - the CLI tools: test-cpu-nodes.sh and test-gpu-nodes.sh
- Automated testing: the tool runs in the background in user space and automatically submits testing jobs.
  - the shell script: requesting_tests.sh
The general idea for these testing tools is to submit jobs from the login node to the compute nodes of interest; within each job, run the selected applications (from the RCC Benchmarking Suite repo) and compare the output with the expected values. If the run completes successfully and the differences are within the specified tolerances, the nodes pass the tests.
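The pass/fail logic described above can be sketched in Python (a minimal illustration; the actual parsing, field names, and reference values depend on each application's output format and are hypothetical here):

```python
# Minimal sketch of the pass/fail check: compare measured values parsed from
# a run's output log against reference values, within per-field tolerances.
# Field names and numbers below are hypothetical examples.

def node_passes(measured: dict, reference: dict, tolerances: dict) -> bool:
    """Return True if every reference value is matched within its tolerance."""
    for key, expected in reference.items():
        if key not in measured:
            return False  # the run did not produce this value at all
        if abs(measured[key] - expected) > tolerances.get(key, 0.0):
            return False
    return True

# Example: a hypothetical LAMMPS run reporting energy and performance
reference  = {"total_energy": -4.62, "timesteps_per_s": 1250.0}
tolerances = {"total_energy": 0.01,  "timesteps_per_s": 125.0}  # ~10% perf slack

print(node_passes({"total_energy": -4.615, "timesteps_per_s": 1300.0},
                  reference, tolerances))  # True: within both tolerances
print(node_passes({"total_energy": -4.62, "timesteps_per_s": 900.0},
                  reference, tolerances))  # False: too slow
```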
The GUI tool nodes-testing.py is a Python script, run through Streamlit, that
allows users to select the job type and an associated job script to submit to the nodes.
The job script runs the selected applications and prints to an output log file.
The underlying job scripts queue-cpu-nodes.txt and queue-gpu-nodes.txt are responsible
for telling whether the applications and shell commands therein complete successfully
and/or for reporting the performance and accuracy of the run.
The shell script requesting_tests.sh is used for automated testing; it fills in the details
in the job scripts.
To-Do:
- define a reference file for each test case: for example, with LAMMPS runs,
the reference file lists the expected performance and the numerical result of the last
time step, along with the accepted tolerance (in progress, see
run-tests.py and lammps.yaml)
- more testing
git clone https://github.com/rcc-uchicago/nodes-testing-tools.git
cd nodes-testing-tools
On Midway3, you can run the GUI app nodes-testing.py with
module load python/miniforge-25.3.0
source /project/rcc/shared/nodes-testing/testing-env/bin/activate
streamlit run nodes-testing.py
For development, you can create your own environment with
module load python/miniforge-25.3.0
python3 -m venv my-env
source my-env/bin/activate
pip install -r requirements.txt
Note: The environment contains torch and nvidia* in case you need to run a ML training script with GPU.
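A quick way to sanity-check that the environment provides torch with GPU support is a snippet like the following (illustrative only; the function name is ours):

```python
# Quick sanity check for the testing environment: confirm that torch is
# importable and report whether a CUDA-capable GPU is visible.
def gpu_stack_summary() -> str:
    try:
        import torch  # provided by the shared testing environment
    except ImportError:
        return "torch not installed"
    cuda = "available" if torch.cuda.is_available() else "not available"
    return f"torch {torch.__version__}, CUDA {cuda}"

if __name__ == "__main__":
    print(gpu_stack_summary())
```

Run this on a GPU compute node (not the login node) to confirm the GPU is visible before launching a training job.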
Currently the two shell scripts test-cpu-nodes.sh and test-gpu-nodes.sh submit jobs to the compute nodes of interest.
The GUI app similarly submits the job scripts queue-cpu-nodes.sh and queue-gpu-nodes.sh to the nodes, within which
the output from the runs is analyzed to judge whether the runs produce the expected output.
To eventually consolidate these approaches, we are developing the Python script run-tests.py, which executes a testing pipeline
defined by a configuration file:
python run-tests.py --config-file lammps.yaml
The configuration file lammps.yaml specifies the application executable to run and the expected output (e.g. numerical accuracy, performance, and tolerances).
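A configuration file along these lines might look like the following (a hypothetical sketch; the actual keys and values in lammps.yaml may differ):

```yaml
# Hypothetical sketch of a test configuration; the real schema may differ.
application: lammps
command: "mpirun -np 16 lmp -in in.lj"
checks:
  - name: total_energy_last_step
    expected: -4.62
    tolerance: 0.01
  - name: timesteps_per_second
    expected: 1250.0
    tolerance: 125.0   # ~10% performance slack
```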
Similar configuration files can be created for other applications such as HPCG and HPCC.
The idea is to use this Python script inside the existing CLI tools and job scripts.