SLKI (Second Level KI in Weichen) aims to improve the security and reliability of Germany's rail network by harnessing the potential of fixed acceleration sensor data collected at railway switches.
One of the main challenges is cleaning and preprocessing this noisy time-series data to obtain high-quality, reliable train signal data.
Based on the cleaned signal data, it is then possible to apply AI-driven approaches to classify train types, track and predict train speeds, and uncover potential anomalies.
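As a generic illustration of the kind of cleaning step involved, a rolling-median filter can suppress outlier spikes in a raw acceleration trace. Note that this is only a sketch: the function name, parameters, and data are made up and do not reflect the project's actual pipeline.

```python
import numpy as np

def clean_signal(samples: np.ndarray, window: int = 5) -> np.ndarray:
    """Suppress outlier spikes in a raw acceleration trace.

    Generic illustration (rolling median), not the project's pipeline.
    """
    # pad with edge values so the output keeps the input length
    padded = np.pad(samples, window // 2, mode="edge")
    return np.array(
        [np.median(padded[i:i + window]) for i in range(len(samples))]
    )

raw = np.array([0.1, 0.2, 9.9, 0.2, 0.1, 0.2, 0.3])  # 9.9 is a sensor spike
print(clean_signal(raw))  # the spike at index 2 is suppressed
```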
A good starting point for learning more about the project and this repository is the documentation published on GitHub Pages.
The repository is organized as follows:
- docs: Markdown-based user documentation published on GitHub Pages.
- LICENSES: All license files used in this project. Also see LICENSES.md.
- logs: Optional directory for log files.
- slki: Signal processing and cleaning pipeline source code, including its configuration file.
- notebooks: Jupyter notebooks to further analyze the processed signal data.
- scripts: Scripts extending this project.
Prerequisites:

- Python 3.10 or later, as well as pip
- virtualenv or venv (highly recommended)

  ```shell
  pip install -U virtualenv
  ```
There are multiple optional dependencies available:
- dev: installs additional development tools
- notebooks: installs additional requirements to run the Jupyter notebooks
- torch: installs PyTorch
- test: installs test requirements
- stubs: installs further type information
- docs: installs documentation requirements
- all: installs all optional dependencies
Of course, it is possible to install the software without any optional dependencies and just use the signal processing and cleaning pipeline. Choose your poison based on your own requirements.
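For example, to pick only some of the extras listed above (any combination of the names works; shown here with dev and test), something like the following should do:

```shell
# install the package in editable mode with selected extras only
pip install -U -e ".[dev,test]"
```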
```shell
# create virtual environment
virtualenv -p $(which python3.10) .venv
# or
# python -m venv .venv

# activate our virtual environment
source .venv/bin/activate

# update pip (optional)
python -m pip install -U pip

# install
pip install -U -e ".[all]"

# enable git pre-commit hooks (optional)
pre-commit install
```
- adjust the config file: slki/config.py
- run the pipeline:

  ```shell
  python -m slki  # or just slki
  ```
- ensure that Jupyter Lab is installed:

  ```shell
  pip install jupyterlab
  ```

- open Jupyter Lab:

  ```shell
  jupyter lab --notebook-dir=notebooks
  ```
```python
from slki import ...
```

This project provides a few different tests and checks.
> Note: For detailed information about the tests, please check out the Testing section in the documentation.
In general, all tests are defined in two files.
To easily run these tests locally, use:
```shell
./scripts/test.sh
```

If you only want to run the pre-commit hooks manually, use:

```shell
pre-commit run --all-files
```

Running the GitLab CI locally is a bit more complicated. It also requires Node.js as well as Docker to be installed and configured.

```shell
npm exec gitlab-ci-local
# or run a single job, e.g. pre-commit
npm exec gitlab-ci-local -- pre-commit
```

Please follow the contribution rules:
- use typed Python (type annotations)
- verify Python static code checks with ruff
- document fixes, enhancements, new features, ...
- write OS-independent scripts and examples, or at least ones with Linux/WSL support
- verify shell script static code check compliance with ShellCheck
- verify project license compliance without any license conflicts (e.g. for 3rd-party libraries, data, models, ...)
- verify documentation (Markdown) compliance with the Markdown linting rules specified in the .markdownlint-cli2.jsonc configuration file
- run all tests successfully
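Assuming the corresponding tools are available (ruff and pre-commit ship with the dev extra; ShellCheck and markdownlint-cli2 are external tools), the static checks above can also be run individually, for example:

```shell
# Python static code checks
ruff check .

# shell script static code checks
shellcheck scripts/*.sh

# Markdown linting (picks up .markdownlint-cli2.jsonc)
npm exec markdownlint-cli2 -- "**/*.md"
```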
This project uses the Google docstring style. At least public classes, methods, fields, ... should be documented.
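For reference, a minimal Google-style docstring looks like the following. The function itself is a made-up example; only the docstring layout reflects the convention used in this project.

```python
def mean_speed(distances_m: list[float], duration_s: float) -> float:
    """Compute the mean speed over a measurement window.

    Args:
        distances_m: Travelled distances in metres, one value per sample.
        duration_s: Total duration of the window in seconds.

    Returns:
        Mean speed in metres per second.

    Raises:
        ValueError: If ``duration_s`` is not positive.
    """
    if duration_s <= 0:
        raise ValueError("duration_s must be positive")
    return sum(distances_m) / duration_s

print(mean_speed([10.0, 20.0, 30.0], 6.0))  # → 10.0
```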
For further documentation we are using Markdown documentation with Material for MkDocs. See the docs folder for more details.
To locally serve the documentation, feel free to use:
```shell
python -m mkdocs serve
```

For accurate citation, refer to the corresponding metadata in the CITATION.cff file associated with this work.
Please see the file LICENSE.md for further information about how the content is licensed.