Stable Learning Control


Package Overview

The Stable Learning Control (SLC) framework is a collection of robust Reinforcement Learning control algorithms with stability guarantees. The algorithms build on the Lyapunov actor-critic architecture introduced by Han et al. 2020 and derive their stability and robustness guarantees from Lyapunov stability theory. They are tailored for gymnasium environments that feature a positive definite cost function; several ready-to-use compatible environments can be found in the stable-gym package.
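As a rough usage sketch (the exact entry points are documented in the docs; the `lac` import path and the `Oscillator-v1` environment name below are assumptions based on the Spinning Up-style layout of the repository and the stable-gym package):

```python
# Hypothetical usage sketch -- verify the actual API against the SLC docs.
# Assumes SLC exposes a Spinning Up-style training function for the
# Lyapunov Actor-Critic (LAC) algorithm of Han et al. 2020.
import gymnasium as gym
import stable_gym  # noqa: F401  # assumed to register the stable-gym environments

from stable_learning_control.algos.pytorch.lac import lac  # assumed import path

# The environment's reward must act as a positive definite cost function,
# as required by the Lyapunov-based stability guarantees.
env_fn = lambda: gym.make("Oscillator-v1")

# Train a LAC agent on the environment.
lac(env_fn=env_fn, epochs=50)
```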

Installation and Usage

Please see the docs for installation and usage instructions.
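For a quick start, the package can most likely be installed from PyPI (the package name below is inferred from the repository name; verify it against the installation docs):

```
pip install stable-learning-control
```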

Contributing

We use husky pre-commit hooks and GitHub Actions to enforce high code quality. Please check the contributing guidelines before contributing to this repository.

Note

We use husky instead of pre-commit, which is more commonly used in Python projects, because some of the tools we wanted to use could not be integrated with pre-commit. Please feel free to open a PR that switches to pre-commit if this is no longer the case.

References

  • Han et al. 2020 - Used as the basis for the Lyapunov actor-critic architecture.
  • Spinning Up - Used as the basis for the code structure.