We welcome contributions from the community to enhance the CartPole repository! Whether it's fixing bugs, adding new features, improving documentation, or suggesting ideas, your contributions are valuable. Follow the guidelines below to contribute effectively.
- Click on the "Fork" button at the top right of the repository page to create your own copy.
- Clone your forked repository to your local machine:

  ```bash
  git clone https://github.com/<your-username>/CartPole.git
  ```

- Replace `<your-username>` with your GitHub username.
- Create a new branch for your feature or bug fix:

  ```bash
  git checkout -b feature-name
  ```

- Use a descriptive name for your branch (e.g., `improve-visualization-metrics`).
- Implement your changes or additions to the code.
- Ensure your code is well-documented and follows the project structure.
- Test your changes thoroughly using the provided scripts.
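For example, a quick end-to-end check might run the repository's scripts in sequence (an illustrative sketch, not an official test suite; run from the repository root):

```bash
python CartPoleAlgorithm.py   # retrain the agent and regenerate model/metrics
python Visualize_q_table.py   # inspect rewards, losses, and epsilon decay
python CartPole.py            # render the trained agent and observe its performance
```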
- Stage and commit your changes:

  ```bash
  git add .
  git commit -m "Describe your changes (e.g., Enhance reward visualization in training metrics)"
  ```
- Push your changes to your forked repository:

  ```bash
  git push origin feature-name
  ```
- Go to the original repository on GitHub and click on the "New Pull Request" button.
- Select your branch and provide a detailed description of your changes.
- Submit your pull request for review.
By participating in this project, you agree to uphold our Code of Conduct. Be respectful, inclusive, and collaborative in all interactions.
- Check the Issues tab to find bugs or feature requests you can work on.
- Keep your commits clean, concise, and related to a single task.
- Avoid committing unrelated changes or files.
- Regularly pull updates from the main repository to keep your fork in sync:

  ```bash
  git pull upstream main
  ```
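Note that `git pull upstream main` assumes you have registered the original repository as an `upstream` remote. A one-time setup sketch (replace `<original-owner>` with the account hosting the original CartPole repository):

```bash
# One-time setup: register the original repository as "upstream".
git remote add upstream https://github.com/<original-owner>/CartPole.git
git remote -v    # verify: you should see both "origin" and "upstream"
```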
- `CartPole/`:
  - `CartPole.py`: Script for rendering the trained agent and observing its performance.
  - `CartPoleAlgorithm.py`: Core DQN training algorithm for the CartPole-v1 environment.
  - `Visualize_q_table.py`: Tools to visualize training metrics like rewards, losses, and epsilon decay.
- `ModelData/`:
  - Contains the pre-trained DQN model (`cartpole_dqn_optimized.pth`) and training logs (`training_logs.json`).
  - If you prefer, you can run `CartPoleAlgorithm.py` to generate your own model and metrics.
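As a quick way to inspect training results, you can load `training_logs.json` and summarize it. The sketch below writes a small sample file first so it runs standalone; the key names (`episode_rewards`, `losses`, `epsilon`) are assumptions and may differ from what `CartPoleAlgorithm.py` actually records, so adjust them to the real log format.

```python
import json
import statistics

# Hypothetical structure for ModelData/training_logs.json -- the real keys
# may differ; check the output of CartPoleAlgorithm.py and adjust.
sample_logs = {
    "episode_rewards": [22.0, 35.0, 57.0, 104.0, 180.0],
    "losses": [1.9, 1.2, 0.8, 0.5, 0.3],
    "epsilon": [1.0, 0.8, 0.6, 0.4, 0.2],
}

# Write a small sample file so this sketch runs standalone.
with open("training_logs.json", "w") as f:
    json.dump(sample_logs, f)

# Load the logs back and print a quick training summary.
with open("training_logs.json") as f:
    logs = json.load(f)

rewards = logs["episode_rewards"]
print(f"episodes: {len(rewards)}")
print(f"mean reward: {statistics.mean(rewards):.1f}")
print(f"best reward: {max(rewards):.1f}")
```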
If you have questions about contributing, feel free to:
- Open an issue in the repository.
- Reach out via the contact information provided in the repository.
Thank you for contributing to CartPole! Together, we can master this classic reinforcement learning challenge.