Alignment RL

A simple experiment with reinforcement learning on a small language model. It uses PPO or GRPO to optimize a hand-crafted reward, of the kind sketched below.
It demonstrates how the model can learn to game the reward instead of improving output quality.
