I am interested in your work and would like to try to reproduce it. In the process, I have a few questions:
I found two SFT scripts in this repository, but no RL code.
The paper describes a reinforcement learning stage (sampling trajectories, computing reward functions, constructing preference data pairs, etc.), but that stage doesn't seem to appear in this diagram. Could you clarify whether the LLM (ReasonFlux-F1) was obtained directly after the third step?
