I encountered a logging error when training off-policy algorithms (DDPG, TD3, SAC) in FinRL 0.3.8.
Root cause:
FinRL's logging callback assumes the model has a `rollout_buffer` attribute, which only exists for on-policy algorithms (A2C, PPO). Off-policy algorithms store transitions in a `replay_buffer` instead, so the callback raises a logging error during training.
This does not break training itself, but it does break logging and the TensorBoard output.
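
A possible workaround until this is fixed upstream is a buffer-agnostic callback. The sketch below is hypothetical (the class name `BufferAgnosticCallback` and the logged key `buffer/size` are mine, not FinRL's); it assumes the error originates from a Stable-Baselines3 `BaseCallback` subclass that touches `rollout_buffer` directly, and instead looks up whichever buffer attribute the model actually exposes:

```python
from stable_baselines3.common.callbacks import BaseCallback


class BufferAgnosticCallback(BaseCallback):
    """Hypothetical callback that works for both on-policy and off-policy models."""

    def _on_step(self) -> bool:
        # On-policy algorithms (A2C, PPO) expose `rollout_buffer`;
        # off-policy algorithms (DDPG, TD3, SAC) expose `replay_buffer`.
        buffer = getattr(self.model, "rollout_buffer", None)
        if buffer is None:
            buffer = getattr(self.model, "replay_buffer", None)
        if buffer is not None:
            # size() is defined on Stable-Baselines3's BaseBuffer,
            # so it is available on both buffer types.
            self.logger.record("buffer/size", buffer.size())
        return True
```

Such a callback can be passed via `model.learn(callback=BufferAgnosticCallback())`, since Stable-Baselines3's `learn` accepts a `callback` argument for both on-policy and off-policy algorithms.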
