Training loss and evaluation metrics #3174
              
Unanswered

wangjiawen2013 asked this question in Q&A

Hi,
I have a silly question. We usually use a loss (such as CrossEntropyLoss/MSELoss) for backpropagation during training, and recall, precision, and F1-score as performance metrics. Why not try it the other way round? I mean, use recall, precision, and F1-score for backpropagation during training, and use the loss as a performance metric. Is it technically feasible?

Replies: 1 comment
These metrics rely on hard predictions (e.g., thresholding logits to 0 or 1), which are non-differentiable: you can't compute a gradient of "how many true positives" with respect to the model's parameters in a smooth way. There are soft approximations that better match the metric you actually want to optimize, but just like standard loss functions they are still surrogates (though a closer approximation to the target metric can help).
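As an illustration (not code from this discussion), here is a minimal sketch of one such soft approximation: a differentiable "soft F1" loss in PyTorch. The function name `soft_f1_loss` is hypothetical; the idea is to replace hard 0/1 predictions with sigmoid probabilities, so the true-positive/false-positive/false-negative counts become smooth sums that gradients can flow through.

```python
import torch

def soft_f1_loss(logits: torch.Tensor, targets: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Differentiable surrogate for (1 - F1) on binary labels.

    logits:  raw model outputs, shape (N,)
    targets: ground-truth labels in {0, 1}, shape (N,)
    """
    probs = torch.sigmoid(logits)        # soft predictions in (0, 1) instead of hard 0/1
    tp = (probs * targets).sum()         # soft true positives
    fp = (probs * (1 - targets)).sum()   # soft false positives
    fn = ((1 - probs) * targets).sum()   # soft false negatives
    soft_f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return 1 - soft_f1                   # invert: we minimize the loss, maximize F1

# Usage: unlike a thresholded F1, gradients flow through the soft counts.
logits = torch.randn(8, requires_grad=True)
targets = torch.randint(0, 2, (8,)).float()
loss = soft_f1_loss(logits, targets)
loss.backward()
```

Note that this is still a surrogate: the value you train on is the soft F1, while the metric you report is computed from thresholded predictions, so the two can diverge.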
  
0 replies