You can implement Precision@k manually by using torch.topk to select the top-k predictions per sample and computing precision as the fraction of those selections that are true positives. Alternatively, consider TorchMetrics' MultilabelPrecision or MultilabelF1Score with an appropriate threshold to evaluate multilabel performance without the top-k restriction.

Here is a minimal example that implements Precision@k manually with torch.topk, with TorchMetrics' MultilabelPrecision imported for the threshold-based comparison below:

import torch
from torchmetrics.classification import MultilabelPrecision


def precision_at_k(preds, target, k):
    """
    Args:
        preds: (batch_size, num_classes) tensor with prediction scores (floats)
        target: (batch_size, num_classes) binary tensor of ground-truth labels
        k: number of top-scoring predictions to evaluate per sample
    """
    # Indices of the k highest-scoring classes for each sample
    topk_indices = preds.topk(k, dim=1).indices
    # A selected prediction is a true positive if its label is 1 in target
    hits = target.gather(1, topk_indices)
    return hits.float().mean()  # precision@k averaged over the batch
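
For comparison, here is a small usage sketch; the batch size, class count, and 0.5 threshold are only illustrative. Note that MultilabelPrecision scores every class above the threshold rather than only the top k:

# Illustrative data: 4 samples, 10 classes, scores in [0, 1]
num_classes = 10
preds = torch.rand(4, num_classes)
target = torch.randint(0, 2, (4, num_classes))

print(precision_at_k(preds, target, k=3))

# Threshold-based alternative: no top-k restriction
metric = MultilabelPrecision(num_labels=num_classes, threshold=0.5, average="micro")
print(metric(preds, target))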
