My question: is there any reason we can simplify the cross-entropy loss into the form below, instead of the standard cross-entropy formulation used in [1]?
import torch


def pointwise_loss(positive_predictions, negative_predictions, mask=None):
    """
    Logistic loss function.

    Parameters
    ----------
    positive_predictions: tensor
        Tensor containing predictions for known positive items.
    negative_predictions: tensor
        Tensor containing predictions for sampled negative items.
    mask: tensor, optional
        A binary tensor used to zero the loss from some entries
        of the loss tensor.

    Returns
    -------
    loss, float
        The mean value of the loss function.
    """
    # Push sigmoid(positive) towards 1 and sigmoid(negative) towards 0,
    # using the raw sigmoid outputs rather than their logarithms.
    positives_loss = (1.0 - torch.sigmoid(positive_predictions))
    negatives_loss = torch.sigmoid(negative_predictions)

    loss = (positives_loss + negatives_loss)

    if mask is not None:
        mask = mask.float()
        loss = loss * mask
        return loss.sum() / mask.sum()

    return loss.mean()
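For comparison, here is a minimal sketch (not code from the repository) of what the cross-entropy / logistic loss described in [1] would look like under the same interface: positives are treated as label 1 and negatives as label 0. The function name logistic_loss is hypothetical; torch.nn.functional.logsigmoid is used only for numerical stability, since -log(sigmoid(x)) == -logsigmoid(x) and -log(1 - sigmoid(x)) == -logsigmoid(-x).

import torch
import torch.nn.functional as F


def logistic_loss(positive_predictions, negative_predictions, mask=None):
    # Hypothetical sketch of the cross-entropy form from [1]:
    # -log(sigmoid(pos)) for positives, -log(1 - sigmoid(neg)) for negatives.
    positives_loss = -F.logsigmoid(positive_predictions)
    negatives_loss = -F.logsigmoid(-negative_predictions)

    loss = positives_loss + negatives_loss

    if mask is not None:
        mask = mask.float()
        loss = loss * mask
        return loss.sum() / mask.sum()

    return loss.mean()

Both versions drive sigmoid(positive) towards 1 and sigmoid(negative) towards 0; the difference is that the simplified version sums the sigmoid outputs directly, while the cross-entropy version sums their negative logarithms.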
[1] https://ml-cheatsheet.readthedocs.io/en/latest/logistic_regression.html