apply this to d-dim(d>1) regression task #1

Open

swc1204 opened this issue Sep 26, 2022 · 4 comments

swc1204 commented Sep 26, 2022

Thank you for your great work!
If I would like to apply this to a d-dim (d>1) regression task (the target has d dimensions), how should I change ranksim.py?

yugongg (Collaborator) commented Sep 26, 2022

Hi, thanks for your interest!

To extend the method to d-dim (d>1) labels, Eq. 2 in the paper needs to be modified. The core question is how to produce the similarity matrix: we need a measure of label similarity that can be used for ranking (which serves as the supervision).
If it's 1-dim, it's straightforward (e.g. 70 > 25 > 21 > 1, and we used negative absolute distance).
If it's d-dim (d>1), you can define it the same way we defined sigma_z (refer to Eq. 3, and also "Different choices for feature similarity function sigma_z" in Section 4.4). But this strategy implicitly assumes that every dimension is homogeneous; for heterogeneous multi-dimensional labels, you may need to define some other similarity function.
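
For concreteness, a minimal sketch of two such label-similarity choices for d-dim labels (the shapes here are illustrative assumptions, not from the paper):

import torch
import torch.nn.functional as F

y = torch.randn(8, 3)  # hypothetical batch of 8 labels, each 3-dim

# Cosine similarity matrix, the same form as sigma_z in Eq. 3
y_norm = F.normalize(y, dim=1)
sim_cos = torch.matmul(y_norm, y_norm.t())  # shape (8, 8)

# Negative Euclidean distance, the d-dim analogue of the
# negative absolute distance we used for 1-dim labels
sim_l2 = -torch.cdist(y, y)  # shape (8, 8)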

swc1204 (Author) commented Sep 27, 2022

Thank you for your reply!
I think the cosine similarity function is suitable for my task, so I changed the code as follows. Is there anything I should pay attention to, like the TrueRanker?

import torch
import torch.nn.functional as F

# Compute gaze (label) similarities: pairwise cosine similarity between the flattened d-dim targets
yyt = torch.matmul(F.normalize(y.view(y.size(0), -1)), F.normalize(y.view(y.size(0), -1)).permute(1, 0))

# Compute feature similarities: pairwise cosine similarity between the flattened features
xxt = torch.matmul(F.normalize(x.view(x.size(0), -1)), F.normalize(x.view(x.size(0), -1)).permute(1, 0))

# Compute ranking loss
loss = 0.0
for i in range(len(y)):
    label_ranks = TrueRanker.apply(yyt[i].unsqueeze(dim=0), lambda_val)
    feature_ranks = TrueRanker.apply(xxt[i].unsqueeze(dim=0), lambda_val)
    loss += F.mse_loss(feature_ranks, label_ranks)

yugongg (Collaborator) commented Sep 27, 2022

TrueRanker is not needed for label_ranks, because label_ranks is the ground truth; TrueRanker is only used to make the ranking of the features differentiable. You can directly transform the similarity matrix into a ranking matrix by applying the rank_normalised function to each row of yyt.
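
Concretely, a sketch of the corrected loop (reusing yyt, xxt, and lambda_val from your snippet above, and assuming rank_normalised is imported from the repo's ranking module):

# Corrected ranking loss: only the feature side goes through the differentiable ranker
loss = 0.0
for i in range(len(y)):
    # Ground-truth label ranks: computed directly, no gradient path needed
    label_ranks = rank_normalised(yyt[i].unsqueeze(dim=0))
    # Feature ranks: differentiable ranking via TrueRanker
    feature_ranks = TrueRanker.apply(xxt[i].unsqueeze(dim=0), lambda_val)
    loss += F.mse_loss(feature_ranks, label_ranks)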

swc1204 (Author) commented Sep 28, 2022

I got it, thank you for your patient explanation!
