When you look at the metrics, you compare real metadata with predicted metadata. What do you do with the references? Or is this covered at the end of the paper or in the code?
Hi @janguck, thanks for your interest in our work.
We do the comparison via identifier tokens. Assume a paper cites 20 other papers; we randomly drop some of these references, and the task is then to predict exactly the ones that have been artificially removed. We use datasets in which full citation data is available (such as DOI-to-DOI links). In recommender-system terms, we reconstruct item sets over vocabularies of hundreds of thousands to millions of distinct items.
On the input side, we also consider additional metadata such as the title of the citing paper, but the output stays the same. The predicted identifiers are unique and can be resolved to a full metadata record whenever needed.
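For concreteness, here is a minimal sketch of this drop-and-predict evaluation. The function names, drop ratio, and DOI-style tokens are illustrative assumptions, not taken from the actual codebase:

```python
import random

def make_eval_split(references, drop_ratio=0.5, seed=0):
    """Randomly hold out a fraction of a paper's references.

    `references` is a list of unique identifier tokens (e.g. DOIs).
    Returns (observed, held_out): the model sees `observed` and
    must predict the `held_out` identifiers.
    """
    rng = random.Random(seed)
    refs = list(references)
    rng.shuffle(refs)
    n_drop = max(1, int(len(refs) * drop_ratio))
    return refs[n_drop:], refs[:n_drop]

def recall_at_k(predicted, held_out, k=10):
    """Fraction of held-out identifiers recovered in the top-k predictions."""
    hits = len(set(predicted[:k]) & set(held_out))
    return hits / len(held_out)

# Toy example with hypothetical DOI tokens:
citations = [f"10.1000/paper-{i}" for i in range(20)]
observed, held_out = make_eval_split(citations, drop_ratio=0.3)

# `model_predictions` would come from the recommender, ranked by score;
# here it is a hand-built ranking purely for illustration.
model_predictions = held_out[:3] + observed[:7]
print(recall_at_k(model_predictions, held_out, k=10))  # 0.5 in this toy case
```

Since the comparison happens on identifier tokens alone, no metadata matching is needed at evaluation time; metadata only enters on the input side.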