pchunk is the same as tchunk.
```python
def evaluate_each_class(pred_chunks, true_chunks, class_type):
    # class_type: PER or LOC or ORG
    # pred_chunks, true_chunks are one-dim.
    pred_chunk_class = []
    for pchunk in pred_chunks:
        if pchunk[0] == class_type:
            pred_chunk_class.append(pchunk)
    true_chunk_class = []
    for tchunk in pred_chunks:  # <- should be true_chunks
        if tchunk[0] == class_type:
            true_chunk_class.append(tchunk)
    pred_chunk_class = set(pred_chunk_class)
    true_chunk_class = set(true_chunk_class)
    correct_preds = len((pred_chunk_class & set(true_chunks)))
    total_preds = len(pred_chunk_class)
    total_correct = len(true_chunk_class)
    # print("type: ", class_type)
    # print("correct_preds, total_preds, total_correct: ", correct_preds, total_preds, total_correct)
    p = correct_preds / total_preds if correct_preds > 0 else 0
    r = correct_preds / total_correct if correct_preds > 0 else 0
    f1 = 2 * p * r / (p + r) if correct_preds > 0 else 0
    return f1, p, r, correct_preds, total_preds, total_correct
```
In https://github.com/neulab/SpanNER/blob/main/combination/evaluate_metric.py the calculation of recall is wrong: the loop that builds `true_chunk_class` iterates over `pred_chunks` instead of `true_chunks`, so `total_correct` counts predicted chunks rather than gold chunks.
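A minimal corrected sketch of the function (not the project's actual fix; the function name `evaluate_each_class_fixed` is mine) that iterates over `true_chunks` when collecting gold chunks and guards each division by its own denominator:

```python
def evaluate_each_class_fixed(pred_chunks, true_chunks, class_type):
    # Keep only chunks of the requested class, each taken from its OWN list.
    pred_chunk_class = {c for c in pred_chunks if c[0] == class_type}
    true_chunk_class = {c for c in true_chunks if c[0] == class_type}  # was pred_chunks

    correct_preds = len(pred_chunk_class & true_chunk_class)
    total_preds = len(pred_chunk_class)     # predicted chunks of this class
    total_correct = len(true_chunk_class)   # gold chunks of this class

    # Guard each ratio by its own denominator, not by correct_preds.
    p = correct_preds / total_preds if total_preds > 0 else 0
    r = correct_preds / total_correct if total_correct > 0 else 0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0
    return f1, p, r, correct_preds, total_preds, total_correct
```

With this version, `total_correct` is the number of gold chunks of the class, so recall is computed against the ground truth rather than against the predictions.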