Hi,
When I try to train your source code, around the 5th epoch (sometimes earlier, sometimes later) I hit an error that stops training, so I increased max_step. But after increasing max_step (max_step == 100K), the loss and accuracy break down: loss > 100 and acc == 0. I have attached a screenshot of the problem.
What changes should I make in the source code to continue training the model without this problem?
Hi @mohanades, I realized some time back that I had set up the task in the wrong way.
So, here is what I did:
I tokenized the answers and trained a token-classification style model. When measuring accuracy, I compute it only on the tokens that belong to the answers and ignore the others.
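Roughly, the masked accuracy looks like this (a minimal sketch, not the exact repository code; the `-100` ignore-index convention for non-answer tokens is an assumption):

```python
import torch

def answer_token_accuracy(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Accuracy computed only over tokens labelled as part of the answer."""
    preds = logits.argmax(dim=-1)        # (batch, seq_len)
    answer_mask = labels != -100         # ignore non-answer tokens
    if answer_mask.sum() == 0:
        return 0.0
    correct = (preds[answer_mask] == labels[answer_mask]).float()
    return correct.mean().item()
```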
But here is what I should do instead:
Given an OCR transcription and an answer, I should find the answer in the OCR and produce two indices, a start and an end, meaning that the answer lies between those positions in the OCR sequence, and then train a model to predict these two indices.
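Building those start/end targets could look roughly like this (a minimal sketch assuming the answer appears verbatim in the OCR tokens; names are illustrative, not the repository's API):

```python
def find_answer_span(ocr_tokens, answer_tokens):
    """Return inclusive (start, end) indices of the answer inside the OCR
    tokens, or None if the answer does not appear verbatim."""
    n, m = len(ocr_tokens), len(answer_tokens)
    if m == 0 or m > n:
        return None
    for start in range(n - m + 1):
        if ocr_tokens[start:start + m] == answer_tokens:
            return start, start + m - 1
    return None

# A model with two heads (start logits, end logits) is then trained to
# predict these indices, as in extractive question answering.
ocr = ["the", "total", "amount", "is", "42", "dollars"]
answer = ["42", "dollars"]
print(find_answer_span(ocr, answer))  # -> (4, 5)
```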
And, I believe, this is how the authors arrived at the different metrics used in the paper.
I am still learning (and used this approach for the first time), so the mistake is on my side. Apologies again. Sorry for tagging you @furkanbiten, but could you confirm whether I have understood this correctly?