
regarding training accuracy and detection #6

Open · Ram-Godavarthi opened this issue Jan 28, 2020 · 6 comments

@Ram-Godavarthi

First of all, training on the CPU is very slow.

The training script runs properly, but I am not able to get correct detections at all.

I am getting output like:

b'NRRRRRRRRRRRRRRRRRRRR'

b'M99999RRRRRRRRRRRRRRR'

Where do I need to give the pretrained models and start the detection process?

Could you please help me out with this?

Thank you

@codeaway23
Contributor

Before running the eval script, you will have to add the paths to your training logs and checkpoint files in the common_flags.py file in the models/research/attention_ocr/python directory.
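If you would rather not edit common_flags.py in place, the same values can usually be supplied per run, since that file only defines flag defaults. The sketch below is illustrative only: plain argparse stands in for TensorFlow's flags module, and the flag names `train_log_dir` / `checkpoint` and the default paths are assumptions to verify against your checkout.

```python
import argparse

# Illustrative stand-in for the flag definitions in common_flags.py.
# Flag names and default paths here are assumptions -- check your checkout.
parser = argparse.ArgumentParser()
parser.add_argument("--train_log_dir", default="datasets/data/model_checkpoints",
                    help="Directory for training logs and checkpoints.")
parser.add_argument("--checkpoint", default="",
                    help="Specific checkpoint to restore for eval/inference.")

# Defaults can then be overridden on the command line instead of editing the file:
args = parser.parse_args(["--checkpoint",
                          "datasets/data/model_checkpoints/model.ckpt-200"])
print(args.train_log_dir, args.checkpoint)
```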

@Ram-Godavarthi
Author

I get the following when I run the training script:

INFO:tensorflow:global step 176: loss = 45.1903 (4.068 sec/step)
INFO 2020-01-28 10:31:35.000737: tf_logging.py: 82 global step 176: loss = 45.1903 (4.068 sec/step)
INFO:tensorflow:global step 177: loss = 44.7751 (4.017 sec/step)
INFO 2020-01-28 10:31:39.000878: tf_logging.py: 82 global step 177: loss = 44.7751 (4.017 sec/step)
INFO:tensorflow:Recording summary at step 177.
INFO 2020-01-28 10:31:44.000128: tf_logging.py: 82 Recording summary at step 177.
INFO:tensorflow:global step 178: loss = 47.6715 (8.347 sec/step)
INFO 2020-01-28 10:31:48.000249: tf_logging.py: 82 global step 178: loss = 47.6715 (8.347 sec/step)
INFO:tensorflow:global step 179: loss = 57.1654 (4.037 sec/step)
INFO 2020-01-28 10:31:52.000310: tf_logging.py: 82 global step 179: loss = 57.1654 (4.037 sec/step)
INFO:tensorflow:global step 180: loss = 44.0419 (4.018 sec/step)
INFO 2020-01-28 10:31:56.000362: tf_logging.py: 82 global step 180: loss = 44.0419 (4.018 sec/step)
INFO:tensorflow:global step 181: loss = 47.7864 (4.122 sec/step)
INFO 2020-01-28 10:32:00.000507: tf_logging.py: 82 global step 181: loss = 47.7864 (4.122 sec/step)
INFO:tensorflow:global step 182: loss = 46.3507 (4.042 sec/step)
INFO 2020-01-28 10:32:04.000572: tf_logging.py: 82 global step 182: loss = 46.3507 (4.042 sec/step)
INFO:tensorflow:global step 183: loss = 46.4693 (4.069 sec/step)
INFO 2020-01-28 10:32:08.000677: tf_logging.py: 82 global step 183: loss = 46.4693 (4.069 sec/step)
INFO:tensorflow:global step 184: loss = 48.3467 (4.057 sec/step)
INFO 2020-01-28 10:32:12.000758: tf_logging.py: 82 global step 184: loss = 48.3467 (4.057 sec/step)
INFO:tensorflow:global step 185: loss = 42.2563 (4.047 sec/step)
INFO 2020-01-28 10:32:16.000839: tf_logging.py: 82 global step 185: loss = 42.2563 (4.047 sec/step)
INFO:tensorflow:global step 186: loss = 46.8351 (4.087 sec/step)
INFO 2020-01-28 10:32:20.000948: tf_logging.py: 82 global step 186: loss = 46.8351 (4.087 sec/step)
INFO:tensorflow:global step 187: loss = 44.3799 (4.096 sec/step)
INFO 2020-01-28 10:32:25.000081: tf_logging.py: 82 global step 187: loss = 44.3799 (4.096 sec/step)
INFO:tensorflow:global step 188: loss = 45.6797 (4.065 sec/step)
INFO 2020-01-28 10:32:29.000168: tf_logging.py: 82 global step 188: loss = 45.6797 (4.065 sec/step)
INFO:tensorflow:global step 189: loss = 47.7265 (4.045 sec/step)
INFO 2020-01-28 10:32:33.000237: tf_logging.py: 82 global step 189: loss = 47.7265 (4.045 sec/step)
INFO:tensorflow:global step 190: loss = 45.3303 (4.070 sec/step)
INFO 2020-01-28 10:32:37.000429: tf_logging.py: 82 global step 190: loss = 45.3303 (4.070 sec/step)
INFO:tensorflow:Recording summary at step 190.
INFO 2020-01-28 10:32:41.000687: tf_logging.py: 82 Recording summary at step 190.
INFO:tensorflow:global step 191: loss = 46.0952 (8.345 sec/step)
INFO 2020-01-28 10:32:45.000797: tf_logging.py: 82 global step 191: loss = 46.0952 (8.345 sec/step)
INFO:tensorflow:global step 192: loss = 36.7755 (4.066 sec/step)
INFO 2020-01-28 10:32:49.000887: tf_logging.py: 82 global step 192: loss = 36.7755 (4.066 sec/step)
INFO:tensorflow:global step 193: loss = 46.4757 (4.032 sec/step)
INFO 2020-01-28 10:32:53.000941: tf_logging.py: 82 global step 193: loss = 46.4757 (4.032 sec/step)
INFO:tensorflow:global step 194: loss = 51.2351 (4.016 sec/step)
INFO 2020-01-28 10:32:57.000992: tf_logging.py: 82 global step 194: loss = 51.2351 (4.016 sec/step)
INFO:tensorflow:global step 195: loss = 51.1448 (4.074 sec/step)
INFO 2020-01-28 10:33:02.000088: tf_logging.py: 82 global step 195: loss = 51.1448 (4.074 sec/step)
INFO:tensorflow:global step 196: loss = 47.8617 (4.045 sec/step)
INFO 2020-01-28 10:33:06.000155: tf_logging.py: 82 global step 196: loss = 47.8617 (4.045 sec/step)
INFO:tensorflow:global step 197: loss = 31.9874 (4.075 sec/step)
INFO 2020-01-28 10:33:10.000264: tf_logging.py: 82 global step 197: loss = 31.9874 (4.075 sec/step)
INFO:tensorflow:global step 198: loss = 45.3187 (4.025 sec/step)
INFO 2020-01-28 10:33:14.000314: tf_logging.py: 82 global step 198: loss = 45.3187 (4.025 sec/step)
INFO:tensorflow:global step 199: loss = 49.2523 (4.045 sec/step)
INFO 2020-01-28 10:33:18.000393: tf_logging.py: 82 global step 199: loss = 49.2523 (4.045 sec/step)
INFO:tensorflow:global step 200: loss = 48.7986 (6.094 sec/step)
INFO 2020-01-28 10:33:24.000510: tf_logging.py: 82 global step 200: loss = 48.7986 (6.094 sec/step)
INFO:tensorflow:Stopping Training.
INFO 2020-01-28 10:33:24.000534: tf_logging.py: 82 Stopping Training.
INFO:tensorflow:Finished training! Saving model to disk.
INFO 2020-01-28 10:33:24.000534: tf_logging.py: 82 Finished training! Saving model to disk.
(env) ram@virtualbox:~/attention_ocr/python$

What could be the problem?
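One likely factor, judging only from the log above: training stopped after 200 steps with the loss still hovering around 45–50, i.e. far from converged (whether a low step-limit flag caused the early stop is an assumption; check your training script's flags). A quick way to monitor convergence is to parse the loss values out of the log:

```python
import re

# Parse "global step N: loss = X (Y sec/step)" lines out of a training log
# to track convergence. Sample lines copied from the log above.
log = """\
INFO:tensorflow:global step 199: loss = 49.2523 (4.045 sec/step)
INFO:tensorflow:global step 200: loss = 48.7986 (6.094 sec/step)
"""

pattern = re.compile(r"global step (\d+): loss = ([\d.]+) \(([\d.]+) sec/step\)")
records = [(int(s), float(loss), float(sec)) for s, loss, sec in pattern.findall(log)]

avg_loss = sum(r[1] for r in records) / len(records)
avg_sec = sum(r[2] for r in records) / len(records)
print(f"steps parsed: {len(records)}, mean loss: {avg_loss:.4f}, mean sec/step: {avg_sec:.3f}")
```

At roughly 4 seconds per step on CPU, even a few thousand steps takes hours, which matches the "very slow" observation above.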

@Ram-Godavarthi
Author

And I got this after running the eval.py script:

INFO 2020-01-28 10:46:05.000354: tf_logging.py: 82 Evaluation [99/100]
INFO:tensorflow:Evaluation [100/100]
INFO 2020-01-28 10:46:06.000709: tf_logging.py: 82 Evaluation [100/100]
INFO:tensorflow:Finished evaluation at 2020-01-28-09:46:06
INFO 2020-01-28 10:46:06.000709: tf_logging.py: 82 Finished evaluation at 2020-01-28-09:46:06
2020-01-28 10:46:07.241598: I tensorflow/core/kernels/logging_ops.cc:79] eval/SequenceAccuracy[0]
2020-01-28 10:46:07.241830: I tensorflow/core/kernels/logging_ops.cc:79] eval/CharacterAccuracy[0.0399122499]
INFO:tensorflow:Waiting for new checkpoint at datasets/data/model_checkpoints
INFO 2020-01-28 10:46:08.000799: tf_logging.py: 82 Waiting for new checkpoint at datasets/data/model_checkpoints

Could you tell me where it is going wrong?
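For context on those two numbers: eval/SequenceAccuracy[0] means no plate was predicted exactly right, while eval/CharacterAccuracy[≈0.04] means only about 4% of characters match positionally. A simplified sketch of what the metrics measure (not the actual TensorFlow metric ops used by eval.py):

```python
def character_accuracy(pred: str, truth: str) -> float:
    # Fraction of positions where the predicted character matches the label.
    matches = sum(p == t for p, t in zip(pred, truth))
    return matches / max(len(truth), 1)

def sequence_accuracy(preds: list, truths: list) -> float:
    # Fraction of predictions that match the label exactly.
    exact = sum(p == t for p, t in zip(preds, truths))
    return exact / max(len(truths), 1)

preds = ["444444", "AB1234"]
truths = ["AB1234", "AB1234"]
print(sequence_accuracy(preds, truths))        # one exact match out of two
print(character_accuracy(preds[0], truths[0])) # only the final '4' lines up
```

A character accuracy this low after 200 steps is consistent with a model that has barely started learning, not with a broken eval setup.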

@Ram-Godavarthi
Author

When I run the demo_inference.py script, I get this result:

DEBUG 2020-01-28 11:01:54.000358: PngImagePlugin.py: 138 STREAM b'IHDR' 16 13
DEBUG 2020-01-28 11:01:54.000358: PngImagePlugin.py: 138 STREAM b'IDAT' 41 43382
datasets/data/crops/3.png number_plates 1
This part is executing
Reading filepattern/1.png
DEBUG 2020-01-28 11:01:54.000369: PngImagePlugin.py: 138 STREAM b'IHDR' 16 13
DEBUG 2020-01-28 11:01:54.000369: PngImagePlugin.py: 138 STREAM b'IDAT' 41 43382
datasets/data/crops/3.png number_plates 2
This part is executing
Reading filepattern/2.png
DEBUG 2020-01-28 11:01:54.000377: PngImagePlugin.py: 138 STREAM b'IHDR' 16 13
DEBUG 2020-01-28 11:01:54.000377: PngImagePlugin.py: 138 STREAM b'IDAT' 41 43382
datasets/data/crops/3.png number_plates 3
This part is executing
Reading filepattern/3.png
DEBUG 2020-01-28 11:01:54.000380: PngImagePlugin.py: 138 STREAM b'IHDR' 16 13
DEBUG 2020-01-28 11:01:54.000383: PngImagePlugin.py: 138 STREAM b'IDAT' 41 43382
2020-01-28 11:01:54.889808: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2
INFO:tensorflow:Restoring parameters from datasets/data/model_checkpoints/model.ckpt-200
INFO 2020-01-28 11:01:54.000890: tf_logging.py: 82 Restoring parameters from datasets/data/model_checkpoints/model.ckpt-200
b'444444444444444444444'
b'444444444444444444444'
b'444444444444444444444'
b'444444444444444444444'

Please help me out with this issue.

What's going wrong?

@Rubeen-Mohammad

Rubeen-Mohammad commented Mar 23, 2020

Hello! Thank you so much for the nice explanations. I have the same kind of problem.
I have added the paths to the training logs and checkpoint files in the common_flags.py file in the models/research/attention_ocr/python directory.

Problem 1:

Even though a new checkpoint is available, the eval script fails to pick it up and keeps waiting for a long time. Could you please tell me what the problem is?
My evaluation gets stuck as follows:
INFO 2020-03-23 13:29:03.000567: evaluation.py: 169 Evaluation [90/100]
INFO:tensorflow:Evaluation [100/100]
INFO 2020-03-23 13:29:16.000567: evaluation.py: 169 Evaluation [100/100]
INFO:tensorflow:Finished evaluation at 2020-03-23-07:59:16
INFO 2020-03-23 13:29:16.000567: evaluation.py: 455 Finished evaluation at 2020-03-23-07:59:16
eval/SequenceAccuracy[0]
eval/CharacterAccuracy[0.00575520797]
INFO:tensorflow:Waiting for new checkpoint at /home/mwebware/Documents/Rubeen/Text_Recognition2/models-master/research/attention_ocr/python/aocr-logs
INFO 2020-03-23 13:29:18.000558: evaluation.py: 189 Waiting for new checkpoint at /home/mwebware/Documents/Rubeen/Text_Recognition2/models-master/research/attention_ocr/python/aocr-logs

Problem 2:

After running inference.py, my model is predicting the license plate characters as follows:
-----------Predicted------------- [b'5555555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91', b'555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91', b'111111\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91', b'111111\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91', b'5555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91', b'555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91', b'555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91', b'555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91']
b'5555555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91'
b'555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91'
b'111111\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91'
b'111111\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91'
b'5555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91'
b'555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91'
b'555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91'
b'555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91'
Could you please help me resolve this?
Thank you

@Ram-Godavarthi
Author

Problem 1: I only tried it once; whenever I needed it, I ran the eval script manually to check the accuracy.

Problem 2: Just convert the output format by decoding each line:

line.decode('utf-8')
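Concretely, the b'...' output is raw UTF-8 bytes: each \xe2\x96\x91 sequence decodes to U+2591 ('░'), which the model uses as padding after the predicted characters. Decoding and then stripping that glyph yields the plate text (the variable names below are just for illustration):

```python
# One of the raw predictions from above, shortened for the example.
raw = b"5555555\xe2\x96\x91\xe2\x96\x91\xe2\x96\x91"

text = raw.decode("utf-8")      # b"\xe2\x96\x91" decodes to U+2591 (light shade)
plate = text.rstrip("\u2591")   # drop the trailing padding glyphs
print(plate)
```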
