FENet_imagenet.log (1796 lines, 119 KB)
[2022-06-14 13:31:47,712] Namespace(auto_augment=False, batch_size=1024, cutout=False, data_dir='/dataset/public/ImageNetOrigin/', epoch=480, lr=0.6, nesterov=True, reduction=1.375, results_dir='./results/', resume=None)
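The Namespace line above can be reproduced with a plain argparse configuration; a minimal sketch (flag names and defaults are inferred from the logged Namespace, not taken from the actual training script):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Flags and defaults inferred from the logged Namespace(...) line.
    p = argparse.ArgumentParser(description="FENet ImageNet training (inferred sketch)")
    p.add_argument("--data_dir", default="/dataset/public/ImageNetOrigin/")
    p.add_argument("--results_dir", default="./results/")
    p.add_argument("--resume", default=None)
    p.add_argument("--batch_size", type=int, default=1024)
    p.add_argument("--epoch", type=int, default=480)
    p.add_argument("--lr", type=float, default=0.6)
    p.add_argument("--reduction", type=float, default=1.375)
    p.add_argument("--nesterov", action="store_true", default=True)
    p.add_argument("--auto_augment", action="store_true", default=False)
    p.add_argument("--cutout", action="store_true", default=False)
    return p

# Parsing an empty argv yields exactly the defaults shown in the log.
args = build_parser().parse_args([])
print(args)
```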
[2022-06-14 13:31:47,713] ==> Preparing data..
[2022-06-14 13:31:56,749] Training / Testing data number: 1281167 / 50000
[2022-06-14 13:31:56,750] Using path: ./results/14133147/
[2022-06-14 13:31:56,750] ==> Building model..
[2022-06-14 13:32:02,766] DataParallel(
(module): FENet(
(conv1): Conv2d(3, 22, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(22, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ibssl): IBSSL(
(conv1): Conv2d(22, 88, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(88, 22, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(22, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibpool): IBPool(
(conv1): Conv2d(22, 220, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(220, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(conv2): Conv2d(220, 44, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(44, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(feblock1): FEBlock3n2s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(11, 66, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(66, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(66, 11, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(11, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(22, 132, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(132, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(132, 22, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(22, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibpool): IBPool(
(conv1): Conv2d(44, 528, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(528, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(conv2): Conv2d(528, 88, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(feblock2): FEBlock4n2s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(11, 66, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(66, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(66, 11, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(11, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(22, 132, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(132, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(132, 22, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(22, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_3): ResIBSSL(
(conv1): Conv2d(44, 264, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(264, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(264, 44, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(44, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibpool): IBPool(
(conv1): Conv2d(88, 1056, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(1056, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(conv2): Conv2d(1056, 176, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(176, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(feblock3): FEBlock4n1s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(22, 132, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(132, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(132, 22, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(22, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(44, 264, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(264, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(264, 44, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(44, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_3): ResIBSSL(
(conv1): Conv2d(88, 528, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(528, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(528, 88, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibssl): IBSSL(
(conv1): Conv2d(176, 1056, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(1056, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(1056, 176, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(176, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(feblock4): FEBlock4n2s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(22, 132, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(132, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(132, 22, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(22, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(44, 264, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(264, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(264, 44, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(44, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_3): ResIBSSL(
(conv1): Conv2d(88, 528, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(528, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(528, 88, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibpool): IBPool(
(conv1): Conv2d(176, 2112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(2112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(conv2): Conv2d(2112, 352, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(feblock5): FEBlock3n1s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(88, 528, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(528, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(528, 88, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(176, 1056, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(1056, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(1056, 176, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(176, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibssl): IBSSL(
(conv1): Conv2d(352, 2112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(2112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(2112, 352, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(conv2): Conv2d(352, 1932, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(1932, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(gap): AdaptiveAvgPool2d(output_size=(1, 1))
(dropout): Dropout(p=0.2, inplace=False)
(fc): Conv2d(1932, 1000, kernel_size=(1, 1), stride=(1, 1))
)
)
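Every layer in the printout above is either a bias-free conv or an affine BatchNorm2d, so parameter counts can be read straight off it. A small sketch of that arithmetic, checked against the stem and the first IBSSL block (pure Python; counts follow directly from the printed shapes):

```python
def conv_params(c_in: int, c_out: int, k: int = 1) -> int:
    """Weight count of a bias-free Conv2d, as printed (bias=False)."""
    return c_in * c_out * k * k

def bn_params(c: int) -> int:
    """Affine BatchNorm2d: one scale and one shift per channel."""
    return 2 * c

# Stem: Conv2d(3, 22, 3x3) + BatchNorm2d(22), from the printout.
stem = conv_params(3, 22, k=3) + bn_params(22)

# First IBSSL block: 1x1 expand 22->88 (4x), shift (no parameters), 1x1 project 88->22.
ibssl = (conv_params(22, 88) + bn_params(88)
         + conv_params(88, 22) + bn_params(22))

print(stem, ibssl)
```

The same two helpers cover the IBPool blocks as well, since their AvgPool2d shift stage is also parameter-free.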
[2022-06-14 13:32:02,775] Epoch: 0
[2022-06-14 13:43:13,366] Train: Loss: 5.624 | Acc: 5.652 (72411/1281167) | Lr: 0.6
[2022-06-14 13:43:58,082] Test: Loss: 5.094 | Acc: 9.416 (4708/50000)
[2022-06-14 13:43:58,082] Saving..
[2022-06-14 13:43:58,194] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 13:43:58,194] Epoch: 1
[2022-06-14 13:54:33,611] Train: Loss: 4.000 | Acc: 21.036 (269504/1281167) | Lr: 0.5999935746063304
[2022-06-14 13:55:16,124] Test: Loss: 3.953 | Acc: 21.048 (10524/50000)
[2022-06-14 13:55:16,124] Saving..
[2022-06-14 13:55:16,213] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 13:55:16,213] Epoch: 2
[2022-06-14 14:05:41,313] Train: Loss: 3.395 | Acc: 29.920 (383330/1281167) | Lr: 0.5999742987005642
[2022-06-14 14:06:26,271] Test: Loss: 3.882 | Acc: 23.036 (11518/50000)
[2022-06-14 14:06:26,272] Saving..
[2022-06-14 14:06:26,366] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
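The logged Lr values decay smoothly from 0.6 and match half-cosine annealing over the 480-epoch budget to within about 1e-6. A sketch of that schedule (the exact scheduler is an assumption inferred from the logged values, not read from the training code):

```python
import math

BASE_LR, EPOCHS = 0.6, 480

def cosine_lr(epoch: int) -> float:
    # Half-cosine decay: BASE_LR at epoch 0, approaching 0 at epoch EPOCHS.
    return 0.5 * BASE_LR * (1.0 + math.cos(math.pi * epoch / EPOCHS))

# Compare against the Lr values printed in the log above.
print(cosine_lr(0), cosine_lr(1), cosine_lr(2))
```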
[2022-06-14 14:06:26,366] Epoch: 3
[2022-06-14 14:16:59,658] Train: Loss: 3.076 | Acc: 35.194 (450900/1281167) | Lr: 0.599942173108417
[2022-06-14 14:17:43,553] Test: Loss: 3.348 | Acc: 30.914 (15457/50000)
[2022-06-14 14:17:43,553] Saving..
[2022-06-14 14:17:43,627] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 14:17:43,627] Epoch: 4
[2022-06-14 14:28:19,790] Train: Loss: 2.891 | Acc: 38.242 (489938/1281167) | Lr: 0.5998971992060422
[2022-06-14 14:29:04,393] Test: Loss: 2.864 | Acc: 37.842 (18921/50000)
[2022-06-14 14:29:04,393] Saving..
[2022-06-14 14:29:04,489] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 14:29:04,490] Epoch: 5
[2022-06-14 14:39:52,025] Train: Loss: 2.750 | Acc: 40.683 (521223/1281167) | Lr: 0.5998393789199723
[2022-06-14 14:40:33,955] Test: Loss: 2.716 | Acc: 39.920 (19960/50000)
[2022-06-14 14:40:33,956] Saving..
[2022-06-14 14:40:34,047] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 14:40:34,047] Epoch: 6
[2022-06-14 14:51:31,977] Train: Loss: 2.646 | Acc: 42.441 (543737/1281167) | Lr: 0.5997687147270356
[2022-06-14 14:52:13,598] Test: Loss: 2.803 | Acc: 39.512 (19756/50000)
[2022-06-14 14:52:13,599] Epoch: 7
[2022-06-14 15:02:58,481] Train: Loss: 2.570 | Acc: 43.805 (561213/1281167) | Lr: 0.5996852096542512
[2022-06-14 15:03:44,069] Test: Loss: 3.103 | Acc: 34.052 (17026/50000)
[2022-06-14 15:03:44,070] Epoch: 8
[2022-06-14 15:14:27,628] Train: Loss: 2.510 | Acc: 44.855 (574662/1281167) | Lr: 0.5995888672786983
[2022-06-14 15:15:08,360] Test: Loss: 3.075 | Acc: 35.408 (17704/50000)
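Note that epochs 6-8 above print no "Saving.." line: the checkpoint is written only when test accuracy sets a new best. A minimal sketch of that best-accuracy gate (hypothetical helper name; the repo's actual save logic is not shown in the log):

```python
def epochs_saved(test_accs):
    """Return the epoch indices at which a best-so-far checkpoint would be written."""
    best, saved = float("-inf"), []
    for epoch, acc in enumerate(test_accs):
        if acc > best:  # the log prints "Saving.." only on improvement
            best = acc
            saved.append(epoch)
    return saved

# Test accuracies from epochs 0-9 of the log above.
accs = [9.416, 21.048, 23.036, 30.914, 37.842, 39.920, 39.512, 34.052, 35.408, 43.052]
print(epochs_saved(accs))
```

Feeding in the epoch 0-9 accuracies reproduces exactly the epochs where the log shows a save (0 through 5, then 9).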
[2022-06-14 15:15:08,360] Epoch: 9
[2022-06-14 15:25:46,619] Train: Loss: 2.461 | Acc: 45.707 (585586/1281167) | Lr: 0.5994796917273638
[2022-06-14 15:26:27,740] Test: Loss: 2.554 | Acc: 43.052 (21526/50000)
[2022-06-14 15:26:27,741] Saving..
[2022-06-14 15:26:27,979] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 15:26:27,979] Epoch: 10
[2022-06-14 15:37:08,320] Train: Loss: 2.428 | Acc: 46.388 (594308/1281167) | Lr: 0.5993576876769647
[2022-06-14 15:37:48,632] Test: Loss: 2.423 | Acc: 44.948 (22474/50000)
[2022-06-14 15:37:48,632] Saving..
[2022-06-14 15:37:48,844] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 15:37:48,844] Epoch: 11
[2022-06-14 15:48:33,237] Train: Loss: 2.393 | Acc: 47.005 (602211/1281167) | Lr: 0.5992228603537487
[2022-06-14 15:49:13,749] Test: Loss: 2.526 | Acc: 43.596 (21798/50000)
[2022-06-14 15:49:13,750] Epoch: 12
[2022-06-14 16:00:00,291] Train: Loss: 2.362 | Acc: 47.520 (608813/1281167) | Lr: 0.5990752155332696
[2022-06-14 16:00:40,616] Test: Loss: 2.337 | Acc: 46.844 (23422/50000)
[2022-06-14 16:00:40,616] Saving..
[2022-06-14 16:00:40,694] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 16:00:40,694] Epoch: 13
[2022-06-14 16:11:18,434] Train: Loss: 2.345 | Acc: 47.883 (613460/1281167) | Lr: 0.5989147595401398
[2022-06-14 16:11:59,200] Test: Loss: 2.523 | Acc: 42.970 (21485/50000)
[2022-06-14 16:11:59,200] Epoch: 14
[2022-06-14 16:22:41,525] Train: Loss: 2.326 | Acc: 48.259 (618279/1281167) | Lr: 0.5987414992477603
[2022-06-14 16:23:27,119] Test: Loss: 2.312 | Acc: 47.038 (23519/50000)
[2022-06-14 16:23:27,120] Saving..
[2022-06-14 16:23:27,197] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 16:23:27,197] Epoch: 15
[2022-06-14 16:34:14,094] Train: Loss: 2.306 | Acc: 48.583 (622427/1281167) | Lr: 0.5985554420780254
[2022-06-14 16:34:55,070] Test: Loss: 2.409 | Acc: 45.420 (22710/50000)
[2022-06-14 16:34:55,070] Epoch: 16
[2022-06-14 16:45:43,335] Train: Loss: 2.291 | Acc: 48.858 (625955/1281167) | Lr: 0.5983565960010048
[2022-06-14 16:46:23,523] Test: Loss: 2.375 | Acc: 46.404 (23202/50000)
[2022-06-14 16:46:23,524] Epoch: 17
[2022-06-14 16:57:12,860] Train: Loss: 2.278 | Acc: 49.148 (629672/1281167) | Lr: 0.5981449695346027
[2022-06-14 16:57:55,657] Test: Loss: 2.660 | Acc: 41.692 (20846/50000)
[2022-06-14 16:57:55,657] Epoch: 18
[2022-06-14 17:08:51,907] Train: Loss: 2.258 | Acc: 49.456 (633611/1281167) | Lr: 0.5979205717441928
[2022-06-14 17:09:32,772] Test: Loss: 2.337 | Acc: 46.910 (23455/50000)
[2022-06-14 17:09:32,772] Epoch: 19
[2022-06-14 17:20:13,012] Train: Loss: 2.253 | Acc: 49.618 (635692/1281167) | Lr: 0.5976834122422292
[2022-06-14 17:20:53,306] Test: Loss: 2.661 | Acc: 42.094 (21047/50000)
[2022-06-14 17:20:53,306] Epoch: 20
[2022-06-14 17:31:33,757] Train: Loss: 2.236 | Acc: 49.924 (639610/1281167) | Lr: 0.5974335011878359
[2022-06-14 17:32:14,986] Test: Loss: 2.401 | Acc: 46.084 (23042/50000)
[2022-06-14 17:32:14,986] Epoch: 21
[2022-06-14 17:42:51,104] Train: Loss: 2.230 | Acc: 50.039 (641080/1281167) | Lr: 0.5971708492863705
[2022-06-14 17:43:31,331] Test: Loss: 2.297 | Acc: 47.752 (23876/50000)
[2022-06-14 17:43:31,331] Saving..
[2022-06-14 17:43:31,410] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 17:43:31,410] Epoch: 22
[2022-06-14 17:54:06,303] Train: Loss: 2.225 | Acc: 50.118 (642095/1281167) | Lr: 0.5968954677889666
[2022-06-14 17:54:46,616] Test: Loss: 2.394 | Acc: 46.472 (23236/50000)
[2022-06-14 17:54:46,617] Epoch: 23
[2022-06-14 18:05:24,163] Train: Loss: 2.214 | Acc: 50.337 (644899/1281167) | Lr: 0.5966073684920506
[2022-06-14 18:06:03,818] Test: Loss: 2.287 | Acc: 47.892 (23946/50000)
[2022-06-14 18:06:03,819] Saving..
[2022-06-14 18:06:03,893] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 18:06:03,894] Epoch: 24
[2022-06-14 18:16:32,445] Train: Loss: 2.209 | Acc: 50.383 (645487/1281167) | Lr: 0.596306563736838
[2022-06-14 18:17:12,486] Test: Loss: 2.442 | Acc: 45.324 (22662/50000)
[2022-06-14 18:17:12,487] Epoch: 25
[2022-06-14 18:27:53,546] Train: Loss: 2.198 | Acc: 50.715 (649741/1281167) | Lr: 0.5959930664088029
[2022-06-14 18:28:34,324] Test: Loss: 2.223 | Acc: 48.950 (24475/50000)
[2022-06-14 18:28:34,325] Saving..
[2022-06-14 18:28:34,398] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 18:28:34,399] Epoch: 26
[2022-06-14 18:39:16,584] Train: Loss: 2.192 | Acc: 50.713 (649715/1281167) | Lr: 0.5956668899371277
[2022-06-14 18:39:56,759] Test: Loss: 2.535 | Acc: 43.870 (21935/50000)
[2022-06-14 18:39:56,759] Epoch: 27
[2022-06-14 18:50:32,616] Train: Loss: 2.186 | Acc: 50.841 (651355/1281167) | Lr: 0.5953280482941267
[2022-06-14 18:51:12,807] Test: Loss: 2.360 | Acc: 46.996 (23498/50000)
[2022-06-14 18:51:12,807] Epoch: 28
[2022-06-14 19:01:58,573] Train: Loss: 2.180 | Acc: 50.907 (652209/1281167) | Lr: 0.5949765559946483
[2022-06-14 19:02:44,850] Test: Loss: 2.486 | Acc: 44.584 (22292/50000)
[2022-06-14 19:02:44,850] Epoch: 29
[2022-06-14 19:13:27,571] Train: Loss: 2.174 | Acc: 51.075 (654354/1281167) | Lr: 0.5946124280954524
[2022-06-14 19:14:05,987] Test: Loss: 2.387 | Acc: 47.232 (23616/50000)
[2022-06-14 19:14:05,988] Epoch: 30
[2022-06-14 19:24:44,550] Train: Loss: 2.166 | Acc: 51.210 (656090/1281167) | Lr: 0.5942356801945667
[2022-06-14 19:25:25,812] Test: Loss: 2.297 | Acc: 47.504 (23752/50000)
[2022-06-14 19:25:25,813] Epoch: 31
[2022-06-14 19:35:57,331] Train: Loss: 2.166 | Acc: 51.212 (656112/1281167) | Lr: 0.5938463284306172
[2022-06-14 19:36:38,698] Test: Loss: 2.429 | Acc: 45.842 (22921/50000)
[2022-06-14 19:36:38,698] Epoch: 32
[2022-06-14 19:47:14,234] Train: Loss: 2.158 | Acc: 51.391 (658401/1281167) | Lr: 0.5934443894821377
[2022-06-14 19:47:56,573] Test: Loss: 2.264 | Acc: 48.432 (24216/50000)
[2022-06-14 19:47:56,574] Epoch: 33
[2022-06-14 19:58:42,256] Train: Loss: 2.154 | Acc: 51.474 (659466/1281167) | Lr: 0.5930298805668548
[2022-06-14 19:59:22,889] Test: Loss: 2.315 | Acc: 47.422 (23711/50000)
[2022-06-14 19:59:22,890] Epoch: 34
[2022-06-14 20:10:02,784] Train: Loss: 2.153 | Acc: 51.499 (659794/1281167) | Lr: 0.592602819440951
[2022-06-14 20:10:42,358] Test: Loss: 2.860 | Acc: 39.660 (19830/50000)
[2022-06-14 20:10:42,358] Epoch: 35
[2022-06-14 20:21:21,158] Train: Loss: 2.146 | Acc: 51.637 (661556/1281167) | Lr: 0.5921632243983034
[2022-06-14 20:22:02,392] Test: Loss: 2.305 | Acc: 47.618 (23809/50000)
[2022-06-14 20:22:02,392] Epoch: 36
[2022-06-14 20:32:46,833] Train: Loss: 2.142 | Acc: 51.696 (662310/1281167) | Lr: 0.5917111142697007
[2022-06-14 20:33:28,184] Test: Loss: 2.907 | Acc: 38.938 (19469/50000)
[2022-06-14 20:33:28,184] Epoch: 37
[2022-06-14 20:44:03,198] Train: Loss: 2.140 | Acc: 51.725 (662678/1281167) | Lr: 0.591246508422036
[2022-06-14 20:44:43,101] Test: Loss: 2.433 | Acc: 46.006 (23003/50000)
[2022-06-14 20:44:43,102] Epoch: 38
[2022-06-14 20:55:21,813] Train: Loss: 2.140 | Acc: 51.702 (662388/1281167) | Lr: 0.5907694267574775
[2022-06-14 20:56:01,441] Test: Loss: 2.418 | Acc: 46.190 (23095/50000)
[2022-06-14 20:56:01,441] Epoch: 39
[2022-06-14 21:06:35,311] Train: Loss: 2.127 | Acc: 52.005 (666276/1281167) | Lr: 0.5902798897126158
[2022-06-14 21:07:16,707] Test: Loss: 2.239 | Acc: 48.852 (24426/50000)
[2022-06-14 21:07:16,707] Epoch: 40
[2022-06-14 21:17:53,365] Train: Loss: 2.130 | Acc: 51.972 (665845/1281167) | Lr: 0.5897779182575887
[2022-06-14 21:18:34,314] Test: Loss: 2.492 | Acc: 44.890 (22445/50000)
[2022-06-14 21:18:34,314] Epoch: 41
[2022-06-14 21:29:15,872] Train: Loss: 2.129 | Acc: 51.932 (665337/1281167) | Lr: 0.5892635338951826
[2022-06-14 21:29:56,470] Test: Loss: 2.681 | Acc: 41.712 (20856/50000)
[2022-06-14 21:29:56,470] Epoch: 42
[2022-06-14 21:40:33,363] Train: Loss: 2.124 | Acc: 51.977 (665913/1281167) | Lr: 0.5887367586599115
[2022-06-14 21:41:13,666] Test: Loss: 2.217 | Acc: 49.368 (24684/50000)
[2022-06-14 21:41:13,666] Saving..
[2022-06-14 21:41:13,739] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 21:41:13,739] Epoch: 43
[2022-06-14 21:52:03,206] Train: Loss: 2.118 | Acc: 52.148 (668102/1281167) | Lr: 0.5881976151170734
[2022-06-14 21:52:44,490] Test: Loss: 2.387 | Acc: 45.804 (22902/50000)
[2022-06-14 21:52:44,491] Epoch: 44
[2022-06-14 22:03:23,469] Train: Loss: 2.116 | Acc: 52.153 (668168/1281167) | Lr: 0.5876461263617831
[2022-06-14 22:04:03,755] Test: Loss: 2.073 | Acc: 51.616 (25808/50000)
[2022-06-14 22:04:03,755] Saving..
[2022-06-14 22:04:03,834] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 22:04:03,834] Epoch: 45
[2022-06-14 22:14:41,855] Train: Loss: 2.114 | Acc: 52.202 (668795/1281167) | Lr: 0.5870823160179836
[2022-06-14 22:15:22,589] Test: Loss: 2.192 | Acc: 49.550 (24775/50000)
[2022-06-14 22:15:22,590] Epoch: 46
[2022-06-14 22:26:05,911] Train: Loss: 2.112 | Acc: 52.306 (670129/1281167) | Lr: 0.5865062082374333
[2022-06-14 22:26:49,889] Test: Loss: 2.313 | Acc: 47.052 (23526/50000)
[2022-06-14 22:26:49,889] Epoch: 47
[2022-06-14 22:37:32,133] Train: Loss: 2.108 | Acc: 52.362 (670847/1281167) | Lr: 0.5859178276986722
[2022-06-14 22:38:13,156] Test: Loss: 2.213 | Acc: 49.102 (24551/50000)
[2022-06-14 22:38:13,157] Epoch: 48
[2022-06-14 22:48:57,027] Train: Loss: 2.107 | Acc: 52.360 (670815/1281167) | Lr: 0.5853171996059642
[2022-06-14 22:49:37,432] Test: Loss: 2.262 | Acc: 47.944 (23972/50000)
[2022-06-14 22:49:37,432] Epoch: 49
[2022-06-14 23:00:25,114] Train: Loss: 2.106 | Acc: 52.364 (670870/1281167) | Lr: 0.5847043496882178
[2022-06-14 23:01:06,841] Test: Loss: 2.468 | Acc: 45.482 (22741/50000)
[2022-06-14 23:01:06,842] Epoch: 50
[2022-06-14 23:12:08,716] Train: Loss: 2.102 | Acc: 52.487 (672448/1281167) | Lr: 0.5840793041978839
[2022-06-14 23:12:49,715] Test: Loss: 2.049 | Acc: 52.140 (26070/50000)
[2022-06-14 23:12:49,715] Saving..
[2022-06-14 23:12:49,804] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-14 23:12:49,804] Epoch: 51
[2022-06-14 23:23:38,169] Train: Loss: 2.101 | Acc: 52.444 (671889/1281167) | Lr: 0.5834420899098308
[2022-06-14 23:24:19,272] Test: Loss: 2.112 | Acc: 51.202 (25601/50000)
[2022-06-14 23:24:19,272] Epoch: 52
[2022-06-14 23:35:06,887] Train: Loss: 2.101 | Acc: 52.474 (672278/1281167) | Lr: 0.5827927341201978
[2022-06-14 23:35:47,784] Test: Loss: 2.245 | Acc: 48.602 (24301/50000)
[2022-06-14 23:35:47,784] Epoch: 53
[2022-06-14 23:46:32,633] Train: Loss: 2.095 | Acc: 52.630 (674278/1281167) | Lr: 0.5821312646452258
[2022-06-14 23:47:13,474] Test: Loss: 2.167 | Acc: 50.070 (25035/50000)
[2022-06-14 23:47:13,474] Epoch: 54
[2022-06-14 23:57:46,934] Train: Loss: 2.096 | Acc: 52.522 (672895/1281167) | Lr: 0.5814577098200655
[2022-06-14 23:58:26,220] Test: Loss: 2.312 | Acc: 47.138 (23569/50000)
[2022-06-14 23:58:26,221] Epoch: 55
[2022-06-15 00:09:13,917] Train: Loss: 2.093 | Acc: 52.615 (674091/1281167) | Lr: 0.5807720984975637
[2022-06-15 00:09:55,319] Test: Loss: 2.241 | Acc: 48.794 (24397/50000)
[2022-06-15 00:09:55,319] Epoch: 56
[2022-06-15 00:20:29,212] Train: Loss: 2.087 | Acc: 52.672 (674815/1281167) | Lr: 0.5800744600470279
[2022-06-15 00:21:10,466] Test: Loss: 2.312 | Acc: 47.118 (23559/50000)
[2022-06-15 00:21:10,466] Epoch: 57
[2022-06-15 00:31:45,167] Train: Loss: 2.092 | Acc: 52.728 (675535/1281167) | Lr: 0.5793648243529671
[2022-06-15 00:32:24,803] Test: Loss: 2.423 | Acc: 45.846 (22923/50000)
[2022-06-15 00:32:24,803] Epoch: 58
[2022-06-15 00:42:57,920] Train: Loss: 2.087 | Acc: 52.740 (675693/1281167) | Lr: 0.5786432218138128
[2022-06-15 00:43:38,186] Test: Loss: 2.411 | Acc: 46.062 (23031/50000)
[2022-06-15 00:43:38,186] Epoch: 59
[2022-06-15 00:54:15,610] Train: Loss: 2.083 | Acc: 52.877 (677437/1281167) | Lr: 0.5779096833406159
[2022-06-15 00:54:56,329] Test: Loss: 2.276 | Acc: 48.172 (24086/50000)
[2022-06-15 00:54:56,330] Epoch: 60
[2022-06-15 01:05:52,334] Train: Loss: 2.086 | Acc: 52.705 (675237/1281167) | Lr: 0.5771642403557232
[2022-06-15 01:06:32,587] Test: Loss: 2.210 | Acc: 49.414 (24707/50000)
[2022-06-15 01:06:32,588] Epoch: 61
[2022-06-15 01:17:11,868] Train: Loss: 2.080 | Acc: 52.930 (678118/1281167) | Lr: 0.5764069247914314
[2022-06-15 01:17:52,057] Test: Loss: 2.279 | Acc: 48.170 (24085/50000)
[2022-06-15 01:17:52,057] Epoch: 62
[2022-06-15 01:28:29,818] Train: Loss: 2.078 | Acc: 52.980 (678757/1281167) | Lr: 0.5756377690886185
[2022-06-15 01:29:09,544] Test: Loss: 2.377 | Acc: 46.064 (23032/50000)
[2022-06-15 01:29:09,558] Epoch: 63
[2022-06-15 01:39:44,155] Train: Loss: 2.076 | Acc: 52.958 (678485/1281167) | Lr: 0.574856806195355
[2022-06-15 01:40:25,624] Test: Loss: 2.297 | Acc: 48.286 (24143/50000)
[2022-06-15 01:40:25,624] Epoch: 64
[2022-06-15 01:51:13,750] Train: Loss: 2.076 | Acc: 52.979 (678751/1281167) | Lr: 0.5740640695654917
[2022-06-15 01:51:53,786] Test: Loss: 2.231 | Acc: 48.922 (24461/50000)
[2022-06-15 01:51:53,786] Epoch: 65
[2022-06-15 02:02:23,125] Train: Loss: 2.075 | Acc: 53.040 (679529/1281167) | Lr: 0.5732595931572279
[2022-06-15 02:03:03,038] Test: Loss: 3.069 | Acc: 36.336 (18168/50000)
[2022-06-15 02:03:03,039] Epoch: 66
[2022-06-15 02:13:37,237] Train: Loss: 2.076 | Acc: 52.929 (678110/1281167) | Lr: 0.572443411431655
[2022-06-15 02:14:16,772] Test: Loss: 2.312 | Acc: 48.162 (24081/50000)
[2022-06-15 02:14:16,772] Epoch: 67
[2022-06-15 02:24:48,938] Train: Loss: 2.066 | Acc: 53.176 (681268/1281167) | Lr: 0.5716155593512818
[2022-06-15 02:25:34,053] Test: Loss: 2.241 | Acc: 49.058 (24529/50000)
[2022-06-15 02:25:34,054] Epoch: 68
[2022-06-15 02:36:09,204] Train: Loss: 2.074 | Acc: 53.043 (679570/1281167) | Lr: 0.5707760723785362
[2022-06-15 02:36:48,890] Test: Loss: 2.344 | Acc: 47.094 (23547/50000)
[2022-06-15 02:36:48,891] Epoch: 69
[2022-06-15 02:47:24,622] Train: Loss: 2.065 | Acc: 53.222 (681859/1281167) | Lr: 0.5699249864742459
[2022-06-15 02:48:05,196] Test: Loss: 2.472 | Acc: 45.394 (22697/50000)
[2022-06-15 02:48:05,197] Epoch: 70
[2022-06-15 02:58:38,339] Train: Loss: 2.066 | Acc: 53.179 (681309/1281167) | Lr: 0.5690623380960986
[2022-06-15 02:59:19,561] Test: Loss: 2.359 | Acc: 47.150 (23575/50000)
[2022-06-15 02:59:19,561] Epoch: 71
[2022-06-15 03:10:08,880] Train: Loss: 2.066 | Acc: 53.164 (681116/1281167) | Lr: 0.5681881641970796
[2022-06-15 03:10:48,785] Test: Loss: 2.216 | Acc: 49.556 (24778/50000)
[2022-06-15 03:10:48,785] Epoch: 72
[2022-06-15 03:21:47,983] Train: Loss: 2.059 | Acc: 53.341 (683383/1281167) | Lr: 0.5673025022238892
[2022-06-15 03:22:28,592] Test: Loss: 2.100 | Acc: 51.326 (25663/50000)
[2022-06-15 03:22:28,592] Epoch: 73
[2022-06-15 03:33:28,849] Train: Loss: 2.061 | Acc: 53.260 (682351/1281167) | Lr: 0.5664053901153387
[2022-06-15 03:34:09,694] Test: Loss: 2.212 | Acc: 49.788 (24894/50000)
[2022-06-15 03:34:09,695] Epoch: 74
[2022-06-15 03:45:01,076] Train: Loss: 2.062 | Acc: 53.183 (681357/1281167) | Lr: 0.565496866300725
[2022-06-15 03:45:40,374] Test: Loss: 2.217 | Acc: 49.942 (24971/50000)
[2022-06-15 03:45:40,374] Epoch: 75
[2022-06-15 03:56:34,918] Train: Loss: 2.062 | Acc: 53.273 (682521/1281167) | Lr: 0.5645769696981845
[2022-06-15 03:57:14,636] Test: Loss: 2.435 | Acc: 45.828 (22914/50000)
[2022-06-15 03:57:14,636] Epoch: 76
[2022-06-15 04:08:06,768] Train: Loss: 2.061 | Acc: 53.254 (682267/1281167) | Lr: 0.563645739713026
[2022-06-15 04:08:46,981] Test: Loss: 2.180 | Acc: 50.168 (25084/50000)
[2022-06-15 04:08:46,981] Epoch: 77
[2022-06-15 04:19:39,862] Train: Loss: 2.054 | Acc: 53.332 (683273/1281167) | Lr: 0.5627032162360428
[2022-06-15 04:20:23,016] Test: Loss: 2.051 | Acc: 52.368 (26184/50000)
[2022-06-15 04:20:23,017] Saving..
[2022-06-15 04:20:23,119] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-15 04:20:23,119] Epoch: 78
[2022-06-15 04:31:00,275] Train: Loss: 2.053 | Acc: 53.418 (684374/1281167) | Lr: 0.5617494396418036
[2022-06-15 04:31:39,532] Test: Loss: 2.076 | Acc: 51.810 (25905/50000)
[2022-06-15 04:31:39,532] Epoch: 79
[2022-06-15 04:42:19,896] Train: Loss: 2.050 | Acc: 53.479 (685152/1281167) | Lr: 0.5607844507869232
[2022-06-15 04:43:00,629] Test: Loss: 2.510 | Acc: 44.480 (22240/50000)
[2022-06-15 04:43:00,629] Epoch: 80
[2022-06-15 04:53:43,462] Train: Loss: 2.053 | Acc: 53.427 (684484/1281167) | Lr: 0.5598082910083125
[2022-06-15 04:54:25,343] Test: Loss: 2.214 | Acc: 48.890 (24445/50000)
[2022-06-15 04:54:25,343] Epoch: 81
[2022-06-15 05:05:06,384] Train: Loss: 2.050 | Acc: 53.465 (684982/1281167) | Lr: 0.5588210021214074
[2022-06-15 05:05:46,609] Test: Loss: 2.675 | Acc: 41.876 (20938/50000)
[2022-06-15 05:05:46,610] Epoch: 82
[2022-06-15 05:16:27,616] Train: Loss: 2.045 | Acc: 53.597 (686664/1281167) | Lr: 0.5578226264183781
[2022-06-15 05:17:08,380] Test: Loss: 2.793 | Acc: 41.190 (20595/50000)
[2022-06-15 05:17:08,380] Epoch: 83
[2022-06-15 05:27:56,403] Train: Loss: 2.048 | Acc: 53.500 (685430/1281167) | Lr: 0.5568132066663166
[2022-06-15 05:28:36,435] Test: Loss: 1.943 | Acc: 54.436 (27218/50000)
[2022-06-15 05:28:36,436] Saving..
[2022-06-15 05:28:36,521] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-15 05:28:36,521] Epoch: 84
[2022-06-15 05:39:13,223] Train: Loss: 2.046 | Acc: 53.584 (686499/1281167) | Lr: 0.5557927861054056
[2022-06-15 05:39:54,074] Test: Loss: 2.254 | Acc: 48.316 (24158/50000)
[2022-06-15 05:39:54,075] Epoch: 85
[2022-06-15 05:50:48,002] Train: Loss: 2.047 | Acc: 53.604 (686762/1281167) | Lr: 0.5547614084470658
[2022-06-15 05:51:28,550] Test: Loss: 2.199 | Acc: 49.300 (24650/50000)
[2022-06-15 05:51:28,550] Epoch: 86
[2022-06-15 06:02:28,163] Train: Loss: 2.038 | Acc: 53.677 (687692/1281167) | Lr: 0.5537191178720833
[2022-06-15 06:03:08,843] Test: Loss: 1.991 | Acc: 53.910 (26955/50000)
[2022-06-15 06:03:08,844] Epoch: 87
[2022-06-15 06:13:56,449] Train: Loss: 2.041 | Acc: 53.679 (687714/1281167) | Lr: 0.5526659590287172
[2022-06-15 06:14:38,282] Test: Loss: 2.082 | Acc: 51.716 (25858/50000)
[2022-06-15 06:14:38,282] Epoch: 88
[2022-06-15 06:25:44,671] Train: Loss: 2.039 | Acc: 53.673 (687636/1281167) | Lr: 0.5516019770307873
[2022-06-15 06:26:24,419] Test: Loss: 2.124 | Acc: 51.216 (25608/50000)
[2022-06-15 06:26:24,419] Epoch: 89
[2022-06-15 06:37:02,289] Train: Loss: 2.039 | Acc: 53.696 (687932/1281167) | Lr: 0.5505272174557411
[2022-06-15 06:37:42,504] Test: Loss: 2.023 | Acc: 52.822 (26411/50000)
[2022-06-15 06:37:42,504] Epoch: 90
[2022-06-15 06:48:29,286] Train: Loss: 2.036 | Acc: 53.719 (688230/1281167) | Lr: 0.5494417263427018
[2022-06-15 06:49:15,293] Test: Loss: 2.425 | Acc: 46.160 (23080/50000)
[2022-06-15 06:49:15,293] Epoch: 91
[2022-06-15 06:59:48,958] Train: Loss: 2.037 | Acc: 53.702 (688010/1281167) | Lr: 0.5483455501904958
[2022-06-15 07:00:29,446] Test: Loss: 2.253 | Acc: 48.754 (24377/50000)
[2022-06-15 07:00:29,447] Epoch: 92
[2022-06-15 07:11:11,285] Train: Loss: 2.033 | Acc: 53.873 (690199/1281167) | Lr: 0.5472387359556613
[2022-06-15 07:11:51,064] Test: Loss: 2.065 | Acc: 52.526 (26263/50000)
[2022-06-15 07:11:51,065] Epoch: 93
[2022-06-15 07:22:29,680] Train: Loss: 2.032 | Acc: 53.773 (688926/1281167) | Lr: 0.5461213310504361
[2022-06-15 07:23:09,816] Test: Loss: 2.283 | Acc: 48.244 (24122/50000)
[2022-06-15 07:23:09,817] Epoch: 94
[2022-06-15 07:33:56,995] Train: Loss: 2.032 | Acc: 53.853 (689948/1281167) | Lr: 0.5449933833407276
[2022-06-15 07:34:37,449] Test: Loss: 2.217 | Acc: 49.482 (24741/50000)
[2022-06-15 07:34:37,449] Epoch: 95
[2022-06-15 07:45:23,691] Train: Loss: 2.028 | Acc: 53.931 (690944/1281167) | Lr: 0.5438549411440613
[2022-06-15 07:46:04,801] Test: Loss: 2.061 | Acc: 51.634 (25817/50000)
[2022-06-15 07:46:04,801] Epoch: 96
[2022-06-15 07:57:09,902] Train: Loss: 2.036 | Acc: 53.709 (688103/1281167) | Lr: 0.542706053227512
[2022-06-15 07:57:49,452] Test: Loss: 1.990 | Acc: 53.702 (26851/50000)
[2022-06-15 07:57:49,452] Epoch: 97
[2022-06-15 08:08:37,489] Train: Loss: 2.025 | Acc: 53.917 (690769/1281167) | Lr: 0.5415467688056143
[2022-06-15 08:09:19,398] Test: Loss: 2.125 | Acc: 51.676 (25838/50000)
[2022-06-15 08:09:19,398] Epoch: 98
[2022-06-15 08:20:03,847] Train: Loss: 2.024 | Acc: 53.968 (691419/1281167) | Lr: 0.5403771375382543
[2022-06-15 08:20:44,959] Test: Loss: 2.606 | Acc: 42.560 (21280/50000)
[2022-06-15 08:20:44,959] Epoch: 99
[2022-06-15 08:31:19,281] Train: Loss: 2.026 | Acc: 53.935 (690995/1281167) | Lr: 0.5391972095285429
[2022-06-15 08:31:59,799] Test: Loss: 2.268 | Acc: 48.702 (24351/50000)
[2022-06-15 08:31:59,800] Epoch: 100
[2022-06-15 08:42:35,830] Train: Loss: 2.019 | Acc: 54.097 (693069/1281167) | Lr: 0.5380070353206687
[2022-06-15 08:43:18,098] Test: Loss: 2.019 | Acc: 52.980 (26490/50000)
[2022-06-15 08:43:18,099] Epoch: 101
[2022-06-15 08:54:00,059] Train: Loss: 2.022 | Acc: 54.023 (692131/1281167) | Lr: 0.5368066658977336
[2022-06-15 08:54:41,851] Test: Loss: 2.119 | Acc: 50.794 (25397/50000)
[2022-06-15 08:54:41,852] Epoch: 102
[2022-06-15 09:05:20,237] Train: Loss: 2.020 | Acc: 54.045 (692401/1281167) | Lr: 0.5355961526795687
[2022-06-15 09:06:01,072] Test: Loss: 2.187 | Acc: 49.976 (24988/50000)
[2022-06-15 09:06:01,072] Epoch: 103
[2022-06-15 09:16:50,497] Train: Loss: 2.016 | Acc: 54.195 (694330/1281167) | Lr: 0.5343755475205313
[2022-06-15 09:17:29,903] Test: Loss: 1.969 | Acc: 54.008 (27004/50000)
[2022-06-15 09:17:29,903] Epoch: 104
[2022-06-15 09:28:04,975] Train: Loss: 2.018 | Acc: 54.141 (693634/1281167) | Lr: 0.5331449027072837
[2022-06-15 09:28:50,214] Test: Loss: 2.035 | Acc: 52.808 (26404/50000)
[2022-06-15 09:28:50,214] Epoch: 105
[2022-06-15 09:39:25,319] Train: Loss: 2.019 | Acc: 54.120 (693369/1281167) | Lr: 0.5319042709565539
[2022-06-15 09:40:05,842] Test: Loss: 2.387 | Acc: 46.494 (23247/50000)
[2022-06-15 09:40:05,843] Epoch: 106
[2022-06-15 09:50:48,971] Train: Loss: 2.017 | Acc: 54.130 (693491/1281167) | Lr: 0.5306537054128772
[2022-06-15 09:51:29,313] Test: Loss: 2.201 | Acc: 49.538 (24769/50000)
[2022-06-15 09:51:29,314] Epoch: 107
[2022-06-15 10:02:10,020] Train: Loss: 2.015 | Acc: 54.105 (693170/1281167) | Lr: 0.529393259646319
[2022-06-15 10:02:50,925] Test: Loss: 1.911 | Acc: 55.182 (27591/50000)
[2022-06-15 10:02:50,926] Saving..
[2022-06-15 10:02:51,037] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-15 10:02:51,037] Epoch: 108
[2022-06-15 10:13:46,806] Train: Loss: 2.012 | Acc: 54.174 (694054/1281167) | Lr: 0.528122987650181
[2022-06-15 10:14:26,176] Test: Loss: 2.098 | Acc: 51.816 (25908/50000)
[2022-06-15 10:14:26,176] Epoch: 109
[2022-06-15 10:25:25,326] Train: Loss: 2.010 | Acc: 54.277 (695384/1281167) | Lr: 0.5268429438386876
[2022-06-15 10:26:04,459] Test: Loss: 2.169 | Acc: 50.458 (25229/50000)
[2022-06-15 10:26:04,460] Epoch: 110
[2022-06-15 10:36:58,808] Train: Loss: 2.009 | Acc: 54.291 (695558/1281167) | Lr: 0.5255531830446555
[2022-06-15 10:37:39,615] Test: Loss: 1.985 | Acc: 53.538 (26769/50000)
[2022-06-15 10:37:39,616] Epoch: 111
[2022-06-15 10:48:31,737] Train: Loss: 2.002 | Acc: 54.412 (697110/1281167) | Lr: 0.5242537605171443
[2022-06-15 10:49:12,032] Test: Loss: 2.097 | Acc: 51.954 (25977/50000)
[2022-06-15 10:49:12,032] Epoch: 112
[2022-06-15 10:59:50,962] Train: Loss: 2.008 | Acc: 54.297 (695640/1281167) | Lr: 0.5229447319190905
[2022-06-15 11:00:32,119] Test: Loss: 2.153 | Acc: 50.892 (25446/50000)
[2022-06-15 11:00:32,120] Epoch: 113
[2022-06-15 11:11:08,139] Train: Loss: 2.005 | Acc: 54.299 (695657/1281167) | Lr: 0.5216261533249222
[2022-06-15 11:11:49,224] Test: Loss: 2.153 | Acc: 50.546 (25273/50000)
[2022-06-15 11:11:49,225] Epoch: 114
[2022-06-15 11:22:32,077] Train: Loss: 2.001 | Acc: 54.441 (697480/1281167) | Lr: 0.5202980812181581
[2022-06-15 11:23:16,608] Test: Loss: 2.332 | Acc: 48.596 (24298/50000)
[2022-06-15 11:23:16,614] Epoch: 115
[2022-06-15 11:34:02,602] Train: Loss: 2.002 | Acc: 54.437 (697427/1281167) | Lr: 0.5189605724889867
[2022-06-15 11:34:42,931] Test: Loss: 1.996 | Acc: 53.418 (26709/50000)
[2022-06-15 11:34:42,931] Epoch: 116
[2022-06-15 11:45:16,872] Train: Loss: 1.999 | Acc: 54.411 (697101/1281167) | Lr: 0.5176136844318308
[2022-06-15 11:45:57,562] Test: Loss: 2.114 | Acc: 51.006 (25503/50000)
[2022-06-15 11:45:57,563] Epoch: 117
[2022-06-15 11:56:27,665] Train: Loss: 2.000 | Acc: 54.487 (698071/1281167) | Lr: 0.5162574747428917
[2022-06-15 11:57:07,348] Test: Loss: 1.989 | Acc: 53.698 (26849/50000)
[2022-06-15 11:57:07,348] Epoch: 118
[2022-06-15 12:07:49,552] Train: Loss: 1.997 | Acc: 54.484 (698028/1281167) | Lr: 0.5148920015176788
[2022-06-15 12:08:30,620] Test: Loss: 2.137 | Acc: 51.340 (25670/50000)
[2022-06-15 12:08:30,620] Epoch: 119
[2022-06-15 12:19:11,181] Train: Loss: 2.001 | Acc: 54.399 (696940/1281167) | Lr: 0.5135173232485203
[2022-06-15 12:19:50,926] Test: Loss: 2.132 | Acc: 50.958 (25479/50000)
[2022-06-15 12:19:50,927] Epoch: 120
[2022-06-15 12:30:35,060] Train: Loss: 1.992 | Acc: 54.597 (699474/1281167) | Lr: 0.5121334988220579
[2022-06-15 12:31:16,014] Test: Loss: 2.019 | Acc: 53.018 (26509/50000)
[2022-06-15 12:31:16,015] Epoch: 121
[2022-06-15 12:41:57,860] Train: Loss: 1.990 | Acc: 54.667 (700378/1281167) | Lr: 0.5107405875167246
[2022-06-15 12:42:43,218] Test: Loss: 2.186 | Acc: 50.326 (25163/50000)
[2022-06-15 12:42:43,218] Epoch: 122
[2022-06-15 12:53:20,682] Train: Loss: 1.991 | Acc: 54.607 (699608/1281167) | Lr: 0.5093386490002044
[2022-06-15 12:53:59,870] Test: Loss: 2.098 | Acc: 51.678 (25839/50000)
[2022-06-15 12:53:59,870] Epoch: 123
[2022-06-15 13:04:46,489] Train: Loss: 1.986 | Acc: 54.703 (700839/1281167) | Lr: 0.5079277433268776
[2022-06-15 13:05:26,176] Test: Loss: 2.334 | Acc: 47.528 (23764/50000)
[2022-06-15 13:05:26,176] Epoch: 124
[2022-06-15 13:16:08,470] Train: Loss: 1.988 | Acc: 54.735 (701243/1281167) | Lr: 0.5065079309352473
[2022-06-15 13:16:48,191] Test: Loss: 2.037 | Acc: 52.274 (26137/50000)
[2022-06-15 13:16:48,191] Epoch: 125
[2022-06-15 13:27:41,222] Train: Loss: 1.986 | Acc: 54.682 (700562/1281167) | Lr: 0.5050792726453508
[2022-06-15 13:28:22,322] Test: Loss: 2.064 | Acc: 52.352 (26176/50000)
[2022-06-15 13:28:22,322] Epoch: 126
[2022-06-15 13:39:07,423] Train: Loss: 1.987 | Acc: 54.746 (701383/1281167) | Lr: 0.5036418296561543
[2022-06-15 13:39:47,455] Test: Loss: 1.997 | Acc: 53.636 (26818/50000)
[2022-06-15 13:39:47,456] Epoch: 127
[2022-06-15 13:50:28,404] Train: Loss: 1.983 | Acc: 54.734 (701234/1281167) | Lr: 0.5021956635429314
[2022-06-15 13:51:10,625] Test: Loss: 2.735 | Acc: 42.718 (21359/50000)
[2022-06-15 13:51:10,625] Epoch: 128
[2022-06-15 14:01:50,347] Train: Loss: 1.981 | Acc: 54.760 (701565/1281167) | Lr: 0.5007408362546251
[2022-06-15 14:02:38,627] Test: Loss: 2.044 | Acc: 52.804 (26402/50000)
[2022-06-15 14:02:38,628] Epoch: 129
[2022-06-15 14:13:15,322] Train: Loss: 1.979 | Acc: 54.868 (702947/1281167) | Lr: 0.4992774101111944
[2022-06-15 14:13:54,246] Test: Loss: 2.015 | Acc: 52.998 (26499/50000)
[2022-06-15 14:13:54,247] Epoch: 130
[2022-06-15 14:24:41,431] Train: Loss: 1.978 | Acc: 54.887 (703192/1281167) | Lr: 0.4978054478009446
[2022-06-15 14:25:21,311] Test: Loss: 2.268 | Acc: 48.768 (24384/50000)
[2022-06-15 14:25:21,311] Epoch: 131
[2022-06-15 14:36:02,951] Train: Loss: 1.979 | Acc: 54.854 (702774/1281167) | Lr: 0.49632501237784193
[2022-06-15 14:36:42,116] Test: Loss: 1.965 | Acc: 54.312 (27156/50000)
[2022-06-15 14:36:42,116] Epoch: 132
[2022-06-15 14:47:17,666] Train: Loss: 1.975 | Acc: 54.907 (703455/1281167) | Lr: 0.49483616725881285
[2022-06-15 14:48:03,544] Test: Loss: 2.063 | Acc: 52.312 (26156/50000)
[2022-06-15 14:48:03,545] Epoch: 133
[2022-06-15 14:58:41,363] Train: Loss: 1.975 | Acc: 54.973 (704298/1281167) | Lr: 0.49333897622102685
[2022-06-15 14:59:21,461] Test: Loss: 1.868 | Acc: 56.280 (28140/50000)
[2022-06-15 14:59:21,462] Saving..
[2022-06-15 14:59:21,554] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-15 14:59:21,554] Epoch: 134
[2022-06-15 15:10:00,455] Train: Loss: 1.975 | Acc: 54.946 (703949/1281167) | Lr: 0.49183350339916493
[2022-06-15 15:10:41,848] Test: Loss: 2.273 | Acc: 48.774 (24387/50000)
[2022-06-15 15:10:41,848] Epoch: 135
[2022-06-15 15:21:53,142] Train: Loss: 1.974 | Acc: 54.984 (704438/1281167) | Lr: 0.4903198132826722
[2022-06-15 15:22:34,689] Test: Loss: 2.108 | Acc: 51.906 (25953/50000)
[2022-06-15 15:22:34,689] Epoch: 136
[2022-06-15 15:33:27,695] Train: Loss: 1.971 | Acc: 55.036 (705109/1281167) | Lr: 0.4887979707129954
[2022-06-15 15:34:08,210] Test: Loss: 2.120 | Acc: 51.368 (25684/50000)
[2022-06-15 15:34:08,211] Epoch: 137
[2022-06-15 15:44:54,783] Train: Loss: 1.968 | Acc: 55.092 (705817/1281167) | Lr: 0.487268040880805
[2022-06-15 15:45:36,112] Test: Loss: 1.930 | Acc: 54.338 (27169/50000)
[2022-06-15 15:45:36,113] Epoch: 138
[2022-06-15 15:56:32,898] Train: Loss: 1.968 | Acc: 55.066 (705488/1281167) | Lr: 0.485730089323203
[2022-06-15 15:57:14,591] Test: Loss: 1.867 | Acc: 56.072 (28036/50000)
[2022-06-15 15:57:14,592] Epoch: 139
[2022-06-15 16:08:11,028] Train: Loss: 1.964 | Acc: 55.119 (706169/1281167) | Lr: 0.48418418192091556
[2022-06-15 16:08:52,066] Test: Loss: 2.147 | Acc: 51.042 (25521/50000)
[2022-06-15 16:08:52,067] Epoch: 140
[2022-06-15 16:19:34,464] Train: Loss: 1.967 | Acc: 55.087 (705757/1281167) | Lr: 0.48263038489547055
[2022-06-15 16:20:14,446] Test: Loss: 2.350 | Acc: 47.324 (23662/50000)
[2022-06-15 16:20:14,447] Epoch: 141
[2022-06-15 16:30:50,426] Train: Loss: 1.963 | Acc: 55.238 (707690/1281167) | Lr: 0.48106876480636107
[2022-06-15 16:31:30,437] Test: Loss: 2.030 | Acc: 52.966 (26483/50000)
[2022-06-15 16:31:30,437] Epoch: 142
[2022-06-15 16:42:02,802] Train: Loss: 1.966 | Acc: 55.166 (706773/1281167) | Lr: 0.47949938854819424
[2022-06-15 16:42:42,935] Test: Loss: 2.414 | Acc: 46.548 (23274/50000)
[2022-06-15 16:42:42,935] Epoch: 143
[2022-06-15 16:53:15,678] Train: Loss: 1.963 | Acc: 55.168 (706797/1281167) | Lr: 0.47792232334782575
[2022-06-15 16:53:56,206] Test: Loss: 1.957 | Acc: 54.090 (27045/50000)
[2022-06-15 16:53:56,206] Epoch: 144
[2022-06-15 17:04:32,152] Train: Loss: 1.960 | Acc: 55.245 (707784/1281167) | Lr: 0.47633763676147983
[2022-06-15 17:05:12,813] Test: Loss: 2.658 | Acc: 42.330 (21165/50000)
[2022-06-15 17:05:12,813] Epoch: 145
[2022-06-15 17:15:49,434] Train: Loss: 1.961 | Acc: 55.168 (706792/1281167) | Lr: 0.47474539667185567
[2022-06-15 17:16:30,429] Test: Loss: 2.034 | Acc: 53.006 (26503/50000)
[2022-06-15 17:16:30,430] Epoch: 146
[2022-06-15 17:26:59,767] Train: Loss: 1.954 | Acc: 55.340 (709000/1281167) | Lr: 0.4731456712852192
[2022-06-15 17:27:40,579] Test: Loss: 2.143 | Acc: 51.282 (25641/50000)
[2022-06-15 17:27:40,580] Epoch: 147
[2022-06-15 17:38:12,284] Train: Loss: 1.952 | Acc: 55.384 (709558/1281167) | Lr: 0.47153852912848176
[2022-06-15 17:38:55,373] Test: Loss: 1.935 | Acc: 54.794 (27397/50000)
[2022-06-15 17:38:55,373] Epoch: 148
[2022-06-15 17:49:21,461] Train: Loss: 1.953 | Acc: 55.398 (709744/1281167) | Lr: 0.4699240390462645
[2022-06-15 17:50:01,761] Test: Loss: 1.978 | Acc: 54.066 (27033/50000)
[2022-06-15 17:50:01,761] Epoch: 149
[2022-06-15 18:00:30,772] Train: Loss: 1.952 | Acc: 55.417 (709981/1281167) | Lr: 0.4683022701979489
[2022-06-15 18:01:11,648] Test: Loss: 1.901 | Acc: 55.504 (27752/50000)
[2022-06-15 18:01:11,648] Epoch: 150
[2022-06-15 18:11:38,421] Train: Loss: 1.951 | Acc: 55.382 (709540/1281167) | Lr: 0.4666732920547148
[2022-06-15 18:12:19,533] Test: Loss: 1.992 | Acc: 53.928 (26964/50000)
[2022-06-15 18:12:19,533] Epoch: 151
[2022-06-15 18:22:49,842] Train: Loss: 1.948 | Acc: 55.499 (711030/1281167) | Lr: 0.46503717439656433
[2022-06-15 18:23:30,473] Test: Loss: 1.887 | Acc: 55.666 (27833/50000)
[2022-06-15 18:23:30,473] Epoch: 152
[2022-06-15 18:34:05,620] Train: Loss: 1.944 | Acc: 55.524 (711359/1281167) | Lr: 0.46339398730933234
[2022-06-15 18:34:46,154] Test: Loss: 1.946 | Acc: 54.240 (27120/50000)
[2022-06-15 18:34:46,154] Epoch: 153
[2022-06-15 18:45:25,295] Train: Loss: 1.943 | Acc: 55.586 (712148/1281167) | Lr: 0.46174380118168473
[2022-06-15 18:46:06,710] Test: Loss: 1.974 | Acc: 53.842 (26921/50000)
[2022-06-15 18:46:06,710] Epoch: 154
[2022-06-15 18:56:47,718] Train: Loss: 1.940 | Acc: 55.606 (712402/1281167) | Lr: 0.4600866867021032
[2022-06-15 18:57:26,911] Test: Loss: 1.972 | Acc: 53.590 (26795/50000)
[2022-06-15 18:57:26,911] Epoch: 155
[2022-06-15 19:08:03,419] Train: Loss: 1.943 | Acc: 55.635 (712775/1281167) | Lr: 0.45842271485585645
[2022-06-15 19:08:45,768] Test: Loss: 1.860 | Acc: 56.322 (28161/50000)
[2022-06-15 19:08:45,768] Saving..
[2022-06-15 19:08:45,880] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-15 19:08:45,880] Epoch: 156
[2022-06-15 19:19:39,461] Train: Loss: 1.937 | Acc: 55.702 (713640/1281167) | Lr: 0.45675195692196036
[2022-06-15 19:20:20,851] Test: Loss: 2.056 | Acc: 52.540 (26270/50000)
[2022-06-15 19:20:20,851] Epoch: 157
[2022-06-15 19:31:06,651] Train: Loss: 1.940 | Acc: 55.635 (712778/1281167) | Lr: 0.4550744844701241
[2022-06-15 19:31:47,779] Test: Loss: 1.857 | Acc: 56.542 (28271/50000)
[2022-06-15 19:31:47,780] Saving..
[2022-06-15 19:31:47,850] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-15 19:31:47,851] Epoch: 158
[2022-06-15 19:42:40,560] Train: Loss: 1.931 | Acc: 55.834 (715332/1281167) | Lr: 0.4533903693576845
[2022-06-15 19:43:20,980] Test: Loss: 2.066 | Acc: 51.940 (25970/50000)
[2022-06-15 19:43:20,981] Epoch: 159
[2022-06-15 19:53:47,585] Train: Loss: 1.930 | Acc: 55.793 (714803/1281167) | Lr: 0.4516996837265278
[2022-06-15 19:54:28,295] Test: Loss: 1.836 | Acc: 56.300 (28150/50000)
[2022-06-15 19:54:28,296] Epoch: 160
[2022-06-15 20:05:03,199] Train: Loss: 1.931 | Acc: 55.845 (715468/1281167) | Lr: 0.4500024999999993
[2022-06-15 20:05:43,110] Test: Loss: 1.858 | Acc: 56.178 (28089/50000)
[2022-06-15 20:05:43,110] Epoch: 161
[2022-06-15 20:16:17,665] Train: Loss: 1.928 | Acc: 55.887 (716011/1281167) | Lr: 0.44829889087980124
[2022-06-15 20:16:58,390] Test: Loss: 2.097 | Acc: 52.234 (26117/50000)
[2022-06-15 20:16:58,391] Epoch: 162
[2022-06-15 20:27:37,526] Train: Loss: 1.928 | Acc: 55.877 (715873/1281167) | Lr: 0.4465889293428783
[2022-06-15 20:28:17,202] Test: Loss: 2.203 | Acc: 49.804 (24902/50000)
[2022-06-15 20:28:17,203] Epoch: 163
[2022-06-15 20:38:57,497] Train: Loss: 1.928 | Acc: 55.830 (715277/1281167) | Lr: 0.44487268863829144
[2022-06-15 20:39:37,157] Test: Loss: 1.948 | Acc: 54.560 (27280/50000)
[2022-06-15 20:39:37,158] Epoch: 164
[2022-06-15 20:50:08,780] Train: Loss: 1.927 | Acc: 55.880 (715914/1281167) | Lr: 0.44315024228408056
[2022-06-15 20:50:49,395] Test: Loss: 2.073 | Acc: 52.376 (26188/50000)
[2022-06-15 20:50:49,396] Epoch: 165
[2022-06-15 21:01:25,722] Train: Loss: 1.926 | Acc: 55.934 (716611/1281167) | Lr: 0.44142166406411454
[2022-06-15 21:02:05,787] Test: Loss: 1.859 | Acc: 55.972 (27986/50000)
[2022-06-15 21:02:05,788] Epoch: 166
[2022-06-15 21:12:53,116] Train: Loss: 1.921 | Acc: 55.985 (717255/1281167) | Lr: 0.4396870280249311
[2022-06-15 21:13:33,847] Test: Loss: 1.966 | Acc: 54.354 (27177/50000)
[2022-06-15 21:13:33,847] Epoch: 167
[2022-06-15 21:24:09,964] Train: Loss: 1.921 | Acc: 56.059 (718209/1281167) | Lr: 0.437946408472565
[2022-06-15 21:24:49,659] Test: Loss: 1.984 | Acc: 54.372 (27186/50000)
[2022-06-15 21:24:49,660] Epoch: 168
[2022-06-15 21:35:25,418] Train: Loss: 1.923 | Acc: 56.003 (717498/1281167) | Lr: 0.43619987996936466
[2022-06-15 21:36:06,021] Test: Loss: 1.955 | Acc: 54.610 (27305/50000)
[2022-06-15 21:36:06,021] Epoch: 169
[2022-06-15 21:46:41,277] Train: Loss: 1.916 | Acc: 56.104 (718788/1281167) | Lr: 0.4344475173307981
[2022-06-15 21:47:22,080] Test: Loss: 1.981 | Acc: 54.110 (27055/50000)
[2022-06-15 21:47:22,080] Epoch: 170
[2022-06-15 21:58:03,785] Train: Loss: 1.915 | Acc: 56.074 (718405/1281167) | Lr: 0.4326893956222486
[2022-06-15 21:58:44,642] Test: Loss: 1.821 | Acc: 57.090 (28545/50000)
[2022-06-15 21:58:44,642] Saving..
[2022-06-15 21:58:44,729] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-15 21:58:44,729] Epoch: 171
[2022-06-15 22:09:21,436] Train: Loss: 1.915 | Acc: 56.120 (718991/1281167) | Lr: 0.4309255901557986
[2022-06-15 22:10:02,029] Test: Loss: 2.068 | Acc: 52.446 (26223/50000)
[2022-06-15 22:10:02,030] Epoch: 172
[2022-06-15 22:20:41,284] Train: Loss: 1.915 | Acc: 56.141 (719257/1281167) | Lr: 0.4291561764870039
[2022-06-15 22:21:21,731] Test: Loss: 2.069 | Acc: 53.004 (26502/50000)
[2022-06-15 22:21:21,733] Epoch: 173
[2022-06-15 22:31:55,051] Train: Loss: 1.913 | Acc: 56.109 (718848/1281167) | Lr: 0.42738123041165693
[2022-06-15 22:32:35,916] Test: Loss: 1.857 | Acc: 56.184 (28092/50000)
[2022-06-15 22:32:35,917] Epoch: 174
[2022-06-15 22:43:05,387] Train: Loss: 1.907 | Acc: 56.326 (721635/1281167) | Lr: 0.4256008279625401
[2022-06-15 22:43:46,276] Test: Loss: 1.755 | Acc: 57.972 (28986/50000)
[2022-06-15 22:43:46,277] Saving..
[2022-06-15 22:43:46,348] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-15 22:43:46,348] Epoch: 175
[2022-06-15 22:54:17,714] Train: Loss: 1.905 | Acc: 56.316 (721497/1281167) | Lr: 0.4238150454061688
[2022-06-15 22:54:57,548] Test: Loss: 1.931 | Acc: 54.862 (27431/50000)
[2022-06-15 22:54:57,549] Epoch: 176
[2022-06-15 23:05:29,128] Train: Loss: 1.903 | Acc: 56.367 (722152/1281167) | Lr: 0.4220239592395241
[2022-06-15 23:06:08,418] Test: Loss: 1.963 | Acc: 54.216 (27108/50000)
[2022-06-15 23:06:08,418] Epoch: 177
[2022-06-15 23:16:47,067] Train: Loss: 1.903 | Acc: 56.394 (722505/1281167) | Lr: 0.4202276461867761
[2022-06-15 23:17:28,177] Test: Loss: 1.943 | Acc: 54.842 (27421/50000)
[2022-06-15 23:17:28,178] Epoch: 178
[2022-06-15 23:28:06,668] Train: Loss: 1.897 | Acc: 56.474 (723531/1281167) | Lr: 0.4184261831959976
[2022-06-15 23:28:46,705] Test: Loss: 1.984 | Acc: 54.118 (27059/50000)
[2022-06-15 23:28:46,705] Epoch: 179
[2022-06-15 23:39:19,487] Train: Loss: 1.899 | Acc: 56.415 (722771/1281167) | Lr: 0.4166196474358673
[2022-06-15 23:39:58,782] Test: Loss: 2.018 | Acc: 53.068 (26534/50000)
[2022-06-15 23:39:58,782] Epoch: 180
[2022-06-15 23:50:45,777] Train: Loss: 1.898 | Acc: 56.483 (723639/1281167) | Lr: 0.4148081162923645
[2022-06-15 23:51:26,425] Test: Loss: 1.858 | Acc: 56.138 (28069/50000)
[2022-06-15 23:51:26,425] Epoch: 181
[2022-06-16 00:01:58,153] Train: Loss: 1.897 | Acc: 56.499 (723849/1281167) | Lr: 0.4129916673654542
[2022-06-16 00:02:39,327] Test: Loss: 2.004 | Acc: 53.380 (26690/50000)
[2022-06-16 00:02:39,327] Epoch: 182
[2022-06-16 00:13:17,442] Train: Loss: 1.890 | Acc: 56.651 (725789/1281167) | Lr: 0.4111703784657627
[2022-06-16 00:13:59,573] Test: Loss: 1.878 | Acc: 55.672 (27836/50000)
[2022-06-16 00:13:59,573] Epoch: 183
[2022-06-16 00:24:32,421] Train: Loss: 1.891 | Acc: 56.578 (724857/1281167) | Lr: 0.409344327611245
[2022-06-16 00:25:12,695] Test: Loss: 1.812 | Acc: 57.424 (28712/50000)
[2022-06-16 00:25:12,696] Epoch: 184
[2022-06-16 00:35:43,799] Train: Loss: 1.888 | Acc: 56.661 (725926/1281167) | Lr: 0.4075135930238419
[2022-06-16 00:36:24,500] Test: Loss: 1.910 | Acc: 55.370 (27685/50000)
[2022-06-16 00:36:24,501] Epoch: 185
[2022-06-16 00:46:56,089] Train: Loss: 1.885 | Acc: 56.778 (727427/1281167) | Lr: 0.40567825312612993
[2022-06-16 00:47:36,511] Test: Loss: 1.883 | Acc: 56.086 (28043/50000)
[2022-06-16 00:47:36,511] Epoch: 186
[2022-06-16 00:58:15,441] Train: Loss: 1.887 | Acc: 56.681 (726176/1281167) | Lr: 0.403838386537962
[2022-06-16 00:58:56,110] Test: Loss: 2.004 | Acc: 53.802 (26901/50000)
[2022-06-16 00:58:56,111] Epoch: 187
[2022-06-16 01:09:27,870] Train: Loss: 1.888 | Acc: 56.673 (726072/1281167) | Lr: 0.4019940720730991
[2022-06-16 01:10:08,037] Test: Loss: 1.918 | Acc: 55.336 (27668/50000)
[2022-06-16 01:10:08,038] Epoch: 188
[2022-06-16 01:20:45,360] Train: Loss: 1.882 | Acc: 56.780 (727443/1281167) | Lr: 0.4001453887358346
[2022-06-16 01:21:25,971] Test: Loss: 2.362 | Acc: 47.726 (23863/50000)
[2022-06-16 01:21:25,972] Epoch: 189
[2022-06-16 01:32:03,311] Train: Loss: 1.878 | Acc: 56.854 (728389/1281167) | Lr: 0.39829241571760976
[2022-06-16 01:32:44,065] Test: Loss: 2.024 | Acc: 52.986 (26493/50000)
[2022-06-16 01:32:44,066] Epoch: 190
[2022-06-16 01:43:25,104] Train: Loss: 1.877 | Acc: 56.905 (729053/1281167) | Lr: 0.3964352323936215
[2022-06-16 01:44:05,375] Test: Loss: 1.821 | Acc: 57.012 (28506/50000)
[2022-06-16 01:44:05,375] Epoch: 191
[2022-06-16 01:54:39,114] Train: Loss: 1.873 | Acc: 56.950 (729619/1281167) | Lr: 0.39457391831942223
[2022-06-16 01:55:19,005] Test: Loss: 1.822 | Acc: 57.050 (28525/50000)
[2022-06-16 01:55:19,005] Epoch: 192
[2022-06-16 02:05:49,440] Train: Loss: 1.877 | Acc: 56.874 (728647/1281167) | Lr: 0.3927085532275119
[2022-06-16 02:06:30,043] Test: Loss: 2.045 | Acc: 52.114 (26057/50000)
[2022-06-16 02:06:30,043] Epoch: 193
[2022-06-16 02:17:07,666] Train: Loss: 1.874 | Acc: 56.981 (730026/1281167) | Lr: 0.39083921702392277
[2022-06-16 02:17:47,176] Test: Loss: 2.084 | Acc: 52.928 (26464/50000)
[2022-06-16 02:17:47,177] Epoch: 194
[2022-06-16 02:28:34,844] Train: Loss: 1.869 | Acc: 57.078 (731270/1281167) | Lr: 0.388965989784796
[2022-06-16 02:29:15,883] Test: Loss: 1.881 | Acc: 55.748 (27874/50000)
[2022-06-16 02:29:15,883] Epoch: 195
[2022-06-16 02:39:50,346] Train: Loss: 1.868 | Acc: 57.122 (731828/1281167) | Lr: 0.38708895175295205
[2022-06-16 02:40:30,669] Test: Loss: 1.714 | Acc: 59.296 (29648/50000)
[2022-06-16 02:40:30,669] Saving..
[2022-06-16 02:40:30,767] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-16 02:40:30,767] Epoch: 196
[2022-06-16 02:51:13,193] Train: Loss: 1.867 | Acc: 57.058 (731013/1281167) | Lr: 0.3852081833344529
[2022-06-16 02:51:53,444] Test: Loss: 1.929 | Acc: 54.966 (27483/50000)
[2022-06-16 02:51:53,445] Epoch: 197
[2022-06-16 03:02:24,825] Train: Loss: 1.863 | Acc: 57.219 (733070/1281167) | Lr: 0.38332376509515786
[2022-06-16 03:03:05,674] Test: Loss: 1.929 | Acc: 55.018 (27509/50000)
[2022-06-16 03:03:05,674] Epoch: 198
[2022-06-16 03:13:37,649] Train: Loss: 1.864 | Acc: 57.157 (732280/1281167) | Lr: 0.3814357777572725
[2022-06-16 03:14:17,358] Test: Loss: 2.170 | Acc: 50.918 (25459/50000)
[2022-06-16 03:14:17,359] Epoch: 199
[2022-06-16 03:24:54,271] Train: Loss: 1.861 | Acc: 57.224 (733134/1281167) | Lr: 0.37954430219589075
[2022-06-16 03:25:34,360] Test: Loss: 2.191 | Acc: 50.440 (25220/50000)
[2022-06-16 03:25:34,360] Epoch: 200
[2022-06-16 03:36:07,309] Train: Loss: 1.855 | Acc: 57.294 (734035/1281167) | Lr: 0.37764941943553026
[2022-06-16 03:36:47,570] Test: Loss: 1.805 | Acc: 57.260 (28630/50000)
[2022-06-16 03:36:47,571] Epoch: 201
[2022-06-16 03:47:35,523] Train: Loss: 1.857 | Acc: 57.299 (734097/1281167) | Lr: 0.37575121064666184
[2022-06-16 03:48:30,606] Test: Loss: 1.916 | Acc: 55.298 (27649/50000)
[2022-06-16 03:48:30,606] Epoch: 202
[2022-06-16 03:59:24,148] Train: Loss: 1.852 | Acc: 57.352 (734779/1281167) | Lr: 0.37384975714223234
[2022-06-16 04:00:04,383] Test: Loss: 1.764 | Acc: 58.042 (29021/50000)
[2022-06-16 04:00:04,384] Epoch: 203
[2022-06-16 04:11:14,447] Train: Loss: 1.849 | Acc: 57.385 (735195/1281167) | Lr: 0.37194514037418125
[2022-06-16 04:12:04,941] Test: Loss: 1.714 | Acc: 59.278 (29639/50000)
[2022-06-16 04:12:04,941] Epoch: 204
[2022-06-16 04:23:12,278] Train: Loss: 1.849 | Acc: 57.480 (736410/1281167) | Lr: 0.3700374419299519
[2022-06-16 04:23:51,642] Test: Loss: 1.744 | Acc: 58.530 (29265/50000)
[2022-06-16 04:23:51,643] Epoch: 205
[2022-06-16 04:34:59,438] Train: Loss: 1.845 | Acc: 57.545 (737249/1281167) | Lr: 0.3681267435289963
[2022-06-16 04:35:44,273] Test: Loss: 2.122 | Acc: 51.804 (25902/50000)
[2022-06-16 04:35:44,273] Epoch: 206
[2022-06-16 04:46:34,928] Train: Loss: 1.847 | Acc: 57.465 (736217/1281167) | Lr: 0.3662131270192749
[2022-06-16 04:47:18,926] Test: Loss: 1.937 | Acc: 55.056 (27528/50000)
[2022-06-16 04:47:18,927] Epoch: 207
[2022-06-16 04:58:22,348] Train: Loss: 1.843 | Acc: 57.534 (737102/1281167) | Lr: 0.3642966743737495
[2022-06-16 04:59:02,061] Test: Loss: 1.748 | Acc: 58.528 (29264/50000)
[2022-06-16 04:59:02,062] Epoch: 208
[2022-06-16 05:09:59,754] Train: Loss: 1.841 | Acc: 57.599 (737937/1281167) | Lr: 0.36237746768687323
[2022-06-16 05:10:44,826] Test: Loss: 1.807 | Acc: 57.252 (28626/50000)
[2022-06-16 05:10:44,826] Epoch: 209
[2022-06-16 05:21:50,389] Train: Loss: 1.840 | Acc: 57.609 (738064/1281167) | Lr: 0.360455589171073
[2022-06-16 05:22:29,776] Test: Loss: 1.811 | Acc: 57.074 (28537/50000)
[2022-06-16 05:22:29,776] Epoch: 210
[2022-06-16 05:33:42,948] Train: Loss: 1.838 | Acc: 57.688 (739079/1281167) | Lr: 0.358531121153228
[2022-06-16 05:34:26,397] Test: Loss: 1.713 | Acc: 59.074 (29537/50000)
[2022-06-16 05:34:26,397] Epoch: 211
[2022-06-16 05:45:26,399] Train: Loss: 1.835 | Acc: 57.735 (739679/1281167) | Lr: 0.3566041460711427
[2022-06-16 05:46:07,380] Test: Loss: 1.865 | Acc: 56.180 (28090/50000)
[2022-06-16 05:46:07,381] Epoch: 212
[2022-06-16 05:57:09,915] Train: Loss: 1.832 | Acc: 57.803 (740553/1281167) | Lr: 0.35467474647001634
[2022-06-16 05:57:57,847] Test: Loss: 1.762 | Acc: 58.184 (29092/50000)
[2022-06-16 05:57:57,848] Epoch: 213
[2022-06-16 06:09:03,817] Train: Loss: 1.829 | Acc: 57.848 (741133/1281167) | Lr: 0.3527430049989062
[2022-06-16 06:09:43,439] Test: Loss: 1.721 | Acc: 58.948 (29474/50000)
[2022-06-16 06:09:43,439] Epoch: 214
[2022-06-16 06:20:43,592] Train: Loss: 1.825 | Acc: 57.905 (741857/1281167) | Lr: 0.3508090044071877
[2022-06-16 06:21:23,951] Test: Loss: 1.966 | Acc: 54.380 (27190/50000)
[2022-06-16 06:21:23,951] Epoch: 215
[2022-06-16 06:32:22,028] Train: Loss: 1.824 | Acc: 57.880 (741543/1281167) | Lr: 0.34887282754100923
[2022-06-16 06:33:08,880] Test: Loss: 1.832 | Acc: 56.916 (28458/50000)
[2022-06-16 06:33:08,880] Epoch: 216
[2022-06-16 06:44:09,973] Train: Loss: 1.828 | Acc: 57.876 (741491/1281167) | Lr: 0.3469345573397436
[2022-06-16 06:44:51,450] Test: Loss: 2.056 | Acc: 53.250 (26625/50000)
[2022-06-16 06:44:51,451] Epoch: 217
[2022-06-16 06:55:49,527] Train: Loss: 1.823 | Acc: 57.933 (742217/1281167) | Lr: 0.3449942768324353
[2022-06-16 06:56:43,927] Test: Loss: 1.764 | Acc: 58.282 (29141/50000)
[2022-06-16 06:56:43,927] Epoch: 218
[2022-06-16 07:07:40,726] Train: Loss: 1.821 | Acc: 57.971 (742709/1281167) | Lr: 0.34305206913424346
[2022-06-16 07:08:21,215] Test: Loss: 1.660 | Acc: 60.080 (30040/50000)
[2022-06-16 07:08:21,215] Saving..
[2022-06-16 07:08:21,291] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-16 07:08:21,291] Epoch: 219
[2022-06-16 07:19:06,675] Train: Loss: 1.815 | Acc: 58.098 (744332/1281167) | Lr: 0.3411080174428815
[2022-06-16 07:19:48,259] Test: Loss: 1.729 | Acc: 58.830 (29415/50000)
[2022-06-16 07:19:48,259] Epoch: 220
[2022-06-16 07:30:50,903] Train: Loss: 1.814 | Acc: 58.105 (744426/1281167) | Lr: 0.3391622050350539
[2022-06-16 07:31:32,928] Test: Loss: 2.004 | Acc: 53.610 (26805/50000)
[2022-06-16 07:31:32,928] Epoch: 221
[2022-06-16 07:42:22,038] Train: Loss: 1.815 | Acc: 58.139 (744854/1281167) | Lr: 0.3372147152628879
[2022-06-16 07:43:02,669] Test: Loss: 1.812 | Acc: 57.594 (28797/50000)
[2022-06-16 07:43:02,669] Epoch: 222
[2022-06-16 07:54:05,852] Train: Loss: 1.810 | Acc: 58.237 (746113/1281167) | Lr: 0.33526563155036354
[2022-06-16 07:54:52,818] Test: Loss: 1.793 | Acc: 57.690 (28845/50000)
[2022-06-16 07:54:52,818] Epoch: 223
[2022-06-16 08:06:10,769] Train: Loss: 1.810 | Acc: 58.212 (745797/1281167) | Lr: 0.33331503738974005
[2022-06-16 08:06:50,887] Test: Loss: 1.761 | Acc: 57.958 (28979/50000)
[2022-06-16 08:06:50,887] Epoch: 224
[2022-06-16 08:17:43,213] Train: Loss: 1.806 | Acc: 58.254 (746326/1281167) | Lr: 0.33136301633797927
[2022-06-16 08:18:39,991] Test: Loss: 1.865 | Acc: 56.156 (28078/50000)
[2022-06-16 08:18:39,991] Epoch: 225
[2022-06-16 08:29:36,519] Train: Loss: 1.805 | Acc: 58.325 (747243/1281167) | Lr: 0.3294096520131662
[2022-06-16 08:30:16,221] Test: Loss: 1.766 | Acc: 57.986 (28993/50000)
[2022-06-16 08:30:16,221] Epoch: 226
[2022-06-16 08:41:07,833] Train: Loss: 1.800 | Acc: 58.369 (747801/1281167) | Lr: 0.327455028090927
[2022-06-16 08:41:48,741] Test: Loss: 1.787 | Acc: 57.482 (28741/50000)
[2022-06-16 08:41:48,742] Epoch: 227
[2022-06-16 08:52:43,027] Train: Loss: 1.797 | Acc: 58.441 (748728/1281167) | Lr: 0.32549922830084527
[2022-06-16 08:53:31,702] Test: Loss: 1.806 | Acc: 57.240 (28620/50000)
[2022-06-16 08:53:31,703] Epoch: 228
[2022-06-16 09:04:32,629] Train: Loss: 1.801 | Acc: 58.374 (747872/1281167) | Lr: 0.3235423364228745
[2022-06-16 09:05:12,435] Test: Loss: 1.744 | Acc: 58.540 (29270/50000)
[2022-06-16 09:05:12,435] Epoch: 229
[2022-06-16 09:16:00,923] Train: Loss: 1.795 | Acc: 58.516 (749691/1281167) | Lr: 0.3215844362837498
[2022-06-16 09:16:41,308] Test: Loss: 1.744 | Acc: 58.624 (29312/50000)
[2022-06-16 09:16:41,309] Epoch: 230
[2022-06-16 09:27:45,734] Train: Loss: 1.788 | Acc: 58.624 (751072/1281167) | Lr: 0.31962561175339643
[2022-06-16 09:28:25,172] Test: Loss: 1.779 | Acc: 57.718 (28859/50000)
[2022-06-16 09:28:25,172] Epoch: 231
[2022-06-16 09:39:17,699] Train: Loss: 1.792 | Acc: 58.593 (750680/1281167) | Lr: 0.3176659467413381
[2022-06-16 09:39:55,946] Test: Loss: 1.712 | Acc: 59.068 (29534/50000)
[2022-06-16 09:39:55,946] Epoch: 232
[2022-06-16 09:50:49,416] Train: Loss: 1.789 | Acc: 58.571 (750392/1281167) | Lr: 0.3157055251931016
[2022-06-16 09:51:30,111] Test: Loss: 1.658 | Acc: 60.472 (30236/50000)
[2022-06-16 09:51:30,112] Saving..
[2022-06-16 09:51:30,183] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-16 09:51:30,183] Epoch: 233
[2022-06-16 10:02:26,448] Train: Loss: 1.786 | Acc: 58.663 (751576/1281167) | Lr: 0.3137444310866212
[2022-06-16 10:03:07,111] Test: Loss: 2.049 | Acc: 52.978 (26489/50000)
[2022-06-16 10:03:07,112] Epoch: 234
[2022-06-16 10:13:56,087] Train: Loss: 1.782 | Acc: 58.772 (752970/1281167) | Lr: 0.31178274842864145
[2022-06-16 10:14:39,515] Test: Loss: 1.842 | Acc: 56.782 (28391/50000)
[2022-06-16 10:14:39,515] Epoch: 235
[2022-06-16 10:25:40,230] Train: Loss: 1.780 | Acc: 58.808 (753426/1281167) | Lr: 0.30982056125111845
[2022-06-16 10:26:20,484] Test: Loss: 1.766 | Acc: 58.278 (29139/50000)
[2022-06-16 10:26:20,485] Epoch: 236
[2022-06-16 10:37:14,390] Train: Loss: 1.776 | Acc: 58.827 (753666/1281167) | Lr: 0.3078579536076201
[2022-06-16 10:37:53,682] Test: Loss: 1.688 | Acc: 59.354 (29677/50000)
[2022-06-16 10:37:53,682] Epoch: 237
[2022-06-16 10:48:40,837] Train: Loss: 1.777 | Acc: 58.895 (754544/1281167) | Lr: 0.30589500956972593
[2022-06-16 10:49:22,843] Test: Loss: 1.842 | Acc: 57.086 (28543/50000)
[2022-06-16 10:49:22,844] Epoch: 238
[2022-06-16 11:00:22,654] Train: Loss: 1.773 | Acc: 58.975 (755571/1281167) | Lr: 0.3039318132234252
[2022-06-16 11:01:01,546] Test: Loss: 1.835 | Acc: 56.938 (28469/50000)
[2022-06-16 11:01:01,546] Epoch: 239
[2022-06-16 11:11:58,143] Train: Loss: 1.772 | Acc: 59.014 (756074/1281167) | Lr: 0.3019684486655154
[2022-06-16 11:12:37,412] Test: Loss: 1.812 | Acc: 57.248 (28624/50000)
[2022-06-16 11:12:37,432] Epoch: 240
[2022-06-16 11:23:21,395] Train: Loss: 1.743 | Acc: 59.499 (762277/1281167) | Lr: 0.30000499999999974
[2022-06-16 11:24:13,981] Test: Loss: 1.925 | Acc: 55.618 (27809/50000)
[2022-06-16 11:24:13,981] Epoch: 241
[2022-06-16 11:35:15,161] Train: Loss: 1.741 | Acc: 59.559 (763056/1281167) | Lr: 0.29804155133448396
[2022-06-16 11:35:56,081] Test: Loss: 1.645 | Acc: 60.530 (30265/50000)
[2022-06-16 11:35:56,082] Saving..
[2022-06-16 11:35:56,150] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-16 11:35:56,150] Epoch: 242
[2022-06-16 11:47:12,763] Train: Loss: 1.732 | Acc: 59.723 (765153/1281167) | Lr: 0.29607818677657416
[2022-06-16 11:47:53,505] Test: Loss: 1.737 | Acc: 58.962 (29481/50000)
[2022-06-16 11:47:53,505] Epoch: 243
[2022-06-16 11:58:40,620] Train: Loss: 1.727 | Acc: 59.855 (766843/1281167) | Lr: 0.29411499043027345
[2022-06-16 11:59:21,724] Test: Loss: 1.642 | Acc: 60.896 (30448/50000)
[2022-06-16 11:59:21,725] Saving..
[2022-06-16 11:59:21,827] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-16 11:59:21,827] Epoch: 244
[2022-06-16 12:10:21,071] Train: Loss: 1.728 | Acc: 59.822 (766423/1281167) | Lr: 0.2921520463923793
[2022-06-16 12:11:08,448] Test: Loss: 1.690 | Acc: 59.876 (29938/50000)
[2022-06-16 12:11:08,449] Epoch: 245
[2022-06-16 12:22:11,546] Train: Loss: 1.719 | Acc: 60.054 (769397/1281167) | Lr: 0.290189438748881
[2022-06-16 12:22:51,553] Test: Loss: 2.290 | Acc: 49.460 (24730/50000)
[2022-06-16 12:22:51,553] Epoch: 246
[2022-06-16 12:33:52,634] Train: Loss: 1.716 | Acc: 60.054 (769387/1281167) | Lr: 0.2882272515713579
[2022-06-16 12:34:33,548] Test: Loss: 1.588 | Acc: 61.886 (30943/50000)
[2022-06-16 12:34:33,548] Saving..
[2022-06-16 12:34:33,620] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-16 12:34:33,621] Epoch: 247
[2022-06-16 12:45:32,145] Train: Loss: 1.714 | Acc: 60.133 (770407/1281167) | Lr: 0.2862655689133781
[2022-06-16 12:46:15,610] Test: Loss: 1.696 | Acc: 59.646 (29823/50000)
[2022-06-16 12:46:15,610] Epoch: 248
[2022-06-16 12:57:16,100] Train: Loss: 1.710 | Acc: 60.197 (771222/1281167) | Lr: 0.2843044748068978
[2022-06-16 12:57:57,052] Test: Loss: 1.771 | Acc: 58.278 (29139/50000)
[2022-06-16 12:57:57,052] Epoch: 249
[2022-06-16 13:08:59,334] Train: Loss: 1.710 | Acc: 60.233 (771680/1281167) | Lr: 0.2823440532586613
[2022-06-16 13:09:40,093] Test: Loss: 1.633 | Acc: 60.930 (30465/50000)
[2022-06-16 13:09:40,093] Epoch: 250
[2022-06-16 13:20:39,022] Train: Loss: 1.704 | Acc: 60.299 (772535/1281167) | Lr: 0.280384388246603
[2022-06-16 13:21:19,124] Test: Loss: 1.579 | Acc: 61.862 (30931/50000)
[2022-06-16 13:21:19,124] Epoch: 251
[2022-06-16 13:32:28,844] Train: Loss: 1.701 | Acc: 60.403 (773867/1281167) | Lr: 0.27842556371624966
[2022-06-16 13:33:16,889] Test: Loss: 1.590 | Acc: 61.640 (30820/50000)
[2022-06-16 13:33:16,890] Epoch: 252
[2022-06-16 13:44:21,477] Train: Loss: 1.695 | Acc: 60.410 (773949/1281167) | Lr: 0.27646766357712493
[2022-06-16 13:45:02,204] Test: Loss: 1.575 | Acc: 62.302 (31151/50000)
[2022-06-16 13:45:02,204] Saving..
[2022-06-16 13:45:02,284] * Saved checkpoint to ./results/14133147/FENet_imagenet.t7
[2022-06-16 13:45:02,284] Epoch: 253
[2022-06-16 13:56:14,888] Train: Loss: 1.694 | Acc: 60.518 (775331/1281167) | Lr: 0.2745107716991541
[2022-06-16 13:56:54,577] Test: Loss: 1.579 | Acc: 61.862 (30931/50000)
[2022-06-16 13:56:54,577] Epoch: 254
[2022-06-16 14:08:11,860] Train: Loss: 1.691 | Acc: 60.625 (776713/1281167) | Lr: 0.27255497190907235
[2022-06-16 14:08:52,875] Test: Loss: 1.680 | Acc: 60.002 (30001/50000)
[2022-06-16 14:08:52,875] Epoch: 255
[2022-06-16 14:20:03,379] Train: Loss: 1.688 | Acc: 60.660 (777160/1281167) | Lr: 0.2706003479868332
[2022-06-16 14:20:42,914] Test: Loss: 1.635 | Acc: 60.582 (30291/50000)
[2022-06-16 14:20:42,915] Epoch: 256
[2022-06-16 14:31:46,073] Train: Loss: 1.688 | Acc: 60.637 (776866/1281167) | Lr: 0.2686469836620201
[2022-06-16 14:32:24,873] Test: Loss: 1.638 | Acc: 60.968 (30484/50000)
[2022-06-16 14:32:24,873] Epoch: 257