FENet_imagenet.log
1798 lines (1798 loc) · 119 KB
[2022-06-14 10:45:52,046] Namespace(auto_augment=True, batch_size=1024, data_dir='/dataset/public/ImageNetOrigin/', epoch=480, lr=0.6, mode='Train', nesterov=True, reduction=1.375, results_dir='./results/', resume=None)
[2022-06-14 10:45:52,046] ==> Preparing data..
[2022-06-14 10:46:00,774] Training / Testing data number: 50000 / 1281167
[2022-06-14 10:46:00,775] Using path: ./results/14104552/
[2022-06-14 10:46:00,775] ==> Building model..
[2022-06-14 10:46:04,857] DataParallel(
(module): FENet(
(conv1): Conv2d(3, 22, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(22, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ibssl): IBSSL(
(conv1): Conv2d(22, 88, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(88, 22, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(22, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibpool): IBPool(
(conv1): Conv2d(22, 220, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(220, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(conv2): Conv2d(220, 44, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(44, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(feblock1): FEBlock3n2s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(11, 66, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(66, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(66, 11, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(11, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(22, 132, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(132, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(132, 22, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(22, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibpool): IBPool(
(conv1): Conv2d(44, 528, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(528, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(conv2): Conv2d(528, 88, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(feblock2): FEBlock4n2s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(11, 66, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(66, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(66, 11, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(11, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(22, 132, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(132, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(132, 22, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(22, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_3): ResIBSSL(
(conv1): Conv2d(44, 264, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(264, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(264, 44, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(44, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibpool): IBPool(
(conv1): Conv2d(88, 1056, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(1056, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(conv2): Conv2d(1056, 176, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(176, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(feblock3): FEBlock4n1s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(22, 132, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(132, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(132, 22, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(22, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(44, 264, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(264, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(264, 44, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(44, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_3): ResIBSSL(
(conv1): Conv2d(88, 528, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(528, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(528, 88, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibssl): IBSSL(
(conv1): Conv2d(176, 1056, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(1056, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(1056, 176, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(176, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(feblock4): FEBlock4n2s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(22, 132, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(132, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(132, 22, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(22, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(44, 264, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(264, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(264, 44, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(44, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_3): ResIBSSL(
(conv1): Conv2d(88, 528, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(528, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(528, 88, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibpool): IBPool(
(conv1): Conv2d(176, 2112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(2112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(conv2): Conv2d(2112, 352, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(feblock5): FEBlock3n1s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(88, 528, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(528, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(528, 88, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(176, 1056, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(1056, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(1056, 176, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(176, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibssl): IBSSL(
(conv1): Conv2d(352, 2112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(2112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(2112, 352, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(conv2): Conv2d(352, 1932, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(1932, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(gap): AdaptiveAvgPool2d(output_size=(1, 1))
(dropout): Dropout(p=0.2, inplace=False)
(fc): Conv2d(1932, 1000, kernel_size=(1, 1), stride=(1, 1))
)
)
[2022-06-14 10:46:04,866] Epoch: 0
[2022-06-14 11:05:39,178] Train: Loss: 5.822 | Acc: 4.267 (54667/1281167) | Lr: 0.6
[2022-06-14 11:06:28,947] Test: Loss: 5.160 | Acc: 8.358 (4179/50000)
[2022-06-14 11:06:28,948] Saving..
[2022-06-14 11:06:29,051] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 11:06:29,052] Epoch: 1
[2022-06-14 11:24:16,374] Train: Loss: 4.343 | Acc: 16.825 (215550/1281167) | Lr: 0.5999935746063304
[2022-06-14 11:25:05,788] Test: Loss: 4.006 | Acc: 20.324 (10162/50000)
[2022-06-14 11:25:05,788] Saving..
[2022-06-14 11:25:05,871] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 11:25:05,871] Epoch: 2
[2022-06-14 11:42:53,102] Train: Loss: 3.716 | Acc: 25.458 (326160/1281167) | Lr: 0.5999742987005642
[2022-06-14 11:43:42,175] Test: Loss: 3.340 | Acc: 28.888 (14444/50000)
[2022-06-14 11:43:42,175] Saving..
[2022-06-14 11:43:42,253] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 11:43:42,253] Epoch: 3
[2022-06-14 12:01:31,557] Train: Loss: 3.396 | Acc: 30.344 (388752/1281167) | Lr: 0.599942173108417
[2022-06-14 12:02:20,770] Test: Loss: 3.324 | Acc: 30.442 (15221/50000)
[2022-06-14 12:02:20,771] Saving..
[2022-06-14 12:02:20,842] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 12:02:20,842] Epoch: 4
[2022-06-14 12:20:03,017] Train: Loss: 3.207 | Acc: 33.375 (427595/1281167) | Lr: 0.5998971992060422
[2022-06-14 12:20:51,303] Test: Loss: 3.034 | Acc: 34.352 (17176/50000)
[2022-06-14 12:20:51,303] Saving..
[2022-06-14 12:20:51,380] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 12:20:51,380] Epoch: 5
[2022-06-14 12:38:37,697] Train: Loss: 3.082 | Acc: 35.418 (453766/1281167) | Lr: 0.5998393789199723
[2022-06-14 12:39:27,486] Test: Loss: 3.048 | Acc: 34.212 (17106/50000)
[2022-06-14 12:39:27,486] Epoch: 6
[2022-06-14 12:57:22,915] Train: Loss: 2.978 | Acc: 37.121 (475580/1281167) | Lr: 0.5997687147270356
[2022-06-14 12:58:14,349] Test: Loss: 2.906 | Acc: 37.420 (18710/50000)
[2022-06-14 12:58:14,349] Saving..
[2022-06-14 12:58:14,437] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 12:58:14,438] Epoch: 7
[2022-06-14 13:15:57,139] Train: Loss: 2.906 | Acc: 38.305 (490747/1281167) | Lr: 0.5996852096542512
[2022-06-14 13:16:50,274] Test: Loss: 3.140 | Acc: 32.892 (16446/50000)
[2022-06-14 13:16:50,274] Epoch: 8
[2022-06-14 13:34:29,756] Train: Loss: 2.848 | Acc: 39.316 (503703/1281167) | Lr: 0.5995888672786983
[2022-06-14 13:35:21,809] Test: Loss: 2.732 | Acc: 39.710 (19855/50000)
[2022-06-14 13:35:21,809] Saving..
[2022-06-14 13:35:21,896] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 13:35:21,896] Epoch: 9
[2022-06-14 13:53:05,453] Train: Loss: 2.797 | Acc: 40.189 (514882/1281167) | Lr: 0.5994796917273638
[2022-06-14 13:53:57,068] Test: Loss: 2.762 | Acc: 39.886 (19943/50000)
[2022-06-14 13:53:57,068] Saving..
[2022-06-14 13:53:57,240] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 13:53:57,240] Epoch: 10
[2022-06-14 14:11:47,990] Train: Loss: 2.758 | Acc: 40.885 (523804/1281167) | Lr: 0.5993576876769647
[2022-06-14 14:12:39,554] Test: Loss: 2.603 | Acc: 42.324 (21162/50000)
[2022-06-14 14:12:39,555] Saving..
[2022-06-14 14:12:39,641] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 14:12:39,642] Epoch: 11
[2022-06-14 14:30:19,140] Train: Loss: 2.727 | Acc: 41.478 (531402/1281167) | Lr: 0.5992228603537487
[2022-06-14 14:31:11,052] Test: Loss: 2.586 | Acc: 42.580 (21290/50000)
[2022-06-14 14:31:11,052] Saving..
[2022-06-14 14:31:11,130] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 14:31:11,130] Epoch: 12
[2022-06-14 14:48:59,977] Train: Loss: 2.700 | Acc: 41.884 (536600/1281167) | Lr: 0.5990752155332696
[2022-06-14 14:49:49,273] Test: Loss: 2.696 | Acc: 40.994 (20497/50000)
[2022-06-14 14:49:49,273] Epoch: 13
[2022-06-14 15:07:52,424] Train: Loss: 2.673 | Acc: 42.454 (543902/1281167) | Lr: 0.5989147595401398
[2022-06-14 15:08:45,236] Test: Loss: 2.715 | Acc: 39.934 (19967/50000)
[2022-06-14 15:08:45,236] Epoch: 14
[2022-06-14 15:26:38,286] Train: Loss: 2.654 | Acc: 42.758 (547799/1281167) | Lr: 0.5987414992477603
[2022-06-14 15:27:27,230] Test: Loss: 2.731 | Acc: 40.242 (20121/50000)
[2022-06-14 15:27:27,231] Epoch: 15
[2022-06-14 15:45:19,533] Train: Loss: 2.639 | Acc: 43.019 (551144/1281167) | Lr: 0.5985554420780254
[2022-06-14 15:46:08,054] Test: Loss: 2.555 | Acc: 43.490 (21745/50000)
[2022-06-14 15:46:08,055] Saving..
[2022-06-14 15:46:08,167] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 15:46:08,167] Epoch: 16
[2022-06-14 16:04:04,897] Train: Loss: 2.622 | Acc: 43.267 (554322/1281167) | Lr: 0.5983565960010048
[2022-06-14 16:04:52,582] Test: Loss: 2.567 | Acc: 43.300 (21650/50000)
[2022-06-14 16:04:52,582] Epoch: 17
[2022-06-14 16:22:54,936] Train: Loss: 2.607 | Acc: 43.565 (558141/1281167) | Lr: 0.5981449695346027
[2022-06-14 16:23:46,142] Test: Loss: 2.459 | Acc: 44.758 (22379/50000)
[2022-06-14 16:23:46,142] Saving..
[2022-06-14 16:23:46,293] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 16:23:46,293] Epoch: 18
[2022-06-14 16:41:42,596] Train: Loss: 2.593 | Acc: 43.824 (561455/1281167) | Lr: 0.5979205717441928
[2022-06-14 16:42:30,372] Test: Loss: 2.325 | Acc: 46.986 (23493/50000)
[2022-06-14 16:42:30,373] Saving..
[2022-06-14 16:42:30,476] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 16:42:30,476] Epoch: 19
[2022-06-14 17:00:30,097] Train: Loss: 2.582 | Acc: 44.014 (563898/1281167) | Lr: 0.5976834122422292
[2022-06-14 17:01:18,555] Test: Loss: 2.352 | Acc: 46.190 (23095/50000)
[2022-06-14 17:01:18,555] Epoch: 20
[2022-06-14 17:19:17,047] Train: Loss: 2.573 | Acc: 44.172 (565921/1281167) | Lr: 0.5974335011878359
[2022-06-14 17:20:04,783] Test: Loss: 2.445 | Acc: 44.948 (22474/50000)
[2022-06-14 17:20:04,784] Epoch: 21
[2022-06-14 17:38:00,359] Train: Loss: 2.561 | Acc: 44.351 (568213/1281167) | Lr: 0.5971708492863705
[2022-06-14 17:38:48,513] Test: Loss: 2.539 | Acc: 43.668 (21834/50000)
[2022-06-14 17:38:48,513] Epoch: 22
[2022-06-14 17:56:33,517] Train: Loss: 2.552 | Acc: 44.487 (569947/1281167) | Lr: 0.5968954677889666
[2022-06-14 17:57:21,304] Test: Loss: 2.530 | Acc: 43.840 (21920/50000)
[2022-06-14 17:57:21,304] Epoch: 23
[2022-06-14 18:15:19,542] Train: Loss: 2.544 | Acc: 44.684 (572472/1281167) | Lr: 0.5966073684920506
[2022-06-14 18:16:08,174] Test: Loss: 2.780 | Acc: 39.116 (19558/50000)
[2022-06-14 18:16:08,174] Epoch: 24
[2022-06-14 18:34:08,703] Train: Loss: 2.538 | Acc: 44.852 (574629/1281167) | Lr: 0.596306563736838
[2022-06-14 18:34:56,132] Test: Loss: 2.296 | Acc: 47.810 (23905/50000)
[2022-06-14 18:34:56,133] Saving..
[2022-06-14 18:34:56,219] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 18:34:56,219] Epoch: 25
[2022-06-14 18:52:55,895] Train: Loss: 2.525 | Acc: 45.013 (576696/1281167) | Lr: 0.5959930664088029
[2022-06-14 18:53:45,161] Test: Loss: 2.671 | Acc: 41.642 (20821/50000)
[2022-06-14 18:53:45,161] Epoch: 26
[2022-06-14 19:11:41,892] Train: Loss: 2.520 | Acc: 45.074 (577470/1281167) | Lr: 0.5956668899371277
[2022-06-14 19:12:36,356] Test: Loss: 2.402 | Acc: 46.194 (23097/50000)
[2022-06-14 19:12:36,357] Epoch: 27
[2022-06-14 19:30:24,219] Train: Loss: 2.515 | Acc: 45.199 (579073/1281167) | Lr: 0.5953280482941267
[2022-06-14 19:31:13,609] Test: Loss: 2.304 | Acc: 47.932 (23966/50000)
[2022-06-14 19:31:13,609] Saving..
[2022-06-14 19:31:13,699] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 19:31:13,700] Epoch: 28
[2022-06-14 19:49:11,257] Train: Loss: 2.508 | Acc: 45.298 (580345/1281167) | Lr: 0.5949765559946483
[2022-06-14 19:49:59,850] Test: Loss: 2.362 | Acc: 46.618 (23309/50000)
[2022-06-14 19:49:59,850] Epoch: 29
[2022-06-14 20:07:47,970] Train: Loss: 2.502 | Acc: 45.416 (581851/1281167) | Lr: 0.5946124280954524
[2022-06-14 20:08:35,420] Test: Loss: 2.239 | Acc: 48.724 (24362/50000)
[2022-06-14 20:08:35,420] Saving..
[2022-06-14 20:08:35,504] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 20:08:35,504] Epoch: 30
[2022-06-14 20:26:21,933] Train: Loss: 2.497 | Acc: 45.538 (583421/1281167) | Lr: 0.5942356801945667
[2022-06-14 20:27:13,629] Test: Loss: 2.306 | Acc: 47.734 (23867/50000)
[2022-06-14 20:27:13,630] Epoch: 31
[2022-06-14 20:45:06,181] Train: Loss: 2.492 | Acc: 45.617 (584426/1281167) | Lr: 0.5938463284306172
[2022-06-14 20:45:53,589] Test: Loss: 2.565 | Acc: 42.502 (21251/50000)
[2022-06-14 20:45:53,590] Epoch: 32
[2022-06-14 21:04:10,934] Train: Loss: 2.490 | Acc: 45.667 (585065/1281167) | Lr: 0.5934443894821377
[2022-06-14 21:04:57,020] Test: Loss: 2.390 | Acc: 46.030 (23015/50000)
[2022-06-14 21:04:57,020] Epoch: 33
[2022-06-14 21:23:02,008] Train: Loss: 2.485 | Acc: 45.714 (585675/1281167) | Lr: 0.5930298805668548
[2022-06-14 21:23:48,907] Test: Loss: 2.537 | Acc: 43.048 (21524/50000)
[2022-06-14 21:23:48,908] Epoch: 34
[2022-06-14 21:41:45,124] Train: Loss: 2.481 | Acc: 45.843 (587331/1281167) | Lr: 0.592602819440951
[2022-06-14 21:42:36,090] Test: Loss: 2.942 | Acc: 38.022 (19011/50000)
[2022-06-14 21:42:36,091] Epoch: 35
[2022-06-14 22:00:28,651] Train: Loss: 2.474 | Acc: 45.928 (588409/1281167) | Lr: 0.5921632243983034
[2022-06-14 22:01:16,440] Test: Loss: 2.190 | Acc: 49.680 (24840/50000)
[2022-06-14 22:01:16,440] Saving..
[2022-06-14 22:01:16,526] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-14 22:01:16,526] Epoch: 36
[2022-06-14 22:19:11,304] Train: Loss: 2.470 | Acc: 45.988 (589178/1281167) | Lr: 0.5917111142697007
[2022-06-14 22:19:59,606] Test: Loss: 2.578 | Acc: 44.082 (22041/50000)
[2022-06-14 22:19:59,606] Epoch: 37
[2022-06-14 22:38:00,029] Train: Loss: 2.464 | Acc: 46.123 (590913/1281167) | Lr: 0.591246508422036
[2022-06-14 22:38:48,931] Test: Loss: 2.332 | Acc: 46.948 (23474/50000)
[2022-06-14 22:38:48,931] Epoch: 38
[2022-06-14 22:56:36,196] Train: Loss: 2.463 | Acc: 46.159 (591378/1281167) | Lr: 0.5907694267574775
[2022-06-14 22:57:24,329] Test: Loss: 2.418 | Acc: 46.248 (23124/50000)
[2022-06-14 22:57:24,329] Epoch: 39
[2022-06-14 23:15:16,552] Train: Loss: 2.462 | Acc: 46.204 (591956/1281167) | Lr: 0.5902798897126158
[2022-06-14 23:16:07,086] Test: Loss: 2.673 | Acc: 42.316 (21158/50000)
[2022-06-14 23:16:07,086] Epoch: 40
[2022-06-14 23:34:05,658] Train: Loss: 2.456 | Acc: 46.310 (593303/1281167) | Lr: 0.5897779182575887
[2022-06-14 23:34:53,744] Test: Loss: 2.446 | Acc: 45.184 (22592/50000)
[2022-06-14 23:34:53,744] Epoch: 41
[2022-06-14 23:52:43,289] Train: Loss: 2.453 | Acc: 46.415 (594653/1281167) | Lr: 0.5892635338951826
[2022-06-14 23:53:31,192] Test: Loss: 2.313 | Acc: 48.234 (24117/50000)
[2022-06-14 23:53:31,192] Epoch: 42
[2022-06-15 00:11:36,638] Train: Loss: 2.451 | Acc: 46.367 (594037/1281167) | Lr: 0.5887367586599115
[2022-06-15 00:12:25,108] Test: Loss: 2.221 | Acc: 49.070 (24535/50000)
[2022-06-15 00:12:25,108] Epoch: 43
[2022-06-15 00:30:27,604] Train: Loss: 2.447 | Acc: 46.417 (594679/1281167) | Lr: 0.5881976151170734
[2022-06-15 00:31:16,527] Test: Loss: 2.199 | Acc: 49.504 (24752/50000)
[2022-06-15 00:31:16,527] Epoch: 44
[2022-06-15 00:49:13,191] Train: Loss: 2.445 | Acc: 46.500 (595739/1281167) | Lr: 0.5876461263617831
[2022-06-15 00:50:02,177] Test: Loss: 2.303 | Acc: 46.896 (23448/50000)
[2022-06-15 00:50:02,177] Epoch: 45
[2022-06-15 01:07:58,377] Train: Loss: 2.444 | Acc: 46.557 (596475/1281167) | Lr: 0.5870823160179836
[2022-06-15 01:08:47,190] Test: Loss: 2.592 | Acc: 43.228 (21614/50000)
[2022-06-15 01:08:47,190] Epoch: 46
[2022-06-15 01:26:49,904] Train: Loss: 2.439 | Acc: 46.644 (597586/1281167) | Lr: 0.5865062082374333
[2022-06-15 01:27:38,068] Test: Loss: 2.792 | Acc: 40.764 (20382/50000)
[2022-06-15 01:27:38,069] Epoch: 47
[2022-06-15 01:45:35,230] Train: Loss: 2.441 | Acc: 46.598 (596998/1281167) | Lr: 0.5859178276986722
[2022-06-15 01:46:23,724] Test: Loss: 2.128 | Acc: 50.688 (25344/50000)
[2022-06-15 01:46:23,724] Saving..
[2022-06-15 01:46:23,819] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-15 01:46:23,819] Epoch: 48
[2022-06-15 02:04:13,759] Train: Loss: 2.433 | Acc: 46.737 (598783/1281167) | Lr: 0.5853171996059642
[2022-06-15 02:05:02,335] Test: Loss: 2.374 | Acc: 46.308 (23154/50000)
[2022-06-15 02:05:02,336] Epoch: 49
[2022-06-15 02:23:04,186] Train: Loss: 2.431 | Acc: 46.785 (599391/1281167) | Lr: 0.5847043496882178
[2022-06-15 02:23:51,830] Test: Loss: 2.502 | Acc: 44.516 (22258/50000)
[2022-06-15 02:23:51,831] Epoch: 50
[2022-06-15 02:41:50,756] Train: Loss: 2.432 | Acc: 46.718 (598540/1281167) | Lr: 0.5840793041978839
[2022-06-15 02:42:38,909] Test: Loss: 2.851 | Acc: 39.360 (19680/50000)
[2022-06-15 02:42:38,909] Epoch: 51
[2022-06-15 03:00:28,314] Train: Loss: 2.423 | Acc: 46.949 (601493/1281167) | Lr: 0.5834420899098308
[2022-06-15 03:01:16,779] Test: Loss: 2.623 | Acc: 42.766 (21383/50000)
[2022-06-15 03:01:16,779] Epoch: 52
[2022-06-15 03:19:06,897] Train: Loss: 2.430 | Acc: 46.772 (599233/1281167) | Lr: 0.5827927341201978
[2022-06-15 03:19:57,168] Test: Loss: 2.204 | Acc: 49.634 (24817/50000)
[2022-06-15 03:19:57,168] Epoch: 53
[2022-06-15 03:37:56,842] Train: Loss: 2.418 | Acc: 46.922 (601147/1281167) | Lr: 0.5821312646452258
[2022-06-15 03:38:44,881] Test: Loss: 2.151 | Acc: 50.180 (25090/50000)
[2022-06-15 03:38:44,881] Epoch: 54
[2022-06-15 03:56:49,901] Train: Loss: 2.420 | Acc: 47.009 (602262/1281167) | Lr: 0.5814577098200655
[2022-06-15 03:57:37,064] Test: Loss: 3.237 | Acc: 33.896 (16948/50000)
[2022-06-15 03:57:37,064] Epoch: 55
[2022-06-15 04:15:26,004] Train: Loss: 2.421 | Acc: 46.970 (601770/1281167) | Lr: 0.5807720984975637
[2022-06-15 04:16:13,934] Test: Loss: 2.294 | Acc: 47.200 (23600/50000)
[2022-06-15 04:16:13,934] Epoch: 56
[2022-06-15 04:34:18,399] Train: Loss: 2.416 | Acc: 47.064 (602970/1281167) | Lr: 0.5800744600470279
[2022-06-15 04:35:11,554] Test: Loss: 2.550 | Acc: 43.658 (21829/50000)
[2022-06-15 04:35:11,555] Epoch: 57
[2022-06-15 04:53:12,656] Train: Loss: 2.415 | Acc: 47.047 (602747/1281167) | Lr: 0.5793648243529671
[2022-06-15 04:54:02,024] Test: Loss: 2.567 | Acc: 43.992 (21996/50000)
[2022-06-15 04:54:02,025] Epoch: 58
[2022-06-15 05:12:03,114] Train: Loss: 2.414 | Acc: 47.084 (603223/1281167) | Lr: 0.5786432218138128
[2022-06-15 05:12:52,129] Test: Loss: 2.937 | Acc: 39.476 (19738/50000)
[2022-06-15 05:12:52,129] Epoch: 59
[2022-06-15 05:30:55,936] Train: Loss: 2.412 | Acc: 47.080 (603172/1281167) | Lr: 0.5779096833406159
[2022-06-15 05:31:43,813] Test: Loss: 2.451 | Acc: 45.424 (22712/50000)
[2022-06-15 05:31:43,813] Epoch: 60
[2022-06-15 05:49:34,355] Train: Loss: 2.411 | Acc: 47.100 (603427/1281167) | Lr: 0.5771642403557232
[2022-06-15 05:50:22,284] Test: Loss: 2.215 | Acc: 49.124 (24562/50000)
[2022-06-15 05:50:22,284] Epoch: 61
[2022-06-15 06:08:15,966] Train: Loss: 2.405 | Acc: 47.251 (605369/1281167) | Lr: 0.5764069247914314
[2022-06-15 06:09:05,173] Test: Loss: 2.582 | Acc: 43.884 (21942/50000)
[2022-06-15 06:09:05,173] Epoch: 62
[2022-06-15 06:26:58,557] Train: Loss: 2.409 | Acc: 47.178 (604435/1281167) | Lr: 0.5756377690886185
[2022-06-15 06:27:46,115] Test: Loss: 2.208 | Acc: 49.058 (24529/50000)
[2022-06-15 06:27:46,116] Epoch: 63
[2022-06-15 06:45:38,563] Train: Loss: 2.403 | Acc: 47.227 (605052/1281167) | Lr: 0.574856806195355
[2022-06-15 06:46:26,955] Test: Loss: 2.481 | Acc: 44.560 (22280/50000)
[2022-06-15 06:46:26,955] Epoch: 64
[2022-06-15 07:04:28,247] Train: Loss: 2.405 | Acc: 47.216 (604910/1281167) | Lr: 0.5740640695654917
[2022-06-15 07:05:16,394] Test: Loss: 2.202 | Acc: 49.416 (24708/50000)
[2022-06-15 07:05:16,394] Epoch: 65
[2022-06-15 07:23:10,913] Train: Loss: 2.400 | Acc: 47.303 (606025/1281167) | Lr: 0.5732595931572279
[2022-06-15 07:24:02,472] Test: Loss: 2.134 | Acc: 51.098 (25549/50000)
[2022-06-15 07:24:02,472] Saving..
[2022-06-15 07:24:02,551] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-15 07:24:02,551] Epoch: 66
[2022-06-15 07:41:46,304] Train: Loss: 2.398 | Acc: 47.323 (606284/1281167) | Lr: 0.572443411431655
[2022-06-15 07:42:35,513] Test: Loss: 2.641 | Acc: 42.390 (21195/50000)
[2022-06-15 07:42:35,513] Epoch: 67
[2022-06-15 08:00:28,857] Train: Loss: 2.399 | Acc: 47.352 (606658/1281167) | Lr: 0.5716155593512818
[2022-06-15 08:01:17,584] Test: Loss: 2.138 | Acc: 50.622 (25311/50000)
[2022-06-15 08:01:17,584] Epoch: 68
[2022-06-15 08:19:13,847] Train: Loss: 2.395 | Acc: 47.434 (607715/1281167) | Lr: 0.5707760723785362
[2022-06-15 08:20:02,007] Test: Loss: 2.204 | Acc: 49.516 (24758/50000)
[2022-06-15 08:20:02,007] Epoch: 69
[2022-06-15 08:38:03,568] Train: Loss: 2.395 | Acc: 47.414 (607453/1281167) | Lr: 0.5699249864742459
[2022-06-15 08:38:55,012] Test: Loss: 2.533 | Acc: 44.068 (22034/50000)
[2022-06-15 08:38:55,013] Epoch: 70
[2022-06-15 08:57:07,285] Train: Loss: 2.393 | Acc: 47.422 (607551/1281167) | Lr: 0.5690623380960986
[2022-06-15 08:57:55,490] Test: Loss: 2.354 | Acc: 46.858 (23429/50000)
[2022-06-15 08:57:55,491] Epoch: 71
[2022-06-15 09:15:42,742] Train: Loss: 2.391 | Acc: 47.415 (607470/1281167) | Lr: 0.5681881641970796
[2022-06-15 09:16:30,653] Test: Loss: 2.288 | Acc: 47.948 (23974/50000)
[2022-06-15 09:16:30,653] Epoch: 72
[2022-06-15 09:34:31,849] Train: Loss: 2.392 | Acc: 47.468 (608143/1281167) | Lr: 0.5673025022238892
[2022-06-15 09:35:19,841] Test: Loss: 2.386 | Acc: 46.598 (23299/50000)
[2022-06-15 09:35:19,842] Epoch: 73
[2022-06-15 09:53:16,735] Train: Loss: 2.392 | Acc: 47.483 (608336/1281167) | Lr: 0.5664053901153387
[2022-06-15 09:54:04,697] Test: Loss: 2.322 | Acc: 47.582 (23791/50000)
[2022-06-15 09:54:04,698] Epoch: 74
[2022-06-15 10:11:55,813] Train: Loss: 2.389 | Acc: 47.576 (609534/1281167) | Lr: 0.565496866300725
[2022-06-15 10:12:44,064] Test: Loss: 2.724 | Acc: 41.012 (20506/50000)
[2022-06-15 10:12:44,064] Epoch: 75
[2022-06-15 10:30:39,959] Train: Loss: 2.386 | Acc: 47.535 (608999/1281167) | Lr: 0.5645769696981845
[2022-06-15 10:31:28,348] Test: Loss: 2.366 | Acc: 46.820 (23410/50000)
[2022-06-15 10:31:28,349] Epoch: 76
[2022-06-15 10:49:19,019] Train: Loss: 2.382 | Acc: 47.622 (610115/1281167) | Lr: 0.563645739713026
[2022-06-15 10:50:06,694] Test: Loss: 2.313 | Acc: 47.420 (23710/50000)
[2022-06-15 10:50:06,694] Epoch: 77
[2022-06-15 11:07:59,880] Train: Loss: 2.382 | Acc: 47.669 (610714/1281167) | Lr: 0.5627032162360428
[2022-06-15 11:08:48,323] Test: Loss: 2.237 | Acc: 48.920 (24460/50000)
[2022-06-15 11:08:48,324] Epoch: 78
[2022-06-15 11:26:51,513] Train: Loss: 2.385 | Acc: 47.642 (610373/1281167) | Lr: 0.5617494396418036
[2022-06-15 11:27:40,902] Test: Loss: 2.443 | Acc: 45.128 (22564/50000)
[2022-06-15 11:27:40,902] Epoch: 79
[2022-06-15 11:45:43,917] Train: Loss: 2.378 | Acc: 47.730 (611503/1281167) | Lr: 0.5607844507869232
[2022-06-15 11:46:31,765] Test: Loss: 2.286 | Acc: 48.096 (24048/50000)
[2022-06-15 11:46:31,765] Epoch: 80
[2022-06-15 12:04:24,646] Train: Loss: 2.380 | Acc: 47.652 (610507/1281167) | Lr: 0.5598082910083125
[2022-06-15 12:05:12,341] Test: Loss: 2.467 | Acc: 45.582 (22791/50000)
[2022-06-15 12:05:12,341] Epoch: 81
[2022-06-15 12:23:03,101] Train: Loss: 2.377 | Acc: 47.748 (611736/1281167) | Lr: 0.5588210021214074
[2022-06-15 12:23:51,070] Test: Loss: 2.556 | Acc: 44.808 (22404/50000)
[2022-06-15 12:23:51,070] Epoch: 82
[2022-06-15 12:41:36,226] Train: Loss: 2.377 | Acc: 47.739 (611617/1281167) | Lr: 0.5578226264183781
[2022-06-15 12:42:24,448] Test: Loss: 2.083 | Acc: 51.596 (25798/50000)
[2022-06-15 12:42:24,448] Saving..
[2022-06-15 12:42:24,529] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-15 12:42:24,529] Epoch: 83
[2022-06-15 13:00:19,415] Train: Loss: 2.373 | Acc: 47.805 (612468/1281167) | Lr: 0.5568132066663166
[2022-06-15 13:01:07,593] Test: Loss: 2.165 | Acc: 50.302 (25151/50000)
[2022-06-15 13:01:07,594] Epoch: 84
[2022-06-15 13:19:05,908] Train: Loss: 2.373 | Acc: 47.811 (612534/1281167) | Lr: 0.5557927861054056
[2022-06-15 13:19:53,309] Test: Loss: 2.862 | Acc: 39.980 (19990/50000)
[2022-06-15 13:19:53,309] Epoch: 85
[2022-06-15 13:37:56,746] Train: Loss: 2.369 | Acc: 47.913 (613849/1281167) | Lr: 0.5547614084470658
[2022-06-15 13:38:45,229] Test: Loss: 2.192 | Acc: 49.610 (24805/50000)
[2022-06-15 13:38:45,229] Epoch: 86
[2022-06-15 13:56:42,026] Train: Loss: 2.369 | Acc: 47.914 (613855/1281167) | Lr: 0.5537191178720833
[2022-06-15 13:57:35,446] Test: Loss: 2.284 | Acc: 47.906 (23953/50000)
[2022-06-15 13:57:35,446] Epoch: 87
[2022-06-15 14:15:24,972] Train: Loss: 2.367 | Acc: 47.956 (614399/1281167) | Lr: 0.5526659590287172
[2022-06-15 14:16:12,920] Test: Loss: 2.017 | Acc: 52.840 (26420/50000)
[2022-06-15 14:16:12,920] Saving..
[2022-06-15 14:16:13,004] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-15 14:16:13,004] Epoch: 88
[2022-06-15 14:34:11,567] Train: Loss: 2.367 | Acc: 47.930 (614061/1281167) | Lr: 0.5516019770307873
[2022-06-15 14:34:58,598] Test: Loss: 2.134 | Acc: 50.598 (25299/50000)
[2022-06-15 14:34:58,598] Epoch: 89
[2022-06-15 14:52:49,322] Train: Loss: 2.366 | Acc: 47.983 (614742/1281167) | Lr: 0.5505272174557411
[2022-06-15 14:53:37,473] Test: Loss: 2.264 | Acc: 48.966 (24483/50000)
[2022-06-15 14:53:37,474] Epoch: 90
[2022-06-15 15:11:31,139] Train: Loss: 2.361 | Acc: 48.020 (615221/1281167) | Lr: 0.5494417263427018
[2022-06-15 15:12:19,456] Test: Loss: 3.087 | Acc: 35.748 (17874/50000)
[2022-06-15 15:12:19,456] Epoch: 91
[2022-06-15 15:30:13,952] Train: Loss: 2.362 | Acc: 48.011 (615103/1281167) | Lr: 0.5483455501904958
[2022-06-15 15:31:02,145] Test: Loss: 2.299 | Acc: 48.014 (24007/50000)
[2022-06-15 15:31:02,146] Epoch: 92
[2022-06-15 15:48:45,019] Train: Loss: 2.361 | Acc: 48.068 (615831/1281167) | Lr: 0.5472387359556613
[2022-06-15 15:49:32,572] Test: Loss: 2.297 | Acc: 47.696 (23848/50000)
[2022-06-15 15:49:32,572] Epoch: 93
[2022-06-15 16:07:15,858] Train: Loss: 2.364 | Acc: 47.993 (614870/1281167) | Lr: 0.5461213310504361
[2022-06-15 16:08:05,441] Test: Loss: 2.124 | Acc: 51.290 (25645/50000)
[2022-06-15 16:08:05,442] Epoch: 94
[2022-06-15 16:25:53,818] Train: Loss: 2.361 | Acc: 48.022 (615242/1281167) | Lr: 0.5449933833407276
[2022-06-15 16:26:41,831] Test: Loss: 2.761 | Acc: 40.736 (20368/50000)
[2022-06-15 16:26:41,831] Epoch: 95
[2022-06-15 16:44:36,623] Train: Loss: 2.355 | Acc: 48.125 (616566/1281167) | Lr: 0.5438549411440613
[2022-06-15 16:45:24,966] Test: Loss: 2.244 | Acc: 48.396 (24198/50000)
[2022-06-15 16:45:24,967] Epoch: 96
[2022-06-15 17:03:08,977] Train: Loss: 2.353 | Acc: 48.201 (617530/1281167) | Lr: 0.542706053227512
[2022-06-15 17:03:57,343] Test: Loss: 2.101 | Acc: 51.062 (25531/50000)
[2022-06-15 17:03:57,343] Epoch: 97
[2022-06-15 17:21:47,356] Train: Loss: 2.352 | Acc: 48.196 (617466/1281167) | Lr: 0.5415467688056143
[2022-06-15 17:22:36,224] Test: Loss: 2.144 | Acc: 50.714 (25357/50000)
[2022-06-15 17:22:36,225] Epoch: 98
[2022-06-15 17:40:29,572] Train: Loss: 2.355 | Acc: 48.136 (616703/1281167) | Lr: 0.5403771375382543
[2022-06-15 17:41:18,514] Test: Loss: 2.099 | Acc: 51.654 (25827/50000)
[2022-06-15 17:41:18,515] Epoch: 99
[2022-06-15 17:59:06,254] Train: Loss: 2.351 | Acc: 48.234 (617963/1281167) | Lr: 0.5391972095285429
[2022-06-15 17:59:54,803] Test: Loss: 2.042 | Acc: 52.198 (26099/50000)
[2022-06-15 17:59:54,804] Epoch: 100
[2022-06-15 18:17:37,116] Train: Loss: 2.346 | Acc: 48.322 (619089/1281167) | Lr: 0.5380070353206687
[2022-06-15 18:18:26,684] Test: Loss: 2.269 | Acc: 48.372 (24186/50000)
[2022-06-15 18:18:26,685] Epoch: 101
[2022-06-15 18:36:18,770] Train: Loss: 2.346 | Acc: 48.283 (618581/1281167) | Lr: 0.5368066658977336
[2022-06-15 18:37:07,370] Test: Loss: 2.834 | Acc: 40.274 (20137/50000)
[2022-06-15 18:37:07,370] Epoch: 102
[2022-06-15 18:54:45,374] Train: Loss: 2.344 | Acc: 48.380 (619835/1281167) | Lr: 0.5355961526795687
[2022-06-15 18:55:34,974] Test: Loss: 2.016 | Acc: 53.042 (26521/50000)
[2022-06-15 18:55:34,974] Saving..
[2022-06-15 18:55:35,070] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-15 18:55:35,070] Epoch: 103
[2022-06-15 19:13:11,207] Train: Loss: 2.346 | Acc: 48.326 (619141/1281167) | Lr: 0.5343755475205313
[2022-06-15 19:14:00,830] Test: Loss: 2.144 | Acc: 50.500 (25250/50000)
[2022-06-15 19:14:00,831] Epoch: 104
[2022-06-15 19:31:39,856] Train: Loss: 2.341 | Acc: 48.434 (620520/1281167) | Lr: 0.5331449027072837
[2022-06-15 19:32:28,434] Test: Loss: 2.052 | Acc: 52.328 (26164/50000)
[2022-06-15 19:32:28,434] Epoch: 105
[2022-06-15 19:50:10,801] Train: Loss: 2.339 | Acc: 48.446 (620677/1281167) | Lr: 0.5319042709565539
[2022-06-15 19:50:59,474] Test: Loss: 2.359 | Acc: 47.416 (23708/50000)
[2022-06-15 19:50:59,474] Epoch: 106
[2022-06-15 20:08:47,184] Train: Loss: 2.343 | Acc: 48.420 (620342/1281167) | Lr: 0.5306537054128772
[2022-06-15 20:09:36,476] Test: Loss: 2.249 | Acc: 48.984 (24492/50000)
[2022-06-15 20:09:36,477] Epoch: 107
[2022-06-15 20:27:20,613] Train: Loss: 2.338 | Acc: 48.433 (620512/1281167) | Lr: 0.529393259646319
[2022-06-15 20:28:09,546] Test: Loss: 2.453 | Acc: 45.590 (22795/50000)
[2022-06-15 20:28:09,546] Epoch: 108
[2022-06-15 20:45:46,719] Train: Loss: 2.337 | Acc: 48.485 (621168/1281167) | Lr: 0.528122987650181
[2022-06-15 20:46:35,094] Test: Loss: 2.520 | Acc: 44.320 (22160/50000)
[2022-06-15 20:46:35,094] Epoch: 109
[2022-06-15 21:04:17,773] Train: Loss: 2.334 | Acc: 48.590 (622525/1281167) | Lr: 0.5268429438386876
[2022-06-15 21:05:06,374] Test: Loss: 2.130 | Acc: 50.586 (25293/50000)
[2022-06-15 21:05:06,374] Epoch: 110
[2022-06-15 21:22:44,342] Train: Loss: 2.336 | Acc: 48.543 (621921/1281167) | Lr: 0.5255531830446555
[2022-06-15 21:23:33,545] Test: Loss: 2.244 | Acc: 49.210 (24605/50000)
[2022-06-15 21:23:33,546] Epoch: 111
[2022-06-15 21:41:11,176] Train: Loss: 2.335 | Acc: 48.528 (621729/1281167) | Lr: 0.5242537605171443
[2022-06-15 21:42:00,560] Test: Loss: 2.129 | Acc: 51.152 (25576/50000)
[2022-06-15 21:42:00,561] Epoch: 112
[2022-06-15 21:59:42,798] Train: Loss: 2.330 | Acc: 48.629 (623025/1281167) | Lr: 0.5229447319190905
[2022-06-15 22:00:31,463] Test: Loss: 2.027 | Acc: 52.838 (26419/50000)
[2022-06-15 22:00:31,463] Epoch: 113
[2022-06-15 22:18:09,042] Train: Loss: 2.329 | Acc: 48.692 (623824/1281167) | Lr: 0.5216261533249222
[2022-06-15 22:18:57,529] Test: Loss: 2.133 | Acc: 50.636 (25318/50000)
[2022-06-15 22:18:57,529] Epoch: 114
[2022-06-15 22:36:35,995] Train: Loss: 2.329 | Acc: 48.682 (623703/1281167) | Lr: 0.5202980812181581
[2022-06-15 22:37:24,971] Test: Loss: 2.032 | Acc: 52.632 (26316/50000)
[2022-06-15 22:37:24,971] Epoch: 115
[2022-06-15 22:55:01,732] Train: Loss: 2.324 | Acc: 48.690 (623796/1281167) | Lr: 0.5189605724889867
[2022-06-15 22:55:49,796] Test: Loss: 2.413 | Acc: 45.646 (22823/50000)
[2022-06-15 22:55:49,796] Epoch: 116
[2022-06-15 23:13:30,021] Train: Loss: 2.327 | Acc: 48.723 (624218/1281167) | Lr: 0.5176136844318308
[2022-06-15 23:14:18,366] Test: Loss: 2.008 | Acc: 53.468 (26734/50000)
[2022-06-15 23:14:18,366] Saving..
[2022-06-15 23:14:18,465] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-15 23:14:18,465] Epoch: 117
[2022-06-15 23:31:58,712] Train: Loss: 2.327 | Acc: 48.667 (623510/1281167) | Lr: 0.5162574747428917
[2022-06-15 23:32:46,499] Test: Loss: 2.042 | Acc: 52.742 (26371/50000)
[2022-06-15 23:32:46,500] Epoch: 118
[2022-06-15 23:50:18,968] Train: Loss: 2.322 | Acc: 48.736 (624394/1281167) | Lr: 0.5148920015176788
[2022-06-15 23:51:06,469] Test: Loss: 2.089 | Acc: 51.428 (25714/50000)
[2022-06-15 23:51:06,469] Epoch: 119
[2022-06-16 00:08:45,551] Train: Loss: 2.322 | Acc: 48.750 (624567/1281167) | Lr: 0.5135173232485203
[2022-06-16 00:09:34,553] Test: Loss: 2.070 | Acc: 51.842 (25921/50000)
[2022-06-16 00:09:34,554] Epoch: 120
[2022-06-16 00:27:12,590] Train: Loss: 2.320 | Acc: 48.801 (625216/1281167) | Lr: 0.5121334988220579
[2022-06-16 00:28:01,208] Test: Loss: 2.110 | Acc: 51.422 (25711/50000)
[2022-06-16 00:28:01,208] Epoch: 121
[2022-06-16 00:45:47,668] Train: Loss: 2.317 | Acc: 48.912 (626639/1281167) | Lr: 0.5107405875167246
[2022-06-16 00:46:35,377] Test: Loss: 2.083 | Acc: 51.686 (25843/50000)
[2022-06-16 00:46:35,377] Epoch: 122
[2022-06-16 01:04:15,852] Train: Loss: 2.318 | Acc: 48.867 (626070/1281167) | Lr: 0.5093386490002044
[2022-06-16 01:05:04,700] Test: Loss: 2.779 | Acc: 40.670 (20335/50000)
[2022-06-16 01:05:04,700] Epoch: 123
[2022-06-16 01:22:39,237] Train: Loss: 2.319 | Acc: 48.909 (626606/1281167) | Lr: 0.5079277433268776
[2022-06-16 01:23:26,678] Test: Loss: 2.164 | Acc: 50.210 (25105/50000)
[2022-06-16 01:23:26,678] Epoch: 124
[2022-06-16 01:41:03,148] Train: Loss: 2.315 | Acc: 48.924 (626799/1281167) | Lr: 0.5065079309352473
[2022-06-16 01:41:51,404] Test: Loss: 2.026 | Acc: 52.848 (26424/50000)
[2022-06-16 01:41:51,405] Epoch: 125
[2022-06-16 01:59:30,090] Train: Loss: 2.313 | Acc: 48.944 (627056/1281167) | Lr: 0.5050792726453508
[2022-06-16 02:00:18,993] Test: Loss: 2.870 | Acc: 39.224 (19612/50000)
[2022-06-16 02:00:18,993] Epoch: 126
[2022-06-16 02:17:56,639] Train: Loss: 2.316 | Acc: 48.877 (626190/1281167) | Lr: 0.5036418296561543
[2022-06-16 02:18:44,901] Test: Loss: 2.119 | Acc: 51.562 (25781/50000)
[2022-06-16 02:18:44,902] Epoch: 127
[2022-06-16 02:36:24,365] Train: Loss: 2.308 | Acc: 49.066 (628616/1281167) | Lr: 0.5021956635429314
[2022-06-16 02:37:12,997] Test: Loss: 2.962 | Acc: 37.596 (18798/50000)
[2022-06-16 02:37:12,997] Epoch: 128
[2022-06-16 02:54:56,684] Train: Loss: 2.306 | Acc: 49.049 (628405/1281167) | Lr: 0.5007408362546251
[2022-06-16 02:55:46,301] Test: Loss: 2.199 | Acc: 50.262 (25131/50000)
[2022-06-16 02:55:46,301] Epoch: 129
[2022-06-16 03:13:25,948] Train: Loss: 2.309 | Acc: 48.971 (627404/1281167) | Lr: 0.4992774101111944
[2022-06-16 03:14:14,509] Test: Loss: 2.164 | Acc: 50.728 (25364/50000)
[2022-06-16 03:14:14,509] Epoch: 130
[2022-06-16 03:31:52,526] Train: Loss: 2.305 | Acc: 49.064 (628595/1281167) | Lr: 0.4978054478009446
[2022-06-16 03:32:39,999] Test: Loss: 1.968 | Acc: 54.196 (27098/50000)
[2022-06-16 03:32:39,999] Saving..
[2022-06-16 03:32:40,088] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-16 03:32:40,088] Epoch: 131
[2022-06-16 03:50:28,381] Train: Loss: 2.302 | Acc: 49.164 (629875/1281167) | Lr: 0.49632501237784193
[2022-06-16 03:51:17,124] Test: Loss: 2.173 | Acc: 50.162 (25081/50000)
[2022-06-16 03:51:17,125] Epoch: 132
[2022-06-16 04:09:48,506] Train: Loss: 2.302 | Acc: 49.169 (629932/1281167) | Lr: 0.49483616725881285
[2022-06-16 04:10:42,608] Test: Loss: 2.423 | Acc: 46.600 (23300/50000)
[2022-06-16 04:10:42,608] Epoch: 133
[2022-06-16 04:28:55,546] Train: Loss: 2.296 | Acc: 49.256 (631053/1281167) | Lr: 0.49333897622102685
[2022-06-16 04:29:43,266] Test: Loss: 2.321 | Acc: 48.132 (24066/50000)
[2022-06-16 04:29:43,267] Epoch: 134
[2022-06-16 04:47:58,839] Train: Loss: 2.303 | Acc: 49.081 (628815/1281167) | Lr: 0.49183350339916493
[2022-06-16 04:48:48,579] Test: Loss: 2.148 | Acc: 50.044 (25022/50000)
[2022-06-16 04:48:48,580] Epoch: 135
[2022-06-16 05:07:00,617] Train: Loss: 2.297 | Acc: 49.253 (631018/1281167) | Lr: 0.4903198132826722
[2022-06-16 05:07:50,051] Test: Loss: 2.030 | Acc: 52.810 (26405/50000)
[2022-06-16 05:07:50,051] Epoch: 136
[2022-06-16 05:26:26,253] Train: Loss: 2.296 | Acc: 49.235 (630779/1281167) | Lr: 0.4887979707129954
[2022-06-16 05:27:14,372] Test: Loss: 2.020 | Acc: 52.960 (26480/50000)
[2022-06-16 05:27:14,373] Epoch: 137
[2022-06-16 05:45:24,194] Train: Loss: 2.299 | Acc: 49.170 (629947/1281167) | Lr: 0.487268040880805
[2022-06-16 05:46:12,638] Test: Loss: 2.265 | Acc: 48.558 (24279/50000)
[2022-06-16 05:46:12,639] Epoch: 138
[2022-06-16 06:04:31,156] Train: Loss: 2.296 | Acc: 49.288 (631465/1281167) | Lr: 0.485730089323203
[2022-06-16 06:05:19,540] Test: Loss: 2.083 | Acc: 51.658 (25829/50000)
[2022-06-16 06:05:19,541] Epoch: 139
[2022-06-16 06:23:35,218] Train: Loss: 2.290 | Acc: 49.390 (632765/1281167) | Lr: 0.48418418192091556
[2022-06-16 06:24:23,696] Test: Loss: 2.516 | Acc: 44.774 (22387/50000)
[2022-06-16 06:24:23,696] Epoch: 140
[2022-06-16 06:42:34,484] Train: Loss: 2.290 | Acc: 49.447 (633503/1281167) | Lr: 0.48263038489547055
[2022-06-16 06:43:24,000] Test: Loss: 2.209 | Acc: 49.616 (24808/50000)
[2022-06-16 06:43:24,001] Epoch: 141
[2022-06-16 07:01:35,503] Train: Loss: 2.289 | Acc: 49.426 (633227/1281167) | Lr: 0.48106876480636107
[2022-06-16 07:02:24,989] Test: Loss: 2.498 | Acc: 44.818 (22409/50000)
[2022-06-16 07:02:24,990] Epoch: 142
[2022-06-16 07:20:31,074] Train: Loss: 2.287 | Acc: 49.392 (632793/1281167) | Lr: 0.47949938854819424
[2022-06-16 07:21:19,222] Test: Loss: 2.194 | Acc: 49.774 (24887/50000)
[2022-06-16 07:21:19,223] Epoch: 143
[2022-06-16 07:39:40,858] Train: Loss: 2.290 | Acc: 49.396 (632848/1281167) | Lr: 0.47792232334782575
[2022-06-16 07:40:31,659] Test: Loss: 2.338 | Acc: 47.684 (23842/50000)
[2022-06-16 07:40:31,660] Epoch: 144
[2022-06-16 07:58:46,801] Train: Loss: 2.290 | Acc: 49.366 (632462/1281167) | Lr: 0.47633763676147983
[2022-06-16 07:59:34,934] Test: Loss: 2.117 | Acc: 51.060 (25530/50000)
[2022-06-16 07:59:34,934] Epoch: 145
[2022-06-16 08:17:49,531] Train: Loss: 2.283 | Acc: 49.520 (634438/1281167) | Lr: 0.47474539667185567
[2022-06-16 08:18:54,200] Test: Loss: 2.072 | Acc: 52.200 (26100/50000)
[2022-06-16 08:18:54,201] Epoch: 146
[2022-06-16 08:37:08,970] Train: Loss: 2.279 | Acc: 49.572 (635095/1281167) | Lr: 0.4731456712852192
[2022-06-16 08:37:56,848] Test: Loss: 3.331 | Acc: 33.924 (16962/50000)
[2022-06-16 08:37:56,848] Epoch: 147
[2022-06-16 08:56:12,531] Train: Loss: 2.281 | Acc: 49.558 (634922/1281167) | Lr: 0.47153852912848176
[2022-06-16 08:57:00,695] Test: Loss: 2.083 | Acc: 51.996 (25998/50000)
[2022-06-16 08:57:00,696] Epoch: 148
[2022-06-16 09:15:12,525] Train: Loss: 2.275 | Acc: 49.653 (636137/1281167) | Lr: 0.4699240390462645
[2022-06-16 09:16:00,921] Test: Loss: 2.238 | Acc: 49.826 (24913/50000)
[2022-06-16 09:16:00,921] Epoch: 149
[2022-06-16 09:34:31,677] Train: Loss: 2.272 | Acc: 49.740 (637255/1281167) | Lr: 0.4683022701979489
[2022-06-16 09:35:20,900] Test: Loss: 2.184 | Acc: 49.934 (24967/50000)
[2022-06-16 09:35:20,900] Epoch: 150
[2022-06-16 09:53:36,164] Train: Loss: 2.272 | Acc: 49.714 (636913/1281167) | Lr: 0.4666732920547148
[2022-06-16 09:54:24,236] Test: Loss: 2.019 | Acc: 53.214 (26607/50000)
[2022-06-16 09:54:24,236] Epoch: 151
[2022-06-16 10:12:27,300] Train: Loss: 2.273 | Acc: 49.699 (636726/1281167) | Lr: 0.46503717439656433
[2022-06-16 10:13:19,385] Test: Loss: 2.117 | Acc: 50.976 (25488/50000)
[2022-06-16 10:13:19,385] Epoch: 152
[2022-06-16 10:31:44,026] Train: Loss: 2.272 | Acc: 49.669 (636346/1281167) | Lr: 0.46339398730933234
[2022-06-16 10:32:30,494] Test: Loss: 2.081 | Acc: 51.872 (25936/50000)
[2022-06-16 10:32:30,494] Epoch: 153
[2022-06-16 10:50:37,869] Train: Loss: 2.268 | Acc: 49.772 (637668/1281167) | Lr: 0.46174380118168473
[2022-06-16 10:51:24,662] Test: Loss: 1.890 | Acc: 55.468 (27734/50000)
[2022-06-16 10:51:24,662] Saving..
[2022-06-16 10:51:24,740] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-16 10:51:24,740] Epoch: 154
[2022-06-16 11:09:28,038] Train: Loss: 2.265 | Acc: 49.822 (638305/1281167) | Lr: 0.4600866867021032
[2022-06-16 11:10:14,792] Test: Loss: 1.913 | Acc: 55.044 (27522/50000)
[2022-06-16 11:10:14,792] Epoch: 155
[2022-06-16 11:28:13,704] Train: Loss: 2.263 | Acc: 49.895 (639232/1281167) | Lr: 0.45842271485585645
[2022-06-16 11:29:02,826] Test: Loss: 2.138 | Acc: 51.166 (25583/50000)
[2022-06-16 11:29:02,827] Epoch: 156
[2022-06-16 11:47:02,958] Train: Loss: 2.262 | Acc: 49.917 (639521/1281167) | Lr: 0.45675195692196036
[2022-06-16 11:47:52,926] Test: Loss: 2.012 | Acc: 52.664 (26332/50000)
[2022-06-16 11:47:52,926] Epoch: 157
[2022-06-16 12:06:02,515] Train: Loss: 2.265 | Acc: 49.883 (639090/1281167) | Lr: 0.4550744844701241
[2022-06-16 12:06:51,156] Test: Loss: 2.254 | Acc: 49.148 (24574/50000)
[2022-06-16 12:06:51,156] Epoch: 158
[2022-06-16 12:25:11,250] Train: Loss: 2.258 | Acc: 49.929 (639674/1281167) | Lr: 0.4533903693576845
[2022-06-16 12:26:05,876] Test: Loss: 2.135 | Acc: 50.964 (25482/50000)
[2022-06-16 12:26:05,877] Epoch: 159
[2022-06-16 12:44:32,495] Train: Loss: 2.262 | Acc: 49.904 (639351/1281167) | Lr: 0.4516996837265278
[2022-06-16 12:45:26,158] Test: Loss: 2.163 | Acc: 50.312 (25156/50000)
[2022-06-16 12:45:26,158] Epoch: 160
[2022-06-16 13:03:46,346] Train: Loss: 2.256 | Acc: 50.029 (640958/1281167) | Lr: 0.4500024999999993
[2022-06-16 13:04:35,917] Test: Loss: 2.257 | Acc: 49.570 (24785/50000)
[2022-06-16 13:04:35,918] Epoch: 161
[2022-06-16 13:22:59,294] Train: Loss: 2.253 | Acc: 50.071 (641495/1281167) | Lr: 0.44829889087980124
[2022-06-16 13:23:50,036] Test: Loss: 1.977 | Acc: 53.688 (26844/50000)
[2022-06-16 13:23:50,036] Epoch: 162
[2022-06-16 13:42:06,611] Train: Loss: 2.254 | Acc: 50.049 (641215/1281167) | Lr: 0.4465889293428783
[2022-06-16 13:42:55,496] Test: Loss: 2.163 | Acc: 50.334 (25167/50000)
[2022-06-16 13:42:55,497] Epoch: 163
[2022-06-16 14:01:17,984] Train: Loss: 2.250 | Acc: 50.126 (642198/1281167) | Lr: 0.44487268863829144
[2022-06-16 14:02:07,002] Test: Loss: 2.132 | Acc: 51.192 (25596/50000)
[2022-06-16 14:02:07,003] Epoch: 164
[2022-06-16 14:20:40,303] Train: Loss: 2.248 | Acc: 50.129 (642236/1281167) | Lr: 0.44315024228408056
[2022-06-16 14:21:29,749] Test: Loss: 2.092 | Acc: 51.964 (25982/50000)
[2022-06-16 14:21:29,750] Epoch: 165
[2022-06-16 14:39:56,940] Train: Loss: 2.249 | Acc: 50.158 (642602/1281167) | Lr: 0.44142166406411454
[2022-06-16 14:40:49,826] Test: Loss: 1.937 | Acc: 54.696 (27348/50000)
[2022-06-16 14:40:49,826] Epoch: 166
[2022-06-16 14:59:09,973] Train: Loss: 2.245 | Acc: 50.209 (643258/1281167) | Lr: 0.4396870280249311
[2022-06-16 14:59:58,662] Test: Loss: 2.239 | Acc: 48.972 (24486/50000)
[2022-06-16 14:59:58,663] Epoch: 167
[2022-06-16 15:18:17,580] Train: Loss: 2.244 | Acc: 50.249 (643778/1281167) | Lr: 0.437946408472565
[2022-06-16 15:19:07,230] Test: Loss: 2.168 | Acc: 50.316 (25158/50000)
[2022-06-16 15:19:07,230] Epoch: 168
[2022-06-16 15:37:46,889] Train: Loss: 2.244 | Acc: 50.213 (643310/1281167) | Lr: 0.43619987996936466
[2022-06-16 15:38:34,859] Test: Loss: 2.040 | Acc: 52.726 (26363/50000)
[2022-06-16 15:38:34,859] Epoch: 169
[2022-06-16 15:57:19,459] Train: Loss: 2.242 | Acc: 50.291 (644317/1281167) | Lr: 0.4344475173307981
[2022-06-16 15:58:07,343] Test: Loss: 2.145 | Acc: 51.002 (25501/50000)
[2022-06-16 15:58:07,343] Epoch: 170
[2022-06-16 16:17:01,170] Train: Loss: 2.240 | Acc: 50.290 (644296/1281167) | Lr: 0.4326893956222486
[2022-06-16 16:17:48,964] Test: Loss: 1.958 | Acc: 54.614 (27307/50000)
[2022-06-16 16:17:48,964] Epoch: 171
[2022-06-16 16:36:26,381] Train: Loss: 2.238 | Acc: 50.392 (645600/1281167) | Lr: 0.4309255901557986
[2022-06-16 16:37:25,516] Test: Loss: 2.185 | Acc: 50.214 (25107/50000)
[2022-06-16 16:37:25,516] Epoch: 172
[2022-06-16 16:55:58,868] Train: Loss: 2.235 | Acc: 50.429 (646081/1281167) | Lr: 0.4291561764870039
[2022-06-16 16:56:48,741] Test: Loss: 2.162 | Acc: 49.810 (24905/50000)
[2022-06-16 16:56:48,742] Epoch: 173
[2022-06-16 17:15:41,794] Train: Loss: 2.234 | Acc: 50.409 (645825/1281167) | Lr: 0.42738123041165693
[2022-06-16 17:16:34,728] Test: Loss: 1.887 | Acc: 55.612 (27806/50000)
[2022-06-16 17:16:34,728] Saving..
[2022-06-16 17:16:34,813] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-16 17:16:34,813] Epoch: 174
[2022-06-16 17:35:16,009] Train: Loss: 2.232 | Acc: 50.427 (646051/1281167) | Lr: 0.4256008279625401
[2022-06-16 17:36:04,507] Test: Loss: 2.239 | Acc: 49.634 (24817/50000)
[2022-06-16 17:36:04,507] Epoch: 175
[2022-06-16 17:54:34,859] Train: Loss: 2.230 | Acc: 50.551 (647649/1281167) | Lr: 0.4238150454061688
[2022-06-16 17:55:23,781] Test: Loss: 1.972 | Acc: 54.060 (27030/50000)
[2022-06-16 17:55:23,781] Epoch: 176
[2022-06-16 18:13:48,591] Train: Loss: 2.223 | Acc: 50.663 (649077/1281167) | Lr: 0.4220239592395241
[2022-06-16 18:14:48,535] Test: Loss: 2.019 | Acc: 53.088 (26544/50000)
[2022-06-16 18:14:48,536] Epoch: 177
[2022-06-16 18:33:28,380] Train: Loss: 2.225 | Acc: 50.641 (648790/1281167) | Lr: 0.4202276461867761
[2022-06-16 18:34:16,630] Test: Loss: 1.897 | Acc: 55.338 (27669/50000)
[2022-06-16 18:34:16,630] Epoch: 178
[2022-06-16 18:52:46,600] Train: Loss: 2.227 | Acc: 50.596 (648214/1281167) | Lr: 0.4184261831959976
[2022-06-16 18:53:33,616] Test: Loss: 2.188 | Acc: 50.526 (25263/50000)
[2022-06-16 18:53:33,616] Epoch: 179
[2022-06-16 19:12:06,752] Train: Loss: 2.223 | Acc: 50.661 (649053/1281167) | Lr: 0.4166196474358673
[2022-06-16 19:12:52,977] Test: Loss: 2.225 | Acc: 49.318 (24659/50000)
[2022-06-16 19:12:52,977] Epoch: 180
[2022-06-16 19:31:23,954] Train: Loss: 2.220 | Acc: 50.676 (649250/1281167) | Lr: 0.4148081162923645
[2022-06-16 19:32:11,566] Test: Loss: 2.016 | Acc: 52.872 (26436/50000)
[2022-06-16 19:32:11,566] Epoch: 181
[2022-06-16 19:50:37,787] Train: Loss: 2.217 | Acc: 50.754 (650245/1281167) | Lr: 0.4129916673654542
[2022-06-16 19:51:30,905] Test: Loss: 2.275 | Acc: 48.738 (24369/50000)
[2022-06-16 19:51:30,905] Epoch: 182
[2022-06-16 20:09:54,282] Train: Loss: 2.215 | Acc: 50.788 (650677/1281167) | Lr: 0.4111703784657627
[2022-06-16 20:10:47,378] Test: Loss: 2.227 | Acc: 49.258 (24629/50000)
[2022-06-16 20:10:47,379] Epoch: 183
[2022-06-16 20:29:13,823] Train: Loss: 2.214 | Acc: 50.840 (651347/1281167) | Lr: 0.409344327611245
[2022-06-16 20:30:05,959] Test: Loss: 1.926 | Acc: 54.790 (27395/50000)
[2022-06-16 20:30:05,960] Epoch: 184
[2022-06-16 20:48:44,056] Train: Loss: 2.210 | Acc: 50.941 (652641/1281167) | Lr: 0.4075135930238419
[2022-06-16 20:49:32,337] Test: Loss: 1.861 | Acc: 56.172 (28086/50000)
[2022-06-16 20:49:32,337] Saving..
[2022-06-16 20:49:32,424] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-16 20:49:32,424] Epoch: 185
[2022-06-16 21:08:02,733] Train: Loss: 2.211 | Acc: 50.809 (650947/1281167) | Lr: 0.40567825312612993
[2022-06-16 21:08:51,954] Test: Loss: 1.939 | Acc: 54.654 (27327/50000)
[2022-06-16 21:08:51,954] Epoch: 186
[2022-06-16 21:27:19,159] Train: Loss: 2.209 | Acc: 50.913 (652283/1281167) | Lr: 0.403838386537962
[2022-06-16 21:28:17,072] Test: Loss: 2.043 | Acc: 52.896 (26448/50000)
[2022-06-16 21:28:17,072] Epoch: 187
[2022-06-16 21:46:52,749] Train: Loss: 2.206 | Acc: 50.981 (653157/1281167) | Lr: 0.4019940720730991
[2022-06-16 21:47:43,206] Test: Loss: 1.986 | Acc: 53.784 (26892/50000)
[2022-06-16 21:47:43,206] Epoch: 188
[2022-06-16 22:06:06,042] Train: Loss: 2.201 | Acc: 51.020 (653646/1281167) | Lr: 0.4001453887358346
[2022-06-16 22:06:58,738] Test: Loss: 1.946 | Acc: 54.744 (27372/50000)
[2022-06-16 22:06:58,738] Epoch: 189
[2022-06-16 22:25:20,738] Train: Loss: 2.204 | Acc: 51.008 (653496/1281167) | Lr: 0.39829241571760976
[2022-06-16 22:26:11,196] Test: Loss: 2.002 | Acc: 53.370 (26685/50000)
[2022-06-16 22:26:11,197] Epoch: 190
[2022-06-16 22:44:44,017] Train: Loss: 2.203 | Acc: 51.027 (653737/1281167) | Lr: 0.3964352323936215
[2022-06-16 22:45:36,214] Test: Loss: 2.538 | Acc: 45.166 (22583/50000)
[2022-06-16 22:45:36,215] Epoch: 191
[2022-06-16 23:04:10,118] Train: Loss: 2.198 | Acc: 51.153 (655355/1281167) | Lr: 0.39457391831942223
[2022-06-16 23:04:58,184] Test: Loss: 2.003 | Acc: 53.118 (26559/50000)
[2022-06-16 23:04:58,184] Epoch: 192
[2022-06-16 23:23:35,628] Train: Loss: 2.196 | Acc: 51.162 (655468/1281167) | Lr: 0.3927085532275119
[2022-06-16 23:24:23,482] Test: Loss: 2.003 | Acc: 53.438 (26719/50000)
[2022-06-16 23:24:23,482] Epoch: 193
[2022-06-16 23:43:15,103] Train: Loss: 2.196 | Acc: 51.143 (655230/1281167) | Lr: 0.39083921702392277
[2022-06-16 23:44:02,900] Test: Loss: 2.026 | Acc: 53.368 (26684/50000)
[2022-06-16 23:44:02,900] Epoch: 194
[2022-06-17 00:02:51,718] Train: Loss: 2.193 | Acc: 51.188 (655798/1281167) | Lr: 0.388965989784796
[2022-06-17 00:03:48,014] Test: Loss: 1.931 | Acc: 54.904 (27452/50000)
[2022-06-17 00:03:48,014] Epoch: 195
[2022-06-17 00:22:11,734] Train: Loss: 2.190 | Acc: 51.276 (656935/1281167) | Lr: 0.38708895175295205
[2022-06-17 00:23:00,132] Test: Loss: 1.909 | Acc: 55.058 (27529/50000)
[2022-06-17 00:23:00,132] Epoch: 196
[2022-06-17 00:41:28,496] Train: Loss: 2.188 | Acc: 51.332 (657644/1281167) | Lr: 0.3852081833344529
[2022-06-17 00:42:17,388] Test: Loss: 1.929 | Acc: 55.196 (27598/50000)
[2022-06-17 00:42:17,389] Epoch: 197
[2022-06-17 01:00:45,920] Train: Loss: 2.188 | Acc: 51.247 (656560/1281167) | Lr: 0.38332376509515786
[2022-06-17 01:01:37,239] Test: Loss: 2.458 | Acc: 45.792 (22896/50000)
[2022-06-17 01:01:37,239] Epoch: 198
[2022-06-17 01:19:55,531] Train: Loss: 2.182 | Acc: 51.437 (658998/1281167) | Lr: 0.3814357777572725
[2022-06-17 01:20:57,540] Test: Loss: 2.016 | Acc: 52.902 (26451/50000)
[2022-06-17 01:20:57,541] Epoch: 199
[2022-06-17 01:39:18,181] Train: Loss: 2.179 | Acc: 51.487 (659636/1281167) | Lr: 0.37954430219589075
[2022-06-17 01:40:09,275] Test: Loss: 2.110 | Acc: 51.732 (25866/50000)
[2022-06-17 01:40:09,275] Epoch: 200
[2022-06-17 01:58:25,362] Train: Loss: 2.182 | Acc: 51.400 (658519/1281167) | Lr: 0.37764941943553026
[2022-06-17 01:59:16,034] Test: Loss: 1.885 | Acc: 55.812 (27906/50000)
[2022-06-17 01:59:16,035] Epoch: 201
[2022-06-17 02:17:46,105] Train: Loss: 2.178 | Acc: 51.447 (659119/1281167) | Lr: 0.37575121064666184
[2022-06-17 02:18:35,454] Test: Loss: 2.200 | Acc: 50.360 (25180/50000)
[2022-06-17 02:18:35,454] Epoch: 202
[2022-06-17 02:37:08,949] Train: Loss: 2.172 | Acc: 51.598 (661051/1281167) | Lr: 0.37384975714223234
[2022-06-17 02:37:58,227] Test: Loss: 2.415 | Acc: 46.828 (23414/50000)
[2022-06-17 02:37:58,227] Epoch: 203
[2022-06-17 02:56:33,750] Train: Loss: 2.173 | Acc: 51.565 (660630/1281167) | Lr: 0.37194514037418125
[2022-06-17 02:57:22,106] Test: Loss: 1.916 | Acc: 55.292 (27646/50000)
[2022-06-17 02:57:22,107] Epoch: 204
[2022-06-17 03:15:37,758] Train: Loss: 2.170 | Acc: 51.679 (662089/1281167) | Lr: 0.3700374419299519
[2022-06-17 03:16:27,889] Test: Loss: 1.882 | Acc: 55.674 (27837/50000)
[2022-06-17 03:16:27,890] Epoch: 205
[2022-06-17 03:35:08,816] Train: Loss: 2.171 | Acc: 51.598 (661054/1281167) | Lr: 0.3681267435289963
[2022-06-17 03:35:59,024] Test: Loss: 1.930 | Acc: 55.022 (27511/50000)
[2022-06-17 03:35:59,024] Epoch: 206
[2022-06-17 03:54:29,302] Train: Loss: 2.169 | Acc: 51.687 (662196/1281167) | Lr: 0.3662131270192749
[2022-06-17 03:55:24,712] Test: Loss: 2.183 | Acc: 49.990 (24995/50000)
[2022-06-17 03:55:24,712] Epoch: 207
[2022-06-17 04:13:48,843] Train: Loss: 2.168 | Acc: 51.672 (662010/1281167) | Lr: 0.3642966743737495
[2022-06-17 04:14:42,800] Test: Loss: 1.723 | Acc: 58.512 (29256/50000)
[2022-06-17 04:14:42,801] Saving..
[2022-06-17 04:14:42,889] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-17 04:14:42,889] Epoch: 208
[2022-06-17 04:33:15,775] Train: Loss: 2.161 | Acc: 51.746 (662951/1281167) | Lr: 0.36237746768687323
[2022-06-17 04:34:03,965] Test: Loss: 1.942 | Acc: 54.672 (27336/50000)
[2022-06-17 04:34:03,966] Epoch: 209
[2022-06-17 04:52:27,548] Train: Loss: 2.163 | Acc: 51.826 (663975/1281167) | Lr: 0.360455589171073
[2022-06-17 04:53:15,177] Test: Loss: 1.832 | Acc: 56.542 (28271/50000)
[2022-06-17 04:53:15,178] Epoch: 210
[2022-06-17 05:11:59,448] Train: Loss: 2.162 | Acc: 51.829 (664021/1281167) | Lr: 0.358531121153228
[2022-06-17 05:12:46,158] Test: Loss: 1.956 | Acc: 54.416 (27208/50000)
[2022-06-17 05:12:46,159] Epoch: 211
[2022-06-17 05:31:13,291] Train: Loss: 2.158 | Acc: 51.907 (665013/1281167) | Lr: 0.3566041460711427
[2022-06-17 05:32:02,012] Test: Loss: 1.888 | Acc: 55.590 (27795/50000)
[2022-06-17 05:32:02,013] Epoch: 212
[2022-06-17 05:50:46,481] Train: Loss: 2.151 | Acc: 51.992 (666106/1281167) | Lr: 0.35467474647001634
[2022-06-17 05:51:35,320] Test: Loss: 2.011 | Acc: 53.204 (26602/50000)
[2022-06-17 05:51:35,320] Epoch: 213
[2022-06-17 06:10:22,990] Train: Loss: 2.154 | Acc: 51.964 (665740/1281167) | Lr: 0.3527430049989062
[2022-06-17 06:11:11,952] Test: Loss: 2.228 | Acc: 49.580 (24790/50000)
[2022-06-17 06:11:11,952] Epoch: 214
[2022-06-17 06:29:40,835] Train: Loss: 2.153 | Acc: 52.023 (666498/1281167) | Lr: 0.3508090044071877
[2022-06-17 06:30:29,735] Test: Loss: 1.764 | Acc: 57.968 (28984/50000)
[2022-06-17 06:30:29,736] Epoch: 215
[2022-06-17 06:49:22,083] Train: Loss: 2.148 | Acc: 52.104 (667539/1281167) | Lr: 0.34887282754100923
[2022-06-17 06:50:10,289] Test: Loss: 1.967 | Acc: 54.106 (27053/50000)
[2022-06-17 06:50:10,290] Epoch: 216
[2022-06-17 07:08:44,776] Train: Loss: 2.147 | Acc: 52.089 (667349/1281167) | Lr: 0.3469345573397436
[2022-06-17 07:09:36,763] Test: Loss: 1.836 | Acc: 56.470 (28235/50000)
[2022-06-17 07:09:36,764] Epoch: 217
[2022-06-17 07:28:23,689] Train: Loss: 2.142 | Acc: 52.142 (668021/1281167) | Lr: 0.3449942768324353
[2022-06-17 07:29:09,819] Test: Loss: 1.800 | Acc: 57.184 (28592/50000)
[2022-06-17 07:29:09,819] Epoch: 218
[2022-06-17 07:48:09,495] Train: Loss: 2.145 | Acc: 52.094 (667405/1281167) | Lr: 0.34305206913424346
[2022-06-17 07:48:57,035] Test: Loss: 1.932 | Acc: 54.540 (27270/50000)
[2022-06-17 07:48:57,035] Epoch: 219
[2022-06-17 08:07:45,139] Train: Loss: 2.138 | Acc: 52.193 (668680/1281167) | Lr: 0.3411080174428815
[2022-06-17 08:08:35,417] Test: Loss: 1.813 | Acc: 57.600 (28800/50000)
[2022-06-17 08:08:35,417] Epoch: 220
[2022-06-17 08:27:04,540] Train: Loss: 2.136 | Acc: 52.330 (670438/1281167) | Lr: 0.3391622050350539
[2022-06-17 08:27:54,213] Test: Loss: 1.936 | Acc: 54.588 (27294/50000)
[2022-06-17 08:27:54,213] Epoch: 221
[2022-06-17 08:46:43,474] Train: Loss: 2.130 | Acc: 52.393 (671244/1281167) | Lr: 0.3372147152628879
[2022-06-17 08:47:33,252] Test: Loss: 1.933 | Acc: 55.060 (27530/50000)
[2022-06-17 08:47:33,253] Epoch: 222
[2022-06-17 09:06:01,959] Train: Loss: 2.133 | Acc: 52.365 (670889/1281167) | Lr: 0.33526563155036354
[2022-06-17 09:06:49,444] Test: Loss: 1.853 | Acc: 56.504 (28252/50000)
[2022-06-17 09:06:49,445] Epoch: 223
[2022-06-17 09:25:23,035] Train: Loss: 2.127 | Acc: 52.486 (672438/1281167) | Lr: 0.33331503738974005
[2022-06-17 09:26:11,984] Test: Loss: 1.939 | Acc: 54.914 (27457/50000)
[2022-06-17 09:26:11,984] Epoch: 224
[2022-06-17 09:44:39,089] Train: Loss: 2.125 | Acc: 52.470 (672228/1281167) | Lr: 0.33136301633797927
[2022-06-17 09:45:27,054] Test: Loss: 1.877 | Acc: 55.772 (27886/50000)
[2022-06-17 09:45:27,054] Epoch: 225
[2022-06-17 10:03:51,426] Train: Loss: 2.124 | Acc: 52.540 (673122/1281167) | Lr: 0.3294096520131662
[2022-06-17 10:04:39,949] Test: Loss: 1.719 | Acc: 58.950 (29475/50000)
[2022-06-17 10:04:39,950] Saving..
[2022-06-17 10:04:40,070] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-17 10:04:40,071] Epoch: 226
[2022-06-17 10:23:17,404] Train: Loss: 2.120 | Acc: 52.569 (673501/1281167) | Lr: 0.327455028090927
[2022-06-17 10:24:10,601] Test: Loss: 1.921 | Acc: 54.880 (27440/50000)
[2022-06-17 10:24:10,601] Epoch: 227
[2022-06-17 10:42:38,278] Train: Loss: 2.123 | Acc: 52.534 (673050/1281167) | Lr: 0.32549922830084527
[2022-06-17 10:43:30,587] Test: Loss: 1.795 | Acc: 57.354 (28677/50000)
[2022-06-17 10:43:30,587] Epoch: 228
[2022-06-17 11:02:04,076] Train: Loss: 2.116 | Acc: 52.664 (674715/1281167) | Lr: 0.3235423364228745
[2022-06-17 11:02:56,737] Test: Loss: 1.863 | Acc: 56.640 (28320/50000)
[2022-06-17 11:02:56,737] Epoch: 229
[2022-06-17 11:21:23,932] Train: Loss: 2.117 | Acc: 52.607 (673978/1281167) | Lr: 0.3215844362837498
[2022-06-17 11:22:15,685] Test: Loss: 1.760 | Acc: 58.168 (29084/50000)
[2022-06-17 11:22:15,685] Epoch: 230
[2022-06-17 11:41:03,415] Train: Loss: 2.110 | Acc: 52.784 (676253/1281167) | Lr: 0.31962561175339643
[2022-06-17 11:41:53,651] Test: Loss: 1.750 | Acc: 58.368 (29184/50000)
[2022-06-17 11:41:53,651] Epoch: 231
[2022-06-17 12:00:21,786] Train: Loss: 2.110 | Acc: 52.760 (675942/1281167) | Lr: 0.3176659467413381
[2022-06-17 12:01:14,191] Test: Loss: 1.741 | Acc: 58.650 (29325/50000)
[2022-06-17 12:01:14,192] Epoch: 232
[2022-06-17 12:19:53,875] Train: Loss: 2.106 | Acc: 52.892 (677641/1281167) | Lr: 0.3157055251931016
[2022-06-17 12:20:44,994] Test: Loss: 1.741 | Acc: 58.928 (29464/50000)
[2022-06-17 12:20:44,995] Epoch: 233
[2022-06-17 12:39:09,432] Train: Loss: 2.105 | Acc: 52.902 (677759/1281167) | Lr: 0.3137444310866212
[2022-06-17 12:39:58,962] Test: Loss: 1.748 | Acc: 58.322 (29161/50000)
[2022-06-17 12:39:58,962] Epoch: 234
[2022-06-17 12:58:34,799] Train: Loss: 2.102 | Acc: 52.885 (677544/1281167) | Lr: 0.31178274842864145
[2022-06-17 12:59:24,966] Test: Loss: 1.750 | Acc: 58.204 (29102/50000)
[2022-06-17 12:59:24,966] Epoch: 235
[2022-06-17 13:18:15,917] Train: Loss: 2.099 | Acc: 53.045 (679590/1281167) | Lr: 0.30982056125111845
[2022-06-17 13:19:09,473] Test: Loss: 1.952 | Acc: 54.688 (27344/50000)
[2022-06-17 13:19:09,473] Epoch: 236
[2022-06-17 13:37:36,843] Train: Loss: 2.096 | Acc: 53.108 (680405/1281167) | Lr: 0.3078579536076201
[2022-06-17 13:38:34,575] Test: Loss: 1.717 | Acc: 58.852 (29426/50000)
[2022-06-17 13:38:34,576] Epoch: 237
[2022-06-17 13:57:22,541] Train: Loss: 2.094 | Acc: 53.139 (680802/1281167) | Lr: 0.30589500956972593
[2022-06-17 13:58:17,720] Test: Loss: 1.848 | Acc: 56.472 (28236/50000)
[2022-06-17 13:58:17,721] Epoch: 238
[2022-06-17 14:16:42,449] Train: Loss: 2.095 | Acc: 53.102 (680323/1281167) | Lr: 0.3039318132234252
[2022-06-17 14:17:34,853] Test: Loss: 1.735 | Acc: 58.522 (29261/50000)
[2022-06-17 14:17:34,853] Epoch: 239
[2022-06-17 14:36:12,585] Train: Loss: 2.091 | Acc: 53.202 (681609/1281167) | Lr: 0.3019684486655154
[2022-06-17 14:37:05,504] Test: Loss: 2.173 | Acc: 50.430 (25215/50000)
[2022-06-17 14:37:05,505] Epoch: 240
[2022-06-17 14:55:32,876] Train: Loss: 2.058 | Acc: 53.758 (688730/1281167) | Lr: 0.30000499999999974
[2022-06-17 14:56:24,145] Test: Loss: 1.729 | Acc: 58.564 (29282/50000)
[2022-06-17 14:56:24,146] Epoch: 241
[2022-06-17 15:15:13,008] Train: Loss: 2.051 | Acc: 53.899 (690538/1281167) | Lr: 0.29804155133448396
[2022-06-17 15:15:57,791] Test: Loss: 1.939 | Acc: 54.880 (27440/50000)
[2022-06-17 15:15:57,793] Epoch: 242
[2022-06-17 15:34:46,382] Train: Loss: 2.046 | Acc: 54.038 (692322/1281167) | Lr: 0.29607818677657416
[2022-06-17 15:35:31,057] Test: Loss: 1.826 | Acc: 57.166 (28583/50000)
[2022-06-17 15:35:31,057] Epoch: 243
[2022-06-17 15:54:00,171] Train: Loss: 2.042 | Acc: 54.126 (693439/1281167) | Lr: 0.29411499043027345
[2022-06-17 15:54:47,966] Test: Loss: 1.815 | Acc: 57.272 (28636/50000)
[2022-06-17 15:54:47,967] Epoch: 244
[2022-06-17 16:13:11,971] Train: Loss: 2.038 | Acc: 54.182 (694159/1281167) | Lr: 0.2921520463923793
[2022-06-17 16:14:01,776] Test: Loss: 1.748 | Acc: 58.338 (29169/50000)
[2022-06-17 16:14:01,777] Epoch: 245
[2022-06-17 16:32:52,897] Train: Loss: 2.030 | Acc: 54.364 (696492/1281167) | Lr: 0.290189438748881
[2022-06-17 16:33:42,907] Test: Loss: 1.841 | Acc: 56.872 (28436/50000)
[2022-06-17 16:33:42,908] Epoch: 246
[2022-06-17 16:52:16,448] Train: Loss: 2.031 | Acc: 54.313 (695838/1281167) | Lr: 0.2882272515713579
[2022-06-17 16:53:04,642] Test: Loss: 1.693 | Acc: 59.202 (29601/50000)
[2022-06-17 16:53:04,643] Saving..
[2022-06-17 16:53:04,726] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-17 16:53:04,727] Epoch: 247
[2022-06-17 17:11:40,686] Train: Loss: 2.026 | Acc: 54.391 (696838/1281167) | Lr: 0.2862655689133781
[2022-06-17 17:12:33,855] Test: Loss: 1.622 | Acc: 60.990 (30495/50000)
[2022-06-17 17:12:33,856] Saving..
[2022-06-17 17:12:33,937] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-17 17:12:33,938] Epoch: 248
[2022-06-17 17:31:09,223] Train: Loss: 2.022 | Acc: 54.457 (697686/1281167) | Lr: 0.2843044748068978
[2022-06-17 17:31:58,243] Test: Loss: 1.735 | Acc: 58.792 (29396/50000)
[2022-06-17 17:31:58,245] Epoch: 249
[2022-06-17 17:50:25,595] Train: Loss: 2.019 | Acc: 54.554 (698933/1281167) | Lr: 0.2823440532586613
[2022-06-17 17:51:15,132] Test: Loss: 1.897 | Acc: 55.496 (27748/50000)
[2022-06-17 17:51:15,133] Epoch: 250
[2022-06-17 18:09:52,792] Train: Loss: 2.017 | Acc: 54.595 (699453/1281167) | Lr: 0.280384388246603
[2022-06-17 18:10:42,593] Test: Loss: 1.686 | Acc: 59.560 (29780/50000)
[2022-06-17 18:10:42,595] Epoch: 251
[2022-06-17 18:29:09,749] Train: Loss: 2.009 | Acc: 54.738 (701282/1281167) | Lr: 0.27842556371624966
[2022-06-17 18:29:58,890] Test: Loss: 1.717 | Acc: 58.986 (29493/50000)
[2022-06-17 18:29:58,890] Epoch: 252
[2022-06-17 18:48:31,157] Train: Loss: 2.010 | Acc: 54.727 (701148/1281167) | Lr: 0.27646766357712493
[2022-06-17 18:49:21,330] Test: Loss: 1.622 | Acc: 61.110 (30555/50000)
[2022-06-17 18:49:21,331] Saving..
[2022-06-17 18:49:21,418] * Saved checkpoint to ./results/14104552/FENet_imagenet.t7
[2022-06-17 18:49:21,419] Epoch: 253
[2022-06-17 19:07:55,487] Train: Loss: 2.001 | Acc: 54.895 (703297/1281167) | Lr: 0.2745107716991541
[2022-06-17 19:08:44,841] Test: Loss: 1.720 | Acc: 59.076 (29538/50000)
[2022-06-17 19:08:44,842] Epoch: 254
[2022-06-17 19:27:32,454] Train: Loss: 2.005 | Acc: 54.844 (702638/1281167) | Lr: 0.27255497190907235
[2022-06-17 19:28:27,051] Test: Loss: 1.848 | Acc: 56.620 (28310/50000)
[2022-06-17 19:28:27,052] Epoch: 255
[2022-06-17 19:47:16,353] Train: Loss: 2.002 | Acc: 54.839 (702579/1281167) | Lr: 0.2706003479868332
[2022-06-17 19:48:10,007] Test: Loss: 1.644 | Acc: 60.582 (30291/50000)
[2022-06-17 19:48:10,008] Epoch: 256
[2022-06-17 20:06:53,407] Train: Loss: 1.998 | Acc: 54.924 (703667/1281167) | Lr: 0.2686469836620201