FENet_imagenet.log
[2022-06-14 09:10:54,700] Namespace(auto_augment=True, batch_size=1024, data_dir='/dataset/public/ImageNetOrigin/', epoch=480, lr=0.6, mode='Train', nesterov=True, reduction=1.0, results_dir='./results/', resume=None)
[2022-06-14 09:10:54,700] ==> Preparing data..
[2022-06-14 09:11:01,410] Training / Testing data number: 50000 / 1281167
[2022-06-14 09:11:01,411] Using path: ./results/14091054/
[2022-06-14 09:11:01,411] ==> Building model..
[2022-06-14 09:11:04,586] DataParallel(
(module): FENet(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ibssl): IBSSL(
(conv1): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibpool): IBPool(
(conv1): Conv2d(16, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(conv2): Conv2d(160, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(feblock1): FEBlock3n2s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(8, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(48, 8, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(96, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibpool): IBPool(
(conv1): Conv2d(32, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(conv2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(feblock2): FEBlock4n2s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(8, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(48, 8, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(96, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_3): ResIBSSL(
(conv1): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibpool): IBPool(
(conv1): Conv2d(64, 768, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(conv2): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(feblock3): FEBlock4n1s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(96, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_3): ResIBSSL(
(conv1): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibssl): IBSSL(
(conv1): Conv2d(128, 768, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(feblock4): FEBlock4n2s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(96, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_3): ResIBSSL(
(conv1): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibpool): IBPool(
(conv1): Conv2d(128, 1536, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(1536, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(conv2): Conv2d(1536, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(feblock5): FEBlock3n1s(
(resibssl_1): ResIBSSL(
(conv1): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(resibssl_2): ResIBSSL(
(conv1): Conv2d(128, 768, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ibssl): IBSSL(
(conv1): Conv2d(256, 1536, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(1536, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shift2): SSL2d()
(conv2): Conv2d(1536, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(conv2): Conv2d(256, 1932, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(1932, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(gap): AdaptiveAvgPool2d(output_size=(1, 1))
(dropout): Dropout(p=0.2, inplace=False)
(fc): Conv2d(1932, 1000, kernel_size=(1, 1), stride=(1, 1))
)
)
[2022-06-14 09:11:04,595] Epoch: 0
[2022-06-14 09:27:42,148] Train: Loss: 5.780 | Acc: 4.340 (55609/1281167) | Lr: 0.6
[2022-06-14 09:28:26,033] Test: Loss: 5.324 | Acc: 7.348 (3674/50000)
[2022-06-14 09:28:26,033] Saving..
[2022-06-14 09:28:26,133] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 09:28:26,133] Epoch: 1
[2022-06-14 09:44:57,546] Train: Loss: 4.409 | Acc: 15.950 (204351/1281167) | Lr: 0.5999935746063304
[2022-06-14 09:45:39,562] Test: Loss: 4.024 | Acc: 19.648 (9824/50000)
[2022-06-14 09:45:39,562] Saving..
[2022-06-14 09:45:39,632] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 09:45:39,632] Epoch: 2
[2022-06-14 10:02:37,675] Train: Loss: 3.863 | Acc: 23.286 (298336/1281167) | Lr: 0.5999742987005642
[2022-06-14 10:03:20,447] Test: Loss: 3.732 | Acc: 24.910 (12455/50000)
[2022-06-14 10:03:20,447] Saving..
[2022-06-14 10:03:20,525] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 10:03:20,525] Epoch: 3
[2022-06-14 10:20:21,975] Train: Loss: 3.543 | Acc: 28.110 (360136/1281167) | Lr: 0.599942173108417
[2022-06-14 10:21:07,375] Test: Loss: 3.507 | Acc: 27.184 (13592/50000)
[2022-06-14 10:21:07,375] Saving..
[2022-06-14 10:21:07,445] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 10:21:07,446] Epoch: 4
[2022-06-14 10:38:07,584] Train: Loss: 3.344 | Acc: 31.124 (398751/1281167) | Lr: 0.5998971992060422
[2022-06-14 10:38:51,800] Test: Loss: 3.551 | Acc: 27.488 (13744/50000)
[2022-06-14 10:38:51,801] Saving..
[2022-06-14 10:38:51,866] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 10:38:51,866] Epoch: 5
[2022-06-14 10:55:57,373] Train: Loss: 3.215 | Acc: 33.172 (424984/1281167) | Lr: 0.5998393789199723
[2022-06-14 10:56:40,344] Test: Loss: 3.385 | Acc: 30.018 (15009/50000)
[2022-06-14 10:56:40,344] Saving..
[2022-06-14 10:56:40,494] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 10:56:40,494] Epoch: 6
[2022-06-14 11:14:49,225] Train: Loss: 3.116 | Acc: 34.860 (446609/1281167) | Lr: 0.5997687147270356
[2022-06-14 11:15:33,228] Test: Loss: 3.019 | Acc: 34.836 (17418/50000)
[2022-06-14 11:15:33,229] Saving..
[2022-06-14 11:15:33,313] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 11:15:33,313] Epoch: 7
[2022-06-14 11:32:15,629] Train: Loss: 3.045 | Acc: 36.067 (462079/1281167) | Lr: 0.5996852096542512
[2022-06-14 11:32:56,835] Test: Loss: 3.185 | Acc: 32.452 (16226/50000)
[2022-06-14 11:32:56,835] Epoch: 8
[2022-06-14 11:49:56,155] Train: Loss: 2.988 | Acc: 36.932 (473157/1281167) | Lr: 0.5995888672786983
[2022-06-14 11:50:39,194] Test: Loss: 3.293 | Acc: 31.002 (15501/50000)
[2022-06-14 11:50:39,195] Epoch: 9
[2022-06-14 12:07:38,031] Train: Loss: 2.943 | Acc: 37.720 (483257/1281167) | Lr: 0.5994796917273638
[2022-06-14 12:08:24,960] Test: Loss: 3.107 | Acc: 34.154 (17077/50000)
[2022-06-14 12:08:24,961] Epoch: 10
[2022-06-14 12:25:00,189] Train: Loss: 2.905 | Acc: 38.370 (491583/1281167) | Lr: 0.5993576876769647
[2022-06-14 12:25:42,683] Test: Loss: 2.846 | Acc: 37.692 (18846/50000)
[2022-06-14 12:25:42,684] Saving..
[2022-06-14 12:25:42,768] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 12:25:42,768] Epoch: 11
[2022-06-14 12:42:38,370] Train: Loss: 2.875 | Acc: 38.931 (498774/1281167) | Lr: 0.5992228603537487
[2022-06-14 12:43:20,622] Test: Loss: 3.054 | Acc: 34.886 (17443/50000)
[2022-06-14 12:43:20,623] Epoch: 12
[2022-06-14 12:59:58,356] Train: Loss: 2.843 | Acc: 39.456 (505498/1281167) | Lr: 0.5990752155332696
[2022-06-14 13:00:44,748] Test: Loss: 2.864 | Acc: 37.528 (18764/50000)
[2022-06-14 13:00:44,749] Epoch: 13
[2022-06-14 13:17:17,445] Train: Loss: 2.819 | Acc: 39.884 (510984/1281167) | Lr: 0.5989147595401398
[2022-06-14 13:18:03,922] Test: Loss: 2.978 | Acc: 35.702 (17851/50000)
[2022-06-14 13:18:03,923] Epoch: 14
[2022-06-14 13:34:57,074] Train: Loss: 2.800 | Acc: 40.169 (514634/1281167) | Lr: 0.5987414992477603
[2022-06-14 13:35:42,688] Test: Loss: 2.529 | Acc: 42.902 (21451/50000)
[2022-06-14 13:35:42,689] Saving..
[2022-06-14 13:35:42,755] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 13:35:42,756] Epoch: 15
[2022-06-14 13:52:38,535] Train: Loss: 2.780 | Acc: 40.471 (518496/1281167) | Lr: 0.5985554420780254
[2022-06-14 13:53:24,853] Test: Loss: 2.569 | Acc: 42.410 (21205/50000)
[2022-06-14 13:53:24,853] Epoch: 16
[2022-06-14 14:10:28,452] Train: Loss: 2.764 | Acc: 40.811 (522855/1281167) | Lr: 0.5983565960010048
[2022-06-14 14:11:13,978] Test: Loss: 2.714 | Acc: 41.212 (20606/50000)
[2022-06-14 14:11:13,979] Epoch: 17
[2022-06-14 14:29:11,354] Train: Loss: 2.751 | Acc: 40.962 (524793/1281167) | Lr: 0.5981449695346027
[2022-06-14 14:29:59,693] Test: Loss: 2.975 | Acc: 36.218 (18109/50000)
[2022-06-14 14:29:59,693] Epoch: 18
[2022-06-14 14:48:52,697] Train: Loss: 2.743 | Acc: 41.144 (527117/1281167) | Lr: 0.5979205717441928
[2022-06-14 14:49:40,700] Test: Loss: 2.748 | Acc: 39.880 (19940/50000)
[2022-06-14 14:49:40,700] Epoch: 19
[2022-06-14 15:07:11,277] Train: Loss: 2.727 | Acc: 41.421 (530667/1281167) | Lr: 0.5976834122422292
[2022-06-14 15:07:53,474] Test: Loss: 2.734 | Acc: 39.944 (19972/50000)
[2022-06-14 15:07:53,474] Epoch: 20
[2022-06-14 15:25:14,478] Train: Loss: 2.716 | Acc: 41.620 (533220/1281167) | Lr: 0.5974335011878359
[2022-06-14 15:26:01,829] Test: Loss: 3.029 | Acc: 35.920 (17960/50000)
[2022-06-14 15:26:01,829] Epoch: 21
[2022-06-14 15:44:07,301] Train: Loss: 2.706 | Acc: 41.798 (535507/1281167) | Lr: 0.5971708492863705
[2022-06-14 15:44:53,606] Test: Loss: 2.930 | Acc: 36.938 (18469/50000)
[2022-06-14 15:44:53,607] Epoch: 22
[2022-06-14 16:02:26,539] Train: Loss: 2.699 | Acc: 41.940 (537321/1281167) | Lr: 0.5968954677889666
[2022-06-14 16:03:16,636] Test: Loss: 2.737 | Acc: 40.064 (20032/50000)
[2022-06-14 16:03:16,637] Epoch: 23
[2022-06-14 16:21:35,806] Train: Loss: 2.691 | Acc: 42.117 (539585/1281167) | Lr: 0.5966073684920506
[2022-06-14 16:22:19,092] Test: Loss: 2.795 | Acc: 38.896 (19448/50000)
[2022-06-14 16:22:19,092] Epoch: 24
[2022-06-14 16:39:10,129] Train: Loss: 2.684 | Acc: 42.184 (540445/1281167) | Lr: 0.596306563736838
[2022-06-14 16:39:52,447] Test: Loss: 3.687 | Acc: 27.644 (13822/50000)
[2022-06-14 16:39:52,448] Epoch: 25
[2022-06-14 16:56:13,355] Train: Loss: 2.673 | Acc: 42.352 (542595/1281167) | Lr: 0.5959930664088029
[2022-06-14 16:56:57,942] Test: Loss: 2.606 | Acc: 41.844 (20922/50000)
[2022-06-14 16:56:57,942] Epoch: 26
[2022-06-14 17:15:28,517] Train: Loss: 2.666 | Acc: 42.531 (544889/1281167) | Lr: 0.5956668899371277
[2022-06-14 17:16:12,556] Test: Loss: 2.788 | Acc: 39.290 (19645/50000)
[2022-06-14 17:16:12,557] Epoch: 27
[2022-06-14 17:34:50,968] Train: Loss: 2.664 | Acc: 42.538 (544978/1281167) | Lr: 0.5953280482941267
[2022-06-14 17:35:34,820] Test: Loss: 2.711 | Acc: 40.498 (20249/50000)
[2022-06-14 17:35:34,821] Epoch: 28
[2022-06-14 17:53:28,448] Train: Loss: 2.657 | Acc: 42.647 (546380/1281167) | Lr: 0.5949765559946483
[2022-06-14 17:54:11,538] Test: Loss: 2.513 | Acc: 43.348 (21674/50000)
[2022-06-14 17:54:11,538] Saving..
[2022-06-14 17:54:11,610] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 17:54:11,611] Epoch: 29
[2022-06-14 18:12:52,543] Train: Loss: 2.655 | Acc: 42.743 (547615/1281167) | Lr: 0.5946124280954524
[2022-06-14 18:13:32,449] Test: Loss: 2.638 | Acc: 41.952 (20976/50000)
[2022-06-14 18:13:32,450] Epoch: 30
[2022-06-14 18:31:06,361] Train: Loss: 2.648 | Acc: 42.820 (548590/1281167) | Lr: 0.5942356801945667
[2022-06-14 18:31:49,454] Test: Loss: 2.436 | Acc: 45.110 (22555/50000)
[2022-06-14 18:31:49,454] Saving..
[2022-06-14 18:31:49,518] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 18:31:49,519] Epoch: 31
[2022-06-14 18:48:39,972] Train: Loss: 2.643 | Acc: 42.939 (550116/1281167) | Lr: 0.5938463284306172
[2022-06-14 18:49:22,310] Test: Loss: 2.465 | Acc: 44.638 (22319/50000)
[2022-06-14 18:49:22,310] Epoch: 32
[2022-06-14 19:06:13,064] Train: Loss: 2.636 | Acc: 43.018 (551132/1281167) | Lr: 0.5934443894821377
[2022-06-14 19:06:55,233] Test: Loss: 2.645 | Acc: 41.262 (20631/50000)
[2022-06-14 19:06:55,234] Epoch: 33
[2022-06-14 19:23:49,538] Train: Loss: 2.629 | Acc: 43.145 (552764/1281167) | Lr: 0.5930298805668548
[2022-06-14 19:24:33,092] Test: Loss: 2.509 | Acc: 43.428 (21714/50000)
[2022-06-14 19:24:33,093] Epoch: 34
[2022-06-14 19:43:16,843] Train: Loss: 2.629 | Acc: 43.154 (552878/1281167) | Lr: 0.592602819440951
[2022-06-14 19:44:00,837] Test: Loss: 2.403 | Acc: 45.478 (22739/50000)
[2022-06-14 19:44:00,837] Saving..
[2022-06-14 19:44:00,908] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 19:44:00,908] Epoch: 35
[2022-06-14 20:02:26,027] Train: Loss: 2.628 | Acc: 43.161 (552970/1281167) | Lr: 0.5921632243983034
[2022-06-14 20:03:10,600] Test: Loss: 2.328 | Acc: 46.868 (23434/50000)
[2022-06-14 20:03:10,600] Saving..
[2022-06-14 20:03:10,690] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-14 20:03:10,691] Epoch: 36
[2022-06-14 20:21:45,323] Train: Loss: 2.622 | Acc: 43.305 (554803/1281167) | Lr: 0.5917111142697007
[2022-06-14 20:22:32,203] Test: Loss: 2.731 | Acc: 39.536 (19768/50000)
[2022-06-14 20:22:32,203] Epoch: 37
[2022-06-14 20:40:40,505] Train: Loss: 2.617 | Acc: 43.366 (555585/1281167) | Lr: 0.591246508422036
[2022-06-14 20:41:23,148] Test: Loss: 2.683 | Acc: 40.746 (20373/50000)
[2022-06-14 20:41:23,148] Epoch: 38
[2022-06-14 20:58:35,559] Train: Loss: 2.611 | Acc: 43.468 (556895/1281167) | Lr: 0.5907694267574775
[2022-06-14 20:59:17,984] Test: Loss: 2.572 | Acc: 42.462 (21231/50000)
[2022-06-14 20:59:17,984] Epoch: 39
[2022-06-14 21:17:48,292] Train: Loss: 2.615 | Acc: 43.458 (556764/1281167) | Lr: 0.5902798897126158
[2022-06-14 21:18:32,459] Test: Loss: 2.685 | Acc: 40.706 (20353/50000)
[2022-06-14 21:18:32,459] Epoch: 40
[2022-06-14 21:36:46,533] Train: Loss: 2.613 | Acc: 43.469 (556906/1281167) | Lr: 0.5897779182575887
[2022-06-14 21:37:30,870] Test: Loss: 2.555 | Acc: 43.076 (21538/50000)
[2022-06-14 21:37:30,871] Epoch: 41
[2022-06-14 21:54:01,179] Train: Loss: 2.607 | Acc: 43.544 (557868/1281167) | Lr: 0.5892635338951826
[2022-06-14 21:54:44,038] Test: Loss: 2.677 | Acc: 41.642 (20821/50000)
[2022-06-14 21:54:44,038] Epoch: 42
[2022-06-14 22:11:59,166] Train: Loss: 2.605 | Acc: 43.605 (558653/1281167) | Lr: 0.5887367586599115
[2022-06-14 22:12:41,988] Test: Loss: 2.476 | Acc: 44.630 (22315/50000)
[2022-06-14 22:12:41,988] Epoch: 43
[2022-06-14 22:30:59,329] Train: Loss: 2.600 | Acc: 43.655 (559293/1281167) | Lr: 0.5881976151170734
[2022-06-14 22:31:45,693] Test: Loss: 2.863 | Acc: 38.222 (19111/50000)
[2022-06-14 22:31:45,694] Epoch: 44
[2022-06-14 22:48:36,192] Train: Loss: 2.599 | Acc: 43.704 (559917/1281167) | Lr: 0.5876461263617831
[2022-06-14 22:49:21,045] Test: Loss: 2.650 | Acc: 41.730 (20865/50000)
[2022-06-14 22:49:21,046] Epoch: 45
[2022-06-14 23:06:22,530] Train: Loss: 2.596 | Acc: 43.761 (560649/1281167) | Lr: 0.5870823160179836
[2022-06-14 23:07:07,609] Test: Loss: 2.418 | Acc: 45.470 (22735/50000)
[2022-06-14 23:07:07,610] Epoch: 46
[2022-06-14 23:24:24,829] Train: Loss: 2.594 | Acc: 43.806 (561232/1281167) | Lr: 0.5865062082374333
[2022-06-14 23:25:08,598] Test: Loss: 2.586 | Acc: 42.654 (21327/50000)
[2022-06-14 23:25:08,599] Epoch: 47
[2022-06-14 23:42:31,048] Train: Loss: 2.594 | Acc: 43.769 (560754/1281167) | Lr: 0.5859178276986722
[2022-06-14 23:43:13,481] Test: Loss: 2.470 | Acc: 44.692 (22346/50000)
[2022-06-14 23:43:13,482] Epoch: 48
[2022-06-14 23:59:54,278] Train: Loss: 2.590 | Acc: 43.843 (561699/1281167) | Lr: 0.5853171996059642
[2022-06-15 00:00:35,278] Test: Loss: 2.803 | Acc: 39.082 (19541/50000)
[2022-06-15 00:00:35,278] Epoch: 49
[2022-06-15 00:17:31,640] Train: Loss: 2.584 | Acc: 43.955 (563135/1281167) | Lr: 0.5847043496882178
[2022-06-15 00:18:14,185] Test: Loss: 2.426 | Acc: 45.596 (22798/50000)
[2022-06-15 00:18:14,185] Epoch: 50
[2022-06-15 00:35:18,097] Train: Loss: 2.585 | Acc: 43.973 (563373/1281167) | Lr: 0.5840793041978839
[2022-06-15 00:36:02,151] Test: Loss: 2.831 | Acc: 39.842 (19921/50000)
[2022-06-15 00:36:02,151] Epoch: 51
[2022-06-15 00:54:51,587] Train: Loss: 2.581 | Acc: 43.974 (563381/1281167) | Lr: 0.5834420899098308
[2022-06-15 00:55:34,704] Test: Loss: 2.471 | Acc: 45.228 (22614/50000)
[2022-06-15 00:55:34,705] Epoch: 52
[2022-06-15 01:13:39,720] Train: Loss: 2.585 | Acc: 43.951 (563090/1281167) | Lr: 0.5827927341201978
[2022-06-15 01:14:22,550] Test: Loss: 3.009 | Acc: 37.188 (18594/50000)
[2022-06-15 01:14:22,550] Epoch: 53
[2022-06-15 01:32:48,533] Train: Loss: 2.579 | Acc: 44.059 (564466/1281167) | Lr: 0.5821312646452258
[2022-06-15 01:33:30,518] Test: Loss: 2.668 | Acc: 41.504 (20752/50000)
[2022-06-15 01:33:30,518] Epoch: 54
[2022-06-15 01:50:29,410] Train: Loss: 2.574 | Acc: 44.124 (565301/1281167) | Lr: 0.5814577098200655
[2022-06-15 01:51:15,069] Test: Loss: 2.274 | Acc: 48.018 (24009/50000)
[2022-06-15 01:51:15,069] Saving..
[2022-06-15 01:51:15,158] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-15 01:51:15,158] Epoch: 55
[2022-06-15 02:09:48,393] Train: Loss: 2.575 | Acc: 44.133 (565415/1281167) | Lr: 0.5807720984975637
[2022-06-15 02:10:32,118] Test: Loss: 2.642 | Acc: 41.036 (20518/50000)
[2022-06-15 02:10:32,118] Epoch: 56
[2022-06-15 02:29:00,591] Train: Loss: 2.570 | Acc: 44.162 (565791/1281167) | Lr: 0.5800744600470279
[2022-06-15 02:29:43,785] Test: Loss: 2.541 | Acc: 43.564 (21782/50000)
[2022-06-15 02:29:43,786] Epoch: 57
[2022-06-15 02:47:46,806] Train: Loss: 2.571 | Acc: 44.124 (565298/1281167) | Lr: 0.5793648243529671
[2022-06-15 02:48:29,409] Test: Loss: 2.435 | Acc: 45.066 (22533/50000)
[2022-06-15 02:48:29,409] Epoch: 58
[2022-06-15 03:06:55,059] Train: Loss: 2.569 | Acc: 44.214 (566452/1281167) | Lr: 0.5786432218138128
[2022-06-15 03:07:37,084] Test: Loss: 2.625 | Acc: 41.752 (20876/50000)
[2022-06-15 03:07:37,084] Epoch: 59
[2022-06-15 03:26:12,289] Train: Loss: 2.568 | Acc: 44.272 (567201/1281167) | Lr: 0.5779096833406159
[2022-06-15 03:26:54,576] Test: Loss: 2.825 | Acc: 38.804 (19402/50000)
[2022-06-15 03:26:54,576] Epoch: 60
[2022-06-15 03:44:46,417] Train: Loss: 2.564 | Acc: 44.308 (567662/1281167) | Lr: 0.5771642403557232
[2022-06-15 03:45:32,362] Test: Loss: 2.399 | Acc: 45.354 (22677/50000)
[2022-06-15 03:45:32,363] Epoch: 61
[2022-06-15 04:03:29,490] Train: Loss: 2.561 | Acc: 44.400 (568844/1281167) | Lr: 0.5764069247914314
[2022-06-15 04:04:12,669] Test: Loss: 2.763 | Acc: 39.894 (19947/50000)
[2022-06-15 04:04:12,669] Epoch: 62
[2022-06-15 04:22:43,141] Train: Loss: 2.565 | Acc: 44.324 (567862/1281167) | Lr: 0.5756377690886185
[2022-06-15 04:23:23,353] Test: Loss: 2.551 | Acc: 43.876 (21938/50000)
[2022-06-15 04:23:23,353] Epoch: 63
[2022-06-15 04:40:26,587] Train: Loss: 2.562 | Acc: 44.338 (568043/1281167) | Lr: 0.574856806195355
[2022-06-15 04:41:09,546] Test: Loss: 2.697 | Acc: 41.064 (20532/50000)
[2022-06-15 04:41:09,547] Epoch: 64
[2022-06-15 04:58:44,500] Train: Loss: 2.559 | Acc: 44.447 (569446/1281167) | Lr: 0.5740640695654917
[2022-06-15 04:59:32,469] Test: Loss: 2.683 | Acc: 40.900 (20450/50000)
[2022-06-15 04:59:32,470] Epoch: 65
[2022-06-15 05:18:01,264] Train: Loss: 2.558 | Acc: 44.443 (569386/1281167) | Lr: 0.5732595931572279
[2022-06-15 05:18:42,435] Test: Loss: 2.336 | Acc: 47.222 (23611/50000)
[2022-06-15 05:18:42,435] Epoch: 66
[2022-06-15 05:37:15,718] Train: Loss: 2.558 | Acc: 44.424 (569150/1281167) | Lr: 0.572443411431655
[2022-06-15 05:37:59,871] Test: Loss: 2.697 | Acc: 42.416 (21208/50000)
[2022-06-15 05:37:59,871] Epoch: 67
[2022-06-15 05:56:29,386] Train: Loss: 2.553 | Acc: 44.511 (570255/1281167) | Lr: 0.5716155593512818
[2022-06-15 05:57:13,008] Test: Loss: 2.249 | Acc: 48.318 (24159/50000)
[2022-06-15 05:57:13,008] Saving..
[2022-06-15 05:57:13,112] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-15 05:57:13,112] Epoch: 68
[2022-06-15 06:14:49,511] Train: Loss: 2.555 | Acc: 44.466 (569689/1281167) | Lr: 0.5707760723785362
[2022-06-15 06:15:30,801] Test: Loss: 2.483 | Acc: 44.794 (22397/50000)
[2022-06-15 06:15:30,801] Epoch: 69
[2022-06-15 06:34:02,148] Train: Loss: 2.551 | Acc: 44.524 (570432/1281167) | Lr: 0.5699249864742459
[2022-06-15 06:34:44,117] Test: Loss: 2.430 | Acc: 45.482 (22741/50000)
[2022-06-15 06:34:44,118] Epoch: 70
[2022-06-15 06:53:19,049] Train: Loss: 2.553 | Acc: 44.547 (570718/1281167) | Lr: 0.5690623380960986
[2022-06-15 06:54:06,005] Test: Loss: 2.338 | Acc: 46.976 (23488/50000)
[2022-06-15 06:54:06,006] Epoch: 71
[2022-06-15 07:12:06,128] Train: Loss: 2.546 | Acc: 44.628 (571756/1281167) | Lr: 0.5681881641970796
[2022-06-15 07:12:49,981] Test: Loss: 2.684 | Acc: 41.316 (20658/50000)
[2022-06-15 07:12:49,981] Epoch: 72
[2022-06-15 07:30:49,160] Train: Loss: 2.548 | Acc: 44.578 (571123/1281167) | Lr: 0.5673025022238892
[2022-06-15 07:31:32,928] Test: Loss: 2.777 | Acc: 40.232 (20116/50000)
[2022-06-15 07:31:32,928] Epoch: 73
[2022-06-15 07:49:34,142] Train: Loss: 2.545 | Acc: 44.700 (572687/1281167) | Lr: 0.5664053901153387
[2022-06-15 07:50:17,932] Test: Loss: 2.484 | Acc: 44.544 (22272/50000)
[2022-06-15 07:50:17,932] Epoch: 74
[2022-06-15 08:08:34,812] Train: Loss: 2.543 | Acc: 44.652 (572063/1281167) | Lr: 0.565496866300725
[2022-06-15 08:09:20,720] Test: Loss: 2.564 | Acc: 43.028 (21514/50000)
[2022-06-15 08:09:20,720] Epoch: 75
[2022-06-15 08:27:52,265] Train: Loss: 2.543 | Acc: 44.673 (572340/1281167) | Lr: 0.5645769696981845
[2022-06-15 08:28:40,761] Test: Loss: 2.352 | Acc: 46.578 (23289/50000)
[2022-06-15 08:28:40,762] Epoch: 76
[2022-06-15 08:47:10,729] Train: Loss: 2.542 | Acc: 44.772 (573609/1281167) | Lr: 0.563645739713026
[2022-06-15 08:47:53,640] Test: Loss: 2.351 | Acc: 46.370 (23185/50000)
[2022-06-15 08:47:53,640] Epoch: 77
[2022-06-15 09:05:21,392] Train: Loss: 2.537 | Acc: 44.871 (574875/1281167) | Lr: 0.5627032162360428
[2022-06-15 09:06:04,099] Test: Loss: 2.595 | Acc: 42.848 (21424/50000)
[2022-06-15 09:06:04,099] Epoch: 78
[2022-06-15 09:23:31,969] Train: Loss: 2.534 | Acc: 44.875 (574922/1281167) | Lr: 0.5617494396418036
[2022-06-15 09:24:14,740] Test: Loss: 2.592 | Acc: 42.466 (21233/50000)
[2022-06-15 09:24:14,741] Epoch: 79
[2022-06-15 09:42:05,636] Train: Loss: 2.536 | Acc: 44.824 (574274/1281167) | Lr: 0.5607844507869232
[2022-06-15 09:42:48,547] Test: Loss: 2.748 | Acc: 39.832 (19916/50000)
[2022-06-15 09:42:48,548] Epoch: 80
[2022-06-15 10:01:22,074] Train: Loss: 2.531 | Acc: 44.906 (575317/1281167) | Lr: 0.5598082910083125
[2022-06-15 10:02:06,127] Test: Loss: 2.462 | Acc: 44.214 (22107/50000)
[2022-06-15 10:02:06,127] Epoch: 81
[2022-06-15 10:20:32,638] Train: Loss: 2.535 | Acc: 44.801 (573980/1281167) | Lr: 0.5588210021214074
[2022-06-15 10:21:13,951] Test: Loss: 2.379 | Acc: 45.898 (22949/50000)
[2022-06-15 10:21:13,952] Epoch: 82
[2022-06-15 10:39:21,423] Train: Loss: 2.527 | Acc: 44.969 (576133/1281167) | Lr: 0.5578226264183781
[2022-06-15 10:40:04,857] Test: Loss: 2.577 | Acc: 43.110 (21555/50000)
[2022-06-15 10:40:04,858] Epoch: 83
[2022-06-15 10:58:09,125] Train: Loss: 2.530 | Acc: 44.925 (575559/1281167) | Lr: 0.5568132066663166
[2022-06-15 10:58:51,893] Test: Loss: 2.712 | Acc: 42.148 (21074/50000)
[2022-06-15 10:58:51,893] Epoch: 84
[2022-06-15 11:17:23,496] Train: Loss: 2.526 | Acc: 44.980 (576266/1281167) | Lr: 0.5557927861054056
[2022-06-15 11:18:12,165] Test: Loss: 2.220 | Acc: 48.750 (24375/50000)
[2022-06-15 11:18:12,165] Saving..
[2022-06-15 11:18:12,247] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-15 11:18:12,247] Epoch: 85
[2022-06-15 11:36:41,828] Train: Loss: 2.529 | Acc: 44.966 (576087/1281167) | Lr: 0.5547614084470658
[2022-06-15 11:37:24,807] Test: Loss: 2.525 | Acc: 43.942 (21971/50000)
[2022-06-15 11:37:24,808] Epoch: 86
[2022-06-15 11:55:51,563] Train: Loss: 2.523 | Acc: 45.078 (577529/1281167) | Lr: 0.5537191178720833
[2022-06-15 11:56:35,970] Test: Loss: 2.346 | Acc: 46.708 (23354/50000)
[2022-06-15 11:56:35,971] Epoch: 87
[2022-06-15 12:14:05,175] Train: Loss: 2.520 | Acc: 45.127 (578158/1281167) | Lr: 0.5526659590287172
[2022-06-15 12:14:47,765] Test: Loss: 2.745 | Acc: 41.100 (20550/50000)
[2022-06-15 12:14:47,766] Epoch: 88
[2022-06-15 12:32:09,915] Train: Loss: 2.520 | Acc: 45.110 (577937/1281167) | Lr: 0.5516019770307873
[2022-06-15 12:32:56,327] Test: Loss: 2.599 | Acc: 43.538 (21769/50000)
[2022-06-15 12:32:56,327] Epoch: 89
[2022-06-15 12:49:36,436] Train: Loss: 2.519 | Acc: 45.083 (577588/1281167) | Lr: 0.5505272174557411
[2022-06-15 12:50:20,155] Test: Loss: 2.583 | Acc: 43.680 (21840/50000)
[2022-06-15 12:50:20,155] Epoch: 90
[2022-06-15 13:08:21,309] Train: Loss: 2.519 | Acc: 45.143 (578355/1281167) | Lr: 0.5494417263427018
[2022-06-15 13:09:02,481] Test: Loss: 2.310 | Acc: 47.290 (23645/50000)
[2022-06-15 13:09:02,482] Epoch: 91
[2022-06-15 13:27:31,680] Train: Loss: 2.516 | Acc: 45.242 (579624/1281167) | Lr: 0.5483455501904958
[2022-06-15 13:28:15,023] Test: Loss: 3.018 | Acc: 36.360 (18180/50000)
[2022-06-15 13:28:15,024] Epoch: 92
[2022-06-15 13:46:45,038] Train: Loss: 2.518 | Acc: 45.161 (578590/1281167) | Lr: 0.5472387359556613
[2022-06-15 13:47:33,898] Test: Loss: 2.399 | Acc: 45.544 (22772/50000)
[2022-06-15 13:47:33,899] Epoch: 93
[2022-06-15 14:04:08,039] Train: Loss: 2.514 | Acc: 45.201 (579099/1281167) | Lr: 0.5461213310504361
[2022-06-15 14:04:49,799] Test: Loss: 2.726 | Acc: 41.672 (20836/50000)
[2022-06-15 14:04:49,799] Epoch: 94
[2022-06-15 14:22:51,881] Train: Loss: 2.512 | Acc: 45.279 (580100/1281167) | Lr: 0.5449933833407276
[2022-06-15 14:23:35,758] Test: Loss: 2.526 | Acc: 43.978 (21989/50000)
[2022-06-15 14:23:35,758] Epoch: 95
[2022-06-15 14:42:03,854] Train: Loss: 2.510 | Acc: 45.284 (580160/1281167) | Lr: 0.5438549411440613
[2022-06-15 14:42:51,386] Test: Loss: 2.456 | Acc: 44.926 (22463/50000)
[2022-06-15 14:42:51,386] Epoch: 96
[2022-06-15 15:01:23,121] Train: Loss: 2.513 | Acc: 45.213 (579260/1281167) | Lr: 0.542706053227512
[2022-06-15 15:02:05,572] Test: Loss: 2.459 | Acc: 44.890 (22445/50000)
[2022-06-15 15:02:05,572] Epoch: 97
[2022-06-15 15:20:35,094] Train: Loss: 2.514 | Acc: 45.275 (580054/1281167) | Lr: 0.5415467688056143
[2022-06-15 15:21:18,043] Test: Loss: 2.271 | Acc: 48.474 (24237/50000)
[2022-06-15 15:21:18,043] Epoch: 98
[2022-06-15 15:37:38,776] Train: Loss: 2.503 | Acc: 45.410 (581772/1281167) | Lr: 0.5403771375382543
[2022-06-15 15:38:21,468] Test: Loss: 2.442 | Acc: 45.154 (22577/50000)
[2022-06-15 15:38:21,468] Epoch: 99
[2022-06-15 15:56:52,651] Train: Loss: 2.506 | Acc: 45.417 (581862/1281167) | Lr: 0.5391972095285429
[2022-06-15 15:57:35,026] Test: Loss: 2.478 | Acc: 44.794 (22397/50000)
[2022-06-15 15:57:35,026] Epoch: 100
[2022-06-15 16:15:24,230] Train: Loss: 2.503 | Acc: 45.439 (582145/1281167) | Lr: 0.5380070353206687
[2022-06-15 16:16:07,206] Test: Loss: 2.331 | Acc: 46.902 (23451/50000)
[2022-06-15 16:16:07,206] Epoch: 101
[2022-06-15 16:34:35,015] Train: Loss: 2.506 | Acc: 45.328 (580727/1281167) | Lr: 0.5368066658977336
[2022-06-15 16:35:17,674] Test: Loss: 2.458 | Acc: 44.690 (22345/50000)
[2022-06-15 16:35:17,674] Epoch: 102
[2022-06-15 16:52:35,026] Train: Loss: 2.506 | Acc: 45.358 (581118/1281167) | Lr: 0.5355961526795687
[2022-06-15 16:53:17,790] Test: Loss: 2.304 | Acc: 47.630 (23815/50000)
[2022-06-15 16:53:17,790] Epoch: 103
[2022-06-15 17:11:04,872] Train: Loss: 2.503 | Acc: 45.474 (582597/1281167) | Lr: 0.5343755475205313
[2022-06-15 17:11:45,434] Test: Loss: 2.360 | Acc: 46.816 (23408/50000)
[2022-06-15 17:11:45,435] Epoch: 104
[2022-06-15 17:29:09,182] Train: Loss: 2.503 | Acc: 45.394 (581572/1281167) | Lr: 0.5331449027072837
[2022-06-15 17:29:52,214] Test: Loss: 2.239 | Acc: 48.764 (24382/50000)
[2022-06-15 17:29:52,214] Saving..
[2022-06-15 17:29:52,309] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-15 17:29:52,309] Epoch: 105
[2022-06-15 17:47:36,025] Train: Loss: 2.495 | Acc: 45.557 (583656/1281167) | Lr: 0.5319042709565539
[2022-06-15 17:48:19,336] Test: Loss: 2.209 | Acc: 49.116 (24558/50000)
[2022-06-15 17:48:19,336] Saving..
[2022-06-15 17:48:19,403] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-15 17:48:19,403] Epoch: 106
[2022-06-15 18:06:44,467] Train: Loss: 2.497 | Acc: 45.495 (582865/1281167) | Lr: 0.5306537054128772
[2022-06-15 18:07:29,280] Test: Loss: 2.314 | Acc: 47.160 (23580/50000)
[2022-06-15 18:07:29,280] Epoch: 107
[2022-06-15 18:25:47,892] Train: Loss: 2.492 | Acc: 45.558 (583674/1281167) | Lr: 0.529393259646319
[2022-06-15 18:26:29,722] Test: Loss: 2.454 | Acc: 45.028 (22514/50000)
[2022-06-15 18:26:29,722] Epoch: 108
[2022-06-15 18:44:51,580] Train: Loss: 2.495 | Acc: 45.611 (584347/1281167) | Lr: 0.528122987650181
[2022-06-15 18:45:35,464] Test: Loss: 2.536 | Acc: 43.524 (21762/50000)
[2022-06-15 18:45:35,464] Epoch: 109
[2022-06-15 19:03:53,708] Train: Loss: 2.492 | Acc: 45.617 (584429/1281167) | Lr: 0.5268429438386876
[2022-06-15 19:04:37,124] Test: Loss: 2.567 | Acc: 42.712 (21356/50000)
[2022-06-15 19:04:37,125] Epoch: 110
[2022-06-15 19:22:54,552] Train: Loss: 2.494 | Acc: 45.643 (584759/1281167) | Lr: 0.5255531830446555
[2022-06-15 19:23:38,396] Test: Loss: 2.503 | Acc: 45.294 (22647/50000)
[2022-06-15 19:23:38,396] Epoch: 111
[2022-06-15 19:42:00,302] Train: Loss: 2.488 | Acc: 45.680 (585232/1281167) | Lr: 0.5242537605171443
[2022-06-15 19:42:41,613] Test: Loss: 2.453 | Acc: 45.026 (22513/50000)
[2022-06-15 19:42:41,613] Epoch: 112
[2022-06-15 20:00:28,481] Train: Loss: 2.492 | Acc: 45.635 (584655/1281167) | Lr: 0.5229447319190905
[2022-06-15 20:01:13,568] Test: Loss: 2.433 | Acc: 45.734 (22867/50000)
[2022-06-15 20:01:13,568] Epoch: 113
[2022-06-15 20:19:34,963] Train: Loss: 2.487 | Acc: 45.712 (585649/1281167) | Lr: 0.5216261533249222
[2022-06-15 20:20:18,494] Test: Loss: 2.315 | Acc: 47.530 (23765/50000)
[2022-06-15 20:20:18,494] Epoch: 114
[2022-06-15 20:38:04,464] Train: Loss: 2.485 | Acc: 45.771 (586404/1281167) | Lr: 0.5202980812181581
[2022-06-15 20:38:43,648] Test: Loss: 2.354 | Acc: 46.270 (23135/50000)
[2022-06-15 20:38:43,648] Epoch: 115
[2022-06-15 20:57:04,666] Train: Loss: 2.486 | Acc: 45.698 (585465/1281167) | Lr: 0.5189605724889867
[2022-06-15 20:57:47,380] Test: Loss: 2.378 | Acc: 46.392 (23196/50000)
[2022-06-15 20:57:47,380] Epoch: 116
[2022-06-15 21:16:04,491] Train: Loss: 2.482 | Acc: 45.764 (586312/1281167) | Lr: 0.5176136844318308
[2022-06-15 21:16:48,143] Test: Loss: 2.353 | Acc: 46.646 (23323/50000)
[2022-06-15 21:16:48,144] Epoch: 117
[2022-06-15 21:34:31,833] Train: Loss: 2.481 | Acc: 45.813 (586944/1281167) | Lr: 0.5162574747428917
[2022-06-15 21:35:15,446] Test: Loss: 2.216 | Acc: 49.054 (24527/50000)
[2022-06-15 21:35:15,446] Epoch: 118
[2022-06-15 21:53:07,123] Train: Loss: 2.481 | Acc: 45.817 (586989/1281167) | Lr: 0.5148920015176788
[2022-06-15 21:53:51,027] Test: Loss: 2.159 | Acc: 50.344 (25172/50000)
[2022-06-15 21:53:51,027] Saving..
[2022-06-15 21:53:51,101] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-15 21:53:51,101] Epoch: 119
[2022-06-15 22:11:31,835] Train: Loss: 2.477 | Acc: 45.900 (588054/1281167) | Lr: 0.5135173232485203
[2022-06-15 22:12:14,174] Test: Loss: 2.938 | Acc: 37.932 (18966/50000)
[2022-06-15 22:12:14,175] Epoch: 120
[2022-06-15 22:30:35,930] Train: Loss: 2.480 | Acc: 45.846 (587358/1281167) | Lr: 0.5121334988220579
[2022-06-15 22:31:19,931] Test: Loss: 2.343 | Acc: 47.246 (23623/50000)
[2022-06-15 22:31:19,932] Epoch: 121
[2022-06-15 22:49:37,308] Train: Loss: 2.477 | Acc: 45.888 (587896/1281167) | Lr: 0.5107405875167246
[2022-06-15 22:50:19,321] Test: Loss: 2.419 | Acc: 45.894 (22947/50000)
[2022-06-15 22:50:19,323] Epoch: 122
[2022-06-15 23:08:33,084] Train: Loss: 2.474 | Acc: 45.949 (588682/1281167) | Lr: 0.5093386490002044
[2022-06-15 23:09:15,946] Test: Loss: 2.225 | Acc: 49.308 (24654/50000)
[2022-06-15 23:09:15,947] Epoch: 123
[2022-06-15 23:27:37,648] Train: Loss: 2.475 | Acc: 45.955 (588759/1281167) | Lr: 0.5079277433268776
[2022-06-15 23:28:20,435] Test: Loss: 2.238 | Acc: 48.764 (24382/50000)
[2022-06-15 23:28:20,435] Epoch: 124
[2022-06-15 23:46:09,843] Train: Loss: 2.474 | Acc: 45.963 (588866/1281167) | Lr: 0.5065079309352473
[2022-06-15 23:46:52,016] Test: Loss: 2.350 | Acc: 46.834 (23417/50000)
[2022-06-15 23:46:52,016] Epoch: 125
[2022-06-16 00:05:10,848] Train: Loss: 2.473 | Acc: 46.036 (589794/1281167) | Lr: 0.5050792726453508
[2022-06-16 00:05:54,787] Test: Loss: 2.344 | Acc: 46.976 (23488/50000)
[2022-06-16 00:05:54,787] Epoch: 126
[2022-06-16 00:24:16,665] Train: Loss: 2.468 | Acc: 46.091 (590507/1281167) | Lr: 0.5036418296561543
[2022-06-16 00:25:00,217] Test: Loss: 2.288 | Acc: 48.212 (24106/50000)
[2022-06-16 00:25:00,217] Epoch: 127
[2022-06-16 00:42:48,157] Train: Loss: 2.468 | Acc: 46.031 (589740/1281167) | Lr: 0.5021956635429314
[2022-06-16 00:43:30,958] Test: Loss: 2.458 | Acc: 44.940 (22470/50000)
[2022-06-16 00:43:30,959] Epoch: 128
[2022-06-16 01:01:44,413] Train: Loss: 2.466 | Acc: 46.090 (590491/1281167) | Lr: 0.5007408362546251
[2022-06-16 01:02:25,516] Test: Loss: 2.103 | Acc: 51.298 (25649/50000)
[2022-06-16 01:02:25,516] Saving..
[2022-06-16 01:02:25,601] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-16 01:02:25,602] Epoch: 129
[2022-06-16 01:20:17,839] Train: Loss: 2.466 | Acc: 46.096 (590570/1281167) | Lr: 0.4992774101111944
[2022-06-16 01:21:01,214] Test: Loss: 2.527 | Acc: 44.274 (22137/50000)
[2022-06-16 01:21:01,215] Epoch: 130
[2022-06-16 01:39:21,908] Train: Loss: 2.461 | Acc: 46.174 (591560/1281167) | Lr: 0.4978054478009446
[2022-06-16 01:40:04,669] Test: Loss: 2.106 | Acc: 50.990 (25495/50000)
[2022-06-16 01:40:04,670] Epoch: 131
[2022-06-16 01:58:21,881] Train: Loss: 2.465 | Acc: 46.158 (591359/1281167) | Lr: 0.49632501237784193
[2022-06-16 01:59:05,478] Test: Loss: 2.321 | Acc: 47.116 (23558/50000)
[2022-06-16 01:59:05,479] Epoch: 132
[2022-06-16 02:17:26,166] Train: Loss: 2.462 | Acc: 46.133 (591036/1281167) | Lr: 0.49483616725881285
[2022-06-16 02:18:08,984] Test: Loss: 2.524 | Acc: 44.026 (22013/50000)
[2022-06-16 02:18:08,984] Epoch: 133
[2022-06-16 02:36:24,047] Train: Loss: 2.456 | Acc: 46.287 (593014/1281167) | Lr: 0.49333897622102685
[2022-06-16 02:37:08,928] Test: Loss: 2.754 | Acc: 41.066 (20533/50000)
[2022-06-16 02:37:08,929] Epoch: 134
[2022-06-16 02:55:29,878] Train: Loss: 2.454 | Acc: 46.300 (593185/1281167) | Lr: 0.49183350339916493
[2022-06-16 02:56:12,166] Test: Loss: 2.430 | Acc: 45.202 (22601/50000)
[2022-06-16 02:56:12,167] Epoch: 135
[2022-06-16 03:14:31,341] Train: Loss: 2.454 | Acc: 46.320 (593440/1281167) | Lr: 0.4903198132826722
[2022-06-16 03:15:14,790] Test: Loss: 2.557 | Acc: 43.300 (21650/50000)
[2022-06-16 03:15:14,790] Epoch: 136
[2022-06-16 03:33:05,764] Train: Loss: 2.454 | Acc: 46.278 (592896/1281167) | Lr: 0.4887979707129954
[2022-06-16 03:33:48,433] Test: Loss: 2.150 | Acc: 50.200 (25100/50000)
[2022-06-16 03:33:48,434] Epoch: 137
[2022-06-16 03:52:27,149] Train: Loss: 2.455 | Acc: 46.329 (593551/1281167) | Lr: 0.487268040880805
[2022-06-16 03:53:08,815] Test: Loss: 2.252 | Acc: 48.190 (24095/50000)
[2022-06-16 03:53:08,815] Epoch: 138
[2022-06-16 04:11:54,603] Train: Loss: 2.450 | Acc: 46.347 (593785/1281167) | Lr: 0.485730089323203
[2022-06-16 04:12:46,307] Test: Loss: 2.339 | Acc: 47.100 (23550/50000)
[2022-06-16 04:12:46,307] Epoch: 139
[2022-06-16 04:31:32,945] Train: Loss: 2.446 | Acc: 46.467 (595319/1281167) | Lr: 0.48418418192091556
[2022-06-16 04:32:16,718] Test: Loss: 2.285 | Acc: 47.664 (23832/50000)
[2022-06-16 04:32:16,719] Epoch: 140
[2022-06-16 04:50:30,379] Train: Loss: 2.451 | Acc: 46.364 (593998/1281167) | Lr: 0.48263038489547055
[2022-06-16 04:51:14,528] Test: Loss: 2.272 | Acc: 48.216 (24108/50000)
[2022-06-16 04:51:14,529] Epoch: 141
[2022-06-16 05:09:26,092] Train: Loss: 2.448 | Acc: 46.414 (594639/1281167) | Lr: 0.48106876480636107
[2022-06-16 05:10:13,179] Test: Loss: 2.250 | Acc: 48.370 (24185/50000)
[2022-06-16 05:10:13,179] Epoch: 142
[2022-06-16 05:28:47,139] Train: Loss: 2.446 | Acc: 46.454 (595154/1281167) | Lr: 0.47949938854819424
[2022-06-16 05:29:29,908] Test: Loss: 2.212 | Acc: 49.324 (24662/50000)
[2022-06-16 05:29:29,909] Epoch: 143
[2022-06-16 05:48:10,809] Train: Loss: 2.440 | Acc: 46.545 (596322/1281167) | Lr: 0.47792232334782575
[2022-06-16 05:48:53,982] Test: Loss: 2.253 | Acc: 48.584 (24292/50000)
[2022-06-16 05:48:53,982] Epoch: 144
[2022-06-16 06:07:35,046] Train: Loss: 2.444 | Acc: 46.525 (596067/1281167) | Lr: 0.47633763676147983
[2022-06-16 06:08:18,629] Test: Loss: 2.315 | Acc: 47.056 (23528/50000)
[2022-06-16 06:08:18,630] Epoch: 145
[2022-06-16 06:26:52,520] Train: Loss: 2.441 | Acc: 46.568 (596609/1281167) | Lr: 0.47474539667185567
[2022-06-16 06:27:35,172] Test: Loss: 2.714 | Acc: 40.914 (20457/50000)
[2022-06-16 06:27:35,172] Epoch: 146
[2022-06-16 06:45:45,937] Train: Loss: 2.439 | Acc: 46.519 (595980/1281167) | Lr: 0.4731456712852192
[2022-06-16 06:46:29,012] Test: Loss: 2.359 | Acc: 46.588 (23294/50000)
[2022-06-16 06:46:29,012] Epoch: 147
[2022-06-16 07:05:14,820] Train: Loss: 2.440 | Acc: 46.533 (596171/1281167) | Lr: 0.47153852912848176
[2022-06-16 07:05:58,468] Test: Loss: 2.202 | Acc: 49.628 (24814/50000)
[2022-06-16 07:05:58,468] Epoch: 148
[2022-06-16 07:24:42,766] Train: Loss: 2.434 | Acc: 46.720 (598564/1281167) | Lr: 0.4699240390462645
[2022-06-16 07:25:26,283] Test: Loss: 2.224 | Acc: 48.890 (24445/50000)
[2022-06-16 07:25:26,284] Epoch: 149
[2022-06-16 07:44:04,755] Train: Loss: 2.435 | Acc: 46.691 (598192/1281167) | Lr: 0.4683022701979489
[2022-06-16 07:44:46,824] Test: Loss: 2.154 | Acc: 50.332 (25166/50000)
[2022-06-16 07:44:46,826] Epoch: 150
[2022-06-16 08:02:54,570] Train: Loss: 2.437 | Acc: 46.641 (597551/1281167) | Lr: 0.4666732920547148
[2022-06-16 08:03:38,988] Test: Loss: 2.723 | Acc: 41.484 (20742/50000)
[2022-06-16 08:03:38,988] Epoch: 151
[2022-06-16 08:22:21,302] Train: Loss: 2.429 | Acc: 46.775 (599267/1281167) | Lr: 0.46503717439656433
[2022-06-16 08:23:05,335] Test: Loss: 2.355 | Acc: 46.966 (23483/50000)
[2022-06-16 08:23:05,336] Epoch: 152
[2022-06-16 08:41:36,328] Train: Loss: 2.434 | Acc: 46.743 (598859/1281167) | Lr: 0.46339398730933234
[2022-06-16 08:42:18,428] Test: Loss: 2.091 | Acc: 51.002 (25501/50000)
[2022-06-16 08:42:18,428] Epoch: 153
[2022-06-16 09:00:38,399] Train: Loss: 2.431 | Acc: 46.737 (598784/1281167) | Lr: 0.46174380118168473
[2022-06-16 09:01:22,176] Test: Loss: 2.318 | Acc: 46.986 (23493/50000)
[2022-06-16 09:01:22,177] Epoch: 154
[2022-06-16 09:18:26,059] Train: Loss: 2.424 | Acc: 46.852 (600258/1281167) | Lr: 0.4600866867021032
[2022-06-16 09:19:10,402] Test: Loss: 2.269 | Acc: 48.732 (24366/50000)
[2022-06-16 09:19:10,403] Epoch: 155
[2022-06-16 09:36:30,699] Train: Loss: 2.424 | Acc: 46.862 (600381/1281167) | Lr: 0.45842271485585645
[2022-06-16 09:37:14,866] Test: Loss: 2.356 | Acc: 46.448 (23224/50000)
[2022-06-16 09:37:14,867] Epoch: 156
[2022-06-16 09:55:41,757] Train: Loss: 2.421 | Acc: 46.943 (601413/1281167) | Lr: 0.45675195692196036
[2022-06-16 09:56:25,072] Test: Loss: 2.426 | Acc: 45.546 (22773/50000)
[2022-06-16 09:56:25,072] Epoch: 157
[2022-06-16 10:14:40,435] Train: Loss: 2.424 | Acc: 46.874 (600529/1281167) | Lr: 0.4550744844701241
[2022-06-16 10:15:26,806] Test: Loss: 2.180 | Acc: 50.096 (25048/50000)
[2022-06-16 10:15:26,806] Epoch: 158
[2022-06-16 10:33:37,385] Train: Loss: 2.421 | Acc: 46.912 (601027/1281167) | Lr: 0.4533903693576845
[2022-06-16 10:34:20,429] Test: Loss: 2.102 | Acc: 51.440 (25720/50000)
[2022-06-16 10:34:20,429] Saving..
[2022-06-16 10:34:20,508] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-16 10:34:20,508] Epoch: 159
[2022-06-16 10:52:31,062] Train: Loss: 2.420 | Acc: 46.888 (600719/1281167) | Lr: 0.4516996837265278
[2022-06-16 10:53:14,317] Test: Loss: 2.109 | Acc: 51.148 (25574/50000)
[2022-06-16 10:53:14,317] Epoch: 160
[2022-06-16 11:11:26,387] Train: Loss: 2.419 | Acc: 46.956 (601588/1281167) | Lr: 0.4500024999999993
[2022-06-16 11:12:09,100] Test: Loss: 2.215 | Acc: 49.150 (24575/50000)
[2022-06-16 11:12:09,101] Epoch: 161
[2022-06-16 11:29:51,708] Train: Loss: 2.413 | Acc: 47.037 (602628/1281167) | Lr: 0.44829889087980124
[2022-06-16 11:30:37,987] Test: Loss: 2.361 | Acc: 46.872 (23436/50000)
[2022-06-16 11:30:37,988] Epoch: 162
[2022-06-16 11:49:34,355] Train: Loss: 2.413 | Acc: 47.113 (603593/1281167) | Lr: 0.4465889293428783
[2022-06-16 11:50:20,160] Test: Loss: 2.052 | Acc: 52.164 (26082/50000)
[2022-06-16 11:50:20,161] Saving..
[2022-06-16 11:50:20,362] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-16 11:50:20,362] Epoch: 163
[2022-06-16 12:08:53,015] Train: Loss: 2.413 | Acc: 47.096 (603384/1281167) | Lr: 0.44487268863829144
[2022-06-16 12:09:45,408] Test: Loss: 2.489 | Acc: 44.876 (22438/50000)
[2022-06-16 12:09:45,408] Epoch: 164
[2022-06-16 12:28:02,488] Train: Loss: 2.407 | Acc: 47.197 (604667/1281167) | Lr: 0.44315024228408056
[2022-06-16 12:28:47,335] Test: Loss: 2.092 | Acc: 51.604 (25802/50000)
[2022-06-16 12:28:47,337] Epoch: 165
[2022-06-16 12:46:51,637] Train: Loss: 2.410 | Acc: 47.211 (604855/1281167) | Lr: 0.44142166406411454
[2022-06-16 12:47:37,266] Test: Loss: 2.011 | Acc: 52.588 (26294/50000)
[2022-06-16 12:47:37,266] Saving..
[2022-06-16 12:47:37,332] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-16 12:47:37,332] Epoch: 166
[2022-06-16 13:05:50,860] Train: Loss: 2.409 | Acc: 47.190 (604588/1281167) | Lr: 0.4396870280249311
[2022-06-16 13:06:40,011] Test: Loss: 2.108 | Acc: 51.478 (25739/50000)
[2022-06-16 13:06:40,011] Epoch: 167
[2022-06-16 13:25:17,334] Train: Loss: 2.404 | Acc: 47.247 (605308/1281167) | Lr: 0.437946408472565
[2022-06-16 13:25:59,572] Test: Loss: 2.086 | Acc: 51.606 (25803/50000)
[2022-06-16 13:25:59,572] Epoch: 168
[2022-06-16 13:44:42,663] Train: Loss: 2.403 | Acc: 47.217 (604923/1281167) | Lr: 0.43619987996936466
[2022-06-16 13:45:30,171] Test: Loss: 2.289 | Acc: 47.862 (23931/50000)
[2022-06-16 13:45:30,171] Epoch: 169
[2022-06-16 14:04:46,308] Train: Loss: 2.401 | Acc: 47.298 (605971/1281167) | Lr: 0.4344475173307981
[2022-06-16 14:05:30,951] Test: Loss: 2.132 | Acc: 50.680 (25340/50000)
[2022-06-16 14:05:30,951] Epoch: 170
[2022-06-16 14:23:38,293] Train: Loss: 2.395 | Acc: 47.320 (606244/1281167) | Lr: 0.4326893956222486
[2022-06-16 14:24:20,734] Test: Loss: 2.127 | Acc: 50.776 (25388/50000)
[2022-06-16 14:24:20,734] Epoch: 171
[2022-06-16 14:41:30,718] Train: Loss: 2.400 | Acc: 47.347 (606598/1281167) | Lr: 0.4309255901557986
[2022-06-16 14:42:15,864] Test: Loss: 2.321 | Acc: 47.722 (23861/50000)
[2022-06-16 14:42:15,864] Epoch: 172
[2022-06-16 15:00:14,433] Train: Loss: 2.394 | Acc: 47.408 (607372/1281167) | Lr: 0.4291561764870039
[2022-06-16 15:01:00,103] Test: Loss: 2.102 | Acc: 51.398 (25699/50000)
[2022-06-16 15:01:00,103] Epoch: 173
[2022-06-16 15:19:50,251] Train: Loss: 2.397 | Acc: 47.359 (606748/1281167) | Lr: 0.42738123041165693
[2022-06-16 15:20:35,082] Test: Loss: 2.301 | Acc: 47.690 (23845/50000)
[2022-06-16 15:20:35,082] Epoch: 174
[2022-06-16 15:38:23,902] Train: Loss: 2.393 | Acc: 47.464 (608095/1281167) | Lr: 0.4256008279625401
[2022-06-16 15:39:05,095] Test: Loss: 2.192 | Acc: 49.590 (24795/50000)
[2022-06-16 15:39:05,096] Epoch: 175
[2022-06-16 15:57:32,871] Train: Loss: 2.391 | Acc: 47.454 (607960/1281167) | Lr: 0.4238150454061688
[2022-06-16 15:58:19,460] Test: Loss: 2.263 | Acc: 48.116 (24058/50000)
[2022-06-16 15:58:19,461] Epoch: 176
[2022-06-16 16:17:04,559] Train: Loss: 2.388 | Acc: 47.527 (608898/1281167) | Lr: 0.4220239592395241
[2022-06-16 16:17:49,221] Test: Loss: 2.226 | Acc: 49.314 (24657/50000)
[2022-06-16 16:17:49,222] Epoch: 177
[2022-06-16 16:35:39,061] Train: Loss: 2.385 | Acc: 47.610 (609960/1281167) | Lr: 0.4202276461867761
[2022-06-16 16:36:33,478] Test: Loss: 2.265 | Acc: 47.902 (23951/50000)
[2022-06-16 16:36:33,479] Epoch: 178
[2022-06-16 16:55:07,579] Train: Loss: 2.382 | Acc: 47.688 (610958/1281167) | Lr: 0.4184261831959976
[2022-06-16 16:55:54,425] Test: Loss: 2.202 | Acc: 49.782 (24891/50000)
[2022-06-16 16:55:54,426] Epoch: 179
[2022-06-16 17:15:25,959] Train: Loss: 2.384 | Acc: 47.579 (609569/1281167) | Lr: 0.4166196474358673
[2022-06-16 17:16:17,415] Test: Loss: 2.064 | Acc: 51.986 (25993/50000)
[2022-06-16 17:16:17,416] Epoch: 180
[2022-06-16 17:34:48,472] Train: Loss: 2.386 | Acc: 47.567 (609410/1281167) | Lr: 0.4148081162923645
[2022-06-16 17:35:34,168] Test: Loss: 2.010 | Acc: 53.034 (26517/50000)
[2022-06-16 17:35:34,168] Saving..
[2022-06-16 17:35:34,242] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-16 17:35:34,242] Epoch: 181
[2022-06-16 17:55:11,124] Train: Loss: 2.381 | Acc: 47.675 (610791/1281167) | Lr: 0.4129916673654542
[2022-06-16 17:55:55,939] Test: Loss: 2.241 | Acc: 48.856 (24428/50000)
[2022-06-16 17:55:55,939] Epoch: 182
[2022-06-16 18:15:28,678] Train: Loss: 2.374 | Acc: 47.755 (611825/1281167) | Lr: 0.4111703784657627
[2022-06-16 18:16:24,867] Test: Loss: 2.345 | Acc: 47.354 (23677/50000)
[2022-06-16 18:16:24,867] Epoch: 183
[2022-06-16 18:34:17,456] Train: Loss: 2.376 | Acc: 47.745 (611696/1281167) | Lr: 0.409344327611245
[2022-06-16 18:35:00,795] Test: Loss: 2.108 | Acc: 51.750 (25875/50000)
[2022-06-16 18:35:00,795] Epoch: 184
[2022-06-16 18:54:03,063] Train: Loss: 2.373 | Acc: 47.775 (612079/1281167) | Lr: 0.4075135930238419
[2022-06-16 18:54:48,703] Test: Loss: 2.054 | Acc: 52.168 (26084/50000)
[2022-06-16 18:54:48,703] Epoch: 185
[2022-06-16 19:13:55,466] Train: Loss: 2.371 | Acc: 47.871 (613304/1281167) | Lr: 0.40567825312612993
[2022-06-16 19:14:40,398] Test: Loss: 2.157 | Acc: 50.528 (25264/50000)
[2022-06-16 19:14:40,398] Epoch: 186
[2022-06-16 19:34:07,885] Train: Loss: 2.372 | Acc: 47.854 (613086/1281167) | Lr: 0.403838386537962
[2022-06-16 19:34:53,012] Test: Loss: 2.181 | Acc: 50.032 (25016/50000)
[2022-06-16 19:34:53,013] Epoch: 187
[2022-06-16 19:52:45,868] Train: Loss: 2.366 | Acc: 47.958 (614419/1281167) | Lr: 0.4019940720730991
[2022-06-16 19:53:33,897] Test: Loss: 2.458 | Acc: 45.150 (22575/50000)
[2022-06-16 19:53:33,898] Epoch: 188
[2022-06-16 20:12:17,091] Train: Loss: 2.369 | Acc: 47.877 (613388/1281167) | Lr: 0.4001453887358346
[2022-06-16 20:13:01,594] Test: Loss: 2.088 | Acc: 51.656 (25828/50000)
[2022-06-16 20:13:01,594] Epoch: 189
[2022-06-16 20:30:11,851] Train: Loss: 2.362 | Acc: 48.007 (615055/1281167) | Lr: 0.39829241571760976
[2022-06-16 20:30:57,666] Test: Loss: 2.054 | Acc: 52.380 (26190/50000)
[2022-06-16 20:30:57,666] Epoch: 190
[2022-06-16 20:47:55,388] Train: Loss: 2.360 | Acc: 48.073 (615901/1281167) | Lr: 0.3964352323936215
[2022-06-16 20:48:38,412] Test: Loss: 2.170 | Acc: 49.852 (24926/50000)
[2022-06-16 20:48:38,413] Epoch: 191
[2022-06-16 21:06:56,717] Train: Loss: 2.359 | Acc: 48.114 (616418/1281167) | Lr: 0.39457391831942223
[2022-06-16 21:07:38,750] Test: Loss: 2.107 | Acc: 51.114 (25557/50000)
[2022-06-16 21:07:38,751] Epoch: 192
[2022-06-16 21:24:59,264] Train: Loss: 2.360 | Acc: 48.081 (615998/1281167) | Lr: 0.3927085532275119
[2022-06-16 21:25:49,172] Test: Loss: 2.541 | Acc: 43.934 (21967/50000)
[2022-06-16 21:25:49,172] Epoch: 193
[2022-06-16 21:43:37,872] Train: Loss: 2.356 | Acc: 48.146 (616829/1281167) | Lr: 0.39083921702392277
[2022-06-16 21:44:21,732] Test: Loss: 2.192 | Acc: 50.146 (25073/50000)
[2022-06-16 21:44:21,732] Epoch: 194
[2022-06-16 22:03:09,220] Train: Loss: 2.355 | Acc: 48.125 (616558/1281167) | Lr: 0.388965989784796
[2022-06-16 22:03:52,836] Test: Loss: 2.200 | Acc: 49.550 (24775/50000)
[2022-06-16 22:03:52,836] Epoch: 195
[2022-06-16 22:21:10,401] Train: Loss: 2.353 | Acc: 48.179 (617249/1281167) | Lr: 0.38708895175295205
[2022-06-16 22:21:52,893] Test: Loss: 2.082 | Acc: 51.650 (25825/50000)
[2022-06-16 22:21:52,894] Epoch: 196
[2022-06-16 22:39:24,221] Train: Loss: 2.350 | Acc: 48.241 (618045/1281167) | Lr: 0.3852081833344529
[2022-06-16 22:40:10,991] Test: Loss: 2.163 | Acc: 50.250 (25125/50000)
[2022-06-16 22:40:10,992] Epoch: 197
[2022-06-16 22:57:20,071] Train: Loss: 2.349 | Acc: 48.218 (617755/1281167) | Lr: 0.38332376509515786
[2022-06-16 22:58:01,590] Test: Loss: 2.082 | Acc: 51.516 (25758/50000)
[2022-06-16 22:58:01,590] Epoch: 198
[2022-06-16 23:15:11,210] Train: Loss: 2.349 | Acc: 48.257 (618258/1281167) | Lr: 0.3814357777572725
[2022-06-16 23:15:52,569] Test: Loss: 2.174 | Acc: 49.934 (24967/50000)
[2022-06-16 23:15:52,571] Epoch: 199
[2022-06-16 23:33:07,303] Train: Loss: 2.345 | Acc: 48.308 (618910/1281167) | Lr: 0.37954430219589075
[2022-06-16 23:33:50,379] Test: Loss: 2.245 | Acc: 48.694 (24347/50000)
[2022-06-16 23:33:50,379] Epoch: 200
[2022-06-16 23:51:11,850] Train: Loss: 2.346 | Acc: 48.347 (619407/1281167) | Lr: 0.37764941943553026
[2022-06-16 23:51:55,841] Test: Loss: 2.232 | Acc: 48.912 (24456/50000)
[2022-06-16 23:51:55,842] Epoch: 201
[2022-06-17 00:09:25,259] Train: Loss: 2.346 | Acc: 48.276 (618492/1281167) | Lr: 0.37575121064666184
[2022-06-17 00:10:18,937] Test: Loss: 2.089 | Acc: 51.520 (25760/50000)
[2022-06-17 00:10:18,937] Epoch: 202
[2022-06-17 00:27:42,276] Train: Loss: 2.342 | Acc: 48.366 (619646/1281167) | Lr: 0.37384975714223234
[2022-06-17 00:28:28,748] Test: Loss: 2.112 | Acc: 51.798 (25899/50000)
[2022-06-17 00:28:28,748] Epoch: 203
[2022-06-17 00:46:09,666] Train: Loss: 2.340 | Acc: 48.427 (620430/1281167) | Lr: 0.37194514037418125
[2022-06-17 00:46:55,831] Test: Loss: 2.016 | Acc: 53.456 (26728/50000)
[2022-06-17 00:46:55,831] Saving..
[2022-06-17 00:46:55,905] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-17 00:46:55,905] Epoch: 204
[2022-06-17 01:04:30,319] Train: Loss: 2.332 | Acc: 48.584 (622437/1281167) | Lr: 0.3700374419299519
[2022-06-17 01:05:12,424] Test: Loss: 2.219 | Acc: 49.316 (24658/50000)
[2022-06-17 01:05:12,424] Epoch: 205
[2022-06-17 01:22:32,660] Train: Loss: 2.331 | Acc: 48.570 (622258/1281167) | Lr: 0.3681267435289963
[2022-06-17 01:23:20,694] Test: Loss: 1.948 | Acc: 54.278 (27139/50000)
[2022-06-17 01:23:20,695] Saving..
[2022-06-17 01:23:20,885] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-17 01:23:20,885] Epoch: 206
[2022-06-17 01:40:43,824] Train: Loss: 2.331 | Acc: 48.586 (622465/1281167) | Lr: 0.3662131270192749
[2022-06-17 01:41:28,392] Test: Loss: 2.059 | Acc: 52.260 (26130/50000)
[2022-06-17 01:41:28,393] Epoch: 207
[2022-06-17 01:59:49,740] Train: Loss: 2.334 | Acc: 48.606 (622720/1281167) | Lr: 0.3642966743737495
[2022-06-17 02:00:33,052] Test: Loss: 2.099 | Acc: 51.686 (25843/50000)
[2022-06-17 02:00:33,053] Epoch: 208
[2022-06-17 02:18:42,845] Train: Loss: 2.332 | Acc: 48.569 (622249/1281167) | Lr: 0.36237746768687323
[2022-06-17 02:19:26,489] Test: Loss: 2.515 | Acc: 44.290 (22145/50000)
[2022-06-17 02:19:26,489] Epoch: 209
[2022-06-17 02:37:14,357] Train: Loss: 2.328 | Acc: 48.645 (623227/1281167) | Lr: 0.360455589171073
[2022-06-17 02:37:57,200] Test: Loss: 2.066 | Acc: 52.188 (26094/50000)
[2022-06-17 02:37:57,201] Epoch: 210
[2022-06-17 02:55:44,521] Train: Loss: 2.323 | Acc: 48.715 (624124/1281167) | Lr: 0.358531121153228
[2022-06-17 02:56:25,632] Test: Loss: 2.165 | Acc: 50.438 (25219/50000)
[2022-06-17 02:56:25,632] Epoch: 211
[2022-06-17 03:14:19,864] Train: Loss: 2.322 | Acc: 48.760 (624695/1281167) | Lr: 0.3566041460711427
[2022-06-17 03:15:03,861] Test: Loss: 2.080 | Acc: 51.470 (25735/50000)
[2022-06-17 03:15:03,861] Epoch: 212
[2022-06-17 03:33:20,421] Train: Loss: 2.319 | Acc: 48.824 (625522/1281167) | Lr: 0.35467474647001634
[2022-06-17 03:34:01,555] Test: Loss: 2.293 | Acc: 48.194 (24097/50000)
[2022-06-17 03:34:01,555] Epoch: 213
[2022-06-17 03:52:24,959] Train: Loss: 2.322 | Acc: 48.738 (624411/1281167) | Lr: 0.3527430049989062
[2022-06-17 03:53:08,465] Test: Loss: 2.442 | Acc: 45.526 (22763/50000)
[2022-06-17 03:53:08,466] Epoch: 214
[2022-06-17 04:11:03,045] Train: Loss: 2.316 | Acc: 48.842 (625753/1281167) | Lr: 0.3508090044071877
[2022-06-17 04:11:52,219] Test: Loss: 2.190 | Acc: 50.026 (25013/50000)
[2022-06-17 04:11:52,219] Epoch: 215
[2022-06-17 04:29:05,228] Train: Loss: 2.315 | Acc: 48.908 (626594/1281167) | Lr: 0.34887282754100923
[2022-06-17 04:29:48,721] Test: Loss: 2.152 | Acc: 50.452 (25226/50000)
[2022-06-17 04:29:48,722] Epoch: 216
[2022-06-17 04:48:07,381] Train: Loss: 2.311 | Acc: 48.927 (626832/1281167) | Lr: 0.3469345573397436
[2022-06-17 04:48:50,790] Test: Loss: 2.018 | Acc: 52.852 (26426/50000)
[2022-06-17 04:48:50,790] Epoch: 217
[2022-06-17 05:06:00,038] Train: Loss: 2.310 | Acc: 48.960 (627255/1281167) | Lr: 0.3449942768324353
[2022-06-17 05:06:43,381] Test: Loss: 2.106 | Acc: 51.454 (25727/50000)
[2022-06-17 05:06:43,381] Epoch: 218
[2022-06-17 05:24:08,522] Train: Loss: 2.313 | Acc: 48.897 (626455/1281167) | Lr: 0.34305206913424346
[2022-06-17 05:24:53,399] Test: Loss: 2.021 | Acc: 52.598 (26299/50000)
[2022-06-17 05:24:53,399] Epoch: 219
[2022-06-17 05:42:12,483] Train: Loss: 2.306 | Acc: 49.064 (628596/1281167) | Lr: 0.3411080174428815
[2022-06-17 05:42:57,720] Test: Loss: 2.033 | Acc: 52.448 (26224/50000)
[2022-06-17 05:42:57,720] Epoch: 220
[2022-06-17 06:00:42,666] Train: Loss: 2.304 | Acc: 49.063 (628583/1281167) | Lr: 0.3391622050350539
[2022-06-17 06:01:27,575] Test: Loss: 2.159 | Acc: 50.932 (25466/50000)
[2022-06-17 06:01:27,575] Epoch: 221
[2022-06-17 06:18:57,514] Train: Loss: 2.303 | Acc: 49.134 (629493/1281167) | Lr: 0.3372147152628879
[2022-06-17 06:19:39,236] Test: Loss: 2.177 | Acc: 50.338 (25169/50000)
[2022-06-17 06:19:39,237] Epoch: 222
[2022-06-17 06:37:25,124] Train: Loss: 2.299 | Acc: 49.197 (630299/1281167) | Lr: 0.33526563155036354
[2022-06-17 06:38:08,642] Test: Loss: 2.025 | Acc: 52.658 (26329/50000)
[2022-06-17 06:38:08,642] Epoch: 223
[2022-06-17 06:55:42,544] Train: Loss: 2.295 | Acc: 49.228 (630693/1281167) | Lr: 0.33331503738974005
[2022-06-17 06:56:26,093] Test: Loss: 2.140 | Acc: 50.764 (25382/50000)
[2022-06-17 06:56:26,094] Epoch: 224
[2022-06-17 07:15:05,504] Train: Loss: 2.295 | Acc: 49.301 (631627/1281167) | Lr: 0.33136301633797927
[2022-06-17 07:15:50,110] Test: Loss: 2.520 | Acc: 44.784 (22392/50000)
[2022-06-17 07:15:50,110] Epoch: 225
[2022-06-17 07:34:37,933] Train: Loss: 2.297 | Acc: 49.189 (630191/1281167) | Lr: 0.3294096520131662
[2022-06-17 07:35:21,820] Test: Loss: 1.949 | Acc: 54.388 (27194/50000)
[2022-06-17 07:35:21,820] Saving..
[2022-06-17 07:35:21,886] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-17 07:35:21,887] Epoch: 226
[2022-06-17 07:54:06,903] Train: Loss: 2.293 | Acc: 49.379 (632632/1281167) | Lr: 0.327455028090927
[2022-06-17 07:54:48,648] Test: Loss: 2.192 | Acc: 50.386 (25193/50000)
[2022-06-17 07:54:48,648] Epoch: 227
[2022-06-17 08:12:49,216] Train: Loss: 2.287 | Acc: 49.426 (633232/1281167) | Lr: 0.32549922830084527
[2022-06-17 08:13:30,974] Test: Loss: 2.085 | Acc: 51.490 (25745/50000)
[2022-06-17 08:13:30,974] Epoch: 228
[2022-06-17 08:31:26,884] Train: Loss: 2.286 | Acc: 49.465 (633731/1281167) | Lr: 0.3235423364228745
[2022-06-17 08:32:12,756] Test: Loss: 2.045 | Acc: 52.740 (26370/50000)
[2022-06-17 08:32:12,756] Epoch: 229
[2022-06-17 08:49:46,877] Train: Loss: 2.280 | Acc: 49.511 (634320/1281167) | Lr: 0.3215844362837498
[2022-06-17 08:50:31,401] Test: Loss: 2.158 | Acc: 50.752 (25376/50000)
[2022-06-17 08:50:31,401] Epoch: 230
[2022-06-17 09:08:49,139] Train: Loss: 2.282 | Acc: 49.541 (634702/1281167) | Lr: 0.31962561175339643
[2022-06-17 09:09:35,422] Test: Loss: 1.931 | Acc: 54.508 (27254/50000)
[2022-06-17 09:09:35,423] Saving..
[2022-06-17 09:09:35,497] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-17 09:09:35,497] Epoch: 231
[2022-06-17 09:27:28,737] Train: Loss: 2.282 | Acc: 49.533 (634605/1281167) | Lr: 0.3176659467413381
[2022-06-17 09:28:13,409] Test: Loss: 2.059 | Acc: 52.052 (26026/50000)
[2022-06-17 09:28:13,410] Epoch: 232
[2022-06-17 09:47:04,579] Train: Loss: 2.275 | Acc: 49.642 (636003/1281167) | Lr: 0.3157055251931016
[2022-06-17 09:47:47,340] Test: Loss: 2.055 | Acc: 52.276 (26138/50000)
[2022-06-17 09:47:47,340] Epoch: 233
[2022-06-17 10:05:20,097] Train: Loss: 2.277 | Acc: 49.590 (635331/1281167) | Lr: 0.3137444310866212
[2022-06-17 10:06:01,494] Test: Loss: 2.004 | Acc: 53.286 (26643/50000)
[2022-06-17 10:06:01,494] Epoch: 234
[2022-06-17 10:23:34,005] Train: Loss: 2.273 | Acc: 49.650 (636095/1281167) | Lr: 0.31178274842864145
[2022-06-17 10:24:18,198] Test: Loss: 2.235 | Acc: 49.174 (24587/50000)
[2022-06-17 10:24:18,198] Epoch: 235
[2022-06-17 10:42:30,919] Train: Loss: 2.271 | Acc: 49.693 (636656/1281167) | Lr: 0.30982056125111845
[2022-06-17 10:43:15,177] Test: Loss: 2.009 | Acc: 52.876 (26438/50000)
[2022-06-17 10:43:15,178] Epoch: 236
[2022-06-17 11:01:10,046] Train: Loss: 2.271 | Acc: 49.715 (636938/1281167) | Lr: 0.3078579536076201
[2022-06-17 11:01:57,464] Test: Loss: 2.196 | Acc: 49.852 (24926/50000)
[2022-06-17 11:01:57,465] Epoch: 237
[2022-06-17 11:19:16,325] Train: Loss: 2.268 | Acc: 49.830 (638411/1281167) | Lr: 0.30589500956972593
[2022-06-17 11:19:59,892] Test: Loss: 2.019 | Acc: 53.270 (26635/50000)
[2022-06-17 11:19:59,892] Epoch: 238
[2022-06-17 11:37:34,957] Train: Loss: 2.260 | Acc: 49.944 (639870/1281167) | Lr: 0.3039318132234252
[2022-06-17 11:38:16,964] Test: Loss: 1.983 | Acc: 53.766 (26883/50000)
[2022-06-17 11:38:16,965] Epoch: 239
[2022-06-17 11:55:06,683] Train: Loss: 2.263 | Acc: 49.826 (638350/1281167) | Lr: 0.3019684486655154
[2022-06-17 11:55:57,103] Test: Loss: 2.163 | Acc: 50.944 (25472/50000)
[2022-06-17 11:55:57,104] Epoch: 240
[2022-06-17 12:13:26,916] Train: Loss: 2.223 | Acc: 50.610 (648395/1281167) | Lr: 0.30000499999999974
[2022-06-17 12:14:09,968] Test: Loss: 2.044 | Acc: 52.536 (26268/50000)
[2022-06-17 12:14:09,969] Epoch: 241
[2022-06-17 12:31:44,325] Train: Loss: 2.208 | Acc: 50.930 (652504/1281167) | Lr: 0.29804155133448396
[2022-06-17 12:32:27,623] Test: Loss: 1.869 | Acc: 55.920 (27960/50000)
[2022-06-17 12:32:27,623] Saving..
[2022-06-17 12:32:27,705] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-17 12:32:27,706] Epoch: 242
[2022-06-17 12:50:08,226] Train: Loss: 2.204 | Acc: 50.956 (652832/1281167) | Lr: 0.29607818677657416
[2022-06-17 12:50:49,287] Test: Loss: 1.955 | Acc: 54.234 (27117/50000)
[2022-06-17 12:50:49,288] Epoch: 243
[2022-06-17 13:08:23,266] Train: Loss: 2.202 | Acc: 51.017 (653610/1281167) | Lr: 0.29411499043027345
[2022-06-17 13:09:04,936] Test: Loss: 1.979 | Acc: 53.564 (26782/50000)
[2022-06-17 13:09:04,937] Epoch: 244
[2022-06-17 13:25:58,169] Train: Loss: 2.194 | Acc: 51.116 (654887/1281167) | Lr: 0.2921520463923793
[2022-06-17 13:26:43,975] Test: Loss: 1.884 | Acc: 55.690 (27845/50000)
[2022-06-17 13:26:43,976] Epoch: 245
[2022-06-17 13:44:29,010] Train: Loss: 2.193 | Acc: 51.146 (655268/1281167) | Lr: 0.290189438748881
[2022-06-17 13:45:10,753] Test: Loss: 1.860 | Acc: 56.364 (28182/50000)
[2022-06-17 13:45:10,754] Saving..
[2022-06-17 13:45:10,831] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-17 13:45:10,832] Epoch: 246
[2022-06-17 14:03:19,572] Train: Loss: 2.189 | Acc: 51.309 (657352/1281167) | Lr: 0.2882272515713579
[2022-06-17 14:04:02,241] Test: Loss: 2.103 | Acc: 51.704 (25852/50000)
[2022-06-17 14:04:02,242] Epoch: 247
[2022-06-17 14:22:09,365] Train: Loss: 2.181 | Acc: 51.422 (658802/1281167) | Lr: 0.2862655689133781
[2022-06-17 14:22:52,604] Test: Loss: 2.233 | Acc: 49.656 (24828/50000)
[2022-06-17 14:22:52,605] Epoch: 248
[2022-06-17 14:41:05,882] Train: Loss: 2.181 | Acc: 51.360 (658002/1281167) | Lr: 0.2843044748068978
[2022-06-17 14:41:53,388] Test: Loss: 2.097 | Acc: 51.554 (25777/50000)
[2022-06-17 14:41:53,389] Epoch: 249
[2022-06-17 15:00:01,757] Train: Loss: 2.178 | Acc: 51.437 (658992/1281167) | Lr: 0.2823440532586613
[2022-06-17 15:00:44,164] Test: Loss: 1.956 | Acc: 54.238 (27119/50000)
[2022-06-17 15:00:44,165] Epoch: 250
[2022-06-17 15:18:55,005] Train: Loss: 2.171 | Acc: 51.569 (660688/1281167) | Lr: 0.280384388246603
[2022-06-17 15:19:38,068] Test: Loss: 1.797 | Acc: 57.094 (28547/50000)
[2022-06-17 15:19:38,068] Saving..
[2022-06-17 15:19:38,156] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-17 15:19:38,157] Epoch: 251
[2022-06-17 15:37:31,247] Train: Loss: 2.170 | Acc: 51.579 (660807/1281167) | Lr: 0.27842556371624966
[2022-06-17 15:38:12,619] Test: Loss: 1.971 | Acc: 54.100 (27050/50000)
[2022-06-17 15:38:12,621] Epoch: 252
[2022-06-17 15:56:19,737] Train: Loss: 2.168 | Acc: 51.632 (661496/1281167) | Lr: 0.27646766357712493
[2022-06-17 15:57:01,902] Test: Loss: 1.767 | Acc: 57.802 (28901/50000)
[2022-06-17 15:57:01,903] Saving..
[2022-06-17 15:57:01,974] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7
[2022-06-17 15:57:01,975] Epoch: 253
[2022-06-17 16:14:37,416] Train: Loss: 2.167 | Acc: 51.598 (661060/1281167) | Lr: 0.2745107716991541
[2022-06-17 16:15:21,090] Test: Loss: 1.784 | Acc: 57.494 (28747/50000)
[2022-06-17 16:15:21,090] Epoch: 254
[2022-06-17 16:32:48,440] Train: Loss: 2.164 | Acc: 51.691 (662245/1281167) | Lr: 0.27255497190907235
[2022-06-17 16:33:30,158] Test: Loss: 1.861 | Acc: 55.984 (27992/50000)
[2022-06-17 16:33:30,159] Epoch: 255
[2022-06-17 16:50:48,317] Train: Loss: 2.158 | Acc: 51.859 (664394/1281167) | Lr: 0.2706003479868332
[2022-06-17 16:51:31,760] Test: Loss: 1.752 | Acc: 58.714 (29357/50000)
[2022-06-17 16:51:31,760] Saving..
[2022-06-17 16:51:31,855] * Saved checkpoint to ./results/14091054/FENet_imagenet.t7