Description
Every self-respecting researcher has to invent their own network at some point, and I have that itch too. I've seen a lot of architectures and think I can do better by bringing different ideas together under one roof.
Plus I hope to turn this into my Master's thesis.
Base networks and ideas:
- ResNet - the classic. The basic block is BasicBlock or BottleNeck. The former is much faster but works noticeably worse. Interesting why? It seems to come down purely to network depth.
- DarkNet - unlike ResNet, downsampling happens between blocks, so the residual never needs a separate downsampling path anywhere.
- TResNet - the first blocks are BasicBlock, the later ones BottleNeck. Among the important changes: attention after the 2nd convolution in the BottleNeck, not after the 3rd as in SE-Net and its kin.
- Neural Architecture Design for GPU-Efficient Networks - solves the same problem as TResNet: they want fast inference on GPU rather than on mobile, and they too find that inference speed depends on neither FLOPs nor parameter count. They study three blocks: the xx-block (essentially basic), the bottleneck (BL), and the inverted bottleneck with depthwise convs (DW). Very interesting plots of latency vs. feature size, with a clear kink that presumably happens when the GPU starts running out of cores (?). Their experiments show that BL/DW are more efficient on GPU when the matrices are low-rank. They train several different networks, compute the singular values, and find that rank is high in the first stages and then drops quickly, i.e. XX should be used in the early stages and BL/DW afterwards. They propose several very generic MasterNet variants; they themselves pick X, X, B, D, D but say X, X, D, D also works decently. By hand they come up with the following network:
blocks: C,X,X,D,D,C. depth: 1,1,4,8,6,1. width: 32,48,64,160,320,1280. stride: 2,2,2,2,2,1. Acc@1: 77.5%. They then run NAS on top and optimize. Two interesting takeaways: NAS consistently picks 3x3 convolutions and bottle_ratio=3 for DW, while EffNet uses 6 (so apparently 6 is too much and less would do?).
The best architectures for the different regimes are given in the paper.
Thoughts after reading: worth trying their approach and swapping the later blocks for blocks with groups, not necessarily even DW.
The authors trained their normal model at resolution 192. I validated the weights: 79.96@192, 80.7@224 (!), 81@256. Looks very strong.
These folks also clearly know how to train networks, because you don't get that kind of quality without an excellent pipeline; half of their secret is there. What they missed, though, is that more than half of their 21M parameters sit in the expand and squeeze convolutions, because their channel counts are much higher than in mobilenet or effnet. Would it work better with more Separable Convs in place of the Inverted ones? Feels like yes, but in practice who knows. The hand-designed config is sketched below.
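For reference, here is that hand-designed config as a plain Python structure (a sketch transcribed from the numbers above; the layout and names are mine, not the authors' code):

```python
# MasterNet-style hand-designed config, transcribed from the notes above.
# C = plain conv, X = xx-block (basic), D = DW inverted bottleneck.
masternet_normal = [
    # (block, depth, width, stride)
    ("C", 1, 32, 2),
    ("X", 1, 48, 2),
    ("X", 4, 64, 2),
    ("D", 8, 160, 2),
    ("D", 6, 320, 2),
    ("C", 1, 1280, 1),
]  # reported Acc@1: 77.5%
```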
- Rethinking Bottleneck Structure for Efficient Mobile Network Design - modifies the mobilenet blocks and studies where the residual is best placed. They find that a skip connection between the wide layers works better than between the narrow ones, and propose a new bottleneck block, the sandglass block: input -> DW 3x3 -> reduction Conv1x1 -> expansion Conv1x1 -> DW 3x3 + input. They additionally introduce an Identity tensor multiplier parameter that essentially does the same thing as a partial residual, i.e. the skip covers only part of the features; in their experiments a value of 0.5 doesn't hurt accuracy and gives a ~5% speed boost. Finally, they note their block has 2 DW convolutions while the Inverted Residual has only one, so they also compare against a Mblnv2 modification with two DW convs; that helps, but still works worse than their variant. A sketch of the block follows.
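A minimal PyTorch sketch of the sandglass block as described above; the activation placement and the `reduction`/`id_ratio` defaults are my assumptions, not the authors' reference code:

```python
import torch
import torch.nn as nn

class SandglassBlock(nn.Module):
    """DW 3x3 -> reduce 1x1 -> expand 1x1 -> DW 3x3, skip between the wide ends."""

    def __init__(self, channels: int, reduction: int = 6, id_ratio: float = 1.0):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            # depthwise 3x3 on the wide representation
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU6(inplace=True),
            # reduction 1x1 (kept linear)
            nn.Conv2d(channels, mid, 1, bias=False), nn.BatchNorm2d(mid),
            # expansion 1x1
            nn.Conv2d(mid, channels, 1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU6(inplace=True),
            # final depthwise 3x3 (kept linear)
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.BatchNorm2d(channels))
        # identity tensor multiplier: the skip covers only the first k channels
        self.k = int(channels * id_ratio)

    def forward(self, x):
        out = self.body(x)
        return torch.cat([out[:, :self.k] + x[:, :self.k], out[:, self.k:]], dim=1)
```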
- CSPNet (Cross Stage Partial Networks) - modifies the Residual: instead, the input is split into two parts, one is copied as-is while the other goes through the block; they are then concatenated and passed through another 1x1 conv to mix. Reduces parameter count and FLOPs without hurting quality (minimal sketch below). An earlier paper by the same authors, Enriching variety of layer-wise learning information by gradient combination, proposes partial ResNet - a skip connection over only half the blocks - and claims this gets rid of redundant gradients. It has interesting pictures about the gradients that I don't understand yet :(
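A minimal sketch of the CSP split as I read it (`inner` stands for any stack of residual blocks operating on half the channels; names are mine):

```python
import torch
import torch.nn as nn

class CSPStage(nn.Module):
    """Split channels, process one half, keep the other, concat, mix with 1x1."""

    def __init__(self, channels: int, inner: nn.Module):
        super().__init__()
        assert channels % 2 == 0
        self.inner = inner  # must map channels // 2 -> channels // 2
        self.transition = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)  # a is copied, b goes through the block
        return self.transition(torch.cat([a, self.inner(b)], dim=1))
```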
- Hybrid Composition with IdleBlock: More Efficient Networks for Image Recognition - does exactly the same thing as CSPNet but for mobile networks, calling the block Idle. An interesting idea: instead of converting every layer, alternate regular layers and Idle ones. They point out that it actually matters which part of the feature map stays idle (the first or the second, aka L or R) and propose two versions, L-Idle and R-Idle. If you stack only one type, the receptive field grows on one part of the feature map and stays fixed on the other; alternating makes it grow uniformly. Stacking a single type works better (as in CSP, basically). Rough sketch below.
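A rough functional sketch of the L/R-Idle split as I read it (unlike CSP there is no mixing conv after the concat):

```python
import torch

def idle_forward(x, block, idle_ratio=0.5, left=True):
    # k channels bypass the block entirely; the rest are processed
    k = int(x.shape[1] * idle_ratio)
    if left:  # L-Idle: the first k channels stay untouched
        return torch.cat([x[:, :k], block(x[:, k:])], dim=1)
    # R-Idle: the last k channels stay untouched
    return torch.cat([block(x[:, :-k]), x[:, -k:]], dim=1)
```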
- Res2Net - proposes modifying the BottleNeck with hierarchical features. At the same parameter count it consistently works better, helping both classification and downstream tasks. Sketch below.
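A sketch of the hierarchical 3x3 part inside a Res2Net bottleneck with the usual scale=4 split (simplified: the real block wraps this between 1x1 convs and adds BN/ReLU):

```python
import torch
import torch.nn as nn

class Res2NetHierarchy(nn.Module):
    """Each split's conv also sees the previous split's output."""

    def __init__(self, channels: int, scale: int = 4):
        super().__init__()
        self.scale = scale
        w = channels // scale
        self.convs = nn.ModuleList(
            nn.Conv2d(w, w, 3, padding=1, bias=False) for _ in range(scale - 1))

    def forward(self, x):
        xs = torch.chunk(x, self.scale, dim=1)
        ys, prev = [xs[0]], None  # the first split passes through untouched
        for conv, xi in zip(self.convs, xs[1:]):
            prev = conv(xi if prev is None else xi + prev)
            ys.append(prev)
        return torch.cat(ys, dim=1)
```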
- Attention blocks - need to decide both which one to use (ECA currently looks best) and where to place it. upd. 12.09.20: leaning towards the SE-Var3 variant from the ECA paper (see below).
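For reference, a sketch of ECA; SE-Var3 from the same paper would replace the 1D conv with a single full C x C linear layer (no channel reduction). The kernel size here is just a placeholder:

```python
import torch.nn as nn

class ECA(nn.Module):
    """GAP, then a cheap 1D conv across the channel axis, then a sigmoid gate."""

    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x):                          # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                     # (B, C) global average pool
        w = self.conv(w.unsqueeze(1)).squeeze(1)   # mix neighboring channels
        return x * self.gate(w)[:, :, None, None]
```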
- ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network - looks at network design through the lens of matrix ranks and shows that 1) fancy activations like swish preserve rank better, 2) channel compression reduces model capacity, 3) expand layers (where out_channels > in_channels) preserve rank better; they propose inserting as many of those as possible. Reading their code yields a few more ideas: 4) they use a Partial Residual - when there are more output channels than input channels, the add is applied only to the first channels (sketched below); 5) they grow the channel count a little with every block (!), not every stage, thereby getting more expand layers; 6) they use a MobileNet-style head: the network body ends with fairly few channels (~360), then conv1x1 360x2500 -> bn -> swish -> AvgPool -> Dropout -> Linear. A possible improvement for this network: don't use Inverted Bottlenecks, just grow the channel count linearly, using group convs instead of depthwise.
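Point 4, the partial residual, takes only a couple of lines (a sketch of my reading of their code):

```python
import torch

def partial_residual(out: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # when out has more channels than x, add the skip to the first ones only
    c_in = x.shape[1]
    return torch.cat([out[:, :c_in] + x, out[:, c_in:]], dim=1)
```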
upd. 27.01.2021
ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design - finally read this well-known ShuffleNet paper; I'd seen it around before but never read it. It proposes design principles for building fast networks: 1) Equal channel width minimizes memory access cost - input and output channel counts should match; a reduction conv1x1 is not great. 2) Excessive group convolution increases MAC - group convs are slow, better avoided. 3) Network fragmentation reduces degree of parallelism - many branches or many convs inside a block reduce speed compared to a single big conv (at equal FLOPs). 4) Element-wise operations are non-negligible - all the ReLUs and additions matter too.
The architecture they propose is very similar to the CSP above. A few remarks: 1) "the three successive elementwise operations, "Concat", "Channel Shuffle" and "Channel Split", are merged into a single element-wise operation" - ?? no idea what they mean by this; 2) no ReLU after the DW conv; 3) there is an extra conv1x1 before the GAP; 4) I don't like their downsampling variant, I'd just drop in BlurPool and be done with it. Sketch of the unit below.
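A minimal sketch of the stride-1 unit as I read the paper (names are mine; note there is no ReLU after the DW conv, matching remark 2):

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    # interleave channels so the two concatenated halves mix in the next block
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class ShuffleV2Unit(nn.Module):
    """Stride-1 ShuffleNet V2 unit: split, transform one half, concat, shuffle."""

    def __init__(self, channels: int):
        super().__init__()
        c = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(c, c, 1, bias=False), nn.BatchNorm2d(c), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1, groups=c, bias=False), nn.BatchNorm2d(c),
            nn.Conv2d(c, c, 1, bias=False), nn.BatchNorm2d(c), nn.ReLU(inplace=True))

    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)
        return channel_shuffle(torch.cat([a, self.branch(b)], dim=1))
```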

Overall an interesting paper that once again confirms things I already believed.
upd. 01.06.21 - the authors substantially updated the ReXNet paper, slightly reworking their conclusions: fancy activations are best placed after convolutions that increase the channel count; after a DW conv plain ReLU is fine, but after the first conv1x1 you want swish (aka silu). If on top of that you make the filter growth linear, even without changing the dimension rate in the inverted bottleneck, the network gets much better. Roughly, as sketched below.
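A sketch of that recipe as I read it (the exact layout is my guess, not the authors' code):

```python
import torch.nn as nn

def rexnet_style_block(c_in: int, c_out: int, expand: int = 6):
    mid = c_in * expand
    return nn.Sequential(
        nn.Conv2d(c_in, mid, 1, bias=False), nn.BatchNorm2d(mid),
        nn.SiLU(inplace=True),                       # swish after the expanding 1x1
        nn.Conv2d(mid, mid, 3, padding=1, groups=mid, bias=False),
        nn.BatchNorm2d(mid), nn.ReLU(inplace=True),  # cheap ReLU after the DW conv
        nn.Conv2d(mid, c_out, 1, bias=False), nn.BatchNorm2d(c_out))  # linear projection
```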
Additional ideas:
- Adjoint-Network - distillation during training itself. They claim this also helps the big network train well, but they improve on very weak baselines (73% for resnet50). The idea could perhaps be refined by dropping less in the early layers and more in the later ones, but I'm not up to date on SOTA distillation, so there may be better ideas out there.
All experiments will be run on a downscaled version of Imagenet (128x128) for faster iterations (one experiment in this setup takes
Experiment plan:
Spoiler Template
To be added
- Darknet53. My version trains poorly; I stopped the run after ~40 epochs
- Vanilla ResNet50 -> 2 GPUs + smooth
Vanilla Resnet
Epoch 85/90. training: 2504it [13:46, 3.03it/s, Acc@1=84.299, Acc@5=94.812, Loss=1.6742]
Epoch 85/90. validating: 101it [00:20, 4.82it/s, Acc@1=79.392, Acc@5=94.716, Loss=1.7683]
[08-08 12:50:18] - Train loss: 1.6732 | Acc@1: 84.3233 | Acc@5: 94.8197
[08-08 12:50:18] - Val loss: 1.9336 | Acc@1: 75.6580 | Acc@5: 92.6180
[08-08 12:50:18] - Epoch 85: best loss improved from 1.9355 to 1.9336
[08-08 12:50:18] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [13:46, 3.03it/s, Acc@1=84.544, Acc@5=94.888, Loss=1.6655]
Epoch 86/90. validating: 101it [00:20, 4.93it/s, Acc@1=72.024, Acc@5=90.464, Loss=2.0965]
[08-08 13:04:25] - Train loss: 1.6659 | Acc@1: 84.5328 | Acc@5: 94.8969
[08-08 13:04:25] - Val loss: 1.9314 | Acc@1: 75.7660 | Acc@5: 92.6420
[08-08 13:04:25] - Epoch 86: best loss improved from 1.9336 to 1.9314
[08-08 13:04:26] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [13:46, 3.03it/s, Acc@1=84.599, Acc@5=94.965, Loss=1.6629]
Epoch 87/90. validating: 101it [00:20, 4.82it/s, Acc@1=79.528, Acc@5=94.752, Loss=1.7644]
[08-08 13:18:33] - Train loss: 1.6629 | Acc@1: 84.6236 | Acc@5: 94.9396
[08-08 13:18:33] - Val loss: 1.9299 | Acc@1: 75.8300 | Acc@5: 92.6400
[08-08 13:18:33] - Epoch 87: best loss improved from 1.9314 to 1.9299
[08-08 13:18:33] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [13:46, 3.03it/s, Acc@1=84.720, Acc@5=94.930, Loss=1.6612]
Epoch 88/90. validating: 101it [00:20, 4.94it/s, Acc@1=72.064, Acc@5=90.544, Loss=2.0947]
[08-08 13:32:41] - Train loss: 1.6612 | Acc@1: 84.6932 | Acc@5: 94.9399
[08-08 13:32:41] - Val loss: 1.9286 | Acc@1: 75.8040 | Acc@5: 92.6400
[08-08 13:32:41] - Epoch 88: best loss improved from 1.9299 to 1.9286
[08-08 13:32:41] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [13:46, 3.03it/s, Acc@1=84.811, Acc@5=94.989, Loss=1.6571]
Epoch 89/90. validating: 101it [00:20, 4.83it/s, Acc@1=79.480, Acc@5=94.752, Loss=1.7639]
[08-08 13:46:49] - Train loss: 1.6562 | Acc@1: 84.8184 | Acc@5: 94.9993
[08-08 13:46:49] - Val loss: 1.9301 | Acc@1: 75.7580 | Acc@5: 92.6140
[08-08 13:46:49] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [13:46, 3.03it/s, Acc@1=84.829, Acc@5=94.994, Loss=1.6559]
Epoch 90/90. validating: 101it [00:20, 4.94it/s, Acc@1=71.976, Acc@5=90.532, Loss=2.0955]
[08-08 14:00:56] - Train loss: 1.6560 | Acc@1: 84.8103 | Acc@5: 94.9930
[08-08 14:00:56] - Val loss: 1.9288 | Acc@1: 75.7700 | Acc@5: 92.6660
[08-08 14:00:57] - Acc@1 75.770 Acc@5 92.666
[08-08 14:00:57] - Total time: 21h 28.4m
Vanilla Resnet + EMA + color twist aug
Epoch 85/90. training: 2504it [13:50, 3.02it/s, Acc@1=82.961, Acc@5=94.302, Loss=1.7165]
Epoch 85/90. validating: 101it [00:20, 4.81it/s, Acc@1=80.076, Acc@5=95.016, Loss=1.7426]
[08-10 03:29:15] - Train loss: 1.7151 | Acc@1: 83.0049 | Acc@5: 94.3181
[08-10 03:29:15] - Val loss: 1.9036 | Acc@1: 76.3040 | Acc@5: 92.9180
[08-10 03:29:15] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [13:50, 3.01it/s, Acc@1=83.259, Acc@5=94.416, Loss=1.7061]
Epoch 86/90. validating: 101it [00:20, 4.94it/s, Acc@1=72.516, Acc@5=90.828, Loss=2.0635]
[08-10 03:43:27] - Train loss: 1.7073 | Acc@1: 83.2211 | Acc@5: 94.4151
[08-10 03:43:27] - Val loss: 1.9032 | Acc@1: 76.3060 | Acc@5: 92.9180
[08-10 03:43:27] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [13:51, 3.01it/s, Acc@1=83.305, Acc@5=94.436, Loss=1.7054]
Epoch 87/90. validating: 101it [00:20, 4.82it/s, Acc@1=80.016, Acc@5=95.004, Loss=1.7425]
[08-10 03:57:39] - Train loss: 1.7053 | Acc@1: 83.3025 | Acc@5: 94.4406
[08-10 03:57:39] - Val loss: 1.9030 | Acc@1: 76.3040 | Acc@5: 92.8900
[08-10 03:57:39] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [13:51, 3.01it/s, Acc@1=83.414, Acc@5=94.425, Loss=1.7033]
Epoch 88/90. validating: 101it [00:20, 4.93it/s, Acc@1=72.532, Acc@5=90.840, Loss=2.0637]
[08-10 04:11:51] - Train loss: 1.7032 | Acc@1: 83.4011 | Acc@5: 94.4357
[08-10 04:11:51] - Val loss: 1.9032 | Acc@1: 76.2660 | Acc@5: 92.9100
[08-10 04:11:51] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [13:51, 3.01it/s, Acc@1=83.509, Acc@5=94.462, Loss=1.6992]
Epoch 89/90. validating: 101it [00:20, 4.82it/s, Acc@1=80.012, Acc@5=94.988, Loss=1.7425]
[08-10 04:26:03] - Train loss: 1.6979 | Acc@1: 83.5370 | Acc@5: 94.4932
[08-10 04:26:03] - Val loss: 1.9031 | Acc@1: 76.2620 | Acc@5: 92.9060
[08-10 04:26:03] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [13:50, 3.02it/s, Acc@1=83.515, Acc@5=94.537, Loss=1.6972]
Epoch 90/90. validating: 101it [00:20, 4.94it/s, Acc@1=72.516, Acc@5=90.856, Loss=2.0637]
[08-10 04:40:15] - Train loss: 1.6973 | Acc@1: 83.5025 | Acc@5: 94.5317
[08-10 04:40:15] - Val loss: 1.9031 | Acc@1: 76.2780 | Acc@5: 92.9220
[08-10 04:40:16] - Acc@1 76.278 Acc@5 92.922
[08-10 04:40:16] - Total time: 21h 28.7m
- ResNet50 with a single stride-2 convolution.
ResNet with no residual in stride 2 block
Epoch 85/90. training: 2504it [13:30, 3.09it/s, Acc@1=82.457, Acc@5=94.125, Loss=1.7528]
Epoch 85/90. validating: 101it [00:20, 4.89it/s, Acc@1=79.572, Acc@5=94.864, Loss=1.7781]
[08-08 11:17:41] - Train loss: 1.7520 | Acc@1: 82.5174 | Acc@5: 94.1372
[08-08 11:17:41] - Val loss: 1.9366 | Acc@1: 75.8400 | Acc@5: 92.7960
[08-08 11:17:41] - Epoch 85: best loss improved from 1.9403 to 1.9366
[08-08 11:17:42] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [13:06, 3.18it/s, Acc@1=82.795, Acc@5=94.256, Loss=1.7428]
Epoch 86/90. validating: 101it [00:21, 4.73it/s, Acc@1=72.212, Acc@5=90.788, Loss=2.0923]
[08-08 11:31:10] - Train loss: 1.7436 | Acc@1: 82.7438 | Acc@5: 94.2680
[08-08 11:31:11] - Val loss: 1.9340 | Acc@1: 75.9720 | Acc@5: 92.7920
[08-08 11:31:11] - Epoch 86: best loss improved from 1.9366 to 1.9340
[08-08 11:31:11] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [13:04, 3.19it/s, Acc@1=82.826, Acc@5=94.329, Loss=1.7408]
Epoch 87/90. validating: 101it [00:24, 4.11it/s, Acc@1=79.704, Acc@5=94.880, Loss=1.7735]
[08-08 11:44:40] - Train loss: 1.7409 | Acc@1: 82.8435 | Acc@5: 94.3096
[08-08 11:44:40] - Val loss: 1.9317 | Acc@1: 75.9680 | Acc@5: 92.8780
[08-08 11:44:40] - Epoch 87: best loss improved from 1.9340 to 1.9317
[08-08 11:44:40] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [12:51, 3.25it/s, Acc@1=82.918, Acc@5=94.294, Loss=1.7392]
Epoch 88/90. validating: 101it [00:20, 5.05it/s, Acc@1=72.320, Acc@5=90.908, Loss=2.0905]
[08-08 11:57:52] - Train loss: 1.7392 | Acc@1: 82.9118 | Acc@5: 94.2970
[08-08 11:57:52] - Val loss: 1.9315 | Acc@1: 76.0240 | Acc@5: 92.8440
[08-08 11:57:52] - Epoch 88: best loss improved from 1.9317 to 1.9315
[08-08 11:57:52] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [12:48, 3.26it/s, Acc@1=83.049, Acc@5=94.341, Loss=1.7353]
Epoch 89/90. validating: 101it [00:20, 4.92it/s, Acc@1=79.672, Acc@5=94.824, Loss=1.7720]
[08-08 12:11:02] - Train loss: 1.7341 | Acc@1: 83.0485 | Acc@5: 94.3593
[08-08 12:11:02] - Val loss: 1.9311 | Acc@1: 75.9920 | Acc@5: 92.8540
[08-08 12:11:02] - Epoch 89: best loss improved from 1.9315 to 1.9311
[08-08 12:11:02] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [12:47, 3.26it/s, Acc@1=83.063, Acc@5=94.343, Loss=1.7339]
Epoch 90/90. validating: 101it [00:20, 5.04it/s, Acc@1=72.284, Acc@5=90.932, Loss=2.0908]
[08-08 12:24:10] - Train loss: 1.7339 | Acc@1: 83.0634 | Acc@5: 94.3575
[08-08 12:24:10] - Val loss: 1.9314 | Acc@1: 75.9560 | Acc@5: 92.8920
[08-08 12:24:10] - Acc@1 75.956 Acc@5 92.892
[08-08 12:24:10] - Total time: 19h 51.5m
ResNet with no residual in stride 2 block + EMA + color twist
Epoch 85/90. training: 2504it [12:35, 3.31it/s, Acc@1=81.405, Acc@5=93.703, Loss=1.7885]
Epoch 85/90. validating: 101it [00:20, 5.01it/s, Acc@1=80.180, Acc@5=95.024, Loss=1.7578]
[08-10 01:40:39] - Train loss: 1.7873 | Acc@1: 81.4761 | Acc@5: 93.7253
[08-10 01:40:39] - Val loss: 1.9184 | Acc@1: 76.3980 | Acc@5: 92.9700
[08-10 01:40:39] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [12:36, 3.31it/s, Acc@1=81.765, Acc@5=93.854, Loss=1.7787]
Epoch 86/90. validating: 101it [00:19, 5.13it/s, Acc@1=72.552, Acc@5=90.928, Loss=2.0778]
[08-10 01:53:36] - Train loss: 1.7790 | Acc@1: 81.7623 | Acc@5: 93.8478
[08-10 01:53:36] - Val loss: 1.9177 | Acc@1: 76.3740 | Acc@5: 92.9820
[08-10 01:53:36] - Epoch 86: best loss improved from 1.9179 to 1.9177
[08-10 01:53:36] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [12:37, 3.31it/s, Acc@1=81.771, Acc@5=93.871, Loss=1.7770]
Epoch 87/90. validating: 101it [00:20, 5.00it/s, Acc@1=80.200, Acc@5=95.028, Loss=1.7574]
[08-10 02:06:34] - Train loss: 1.7770 | Acc@1: 81.7953 | Acc@5: 93.8757
[08-10 02:06:34] - Val loss: 1.9174 | Acc@1: 76.3880 | Acc@5: 93.0200
[08-10 02:06:34] - Epoch 87: best loss improved from 1.9177 to 1.9174
[08-10 02:06:34] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [12:37, 3.31it/s, Acc@1=81.940, Acc@5=93.872, Loss=1.7742]
Epoch 88/90. validating: 101it [00:19, 5.11it/s, Acc@1=72.500, Acc@5=91.008, Loss=2.0770]
[08-10 02:19:32] - Train loss: 1.7745 | Acc@1: 81.9126 | Acc@5: 93.8782
[08-10 02:19:32] - Val loss: 1.9173 | Acc@1: 76.3600 | Acc@5: 93.0380
[08-10 02:19:32] - Epoch 88: best loss improved from 1.9174 to 1.9173
[08-10 02:19:32] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [12:35, 3.31it/s, Acc@1=82.016, Acc@5=93.939, Loss=1.7702]
Epoch 89/90. validating: 101it [00:20, 4.99it/s, Acc@1=80.232, Acc@5=95.052, Loss=1.7574]
[08-10 02:32:28] - Train loss: 1.7692 | Acc@1: 82.0163 | Acc@5: 93.9560
[08-10 02:32:28] - Val loss: 1.9173 | Acc@1: 76.3860 | Acc@5: 93.0080
[08-10 02:32:28] - Epoch 89: best loss improved from 1.9173 to 1.9173
[08-10 02:32:29] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [12:36, 3.31it/s, Acc@1=82.042, Acc@5=93.938, Loss=1.7694]
Epoch 90/90. validating: 101it [00:19, 5.14it/s, Acc@1=72.532, Acc@5=90.976, Loss=2.0769]
[08-10 02:45:25] - Train loss: 1.7694 | Acc@1: 82.0458 | Acc@5: 93.9409
[08-10 02:45:25] - Val loss: 1.9171 | Acc@1: 76.3740 | Acc@5: 93.0020
[08-10 02:45:25] - Epoch 90: best loss improved from 1.9173 to 1.9171
[08-10 02:45:26] - Acc@1 76.374 Acc@5 93.002
[08-10 02:45:26] - Total time: 19h 34.0m
- Linear Bottleneck ResNet50
ResNet with no residual in stride 2 block and no last activation
To be added
Conclusions after the three experiments above: no residual in stride 2 block gives a noticeable speedup without hurting quality. Definitely worth keeping.
- Resnet34-50 (?). Bottleneck ratio=1 (so it's not even a bottleneck, really). Layer counts as in resnet50, filter counts as in resnet34. No residual in stride=2 blocks.
Resnet34-50 with no residual in stride 2 block
Epoch 85/90. training: 2504it [08:52, 4.70it/s, Acc@1=77.790, Acc@5=92.127, Loss=1.9207]
Epoch 85/90. validating: 101it [00:16, 6.03it/s, Acc@1=78.988, Acc@5=94.436, Loss=1.8038]
[08-10 06:15:20] - Train loss: 1.9197 | Acc@1: 77.8283 | Acc@5: 92.1667
[08-10 06:15:20] - Val loss: 1.9649 | Acc@1: 75.1400 | Acc@5: 92.3740
[08-10 06:15:20] - Epoch 85: best loss improved from 1.9655 to 1.9649
[08-10 06:15:20] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [08:53, 4.70it/s, Acc@1=77.991, Acc@5=92.282, Loss=1.9129]
Epoch 86/90. validating: 101it [00:16, 6.17it/s, Acc@1=71.284, Acc@5=90.276, Loss=2.1250]
[08-10 06:24:30] - Train loss: 1.9128 | Acc@1: 77.9938 | Acc@5: 92.2696
[08-10 06:24:30] - Val loss: 1.9643 | Acc@1: 75.1600 | Acc@5: 92.3340
[08-10 06:24:30] - Epoch 86: best loss improved from 1.9649 to 1.9643
[08-10 06:24:30] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [09:12, 4.53it/s, Acc@1=78.049, Acc@5=92.307, Loss=1.9109]
Epoch 87/90. validating: 101it [00:16, 5.95it/s, Acc@1=79.016, Acc@5=94.400, Loss=1.8031]
[08-10 06:33:59] - Train loss: 1.9110 | Acc@1: 78.0691 | Acc@5: 92.3038
[08-10 06:33:59] - Val loss: 1.9641 | Acc@1: 75.1840 | Acc@5: 92.3540
[08-10 06:33:59] - Epoch 87: best loss improved from 1.9643 to 1.9641
[08-10 06:34:00] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [09:18, 4.49it/s, Acc@1=78.217, Acc@5=92.330, Loss=1.9079]
Epoch 88/90. validating: 101it [00:16, 6.14it/s, Acc@1=71.360, Acc@5=90.296, Loss=2.1245]
[08-10 06:43:35] - Train loss: 1.9081 | Acc@1: 78.1746 | Acc@5: 92.3133
[08-10 06:43:35] - Val loss: 1.9638 | Acc@1: 75.1600 | Acc@5: 92.3460
[08-10 06:43:35] - Epoch 88: best loss improved from 1.9641 to 1.9638
[08-10 06:43:35] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [09:15, 4.51it/s, Acc@1=78.238, Acc@5=92.371, Loss=1.9043]
Epoch 89/90. validating: 101it [00:17, 5.92it/s, Acc@1=78.980, Acc@5=94.412, Loss=1.8028]
[08-10 06:53:07] - Train loss: 1.9031 | Acc@1: 78.2649 | Acc@5: 92.3855
[08-10 06:53:07] - Val loss: 1.9636 | Acc@1: 75.1500 | Acc@5: 92.3660
[08-10 06:53:07] - Epoch 89: best loss improved from 1.9638 to 1.9636
[08-10 06:53:08] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [09:13, 4.52it/s, Acc@1=78.295, Acc@5=92.367, Loss=1.9017]
Epoch 90/90. validating: 101it [00:16, 6.13it/s, Acc@1=71.360, Acc@5=90.352, Loss=2.1242]
[08-10 07:02:38] - Train loss: 1.9025 | Acc@1: 78.2674 | Acc@5: 92.3671
[08-10 07:02:38] - Val loss: 1.9634 | Acc@1: 75.1700 | Acc@5: 92.3840
[08-10 07:02:38] - Epoch 90: best loss improved from 1.9636 to 1.9634
[08-10 07:02:39] - Acc@1 75.170 Acc@5 92.384
[08-10 07:02:39] - Total time: 20h 45.5m - the training time is off: another process crept in at some point and slowed everything down a lot. This network is roughly 70% faster than the R50 version above and trains about 25% faster.
Resnet34-50 with no residual in stride 2 block and no last activation
Epoch 85/90. training: 2504it [24:46, 1.68it/s, Acc@1=77.986, Acc@5=92.217, Loss=1.9219]
Epoch 85/90. validating: 101it [00:16, 6.06it/s, Acc@1=79.276, Acc@5=94.524, Loss=1.7990]
[08-10 23:36:29] - Train loss: 1.9211 | Acc@1: 78.0078 | Acc@5: 92.2317
[08-10 23:36:29] - Val loss: 1.9565 | Acc@1: 75.5320 | Acc@5: 92.4920
[08-10 23:36:29] - Epoch 85: best loss improved from 1.9571 to 1.9565
[08-10 23:36:30] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [24:46, 1.68it/s, Acc@1=78.235, Acc@5=92.365, Loss=1.9132]
Epoch 86/90. validating: 101it [00:16, 6.28it/s, Acc@1=71.816, Acc@5=90.452, Loss=2.1128]
[08-11 00:01:50] - Train loss: 1.9136 | Acc@1: 78.1908 | Acc@5: 92.3697
[08-11 00:01:50] - Val loss: 1.9558 | Acc@1: 75.5520 | Acc@5: 92.5000
[08-11 00:01:50] - Epoch 86: best loss improved from 1.9565 to 1.9558
[08-11 00:01:50] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [25:00, 1.67it/s, Acc@1=78.175, Acc@5=92.325, Loss=1.9133]
Epoch 87/90. validating: 101it [00:16, 6.10it/s, Acc@1=79.208, Acc@5=94.524, Loss=1.7980]
[08-11 00:27:25] - Train loss: 1.9127 | Acc@1: 78.2190 | Acc@5: 92.3324
[08-11 00:27:25] - Val loss: 1.9555 | Acc@1: 75.4840 | Acc@5: 92.4620
[08-11 00:27:25] - Epoch 87: best loss improved from 1.9558 to 1.9555
[08-11 00:27:25] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [24:58, 1.67it/s, Acc@1=78.341, Acc@5=92.368, Loss=1.9095]
Epoch 88/90. validating: 101it [00:16, 6.30it/s, Acc@1=71.756, Acc@5=90.412, Loss=2.1128]
[08-11 00:52:58] - Train loss: 1.9095 | Acc@1: 78.3341 | Acc@5: 92.3745
[08-11 00:52:58] - Val loss: 1.9554 | Acc@1: 75.4780 | Acc@5: 92.4700
[08-11 00:52:58] - Epoch 88: best loss improved from 1.9555 to 1.9554
[08-11 00:52:58] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [24:54, 1.68it/s, Acc@1=78.476, Acc@5=92.434, Loss=1.9045]
Epoch 89/90. validating: 101it [00:16, 6.12it/s, Acc@1=79.208, Acc@5=94.548, Loss=1.7978]
[08-11 01:18:26] - Train loss: 1.9039 | Acc@1: 78.4616 | Acc@5: 92.4578
[08-11 01:18:26] - Val loss: 1.9552 | Acc@1: 75.4920 | Acc@5: 92.4840
[08-11 01:18:26] - Epoch 89: best loss improved from 1.9554 to 1.9552
[08-11 01:18:26] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [24:53, 1.68it/s, Acc@1=78.474, Acc@5=92.445, Loss=1.9037]
Epoch 90/90. validating: 101it [00:16, 6.31it/s, Acc@1=71.796, Acc@5=90.432, Loss=2.1124]
[08-11 01:43:54] - Train loss: 1.9035 | Acc@1: 78.4825 | Acc@5: 92.4667
[08-11 01:43:54] - Val loss: 1.9551 | Acc@1: 75.5240 | Acc@5: 92.4880
[08-11 01:43:54] - Epoch 90: best loss improved from 1.9552 to 1.9551
[08-11 01:43:55] - Acc@1 75.524 Acc@5 92.488
[08-11 01:43:55] - Total time: 18h 28.6m - works slightly better
- Timm Darknet53
Timm Darknet53
Epoch 85/90. training: 2504it [18:44, 2.23it/s, Acc@1=81.288, Acc@5=93.484, Loss=1.7823]
Epoch 85/90. validating: 101it [00:25, 3.97it/s, Acc@1=79.584, Acc@5=94.548, Loss=1.7704]
[08-11 08:05:23] - Train loss: 1.7811 | Acc@1: 81.3370 | Acc@5: 93.5267
[08-11 08:05:23] - Val loss: 1.9384 | Acc@1: 75.6280 | Acc@5: 92.4500
[08-11 08:05:23] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [18:41, 2.23it/s, Acc@1=81.598, Acc@5=93.667, Loss=1.7716]
Epoch 86/90. validating: 101it [00:24, 4.09it/s, Acc@1=71.724, Acc@5=90.304, Loss=2.1059]
[08-11 08:24:30] - Train loss: 1.7718 | Acc@1: 81.5709 | Acc@5: 93.6734
[08-11 08:24:30] - Val loss: 1.9384 | Acc@1: 75.6640 | Acc@5: 92.3720
[08-11 08:24:30] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [18:42, 2.23it/s, Acc@1=81.625, Acc@5=93.685, Loss=1.7700]
Epoch 87/90. validating: 101it [00:24, 4.05it/s, Acc@1=79.596, Acc@5=94.504, Loss=1.7705]
[08-11 08:43:37] - Train loss: 1.7697 | Acc@1: 81.6773 | Acc@5: 93.6703
[08-11 08:43:37] - Val loss: 1.9389 | Acc@1: 75.7260 | Acc@5: 92.4320
[08-11 08:43:37] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [18:42, 2.23it/s, Acc@1=81.805, Acc@5=93.640, Loss=1.7675]
Epoch 88/90. validating: 101it [00:24, 4.18it/s, Acc@1=71.796, Acc@5=90.332, Loss=2.1073]
[08-11 09:02:44] - Train loss: 1.7675 | Acc@1: 81.7973 | Acc@5: 93.6588
[08-11 09:02:44] - Val loss: 1.9390 | Acc@1: 75.7140 | Acc@5: 92.4320
[08-11 09:02:44] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [18:42, 2.23it/s, Acc@1=81.873, Acc@5=93.714, Loss=1.7628]
Epoch 89/90. validating: 101it [00:24, 4.04it/s, Acc@1=79.620, Acc@5=94.536, Loss=1.7704]
[08-11 09:21:52] - Train loss: 1.7612 | Acc@1: 81.9164 | Acc@5: 93.7516
[08-11 09:21:52] - Val loss: 1.9389 | Acc@1: 75.6940 | Acc@5: 92.4320
[08-11 09:21:52] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [18:48, 2.22it/s, Acc@1=81.924, Acc@5=93.754, Loss=1.7609]
Epoch 90/90. validating: 101it [00:24, 4.06it/s, Acc@1=71.772, Acc@5=90.332, Loss=2.1073]
[08-11 09:41:07] - Train loss: 1.7611 | Acc@1: 81.9245 | Acc@5: 93.7698
[08-11 09:41:07] - Val loss: 1.9388 | Acc@1: 75.6980 | Acc@5: 92.4320
[08-11 09:41:08] - Acc@1 75.698 Acc@5 92.432
[08-11 09:41:08] - Total time: 28h 35.6m
Performs on par with my resnet34-50, except this one has 41M parameters and is noticeably slower
- Timm CSPDarknet53
Timm CSPDarknet53
Epoch 85/90. training: 2504it [20:22, 2.05it/s, Acc@1=80.479, Acc@5=93.336, Loss=1.8029]
Epoch 85/90. validating: 101it [00:25, 4.03it/s, Acc@1=80.496, Acc@5=95.296, Loss=1.7263]
[08-11 10:38:27] - Train loss: 1.8021 | Acc@1: 80.5142 | Acc@5: 93.3400
[08-11 10:38:27] - Val loss: 1.8755 | Acc@1: 76.9200 | Acc@5: 93.3800
[08-11 10:38:27] - Epoch 85: best loss improved from 1.8756 to 1.8755
[08-11 10:38:28] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [20:11, 2.07it/s, Acc@1=80.719, Acc@5=93.427, Loss=1.7948]
Epoch 86/90. validating: 101it [00:24, 4.18it/s, Acc@1=73.404, Acc@5=91.536, Loss=2.0234]
[08-11 10:59:04] - Train loss: 1.7947 | Acc@1: 80.6935 | Acc@5: 93.4401
[08-11 10:59:04] - Val loss: 1.8747 | Acc@1: 76.9680 | Acc@5: 93.4300
[08-11 10:59:04] - Epoch 86: best loss improved from 1.8755 to 1.8747
[08-11 10:59:05] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [20:11, 2.07it/s, Acc@1=80.693, Acc@5=93.454, Loss=1.7931]
Epoch 87/90. validating: 101it [00:25, 3.96it/s, Acc@1=80.552, Acc@5=95.280, Loss=1.7256]
[08-11 11:19:42] - Train loss: 1.7932 | Acc@1: 80.7234 | Acc@5: 93.4479
[08-11 11:19:42] - Val loss: 1.8745 | Acc@1: 76.9480 | Acc@5: 93.3660
[08-11 11:19:42] - Epoch 87: best loss improved from 1.8747 to 1.8745
[08-11 11:19:42] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [20:09, 2.07it/s, Acc@1=80.851, Acc@5=93.434, Loss=1.7902]
Epoch 88/90. validating: 101it [00:24, 4.15it/s, Acc@1=73.400, Acc@5=91.504, Loss=2.0231]
[08-11 11:40:16] - Train loss: 1.7909 | Acc@1: 80.8035 | Acc@5: 93.4390
[08-11 11:40:16] - Val loss: 1.8743 | Acc@1: 76.9840 | Acc@5: 93.4060
[08-11 11:40:16] - Epoch 88: best loss improved from 1.8745 to 1.8743
[08-11 11:40:17] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [20:14, 2.06it/s, Acc@1=80.889, Acc@5=93.497, Loss=1.7869]
Epoch 89/90. validating: 101it [00:25, 3.96it/s, Acc@1=80.560, Acc@5=95.300, Loss=1.7253]
[08-11 12:00:57] - Train loss: 1.7857 | Acc@1: 80.9323 | Acc@5: 93.5143
[08-11 12:00:57] - Val loss: 1.8743 | Acc@1: 76.9800 | Acc@5: 93.4000
[08-11 12:00:57] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [20:39, 2.02it/s, Acc@1=80.941, Acc@5=93.536, Loss=1.7845]
Epoch 90/90. validating: 101it [00:26, 3.82it/s, Acc@1=73.416, Acc@5=91.504, Loss=2.0233]
[08-11 12:22:04] - Train loss: 1.7849 | Acc@1: 80.9294 | Acc@5: 93.5404
[08-11 12:22:04] - Val loss: 1.8744 | Acc@1: 76.9800 | Acc@5: 93.4000
[08-11 12:22:05] - Acc@1 76.980 Acc@5 93.400
[08-11 12:22:05] - Total time: 31h 15.6m
Works noticeably better than plain DarkNet with fewer parameters. Trains a bit slower
- Preact Resnet34-50
PreAct Resnet34-50 + no bias in last conv in block
Epoch 85/90. training: 2504it [10:29, 3.98it/s, Acc@1=77.720, Acc@5=92.113, Loss=1.9219]
Epoch 85/90. validating: 101it [00:19, 5.28it/s, Acc@1=78.692, Acc@5=94.420, Loss=1.8053]
[08-12 03:34:26] - Train loss: 1.9207 | Acc@1: 77.7639 | Acc@5: 92.1334
[08-12 03:34:26] - Val loss: 1.9669 | Acc@1: 75.0200 | Acc@5: 92.3200
[08-12 03:34:26] - Epoch 85: best loss improved from 1.9677 to 1.9669
[08-12 03:34:26] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [10:26, 3.99it/s, Acc@1=78.003, Acc@5=92.219, Loss=1.9133]
Epoch 86/90. validating: 101it [00:19, 5.15it/s, Acc@1=71.368, Acc@5=90.268, Loss=2.1277]
[08-12 03:45:13] - Train loss: 1.9134 | Acc@1: 77.9695 | Acc@5: 92.2331
[08-12 03:45:13] - Val loss: 1.9661 | Acc@1: 75.0540 | Acc@5: 92.3300
[08-12 03:45:13] - Epoch 86: best loss improved from 1.9669 to 1.9661
[08-12 03:45:14] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [10:32, 3.96it/s, Acc@1=77.961, Acc@5=92.273, Loss=1.9115]
Epoch 87/90. validating: 101it [00:18, 5.39it/s, Acc@1=78.760, Acc@5=94.392, Loss=1.8038]
[08-12 03:56:04] - Train loss: 1.9114 | Acc@1: 78.0165 | Acc@5: 92.2695
[08-12 03:56:04] - Val loss: 1.9659 | Acc@1: 75.0240 | Acc@5: 92.3140
[08-12 03:56:04] - Epoch 87: best loss improved from 1.9661 to 1.9659
[08-12 03:56:05] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [10:25, 4.00it/s, Acc@1=78.106, Acc@5=92.279, Loss=1.9086]
Epoch 88/90. validating: 101it [00:19, 5.21it/s, Acc@1=71.348, Acc@5=90.232, Loss=2.1273]
[08-12 04:06:50] - Train loss: 1.9091 | Acc@1: 78.0847 | Acc@5: 92.2814
[08-12 04:06:50] - Val loss: 1.9655 | Acc@1: 75.0880 | Acc@5: 92.3140
[08-12 04:06:50] - Epoch 88: best loss improved from 1.9659 to 1.9655
[08-12 04:06:50] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [10:23, 4.01it/s, Acc@1=78.222, Acc@5=92.356, Loss=1.9050]
Epoch 89/90. validating: 101it [00:18, 5.36it/s, Acc@1=78.864, Acc@5=94.408, Loss=1.8035]
[08-12 04:17:33] - Train loss: 1.9035 | Acc@1: 78.2516 | Acc@5: 92.3837
[08-12 04:17:33] - Val loss: 1.9654 | Acc@1: 75.1220 | Acc@5: 92.3580
[08-12 04:17:33] - Epoch 89: best loss improved from 1.9655 to 1.9654
[08-12 04:17:33] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [10:33, 3.95it/s, Acc@1=78.261, Acc@5=92.377, Loss=1.9032]
Epoch 90/90. validating: 101it [00:18, 5.60it/s, Acc@1=71.404, Acc@5=90.304, Loss=2.1271]
[08-12 04:28:26] - Train loss: 1.9036 | Acc@1: 78.2454 | Acc@5: 92.3815
[08-12 04:28:26] - Val loss: 1.9653 | Acc@1: 75.1220 | Acc@5: 92.3520
[08-12 04:28:26] - Epoch 90: best loss improved from 1.9654 to 1.9653
[08-12 04:28:27] - Acc@1 75.122 Acc@5 92.352
[08-12 04:28:27] - Total time: 16h 23.1m
Works slightly worse than Resnet34-50 + Linear Bottleneck.
I'll add bias to the last conv layer and repeat
With bias in last conv
Epoch 85/90. training: 2504it [10:42, 3.90it/s, Acc@1=77.690, Acc@5=92.111, Loss=1.9217]
Epoch 85/90. validating: 101it [00:22, 4.56it/s, Acc@1=78.724, Acc@5=94.408, Loss=1.8059]
[08-12 22:34:47] - Train loss: 1.9207 | Acc@1: 77.7494 | Acc@5: 92.1224
[08-12 22:34:47] - Val loss: 1.9684 | Acc@1: 75.0120 | Acc@5: 92.2920
[08-12 22:34:47] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [10:44, 3.89it/s, Acc@1=77.986, Acc@5=92.229, Loss=1.9141]
Epoch 86/90. validating: 101it [00:19, 5.06it/s, Acc@1=71.340, Acc@5=90.120, Loss=2.1296]
[08-12 22:45:52] - Train loss: 1.9136 | Acc@1: 77.9766 | Acc@5: 92.2491
[08-12 22:45:52] - Val loss: 1.9674 | Acc@1: 75.1000 | Acc@5: 92.2820
[08-12 22:45:52] - Epoch 86: best loss improved from 1.9683 to 1.9674
[08-12 22:45:53] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [10:48, 3.86it/s, Acc@1=77.944, Acc@5=92.250, Loss=1.9120]
Epoch 87/90. validating: 101it [00:21, 4.60it/s, Acc@1=78.804, Acc@5=94.412, Loss=1.8044]
[08-12 22:57:04] - Train loss: 1.9117 | Acc@1: 78.0041 | Acc@5: 92.2757
[08-12 22:57:04] - Val loss: 1.9669 | Acc@1: 75.0260 | Acc@5: 92.2980
[08-12 22:57:04] - Epoch 87: best loss improved from 1.9674 to 1.9669
[08-12 22:57:04] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [10:44, 3.89it/s, Acc@1=78.149, Acc@5=92.276, Loss=1.9089]
Epoch 88/90. validating: 101it [00:19, 5.26it/s, Acc@1=71.272, Acc@5=90.180, Loss=2.1293]
[08-12 23:08:08] - Train loss: 1.9093 | Acc@1: 78.1127 | Acc@5: 92.2774
[08-12 23:08:08] - Val loss: 1.9669 | Acc@1: 75.0640 | Acc@5: 92.3160
[08-12 23:08:08] - Epoch 88: best loss improved from 1.9669 to 1.9669
[08-12 23:08:08] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [10:54, 3.82it/s, Acc@1=78.189, Acc@5=92.305, Loss=1.9057]
Epoch 89/90. validating: 101it [00:21, 4.73it/s, Acc@1=78.936, Acc@5=94.432, Loss=1.8040]
[08-12 23:19:25] - Train loss: 1.9044 | Acc@1: 78.2006 | Acc@5: 92.3418
[08-12 23:19:25] - Val loss: 1.9668 | Acc@1: 75.1080 | Acc@5: 92.3060
[08-12 23:19:25] - Epoch 89: best loss improved from 1.9669 to 1.9668
[08-12 23:19:25] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [11:01, 3.79it/s, Acc@1=78.248, Acc@5=92.348, Loss=1.9034]
Epoch 90/90. validating: 101it [00:20, 4.95it/s, Acc@1=71.288, Acc@5=90.228, Loss=2.1293]
[08-12 23:30:47] - Train loss: 1.9034 | Acc@1: 78.2496 | Acc@5: 92.3634
[08-12 23:30:47] - Val loss: 1.9666 | Acc@1: 75.1080 | Acc@5: 92.3280
[08-12 23:30:47] - Epoch 90: best loss improved from 1.9668 to 1.9666
[08-12 23:30:49] - Acc@1 75.108 Acc@5 92.328
[08-12 23:30:49] - Total time: 16h 28.6m
Turned out even a touch worse than without bias
- PreAct Resnet34-50 + space2depth + stride 2
PreAct Resnet34-50 + space2depth + stride 2 no bias in last conv in block
Epoch 85/90. training: 2504it [11:15, 3.71it/s, Acc@1=78.280, Acc@5=92.398, Loss=1.8987]
Epoch 85/90. validating: 101it [00:19, 5.11it/s, Acc@1=79.512, Acc@5=94.752, Loss=1.7871]
[08-12 07:01:41] - Train loss: 1.8980 | Acc@1: 78.3003 | Acc@5: 92.4126
[08-12 07:01:41] - Val loss: 1.9460 | Acc@1: 75.5800 | Acc@5: 92.6040
[08-12 07:01:41] - Epoch 85: best loss improved from 1.9464 to 1.9460
[08-12 07:01:41] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [11:44, 3.56it/s, Acc@1=78.551, Acc@5=92.501, Loss=1.8907]
Epoch 86/90. validating: 101it [00:21, 4.68it/s, Acc@1=71.684, Acc@5=90.604, Loss=2.1036]
[08-12 07:13:47] - Train loss: 1.8908 | Acc@1: 78.5276 | Acc@5: 92.5174
[08-12 07:13:47] - Val loss: 1.9452 | Acc@1: 75.5780 | Acc@5: 92.6720
[08-12 07:13:47] - Epoch 86: best loss improved from 1.9460 to 1.9452
[08-12 07:13:47] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [11:49, 3.53it/s, Acc@1=78.531, Acc@5=92.541, Loss=1.8888]
Epoch 87/90. validating: 101it [00:21, 4.73it/s, Acc@1=79.404, Acc@5=94.716, Loss=1.7858]
[08-12 07:25:58] - Train loss: 1.8885 | Acc@1: 78.5534 | Acc@5: 92.5353
[08-12 07:25:58] - Val loss: 1.9450 | Acc@1: 75.5000 | Acc@5: 92.6180
[08-12 07:25:58] - Epoch 87: best loss improved from 1.9452 to 1.9450
[08-12 07:25:58] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [11:50, 3.53it/s, Acc@1=78.655, Acc@5=92.528, Loss=1.8879]
Epoch 88/90. validating: 101it [00:20, 4.94it/s, Acc@1=71.592, Acc@5=90.524, Loss=2.1038]
[08-12 07:38:09] - Train loss: 1.8881 | Acc@1: 78.6161 | Acc@5: 92.5393
[08-12 07:38:09] - Val loss: 1.9447 | Acc@1: 75.5060 | Acc@5: 92.6360
[08-12 07:38:09] - Epoch 88: best loss improved from 1.9450 to 1.9447
[08-12 07:38:10] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [11:58, 3.49it/s, Acc@1=78.760, Acc@5=92.623, Loss=1.8820]
Epoch 89/90. validating: 101it [00:23, 4.38it/s, Acc@1=79.444, Acc@5=94.716, Loss=1.7855]
[08-12 07:50:31] - Train loss: 1.8816 | Acc@1: 78.7766 | Acc@5: 92.6388
[08-12 07:50:31] - Val loss: 1.9446 | Acc@1: 75.5220 | Acc@5: 92.6180
[08-12 07:50:31] - Epoch 89: best loss improved from 1.9447 to 1.9446
[08-12 07:50:31] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [11:57, 3.49it/s, Acc@1=78.766, Acc@5=92.631, Loss=1.8809]
Epoch 90/90. validating: 101it [00:24, 4.08it/s, Acc@1=71.568, Acc@5=90.512, Loss=2.1037]
[08-12 08:02:55] - Train loss: 1.8813 | Acc@1: 78.7659 | Acc@5: 92.6370
[08-12 08:02:55] - Val loss: 1.9446 | Acc@1: 75.5140 | Acc@5: 92.6240
[08-12 08:02:55] - Epoch 90: best loss improved from 1.9446 to 1.9446
[08-12 08:02:56] - Acc@1 75.514 Acc@5 92.624
[08-12 08:02:56] - Total time: 18h 14.7m
The Space To Depth stem works noticeably better than the default one. The quality of the pre-activation version becomes the same as the regular one's. It looks like the linear bottleneck idea works better than pre-activation; one likely reason is fewer activations in the main path. Dropping the pre-activation idea and switching to linear bottleneck only from here on. (Stem sketch below.)
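For reference, a space-to-depth stem is tiny (a sketch; block size 4 mimics the TResNet-style stem, and `nn.PixelUnshuffle` does the rearrangement):

```python
import torch.nn as nn

def s2d_stem(out_channels: int, block: int = 4):
    # rearrange each block x block pixel patch into channels, then one conv,
    # instead of the usual 7x7/stride-2 conv + maxpool stem
    return nn.Sequential(
        nn.PixelUnshuffle(block),  # 3 -> 3 * block**2 channels
        nn.Conv2d(3 * block ** 2, out_channels, 3, padding=1, bias=False),
        nn.BatchNorm2d(out_channels))
```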
- PreAct Resnet34-50 + s2d + stride 2 + groups=16
PreAct Resnet34-50 + s2d + stride 2 + groups=16
Epoch 85/90. training: 2504it [15:32, 2.69it/s, Acc@1=76.150, Acc@5=91.373, Loss=1.9929]
Epoch 85/90. validating: 101it [00:18, 5.38it/s, Acc@1=77.780, Acc@5=94.052, Loss=1.8557]
[08-12 14:43:45] - Train loss: 1.9908 | Acc@1: 76.2302 | Acc@5: 91.4008
[08-12 14:43:45] - Val loss: 2.0190 | Acc@1: 74.0760 | Acc@5: 91.7280
[08-12 14:43:45] - Epoch 85: best loss improved from 2.0192 to 2.0190
[08-12 14:43:45] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [15:33, 2.68it/s, Acc@1=76.406, Acc@5=91.519, Loss=1.9835]
Epoch 86/90. validating: 101it [00:18, 5.58it/s, Acc@1=70.412, Acc@5=89.500, Loss=2.1812]
[08-12 14:59:37] - Train loss: 1.9841 | Acc@1: 76.4197 | Acc@5: 91.5135
[08-12 14:59:37] - Val loss: 2.0179 | Acc@1: 74.1400 | Acc@5: 91.7400
[08-12 14:59:37] - Epoch 86: best loss improved from 2.0190 to 2.0179
[08-12 14:59:37] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [15:45, 2.65it/s, Acc@1=76.465, Acc@5=91.546, Loss=1.9829]
Epoch 87/90. validating: 101it [00:18, 5.39it/s, Acc@1=77.896, Acc@5=94.056, Loss=1.8537]
[08-12 15:15:42] - Train loss: 1.9829 | Acc@1: 76.4750 | Acc@5: 91.5496
[08-12 15:15:42] - Val loss: 2.0175 | Acc@1: 74.1600 | Acc@5: 91.7620
[08-12 15:15:42] - Epoch 87: best loss improved from 2.0179 to 2.0175
[08-12 15:15:42] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [15:33, 2.68it/s, Acc@1=76.547, Acc@5=91.537, Loss=1.9806]
Epoch 88/90. validating: 101it [00:18, 5.54it/s, Acc@1=70.472, Acc@5=89.436, Loss=2.1809]
[08-12 15:31:34] - Train loss: 1.9809 | Acc@1: 76.5281 | Acc@5: 91.5374
[08-12 15:31:34] - Val loss: 2.0174 | Acc@1: 74.1620 | Acc@5: 91.7320
[08-12 15:31:34] - Epoch 88: best loss improved from 2.0175 to 2.0174
[08-12 15:31:34] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [15:22, 2.71it/s, Acc@1=76.641, Acc@5=91.598, Loss=1.9768]
Epoch 89/90. validating: 101it [00:18, 5.41it/s, Acc@1=77.836, Acc@5=94.064, Loss=1.8537]
[08-12 15:47:15] - Train loss: 1.9755 | Acc@1: 76.6587 | Acc@5: 91.6229
[08-12 15:47:15] - Val loss: 2.0173 | Acc@1: 74.1680 | Acc@5: 91.7560
[08-12 15:47:15] - Epoch 89: best loss improved from 2.0174 to 2.0173
[08-12 15:47:15] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [15:23, 2.71it/s, Acc@1=76.703, Acc@5=91.625, Loss=1.9751]
Epoch 90/90. validating: 101it [00:18, 5.54it/s, Acc@1=70.540, Acc@5=89.468, Loss=2.1806]
[08-12 16:02:57] - Train loss: 1.9749 | Acc@1: 76.7064 | Acc@5: 91.6282
[08-12 16:02:57] - Val loss: 2.0172 | Acc@1: 74.2000 | Acc@5: 91.7680
[08-12 16:02:57] - Epoch 90: best loss improved from 2.0173 to 2.0172
[08-12 16:02:58] - Acc@1 74.200 Acc@5 91.768
[08-12 16:02:58] - Total time: 23h 37.0m
Pretty bad, though to be fair there are only 6M parameters here. I'll shelve this branch of experiments for now and play with groups on the linear bottleneck version instead
- R34-50 noact in last conv + space2depth
This continues experiment 5. Comparing with experiment 9 shows that linear bottleneck works better.
simpl R34 noact space2depth
Epoch 85/90. training: 2504it [11:36, 3.60it/s, Acc@1=78.404, Acc@5=92.435, Loss=1.9039]
Epoch 85/90. validating: 101it [00:18, 5.35it/s, Acc@1=79.568, Acc@5=94.716, Loss=1.7837]
[08-13 04:56:52] - Train loss: 1.9032 | Acc@1: 78.4491 | Acc@5: 92.4576
[08-13 04:56:52] - Val loss: 1.9395 | Acc@1: 75.8460 | Acc@5: 92.7200
[08-13 04:56:52] - Epoch 85: best loss improved from 1.9405 to 1.9395
[08-13 04:56:53] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [11:39, 3.58it/s, Acc@1=78.646, Acc@5=92.540, Loss=1.8955]
Epoch 86/90. validating: 101it [00:18, 5.40it/s, Acc@1=72.100, Acc@5=90.696, Loss=2.0945]
[08-13 05:08:51] - Train loss: 1.8956 | Acc@1: 78.6410 | Acc@5: 92.5483
[08-13 05:08:51] - Val loss: 1.9391 | Acc@1: 75.8540 | Acc@5: 92.7380
[08-13 05:08:51] - Epoch 86: best loss improved from 1.9395 to 1.9391
[08-13 05:08:51] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [11:33, 3.61it/s, Acc@1=78.664, Acc@5=92.597, Loss=1.8936]
Epoch 87/90. validating: 101it [00:20, 4.99it/s, Acc@1=79.604, Acc@5=94.776, Loss=1.7830]
[08-13 05:20:45] - Train loss: 1.8937 | Acc@1: 78.6835 | Acc@5: 92.6029
[08-13 05:20:45] - Val loss: 1.9390 | Acc@1: 75.8440 | Acc@5: 92.7300
[08-13 05:20:45] - Epoch 87: best loss improved from 1.9391 to 1.9390
[08-13 05:20:45] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [11:34, 3.61it/s, Acc@1=78.807, Acc@5=92.566, Loss=1.8922]
Epoch 88/90. validating: 101it [00:19, 5.22it/s, Acc@1=72.056, Acc@5=90.664, Loss=2.0943]
[08-13 05:32:39] - Train loss: 1.8925 | Acc@1: 78.7634 | Acc@5: 92.5752
[08-13 05:32:39] - Val loss: 1.9386 | Acc@1: 75.8440 | Acc@5: 92.7020
[08-13 05:32:39] - Epoch 88: best loss improved from 1.9390 to 1.9386
[08-13 05:32:40] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [11:41, 3.57it/s, Acc@1=78.878, Acc@5=92.676, Loss=1.8871]
Epoch 89/90. validating: 101it [00:19, 5.21it/s, Acc@1=79.628, Acc@5=94.764, Loss=1.7828]
[08-13 05:44:40] - Train loss: 1.8864 | Acc@1: 78.8939 | Acc@5: 92.6786
[08-13 05:44:40] - Val loss: 1.9386 | Acc@1: 75.8500 | Acc@5: 92.7200
[08-13 05:44:40] - Epoch 89: best loss improved from 1.9386 to 1.9386
[08-13 05:44:41] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [11:43, 3.56it/s, Acc@1=78.922, Acc@5=92.641, Loss=1.8857]
Epoch 90/90. validating: 101it [00:20, 5.00it/s, Acc@1=72.088, Acc@5=90.668, Loss=2.0943]
[08-13 05:56:45] - Train loss: 1.8857 | Acc@1: 78.9320 | Acc@5: 92.6633
[08-13 05:56:45] - Val loss: 1.9385 | Acc@1: 75.8560 | Acc@5: 92.7180
[08-13 05:56:45] - Epoch 90: best loss improved from 1.9386 to 1.9385
[08-13 05:56:46] - Acc@1 75.856 Acc@5 92.718
[08-13 05:56:46] - Total time: 18h 33.7m
- ResNet 34 noact s2d + groups 16. 6.08M parameters.
ResNet 34 noact s2d + groups 16
Epoch 85/90. training: 2504it [17:50, 2.34it/s, Acc@1=76.085, Acc@5=91.208, Loss=2.0204]
Epoch 85/90. validating: 101it [00:23, 4.34it/s, Acc@1=78.220, Acc@5=94.012, Loss=1.8706]
[08-14 10:57:53] - Train loss: 2.0194 | Acc@1: 76.1246 | Acc@5: 91.2158
[08-14 10:57:53] - Val loss: 2.0323 | Acc@1: 74.2680 | Acc@5: 91.6980
[08-14 10:57:53] - Epoch 85: best loss improved from 2.0325 to 2.0323
[08-14 10:57:53] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [17:39, 2.36it/s, Acc@1=76.348, Acc@5=91.314, Loss=2.0118]
Epoch 86/90. validating: 101it [00:23, 4.28it/s, Acc@1=70.368, Acc@5=89.448, Loss=2.1930]
[08-14 11:15:57] - Train loss: 2.0121 | Acc@1: 76.3121 | Acc@5: 91.3243
[08-14 11:15:57] - Val loss: 2.0314 | Acc@1: 74.3040 | Acc@5: 91.7000
[08-14 11:15:57] - Epoch 86: best loss improved from 2.0323 to 2.0314
[08-14 11:15:57] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [17:28, 2.39it/s, Acc@1=76.317, Acc@5=91.387, Loss=2.0107]
Epoch 87/90. validating: 101it [00:23, 4.28it/s, Acc@1=78.224, Acc@5=93.980, Loss=1.8695]
[08-14 11:33:49] - Train loss: 2.0105 | Acc@1: 76.3668 | Acc@5: 91.3731
[08-14 11:33:49] - Val loss: 2.0312 | Acc@1: 74.2940 | Acc@5: 91.7080
[08-14 11:33:49] - Epoch 87: best loss improved from 2.0314 to 2.0312
[08-14 11:33:50] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [17:32, 2.38it/s, Acc@1=76.464, Acc@5=91.341, Loss=2.0082]
Epoch 88/90. validating: 101it [00:22, 4.43it/s, Acc@1=70.436, Acc@5=89.428, Loss=2.1926]
[08-14 11:51:45] - Train loss: 2.0087 | Acc@1: 76.4059 | Acc@5: 91.3571
[08-14 11:51:45] - Val loss: 2.0310 | Acc@1: 74.3680 | Acc@5: 91.7060
[08-14 11:51:45] - Epoch 88: best loss improved from 2.0312 to 2.0310
[08-14 11:51:46] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [17:43, 2.35it/s, Acc@1=76.593, Acc@5=91.455, Loss=2.0034]
Epoch 89/90. validating: 101it [00:21, 4.60it/s, Acc@1=78.272, Acc@5=93.992, Loss=1.8692]
[08-14 12:09:51] - Train loss: 2.0027 | Acc@1: 76.5807 | Acc@5: 91.4581
[08-14 12:09:51] - Val loss: 2.0310 | Acc@1: 74.3660 | Acc@5: 91.6940
[08-14 12:09:51] - Epoch 89: best loss improved from 2.0310 to 2.0310
[08-14 12:09:51] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [17:03, 2.45it/s, Acc@1=76.564, Acc@5=91.436, Loss=2.0031]
Epoch 90/90. validating: 101it [00:21, 4.74it/s, Acc@1=70.440, Acc@5=89.372, Loss=2.1925]
[08-14 12:27:18] - Train loss: 2.0028 | Acc@1: 76.5911 | Acc@5: 91.4674
[08-14 12:27:18] - Val loss: 2.0308 | Acc@1: 74.3560 | Acc@5: 91.6860
[08-14 12:27:18] - Epoch 90: best loss improved from 2.0310 to 2.0308
[08-14 12:27:19] - Acc@1 74.356 Acc@5 91.686
[08-14 12:27:19] - Total time: 27h 5.4m
No conclusions from this one yet
- ResNet 34 noact s2d + groups width 16. 6.02M parameters
ResNet 34 noact s2d + groups width 16
Epoch 85/90. training: 2504it [17:49, 2.34it/s, Acc@1=76.525, Acc@5=91.436, Loss=2.0038]
Epoch 85/90. validating: 101it [00:22, 4.42it/s, Acc@1=78.780, Acc@5=94.096, Loss=1.8557]
[08-14 11:02:42] - Train loss: 2.0034 | Acc@1: 76.5725 | Acc@5: 91.4357
[08-14 11:02:43] - Val loss: 2.0149 | Acc@1: 74.8440 | Acc@5: 91.9020
[08-14 11:02:43] - Epoch 85: best loss improved from 2.0157 to 2.0149
[08-14 11:02:43] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [17:34, 2.37it/s, Acc@1=76.753, Acc@5=91.506, Loss=1.9971]
Epoch 86/90. validating: 101it [00:21, 4.81it/s, Acc@1=70.920, Acc@5=89.676, Loss=2.1727]
[08-14 11:20:39] - Train loss: 1.9968 | Acc@1: 76.7448 | Acc@5: 91.5391
[08-14 11:20:39] - Val loss: 2.0142 | Acc@1: 74.8500 | Acc@5: 91.9040
[08-14 11:20:39] - Epoch 86: best loss improved from 2.0149 to 2.0142
[08-14 11:20:39] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [17:34, 2.37it/s, Acc@1=76.821, Acc@5=91.555, Loss=1.9941]
Epoch 87/90. validating: 101it [00:21, 4.66it/s, Acc@1=78.772, Acc@5=94.124, Loss=1.8548]
[08-14 11:38:36] - Train loss: 1.9945 | Acc@1: 76.8168 | Acc@5: 91.5559
[08-14 11:38:36] - Val loss: 2.0140 | Acc@1: 74.8600 | Acc@5: 91.9120
[08-14 11:38:36] - Epoch 87: best loss improved from 2.0142 to 2.0140
[08-14 11:38:36] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [17:38, 2.37it/s, Acc@1=76.957, Acc@5=91.560, Loss=1.9925]
Epoch 88/90. validating: 101it [00:21, 4.80it/s, Acc@1=70.908, Acc@5=89.736, Loss=2.1726]
[08-14 11:56:36] - Train loss: 1.9928 | Acc@1: 76.8930 | Acc@5: 91.5636
[08-14 11:56:36] - Val loss: 2.0137 | Acc@1: 74.8080 | Acc@5: 91.9180
[08-14 11:56:36] - Epoch 88: best loss improved from 2.0140 to 2.0137
[08-14 11:56:36] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [17:17, 2.41it/s, Acc@1=76.986, Acc@5=91.693, Loss=1.9872]
Epoch 89/90. validating: 101it [00:20, 4.85it/s, Acc@1=78.688, Acc@5=94.100, Loss=1.8545]
[08-14 12:14:15] - Train loss: 1.9872 | Acc@1: 76.9975 | Acc@5: 91.6908
[08-14 12:14:15] - Val loss: 2.0136 | Acc@1: 74.7920 | Acc@5: 91.9120
[08-14 12:14:15] - Epoch 89: best loss improved from 2.0137 to 2.0136
[08-14 12:14:15] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [16:44, 2.49it/s, Acc@1=76.962, Acc@5=91.636, Loss=1.9884]
Epoch 90/90. validating: 101it [00:18, 5.46it/s, Acc@1=70.908, Acc@5=89.728, Loss=2.1726]
[08-14 12:31:19] - Train loss: 1.9878 | Acc@1: 76.9904 | Acc@5: 91.6604
[08-14 12:31:19] - Val loss: 2.0136 | Acc@1: 74.7900 | Acc@5: 91.9200
[08-14 12:31:19] - Epoch 90: best loss improved from 2.0136 to 2.0136
[08-14 12:31:21] - Acc@1 74.790 Acc@5 91.920
[08-14 12:31:21] - Total time: 27h 8.0m
Fewer parameters than in experiment 13, yet noticeably better quality, possibly because there are more parameters in the early layers.
- CSP ResNet34-50
There are 9M parameters here, more than in the groups variant, yet it works worse.
Though for 9M parameters and this speed it's not bad overall (about 1.5x faster than the grouped-conv variant from experiment 10).
Should try it without the transition conv before the concat, it may get better.
upd. actually it seems the problem may be too few convolutions at the very beginning; worth trying to remove the CSP split there.
CSP ResNet34-50 + noact + s2d
Epoch 85/90. training: 2504it [09:00, 4.63it/s, Acc@1=74.384, Acc@5=90.380, Loss=2.0751]
Epoch 85/90. validating: 101it [00:16, 6.20it/s, Acc@1=77.924, Acc@5=93.860, Loss=1.8691]
[08-13 05:25:53] - Train loss: 2.0739 | Acc@1: 74.4454 | Acc@5: 90.3918
[08-13 05:25:53] - Val loss: 2.0313 | Acc@1: 74.0460 | Acc@5: 91.5980
[08-13 05:25:53] - Epoch 85: best loss improved from 2.0322 to 2.0313
[08-13 05:25:53] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [09:00, 4.63it/s, Acc@1=74.662, Acc@5=90.471, Loss=2.0671]
Epoch 86/90. validating: 101it [00:15, 6.37it/s, Acc@1=70.208, Acc@5=89.316, Loss=2.1919]
[08-13 05:35:10] - Train loss: 2.0673 | Acc@1: 74.6425 | Acc@5: 90.4857
[08-13 05:35:10] - Val loss: 2.0303 | Acc@1: 74.0600 | Acc@5: 91.5880
[08-13 05:35:10] - Epoch 86: best loss improved from 2.0313 to 2.0303
[08-13 05:35:10] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [09:00, 4.63it/s, Acc@1=74.672, Acc@5=90.526, Loss=2.0652]
Epoch 87/90. validating: 101it [00:16, 6.20it/s, Acc@1=77.944, Acc@5=93.808, Loss=1.8678]
[08-13 05:44:27] - Train loss: 2.0655 | Acc@1: 74.6920 | Acc@5: 90.5052
[08-13 05:44:27] - Val loss: 2.0300 | Acc@1: 74.0900 | Acc@5: 91.5820
[08-13 05:44:27] - Epoch 87: best loss improved from 2.0303 to 2.0300
[08-13 05:44:27] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [09:00, 4.64it/s, Acc@1=74.778, Acc@5=90.543, Loss=2.0636]
Epoch 88/90. validating: 101it [00:15, 6.38it/s, Acc@1=70.268, Acc@5=89.356, Loss=2.1915]
[08-13 05:53:44] - Train loss: 2.0637 | Acc@1: 74.7393 | Acc@5: 90.5345
[08-13 05:53:44] - Val loss: 2.0296 | Acc@1: 74.1160 | Acc@5: 91.5840
[08-13 05:53:44] - Epoch 88: best loss improved from 2.0300 to 2.0296
[08-13 05:53:44] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [09:00, 4.63it/s, Acc@1=74.806, Acc@5=90.577, Loss=2.0601]
Epoch 89/90. validating: 101it [00:16, 6.20it/s, Acc@1=77.932, Acc@5=93.864, Loss=1.8673]
[08-13 06:03:00] - Train loss: 2.0590 | Acc@1: 74.8482 | Acc@5: 90.6015
[08-13 06:03:00] - Val loss: 2.0295 | Acc@1: 74.1180 | Acc@5: 91.6040
[08-13 06:03:00] - Epoch 89: best loss improved from 2.0296 to 2.0295
[08-13 06:03:00] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [09:00, 4.64it/s, Acc@1=74.904, Acc@5=90.595, Loss=2.0579]
Epoch 90/90. validating: 101it [00:15, 6.38it/s, Acc@1=70.284, Acc@5=89.372, Loss=2.1914]
[08-13 06:12:17] - Train loss: 2.0578 | Acc@1: 74.9071 | Acc@5: 90.6071
[08-13 06:12:17] - Val loss: 2.0294 | Acc@1: 74.1280 | Acc@5: 91.6140
[08-13 06:12:17] - Epoch 90: best loss improved from 2.0295 to 2.0294
[08-13 06:12:18] - Acc@1 74.128 Acc@5 91.614
[08-13 06:12:18] - Total time: 13h 56.2m
- CSP ResNet34-50 no x2 transition
CSP ResNet34-50 no x2 transition
Epoch 85/90. training: 2504it [08:44, 4.77it/s, Acc@1=75.104, Acc@5=90.720, Loss=2.0511]
Epoch 85/90. validating: 101it [00:16, 6.28it/s, Acc@1=77.932, Acc@5=93.912, Loss=1.8666]
[08-13 23:50:06] - Train loss: 2.0500 | Acc@1: 75.1298 | Acc@5: 90.7145
[08-13 23:50:06] - Val loss: 2.0283 | Acc@1: 74.0860 | Acc@5: 91.7280
[08-13 23:50:06] - Epoch 85: best loss improved from 2.0287 to 2.0283
[08-13 23:50:06] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [08:44, 4.77it/s, Acc@1=75.276, Acc@5=90.828, Loss=2.0435]
Epoch 86/90. validating: 101it [00:15, 6.47it/s, Acc@1=70.316, Acc@5=89.524, Loss=2.1886]
[08-13 23:59:06] - Train loss: 2.0435 | Acc@1: 75.2927 | Acc@5: 90.8264
[08-13 23:59:06] - Val loss: 2.0274 | Acc@1: 74.1380 | Acc@5: 91.7060
[08-13 23:59:06] - Epoch 86: best loss improved from 2.0283 to 2.0274
[08-13 23:59:07] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [08:43, 4.78it/s, Acc@1=75.378, Acc@5=90.871, Loss=2.0408]
Epoch 87/90. validating: 101it [00:16, 6.28it/s, Acc@1=77.984, Acc@5=93.880, Loss=1.8656]
[08-14 00:08:06] - Train loss: 2.0414 | Acc@1: 75.3660 | Acc@5: 90.8340
[08-14 00:08:06] - Val loss: 2.0272 | Acc@1: 74.1120 | Acc@5: 91.6900
[08-14 00:08:06] - Epoch 87: best loss improved from 2.0274 to 2.0272
[08-14 00:08:06] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [08:44, 4.78it/s, Acc@1=75.391, Acc@5=90.849, Loss=2.0403]
Epoch 88/90. validating: 101it [00:15, 6.46it/s, Acc@1=70.176, Acc@5=89.524, Loss=2.1880]
[08-14 00:17:07] - Train loss: 2.0403 | Acc@1: 75.3748 | Acc@5: 90.8567
[08-14 00:17:07] - Val loss: 2.0268 | Acc@1: 74.1140 | Acc@5: 91.7120
[08-14 00:17:07] - Epoch 88: best loss improved from 2.0272 to 2.0268
[08-14 00:17:07] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [08:44, 4.78it/s, Acc@1=75.532, Acc@5=90.934, Loss=2.0353]
Epoch 89/90. validating: 101it [00:16, 6.27it/s, Acc@1=78.052, Acc@5=93.920, Loss=1.8654]
[08-14 00:26:07] - Train loss: 2.0338 | Acc@1: 75.5366 | Acc@5: 90.9607
[08-14 00:26:07] - Val loss: 2.0267 | Acc@1: 74.1340 | Acc@5: 91.7520
[08-14 00:26:07] - Epoch 89: best loss improved from 2.0268 to 2.0267
[08-14 00:26:07] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [08:44, 4.78it/s, Acc@1=75.570, Acc@5=90.934, Loss=2.0336]
Epoch 90/90. validating: 101it [00:15, 6.44it/s, Acc@1=70.300, Acc@5=89.584, Loss=2.1878]
[08-14 00:35:08] - Train loss: 2.0336 | Acc@1: 75.5688 | Acc@5: 90.9464
[08-14 00:35:08] - Val loss: 2.0266 | Acc@1: 74.1660 | Acc@5: 91.7500
[08-14 00:35:08] - Epoch 90: best loss improved from 2.0267 to 2.0266
[08-14 00:35:09] - Acc@1 74.166 Acc@5 91.750
[08-14 00:35:09] - Total time: 13h 39.7m
Slightly faster than the version WITH the x2 transition and slightly better (though not very conclusively), with a lower train loss, i.e. it seems to train better. So far the transition looks pointless
- CSP ResNet34-50 no x2 transition no first CSP 0.75 csp ratio. 12.45M parameters
Removed csp in first block to reduce representation bottleneck
Increased csp ratio to have more parameters
Compare against experiment 12. This variant is faster, has 4M fewer parameters, and is worse in quality, though the drop seems insignificant. Should try 12 and 17 on 100 classes
CSP ResNet34-50 no x2 transition no first CSP 0.75 csp ratio
[08-18 03:08:46] - Epoch 85 | lr 3.08e-03
[08-18 03:19:49] - Train loss: 1.9607 | Acc@1: 77.1407 | Acc@5: 91.8156
[08-18 03:19:49] - Val loss: 1.9719 | Acc@1: 75.2340 | Acc@5: 92.3480
[08-18 03:19:49] - Epoch 85: best loss improved from 1.9723 to 1.9719
[08-18 03:19:49] - Epoch 86 | lr 2.21e-03
[08-18 03:30:49] - Train loss: 1.9535 | Acc@1: 77.3081 | Acc@5: 91.9243
[08-18 03:30:49] - Val loss: 1.9713 | Acc@1: 75.2460 | Acc@5: 92.3360
[08-18 03:30:49] - Epoch 86: best loss improved from 1.9719 to 1.9713
[08-18 03:30:49] - Epoch 87 | lr 1.48e-03
[08-18 03:41:49] - Train loss: 1.9520 | Acc@1: 77.3594 | Acc@5: 91.9464
[08-18 03:41:49] - Val loss: 1.9705 | Acc@1: 75.2700 | Acc@5: 92.3720
[08-18 03:41:49] - Epoch 87: best loss improved from 1.9713 to 1.9705
[08-18 03:41:49] - Epoch 88 | lr 8.98e-04
[08-18 03:52:47] - Train loss: 1.9498 | Acc@1: 77.4201 | Acc@5: 91.9562
[08-18 03:52:47] - Val loss: 1.9703 | Acc@1: 75.2920 | Acc@5: 92.3880
[08-18 03:52:47] - Epoch 88: best loss improved from 1.9705 to 1.9703
[08-18 03:52:48] - Epoch 89 | lr 4.58e-04
[08-18 04:03:42] - Train loss: 1.9440 | Acc@1: 77.5878 | Acc@5: 92.0432
[08-18 04:03:42] - Val loss: 1.9703 | Acc@1: 75.2920 | Acc@5: 92.3900
[08-18 04:03:42] - Epoch 90 | lr 1.65e-04
[08-18 04:14:40] - Train loss: 1.9443 | Acc@1: 77.5711 | Acc@5: 92.0437
[08-18 04:14:40] - Val loss: 1.9702 | Acc@1: 75.2880 | Acc@5: 92.3960
[08-18 04:14:40] - Epoch 90: best loss improved from 1.9703 to 1.9702
[08-18 04:14:41] - Acc@1 75.288 Acc@5 92.396
[08-18 04:14:41] - Total time: 16h 39.4m
- CSP ResNet34-50 no x2 transition no first CSP 0.5 csp ratio. 9.35M parameters
Difference from experiment 16: no CSP in the first block
Works better than 16 even though it has only 0.03M more parameters, so no first csp is a good idea
CSP ResNet34-50 no x2 transition no first CSP 0.5 csp ratio
[08-18 02:53:50] - Epoch 85 | lr 3.08e-03
[08-18 03:04:43] - Train loss: 2.0415 | Acc@1: 75.2850 | Acc@5: 90.8460
[08-18 03:04:43] - Val loss: 2.0199 | Acc@1: 74.4380 | Acc@5: 91.8240
[08-18 03:04:43] - Epoch 85: best loss improved from 2.0202 to 2.0199
[08-18 03:04:43] - Epoch 86 | lr 2.21e-03
[08-18 03:15:28] - Train loss: 2.0353 | Acc@1: 75.4592 | Acc@5: 90.9374
[08-18 03:15:28] - Val loss: 2.0192 | Acc@1: 74.4380 | Acc@5: 91.7880
[08-18 03:15:28] - Epoch 86: best loss improved from 2.0199 to 2.0192
[08-18 03:15:29] - Epoch 87 | lr 1.48e-03
[08-18 03:26:20] - Train loss: 2.0326 | Acc@1: 75.5330 | Acc@5: 90.9636
[08-18 03:26:20] - Val loss: 2.0186 | Acc@1: 74.4280 | Acc@5: 91.8100
[08-18 03:26:20] - Epoch 87: best loss improved from 2.0192 to 2.0186
[08-18 03:26:20] - Epoch 88 | lr 8.98e-04
[08-18 03:37:06] - Train loss: 2.0312 | Acc@1: 75.5901 | Acc@5: 91.0040
[08-18 03:37:06] - Val loss: 2.0183 | Acc@1: 74.4200 | Acc@5: 91.8500
[08-18 03:37:06] - Epoch 88: best loss improved from 2.0186 to 2.0183
[08-18 03:37:06] - Epoch 89 | lr 4.58e-04
[08-18 03:47:54] - Train loss: 2.0258 | Acc@1: 75.7582 | Acc@5: 91.0742
[08-18 03:47:54] - Val loss: 2.0181 | Acc@1: 74.4300 | Acc@5: 91.8520
[08-18 03:47:54] - Epoch 89: best loss improved from 2.0183 to 2.0181
[08-18 03:47:55] - Epoch 90 | lr 1.65e-04
[08-18 03:58:40] - Train loss: 2.0254 | Acc@1: 75.7276 | Acc@5: 91.0748
[08-18 03:58:40] - Val loss: 2.0180 | Acc@1: 74.4640 | Acc@5: 91.8600
[08-18 03:58:40] - Epoch 90: best loss improved from 2.0181 to 2.0180
[08-18 03:58:41] - Acc@1 74.464 Acc@5 91.860
[08-18 03:58:41] - Total time: 16h 23.5m- ResNet 34 noact s2d + groups width 16 + no groups in stride 2 blocks. 9.01M параметров
Убрал groups в блоках, где stride=2. количество параметров заметно увеличилось, скорость упала. Лосс лучше чем в 14. и 13. но возможно все дело в лишних параметрах. не понятно пока
ResNet 34 noact s2d + groups width 16 + no groups in stride 2 blocks. 9.01M параметров
Epoch 85/90. validating: 101it [00:17, 5.74it/s, Acc@1=78.108, Acc@5=94.380, Loss=1.8438]
[08-18 06:30:07] - Train loss: 1.9924 | Acc@1: 76.4693 | Acc@5: 91.4885
[08-18 06:30:07] - Val loss: 2.0006 | Acc@1: 74.5840 | Acc@5: 92.1380
[08-18 06:30:07] - Epoch 85: best loss improved from 2.0011 to 2.0006
[08-18 06:30:08] - Epoch 86 | lr 2.21e-03
Epoch 86/90. training: 2504it [13:04, 3.19it/s, Acc@1=76.684, Acc@5=91.619, Loss=1.9843]
Epoch 86/90. validating: 101it [00:17, 5.89it/s, Acc@1=71.076, Acc@5=89.960, Loss=2.1563]
[08-18 06:43:30] - Train loss: 1.9848 | Acc@1: 76.6472 | Acc@5: 91.6114
[08-18 06:43:30] - Val loss: 1.9999 | Acc@1: 74.6200 | Acc@5: 92.1680
[08-18 06:43:30] - Epoch 86: best loss improved from 2.0006 to 1.9999
[08-18 06:43:30] - Epoch 87 | lr 1.48e-03
Epoch 87/90. training: 2504it [13:05, 3.19it/s, Acc@1=76.760, Acc@5=91.633, Loss=1.9825]
Epoch 87/90. validating: 101it [00:17, 5.75it/s, Acc@1=78.212, Acc@5=94.344, Loss=1.8426]
[08-18 06:56:53] - Train loss: 1.9829 | Acc@1: 76.7349 | Acc@5: 91.6323
[08-18 06:56:53] - Val loss: 1.9995 | Acc@1: 74.6540 | Acc@5: 92.1560
[08-18 06:56:53] - Epoch 87: best loss improved from 1.9999 to 1.9995
[08-18 06:56:53] - Epoch 88 | lr 8.98e-04
Epoch 88/90. training: 2504it [13:04, 3.19it/s, Acc@1=76.844, Acc@5=91.617, Loss=1.9805]
Epoch 88/90. validating: 101it [00:17, 5.90it/s, Acc@1=71.068, Acc@5=89.916, Loss=2.1558]
[08-18 07:10:15] - Train loss: 1.9809 | Acc@1: 76.7961 | Acc@5: 91.6332
[08-18 07:10:15] - Val loss: 1.9991 | Acc@1: 74.6820 | Acc@5: 92.1540
[08-18 07:10:15] - Epoch 88: best loss improved from 1.9995 to 1.9991
[08-18 07:10:15] - Epoch 89 | lr 4.58e-04
Epoch 89/90. training: 2504it [13:04, 3.19it/s, Acc@1=76.942, Acc@5=91.713, Loss=1.9755]
Epoch 89/90. validating: 101it [00:17, 5.76it/s, Acc@1=78.244, Acc@5=94.364, Loss=1.8421]
[08-18 07:23:37] - Train loss: 1.9751 | Acc@1: 76.9273 | Acc@5: 91.7137
[08-18 07:23:37] - Val loss: 1.9988 | Acc@1: 74.6800 | Acc@5: 92.1480
[08-18 07:23:37] - Epoch 89: best loss improved from 1.9991 to 1.9988
[08-18 07:23:38] - Epoch 90 | lr 1.65e-04
Epoch 90/90. training: 2504it [13:04, 3.19it/s, Acc@1=76.914, Acc@5=91.726, Loss=1.9750]
Epoch 90/90. validating: 101it [00:17, 5.88it/s, Acc@1=71.128, Acc@5=89.964, Loss=2.1554]
[08-18 07:37:00] - Train loss: 1.9750 | Acc@1: 76.9392 | Acc@5: 91.7437
[08-18 07:37:00] - Val loss: 1.9987 | Acc@1: 74.6940 | Acc@5: 92.1680
[08-18 07:37:00] - Epoch 90: best loss improved from 1.9988 to 1.9987
[08-18 07:37:01] - Acc@1 74.694 Acc@5 92.168
[08-18 07:37:01] - Total time: 20h 2.2m
??. change the layer counts: 3, 4, 6, 3 -> 2, 4, 8, 2
??. make the model fatter: 64, 128, 256, 512 -> 64, 160, 400, 1024 (the regnet idea that widths should grow by more than 2x)
??. the rexnet idea - grow the channel count every block rather than every stage