I am trying to train a CNN on two-dimensional arrays of normalized numbers.
An example of one 2D training array:
36,36,37,37,38,38,39,39,39,40,40,40,40,40,40,40,40,41,41,41,41,41,42,42,42,42,43,43,44,44,45,45,46,47,49,51,54,56,58,61
10,10,10,10,10,11,11,11,11,11,11,12,12,12,12,12,13,13,13,13,14,14,14,14,14,15,15,15,15,16,16,16,16,17,17,18,19,19,20,21
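These arrays go into the network roughly as shown below; this is only an illustration of the (2, 40, 1) shape the model expects, with made-up values and placeholder variable names rather than my real loading code:

import numpy as np

# Illustration only: fabricated values standing in for one normalized 2 x 40 sample.
row_a = np.linspace(36, 61, 40)
row_b = np.linspace(10, 21, 40)
sample = np.stack([row_a, row_b])   # shape (2, 40)

X = np.stack([sample, sample])      # tiny pretend dataset, shape (2, 2, 40)
X = X[..., np.newaxis]              # add the channel axis -> (2, 2, 40, 1)
y = np.array([0, 1])                # one binary label per sample

print(X.shape)  # (2, 2, 40, 1)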
With the model below I get decent results in testing, but since I am still fairly new to ML, I am not sure which layers, filters, and pooling settings are best suited to this dataset.
I have done some hyperparameter optimization, but the layers themselves and their individual settings are harder for me to tune, so any advice on getting a decent model for this type of data, or just a push in the right direction, would be greatly appreciated.
So, in short, my question is: how do I know when my model architecture is as good as it can get? Is there a way to test or iterate over all the different settings (model layers, filters, pooling, etc.) to arrive at the best model and end result? The sketch right below shows the kind of loop I have in mind.
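To make that concrete, a naive brute-force loop like the following is what I imagine, though I don't know whether this is sensible or whether there is a smarter, more systematic approach. It is only a sketch: X_train, y_train, X_val, y_val are assumed to exist, the grids of candidate values are arbitrary, and build_model is deliberately smaller than my real network.

from itertools import product

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.optimizers import SGD

def build_model(conv_filters, kernel_size, dense_units, dropout_rate):
    # One candidate CNN for the 2 x 40 x 1 input, parameterised by a few knobs.
    model = Sequential()
    model.add(Conv2D(conv_filters, kernel_size=kernel_size, activation='relu',
                     input_shape=(2, 40, 1), padding='same'))
    model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
    model.add(Dropout(dropout_rate))
    model.add(Flatten())
    model.add(Dense(dense_units, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy',
                  optimizer=SGD(lr=0.001, momentum=0.9),
                  metrics=['accuracy'])
    return model

# X_train, y_train, X_val, y_val are assumed to already exist with shapes
# (n, 2, 40, 1) and (n,); the candidate value grids are arbitrary examples.
results = []
for filters, ksize, units, drop in product([32, 64, 128], [(1, 2), (2, 2)], [32, 64], [0.2, 0.4]):
    candidate = build_model(filters, ksize, units, drop)
    history = candidate.fit(X_train, y_train, epochs=40, batch_size=4,
                            validation_data=(X_val, y_val), verbose=0)
    results.append(((filters, ksize, units, drop), history.history['val_acc'][-1]))

best_settings, best_val_acc = max(results, key=lambda r: r[1])
print('best settings:', best_settings, 'val_acc:', best_val_acc)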
Model:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.optimizers import SGD

model = Sequential()
# CNN block 1: 64 filters over the 2 x 40 x 1 input
model.add(Conv2D(64, kernel_size=(2, 2), activation='relu', input_shape=(2, 40, 1), padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(Dropout(0.2))
# CNN block 2
model.add(Conv2D(128, kernel_size=(2, 2), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(1, 1), strides=(1, 1)))  # 1x1 pooling with stride 1 leaves the feature map unchanged
model.add(Dropout(0.2))
# CNN block 3
model.add(Conv2D(128, kernel_size=(1, 1), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(1, 1), strides=(1, 1)))  # likewise a no-op
model.add(Dropout(0.2))
model.add(Flatten())
# Dense 1
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.2))
# Dense 2
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
# Output: single sigmoid unit for binary classification
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=SGD(lr=0.001, decay=0.0, momentum=0.9),
              metrics=['accuracy'])
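For completeness, the training and evaluation calls look roughly like this; the variable names and batch size are placeholders, but the 40 epochs and the 15/7 train/validation split match the log below:

import numpy as np

# Placeholder names; X_train has shape (15, 2, 40, 1) and X_val (7, 2, 40, 1),
# matching the "Train on 15 samples, validate on 7 samples" line in the log.
history = model.fit(X_train, y_train,
                    epochs=40, batch_size=4,
                    validation_data=(X_val, y_val))

# The 22-sample evaluation at the end of the log is presumably the combined set (15 + 7).
X_all = np.concatenate([X_train, X_val])
y_all = np.concatenate([y_train, y_val])
score, accuracy = model.evaluate(X_all, y_all)
print('Train score:', score)
print('Train accuracy:', accuracy)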
Training:
Train on 15 samples, validate on 7 samples
Epoch 1/40
15/15 [==============================] - 1s 81ms/step - loss: 1.0414 - acc: 0.1889 - val_loss: 0.8904 - val_acc: 0.2143
Epoch 2/40
15/15 [==============================] - 0s 532us/step - loss: 0.9349 - acc: 0.2000 - val_loss: 0.7926 - val_acc: 0.2619
Epoch 3/40
15/15 [==============================] - 0s 532us/step - loss: 0.8128 - acc: 0.2111 - val_loss: 0.7212 - val_acc: 0.3333
Epoch 4/40
15/15 [==============================] - 0s 598us/step - loss: 0.7302 - acc: 0.3778 - val_loss: 0.6571 - val_acc: 0.5952
Epoch 5/40
15/15 [==============================] - 0s 532us/step - loss: 0.6573 - acc: 0.7444 - val_loss: 0.6054 - val_acc: 0.7381
Epoch 6/40
15/15 [==============================] - 0s 598us/step - loss: 0.5965 - acc: 0.7889 - val_loss: 0.5694 - val_acc: 0.8095
Epoch 7/40
15/15 [==============================] - 0s 665us/step - loss: 0.5571 - acc: 0.8000 - val_loss: 0.5513 - val_acc: 0.8333
Epoch 8/40
15/15 [==============================] - 0s 599us/step - loss: 0.5409 - acc: 0.8111 - val_loss: 0.5380 - val_acc: 0.8333
Epoch 9/40
15/15 [==============================] - 0s 599us/step - loss: 0.5269 - acc: 0.8111 - val_loss: 0.5249 - val_acc: 0.8333
Epoch 10/40
15/15 [==============================] - 0s 532us/step - loss: 0.5121 - acc: 0.8222 - val_loss: 0.5125 - val_acc: 0.8333
Epoch 11/40
15/15 [==============================] - 0s 515us/step - loss: 0.4976 - acc: 0.8333 - val_loss: 0.5014 - val_acc: 0.8333
Epoch 12/40
15/15 [==============================] - 0s 598us/step - loss: 0.4841 - acc: 0.8333 - val_loss: 0.4919 - val_acc: 0.8333
Epoch 13/40
15/15 [==============================] - 0s 598us/step - loss: 0.4717 - acc: 0.8333 - val_loss: 0.4823 - val_acc: 0.8333
Epoch 14/40
15/15 [==============================] - 0s 598us/step - loss: 0.4603 - acc: 0.8333 - val_loss: 0.4725 - val_acc: 0.8333
Epoch 15/40
15/15 [==============================] - 0s 598us/step - loss: 0.4492 - acc: 0.8333 - val_loss: 0.4629 - val_acc: 0.8333
Epoch 16/40
15/15 [==============================] - 0s 665us/step - loss: 0.4386 - acc: 0.8333 - val_loss: 0.4539 - val_acc: 0.8333
Epoch 17/40
15/15 [==============================] - 0s 598us/step - loss: 0.4284 - acc: 0.8333 - val_loss: 0.4462 - val_acc: 0.8333
Epoch 18/40
15/15 [==============================] - 0s 599us/step - loss: 0.4204 - acc: 0.8333 - val_loss: 0.4390 - val_acc: 0.8333
Epoch 19/40
15/15 [==============================] - 0s 665us/step - loss: 0.4128 - acc: 0.8333 - val_loss: 0.4320 - val_acc: 0.8333
Epoch 20/40
15/15 [==============================] - 0s 601us/step - loss: 0.4052 - acc: 0.8333 - val_loss: 0.4256 - val_acc: 0.8333
Epoch 21/40
15/15 [==============================] - 0s 598us/step - loss: 0.3977 - acc: 0.8333 - val_loss: 0.4195 - val_acc: 0.8333
Epoch 22/40
15/15 [==============================] - 0s 532us/step - loss: 0.3907 - acc: 0.8333 - val_loss: 0.4133 - val_acc: 0.8333
Epoch 23/40
15/15 [==============================] - 0s 661us/step - loss: 0.3838 - acc: 0.8333 - val_loss: 0.4074 - val_acc: 0.8333
Epoch 24/40
15/15 [==============================] - 0s 732us/step - loss: 0.3770 - acc: 0.8333 - val_loss: 0.4017 - val_acc: 0.8333
Epoch 25/40
15/15 [==============================] - 0s 665us/step - loss: 0.3704 - acc: 0.8333 - val_loss: 0.3960 - val_acc: 0.8333
Epoch 26/40
15/15 [==============================] - 0s 533us/step - loss: 0.3640 - acc: 0.8333 - val_loss: 0.3902 - val_acc: 0.9048
Epoch 27/40
15/15 [==============================] - 0s 532us/step - loss: 0.3577 - acc: 0.8333 - val_loss: 0.3846 - val_acc: 0.9286
Epoch 28/40
15/15 [==============================] - 0s 597us/step - loss: 0.3515 - acc: 0.8556 - val_loss: 0.3790 - val_acc: 0.9286
Epoch 29/40
15/15 [==============================] - 0s 531us/step - loss: 0.3454 - acc: 0.8889 - val_loss: 0.3734 - val_acc: 0.9762
Epoch 30/40
15/15 [==============================] - 0s 533us/step - loss: 0.3393 - acc: 0.9111 - val_loss: 0.3680 - val_acc: 0.9762
Epoch 31/40
15/15 [==============================] - 0s 532us/step - loss: 0.3333 - acc: 0.9556 - val_loss: 0.3625 - val_acc: 1.0000
Epoch 32/40
15/15 [==============================] - 0s 597us/step - loss: 0.3274 - acc: 1.0000 - val_loss: 0.3570 - val_acc: 1.0000
Epoch 33/40
15/15 [==============================] - 0s 664us/step - loss: 0.3218 - acc: 1.0000 - val_loss: 0.3514 - val_acc: 1.0000
Epoch 34/40
15/15 [==============================] - 0s 599us/step - loss: 0.3162 - acc: 1.0000 - val_loss: 0.3457 - val_acc: 1.0000
Epoch 35/40
15/15 [==============================] - 0s 532us/step - loss: 0.3103 - acc: 1.0000 - val_loss: 0.3388 - val_acc: 1.0000
Epoch 36/40
15/15 [==============================] - 0s 531us/step - loss: 0.3028 - acc: 1.0000 - val_loss: 0.3310 - val_acc: 1.0000
Epoch 37/40
15/15 [==============================] - 0s 532us/step - loss: 0.2943 - acc: 1.0000 - val_loss: 0.3231 - val_acc: 1.0000
Epoch 38/40
15/15 [==============================] - 0s 595us/step - loss: 0.2857 - acc: 1.0000 - val_loss: 0.3154 - val_acc: 1.0000
Epoch 39/40
15/15 [==============================] - 0s 532us/step - loss: 0.2773 - acc: 1.0000 - val_loss: 0.3082 - val_acc: 1.0000
Epoch 40/40
15/15 [==============================] - 0s 535us/step - loss: 0.2694 - acc: 1.0000 - val_loss: 0.3014 - val_acc: 1.0000
22/22 [==============================] - 0s 680us/step
Model summary:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 3, 20, 64)         320
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 2, 19, 64)         0
_________________________________________________________________
dropout_1 (Dropout)          (None, 2, 19, 64)         0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 2, 19, 128)        32896
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 2, 19, 128)        0
_________________________________________________________________
dropout_2 (Dropout)          (None, 2, 19, 128)        0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 2, 19, 128)        16512
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 2, 19, 128)        0
_________________________________________________________________
dropout_3 (Dropout)          (None, 2, 19, 128)        0
_________________________________________________________________
flatten_1 (Flatten)          (None, 4864)              0
_________________________________________________________________
dense_1 (Dense)              (None, 64)                311360
_________________________________________________________________
dropout_4 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_2 (Dense)              (None, 32)                2080
_________________________________________________________________
dropout_5 (Dropout)          (None, 32)                0
_________________________________________________________________
dense_3 (Dense)              (None, 6)                 198
=================================================================
Total params: 363,366
Trainable params: 363,366
Non-trainable params: 0
_________________________________________________________________
Train score: 0.27449503540992737
Train accuracy: 1.0