TensorFlow deep learning model gives much lower accuracy for 10 classes, while it works very well on a set of 3 classes
0 votes
06 August 2020

I am working on a computer vision project: classifying words based on lip movement. There are 10 classes (words) to classify, and each class in the dataset consists of sequences of images (frames). For this task I chose a TimeDistributed CNN + LSTM model. The dataset is first converted into a NumPy array and fed to the CNN layers, which extract features from each image. That output goes through a TimeDistributed wrapper into an LSTM so the frames are processed as a time series. Finally, several Dense layers perform the classification.

The problem I am facing is that when I train the model on only 3–4 classes (words), I get high accuracy (~80–90%) and the predictions are really good. But when I train the model on all 10 classes (words) together, the accuracy is very, very low.

I don't know what is causing this. Can anyone help me with it?

My code

import os
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense, LSTM, Dropout, TimeDistributed, BatchNormalization, MaxPool2D, GlobalMaxPool2D

def convmodel(shape=(24, 48, 3)):
    momentum = .9
    model = tf.keras.models.Sequential(name="CNN1210")
    
    model.add(tf.keras.layers.Conv2D(64, (3,3), input_shape=shape,padding='same', activation='relu', name = "CNN1") )
    model.add(tf.keras.layers.Conv2D(64, (3,3), padding='same', activation='relu', name = "CNN2"))
    model.add(tf.keras.layers.BatchNormalization(momentum=momentum , name = "Batch1"))
    
    model.add(tf.keras.layers.MaxPool2D(name="Maxpool1") )
    
    model.add(tf.keras.layers.Conv2D(128, (3,3), padding='same', activation='relu', name = "CNN3"))
    model.add(tf.keras.layers.Conv2D(128, (3,3), padding='same', activation='relu', name = "CNN4"))
    model.add(tf.keras.layers.BatchNormalization(momentum=momentum, name = "batch2") )
    
    model.add(tf.keras.layers.MaxPool2D(name = "Maxpool2"))
    
    model.add(tf.keras.layers.Conv2D(256, (3,3), padding='same', activation='relu', name = "CNN5"))
    model.add(tf.keras.layers.Conv2D(256, (3,3), padding='same', activation='relu', name = "CNN6"))
    model.add(tf.keras.layers.BatchNormalization(momentum=momentum, name = "Batch3") )
    
    model.add(tf.keras.layers.MaxPool2D(name = "Maxpool3"))
    
    model.add(tf.keras.layers.Conv2D(256, (3,3), padding='same', activation='relu', name = "CNN7"))
    model.add(tf.keras.layers.Conv2D(256, (3,3), padding='same', activation='relu', name = "CNN8"))
    model.add(tf.keras.layers.BatchNormalization(momentum=momentum, name = "Batch4"))
    
    model.add(tf.keras.layers.MaxPool2D(name = "Maxpool4"))
    
    
    model.add(tf.keras.layers.Conv2D(512, (3,3), padding='same', activation='relu', name = "CNN9"))
    model.add(tf.keras.layers.Conv2D(512, (3,3), padding='same', activation='relu', name = "CNN10"))
    model.add(tf.keras.layers.BatchNormalization(momentum=momentum, name = "Batch5"))
    
    
    # flatten...
    model.add(tf.keras.layers.Flatten(name = "Flatten1"))
    
    
    return model



def action_model(shapes, nbout=3):
    # Create our per-frame convnet with (24, 48, 3) input shape
    convnet = convmodel(shapes[1:])
    print(convnet)
    print("convolution over")
    # then create our final model
    model = tf.keras.models.Sequential(name="1210model")
    # add the convnet wrapped in TimeDistributed for the (10, 24, 48, 3) input shape
    model.add(TimeDistributed(convnet, input_shape=shapes, name="Timedist1210"))
    print("Time distributed over")
    # here, you can also use GRU or LSTM
    model.add(tf.keras.layers.LSTM(100, name = "LSTM1210"))
    # and finally, we make a decision network
    model.add(tf.keras.layers.Dense(1024, activation='relu', name = "Dense12101"))
    model.add(tf.keras.layers.Dropout(.8, name = "drop1"))
    model.add(tf.keras.layers.Dense(1024, activation='relu', name = "Dense12102"))
    model.add(tf.keras.layers.Dropout(.8, name = "drop2"))
    model.add(tf.keras.layers.Dense(512, activation='relu', name = "Dense12103"))
    model.add(tf.keras.layers.Dropout(.7, name = "drop3"))
    model.add(tf.keras.layers.Dense(128, activation='relu', name = "Dense12104"))
    model.add(tf.keras.layers.Dropout(.6 ,name = "drop4"))
    model.add(tf.keras.layers.Dense(128, activation='relu', name = "Dense12105"))
    model.add(tf.keras.layers.Dropout(.5 ,name = "drop5"))
    model.add(tf.keras.layers.Dense(64, activation='relu', name = "Dense12106"))
    model.add(tf.keras.layers.Dense(32, activation='relu', name = "Dense12107"))
    model.add(tf.keras.layers.Dense(16, activation='relu', name = "Dense12108"))
    model.add(tf.keras.layers.Dense(8, activation='relu', name = "Dense12109"))
    print("Final dense layer")
    model.add(tf.keras.layers.Dense(nbout, activation='softmax', name = "Dense12110"))
        
    return model

TimeDistmodel = action_model((10, 24, 48, 3),10)

optimizer = tf.keras.optimizers.Adam(0.001)

TimeDistmodel.compile(
    optimizer,
    'categorical_crossentropy',
    metrics=['acc']
)

checkpoint_path = "training_all/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

# Create a callback that saves the model's weights
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                 save_weights_only=True,
                                                 verbose=1)

#TimeDistmodel.summary()
finalModel = TimeDistmodel.fit(trainX,trainY,epochs=100, validation_data=(testX,testY),batch_size= 50, callbacks=[cp_callback])

Output for 10 classes

Epoch 1/200
79/79 [==============================] - ETA: 0s - loss: 2.3026 - acc: 0.1033
Epoch 00001: saving model to training_1210\cp.ckpt
79/79 [==============================] - 259s 3s/step - loss: 2.3026 - acc: 0.1033 - val_loss: 2.3039 - val_acc: 0.0917
Epoch 2/200
79/79 [==============================] - ETA: 0s - loss: 2.3029 - acc: 0.1048
Epoch 00002: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3029 - acc: 0.1048 - val_loss: 2.3040 - val_acc: 0.0917
Epoch 3/200
79/79 [==============================] - ETA: 0s - loss: 2.3028 - acc: 0.1038
Epoch 00003: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3028 - acc: 0.1038 - val_loss: 2.3040 - val_acc: 0.0917
Epoch 4/200
79/79 [==============================] - ETA: 0s - loss: 2.3025 - acc: 0.1041
Epoch 00004: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3025 - acc: 0.1041 - val_loss: 2.3043 - val_acc: 0.0917
Epoch 5/200
79/79 [==============================] - ETA: 0s - loss: 2.3025 - acc: 0.0969
Epoch 00005: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3025 - acc: 0.0969 - val_loss: 2.3041 - val_acc: 0.0917
Epoch 6/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00006: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3043 - val_acc: 0.0917
Epoch 7/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1033
Epoch 00007: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1033 - val_loss: 2.3044 - val_acc: 0.0917
Epoch 8/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00008: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3044 - val_acc: 0.0917
Epoch 9/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00009: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3045 - val_acc: 0.0917
Epoch 10/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00010: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3046 - val_acc: 0.0917
Epoch 11/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00011: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3046 - val_acc: 0.0917
Epoch 12/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0985
Epoch 00012: saving model to training_1210\cp.ckpt
79/79 [==============================] - 243s 3s/step - loss: 2.3024 - acc: 0.0985 - val_loss: 2.3046 - val_acc: 0.0917
Epoch 13/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.0954
Epoch 00013: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.0954 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 14/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1015
Epoch 00014: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1015 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 15/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1036
Epoch 00015: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3023 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 16/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.0997
Epoch 00016: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.0997 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 17/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00017: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 18/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.0974
Epoch 00018: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.0974 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 19/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1000
Epoch 00019: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1000 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 20/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1023
Epoch 00020: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1023 - val_loss: 2.3047 - val_acc: 0.0929
Epoch 21/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1028
Epoch 00021: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1028 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 22/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00022: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 23/200
79/79 [==============================] - ETA: 0s - loss: 2.3025 - acc: 0.0990
Epoch 00023: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3025 - acc: 0.0990 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 24/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1036
Epoch 00024: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.1036 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 25/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1005
Epoch 00025: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1005 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 26/200
79/79 [==============================] - ETA: 0s - loss: 2.3025 - acc: 0.0969
Epoch 00026: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3025 - acc: 0.0969 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 27/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1003
Epoch 00027: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3024 - acc: 0.1003 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 28/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00028: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 29/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00029: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3046 - val_acc: 0.0917
Epoch 30/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00030: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 31/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00031: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 32/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0982
Epoch 00032: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.0982 - val_loss: 2.3046 - val_acc: 0.0917
Epoch 33/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00033: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 34/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1036
Epoch 00034: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 35/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1000
Epoch 00035: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1000 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 36/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0992
Epoch 00036: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.0992 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 37/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0977
Epoch 00037: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.0977 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 38/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1005
Epoch 00038: saving model to training_1210\cp.ckpt
79/79 [==============================] - 246s 3s/step - loss: 2.3024 - acc: 0.1005 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 39/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00039: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3046 - val_acc: 0.0917
Epoch 40/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0941
Epoch 00040: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.0941 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 41/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0980
Epoch 00041: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.0980 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 42/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00042: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 43/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0946
Epoch 00043: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.0946 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 44/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00044: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 45/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0967
Epoch 00045: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.0967 - val_loss: 2.3048 - val_acc: 0.0929
Epoch 46/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0967
Epoch 00046: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3024 - acc: 0.0967 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 47/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00047: saving model to training_1210\cp.ckpt
79/79 [==============================] - 243s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 48/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1010
Epoch 00048: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1010 - val_loss: 2.3047 - val_acc: 0.0929
Epoch 49/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.0995
Epoch 00049: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.0995 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 50/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1036
Epoch 00050: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3023 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 51/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00051: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3046 - val_acc: 0.0917
Epoch 52/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1010
Epoch 00052: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.1010 - val_loss: 2.3046 - val_acc: 0.0917
Epoch 53/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00053: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 54/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.0954
Epoch 00054: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.0954 - val_loss: 2.3047 - val_acc: 0.0929
Epoch 55/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0992
Epoch 00055: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.0992 - val_loss: 2.3047 - val_acc: 0.0935
Epoch 56/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1026
Epoch 00056: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1026 - val_loss: 2.3046 - val_acc: 0.0929
Epoch 57/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1031
Epoch 00057: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.1031 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 58/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.0987
Epoch 00058: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.0987 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 59/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0972
Epoch 00059: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.0972 - val_loss: 2.3047 - val_acc: 0.0929
Epoch 60/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1031
Epoch 00060: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1031 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 61/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1036
Epoch 00061: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3023 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 62/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0957
Epoch 00062: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3024 - acc: 0.0957 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 63/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1036
Epoch 00063: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 64/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00064: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 65/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00065: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 66/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00066: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 67/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00067: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 68/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00068: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 69/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1013
Epoch 00069: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.1013 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 70/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1003
Epoch 00070: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1003 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 71/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0972
Epoch 00071: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.0972 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 72/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0987
Epoch 00072: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3024 - acc: 0.0987 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 73/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00073: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 74/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00074: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 75/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1036
Epoch 00075: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.1036 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 76/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1005
Epoch 00076: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1005 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 77/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.0964
Epoch 00077: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3023 - acc: 0.0964 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 78/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1036
Epoch 00078: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.1036 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 79/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1036
Epoch 00079: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3023 - acc: 0.1036 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 80/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00080: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 81/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00081: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3049 - val_acc: 0.0917
Epoch 82/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1013
Epoch 00082: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3024 - acc: 0.1013 - val_loss: 2.3046 - val_acc: 0.0917
Epoch 83/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00083: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3048 - val_acc: 0.0917
Epoch 84/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00084: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3046 - val_acc: 0.0917
Epoch 85/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00085: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 86/200
79/79 [==============================] - ETA: 0s - loss: 2.3023 - acc: 0.1036
Epoch 00086: saving model to training_1210\cp.ckpt
79/79 [==============================] - 245s 3s/step - loss: 2.3023 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 87/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00087: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3046 - val_acc: 0.0917
Epoch 88/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.0962
Epoch 00088: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.0962 - val_loss: 2.3047 - val_acc: 0.0917
Epoch 89/200
79/79 [==============================] - ETA: 0s - loss: 2.3024 - acc: 0.1036
Epoch 00089: saving model to training_1210\cp.ckpt
79/79 [==============================] - 244s 3s/step - loss: 2.3024 - acc: 0.1036 - val_loss: 2.3047 - val_acc: 0.0917

1 Answer

0 votes
07 August 2020

I think this is because your training set (the 79 batches per epoch in your log) is too small. You can:

  1. increase the training dataset by obtaining additional data, for example the GRID corpus - http://spandh.dcs.shef.ac.uk/gridcorpus/

or

  2. use pre-trained weights for transfer learning from another relevant neural network, such as LipNet - https://github.com/rizkiarm/LipNet. Their repository has step-by-step instructions; a rough sketch of the transfer-learning idea is shown below.
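
As an illustration of suggestion 2, here is a minimal Keras sketch of the transfer-learning idea, not LipNet's actual API. It assumes you already have a pretrained per-frame feature extractor saved as pretrained_convnet.h5 (a hypothetical file name); adapting LipNet's real weights would follow the steps in their repository.

import tensorflow as tf

def transfer_model(shapes=(10, 24, 48, 3), nbout=10):
    # Load a pretrained per-frame CNN and freeze it, so only the new head is trained.
    # "pretrained_convnet.h5" is a placeholder path for whatever extractor you use.
    convnet = tf.keras.models.load_model("pretrained_convnet.h5")
    convnet.trainable = False

    model = tf.keras.models.Sequential([
        tf.keras.layers.TimeDistributed(convnet, input_shape=shapes),
        tf.keras.layers.LSTM(100),
        tf.keras.layers.Dense(256, activation='relu'),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(nbout, activation='softmax'),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss='categorical_crossentropy',
                  metrics=['acc'])
    return model

Freezing the convnet leaves far fewer trainable parameters, which is usually what makes training on a small dataset workable.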
...