What is the maximum loss value when training a Keras model?
1 vote
/ 09 May 2019

I am training my model with Keras and I am trying to read the evaluation statistics. I know what the loss function is for, but what is the maximum possible value? The closer to zero the better, but I don't know whether 0.2 is good. I can also see the loss go up after some iterations while the accuracy goes up as well.

My code for training the model:

import numpy as np
import tensorflow as tf

def trainModel(bow, unitlabels, units):
    # Convert the bag-of-words features and integer labels to arrays.
    x_train = np.array(bow)
    print("X_train: ", x_train)
    y_train = np.array(unitlabels)
    print("Y_train: ", y_train)
    # Small feed-forward classifier: one hidden layer, dropout,
    # and a softmax output over the len(units) classes.
    model = tf.keras.models.Sequential([
            tf.keras.layers.Dense(256, activation=tf.nn.relu),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(len(units), activation=tf.nn.softmax)])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=50)
    return model
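
For reference: model.fit also returns a History object that records the per-epoch numbers printed in the log below, so the loss statistics can be read programmatically; a minimal sketch using the same names as above (the 'acc' key matches the TF 1.x metric name in this log, newer Keras versions use 'accuracy'):

history = model.fit(x_train, y_train, epochs=50)
print(history.history['loss'])  # list of per-epoch training loss values
print(history.history['acc'])   # list of per-epoch training accuracies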

and my results:

Epoch 1/50
1249/1249 [==============================] - 0s 361us/sample - loss: 0.8800 - acc: 0.7590
Epoch 2/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.4689 - acc: 0.8519
Epoch 3/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.3766 - acc: 0.8687
Epoch 4/50
1249/1249 [==============================] - 0s 92us/sample - loss: 0.3339 - acc: 0.8663
Epoch 5/50
1249/1249 [==============================] - 0s 89us/sample - loss: 0.3057 - acc: 0.8719
Epoch 6/50
1249/1249 [==============================] - 0s 87us/sample - loss: 0.2877 - acc: 0.8799
Epoch 7/50
1249/1249 [==============================] - 0s 88us/sample - loss: 0.2752 - acc: 0.8815
Epoch 8/50
1249/1249 [==============================] - 0s 89us/sample - loss: 0.2650 - acc: 0.8783
Epoch 9/50
1249/1249 [==============================] - 0s 92us/sample - loss: 0.2562 - acc: 0.8847
Epoch 10/50
1249/1249 [==============================] - 0s 91us/sample - loss: 0.2537 - acc: 0.8799
Epoch 11/50
1249/1249 [==============================] - 0s 89us/sample - loss: 0.2468 - acc: 0.8903
Epoch 12/50
1249/1249 [==============================] - 0s 88us/sample - loss: 0.2436 - acc: 0.8927
Epoch 13/50
1249/1249 [==============================] - 0s 89us/sample - loss: 0.2420 - acc: 0.8935
Epoch 14/50
1249/1249 [==============================] - 0s 88us/sample - loss: 0.2366 - acc: 0.8935
Epoch 15/50
1249/1249 [==============================] - 0s 94us/sample - loss: 0.2305 - acc: 0.8951
Epoch 16/50
1249/1249 [==============================] - 0s 98us/sample - loss: 0.2265 - acc: 0.8991
Epoch 17/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.2280 - acc: 0.8967
Epoch 18/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.2247 - acc: 0.8951
Epoch 19/50
1249/1249 [==============================] - 0s 92us/sample - loss: 0.2237 - acc: 0.8975
Epoch 20/50
1249/1249 [==============================] - 0s 102us/sample - loss: 0.2196 - acc: 0.8991
Epoch 21/50
1249/1249 [==============================] - 0s 102us/sample - loss: 0.2223 - acc: 0.8983
Epoch 22/50
1249/1249 [==============================] - 0s 102us/sample - loss: 0.2163 - acc: 0.8943
Epoch 23/50
1249/1249 [==============================] - 0s 100us/sample - loss: 0.2177 - acc: 0.8983
Epoch 24/50
1249/1249 [==============================] - 0s 101us/sample - loss: 0.2165 - acc: 0.8983
Epoch 25/50
1249/1249 [==============================] - 0s 100us/sample - loss: 0.2148 - acc: 0.9007
Epoch 26/50
1249/1249 [==============================] - 0s 98us/sample - loss: 0.2189 - acc: 0.8903
Epoch 27/50
1249/1249 [==============================] - 0s 98us/sample - loss: 0.2099 - acc: 0.9023
Epoch 28/50
1249/1249 [==============================] - 0s 98us/sample - loss: 0.2102 - acc: 0.9023
Epoch 29/50
1249/1249 [==============================] - 0s 94us/sample - loss: 0.2091 - acc: 0.8975
Epoch 30/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.2064 - acc: 0.9015
Epoch 31/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.2044 - acc: 0.9023
Epoch 32/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.2070 - acc: 0.9031
Epoch 33/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.2045 - acc: 0.9039
Epoch 34/50
1249/1249 [==============================] - 0s 94us/sample - loss: 0.2007 - acc: 0.9063
Epoch 35/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.1999 - acc: 0.9055
Epoch 36/50
1249/1249 [==============================] - 0s 103us/sample - loss: 0.2010 - acc: 0.9039
Epoch 37/50
1249/1249 [==============================] - 0s 111us/sample - loss: 0.2053 - acc: 0.9031
Epoch 38/50
1249/1249 [==============================] - 0s 99us/sample - loss: 0.2018 - acc: 0.9039
Epoch 39/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.2023 - acc: 0.9055
Epoch 40/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.2019 - acc: 0.9015
Epoch 41/50
1249/1249 [==============================] - 0s 92us/sample - loss: 0.2040 - acc: 0.8983
Epoch 42/50
1249/1249 [==============================] - 0s 103us/sample - loss: 0.2033 - acc: 0.8943
Epoch 43/50
1249/1249 [==============================] - 0s 97us/sample - loss: 0.2024 - acc: 0.9039
Epoch 44/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.2047 - acc: 0.9079
Epoch 45/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.1996 - acc: 0.9039
Epoch 46/50
1249/1249 [==============================] - 0s 91us/sample - loss: 0.1979 - acc: 0.9079
Epoch 47/50
1249/1249 [==============================] - 0s 90us/sample - loss: 0.1960 - acc: 0.9087
Epoch 48/50
1249/1249 [==============================] - 0s 97us/sample - loss: 0.1969 - acc: 0.9055
Epoch 49/50
1249/1249 [==============================] - 0s 99us/sample - loss: 0.1950 - acc: 0.9087
Epoch 50/50
1249/1249 [==============================] - 0s 98us/sample - loss: 0.1956 - acc: 0.9071

1 Answer

1 vote
/ 09 May 2019

The maximum loss for the cross-entropy loss occurs when you predict a uniform distribution over your classes: no tendency towards any class, so you get maximum entropy. (Strictly speaking, cross-entropy is unbounded above, since predicting probability 0 for the true class gives infinite loss; the uniform prediction is the worst case for a model that has learned nothing.) Looking at the formula:

L = -\frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} y_k^{(i)} \log\left(a_k^{(i)}\right)

You can calculate the maximum loss from it; usually the natural logarithm ln is used as the log. Since you have one-hot targets y^(i), the sum reduces to -log(a^(i)_k), and under the uniform assumption a^(i)_k = 1/len(units). For example, in binary classification take a = 0.5, and -ln(0.5) ≈ 0.693147, so the maximum loss is about 0.69.
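
A quick numerical check of that baseline (a minimal sketch, assuming TF 2.x eager execution and a hypothetical class count K):

import numpy as np
import tensorflow as tf

K = 4                              # hypothetical number of classes
y_true = np.arange(K)              # one integer label per class
y_pred = np.full((K, K), 1.0 / K)  # uniform predicted probabilities
# With one-hot targets, each sample's loss is -ln(1/K) = ln(K).
loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
print(loss.numpy())                # each value ≈ ln(4) ≈ 1.3863
print(np.log(K))                   # 1.3862943611198906

For your model the class count is len(units), so the no-information baseline is ln(len(units)); your final loss of about 0.195 sits below even the two-class baseline of 0.69.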

...