My training and validation accuracy is very low and does not increase with each epoch - PullRequest
0 votes
/ 09 July 2020

My training and validation accuracy is quite low (around 0.1), and neither increases from epoch to epoch.

Here is my code:


from keras.datasets import cifar10
(x_train_orig,y_train_orig),(x_test_orig,y_test_orig)=cifar10.load_data()


from matplotlib import pyplot
from sklearn.preprocessing import OneHotEncoder
from keras.layers import Conv2D,BatchNormalization,MaxPool2D,Flatten,Dropout,Dense,Activation
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator,array_to_img,img_to_array,load_img


for i in range(12):
    pyplot.subplot(3,4,i+1)
    pyplot.imshow(x_train_orig[i])
pyplot.show()

def data_preprocessing(x, y):
    x = x / 255.0  # scale pixel values to [0, 1]

    # categorical_features was deprecated in scikit-learn 0.20 and later
    # removed; the default already encodes all columns
    one = OneHotEncoder()
    y = one.fit_transform(y).toarray()

    return x, y

x_test,y_test=data_preprocessing(x_test_orig,y_test_orig)
x_train,y_train=data_preprocessing(x_train_orig,y_train_orig)
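As an aside, the one-hot step above can be reproduced without scikit-learn at all. A minimal NumPy sketch (my own illustration, not part of the original post) of what `OneHotEncoder` produces for CIFAR-10's `(n, 1)` integer label column:

```python
import numpy as np

# Hypothetical label column shaped like y_train_orig: (n, 1) class ids 0..9.
y = np.array([[3], [0], [9]])

# Index rows of a 10x10 identity matrix to get one-hot vectors.
y_onehot = np.eye(10)[y.ravel()]  # shape (3, 10)
```

Each row contains a single 1 at the class index, matching the dense array returned by `fit_transform(...).toarray()`.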

classifier=Sequential()

classifier.add(Conv2D(16,kernel_size=(1,1),strides=(1,1),activation="relu",input_shape=(32,32,3)))
classifier.add(Conv2D(16,kernel_size=(2,2),strides=(1,1)))
classifier.add(BatchNormalization(axis=3))
classifier.add(Activation("relu"))

classifier.add(Conv2D(32,kernel_size=(2,2),strides=(1,1),padding="same"))
classifier.add(MaxPool2D(pool_size=(2,2)))
classifier.add(BatchNormalization(axis=3))
classifier.add(Activation("relu"))

classifier.add(Flatten())
classifier.add(Dense(32,activation="relu"))
classifier.add(Dropout(0.2))
classifier.add(Dense(10,activation="softmax"))



classifier.compile(optimizer="adam",loss="categorical_crossentropy",metrics=["accuracy"])

classifier.fit(x_train,y_train,batch_size=16,epochs=20,validation_data=(x_test,y_test))

Here is the output:

classifier.fit(x_train,y_train,batch_size=16,epochs=20,validation_data=(x_test,y_test))
Train on 50000 samples, validate on 10000 samples
Epoch 1/20
50000/50000 [==============================] - 73s 1ms/step - loss: 2.3040 - accuracy: 0.0997 - val_loss: 2.3026 - val_accuracy: 0.1000
Epoch 2/20
50000/50000 [==============================] - 59s 1ms/step - loss: 2.3029 - accuracy: 0.0980 - val_loss: 2.3026 - val_accuracy: 0.1000
Epoch 3/20
50000/50000 [==============================] - 60s 1ms/step - loss: 2.3028 - accuracy: 0.1005 - val_loss: 2.3028 - val_accuracy: 0.1000
Epoch 4/20
50000/50000 [==============================] - 60s 1ms/step - loss: 2.3028 - accuracy: 0.0990 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 5/20
50000/50000 [==============================] - 60s 1ms/step - loss: 2.3029 - accuracy: 0.0991 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 6/20
50000/50000 [==============================] - 61s 1ms/step - loss: 2.3028 - accuracy: 0.0975 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 7/20
50000/50000 [==============================] - 60s 1ms/step - loss: 2.3029 - accuracy: 0.0981 - val_loss: 2.3026 - val_accuracy: 0.1000
Epoch 8/20
50000/50000 [==============================] - 61s 1ms/step - loss: 2.3028 - accuracy: 0.0987 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 9/20
50000/50000 [==============================] - 60s 1ms/step - loss: 2.3028 - accuracy: 0.0993 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 10/20
50000/50000 [==============================] - 61s 1ms/step - loss: 2.3029 - accuracy: 0.0987 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 11/20
50000/50000 [==============================] - 61s 1ms/step - loss: 2.3029 - accuracy: 0.0987 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 12/20
50000/50000 [==============================] - 57s 1ms/step - loss: 2.3028 - accuracy: 0.0970 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 13/20
50000/50000 [==============================] - 51s 1ms/step - loss: 2.3029 - accuracy: 0.0978 - val_loss: 2.3026 - val_accuracy: 0.1000
Epoch 14/20
50000/50000 [==============================] - 51s 1ms/step - loss: 2.3029 - accuracy: 0.0994 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 15/20
50000/50000 [==============================] - 51s 1ms/step - loss: 2.3028 - accuracy: 0.1006 - val_loss: 2.3028 - val_accuracy: 0.1000
Epoch 16/20
50000/50000 [==============================] - 52s 1ms/step - loss: 2.3029 - accuracy: 0.0981 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 17/20
50000/50000 [==============================] - 51s 1ms/step - loss: 2.3028 - accuracy: 0.0993 - val_loss: 2.3026 - val_accuracy: 0.1000
Epoch 18/20
50000/50000 [==============================] - 52s 1ms/step - loss: 2.3029 - accuracy: 0.0977 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 19/20
50000/50000 [==============================] - 52s 1ms/step - loss: 2.3029 - accuracy: 0.0983 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 20/20
50000/50000 [==============================] - 51s 1ms/step - loss: 2.3029 - accuracy: 0.0977 - val_loss: 2.3027 - val_accuracy: 0.1000
Out[27]: <keras.callbacks.callbacks.History at 0x19fdc77a308>

Here is the summary:

Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_7 (Conv2D)            (None, 32, 32, 16)        64        
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 31, 31, 16)        1040      
_________________________________________________________________
batch_normalization_1 (Batch (None, 31, 31, 16)        64        
_________________________________________________________________
activation_1 (Activation)    (None, 31, 31, 16)        0         
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 31, 31, 32)        2080      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 15, 15, 32)        0         
_________________________________________________________________
batch_normalization_2 (Batch (None, 15, 15, 32)        128       
_________________________________________________________________
activation_2 (Activation)    (None, 15, 15, 32)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 7200)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 32)                230432    
_________________________________________________________________
dropout_1 (Dropout)          (None, 32)                0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                330       
=================================================================
Total params: 234,138
Trainable params: 234,042
Non-trainable params: 96
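As a quick check (my own arithmetic, not part of the original post), the larger parameter counts in the summary follow directly from the layer shapes, e.g. the first Dense layer sees the flattened 15×15×32 feature map:

```python
# Recompute the Dense parameter counts shown in the model summary.
flat = 15 * 15 * 32        # Flatten output after the 2x2 max-pool: 7200 units
dense1 = flat * 32 + 32    # Dense(32): weights + biases
dense2 = 32 * 10 + 10      # Dense(10) softmax head
print(flat, dense1, dense2)  # 7200 230432 330
```

These match the 7200, 230,432, and 330 entries in the summary table above.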

I trained this network for 20 epochs, but the training accuracy did not improve. This is the CIFAR-10 dataset, with 50,000 training and 10,000 test samples. What should I do? Thanks for the help.
