I am working on a computer vision problem: distinguishing forged signatures from genuine ones. The inputs to the network are two images.
For this I extract features from the 'block3_pool' layer of VGG16 (shape (None, 28, 28, 256)), and I created a layer that computes the square root of the absolute difference between each of the 256 filters of the two images.
However, the network fails to learn: it produces the same training and validation loss for every epoch. Tuning the learning rate does not help, and changing the architecture does not help either. The inputs to the network are two images, an anchor and a data image, each of shape (224, 224, 3).
I trained it on a very small dataset expecting it to overfit, but the network does not learn even after changing the dataset size.
The data is organized as follows: for each user there are 24 genuine and 24 forged signatures. Picking one image at random from the genuine set gives the anchor, and picking at random between genuine and forged gives the data image. So for each user I have an anchor array (24 copies of the same genuine sample) and a data array (24 images mixing genuine and forged samples); the shape of both the anchor and the data array for one user is (24, 224, 224, 3).
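A minimal sketch of how one user's batch could be assembled, using random arrays as stand-ins for the images (the array names, the loop, and the RNG seed are illustrative, not from the original code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy stand-ins for one user's signatures: 24 genuine, 24 forged images.
genuine = rng.random((24, 224, 224, 3))
forged = rng.random((24, 224, 224, 3))

# One random genuine sample, repeated 24 times, forms the anchor array.
anchor_idx = rng.integers(0, 24)
anchor = np.repeat(genuine[anchor_idx][None], 24, axis=0)

# For each pair, flip a coin: genuine (label 1) or forged (label 0),
# then draw a random sample from the chosen set as the data image.
labels = rng.integers(0, 2, size=24)
data = np.empty_like(anchor)
for i in range(24):
    src = genuine if labels[i] == 1 else forged
    data[i] = src[rng.integers(0, 24)]

print(anchor.shape, data.shape, labels.shape)
# (24, 224, 224, 3) (24, 224, 224, 3) (24,)
```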
# model hyperparameters, using the SGD optimizer
from keras import backend as K
from keras import optimizers
from keras.applications.vgg16 import VGG16
from keras.layers import Input, Dense, Lambda, concatenate
from keras.models import Model

epochs = 50
learning_rate = 0.1
decay = learning_rate / epochs
batch_size = 8
keep_prob = 0.8
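As a side note, Keras' SGD applies the `decay` argument per gradient update as lr_t = lr / (1 + decay * t). A quick numeric sketch of that schedule with the values above (the helper function is illustrative, not part of Keras):

```python
# Keras-style time-based learning-rate decay: lr_t = lr / (1 + decay * t),
# where t counts gradient updates, not epochs.
def decayed_lr(lr, decay, iterations):
    return lr / (1.0 + decay * iterations)

lr, decay = 0.1, 0.1 / 50  # learning_rate and decay from the hyperparameters above
print(decayed_lr(lr, decay, 0))    # first update: the full learning rate, 0.1
print(decayed_lr(lr, decay, 300))  # after 300 updates the rate has shrunk
```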
# This is the function for the lambda layer 'Layer_distance'
def root_diff(x):
    # x has shape (batch, 28, 28, 256, 2): the two encodings stacked on the last axis.
    # Sum the absolute per-pixel differences over the two spatial axes, then take
    # the square root, giving one distance value per filter: shape (batch, 256).
    diff = K.sqrt(K.sum(K.abs(x[:, :, :, :, 0] - x[:, :, :, :, 1]), axis=(1, 2)))
    return diff
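To see what `root_diff` produces, here is a NumPy equivalent run on random data with the same shapes as the stacked block3_pool encodings (a sketch, not the Keras code itself):

```python
import numpy as np

# NumPy equivalent of root_diff: x has shape (batch, 28, 28, 256, 2).
def root_diff_np(x):
    # Absolute difference of the two stacked encodings, summed over the
    # two spatial axes, then square-rooted -> one distance per filter.
    return np.sqrt(np.sum(np.abs(x[..., 0] - x[..., 1]), axis=(1, 2)))

x = np.random.rand(8, 28, 28, 256, 2)  # batch of 8 stacked encoding pairs
print(root_diff_np(x).shape)           # (8, 256)
```

Identical encodings give a zero distance vector, so the layer behaves as a per-filter dissimilarity measure.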
# This creates a feature extractor from a pre-trained VGG-16 model
def base_model(input_dims=(224, 224, 3), output_dims=128):
    vgg = VGG16(include_top=False, weights='imagenet', input_shape=input_dims)
    # Freeze all VGG-16 layers; only the layers added on top will train.
    for layer in vgg.layers:
        layer.trainable = False
    # Cut the network at 'block3_pool', which outputs shape (None, 28, 28, 256).
    x = vgg.get_layer('block3_pool').output
    model = Model(inputs=vgg.input, outputs=x)
    return model
def siamese_model(anchor, data, label, anchor_valid, data_valid, label_valid, input_shape=(224, 224, 3)):
    anchor_input = Input(input_shape)
    data_input = Input(input_shape)
    # ----------------------------Model begins from here----------------------------#
    model_vgg = base_model(input_dims=input_shape)
    encodings_anchor = model_vgg(anchor_input)
    encodings_data = model_vgg(data_input)
    # Add a trailing axis so the two encodings can be stacked along it.
    layer_expand_dims = Lambda(lambda x: K.expand_dims(x, axis=4))
    anchor_expanded = layer_expand_dims(encodings_anchor)
    data_expanded = layer_expand_dims(encodings_data)
    encodings = concatenate([anchor_expanded, data_expanded], axis=4)  # shape (None, 28, 28, 256, 2)
    Layer_distance = Lambda(root_diff)(encodings)  # gives a vector of shape (None, 256)
    dense_1 = Dense(256, activation=None, kernel_initializer='glorot_uniform', bias_initializer='zeros')(Layer_distance)
    prediction = Dense(1, activation='sigmoid', kernel_initializer='glorot_uniform')(dense_1)
    # Connect the inputs with the outputs
    siamese_net = Model(inputs=[anchor_input, data_input], outputs=prediction)
    print(siamese_net.summary())
    for layer in siamese_net.layers:
        print("Input shape: " + str(layer.input_shape) + ". Output shape: " + str(layer.output_shape))
    # Note: the decay computed from the hyperparameters above is not used here; 1e-9 is passed instead.
    sgd = optimizers.SGD(lr=learning_rate, decay=1e-9, momentum=0.9, nesterov=True)
    siamese_net.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
    history = siamese_net.fit(x=[anchor, data], y=label, batch_size=batch_size, epochs=epochs,
                              validation_data=([anchor_valid, data_valid], label_valid))
Model summary (siamese_net)
Layer (type)                    Output Shape             Param #   Connected to
==================================================================================================
input_1 (InputLayer)            (None, 224, 224, 3)      0
__________________________________________________________________________________________________
input_2 (InputLayer)            (None, 224, 224, 3)      0
__________________________________________________________________________________________________
model_1 (Model)                 (None, 28, 28, 256)      1735488   input_1[0][0]
                                                                   input_2[0][0]
__________________________________________________________________________________________________
lambda_1 (Lambda)               (None, 28, 28, 256, 1)   0         model_1[1][0]
                                                                   model_1[2][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 28, 28, 256, 2)   0         lambda_1[0][0]
                                                                   lambda_1[1][0]
__________________________________________________________________________________________________
lambda_2 (Lambda)               (None, 256)               0         concatenate_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 256)               65792     lambda_2[0][0]
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 1)                 257       dense_1[0][0]
==================================================================================================
Total params: 1,801,537
Trainable params: 66,049
Non-trainable params: 1,735,488
__________________________________________________________________________________________________
Training output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 48 samples, validate on 48 samples
Epoch 1/50
2019-04-21 06:10:00.354542: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
48/48 [==============================] - 4s 90ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 2/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 3/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
... (epochs 4 through 49 print the identical metrics: loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583) ...
Epoch 50/50
48/48 [==============================] - 1s 19ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Saved model to disk