The shapes of the variables involved are:

print(x_train.shape)  # (1750, 784)
print(y_train.shape)  # (1750, 10)
print(x_test.shape)   # (749, 784)
print(y_test.shape)   # (749, 10)
orig_dims = 784
inner_dims = 10
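For concreteness, here is a minimal numpy sketch of these shapes (the zero-filled arrays are hypothetical stand-ins for the real data); it shows how the flat (N, 784) arrays relate to the (N, 28, 28, 1) image form used later:

```python
import numpy as np

# Hypothetical stand-in data mirroring the shapes above.
n_train, n_test = 1750, 749
x_train = np.zeros((n_train, 784))   # flat pixel vectors, 28*28 = 784
y_train = np.zeros((n_train, 10))    # one-hot labels
x_test = np.zeros((n_test, 784))
y_test = np.zeros((n_test, 10))

# Reshape to image form for the Conv2D input, as done in the code below.
x_train_img = x_train.reshape(n_train, 28, 28, 1)
x_test_img = x_test.reshape(n_test, 28, 28, 1)

print(x_train_img.shape)  # (1750, 28, 28, 1)
print(x_test_img.shape)   # (749, 28, 28, 1)
```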
The code is below. The error I get, on the line 'validation_data=(x_test, [y_test, x_test]))', is:

InvalidArgumentError: Incompatible shapes: [1750,784] vs. [1750,28,28,1]
[[Node: loss_18/dense_400_loss/logistic_loss/mul = Mul[T=DT_FLOAT, _class=["loc:@train...ad/Reshape"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss_18/dense_400_loss/Log, _arg_dense_400_target_0_2)]]
x_test = x_test.reshape(749, 28, 28, 1)
x_train = x_train.reshape(1750, 28, 28, 1)

# Create first input and dense layer
input_layer = Input(shape=(28, 28, 1))
x = Conv2D(64, (3, 3), strides=(1, 1), name='layer_conv1', padding='same',
           input_shape=(28, 28, 1))(input_layer)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D((2, 2), name='maxPool1')(x)
x1 = Flatten()(x)
flatLayer1 = Dense(64, activation='relu', name='fc0')(x1)
encoded_layer = Dense(inner_dims, activation=activation_f)(flatLayer1)

# Create (layers - 1) dense layers
for i in xrange(layers - 1):
    encoded_layer = Dense(inner_dims, activation=activation_f)(encoded_layer)

# Initialize shared hidden-state layer
hidden_state = Dense(orig_dims, activation=activation_f, name='h')
# Create latent layer to output
encoded = hidden_state(encoded_layer)
# Create latent layer for decoder
encoded_output = hidden_state(encoded_layer)

# Create decoder
decoded = Dense(inner_dims, activation=activation_f)(encoded_output)
for i in xrange(layers - 1):
    decoded = Dense(inner_dims, activation=activation_f)(decoded)
output_layer = Dense(orig_dims, activation=activation_f)(decoded)
###output_layer = Dense(10, activation=activation_f)(decoded)

encoder = Model(input_layer, encoded)
encoder_2 = Model(input_layer, encoded_layer)

# Loss wrappers for Keras
def custom_loss1(y_true, y_pred):
    bcro = losses.binary_crossentropy(y_true, encoded)
    return bcro

def custom_loss2(y_true, y_pred):
    recon_loss = losses.binary_crossentropy(y_true, y_pred)
    return recon_loss

autoencoder = Model(input_layer, outputs=[encoded, output_layer])
autoencoder.compile(optimizer='adadelta', loss=[custom_loss1, custom_loss2],
                    loss_weights=[0.1, 1.])
autoencoder.fit(x_train, [y_train, x_train],
                batch_size=batch_size,
                epochs=epochs,
                shuffle=True,
                validation_data=(x_test, [y_test, x_test]))
How can this be fixed?