I am using TensorFlow Keras to build a simple 3D CNN model.
import keras
from keras import layers
import tensorflow as tf

inputs = keras.Input(shape=(65, 65, 65, 1), name='t1_image')
x = layers.Conv3D(16, (4, 4, 4), name='cnn_1')(inputs)
x = layers.Dropout(0.3)(x)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU()(x)
x = layers.Conv3D(24, (3, 3, 3), name='cnn_2')(x)
x = layers.Dropout(0.3)(x)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU()(x)
x = layers.MaxPooling3D((2, 2, 2), name='max_pool_1')(x)
x = layers.Conv3D(28, (3, 3, 3), name='cnn_3')(x)
x = layers.Dropout(0.3)(x)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU()(x)
x = layers.MaxPooling3D((2, 2, 2), name='max_pool_2')(x)
x = layers.Conv3D(34, (4, 4, 4), name='cnn_4')(x)
x = layers.Dropout(0.3)(x)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU()(x)
x = layers.Conv3D(2, (4, 4, 4), name='cnn_5')(x)
x = layers.Dropout(0.3)(x)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU()(x)
outputs = layers.Dense(1, activation='sigmoid', name='predictions')(x)
#print(outputs.shape)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=2e-5),
              loss=tf.keras.losses.KLDivergence(), metrics=['accuracy'])
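As a sanity check, the spatial output size of this stack can be traced by hand (a sketch assuming the Keras defaults of 'valid' padding and stride 1):

```python
# Spatial size through the network, one dimension at a time
# (all three spatial dims are identical here).
def conv_out(n, k):
    # 'valid' convolution, stride 1: output = n - k + 1
    return n - k + 1

def pool_out(n, p):
    # non-overlapping max pooling with window p
    return n // p

n = 65
n = conv_out(n, 4)   # cnn_1      -> 62
n = conv_out(n, 3)   # cnn_2      -> 60
n = pool_out(n, 2)   # max_pool_1 -> 30
n = conv_out(n, 3)   # cnn_3      -> 28
n = pool_out(n, 2)   # max_pool_2 -> 14
n = conv_out(n, 4)   # cnn_4      -> 11
n = conv_out(n, 4)   # cnn_5      -> 8
print(n)  # 8
```

So the final Dense(1) layer yields the (None, 8, 8, 8, 1) output seen in the debug print below.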
From the debug print, the output shape is (None, 8, 8, 8, 1), and my label shape is also (8, 8, 8, 1). So essentially I want to compute the KL divergence between two cubes.
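The reduction behaviour behind those shapes can be sketched in NumPy (this imitates what Keras' KLDivergence does per sample; it is not the library code itself):

```python
import numpy as np

rng = np.random.default_rng(0)
# two "cubes" shaped like one label sample: (8, 8, 8, 1)
y_true = rng.random((8, 8, 8, 1))
y_pred = rng.random((8, 8, 8, 1))

# elementwise KL term, y_true * log(y_true / y_pred),
# with clipping to avoid log(0)
eps = 1e-7
yt = np.clip(y_true, eps, 1.0)
yp = np.clip(y_pred, eps, 1.0)
kl_elementwise = yt * np.log(yt / yp)

# Keras losses reduce over the LAST axis only, so the per-element loss
# keeps the spatial axes: (8, 8, 8) here, or (None, 8, 8, 8) with batch.
per_sample = kl_elementwise.sum(axis=-1)
print(per_sample.shape)  # (8, 8, 8)
```

That rank-4 per-element loss, (None, 8, 8, 8), is exactly the `values.shape` reported in the error below.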
However, I get this error message:
Traceback (most recent call last):
File "new_seg.py", line 136, in <module>
loss=tf.keras.losses.KLDivergence(), metrics=['accuracy'])
File "/N/soft/rhel7/deeplearning/Python-3.7.6/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py", line 75, in symbolic_fn_wrapper
return func(*args, **kwargs)
File "/N/soft/rhel7/deeplearning/Python-3.7.6/lib/python3.7/site-packages/keras/engine/training.py", line 229, in compile
self.total_loss = self._prepare_total_loss(masks)
File "/N/soft/rhel7/deeplearning/Python-3.7.6/lib/python3.7/site-packages/keras/engine/training.py", line 692, in _prepare_total_loss
y_true, y_pred, sample_weight=sample_weight)
File "/N/u/jp109/Carbonate/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/losses.py", line 128, in __call__
losses, sample_weight, reduction=self._get_reduction())
File "/N/u/jp109/Carbonate/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/losses_utils.py", line 107, in compute_weighted_loss
losses, sample_weight)
File "/N/u/jp109/Carbonate/.local/lib/python3.7/site-packages/tensorflow_core/python/ops/losses/util.py", line 148, in scale_losses_by_sample_weight
sample_weight = weights_broadcast_ops.broadcast_weights(sample_weight, losses)
File "/N/u/jp109/Carbonate/.local/lib/python3.7/site-packages/tensorflow_core/python/ops/weights_broadcast_ops.py", line 167, in broadcast_weights
with ops.control_dependencies((assert_broadcastable(weights, values),)):
File "/N/u/jp109/Carbonate/.local/lib/python3.7/site-packages/tensorflow_core/python/ops/weights_broadcast_ops.py", line 103, in assert_broadcastable
weights_rank_static, values.shape, weights.shape))
ValueError: weights can not be broadcast to values. values.rank=4. weights.rank=1. values.shape=(None, 8, 8, 8). weights.shape=(None,).
I assume the important line is:
ValueError: weights can not be broadcast to values. values.rank=4. weights.rank=1. values.shape=(None, 8, 8, 8). weights.shape=(None,).
which comes from this line:
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=2e-5),
              loss=tf.keras.losses.KLDivergence(), metrics=['accuracy'])
I do not understand what role weights play here, or why the loss function fails.
Does anyone know, or have any suggestions?
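For reference, the rank mismatch the error describes can be reproduced with plain NumPy (TF's `assert_broadcastable` is stricter still, requiring equal ranks, but NumPy's trailing-axis alignment shows the same problem):

```python
import numpy as np

batch = 4
# per-element loss values before weighting: rank 4, like (None, 8, 8, 8)
losses = np.zeros((batch, 8, 8, 8))
# default per-sample weights: rank 1, like (None,)
weights = np.ones(batch)

# NumPy aligns shapes from the trailing axis, so (4,) lines up with the
# last spatial axis (8), not the batch axis -- broadcasting fails:
try:
    _ = losses * weights
    broadcast_ok = True
except ValueError:
    broadcast_ok = False
print(broadcast_ok)  # False

# Reshaping the weights to (batch, 1, 1, 1) makes them broadcastable:
weighted = losses * weights.reshape(batch, 1, 1, 1)
print(weighted.shape)  # (4, 8, 8, 8)
```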