ValueError: Shapes (None, 50) and (None, 1) are incompatible in TensorFlow and Colab
2 votes
/ 17 April 2020

I am training a TensorFlow model with an LSTM for predictive maintenance. For each instance I build a (50, 4) matrix, where 50 is the length of the history sequence and 4 is the number of features per record, so to train the model I use, for example, a (55048, 50, 4) tensor as input and a (55048, 1) tensor as labels. When I train in Jupyter on my machine it works (very slowly, but it works), but on Colab I get this error:


Training data shape is (55048, 50, 4)
Labels shape is (55048, 1)
WARNING:tensorflow:Layer lstm will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm (LSTM)                  (None, 50, 100)           42000     
_________________________________________________________________
dense (Dense)                (None, 50, 1)             101       
=================================================================
Total params: 42,101
Trainable params: 42,101
Non-trainable params: 0
_________________________________________________________________
Epoch 1/50
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.

ValueError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function  *
        outputs = self.distribute_strategy.run(
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:543 train_step  **
        self.compiled_metrics.update_state(y, y_pred, sample_weight)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:406 update_state
        metric_obj.update_state(y_t, y_p)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/metrics_utils.py:90 decorated
        update_op = update_state_fn(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/metrics.py:2083 update_state
        label_weights=label_weights)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/metrics_utils.py:351 update_confusion_matrix_variables
        y_pred.shape.assert_is_compatible_with(y_true.shape)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py:1117 assert_is_compatible_with
        raise ValueError("Shapes %s and %s are incompatible" % (self, other))

    ValueError: Shapes (None, 50) and (None, 1) are incompatible

I am sharing some code snippets with you; I know it is quite long:

def build_lstm(train_data, train_labels, structure=(100,), epochs=50, activation_fun="relu", dropout_rate=0.1,
             loss_function="binary_crossentropy", optimizer="adagrad", val_split=0.2, seq_length=50):
    #n_features = len(train_data.columns)
    print("Train data is\n",train_data)
    acceptable_ids = [idx for idx in train_data['id'].unique() if train_data[train_data['id']==idx].shape[0]>seq_length]
    seq_gen = [list(gen_sequence(train_data[train_data['id']==idx], seq_length)) for idx in acceptable_ids]
    print("Seq gen is\n")
    print(np.array(seq_gen).shape)
    seq_array = np.concatenate(seq_gen,0).astype(np.float32)
    print("Training data shape is", seq_array.shape)
    #train_labels = np.asarray(train_labels).astype('float32').reshape((-1,1))
    label_gen = [gen_labels(train_labels[train_labels['id']==idx], seq_length) for idx in acceptable_ids]
    label_array = np.concatenate(label_gen).astype(np.float32)
    print("Labels shape is", label_array.shape)
    first_layer=True
    model = tf.keras.Sequential()
    for layer_nodes in structure:
        if first_layer:
            model.add(LSTM(layer_nodes, activation=activation_fun, input_shape=(seq_length,train_data.shape[1]-1),
                           dropout=dropout_rate, return_sequences=True))
            first_layer=False
        else:
            model.add(LSTM(layer_nodes, activation=activation_fun, 
                           dropout=dropout_rate, return_sequences=False))
    model.add(Dense(1, activation='sigmoid'))
    model.summary()
    model.compile(loss=loss_function, 
                       optimizer=optimizer, 
                       metrics=['AUC','accuracy'])
    history = model.fit(seq_array,label_array, epochs=epochs, shuffle=True, validation_split=val_split, callbacks=[earlystop_callback])
    return model

def gen_sequence(id_df, seq_length):
    """ Only sequences that meet the window-length are considered, no padding is used. This means for testing
    we need to drop those which are below the window-length. An alternative would be to pad sequences so that
    we can use shorter ones """
    # for one id I put all the rows in a single matrix
    data_matrix = id_df.drop("id", axis=1).values
    num_elements = data_matrix.shape[0]
    # Iterate over two lists in parallel.
    # For example, id1 has 192 rows and sequence_length is equal to 50,
    # so zip iterates over the two following lists of numbers: (0,142),(50,192)
    # 0 50 -> from row 0 to row 50
    # 1 51 -> from row 1 to row 51
    # 2 52 -> from row 2 to row 52
    # ...
    # 141 191 -> from row 141 to row 191
    for start, stop in zip(range(0, num_elements-seq_length), range(seq_length, num_elements)):
        #print(data_matrix[start:stop, :],"\n")
        yield data_matrix[start:stop, :] 
def gen_labels(id_df, seq_length):
    data_array = id_df.drop("id", axis=1).values
    num_elements = data_array.shape[0]
    return data_array[seq_length:num_elements, :]
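
For illustration, a quick shape check of these two helpers on a toy single-id DataFrame (the data and column names below are hypothetical):

import numpy as np
import pandas as pd

# Hypothetical toy data: one id with 60 rows and 4 feature columns.
df = pd.DataFrame(np.random.rand(60, 4), columns=['s1', 's2', 's3', 's4'])
df.insert(0, 'id', 1)
labels = pd.DataFrame({'id': 1, 'label': np.random.randint(0, 2, 60)})

# 60 rows with seq_length=50 yield 10 sliding windows of shape (50, 4),
# plus one label per window (the label at each window's stop row).
windows = np.array(list(gen_sequence(df, 50)))
targets = gen_labels(labels, 50)
print(windows.shape)  # (10, 50, 4)
print(targets.shape)  # (10, 1)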

...
for comb_hyp in hyp_combinations:
    for id_validation in training_folds_2:
        print(id_validation)
        ## SEPARATE TRAINING SET AND VALIDATION SET
        X_val = X[X.id.isin(id_validation)].copy()
        X_train = X[~X.id.isin(id_validation)].copy()
        y_val = y[y.id.isin(id_validation)].copy()
        y_train = y[~y.id.isin(id_validation)].copy()
        ## TRAIN THE CLASSIFIER
        clf = build_lstm(train_data=X_train, train_labels=y_train, structure=comb_hyp[2], epochs=EPOCHS, activation_fun=comb_hyp[0], optimizer=SOLVER, seq_length=SEQ_LENGTH)
...

Why does this work in Jupyter but not in Colab? Thank you for your attention.

Answers [ 2 ]

0 votes
/ 05 May 2020

I was running with the runtime set to GPU. It works if, as the last layer, I use not a single-node Dense layer (for binary classification) but a single-node LSTM layer. This is most likely because the final LSTM uses return_sequences=False by default, so it collapses the time dimension and outputs (None, 1), which matches the labels. Thanks for your answers.
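
For illustration, here is a minimal sketch of the two heads (a reconstruction from the model summary in the question, not the exact training code):

import tensorflow as tf
from tensorflow.keras.layers import LSTM, Dense

# Failing head: return_sequences=True keeps the time axis, so Dense(1)
# is applied to every timestep and outputs (None, 50, 1), which the
# AUC/accuracy metrics then compare against (None, 1) labels.
failing = tf.keras.Sequential([
    LSTM(100, input_shape=(50, 4), return_sequences=True),
    Dense(1, activation='sigmoid'),
])
print(failing.output_shape)  # (None, 50, 1)

# Working head from this answer: the final single-node LSTM uses the
# default return_sequences=False and collapses the time dimension.
working = tf.keras.Sequential([
    LSTM(100, input_shape=(50, 4), return_sequences=True),
    LSTM(1, activation='sigmoid'),
])
print(working.output_shape)  # (None, 1)

An equivalent fix is to keep the Dense(1) head and build the last 100-unit LSTM with return_sequences=False, which produces the same (None, 1) output.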

0 votes
/ 01 May 2020

In my case I uninstalled tensorflow and then installed tensorflow-gpu, and the problem was solved.
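
For reference, in Colab this reinstall can be run from a notebook cell roughly like this (tensorflow-gpu existed as a separate package only for TF 1.x and early 2.x):

!pip uninstall -y tensorflow
!pip install tensorflow-gpu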
