This is the model I used:
# Imports (standalone Keras 2.3.1):
from keras.layers import Input, Embedding, Dropout, TimeDistributed, LSTM, Dense
from keras.models import Model

# Input layer of the encoder:
encoder_input = Input(shape=(None,))
# Hidden layers of the encoder:
encoder_embedding = Embedding(input_dim=num_q_words, output_dim=vec_len)(encoder_input)
encoder_dropout = TimeDistributed(Dropout(rate=dropout_rate))(encoder_embedding)
encoder_LSTM = LSTM(latent_dim, return_sequences=True)(encoder_dropout)
# Output layer of the encoder:
encoder_LSTM2_layer = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder_LSTM2_layer(encoder_LSTM)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Input layer of the decoder:
decoder_input = Input(shape=(None,))
# Hidden layers of the decoder:
decoder_embedding_layer = Embedding(input_dim=num_a_words, output_dim=vec_len)
decoder_embedding = decoder_embedding_layer(decoder_input)
decoder_dropout_layer = TimeDistributed(Dropout(rate=dropout_rate))
decoder_dropout = decoder_dropout_layer(decoder_embedding)
decoder_LSTM_layer = LSTM(latent_dim, return_sequences=True)
decoder_LSTM = decoder_LSTM_layer(decoder_dropout, initial_state=encoder_states)
decoder_LSTM_2_layer = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_LSTM_2, _, _ = decoder_LSTM_2_layer(decoder_LSTM)
# Output layer of the decoder:
decoder_dense = Dense(num_a_words, activation='softmax')
decoder_outputs = decoder_dense(decoder_LSTM_2)
# Define the model that turns `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_input, decoder_input], decoder_outputs)
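For context, I compile and train this model before building the inference models; the call looks roughly like this (the optimizer, loss, batch_size and epochs below are placeholders in the style of the Keras seq2seq example, not my exact settings):
# Placeholder compile/fit, assuming decoder_target_data is one-hot encoded over num_a_words:
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size, epochs=epochs, validation_split=0.2)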
But when I try to run inference with this code:
# Encoder model:
encoder_model = Model(encoder_input, encoder_states)
# Input and Input States for the Decoder:
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
# Output from the Decoder:
decoder_outputs, state_h, state_c = decoder_LSTM_layer(decoder_dropout(decoder_input), initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
# Decoder Model:
decoder_model = Model(
[decoder_input] + decoder_states_inputs,
[decoder_outputs] + decoder_states)
I get the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-54-e293a5c0f9ee> in <module>
8
9 # Output from the Decoder:
---> 10 decoder_outputs, state_h, state_c = decoder_LSTM_layer(decoder_dropout(decoder_input), initial_state=decoder_states_inputs)
11 decoder_states = [state_h, state_c]
12 decoder_outputs = decoder_dense(decoder_outputs)
TypeError: 'Tensor' object is not callable
but I don't know how to fix it.
I'm fairly new to this field.
I'm using Keras version 2.3.1 with TensorFlow version 2.1.0.
If anyone can help me, I would be very grateful!