InvalidArgumentError: 2 root error(s) found. Incompatible shapes in a TensorFlow text classification model - PullRequest
4 votes
/ January 22, 2020

I'm trying to get the code from the following repo, based on this paper, to work. It had a lot of errors, but I've mostly gotten it running. However, I keep hitting the same problem, and I really don't understand how to fix it or even what is going wrong.

The error occurs the second time the evaluation if-statement criteria are met. It always works the first time, then breaks on the second. I'm including the output it prints before breaking, in case that's helpful. See the error below:

step = 1, train_loss = 1204.7784423828125, train_accuracy = 0.13725490868091583
counter = 1, dev_loss = 1188.6639287274584, dev_accuacy = 0.2814199453625912
step = 2, train_loss = 1000.983154296875, train_accuracy = 0.26249998807907104
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
   1364     try:
-> 1365       return fn(*args)
   1366     except errors.OpError as e:

(7 frames omitted)
InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: Incompatible shapes: [2,185] vs. [2,229]
	 [[{{node loss/cond/add_1}}]]
	 [[viterbi_decode/cond/rnn_1/while/Switch_3/_541]]
  (1) Invalid argument: Incompatible shapes: [2,185] vs. [2,229]
	 [[{{node loss/cond/add_1}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
   1382                     '\nsession_config.graph_options.rewrite_options.'
   1383                     'disable_meta_optimizer = True')
-> 1384       raise type(e)(node_def, op, message)
   1385 
   1386   def _extend_graph(self):

InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: Incompatible shapes: [2,185] vs. [2,229]
	 [[node loss/cond/add_1 (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
	 [[viterbi_decode/cond/rnn_1/while/Switch_3/_541]]
  (1) Invalid argument: Incompatible shapes: [2,185] vs. [2,229]
	 [[node loss/cond/add_1 (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'loss/cond/add_1':
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py", line 16, in <module>
    app.launch_new_instance()
  File "/usr/local/lib/python3.6/dist-packages/traitlets/config/application.py", line 664, in launch_instance
    app.start()
  File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelapp.py", line 477, in start
    ioloop.IOLoop.instance().start()
  File "/usr/local/lib/python3.6/dist-packages/tornado/ioloop.py", line 888, in start
    handler_func(fd_obj, events)
  File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 450, in _handle_events
    self._handle_recv()
  File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 480, in _handle_recv
    self._run_callback(callback, msg)
  File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 432, in _run_callback
    callback(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 235, in dispatch_shell
    handler(stream, idents, msg)
  File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
    user_expressions, allow_stdin)
  File "/usr/local/lib/python3.6/dist-packages/ipykernel/ipkernel.py", line 196, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "/usr/local/lib/python3.6/dist-packages/ipykernel/zmqshell.py", line 533, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
    if self.run_code(code, result):
  File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-11-90859dc83f76>", line 66, in <module>
    main()
  File "<ipython-input-11-90859dc83f76>", line 12, in main
    model = DAModel()
  File "<ipython-input-9-682db36e2a23>", line 148, in __init__
    self.logits, self.labels, self.dialogue_lengths)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/contrib/crf/python/ops/crf.py", line 257, in crf_log_likelihood
    transition_params)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/contrib/crf/python/ops/crf.py", line 116, in crf_sequence_score
    false_fn=_multi_seq_fn)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/layers/utils.py", line 202, in smart_cond
    pred, true_fn=true_fn, false_fn=false_fn, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/smart_cond.py", line 59, in smart_cond
    name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/control_flow_ops.py", line 1235, in cond
    orig_res_f, res_f = context_f.BuildCondBranch(false_fn)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/control_flow_ops.py", line 1061, in BuildCondBranch
    original_result = fn()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/contrib/crf/python/ops/crf.py", line 104, in _multi_seq_fn
    unary_scores = crf_unary_score(tag_indices, sequence_lengths, inputs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/contrib/crf/python/ops/crf.py", line 287, in crf_unary_score
    flattened_tag_indices = array_ops.reshape(offsets + tag_indices, [-1])
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/math_ops.py", line 899, in binary_op_wrapper
    return func(x, y, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/math_ops.py", line 1197, in _add_dispatch
    return gen_math_ops.add_v2(x, y, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/gen_math_ops.py", line 549, in add_v2
    "AddV2", x=x, y=y, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
    op_def=op_def)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
    self._traceback = tf_stack.extract_stack()

Here is the code (which differs slightly from the repo in order to run):

Versions: Python 3

tensorflow == 1.15.0

pandas == 0.25.3

numpy == 1.17.5

import glob
import pandas as pd
import tensorflow as tf
import numpy as np


# preprocess data

file_list = []
for f in glob.glob('swda/*'):
  file_list.append(f)

df_list = []
for i in file_list:
  df = pd.read_csv(i)
  df_list.append(df)

text_list = []
label_list = []

for df in df_list:
  df['utterance_no_specialchar_'] = df.utterance_no_specialchar.astype(str)
  text = df.utterance_no_specialchar_.tolist()
  labels = df.da_category.tolist()
  text_list.append(text)
  label_list.append(labels)

### new preprocessing step: wrap each utterance string in a singleton list
text_list = [[[j] for j in i] for i in text_list]

tok_data = [y[0] for x in text_list for y in x]

tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(tok_data)

sequences = []
for x in text_list:
  tmp = []
  for y in x:
    tmp.append(tokenizer.texts_to_sequences(y)[0])
  sequences.append(tmp)

def _pad_sequences(sequences, pad_tok, max_length):
    """
    Args:
        sequences: a generator of list or tuple
        pad_tok: the char to pad with
        max_length: the length to pad or truncate each sequence to
    Returns:
        a list of list where each sublist has same length
    """
    sequence_padded, sequence_length = [], []

    for seq in sequences:
        seq = list(seq)
        seq_ = seq[:max_length] + [pad_tok]*max(max_length - len(seq), 0)
        sequence_padded +=  [seq_]
        sequence_length += [min(len(seq), max_length)]

    return sequence_padded, sequence_length

def pad_sequences(sequences, pad_tok, nlevels=1):
    """
    Args:
        sequences: a generator of list or tuple
        pad_tok: the char to pad with
        nlevels: "depth" of padding, for the case where we have characters ids
    Returns:
        a list of list where each sublist has same length
    """
    if nlevels == 1:
        max_length = max(map(lambda x : len(x), sequences))
        sequence_padded, sequence_length = _pad_sequences(sequences,
                                            pad_tok, max_length)

    elif nlevels == 2:
        max_length_word = max([max(map(lambda x: len(x), seq))
                               for seq in sequences])
        sequence_padded, sequence_length = [], []
        for seq in sequences:
            # all words are same length now
            sp, sl = _pad_sequences(seq, pad_tok, max_length_word)
            sequence_padded += [sp]
            sequence_length += [sl]

        max_length_sentence = max(map(lambda x : len(x), sequences))
        
        sequence_padded, _ = _pad_sequences(sequence_padded,
                [pad_tok]*max_length_word, max_length_sentence)
        sequence_length, _ = _pad_sequences(sequence_length, 0,
                max_length_sentence)

    return sequence_padded, sequence_length
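
# For example (an illustrative check, not in the original code):
#   pad_sequences([[1, 2, 3], [4]], 0) returns ([[1, 2, 3], [4, 0, 0]], [3, 1])
#   with nlevels=2, words are padded first, then whole utterances.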

def minibatches(data, labels, batch_size):
  data_size = len(data)
  start_index = 0

  num_batches_per_epoch = int((len(data) + batch_size - 1) / batch_size)
  for batch_num in range(num_batches_per_epoch):
      start_index = batch_num * batch_size
      end_index = min((batch_num + 1) * batch_size, data_size)
      yield data[start_index: end_index], labels[start_index: end_index]
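
# e.g. 5 dialogues with batch_size = 2 yield batches of size 2, 2, and 1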

def select(parameters, length):
  """Select the last valid time step output as the sentence embedding
  :params parameters: [batch, seq_len, hidden_dims]
  :params length: [batch]
  :Returns : [batch, hidden_dims]
  """
  shape = tf.shape(parameters)
  idx = tf.range(shape[0])
  idx = tf.stack([idx, length - 1], axis = 1)
  return tf.gather_nd(parameters, idx)
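
# Caveat: if an entry of `length` is 0 (an all-padding row), length - 1 is -1 and
# tf.gather_nd fails with "indices ... does not index into param shape", the
# GatherNd error quoted in the second answer below.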


class DAModel():
    def __init__(self):
        with tf.variable_scope("placeholder"):

            self.dialogue_lengths = tf.placeholder(tf.int32, shape = [None], name = "dialogue_lengths")
            self.word_ids = tf.placeholder(tf.int32, shape = [None,None,None], name = "word_ids")
            self.utterance_lengths = tf.placeholder(tf.int32, shape = [None, None], name = "utterance_lengths")
            self.labels = tf.placeholder(tf.int32, shape = [None, None], name = "labels")
            self.clip = tf.placeholder(tf.float32, shape = [], name = 'clip')

######################## EMBEDDINGS ###########################################

        with tf.variable_scope("embeddings"):
            _word_embeddings = tf.get_variable(
                name = "_word_embeddings",
                dtype = tf.float32,
                shape = [words, word_dim],
                initializer = tf.random_uniform_initializer()
                )
            word_embeddings = tf.nn.embedding_lookup(_word_embeddings,self.word_ids, name="word_embeddings")
            self.word_embeddings = tf.nn.dropout(word_embeddings, 0.8)
                    
        with tf.variable_scope("utterance_encoder"):
            s = tf.shape(self.word_embeddings)
            batch_size = s[0] * s[1]
            
            time_step = s[-2]
            word_embeddings = tf.reshape(self.word_embeddings, [batch_size, time_step, word_dim])
            length = tf.reshape(self.utterance_lengths, [batch_size])

            fw = tf.nn.rnn_cell.LSTMCell(hidden_size_lstm_1, forget_bias=0.8, state_is_tuple= True)
            bw = tf.nn.rnn_cell.LSTMCell(hidden_size_lstm_1, forget_bias=0.8, state_is_tuple= True)
            
            output, _ = tf.nn.bidirectional_dynamic_rnn(fw, bw, word_embeddings,sequence_length=length, dtype = tf.float32)
            output = tf.concat(output, axis = -1) # [batch_size, time_step, dim]
            # Select the last valid time step output as the utterance embedding, 
            # this method is more concise than TensorArray with while_loop
            # output = select(output, self.utterance_lengths) # [batch_size, dim]
            output = select(output, length) # [batch_size, dim]

            # output = tf.reshape(output, s[0], s[1], 2 * hidden_size_lstm_1)
            output = tf.reshape(output, [s[0], s[1], 2 * hidden_size_lstm_1])

            output = tf.nn.dropout(output, 0.8)

        with tf.variable_scope("bi-lstm"):
            cell_fw = tf.contrib.rnn.BasicLSTMCell(hidden_size_lstm_2, state_is_tuple = True)
            cell_bw = tf.contrib.rnn.BasicLSTMCell(hidden_size_lstm_2, state_is_tuple = True)
            
            (output_fw, output_bw), _ = tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw, output, sequence_length = self.dialogue_lengths, dtype = tf.float32)
            outputs = tf.concat([output_fw, output_bw], axis = -1)
            outputs = tf.nn.dropout(outputs, 0.8)
        
        with tf.variable_scope("proj1"):
            output = tf.reshape(outputs, [-1, 2 * hidden_size_lstm_2])
            W = tf.get_variable("W", dtype = tf.float32, shape = [2 * hidden_size_lstm_2, proj1], initializer= tf.contrib.layers.xavier_initializer())
            b = tf.get_variable("b", dtype = tf.float32, shape = [proj1], initializer=tf.zeros_initializer())
            output = tf.nn.relu(tf.matmul(output, W) + b)

        with tf.variable_scope("proj2"):
            W = tf.get_variable("W", dtype = tf.float32, shape = [proj1, proj2], initializer= tf.contrib.layers.xavier_initializer())
            b = tf.get_variable("b", dtype = tf.float32, shape = [proj2], initializer=tf.zeros_initializer())
            output = tf.nn.relu(tf.matmul(output, W) + b)

        with tf.variable_scope("logits"):
            nstep = tf.shape(outputs)[1]
            W = tf.get_variable("W", dtype = tf.float32,shape=[proj2, tags], initializer = tf.random_uniform_initializer())
            b = tf.get_variable("b", dtype = tf.float32,shape = [tags],initializer=tf.zeros_initializer())

            pred = tf.matmul(output, W) + b
            self.logits = tf.reshape(pred, [-1, nstep, tags])
        
        with tf.variable_scope("loss"):
            log_likelihood, self.trans_params = tf.contrib.crf.crf_log_likelihood(
                        self.logits, self.labels, self.dialogue_lengths)
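            # Note: crf_log_likelihood expects logits [batch, max_time, num_tags]
            # and labels [batch, max_time] with the same max_time; if they
            # disagree, its crf_unary_score step fails with the "Incompatible
            # shapes" error shown above.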
            self.loss = tf.reduce_mean(-log_likelihood) + tf.nn.l2_loss(W) + tf.nn.l2_loss(b)
            #tf.summary.scalar("loss", self.loss)
        

        with tf.variable_scope("viterbi_decode"):
            viterbi_sequence, _ = tf.contrib.crf.crf_decode(self.logits, self.trans_params,  self.dialogue_lengths)
            
            batch_size = tf.shape(self.dialogue_lengths)[0]

            output_ta = tf.TensorArray(dtype = tf.float32, size = 1, dynamic_size = True)
            def body(time, output_ta_1):
                length = self.dialogue_lengths[time]
                vcode = viterbi_sequence[time][:length]
                true_labs = self.labels[time][:length]
                accurate = tf.reduce_sum(tf.cast(tf.equal(vcode, true_labs), tf.float32))

                output_ta_1 = output_ta_1.write(time, accurate)
                return time + 1, output_ta_1


            def condition(time, output_ta_1):
                return time < batch_size

            i = 0
            [time, output_ta] = tf.while_loop(condition, body, loop_vars = [i, output_ta])
            output_ta = output_ta.stack()
            accuracy = tf.reduce_sum(output_ta)
            self.accuracy = accuracy / tf.reduce_sum(tf.cast(self.dialogue_lengths, tf.float32))
            #tf.summary.scalar("accuracy", self.accuracy)

        with tf.variable_scope("train_op"):
            optimizer = tf.train.AdagradOptimizer(0.1)
            #if tf.greater(self.clip , 0):
            grads, vs = zip(*optimizer.compute_gradients(self.loss))
            grads, gnorm = tf.clip_by_global_norm(grads, self.clip)
            self.train_op = optimizer.apply_gradients(zip(grads, vs))
            #else:
            #    self.train_op = optimizer.minimize(self.loss)
        #self.merged = tf.summary.merge_all()


### Set model variables

hidden_size_lstm_1 = 200
hidden_size_lstm_2 = 200
tags = 39 # assuming number of classes to predict?
word_dim = 300
proj1 = 200
proj2 = 100
words = 20001 
# words = 8759 + 1 # max(num_unique_word_tokens)
batchSize = 2
log_dir = "train"
model_dir = "DAModel"
model_name = "ckpt"

### Run model

def main():
    # tokenize and vectorize text data to prepare for embedding
    train_data = sequences[:75]
    train_labels = label_list[:75]
    dev_data = sequences[75:]
    dev_labels = label_list[75:]
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.4
    
    with tf.Session(config = config) as sess:
        model = DAModel()
        sess.run(tf.global_variables_initializer())
        clip = 2
        saver = tf.train.Saver()
        #writer = tf.summary.FileWriter("D:\\Experimemts\\tensorflow\\DA\\train", sess.graph)
        writer = tf.summary.FileWriter("train", sess.graph)
        counter = 0
        for epoch in range(10):
            for dialogues, labels in minibatches(train_data, train_labels, batchSize):
                _, dialogue_lengthss = pad_sequences(dialogues, 0)
                word_idss, utterance_lengthss = pad_sequences(dialogues, 0, nlevels = 2)
                true_labs = labels
                labs_t, _ = pad_sequences(true_labs, 0)
                counter += 1
                train_loss, train_accuracy, _ = sess.run([model.loss, model.accuracy,model.train_op], feed_dict = {model.word_ids: word_idss, model.utterance_lengths: utterance_lengthss, model.dialogue_lengths: dialogue_lengthss, model.labels:labs_t, model.clip :clip} )
                #writer.add_summary(summary, global_step = counter)
                print("step = {}, train_loss = {}, train_accuracy = {}".format(counter, train_loss, train_accuracy))
                
                train_precision_summ = tf.Summary()
                train_precision_summ.value.add(
                    tag='train_accuracy', simple_value=train_accuracy)
                writer.add_summary(train_precision_summ, counter)

                train_loss_summ = tf.Summary()
                train_loss_summ.value.add(
                    tag='train_loss', simple_value=train_loss)
                writer.add_summary(train_loss_summ, counter)
                
                if counter % 1 == 0:
                    loss_dev = []
                    acc_dev = []
                    for dev_dialogues, dev_labels in minibatches(dev_data, dev_labels, batchSize):
                        _, dialogue_lengthss = pad_sequences(dev_dialogues, 0)
                        word_idss, utterance_lengthss = pad_sequences(dev_dialogues, 0, nlevels = 2)
                        true_labs = dev_labels
                        labs_t, _ = pad_sequences(true_labs, 0)
                        dev_loss, dev_accuacy = sess.run([model.loss, model.accuracy], feed_dict = {model.word_ids: word_idss, model.utterance_lengths: utterance_lengthss, model.dialogue_lengths: dialogue_lengthss, model.labels:labs_t})
                        loss_dev.append(dev_loss)
                        acc_dev.append(dev_accuacy)
                    valid_loss = sum(loss_dev) / len(loss_dev)
                    valid_accuracy = sum(acc_dev) / len(acc_dev)

                    dev_precision_summ = tf.Summary()
                    dev_precision_summ.value.add(
                        tag='dev_accuracy', simple_value=valid_accuracy)
                    writer.add_summary(dev_precision_summ, counter)

                    dev_loss_summ = tf.Summary()
                    dev_loss_summ.value.add(
                        tag='dev_loss', simple_value=valid_loss)
                    writer.add_summary(dev_loss_summ, counter)
                    print("counter = {}, dev_loss = {}, dev_accuacy = {}".format(counter, valid_loss, valid_accuracy))
if __name__ == "__main__":
    tf.reset_default_graph()
    main()

The data comes from here and looks like this:

[[['what  '],
 ['do you want to start '],
 ['f uh  laughter  you hit  you hit  f uh   '],
 ['it doesnt matter  '],
 ['f um  were discussing the capital punishment i believe '],
 ['right  '],
 ['you are right  '],
 ['yeah  '],
 ['  i  i  suppose i should have '],
 ['f uh  which  '],
 ['i  am  am  pro capital punishment except that i dont like the way its done '],
 ['uhhuh  '],
 ['f uh  yeah  '],
 ['f uh   i  f uh  i  guess  i  i  hate to see anyone die f uh   ']
 ...
 ]]

The dataset for training the model can be found here: https://github.com/cmeaton/Hierarchical_BiLSTM-CRF_Encoder/tree/master/swda_parsed

I'm having a hard time understanding what this error even means and how to approach it. Any advice would be greatly appreciated. Thanks.

Answers [2]

0 votes
/ January 31, 2020

Let's focus on the error:

Invalid argument: Incompatible shapes: [2,185] vs. [2,229]

It looks like the problem is that an operation between two tensors failed because their shapes are incompatible.
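
To reproduce this class of failure in isolation, here is a minimal sketch (assuming tensorflow 1.x, as in the question). The placeholders have unknown static shapes, so the graph builds without complaint and the mismatch only surfaces at run time, which is why training can break mid-stream:

import numpy as np
import tensorflow as tf  # assuming tensorflow 1.x, as in the question

# Placeholders with unknown static shapes: the graph builds fine.
a = tf.placeholder(tf.int32, shape=[None, None])
b = tf.placeholder(tf.int32, shape=[None, None])
c = a + b  # shape compatibility is only checked when the session runs

with tf.Session() as sess:
    # Matching shapes: works.
    sess.run(c, {a: np.zeros((2, 185), np.int32), b: np.zeros((2, 185), np.int32)})
    # Mismatched second dimensions: raises
    # InvalidArgumentError: Incompatible shapes: [2,185] vs. [2,229]
    sess.run(c, {a: np.zeros((2, 185), np.int32), b: np.zeros((2, 229), np.int32)})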


It's possible that the tensorflow version you chose is less permissive than the one the author used.

According to this issue, the author suggests he used tensorflow==1.8.

So first, I would suggest you try that earlier version, or others before/after it (1.7, 1.9, 1.10, etc.).


Also, earlier versions may not have had the keras package integrated into them the way it is today, so you might want to use a specific keras version.

For example, according to this issue, what helped there was downgrading to keras==2.2.2.
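
For example (exact versions taken from the issues above; adjust to your environment):

pip3 install tensorflow==1.8
pip3 install keras==2.2.2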


If that doesn't help, maybe one of these will: 1, 2, 3, 4, 5, 6

0 votes
/ January 31, 2020

Introduction

I think the main problem is a mismatch in the dimensions of the arrays (or matrices, or other structures) that you feed into sess.run. Specifically, when you call:

train_loss, train_accuracy, _ = sess.run([model.loss, model.accuracy,model.train_op], feed_dict = {model.word_ids: word_idss, model.utterance_lengths: utterance_lengthss, model.dialogue_lengths: dialogue_lengthss, model.labels:labs_t, model.clip :clip} )

And, more specifically, this error here hints at the mismatch problem:

tensorflow.python.framework.errors_impl.InvalidArgumentError: 
indices[317] = [317, -1] does not index into param shape [318,39,400]
             [[{{node utterance_encoder/GatherNd}}]]
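
One way to check for such a mismatch (a debugging sketch I would add, not part of the original code) is to print the shape of every array in the feed_dict right before sess.run and confirm that the batch and time dimensions line up:

import numpy as np

# Hypothetical helper: call with the padded batch arrays right before sess.run.
def report_shapes(word_idss, utterance_lengthss, dialogue_lengthss, labs_t):
    print("word_ids         :", np.asarray(word_idss).shape)          # [batch, max_dlg, max_utt]
    print("utterance_lengths:", np.asarray(utterance_lengthss).shape) # [batch, max_dlg]
    print("dialogue_lengths :", np.asarray(dialogue_lengthss).shape)  # [batch]
    print("labels           :", np.asarray(labs_t).shape)             # [batch, max_dlg]

The CRF loss needs labels and logits to agree on max_dlg, so the second dimension of labels must match the second dimension of word_ids.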

I considered the possibility that running on a fresh install might result in an error-free run.

I get similar errors, but also a whole list of warnings. Note that I'm running on Windows 7 and using Python 3.6.1.


Versions

I tried the following tensorflow versions, without success:

  • TF 1.15
  • TF 1.14
  • TF 1.13.1
  • TF 1.12
  • TF 1.11
  • TF 1.10
  • TF 1.10 with keras downgraded to 2.2.1

Steps

  • Installed Python 3.6.1 (a tensorflow-supported version). Installed for all users. Added it to PATH. Installed to C:\Python36
  • pip3 install --user --upgrade tensorflow==1.15
  • pip3 install --user --upgrade pandas==0.25.3
  • pip3 install --user --upgrade numpy==1.17.5
  • Downloaded the following: https://github.com/cmeaton/Hierarchical_BiLSTM-CRF_Encoder/tree/master/swda_parsed
  • Ran the provided code

Result (includes many warnings)

I think the following may be important:

tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[317] = [317, -1] does not index into param shape [318,39,400]
         [[{{node utterance_encoder/GatherNd}}]]
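
This GatherNd failure traces back to the select() function in the question: a fully padded utterance has length 0, so length - 1 evaluates to -1, which does not index into the parameters. Below is a sketch of a guarded variant (my modification, not the repo's code) that clamps the index:

import tensorflow as tf  # assuming tensorflow 1.x, as in the question

def select_guarded(parameters, length):
    """Select the last valid time step output, clamping empty rows to step 0."""
    shape = tf.shape(parameters)
    idx = tf.range(shape[0])
    last = tf.maximum(length - 1, 0)  # avoid the -1 index that breaks GatherNd
    idx = tf.stack([idx, last], axis=1)
    return tf.gather_nd(parameters, idx)

Rows clamped this way return the first (padded) time step, which downstream code should then mask out.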

Full Trace

WARNING:tensorflow:From test.py:313: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.

WARNING:tensorflow:From test.py:256: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

WARNING:tensorflow:From test.py:259: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2020-01-31 12:13:10.096283: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
WARNING:tensorflow:From test.py:119: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

WARNING:tensorflow:From test.py:121: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From test.py:130: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.

WARNING:tensorflow:From test.py:137: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From test.py:147: LSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:From test.py:150: bidirectional_dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `keras.layers.Bidirectional(keras.layers.RNN(cell))`, which is equivalent to this API
WARNING:tensorflow:From D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\ops\rnn.py:464: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `keras.layers.RNN(cell)`, which is equivalent to this API
WARNING:tensorflow:From D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\ops\rnn_cell_impl.py:958: Layer.add_variable (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.add_weight` method instead.
WARNING:tensorflow:From D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\ops\rnn_cell_impl.py:962: calling Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:From D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\ops\rnn.py:244: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

WARNING:tensorflow:From test.py:163: BasicLSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:From test.py:223: The name tf.train.AdagradOptimizer is deprecated. Please use tf.compat.v1.train.AdagradOptimizer instead.

WARNING:tensorflow:From D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\training\adagrad.py:76: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:From test.py:261: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.

WARNING:tensorflow:From test.py:263: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

WARNING:tensorflow:From test.py:265: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.

2020-01-31 12:13:16.563989: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at gather_nd_op.cc:47 : Invalid argument: indices[317] = [317, -1] does not index into param shape [318,39,400]
Traceback (most recent call last):
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\client\session.py", line 1365, in _do_call
    return fn(*args)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\client\session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\client\session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[317] = [317, -1] does not index into param shape [318,39,400]
         [[{{node utterance_encoder/GatherNd}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test.py", line 314, in <module>
    main()
  File "test.py", line 274, in main
    train_loss, train_accuracy, _ = sess.run([model.loss, model.accuracy,model.train_op], feed_dict = {model.word_ids: word_idss, model.utterance_lengths: utterance_lengthss, model.dialogue_lengths: dialogue_lengthss, model.labels:labs_t, model.clip :clip} )
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\client\session.py", line 956, in run
    run_metadata_ptr)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
    run_metadata)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[317] = [317, -1] does not index into param shape [318,39,400]
         [[node utterance_encoder/GatherNd (defined at D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]

Original stack trace for 'utterance_encoder/GatherNd':
  File "test.py", line 314, in <module>
    main()
  File "test.py", line 260, in main
    model = DAModel()
  File "test.py", line 155, in __init__
    output = select(output, length) # [batch_size, dim]
  File "test.py", line 114, in select
    return tf.gather_nd(parameters, idx)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\util\dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\ops\array_ops.py", line 4277, in gather_nd
    return gen_array_ops.gather_nd(params, indices, name=name)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\ops\gen_array_ops.py", line 3975, in gather_nd
    "GatherNd", params=params, indices=indices, name=name)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
    op_def=op_def)
  File "D:\Users\bakopme\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
    self._traceback = tf_stack.extract_stack()