tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[12] = 12 is not in [0, 0)
0 votes
/ 07 August 2020

I am trying to write the equivalent of this code, which converts a CSV into TF records, except that I am converting from JSON to TFRecords instead. I am trying to create TFRecords for use with the Object Detection API.

Here is my full error message

Traceback (most recent call last):
  File "model_main_tf2.py", line 113, in <module>
    tf.compat.v1.app.run()
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "model_main_tf2.py", line 109, in main
    record_summaries=FLAGS.record_summaries)
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\object_detection\model_lib_v2.py", line 561, in train_loop
    unpad_groundtruth_tensors)
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\object_detection\model_lib_v2.py", line 342, in load_fine_tune_checkpoint
    features, labels = iter(input_dataset).next()
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 645, in next
    return self.__next__()
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 649, in __next__
    return self.get_next()
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 694, in get_next
    self._iterators[i].get_next_as_list_static_shapes(new_name))
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 1474, in get_next_as_list_static_shapes
    return self._iterator.get_next()
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\tensorflow\python\data\ops\multi_device_iterator_ops.py", line 581, in get_next
    result.append(self._device_iterators[i].get_next())
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 825, in get_next
    return self._next_internal()
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 764, in _next_internal
    return structure.from_compatible_tensor_list(self._element_spec, ret)
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\tensorflow\python\eager\context.py", line 2105, in execution_mode
    executor_new.wait()
  File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\tensorflow\python\eager\executor.py", line 67, in wait
    pywrap_tfe.TFE_ExecutorWaitForAllPendingNodes(self._handle)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[12] = 12 is not in [0, 0)
         [[{{node GatherV2_4}}]]
         [[MultiDeviceIteratorGetNextFromShard]]
         [[RemoteCall]]

And here is my code that tries to convert the JSON files into TFRecords

Sample JSON file

{
  "0.jpg59329": {
    "filename": "0.jpg",
    "size": 59329,
    "regions": [{
      "shape_attributes": {
        "name": "rect",
        "x": 412,
        "y": 130,
        "width": 95,
        "height": 104
      },
      "region_attributes": {}
    }, {
      "shape_attributes": {
        "name": "rect",
        "x": 521,
        "y": 82,
        "width": 126,
        "height": 106
      },
      "region_attributes": {}
    }]
  }
}
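
For reference, a small sanity-check sketch over one such annotation file, assuming the same 1920x1080 frame size that the conversion script below hard-codes (the file path is just an example); the xmax/ymax arithmetic mirrors the script:

import json

# Hypothetical check: derive the box corners the same way the conversion
# script does (xmax = x + width, ymax = y + height) and flag anything that
# falls outside the assumed 1920x1080 frame.
with open("annotation_refined/0.json") as f:  # example path, adjust as needed
    data = json.load(f)

for key, value in data.items():
    for region in value["regions"]:
        sa = region["shape_attributes"]
        xmin, ymin = sa["x"], sa["y"]
        xmax, ymax = xmin + sa["width"], ymin + sa["height"]
        if xmax > 1920 or ymax > 1080:
            print(f"{value['filename']}: box ({xmin}, {ymin}, {xmax}, {ymax}) exceeds the frame")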

My Python code

# Ref 1: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/using_your_own_dataset.md
# Ref 2: https://github.com/datitran/raccoon_dataset/blob/master/generate_tfrecord.py

import json
import glob
from object_detection.utils import dataset_util
import tensorflow as tf
from pathlib import Path

flags = tf.compat.v1.app.flags
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
FLAGS = flags.FLAGS


def json_to_tf(jsonFile, im):
    with open(im, "rb") as image:
        encoded_image_data = image.read()

    with open(jsonFile) as json_file:
        data = json.load(json_file)

        for key, value in data.items():
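            # Image size is hard-coded for every file; it is not read from the image itself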
            width = 1920
            height = 1080
            filename = value["filename"]
            filename = filename.encode('utf8')
            image_format = b'jpeg'
            xmins = []
            xmaxs = []
            ymins = []
            ymaxs = []
            classes_text = []
            classes = []

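            # One box per region: xmin/ymin are taken straight from the JSON,
            # xmax/ymax are x + width and y + height (pixel coordinates)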
            for x in value["regions"]:
                xmins.append(x["shape_attributes"]['x'])
                xmaxs.append(x["shape_attributes"]['width'] + x["shape_attributes"]['x'])
                ymins.append(x["shape_attributes"]['y'])
                ymaxs.append(x["shape_attributes"]['height'] + x["shape_attributes"]['y'])
                classes_text.append("cars".encode('utf8'))
                classes.append(1)

            tf_example = tf.train.Example(features=tf.train.Features(feature={
                'image/height': dataset_util.int64_feature(height),
                'image/width': dataset_util.int64_feature(width),
                'image/filename': dataset_util.bytes_feature(filename),
                'image/source_id': dataset_util.bytes_feature(filename),
                'image/encoded': dataset_util.bytes_feature(encoded_image_data),
                'image/format': dataset_util.bytes_feature(image_format),
                'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
                'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
                'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
                'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
                'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
                'image/object/class/label': dataset_util.int64_list_feature(classes),
            }))

            return tf_example


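# Pair each annotation JSON with the image that shares its file stem
# and write one serialized Example per matched pair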
writer = tf.compat.v1.python_io.TFRecordWriter("train.record")

for fn in glob.glob("annotation_refined\\*.json"):
    for img in glob.glob("images\\*.jpg"):
        if Path(fn).stem == Path(img).stem:
            tf_example_1 = json_to_tf(fn, img)
            writer.write(tf_example_1.SerializeToString())

writer.close()
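
For completeness, here is a generic sketch (standard tf.data and tf.train.Example usage, assuming TensorFlow 2.x) that reads the generated train.record back and prints a few of the stored features, to check what actually ends up in the file:

import tensorflow as tf

# Read the first serialized Example back out of the record and print a few features
for raw_record in tf.data.TFRecordDataset("train.record").take(1):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    feature = example.features.feature
    print(feature["image/filename"].bytes_list.value)
    print(feature["image/object/bbox/xmin"].float_list.value)
    print(feature["image/object/class/label"].int64_list.value)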

Can someone tell me what is going wrong?
