How can I convert a frozen TensorFlow graph to a TF Lite model?
0 votes
/ May 4, 2020

I am using Faster RCNN (the repo I am using can be found via the link) to detect cars in video frames. I used Keras 2.2.3 and TensorFlow 1.15.0. I want to deploy and run it on my Android device. Every part of Faster RCNN is implemented in Keras, and to deploy it on Android I want to convert the networks to TF Lite models. The final network, the classifier, has a custom layer called RoiPoolingConv, and I cannot convert that network to TF Lite. First I tried the following:

converter = tf.lite.TFLiteConverter.from_keras_model_file(
    'model_classifier_with_architecture.h5',
    custom_objects={"RoiPoolingConv": RoiPoolingConv})
tfmodel = converter.convert()
open("model_cls.tflite", "wb").write(tfmodel)

This gives the following error:

Traceback (most recent call last):
  File "Keras-FasterRCNN/model_to_tflite.py", line 26, in <module>
    custom_objects={"RoiPoolingConv": RoiPoolingConv})
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 747, in from_keras_model_file
    keras_model = _keras.models.load_model(model_file, custom_objects)
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py", line 146, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 212, in load_model_from_hdf5
    custom_objects=custom_objects)
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/model_config.py", line 55, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 89, in deserialize
    printable_module_name='layer')
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1131, in from_config
    process_node(layer, node_data)
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1089, in process_node
    layer(input_tensors, **kwargs)
  File "/home/alp/.local/lib/python3.6/site-packages/keras/engine/base_layer.py", line 475, in __call__
    previous_mask = _collect_previous_mask(inputs)
  File "/home/alp/.local/lib/python3.6/site-packages/keras/engine/base_layer.py", line 1441, in _collect_previous_mask
    mask = node.output_masks[tensor_index]
AttributeError: 'Node' object has no attribute 'output_masks'
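The traceback mixes two Keras installations: some frames come from `tensorflow/python/keras` (which is what `tf.lite` loads the model with) and others from the standalone `site-packages/keras` package, where `RoiPoolingConv` is presumably defined. The `'Node' object has no attribute 'output_masks'` error is typical of that mix. A minimal, self-contained sketch of the fix, using a stand-in custom layer (the real `RoiPoolingConv` would need the same change, i.e. subclassing `tensorflow.keras.layers.Layer` rather than `keras.layers.Layer`):

```python
import numpy as np
import tensorflow as tf

# Stand-in custom layer built on tensorflow.keras, NOT the standalone keras
# package. Mixing the two at load time causes the 'output_masks' error above.
class MyCustomLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        return inputs * 2.0

inp = tf.keras.Input(shape=(4,))
out = MyCustomLayer()(inp)
model = tf.keras.Model(inp, out)
model.save('demo_custom.h5')

# Reloading works when custom_objects maps the layer name to the
# tf.keras-based class, so tf.lite can then consume the reloaded model.
reloaded = tf.keras.models.load_model(
    'demo_custom.h5', custom_objects={'MyCustomLayer': MyCustomLayer})
pred = reloaded.predict(np.ones((1, 4)))
```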

As a workaround, I tried converting the Keras models to frozen TensorFlow graphs and then running the TF Lite conversion on those frozen graphs. This leads to the following error:

Traceback (most recent call last):
  File "/home/alp/.local/bin/toco_from_protos", line 11, in <module>
    sys.exit(main())
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/alp/.local/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/alp/.local/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/alp/.local/lib/python3.6/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33, in execute
    output_str = tensorflow_wrap_toco.TocoConvert(model_str, toco_str, input_str)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST, CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, FULLY_CONNECTED, MUL, PACK, RESHAPE, RESIZE_BILINEAR,   SOFTMAX, STRIDED_SLICE. Here is a list of operators for which you will need custom implementations: AddV2.
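The error message itself points at two escape hatches: falling back to the TensorFlow runtime for unsupported ops (`SELECT_TF_OPS`), or emitting them as custom ops (`allow_custom_ops`). A sketch of the frozen-graph conversion with those flags, via `tf.compat.v1` as available in TF 1.15; the `.pb` path and the input/output tensor names below are assumptions, not taken from the repo:

```python
import tensorflow as tf

def convert_frozen_graph(pb_path, input_arrays, output_arrays, out_path):
    """Convert a frozen graph to TF Lite, allowing select TF ops (sketch)."""
    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        pb_path, input_arrays=input_arrays, output_arrays=output_arrays)
    # Fall back to the TensorFlow runtime for ops TF Lite lacks (AddV2 here).
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,
        tf.lite.OpsSet.SELECT_TF_OPS,
    ]
    # Alternatively, keep unknown ops as custom ops to implement yourself:
    # converter.allow_custom_ops = True
    tflite_model = converter.convert()
    open(out_path, 'wb').write(tflite_model)

# Hypothetical usage (tensor names must match the actual frozen graph):
# convert_frozen_graph('frozen_classifier.pb',
#                      ['input_image', 'input_rois'],
#                      ['cls_prob', 'bbox_pred'],
#                      'model_cls.tflite')
```

Note that a model converted with `SELECT_TF_OPS` needs the TensorFlow Lite Flex delegate linked into the Android app at runtime, so the resulting binary is larger than with builtin ops alone.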

Is there a way to convert a model with a custom layer to a TF Lite model?

...