How do I change the input shape of a saved model in TensorFlow? - PullRequest
0 votes
/ April 25, 2019

I want to get this repo https://github.com/ildoonet/tf-pose-estimation working with an Intel Movidius, so I tried to convert the pb model with mvNCCompile.

The problem is that mvNCCompile requires a fixed input shape, but the model I have uses a dynamic one.

I tried this:

    import tensorflow as tf

    graph_path = 'models/graph/mobilenet_thin/graph_opt.pb'

    # Load the frozen graph definition.
    with tf.gfile.GFile(graph_path, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Import it into the default graph and try to pin the input shape
    # (and the first convolution's shape) directly with set_shape.
    graph = tf.get_default_graph()
    tf.import_graph_def(graph_def, name='TfPoseEstimator')
    x = graph.get_tensor_by_name('TfPoseEstimator/image:0')
    x.set_shape([1, 368, 368, 3])
    x = graph.get_tensor_by_name('TfPoseEstimator/MobilenetV1/Conv2d_0/Conv2D:0')
    x.set_shape([1, 368, 368, 24])

and got this:

(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_0/weights:0' shape=(3, 3, 3, 24) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/image:0' shape=(1, 368, 368, 3) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_0/Conv2D:0' shape=(1, 368, 368, 24) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_0/Conv2D_bn_offset:0' shape=(24,) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_0/BatchNorm/FusedBatchNorm:0' shape=(?, ?, ?, 24) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_0/Relu:0' shape=(?, ?, ?, 24) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_1_depthwise/depthwise_weights:0' shape=(3, 3, 24, 1) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_1_pointwise/weights:0' shape=(1, 1, 24, 48) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_1_depthwise/depthwise:0' shape=(?, ?, ?, 24) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_1_pointwise/Conv2D:0' shape=(?, ?, ?, 48) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_1_pointwise/Conv2D_bn_offset:0' shape=(48,) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_1_pointwise/BatchNorm/FusedBatchNorm:0' shape=(?, ?, ?, 48) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_1_pointwise/Relu:0' shape=(?, ?, ?, 48) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_2_depthwise/depthwise_weights:0' shape=(3, 3, 48, 1) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_2_pointwise/weights:0' shape=(1, 1, 48, 96) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_2_depthwise/depthwise:0' shape=(?, ?, ?, 48) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_2_pointwise/Conv2D:0' shape=(?, ?, ?, 96) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_2_pointwise/Conv2D_bn_offset:0' shape=(96,) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_2_pointwise/BatchNorm/FusedBatchNorm:0' shape=(?, ?, ?, 96) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_2_pointwise/Relu:0' shape=(?, ?, ?, 96) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_3_depthwise/depthwise_weights:0' shape=(3, 3, 96, 1) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_3_pointwise/weights:0' shape=(1, 1, 96, 96) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_3_depthwise/depthwise:0' shape=(?, ?, ?, 96) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_3_pointwise/Conv2D:0' shape=(?, ?, ?, 96) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_3_pointwise/Conv2D_bn_offset:0' shape=(96,) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_3_pointwise/BatchNorm/FusedBatchNorm:0' shape=(?, ?, ?, 96) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_3_pointwise/Relu:0' shape=(?, ?, ?, 96) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_4_depthwise/depthwise_weights:0' shape=(3, 3, 96, 1) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_4_pointwise/weights:0' shape=(1, 1, 96, 192) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_4_depthwise/depthwise:0' shape=(?, ?, ?, 96) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_4_pointwise/Conv2D:0' shape=(?, ?, ?, 192) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_4_pointwise/Conv2D_bn_offset:0' shape=(192,) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_4_pointwise/BatchNorm/FusedBatchNorm:0' shape=(?, ?, ?, 192) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_4_pointwise/Relu:0' shape=(?, ?, ?, 192) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_5_depthwise/depthwise_weights:0' shape=(3, 3, 192, 1) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_5_pointwise/weights:0' shape=(1, 1, 192, 192) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_5_depthwise/depthwise:0' shape=(?, ?, ?, 192) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_5_pointwise/Conv2D:0' shape=(?, ?, ?, 192) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_5_pointwise/Conv2D_bn_offset:0' shape=(192,) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_5_pointwise/BatchNorm/FusedBatchNorm:0' shape=(?, ?, ?, 192) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_5_pointwise/Relu:0' shape=(?, ?, ?, 192) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_6_depthwise/depthwise_weights:0' shape=(3, 3, 192, 1) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_6_pointwise/weights:0' shape=(1, 1, 192, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_6_depthwise/depthwise:0' shape=(?, ?, ?, 192) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_6_pointwise/Conv2D:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_6_pointwise/Conv2D_bn_offset:0' shape=(384,) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_6_pointwise/BatchNorm/FusedBatchNorm:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_6_pointwise/Relu:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_7_depthwise/depthwise_weights:0' shape=(3, 3, 384, 1) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_7_pointwise/weights:0' shape=(1, 1, 384, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_7_depthwise/depthwise:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_7_pointwise/Conv2D:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_7_pointwise/Conv2D_bn_offset:0' shape=(384,) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_7_pointwise/BatchNorm/FusedBatchNorm:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_7_pointwise/Relu:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_8_depthwise/depthwise_weights:0' shape=(3, 3, 384, 1) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_8_pointwise/weights:0' shape=(1, 1, 384, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_8_depthwise/depthwise:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_8_pointwise/Conv2D:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_8_pointwise/Conv2D_bn_offset:0' shape=(384,) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_8_pointwise/BatchNorm/FusedBatchNorm:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_8_pointwise/Relu:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_9_depthwise/depthwise_weights:0' shape=(3, 3, 384, 1) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_9_pointwise/weights:0' shape=(1, 1, 384, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_9_depthwise/depthwise:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_9_pointwise/Conv2D:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_9_pointwise/Conv2D_bn_offset:0' shape=(384,) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_9_pointwise/BatchNorm/FusedBatchNorm:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_9_pointwise/Relu:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_10_depthwise/depthwise_weights:0' shape=(3, 3, 384, 1) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_10_pointwise/weights:0' shape=(1, 1, 384, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_10_depthwise/depthwise:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_10_pointwise/Conv2D:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_10_pointwise/Conv2D_bn_offset:0' shape=(384,) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_10_pointwise/BatchNorm/FusedBatchNorm:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_10_pointwise/Relu:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_11_depthwise/depthwise_weights:0' shape=(3, 3, 384, 1) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_11_pointwise/weights:0' shape=(1, 1, 384, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_11_depthwise/depthwise:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_11_pointwise/Conv2D:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_11_pointwise/Conv2D_bn_offset:0' shape=(384,) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_11_pointwise/BatchNorm/FusedBatchNorm:0' shape=(?, ?, ?, 384) dtype=float32>,)
(<tf.Tensor 'TfPoseEstimator/MobilenetV1/Conv2d_11_pointwise/Relu:0' shape=(?, ?, ?, 384) dtype=float32>,)

All layers other than TfPoseEstimator/image:0 and TfPoseEstimator/MobilenetV1/Conv2d_0/Conv2D:0 still have ? (unknown) shapes.
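For reference, a listing like the one above can be produced by appending something along these lines to the snippet (a minimal sketch of the printing loop, which is my assumption rather than the exact code used):

    # Print the output-tensor tuple of every operation in the graph,
    # together with its (possibly still unknown) static shape.
    for op in graph.get_operations():
        print(op.values())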

I'm very new to TensorFlow, so this may be a silly question, but how do I change the input shape of a saved model?

1 Answer

1 vote
/ April 25, 2019

I managed to solve this problem with the code below. Instead of calling set_shape on individual tensors, it creates a new placeholder with a fixed shape and remaps the graph's input onto it via input_map, so the fixed shape propagates through the whole imported graph.

    import tensorflow as tf

    if __name__ == '__main__':
        graph_path = 't/tf_model.pb'
        # Load the frozen graph definition.
        with tf.gfile.GFile(graph_path, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
        graph = tf.get_default_graph()
        # Create a placeholder with the desired fixed shape and remap the
        # original 'image' input onto it while importing the graph.
        tf_new_image = tf.placeholder(shape=(1, 368, 368, 3), dtype='float32', name='new_image')
        tf.import_graph_def(graph_def, name='TfPoseEstimator', input_map={"image:0": tf_new_image})
        # Write the rewritten graph back to disk as a binary .pb file.
        tf.train.write_graph(graph, "t", "mobilenet_thin_model.pb", as_text=False)
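
To verify the result, here is a minimal sketch that re-loads the written mobilenet_thin_model.pb and prints the shapes (paths and node names are taken from the snippet above; the check itself is an assumption, not part of the original answer):

    import tensorflow as tf

    # Re-load the rewritten graph and confirm the input shape is now fixed.
    with tf.gfile.GFile('t/mobilenet_thin_model.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        print(graph.get_tensor_by_name('new_image:0').shape)  # expected: (1, 368, 368, 3)
        # Downstream tensors should now have fully defined shapes as well, e.g.:
        print(graph.get_tensor_by_name('TfPoseEstimator/MobilenetV1/Conv2d_0/Relu:0').shape)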
