TensorFlow model: How to determine the input / output node names from a prototxt file? - PullRequest
0 votes
/ 16 May 2019

I am using OpenAI Baselines to train an RL model (deepq). The input consists of 19 features:

observation_space = spaces.Box(0, 100, (19, 1), dtype=np.float_)

and the output is:

action_space = spaces.Discrete(6)

All the model variables printed by:

for i, var in enumerate(saver._var_list):
    print('Var {}: {}'.format(i, var))

look like this:

Var 0: <tf.Variable 'deepq/eps:0' shape=() dtype=float32_ref>
Var 1: <tf.Variable 'deepq/q_func/mlp_fc0/w:0' shape=(19, 64) dtype=float32_ref>
Var 2: <tf.Variable 'deepq/q_func/mlp_fc0/b:0' shape=(64,) dtype=float32_ref>
Var 3: <tf.Variable 'deepq/q_func/mlp_fc1/w:0' shape=(64, 64) dtype=float32_ref>
Var 4: <tf.Variable 'deepq/q_func/mlp_fc1/b:0' shape=(64,) dtype=float32_ref>
Var 5: <tf.Variable 'deepq/q_func/action_value/fully_connected/weights:0' shape=(64, 256) dtype=float32_ref>
Var 6: <tf.Variable 'deepq/q_func/action_value/fully_connected/biases:0' shape=(256,) dtype=float32_ref>
Var 7: <tf.Variable 'deepq/q_func/action_value/fully_connected_1/weights:0' shape=(256, 6) dtype=float32_ref>
Var 8: <tf.Variable 'deepq/q_func/action_value/fully_connected_1/biases:0' shape=(6,) dtype=float32_ref>
Var 9: <tf.Variable 'deepq/q_func/state_value/fully_connected/weights:0' shape=(64, 256) dtype=float32_ref>
Var 10: <tf.Variable 'deepq/q_func/state_value/fully_connected/biases:0' shape=(256,) dtype=float32_ref>
Var 11: <tf.Variable 'deepq/q_func/state_value/fully_connected_1/weights:0' shape=(256, 1) dtype=float32_ref>
Var 12: <tf.Variable 'deepq/q_func/state_value/fully_connected_1/biases:0' shape=(1,) dtype=float32_ref>
Var 13: <tf.Variable 'deepq/target_q_func/mlp_fc0/w:0' shape=(19, 64) dtype=float32_ref>
Var 14: <tf.Variable 'deepq/target_q_func/mlp_fc0/b:0' shape=(64,) dtype=float32_ref>
Var 15: <tf.Variable 'deepq/target_q_func/mlp_fc1/w:0' shape=(64, 64) dtype=float32_ref>
Var 16: <tf.Variable 'deepq/target_q_func/mlp_fc1/b:0' shape=(64,) dtype=float32_ref>
Var 17: <tf.Variable 'deepq/target_q_func/action_value/fully_connected/weights:0' shape=(64, 256) dtype=float32_ref>
Var 18: <tf.Variable 'deepq/target_q_func/action_value/fully_connected/biases:0' shape=(256,) dtype=float32_ref>
Var 19: <tf.Variable 'deepq/target_q_func/action_value/fully_connected_1/weights:0' shape=(256, 6) dtype=float32_ref>
Var 20: <tf.Variable 'deepq/target_q_func/action_value/fully_connected_1/biases:0' shape=(6,) dtype=float32_ref>
Var 21: <tf.Variable 'deepq/target_q_func/state_value/fully_connected/weights:0' shape=(64, 256) dtype=float32_ref>
Var 22: <tf.Variable 'deepq/target_q_func/state_value/fully_connected/biases:0' shape=(256,) dtype=float32_ref>
Var 23: <tf.Variable 'deepq/target_q_func/state_value/fully_connected_1/weights:0' shape=(256, 1) dtype=float32_ref>
Var 24: <tf.Variable 'deepq/target_q_func/state_value/fully_connected_1/biases:0' shape=(1,) dtype=float32_ref>
Var 25: <tf.Variable 'deepq_1/beta1_power:0' shape=() dtype=float32_ref>
Var 26: <tf.Variable 'deepq_1/beta2_power:0' shape=() dtype=float32_ref>
Var 27: <tf.Variable 'deepq/deepq/q_func/mlp_fc0/w/Adam:0' shape=(19, 64) dtype=float32_ref>
Var 28: <tf.Variable 'deepq/deepq/q_func/mlp_fc0/w/Adam_1:0' shape=(19, 64) dtype=float32_ref>
Var 29: <tf.Variable 'deepq/deepq/q_func/mlp_fc0/b/Adam:0' shape=(64,) dtype=float32_ref>
Var 30: <tf.Variable 'deepq/deepq/q_func/mlp_fc0/b/Adam_1:0' shape=(64,) dtype=float32_ref>
Var 31: <tf.Variable 'deepq/deepq/q_func/mlp_fc1/w/Adam:0' shape=(64, 64) dtype=float32_ref>
Var 32: <tf.Variable 'deepq/deepq/q_func/mlp_fc1/w/Adam_1:0' shape=(64, 64) dtype=float32_ref>
Var 33: <tf.Variable 'deepq/deepq/q_func/mlp_fc1/b/Adam:0' shape=(64,) dtype=float32_ref>
Var 34: <tf.Variable 'deepq/deepq/q_func/mlp_fc1/b/Adam_1:0' shape=(64,) dtype=float32_ref>
Var 35: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected/weights/Adam:0' shape=(64, 256) dtype=float32_ref>
Var 36: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected/weights/Adam_1:0' shape=(64, 256) dtype=float32_ref>
Var 37: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected/biases/Adam:0' shape=(256,) dtype=float32_ref>
Var 38: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected/biases/Adam_1:0' shape=(256,) dtype=float32_ref>
Var 39: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam:0' shape=(256, 6) dtype=float32_ref>
Var 40: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam_1:0' shape=(256, 6) dtype=float32_ref>
Var 41: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam:0' shape=(6,) dtype=float32_ref>
Var 42: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam_1:0' shape=(6,) dtype=float32_ref>
Var 43: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected/weights/Adam:0' shape=(64, 256) dtype=float32_ref>
Var 44: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected/weights/Adam_1:0' shape=(64, 256) dtype=float32_ref>
Var 45: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected/biases/Adam:0' shape=(256,) dtype=float32_ref>
Var 46: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected/biases/Adam_1:0' shape=(256,) dtype=float32_ref>
Var 47: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam:0' shape=(256, 1) dtype=float32_ref>
Var 48: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam_1:0' shape=(256, 1) dtype=float32_ref>
Var 49: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam:0' shape=(1,) dtype=float32_ref>
Var 50: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam_1:0' shape=(1,) dtype=float32_ref>

The model graph is saved with

tf.train.write_graph(sess.graph_def, './model', 'my_deepq.pbtxt')

as a prototxt (.pbtxt) file. The resulting prototxt file looks roughly like the excerpt below. How can I determine the names of the input node (layer) and the output node (layer) from this prototxt file? Thank you!

node {
  name: "deepq/observation"
  op: "Placeholder"
  attr {
    key: "dtype"
    value {
      type: DT_DOUBLE
    }
  }
  attr {
    key: "shape"
    value {
      shape {
        dim {
          size: -1
        }
        dim {
          size: 19
        }
        dim {
          size: 1
        }
      }
    }
  }
}
node {
  name: "deepq/ToFloat"
  op: "Cast"
  input: "deepq/observation"
  attr {
    key: "DstT"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "SrcT"
    value {
      type: DT_DOUBLE
    }
  }
  attr {
    key: "Truncate"
    value {
      b: false
    }
  }
}

 :
 :

node {
  name: "save/Assign_49"
  op: "Assign"
  input: "deepq_1/beta1_power"
  input: "save/RestoreV2:49"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "_class"
    value {
      list {
        s: "loc:@deepq/q_func/action_value/fully_connected/biases"
      }
    }
  }
  attr {
    key: "use_locking"
    value {
      b: true
    }
  }
  attr {
    key: "validate_shape"
    value {
      b: true
    }
  }
}
node {
  name: "save/Assign_50"
  op: "Assign"
  input: "deepq_1/beta2_power"
  input: "save/RestoreV2:50"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "_class"
    value {
      list {
        s: "loc:@deepq/q_func/action_value/fully_connected/biases"
      }
    }
  }
  attr {
    key: "use_locking"
    value {
      b: true
    }
  }
  attr {
    key: "validate_shape"
    value {
      b: true
    }
  }
}

:
:
node {
  name: "deepq_1/group_deps_1"
  op: "NoOp"
  input: "^deepq_1/Adam"
}
node {
  name: "deepq_1/group_deps_2"
  op: "NoOp"
  input: "^deepq_1/group_deps"
}
node {
  name: "deepq_1/group_deps_3"
  op: "NoOp"
}
node {
  name: "init"
  op: "NoOp"
  input: "^deepq/deepq/q_func/action_value/fully_connected/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected/weights/Adam_1/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/b/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/b/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/w/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/w/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/b/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/b/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/w/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/w/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/weights/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam_1/Assign"
  input: "^deepq/eps/Assign"
  input: "^deepq/q_func/action_value/fully_connected/biases/Assign"
  input: "^deepq/q_func/action_value/fully_connected/weights/Assign"
  input: "^deepq/q_func/action_value/fully_connected_1/biases/Assign"
  input: "^deepq/q_func/action_value/fully_connected_1/weights/Assign"
  input: "^deepq/q_func/mlp_fc0/b/Assign"
  input: "^deepq/q_func/mlp_fc0/w/Assign"
  input: "^deepq/q_func/mlp_fc1/b/Assign"
  input: "^deepq/q_func/mlp_fc1/w/Assign"
  input: "^deepq/q_func/state_value/fully_connected/biases/Assign"
  input: "^deepq/q_func/state_value/fully_connected/weights/Assign"
  input: "^deepq/q_func/state_value/fully_connected_1/biases/Assign"
  input: "^deepq/q_func/state_value/fully_connected_1/weights/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected/biases/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected/weights/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected_1/biases/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected_1/weights/Assign"
  input: "^deepq/target_q_func/mlp_fc0/b/Assign"
  input: "^deepq/target_q_func/mlp_fc0/w/Assign"
  input: "^deepq/target_q_func/mlp_fc1/b/Assign"
  input: "^deepq/target_q_func/mlp_fc1/w/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected/biases/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected/weights/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected_1/biases/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected_1/weights/Assign"
  input: "^deepq_1/beta1_power/Assign"
  input: "^deepq_1/beta2_power/Assign"
}
node {
  name: "init_1"
  op: "NoOp"
  input: "^deepq/deepq/q_func/action_value/fully_connected/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected/weights/Adam_1/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/b/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/b/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/w/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/w/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/b/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/b/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/w/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/w/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/weights/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam_1/Assign"
  input: "^deepq/eps/Assign"
  input: "^deepq/q_func/action_value/fully_connected/biases/Assign"
  input: "^deepq/q_func/action_value/fully_connected/weights/Assign"
  input: "^deepq/q_func/action_value/fully_connected_1/biases/Assign"
  input: "^deepq/q_func/action_value/fully_connected_1/weights/Assign"
  input: "^deepq/q_func/mlp_fc0/b/Assign"
  input: "^deepq/q_func/mlp_fc0/w/Assign"
  input: "^deepq/q_func/mlp_fc1/b/Assign"
  input: "^deepq/q_func/mlp_fc1/w/Assign"
  input: "^deepq/q_func/state_value/fully_connected/biases/Assign"
  input: "^deepq/q_func/state_value/fully_connected/weights/Assign"
  input: "^deepq/q_func/state_value/fully_connected_1/biases/Assign"
  input: "^deepq/q_func/state_value/fully_connected_1/weights/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected/biases/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected/weights/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected_1/biases/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected_1/weights/Assign"
  input: "^deepq/target_q_func/mlp_fc0/b/Assign"
  input: "^deepq/target_q_func/mlp_fc0/w/Assign"
  input: "^deepq/target_q_func/mlp_fc1/b/Assign"
  input: "^deepq/target_q_func/mlp_fc1/w/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected/biases/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected/weights/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected_1/biases/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected_1/weights/Assign"
  input: "^deepq_1/beta1_power/Assign"
  input: "^deepq_1/beta2_power/Assign"
}
node {
  name: "init_2"
  op: "NoOp"
}
node {
  name: "save/filename/input"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_STRING
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_STRING
        tensor_shape {
        }
        string_val: "model"
      }
    }
  }
}
node {
  name: "save/filename"
  op: "PlaceholderWithDefault"
  input: "save/filename/input"
  attr {
    key: "dtype"
    value {
      type: DT_STRING
    }
  }
  attr {
    key: "shape"
    value {
      shape {
      }
    }
  }
}
node {
  name: "save/Const"
  op: "PlaceholderWithDefault"
  input: "save/filename"
  attr {
    key: "dtype"
    value {
      type: DT_STRING
    }
  }
  attr {
    key: "shape"
    value {
      shape {
      }
    }
  }
}
node {
  name: "save/SaveV2/tensor_names"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_STRING
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_STRING
        tensor_shape {
          dim {
            size: 51
          }
        }
        string_val: "deepq/deepq/q_func/action_value/fully_connected/biases/Adam"
        string_val: "deepq/deepq/q_func/action_value/fully_connected/biases/Adam_1"
        string_val: "deepq/deepq/q_func/action_value/fully_connected/weights/Adam"
        string_val: "deepq/deepq/q_func/action_value/fully_connected/weights/Adam_1"
        string_val: "deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam"
        string_val: "deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam_1"
        string_val: "deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam"
        string_val: "deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam_1"
        string_val: "deepq/deepq/q_func/mlp_fc0/b/Adam"
        string_val: "deepq/deepq/q_func/mlp_fc0/b/Adam_1"
        string_val: "deepq/deepq/q_func/mlp_fc0/w/Adam"
        string_val: "deepq/deepq/q_func/mlp_fc0/w/Adam_1"
        string_val: "deepq/deepq/q_func/mlp_fc1/b/Adam"
        string_val: "deepq/deepq/q_func/mlp_fc1/b/Adam_1"
        string_val: "deepq/deepq/q_func/mlp_fc1/w/Adam"
        string_val: "deepq/deepq/q_func/mlp_fc1/w/Adam_1"
        string_val: "deepq/deepq/q_func/state_value/fully_connected/biases/Adam"
        string_val: "deepq/deepq/q_func/state_value/fully_connected/biases/Adam_1"
        string_val: "deepq/deepq/q_func/state_value/fully_connected/weights/Adam"
        string_val: "deepq/deepq/q_func/state_value/fully_connected/weights/Adam_1"
        string_val: "deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam"
        string_val: "deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam_1"
        string_val: "deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam"
        string_val: "deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam_1"
        string_val: "deepq/eps"
        string_val: "deepq/q_func/action_value/fully_connected/biases"
        string_val: "deepq/q_func/action_value/fully_connected/weights"
        string_val: "deepq/q_func/action_value/fully_connected_1/biases"
        string_val: "deepq/q_func/action_value/fully_connected_1/weights"
        string_val: "deepq/q_func/mlp_fc0/b"
        string_val: "deepq/q_func/mlp_fc0/w"
        string_val: "deepq/q_func/mlp_fc1/b"
        string_val: "deepq/q_func/mlp_fc1/w"
        string_val: "deepq/q_func/state_value/fully_connected/biases"
        string_val: "deepq/q_func/state_value/fully_connected/weights"
        string_val: "deepq/q_func/state_value/fully_connected_1/biases"
        string_val: "deepq/q_func/state_value/fully_connected_1/weights"
        string_val: "deepq/target_q_func/action_value/fully_connected/biases"
        string_val: "deepq/target_q_func/action_value/fully_connected/weights"
        string_val: "deepq/target_q_func/action_value/fully_connected_1/biases"
        string_val: "deepq/target_q_func/action_value/fully_connected_1/weights"
        string_val: "deepq/target_q_func/mlp_fc0/b"
        string_val: "deepq/target_q_func/mlp_fc0/w"
        string_val: "deepq/target_q_func/mlp_fc1/b"
        string_val: "deepq/target_q_func/mlp_fc1/w"
        string_val: "deepq/target_q_func/state_value/fully_connected/biases"
        string_val: "deepq/target_q_func/state_value/fully_connected/weights"
        string_val: "deepq/target_q_func/state_value/fully_connected_1/biases"
        string_val: "deepq/target_q_func/state_value/fully_connected_1/weights"
        string_val: "deepq_1/beta1_power"
        string_val: "deepq_1/beta2_power"
      }
    }
  }
}
node {
  name: "save/SaveV2/shape_and_slices"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_STRING
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_STRING
        tensor_shape {
          dim {
            size: 51
          }
        }
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
      }
    }
  }
}

:
:
node {
  name: "save/restore_all"
  op: "NoOp"
  input: "^save/Assign"
  input: "^save/Assign_1"
  input: "^save/Assign_10"
  input: "^save/Assign_11"
  input: "^save/Assign_12"
  input: "^save/Assign_13"
  input: "^save/Assign_14"
  input: "^save/Assign_15"
  input: "^save/Assign_16"
  input: "^save/Assign_17"
  input: "^save/Assign_18"
  input: "^save/Assign_19"
  input: "^save/Assign_2"
  input: "^save/Assign_20"
  input: "^save/Assign_21"
  input: "^save/Assign_22"
  input: "^save/Assign_23"
  input: "^save/Assign_24"
  input: "^save/Assign_25"
  input: "^save/Assign_26"
  input: "^save/Assign_27"
  input: "^save/Assign_28"
  input: "^save/Assign_29"
  input: "^save/Assign_3"
  input: "^save/Assign_30"
  input: "^save/Assign_31"
  input: "^save/Assign_32"
  input: "^save/Assign_33"
  input: "^save/Assign_34"
  input: "^save/Assign_35"
  input: "^save/Assign_36"
  input: "^save/Assign_37"
  input: "^save/Assign_38"
  input: "^save/Assign_39"
  input: "^save/Assign_4"
  input: "^save/Assign_40"
  input: "^save/Assign_41"
  input: "^save/Assign_42"
  input: "^save/Assign_43"
  input: "^save/Assign_44"
  input: "^save/Assign_45"
  input: "^save/Assign_46"
  input: "^save/Assign_47"
  input: "^save/Assign_48"
  input: "^save/Assign_49"
  input: "^save/Assign_5"
  input: "^save/Assign_50"
  input: "^save/Assign_6"
  input: "^save/Assign_7"
  input: "^save/Assign_8"
  input: "^save/Assign_9"
}
versions {
  producer: 27
}

1 Answer

1 vote
/ 16 May 2019

I cannot give a complete answer.

That said, to identify the input node you usually look for a node whose op is "Placeholder". There may be more than one of them, especially if a complex optimizer or a dropout layer is involved.
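For example, something along these lines (a sketch assuming TensorFlow 1.x and the ./model/my_deepq.pbtxt path from the question) parses the text GraphDef and lists every Placeholder node, i.e. the candidate inputs:

import tensorflow as tf
from google.protobuf import text_format

# Parse the text-format GraphDef written by tf.train.write_graph
graph_def = tf.GraphDef()
with open('./model/my_deepq.pbtxt') as f:
    text_format.Merge(f.read(), graph_def)

# Placeholder ops are the graph's feedable inputs; in this graph that
# includes "deepq/observation" with shape (-1, 19, 1)
for node in graph_def.node:
    if node.op == 'Placeholder':
        print(node.name, node.attr['shape'].shape)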

When it comes to the outputs it is even trickier: essentially every node is an output. Here I can recommend two options: look for an op with an expected output name, e.g. softmax, or parse the whole graph definition and find the terminal nodes, that is, nodes that are not used as an input by any other operation.
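A sketch of the second option, reusing the graph_def parsed above: collect every node name that is consumed as an input somewhere; whatever is never consumed is a terminal node (you will still need to filter out saver/optimizer ops by name or op type):

# Names that appear as an input to some other node
consumed = set()
for node in graph_def.node:
    for inp in node.input:
        # strip the control-dependency "^" prefix and any ":<index>" output suffix
        consumed.add(inp.lstrip('^').split(':')[0])

# Nodes nobody consumes are candidate outputs
terminal = [node.name for node in graph_def.node if node.name not in consumed]
print(terminal)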

Another option is to load the graph, save a checkpoint, and inspect it in TensorBoard.
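If you only have the GraphDef, one way to get it into TensorBoard (again TF 1.x; the ./tb_logs directory is just an example) is to import it into a fresh graph and write an event file:

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    # Then run: tensorboard --logdir ./tb_logs
    writer = tf.summary.FileWriter('./tb_logs', graph)
    writer.close()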

I am not aware of a programming tool that gives a nice, queryable representation of a tf.Graph from Python.

If you want to produce a .pb file, you can take a look at this answer: Given a tensorflow model graph, how to find the input node and output node names.
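Roughly, freezing comes down to restoring the variables and folding them into constants. This is only a sketch: the checkpoint path and the output node name are assumptions, and you would substitute the output node you identified with the steps above:

with tf.Session() as sess:
    # Hypothetical checkpoint path -- use your actual saver output
    saver = tf.train.import_meta_graph('./model/my_deepq.ckpt.meta')
    saver.restore(sess, './model/my_deepq.ckpt')
    # The output node name below is an assumption; use the real Q-value output node
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def,
        ['deepq/q_func/action_value/fully_connected_1/BiasAdd'])
    tf.train.write_graph(frozen, './model', 'my_deepq_frozen.pb', as_text=False)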

...