TensorFlow Object Detection API error: Cannot assign a device for operation unstack - PullRequest
0 votes
/ 17 February 2020

I am trying to do transfer learning with SSD Inception v2 pre-trained on COCO. When I start the training step, the following error appears:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation unstack: Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='' supported_device_types_=[CPU] possible_devices_=[]
Unpack: CPU
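
For reference, I launch training with the legacy script roughly like this (the paths are placeholders, not the real ones from my machine):

python train.py --logtostderr --pipeline_config_path=path/to/ssd_inception_v2_coco.config --train_dir=path/to/train_dir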

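If I read the "Unpack: CPU" line and the "Registered kernels" list in the full trace correctly, the problem is that the Unpack (tf.unstack) op has no GPU kernel for DT_STRING tensors, so it cannot be placed on /device:GPU:0 when that device is requested explicitly. A minimal toy sketch that reproduces the same placement error on TF 1.x (my own example, not code from the Object Detection API):

import tensorflow as tf  # TF 1.x

with tf.Graph().as_default():
    batched = tf.constant([['a', 'b'], ['c', 'd']])  # DT_STRING tensor
    with tf.device('/device:GPU:0'):
        parts = tf.unstack(batched)  # Unpack op: only a CPU kernel exists for strings
    # With soft placement disabled the placer fails exactly like in the trace above;
    # with allow_soft_placement=True it would silently fall back to the CPU.
    with tf.Session(config=tf.ConfigProto(allow_soft_placement=False)) as sess:
        print(sess.run(parts))
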
I searched for a solution, and people have gotten past this error by forcing training to run on the CPU (a sketch of what I mean by that is just before the stack trace below), which I would rather avoid. Everything worked fine until yesterday. Here is what I changed that might have affected it:

  1. I enabled hybrid mode (Lenovo), which turns on switchable graphics. I tried disabling it again, but that did not help with the error.
  2. To do the transfer learning I froze some of the network's layers using the following regular expression (a sketch of where it sits in the pipeline config follows this list): freeze_variables: "FeatureExtractor/InceptionV2/Mixed_([1-5])([ac])_([1-2])/Branch_([0-3])/([_|/])_Conv2d([0-9]|10)([_|a-z])?/." Removing it from the config does not fix the error either. I would really appreciate it if someone could help me.
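
Here is roughly how that looks in my pipeline config (only the relevant part of train_config; the checkpoint path and batch size are placeholders):

train_config {
  batch_size: 24
  fine_tune_checkpoint: "path/to/ssd_inception_v2_coco/model.ckpt"
  from_detection_checkpoint: true
  freeze_variables: "FeatureExtractor/InceptionV2/Mixed_([1-5])([ac])_([1-2])/Branch_([0-3])/([_|/])_Conv2d([0-9]|10)([_|a-z])?/."
}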

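For completeness, this is what I understand "forcing it to run on the CPU" to mean, i.e. the workaround I am trying to avoid (a minimal TF 1.x sketch, not my actual trainer code):

import os
# Option 1: hide the GPU entirely (before importing TensorFlow), so every op runs on the CPU.
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

import tensorflow as tf  # TF 1.x

# Option 2: keep the GPU, but let TensorFlow fall back to the CPU for ops
# (such as Unpack on string tensors) that have no GPU kernel.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
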
Here is the full stack trace:

2020-02-17 13:22:18.667138: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-02-17 13:22:19.372761: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: GeForce RTX 2060 major: 7 minor: 5 memoryClockRate(GHz): 1.2
pciBusID: 0000:01:00.0
2020-02-17 13:22:19.379658: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2020-02-17 13:22:19.387170: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2020-02-17 13:22:19.399953: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_100.dll
2020-02-17 13:22:19.412672: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_100.dll
2020-02-17 13:22:19.429498: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_100.dll
2020-02-17 13:22:19.441628: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_100.dll
2020-02-17 13:22:19.468049: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-02-17 13:22:19.473601: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-02-17 13:22:20.019784: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-02-17 13:22:20.024010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0
2020-02-17 13:22:20.027097: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N
2020-02-17 13:22:20.031578: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4606 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2060, pci bus id: 0000:01:00.0, compute capability: 7.5)
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>, Cannot assign a device for operation unstack: Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='' supported_device_types_=[CPU] possible_devices_=[]
Unpack: CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  unstack (Unpack) /device:GPU:0

Op: Unpack
Node attrs: T=DT_STRING, num=24, axis=0
Registered kernels:
  device='CPU'; T in [DT_INT64]
  device='CPU'; T in [DT_INT32]
  device='CPU'; T in [DT_UINT16]
  device='CPU'; T in [DT_INT16]
  device='CPU'; T in [DT_UINT8]
  device='CPU'; T in [DT_INT8]
  device='CPU'; T in [DT_HALF]
  device='CPU'; T in [DT_BFLOAT16]
  device='CPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_DOUBLE]
  device='CPU'; T in [DT_COMPLEX64]
  device='CPU'; T in [DT_COMPLEX128]
  device='CPU'; T in [DT_BOOL]
  device='CPU'; T in [DT_STRING]
  device='CPU'; T in [DT_RESOURCE]
  device='CPU'; T in [DT_VARIANT]
  device='GPU'; T in [DT_HALF]
  device='GPU'; T in [DT_FLOAT]
  device='GPU'; T in [DT_DOUBLE]
  device='GPU'; T in [DT_BFLOAT16]
  device='GPU'; T in [DT_UINT8]
  device='GPU'; T in [DT_BOOL]
  device='GPU'; T in [DT_INT32]
  device='GPU'; T in [DT_INT64]

         [[node unstack (defined at D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]

Original stack trace for 'unstack':
  File "train.py", line 187, in <module>
    tf.app.run()
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "D:\Anaconda\envs\dl\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "D:\Anaconda\envs\dl\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "train.py", line 183, in main
    graph_hook_fn=graph_rewriter_fn)
  File "C:\Users\DivyanshuSharma\Documents\MAVI\model\object_detection\legacy\trainer.py", line 295, in train
    clones = model_deploy.create_clones(deploy_config, model_fn, [input_queue])
  File "C:\Users\DivyanshuSharma\Documents\MAVI\model\object_detection\slim\deployment\model_deploy.py", line 194, in create_clones
    outputs = model_fn(*args, **kwargs)
  File "C:\Users\DivyanshuSharma\Documents\MAVI\model\object_detection\legacy\trainer.py", line 185, in _create_losses
    train_config.use_multiclass_scores)
  File "C:\Users\DivyanshuSharma\Documents\MAVI\model\object_detection\legacy\trainer.py", line 128, in get_inputs
    read_data_list = input_queue.dequeue()
  File "C:\Users\DivyanshuSharma\Documents\MAVI\model\object_detection\core\batcher.py", line 121, in dequeue
    unbatched_tensor_list = tf.unstack(batched_tensor)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\ops\array_ops.py", line 1323, in unstack
    return gen_array_ops.unpack(value, num=num, axis=axis, name=name)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\ops\gen_array_ops.py", line 12000, in unpack
    "Unpack", value=value, num=num, axis=axis, name=name)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
    op_def=op_def)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
    self._traceback = tf_stack.extract_stack()

Traceback (most recent call last):
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\client\session.py", line 1365, in _do_call
    return fn(*args)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\client\session.py", line 1348, in _run_fn
    self._extend_graph()
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\client\session.py", line 1388, in _extend_graph
    tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation unstack: Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='' supported_device_types_=[CPU] possible_devices_=[]
Unpack: CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  unstack (Unpack) /device:GPU:0

Op: Unpack
Node attrs: T=DT_STRING, num=24, axis=0
Registered kernels:
  device='CPU'; T in [DT_INT64]
  device='CPU'; T in [DT_INT32]
  device='CPU'; T in [DT_UINT16]
  device='CPU'; T in [DT_INT16]
  device='CPU'; T in [DT_UINT8]
  device='CPU'; T in [DT_INT8]
  device='CPU'; T in [DT_HALF]
  device='CPU'; T in [DT_BFLOAT16]
  device='CPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_DOUBLE]
  device='CPU'; T in [DT_COMPLEX64]
  device='CPU'; T in [DT_COMPLEX128]
  device='CPU'; T in [DT_BOOL]
  device='CPU'; T in [DT_STRING]
  device='CPU'; T in [DT_RESOURCE]
  device='CPU'; T in [DT_VARIANT]
  device='GPU'; T in [DT_HALF]
  device='GPU'; T in [DT_FLOAT]
  device='GPU'; T in [DT_DOUBLE]
  device='GPU'; T in [DT_BFLOAT16]
  device='GPU'; T in [DT_UINT8]
  device='GPU'; T in [DT_BOOL]
  device='GPU'; T in [DT_INT32]
  device='GPU'; T in [DT_INT64]

         [[{{node unstack}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 187, in <module>
    tf.app.run()
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "D:\Anaconda\envs\dl\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "D:\Anaconda\envs\dl\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "train.py", line 183, in main
    graph_hook_fn=graph_rewriter_fn)
  File "C:\Users\DivyanshuSharma\Documents\MAVI\model\object_detection\legacy\trainer.py", line 420, in train
    saver=saver)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\contrib\slim\python\slim\learning.py", line 753, in train
    master, start_standard_services=False, config=session_config) as sess:
  File "D:\Anaconda\envs\dl\lib\contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\training\supervisor.py", line 1014, in managed_session
    self.stop(close_summary_writer=close_summary_writer)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\training\supervisor.py", line 839, in stop
    ignore_live_threads=ignore_live_threads)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\training\coordinator.py", line 389, in join
    six.reraise(*self._exc_info_to_raise)
  File "D:\Anaconda\envs\dl\lib\site-packages\six.py", line 703, in reraise
    raise value
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\training\supervisor.py", line 1003, in managed_session
    start_standard_services=start_standard_services)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\training\supervisor.py", line 734, in prepare_or_wait_for_session
    init_fn=self._init_fn)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\training\session_manager.py", line 296, in prepare_session
    sess.run(init_op, feed_dict=init_feed_dict)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
    run_metadata_ptr)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
    run_metadata)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation unstack: Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='' supported_device_types_=[CPU] possible_devices_=[]
Unpack: CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  unstack (Unpack) /device:GPU:0

Op: Unpack
Node attrs: T=DT_STRING, num=24, axis=0
Registered kernels:
  device='CPU'; T in [DT_INT64]
  device='CPU'; T in [DT_INT32]
  device='CPU'; T in [DT_UINT16]
  device='CPU'; T in [DT_INT16]
  device='CPU'; T in [DT_UINT8]
  device='CPU'; T in [DT_INT8]
  device='CPU'; T in [DT_HALF]
  device='CPU'; T in [DT_BFLOAT16]
  device='CPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_DOUBLE]
  device='CPU'; T in [DT_COMPLEX64]
  device='CPU'; T in [DT_COMPLEX128]
  device='CPU'; T in [DT_BOOL]
  device='CPU'; T in [DT_STRING]
  device='CPU'; T in [DT_RESOURCE]
  device='CPU'; T in [DT_VARIANT]
  device='GPU'; T in [DT_HALF]
  device='GPU'; T in [DT_FLOAT]
  device='GPU'; T in [DT_DOUBLE]
  device='GPU'; T in [DT_BFLOAT16]
  device='GPU'; T in [DT_UINT8]
  device='GPU'; T in [DT_BOOL]
  device='GPU'; T in [DT_INT32]
  device='GPU'; T in [DT_INT64]

         [[node unstack (defined at D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]

Original stack trace for 'unstack':
  File "train.py", line 187, in <module>
    tf.app.run()
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "D:\Anaconda\envs\dl\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "D:\Anaconda\envs\dl\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "train.py", line 183, in main
    graph_hook_fn=graph_rewriter_fn)
  File "C:\Users\DivyanshuSharma\Documents\MAVI\model\object_detection\legacy\trainer.py", line 295, in train
    clones = model_deploy.create_clones(deploy_config, model_fn, [input_queue])
  File "C:\Users\DivyanshuSharma\Documents\MAVI\model\object_detection\slim\deployment\model_deploy.py", line 194, in create_clones
    outputs = model_fn(*args, **kwargs)
  File "C:\Users\DivyanshuSharma\Documents\MAVI\model\object_detection\legacy\trainer.py", line 185, in _create_losses
    train_config.use_multiclass_scores)
  File "C:\Users\DivyanshuSharma\Documents\MAVI\model\object_detection\legacy\trainer.py", line 128, in get_inputs
    read_data_list = input_queue.dequeue()
  File "C:\Users\DivyanshuSharma\Documents\MAVI\model\object_detection\core\batcher.py", line 121, in dequeue
    unbatched_tensor_list = tf.unstack(batched_tensor)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\ops\array_ops.py", line 1323, in unstack
    return gen_array_ops.unpack(value, num=num, axis=axis, name=name)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\ops\gen_array_ops.py", line 12000, in unpack
    "Unpack", value=value, num=num, axis=axis, name=name)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
    op_def=op_def)
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
    self._traceback = tf_stack.extract_stack()

ERROR:tensorflow:==================================
Object was never used (type <class 'tensorflow.python.framework.ops.Tensor'>):
<tf.Tensor 'init_ops/report_uninitialized_variables/boolean_mask/GatherV2:0' shape=(?,) dtype=string>
If you want to mark it as used call its "mark_used()" method.
It was originally created here:
  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 324, in new_func
    return func(*args, **kwargs)  File "train.py", line 183, in main
    graph_hook_fn=graph_rewriter_fn)  File "C:\Users\DivyanshuSharma\Documents\MAVI\model\object_detection\legacy\trainer.py", line 420, in train
    saver=saver)  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\contrib\slim\python\slim\learning.py", line 796, in train
    should_retry = True  File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow_core\python\util\tf_should_use.py", line 198, in wrapped
    return _add_should_use_warning(fn(*args, **kwargs))
==================================
...