If this question is off topic, please feel free to point me to another StackExchange site. :-)
I am working with Keras and have fairly limited memory on my GPU (GeForce GTX 970, ~4 GB). As a consequence, I run out of memory (OOM) when training with Keras if the batch size is set above a certain level. Lowering the batch size avoids the problem, but Keras then outputs the following warnings (a minimal sketch of the fit call I mean follows the log):
2019-01-02 09:47:03.173259: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.57GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.211139: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.68GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.268074: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.95GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.685032: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.39GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.732304: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.56GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.850711: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.39GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.879135: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.48GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.963522: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.42GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.984897: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.47GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:04.058733: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.08GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
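For reference, here is a minimal sketch of the kind of training call involved; the tiny Dense model and the random data are just placeholders for my actual model and dataset, only the batch_size handling reflects what I actually do.

```python
# Minimal sketch (placeholder model and data); only the batch_size handling
# matches my real training script.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Placeholder data standing in for my real dataset.
x_train = np.random.rand(1024, 100)
y_train = np.random.randint(2, size=(1024, 1))

# Placeholder model standing in for my real network.
model = Sequential([
    Dense(64, activation='relu', input_shape=(100,)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# With a larger batch_size I get hard OOM errors; lowering it (here 32)
# lets training run, but TensorFlow then prints the allocator warnings above.
model.fit(x_train, y_train, batch_size=32, epochs=1)
```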
What do these warnings mean for me as a user? What are these performance gains? Does it only mean that training runs faster, or would I even get better results in terms of a lower validation loss?
In my setup I am using Keras with the TensorFlow backend and tensorflow-gpu==1.8.0.
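In case it helps, this is how I check the versions and that the GPU is picked up (the exact output obviously depends on the machine):

```python
# Version / GPU sanity check for this setup (Keras with the TensorFlow backend,
# tensorflow-gpu 1.8.0).
import keras
import tensorflow as tf
from tensorflow.python.client import device_lib

print(keras.__version__)                 # Keras version in use
print(tf.__version__)                    # expected: 1.8.0
print(device_lib.list_local_devices())   # should list the GTX 970 as /device:GPU:0
```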