I am following the example published here to serialize a Keras model and wrap it as a pandas_udf in Spark. However, when I run the example, I get this error:
RecursionError: maximum recursion depth exceeded while calling a Python object
Py4JJavaError: An error occurred while calling o1560.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 24.0 failed 4 times, most recent failure: Lost task 0.3 in stage 24.0 (TID 1842, 10.170.249.117, executor 5): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 182, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 695, in loads
return pickle.loads(obj, encoding=encoding)
File "/databricks/python/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/databricks/python/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "/databricks/python/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/databricks/python/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
.....
Does anyone know why this happens? Is it specific to Databricks?
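For context, here is a minimal sketch of what I am running (simplified; the model path and the single-feature input are placeholders, not the exact code from the linked example):

```python
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, pandas_udf
from pyspark.sql.types import DoubleType
from tensorflow import keras

spark = SparkSession.builder.getOrCreate()

# Placeholder path; the real model comes from the linked example.
model = keras.models.load_model("/dbfs/tmp/my_model.h5")

@pandas_udf(DoubleType())
def predict_udf(x):
    # `model` is captured from the driver in the UDF's closure, so Spark
    # pickles it to ship it to the executors; deserializing it on the
    # executor is where the traceback above is raised.
    return pd.Series(model.predict(x.to_numpy().reshape(-1, 1)).ravel())

df = spark.createDataFrame([(float(i),) for i in range(10)], ["x"])
df.withColumn("pred", predict_udf(col("x"))).collect()  # fails with the error above
```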