PySpark MultilayerPerceptronClassifier doesn't seem to work with OneHotEncoding
0 votes
/ 06 November 2019

I am running a dummy example to perform classification with PySpark.

I built an ETL pipeline in which the labels are transformed with one-hot encoding, but PySpark throws:

IllegalArgumentException: 'requirement failed: Column label must be of type numeric but was actually of type struct<type:tinyint,size:int,indices:array<int>,values:array<double>>.'

Code for the sparse one-hot encoding

from pyspark.ml.feature import StringIndexer, StandardScaler, OneHotEncoderEstimator, VectorAssembler
from pyspark.ml import Pipeline
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.sql.functions import rand

df = spark.createDataFrame([
    ("Music", 3.45, 1245),
    ("Sports", 4.49, 3456),
    ("Music", 1.22, 323),
    ("Animals", 2.45, 24)], ["category", "rating", "views"])

"""ETL Pipeline over
the whole dataset
"""
indexer = StringIndexer(inputCol="category", outputCol="class", handleInvalid="skip")
encoder = OneHotEncoderEstimator(inputCols=["class"], outputCols=["label"])
encoder.setDropLast(False)
vectorizer = VectorAssembler(inputCols=["rating", "views"], outputCol="unscaled_features")

etl_pipeline    = Pipeline(stages=[indexer,encoder,vectorizer])
etlModel        = etl_pipeline.fit(df)
tr_df           = etlModel.transform(df)

tr_df.show()

"""Training Pipeline
"""
train_data, test_data = tr_df.randomSplit([.8, .2],seed=23487)

scaler = StandardScaler(inputCol="unscaled_features", outputCol="features",
                        withStd=True, withMean=True)
# specify layers for the neural network:
layers = [4, 5, 4, 3]
# create the trainer and set its parameters
trainer = MultilayerPerceptronClassifier(maxIter=100, layers=layers, blockSize=1, seed=1234)

ml_pipeline = Pipeline(stages=[scaler, trainer])
mlModel = ml_pipeline.fit(train_data)
result = mlModel.transform(test_data)
predictionAndLabels = result.select("prediction", "label")
evaluator = MulticlassClassificationEvaluator(metricName="accuracy")
print("Test set accuracy = " +    str(evaluator.evaluate(predictionAndLabels)))

Out

+--------+------+-----+-----+-------------+-----------------+
|category|rating|views|class|        label|unscaled_features|
+--------+------+-----+-----+-------------+-----------------+
|   Music|  3.45| 1245|  0.0|(3,[0],[1.0])|    [3.45,1245.0]|
|  Sports|  4.49| 3456|  2.0|(3,[2],[1.0])|    [4.49,3456.0]|
|   Music|  1.22|  323|  0.0|(3,[0],[1.0])|     [1.22,323.0]|
| Animals|  2.45|   24|  1.0|(3,[1],[1.0])|      [2.45,24.0]|
+--------+------+-----+-----+-------------+-----------------+


IllegalArgumentException: 'requirement failed: Column label must be of type numeric but was actually of type struct<type:tinyint,size:int,indices:array<int>,values:array<double>>.'

The strange thing is that even though I convert the one-hot label from a SparseVector to a DenseVector, the error persists. It looks as if MultilayerPerceptronClassifier converts the dense labels back to sparse ones and fails to handle them properly ...

Code for the ETL with a dense one-hot encoding

from pyspark.ml.feature import StringIndexer, StandardScaler, OneHotEncoderEstimator, VectorAssembler
from pyspark.ml import Pipeline
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.linalg import DenseVector
from pyspark.sql import Row
from pyspark.sql.functions import rand

df = spark.createDataFrame([
    ("Music", 3.45, 1245),
    ("Sports", 4.49, 3456),
    ("Music", 1.22, 323),
    ("Animals", 2.45, 24)], ["category", "rating", "views"])

"""ETL Pipeline over
the whole dataset
"""
indexer = StringIndexer(inputCol="category", outputCol="class", handleInvalid="skip")
encoder = OneHotEncoderEstimator(inputCols=["class"], outputCols=["label"])
encoder.setDropLast(False)
vectorizer = VectorAssembler(inputCols=["rating", "views"], outputCol="unscaled_features")

etl_pipeline    = Pipeline(stages=[indexer,encoder,vectorizer])
etlModel        = etl_pipeline.fit(df)
tr_df           = etlModel.transform(df)


# Convert the sparse one-hot label to a DenseVector, row by row
tr_df = tr_df.select("label", "unscaled_features")
rdd = tr_df.rdd.map(lambda x: Row(label=DenseVector(x[0].toArray()), unscaled_features=x[1])
                    if (len(x) > 1 and hasattr(x[0], "toArray"))
                    else Row(label=None, unscaled_features=DenseVector([])))
tr_df = rdd.toDF()

tr_df.show()


"""Training Pipeline
"""
train_data, test_data = tr_df.randomSplit([.8, .2],seed=23487)

scaler = StandardScaler(inputCol="unscaled_features", outputCol="features",
                        withStd=True, withMean=True)
# specify layers for the neural network:
layers = [4, 5, 4, 3]
# create the trainer and set its parameters
trainer = MultilayerPerceptronClassifier(maxIter=100, layers=layers, blockSize=1, seed=1234)

ml_pipeline = Pipeline(stages=[scaler, trainer])
mlModel = ml_pipeline.fit(train_data)
result = mlModel.transform(test_data)
predictionAndLabels = result.select("prediction", "label")
evaluator = MulticlassClassificationEvaluator(metricName="accuracy")
print("Test set accuracy = " + str(evaluator.evaluate(predictionAndLabels)))

Out

+-------------+-----------------+
|        label|unscaled_features|
+-------------+-----------------+
|[1.0,0.0,0.0]|    [3.45,1245.0]|
|[0.0,0.0,1.0]|    [4.49,3456.0]|
|[1.0,0.0,0.0]|     [1.22,323.0]|
|[0.0,1.0,0.0]|      [2.45,24.0]|
+-------------+-----------------+

IllegalArgumentException: 'requirement failed: Column label must be of type numeric but was actually of type struct<type:tinyint,size:int,indices:array<int>,values:array<double>>.'
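
For what it's worth, a quick schema check on the converted DataFrame (a minimal sketch against the tr_df built above) shows why the dense conversion cannot help: DenseVector and SparseVector share the same Spark SQL type (VectorUDT), so the label column still has the vector struct type that the schema validation rejects.

# Sketch: both dense and sparse vectors are stored as VectorUDT, i.e.
# struct<type:tinyint,size:int,indices:array<int>,values:array<double>>
tr_df.printSchema()
# root
#  |-- label: vector (nullable = true)
#  |-- unscaled_features: vector (nullable = true)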

UPDATE 1: REMOVING THE ONE-HOT ENCODING FROM THE PIPELINE

Code

from pyspark.ml.feature import StringIndexer, StandardScaler, OneHotEncoderEstimator, VectorAssembler
from pyspark.ml import Pipeline
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.sql.functions import rand

df = spark.createDataFrame([
    ("Music", 3.45, 1245),
    ("Sports", 4.49, 3456),
    ("Music", 1.22, 323),
    ("Animals", 2.45, 24)], ["category", "rating", "views"])

"""ETL Pipeline over
the whole dataset
"""
indexer = StringIndexer(inputCol="category", outputCol="label", handleInvalid="skip")
# encoder = OneHotEncoderEstimator(inputCols=["class"], outputCols=["label"])
# encoder.setDropLast(False)
vectorizer = VectorAssembler(inputCols=["rating", "views"], outputCol="unscaled_features")

etl_pipeline    = Pipeline(stages=[indexer,vectorizer])
etlModel        = etl_pipeline.fit(df)
tr_df           = etlModel.transform(df)


tr_df.show()


"""Training Pipeline
"""
train_data, test_data = tr_df.randomSplit([.8, .2],seed=23487)

scaler = StandardScaler(inputCol="unscaled_features", outputCol="features",
                        withStd=True, withMean=True)
# specify layers for the neural network:
layers = [4, 5, 4, 3]
# create the trainer and set its parameters
trainer = MultilayerPerceptronClassifier(maxIter=100, layers=layers, blockSize=128, seed=1234)

ml_pipeline = Pipeline(stages=[scaler, trainer])
mlModel = ml_pipeline.fit(train_data)
result = mlModel.transform(test_data)
predictionAndLabels = result.select("prediction", "label")
evaluator = MulticlassClassificationEvaluator(metricName="accuracy")
print("Test set accuracy = " + str(evaluator.evaluate(predictionAndLabels)))

Out

+--------+------+-----+-----+-----------------+
|category|rating|views|label|unscaled_features|
+--------+------+-----+-----+-----------------+
|   Music|  3.45| 1245|  0.0|    [3.45,1245.0]|
|  Sports|  4.49| 3456|  2.0|    [4.49,3456.0]|
|   Music|  1.22|  323|  0.0|     [1.22,323.0]|
| Animals|  2.45|   24|  1.0|      [2.45,24.0]|
+--------+------+-----+-----+-----------------+

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-9-58967f1d5bce> in <module>
     60 
     61 ml_pipeline = Pipeline(stages=[scaler, trainer])
---> 62 mlModel = ml_pipeline.fit(train_data)
     63 result = mlModel.transform(test_data)
     64 predictionAndLabels = result.select("prediction", "label")

~/.local/lib/python3.5/site-packages/pyspark/ml/base.py in fit(self, dataset, params)
    130                 return self.copy(params)._fit(dataset)
    131             else:
--> 132                 return self._fit(dataset)
    133         else:
    134             raise ValueError("Params must be either a param map or a list/tuple of param maps, "

~/.local/lib/python3.5/site-packages/pyspark/ml/pipeline.py in _fit(self, dataset)
    107                     dataset = stage.transform(dataset)
    108                 else:  # must be an Estimator
--> 109                     model = stage.fit(dataset)
    110                     transformers.append(model)
    111                     if i < indexOfLastEstimator:

~/.local/lib/python3.5/site-packages/pyspark/ml/base.py in fit(self, dataset, params)
    130                 return self.copy(params)._fit(dataset)
    131             else:
--> 132                 return self._fit(dataset)
    133         else:
    134             raise ValueError("Params must be either a param map or a list/tuple of param maps, "

~/.local/lib/python3.5/site-packages/pyspark/ml/wrapper.py in _fit(self, dataset)
    293 
    294     def _fit(self, dataset):
--> 295         java_model = self._fit_java(dataset)
    296         model = self._create_model(java_model)
    297         return self._copyValues(model)

~/.local/lib/python3.5/site-packages/pyspark/ml/wrapper.py in _fit_java(self, dataset)
    290         """
    291         self._transfer_params_to_java()
--> 292         return self._java_obj.fit(dataset._jdf)
    293 
    294     def _fit(self, dataset):

~/.local/lib/python3.5/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

~/.local/lib/python3.5/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

~/.local/lib/python3.5/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o870.fit.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 26.0 failed 1 times, most recent failure: Lost task 0.0 in stage 26.0 (TID 26, localhost, executor driver): java.lang.ArrayIndexOutOfBoundsException
    at java.lang.System.arraycopy(Native Method)
    at org.apache.spark.ml.ann.DataStacker$$anonfun$5$$anonfun$apply$3$$anonfun$apply$4.apply(Layer.scala:665)
    at org.apache.spark.ml.ann.DataStacker$$anonfun$5$$anonfun$apply$3$$anonfun$apply$4.apply(Layer.scala:664)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.ml.ann.DataStacker$$anonfun$5$$anonfun$apply$3.apply(Layer.scala:664)
    at org.apache.spark.ml.ann.DataStacker$$anonfun$5$$anonfun$apply$3.apply(Layer.scala:660)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
    at org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:222)
    at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:299)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1165)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1156)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1168)
    at org.apache.spark.mllib.optimization.LBFGS$.runLBFGS(LBFGS.scala:195)
    at org.apache.spark.mllib.optimization.LBFGS.optimize(LBFGS.scala:142)
    at org.apache.spark.ml.ann.FeedForwardTrainer.train(Layer.scala:854)
    at org.apache.spark.ml.classification.MultilayerPerceptronClassifier$$anonfun$train$1.apply(MultilayerPerceptronClassifier.scala:249)
    at org.apache.spark.ml.classification.MultilayerPerceptronClassifier$$anonfun$train$1.apply(MultilayerPerceptronClassifier.scala:205)
    at org.apache.spark.ml.util.Instrumentation$$anonfun$11.apply(Instrumentation.scala:185)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:185)
    at org.apache.spark.ml.classification.MultilayerPerceptronClassifier.train(MultilayerPerceptronClassifier.scala:205)
    at org.apache.spark.ml.classification.MultilayerPerceptronClassifier.train(MultilayerPerceptronClassifier.scala:114)
    at org.apache.spark.ml.Predictor.fit(Predictor.scala:118)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArrayIndexOutOfBoundsException
    at java.lang.System.arraycopy(Native Method)
    at org.apache.spark.ml.ann.DataStacker$$anonfun$5$$anonfun$apply$3$$anonfun$apply$4.apply(Layer.scala:665)
    at org.apache.spark.ml.ann.DataStacker$$anonfun$5$$anonfun$apply$3$$anonfun$apply$4.apply(Layer.scala:664)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.ml.ann.DataStacker$$anonfun$5$$anonfun$apply$3.apply(Layer.scala:664)
    at org.apache.spark.ml.ann.DataStacker$$anonfun$5$$anonfun$apply$3.apply(Layer.scala:660)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
    at org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:222)
    at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:299)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1165)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1156)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more

PySpark version

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.4
      /_/

Using Scala version 2.11.12, OpenJDK 64-Bit Server VM, 1.8.0_222

Java version

openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1ubuntu1~16.04.1-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)

1 Answer

0 votes
/ 07 November 2019

The features should be run through VectorAssembler, but you do not need to one-hot encode the label column. Just pass the label column with the numeric class indices as they are:

+------+-----------------+
| label|unscaled_features|
+------+-----------------+
|     0|    [3.45,1245.0]|
|     2|    [4.49,3456.0]|
|     0|     [1.22,323.0]|
|     1|      [2.45,24.0]|
+------+-----------------+

This should resolve your error.
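
For completeness, here is a minimal sketch of the whole corrected pipeline (assuming the same spark session and df from the question; the hidden layer sizes are an arbitrary choice for this toy data):

from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, StandardScaler, VectorAssembler
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# StringIndexer already produces a numeric class index -- use it directly as "label"
indexer = StringIndexer(inputCol="category", outputCol="label", handleInvalid="skip")
vectorizer = VectorAssembler(inputCols=["rating", "views"], outputCol="unscaled_features")
scaler = StandardScaler(inputCol="unscaled_features", outputCol="features",
                        withStd=True, withMean=True)

# The input layer must equal the feature vector size (2 here) and the
# output layer must equal the number of classes (3 here)
layers = [2, 5, 4, 3]
trainer = MultilayerPerceptronClassifier(maxIter=100, layers=layers, blockSize=128, seed=1234)

pipeline = Pipeline(stages=[indexer, vectorizer, scaler, trainer])
train_data, test_data = df.randomSplit([.8, .2], seed=23487)
model = pipeline.fit(train_data)
result = model.transform(test_data)

evaluator = MulticlassClassificationEvaluator(metricName="accuracy")
print("Test set accuracy = " + str(evaluator.evaluate(result.select("prediction", "label"))))

Note that the ArrayIndexOutOfBoundsException in UPDATE 1 is consistent with a layer-size mismatch: there layers = [4, 5, 4, 3], while the assembled feature vector only has two components.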

...