I built an algorithm to classify products on a marketplace, but I cannot get the predicted label back. I have tried several commands, but they all raise an error (below). How do I return the prediction and its label with a percentage (I am using cross-validation)?
Example:
I want to submit the product " 7 Shakra Bracelet 7 chakra bracelet, in blue or black." and get back its label and how confident the model is (the label returned for this product should be "Bracelet").
Training data
data = spark.createDataFrame([
("Bracelet"," 7 Shakra Bracelet 7 chakra bracelet, in blue or black."),
("Bracelet"," Anchor Bracelet Mens Black leather bracelet with gold or silver anchor for men."),
("Bracelet"," Bangle Bracelet Gold bangle bracelet with studded jewels."),
("Bracelet"," Boho Bangle Bracelet Gold boho bangle bracelet with multicolor tassels."),
("Earrings"," Boho Earrings Turquoise globe earrings on 14k gold hooks."),
("Necklace"," Choker with Bead Black choker necklace with 14k gold bead."),
("Necklace"," Choker with Triangle Black choker with silver triangle pendant."),
("Necklace"," Dainty Gold Necklace Dainty gold necklace with two pendants."),
("Necklace"," Dreamcatcher Pendant Necklace Turquoise beaded dream catcher necklace. Silver feathers adorn this beautiful dream catcher, which move and twinkle as you walk."),
("Earrings"," Galaxy Earrings One set of galaxy earrings, with sterling silver clasps."),
("Necklace"," Gold Bird Necklace 14k Gold delicate necklace, with bird between two chains."),
("Earrings"," Gold Elephant Earrings Small 14k gold elephant earrings, with opal ear detail."),
("Earrings"," Guardian Angel Earrings Sterling silver guardian angel earrings with diamond gemstones."),
("Bracelet"," Moon Charm Bracelet Moon 14k gold chain friendship bracelet."),
("Necklace"," Origami Crane Necklace Sterling silver origami crane necklace."),
("Necklace"," Pretty Gold Necklace 14k gold and turquoise necklace. Stunning beaded turquoise on gold and pendant filled double chain design."),
("Necklace"," Silver Threader Necklace Sterling silver chain thread through circle necklace."),
("Necklace"," Stylish Summer Necklace Double chained gold boho necklace with turquoise pendant.")
], ["id", "description"])
Tokenizer, text processing and CountVectorizer
from pyspark.ml.feature import RegexTokenizer, StopWordsRemover, CountVectorizer
from pyspark.ml.classification import LogisticRegression
# regular expression tokenizer
regexTokenizer = RegexTokenizer(inputCol="description", outputCol="words", pattern="\\W")
# stop words
add_stopwords = ["http","https","amp","rt","t","c","the"]
stopwordsRemover = StopWordsRemover(inputCol="words", outputCol="filtered").setStopWords(add_stopwords)
# bag of words count
countVectors = CountVectorizer(inputCol="filtered", outputCol="features", vocabSize=10000, minDF=5)
Creating the label and building the dataset
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler
label_stringIdx = StringIndexer(inputCol = "id", outputCol = "label")
pipeline = Pipeline(stages=[regexTokenizer, stopwordsRemover, countVectors, label_stringIdx])
# Fit the pipeline to training documents.
pipelineFit = pipeline.fit(data)
dataset = pipelineFit.transform(data)
So far, the resulting dataset looks like this:
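The new columns can be checked with a quick show(), for example (a minimal inspection sketch, not part of the pipeline itself):
# Peek at the tokens, the filtered tokens, the count vectors and the numeric label
dataset.select("id", "filtered", "features", "label").show(5, truncate=False)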
Setting up and fitting cross-validation
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0)
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
# Create ParamGrid for Cross Validation
paramGrid = (ParamGridBuilder()
             .addGrid(lr.regParam, [0.1, 0.3, 0.5])         # regularization parameter
             .addGrid(lr.elasticNetParam, [0.0, 0.1, 0.2])  # elastic net parameter (ridge = 0)
             # .addGrid(lr.maxIter, [10, 20, 50])           # number of iterations
             # .addGrid(idf.numFeatures, [10, 100, 1000])   # number of features
             .build())
# Create 5-fold CrossValidator
cv = CrossValidator(estimator=lr,
                    estimatorParamMaps=paramGrid,
                    evaluator=evaluator,
                    numFolds=5)
cvModel = cv.fit(dataset)
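After fitting, the cross-validated scores can be checked like this (a small sketch; avgMetrics holds one score per parameter combination, and the default metric of MulticlassClassificationEvaluator is f1):
# One cross-validated score per entry of paramGrid
print(cvModel.avgMetrics)
# The LogisticRegressionModel refit on the whole dataset with the best parameters
print(cvModel.bestModel)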
Creating the data to classify
testData = spark.createDataFrame([
(10," 7 Shakra Bracelet 7 chakra bracelet, in blue or black."),
(11," Anchor Bracelet Mens Black leather bracelet with gold or silver anchor for men."),
(12," Bangle Bracelet Gold bangle bracelet with studded jewels."),
(13," 7 Shakra Bracelet 7 chakra bracelet, in blue or black."),
(14," Anchor Bracelet Mens Black leather bracelet with gold or silver anchor for men."),
(15," Bangle Bracelet Gold bangle bracelet with studded jewels."),
(100," 7 Shakra Bracelet 7 chakra bracelet, in blue or black."),
(16," Anchor Bracelet Mens Black leather bracelet with gold or silver anchor for men."),
(17," Bangle Bracelet Gold bangle bracelet with studded jewels."),
(101," 7 Shakra Bracelet 7 chakra bracelet, in blue or black."),
(18," Anchor Bracelet Mens Black leather bracelet with gold or silver anchor for men."),
(19," Bangle Bracelet Gold bangle bracelet with studded jewels."),
(104," 7 Shakra Bracelet 7 chakra bracelet, in blue or black."),
(20," Anchor Bracelet Mens Black leather bracelet with gold or silver anchor for men."),
(21," Bangle Bracelet Gold bangle bracelet with studded jewels.")
], ["rowid", "description"])
I build a new dataset that goes through the same processing, only without the label column
pipeline = Pipeline(stages=[regexTokenizer, stopwordsRemover, countVectors])
# Fit the feature pipeline to the test documents.
pipelineFit = pipeline.fit(testData)
datasetTest = pipelineFit.transform(testData)
Here I compute the new prediction using datasetTest; up to this point everything works correctly.
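Roughly, that step looks like this (a sketch; predictions is the name I use for the resulting DataFrame below):
# Apply the fitted cross-validated model to the transformed test data
predictions = cvModel.transform(datasetTest)
# rawPrediction, probability and prediction columns are appended
predictions.printSchema()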
Now the problem: I cannot see any information from the prediction variables. I tried the commands below, but they all raise errors.
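What I am after is roughly the following (a sketch of what I expect, not the exact commands I ran; it reuses label_stringIdx and data from the training code and the predictions DataFrame from the step above):
from pyspark.ml.feature import IndexToString

# Recover the label strings in the order the StringIndexer assigned them
labels = label_stringIdx.fit(data).labels

# Map the numeric "prediction" column back to the original product label
converter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=labels)
result = converter.transform(predictions)

# "probability" holds one entry per class; the entry for the predicted class is the
# confidence that could be reported as a percentage
result.select("rowid", "predictedLabel", "prediction", "probability").show(truncate=False)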