Preprocessing text to feed into a model trained on the imdb dataset - PullRequest
0 votes
/ June 16, 2020

I trained this model:

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential

# Load the IMDB reviews, keeping only the 10,000 most frequent words,
# and pad/truncate every review to 200 tokens.
(training_eins, training_zwei), (test_eins, test_zwei) = tf.keras.datasets.imdb.load_data(num_words=10_000)
training_eins = tf.keras.preprocessing.sequence.pad_sequences(training_eins, maxlen=200)
test_eins = tf.keras.preprocessing.sequence.pad_sequences(test_eins, maxlen=200)

modell = Sequential()
modell.add(layers.Embedding(10_000,256,input_length=200))
modell.add(layers.Dropout(0.3))
modell.add(layers.GlobalMaxPooling1D())
modell.add(layers.Dense(128))
modell.add(layers.Activation("relu"))
modell.add(layers.Dropout(0.5))
modell.add(layers.Dense(1))
modell.add(layers.Activation("sigmoid"))

modell.compile(loss = "binary_crossentropy", optimizer = "adam", metrics = ["acc"])
modell.summary()

ergebnis = modell.fit(training_eins,
                      training_zwei, 
                      epochs = 10, 
                      verbose = 1, 
                      batch_size = 500, 
                      validation_data = (test_eins,test_zwei))

Now I want to test the model's performance on this text (as an example): very bad, I am truly disappointed

So how can I convert this text into a list that can be fed to the model?


All I know is that the model expects lists like

[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297,
98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
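As far as I can tell, these integers are positions in the IMDB word index, shifted by 3 because the lowest indices are reserved, so a sequence like this can be decoded back into words roughly as follows (just a sketch, not part of my training code; the variable names are only for illustration):

import tensorflow as tf

# Reverse the word index, accounting for the offset of 3 used by load_data.
wortindex = tf.keras.datasets.imdb.get_word_index()
umkehr = {v + 3: k for k, v in wortindex.items()}
umkehr[0], umkehr[1], umkehr[2], umkehr[3] = "<PAD>", "<START>", "<UNK>", "<UNUSED>"

folge = [1, 14, 22, 16, 43, 530, 973]  # the first few indices from the list above
print(" ".join(umkehr.get(i, "<UNK>") for i in folge))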

1 Answer

0 votes
/ June 17, 2020

Figured it out:

import re
import tensorflow as tf

# Word -> index mapping; shift by 3 because indices 0-3 are reserved tokens.
buch = tf.keras.datasets.imdb.get_word_index()
buch = {k: (v + 3) for k, v in buch.items()}
buch["<PAD>"] = 0
buch["<START>"] = 1
buch["<UNK>"] = 2
buch["<UNUSED>"] = 3

# Strip punctuation and lowercase (the keys of the word index are lowercase).
eingabe = re.sub(r"[^a-zA-Z0-9 ]", "", "very bad, I am truly disappointed").lower()

# Map each word to its index; unknown words and words outside the
# num_words=10_000 vocabulary are mapped to 0.
munition = [[((buch[inhalt] if buch[inhalt] < 10_000 else 0) if inhalt in buch else 0) for inhalt in eingabe.split(" ")]]

training_eins = tf.keras.preprocessing.sequence.pad_sequences(munition, maxlen=200)
print(modell.predict(training_eins))
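Since the model ends in a single sigmoid unit, predict returns one probability per input: values near 0 mean negative, values near 1 mean positive. A minimal way to turn that into a label (the 0.5 threshold is just the usual default):

wahrscheinlichkeit = modell.predict(training_eins)[0][0]
print("positive" if wahrscheinlichkeit >= 0.5 else "negative")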

Or, more generally, if you have more than one sentence:

import re
import numpy as np
import tensorflow as tf

# Same word -> index mapping as above, shifted by 3 for the reserved tokens.
buch = tf.keras.datasets.imdb.get_word_index()
buch = {k: (v + 3) for k, v in buch.items()}
buch["<PAD>"] = 0
buch["<START>"] = 1
buch["<UNK>"] = 2
buch["<UNUSED>"] = 3

modell = tf.keras.models.load_model("...")

eingabe_eins = ["the movie was boring and i did not really enjoy it",
                "i love it, great movie",
                "that was a good movie, very funny. i recommend it highly",
                "the worst thing i've seen in my life yet. not good, very very bad!",
                "very bad, fully dissapointed!",
                "very good if u wanna stress out someone. but couldnt ever watch this myself..",
                "Yet another film that tries to pass off a whole lot of screaming and crying as great acting. This fails especially in Brosnan's big crying scene, in which he audibly squeaks",
                "the movie is very good, i love it so much, my favorite movie ever, its beatiful. besides that i acknowledge that im not an expert, but i know this movie is somthing special. Ive never felt such a great atmosphere."]

# Strip punctuation and lowercase each sentence (the word index keys are lowercase).
eingabe_zwei = [re.sub(r"[^a-zA-Z0-9 ]", "", inhalt).lower() for inhalt in eingabe_eins]

# Map each word to its index; unknown and out-of-vocabulary words become 0.
munition = [[((buch[inhalt] if buch[inhalt] < 10_000 else 0) if inhalt in buch else 0) for inhalt in eingabe.split(" ")] for eingabe in eingabe_zwei]

# Pad every sequence with zeros up to the longest one (pad_sequences below
# would handle this anyway; this step is mainly here to inspect the result).
maximal = np.max([len(i) for i in munition])
munition = [np.append(i, np.zeros((maximal - len(i),))) if maximal != len(i) else i for i in munition]
print(munition)

training_eins = tf.keras.preprocessing.sequence.pad_sequences(munition, maxlen=200)
print(modell.predict(training_eins))
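If you need this more than once, it may be cleaner to wrap the cleaning, lookup and padding into a small helper. A minimal sketch of that idea - the function name kodieren and its defaults are only for illustration:

import re
import tensorflow as tf

def kodieren(saetze, buch, num_words=10_000, maxlen=200):
    # Clean and lowercase each sentence, map words to indices
    # (unknown/out-of-vocabulary -> 0) and pad to the training length.
    folgen = []
    for satz in saetze:
        satz = re.sub(r"[^a-zA-Z0-9 ]", "", satz.lower())
        folgen.append([buch[w] if buch.get(w, num_words) < num_words else 0
                       for w in satz.split(" ")])
    return tf.keras.preprocessing.sequence.pad_sequences(folgen, maxlen=maxlen)

print(modell.predict(kodieren(eingabe_eins, buch)))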

By the way: all of the inputs were classified correctly except very good if u wanna stress out someone. but couldnt ever watch this myself... I used that one to check whether the AI can be fooled - apparently that is quite easy to do.
