I am trying to predict an array of values with a Keras neural network as follows:
import numpy as np
from keras import models, layers
from keras.wrappers.scikit_learn import KerasClassifier

def create_network():
    np.random.seed(0)
    number_of_features = 22
    # Start neural network
    network = models.Sequential()
    # Adding three layers
    # Add fully connected layer with ReLU
    network.add(layers.Dense(units=35, activation='relu', input_shape=(number_of_features,)))
    # Add fully connected layer with ReLU
    network.add(layers.Dense(units=35, activation='relu'))
    # Add fully connected output layer with no activation function
    network.add(layers.Dense(units=1))
    # Compile neural network
    network.compile(loss='mse', optimizer='RMSprop', metrics=['mse'])
    # Return compiled network
    return network

neural_network = KerasClassifier(build_fn=create_network, epochs=10, batch_size=100, verbose=0)
neural_network.fit(x_train, y_train)
neural_network.predict(x_test)
When I use this code to predict, I get the following output:
array([[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031],
[-0.23991031]])
The values in this array should not all be identical, since the input data points differ. Why is this happening?
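As a quick sanity check (a small sketch with the printed predictions hard-coded), one can verify the rows really are identical rather than merely rounded alike in the display:

```python
import numpy as np

# The 21 predictions copied from the output above
preds = np.array([[-0.23991031]] * 21)

# True only if every row equals the first one exactly
print(bool(np.all(preds == preds[0])))  # True
```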
The data is split into training and test sets as follows:
x_train, x_test, y_train, y_test = train_test_split(df3, df2['RETURN NEXT 12 MONTHS'], test_size=0.2)  # 0.2 means 20% of the values are used for testing
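For reference, a minimal self-contained sketch of the same split, using synthetic stand-ins for df3 and df2['RETURN NEXT 12 MONTHS'] (the real dataframes are not shown) and a fixed random_state so the split is reproducible:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 100 rows and 22 feature columns,
# matching the shapes described in the question
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 22))
y = rng.normal(size=100)

# test_size=0.2 holds out 20% of the rows for testing;
# random_state pins the shuffle so the split is repeatable
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

print(x_train.shape, x_test.shape)  # (80, 22) (20, 22)
```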
Here is some sample data:
x_train
RETURN ON INVESTED CAPITAL ... VOLATILITY
226 0.0436 ... 0.3676
309 0.1073 ... 0.3552
306 0.1073 ... 0.3660
238 0.1257 ... 0.4352
254 0.1960 ... 0.4230
308 0.1073 ... 0.3661
327 0.2108 ... 0.2674
325 0.2108 ... 0.2836
...
The dataframe above has 22 columns and about 100 rows.
The corresponding y training data:
y_train
226 0.137662
309 1.100000
306 0.725738
238 0.244292
254 -0.557806
308 1.052402
327 -0.035730
...
I have tried different numbers of epochs, different batch sizes, and different model architectures, but they all produce the same output for every input.
Loss per epoch:
Epoch 1/10
10/100 [==>...........................] - ETA: 1s - loss: 1525.8176 - mse: 1525.8176
100/100 [==============================] - 0s 2ms/step - loss: 13771.8389 - mse: 13771.8389
Epoch 2/10
10/100 [==>...........................] - ETA: 0s - loss: 4315.0015 - mse: 4315.0015
30/100 [========>.....................] - ETA: 0s - loss: 23554.2446 - mse: 23554.2441
40/100 [===========>..................] - ETA: 0s - loss: 18089.7297 - mse: 18089.7305
50/100 [==============>...............] - ETA: 0s - loss: 15002.7878 - mse: 15002.7871
100/100 [==============================] - 0s 2ms/step - loss: 10520.1019 - mse: 10520.1025
Epoch 3/10
10/100 [==>...........................] - ETA: 0s - loss: 2722.1135 - mse: 2722.1135
100/100 [==============================] - 0s 167us/step - loss: 8500.4698 - mse: 8500.4697
Epoch 4/10
10/100 [==>...........................] - ETA: 0s - loss: 3192.2231 - mse: 3192.2231
50/100 [==============>...............] - ETA: 0s - loss: 4860.0622 - mse: 4860.0620
90/100 [==========================>...] - ETA: 0s - loss: 7377.6898 - mse: 7377.6904
100/100 [==============================] - 0s 1ms/step - loss: 6911.2499 - mse: 6911.2500
Epoch 5/10
10/100 [==>...........................] - ETA: 0s - loss: 1996.5687 - mse: 1996.5687
60/100 [=================>............] - ETA: 0s - loss: 6902.6661 - mse: 6902.6660
70/100 [====================>.........] - ETA: 0s - loss: 6162.6467 - mse: 6162.6470
90/100 [==========================>...] - ETA: 0s - loss: 6195.4129 - mse: 6195.4131
100/100 [==============================] - 0s 4ms/step - loss: 5773.2919 - mse: 5773.2925
Epoch 6/10
10/100 [==>...........................] - ETA: 0s - loss: 3063.1946 - mse: 3063.1946
80/100 [=======================>......] - ETA: 0s - loss: 5351.2784 - mse: 5351.2793
90/100 [==========================>...] - ETA: 0s - loss: 5100.9203 - mse: 5100.9214
100/100 [==============================] - 0s 2ms/step - loss: 4755.7785 - mse: 4755.7793
Epoch 7/10
10/100 [==>...........................] - ETA: 0s - loss: 3710.8032 - mse: 3710.8032
70/100 [====================>.........] - ETA: 0s - loss: 4607.9606 - mse: 4607.9609
100/100 [==============================] - 0s 943us/step - loss: 3847.0730 - mse: 3847.0732
Epoch 8/10
10/100 [==>...........................] - ETA: 0s - loss: 1742.0632 - mse: 1742.0632
30/100 [========>.....................] - ETA: 0s - loss: 2304.5816 - mse: 2304.5818
100/100 [==============================] - 0s 1ms/step - loss: 3109.5293 - mse: 3109.5293
Epoch 9/10
10/100 [==>...........................] - ETA: 0s - loss: 2027.7537 - mse: 2027.7537
100/100 [==============================] - 0s 574us/step - loss: 2537.4794 - mse: 2537.4795
Epoch 10/10
10/100 [==>...........................] - ETA: 0s - loss: 2966.5125 - mse: 2966.5125
100/100 [==============================] - 0s 177us/step - loss: 2191.3686 - mse: 2191.3687
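For what it's worth, the epoch-end training losses do trend downward, so the network is fitting something during training even though predictions collapse to a single value. A tiny sketch (values hard-coded from the log above) confirms the decrease is monotonic across all ten epochs:

```python
# Epoch-end training losses copied from the log above
losses = [13771.8389, 10520.1019, 8500.4698, 6911.2499,
          5773.2919, 4755.7785, 3847.0730, 3109.5293,
          2537.4794, 2191.3686]

# Check each epoch improved on the previous one
print(all(b < a for a, b in zip(losses, losses[1:])))  # True
```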