I'm trying to train a neural network on binary labels. Here is the class distribution of my data:
Train: {'0': 126315, '1': 2915}
Val : {'0': 31579, '1': 729}
Test : {'0': 27864, '1': 643}
Here are my class weights:
tensor([7.9167e-06, 3.4305e-04], device='cuda:0')
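These values appear to be inverse class frequencies on the train split (1/126315 and 1/2915). A minimal sketch of how such weights can be computed, assuming that is indeed the scheme used:

```python
import torch

# Class counts from the train split shown above.
train_counts = {'0': 126315, '1': 2915}

# Inverse-frequency weights: rarer class gets a larger weight.
class_weights = 1.0 / torch.tensor(
    [train_counts['0'], train_counts['1']], dtype=torch.float
)
print(class_weights)  # tensor([7.9167e-06, 3.4305e-04])
```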
I get the following warnings:
C:\Users\___\venv\lib\site-packages\torch\utils\data\sampler.py:115: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
self.weights = torch.tensor(weights, dtype=torch.double)
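This warning fires because `WeightedRandomSampler` internally calls `torch.tensor(weights)`, which copy-constructs when `weights` is already a tensor. One way to silence it is to pass a plain Python list instead of a tensor. A minimal sketch (the labels below are hypothetical; in the real code the per-sample weights come from the train set):

```python
import torch
from torch.utils.data import WeightedRandomSampler

class_weights = torch.tensor([7.9167e-06, 3.4305e-04])

# Hypothetical label vector for illustration.
labels = torch.tensor([0, 1, 0, 0, 1])
sample_weights = class_weights[labels]

# Passing a list avoids the copy-construct warning, since
# torch.tensor(list) is the intended usage inside the sampler.
sampler = WeightedRandomSampler(
    weights=sample_weights.tolist(),
    num_samples=len(sample_weights),
    replacement=True,
)
```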
NN_FullyConnected(
(layer_1): Linear(in_features=611, out_features=512, bias=True)
(layer_2): Linear(in_features=512, out_features=256, bias=True)
(layer_3): Linear(in_features=256, out_features=128, bias=True)
(layer_4): Linear(in_features=128, out_features=64, bias=True)
(layer_5): Linear(in_features=64, out_features=32, bias=True)
(layer_out): Linear(in_features=32, out_features=2, bias=True)
(relu): ReLU()
(dropout): Dropout(p=0.45)
(batchnorm1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm2): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm3): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm4): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(batchnorm5): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
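For context, the printed summary above corresponds roughly to the following stack. This is a hypothetical reconstruction (the actual forward order of Linear/BatchNorm/ReLU/Dropout in my code may differ):

```python
import torch
import torch.nn as nn

class NN_FullyConnected(nn.Module):
    """Sketch rebuilt from the printed module summary."""

    def __init__(self):
        super().__init__()
        sizes = [611, 512, 256, 128, 64, 32]
        self.blocks = nn.ModuleList()
        for i in range(len(sizes) - 1):
            # Assumed per-layer order: Linear -> BatchNorm -> ReLU -> Dropout.
            self.blocks.append(nn.Sequential(
                nn.Linear(sizes[i], sizes[i + 1]),
                nn.BatchNorm1d(sizes[i + 1]),
                nn.ReLU(),
                nn.Dropout(p=0.45),
            ))
        self.layer_out = nn.Linear(32, 2)

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return self.layer_out(x)
```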
Begin training.
C:\Users\__\PycharmProjects\myproject\network_creator_nn.py:50: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
y = nn.LogSoftmax()(x)
C:\Users\__\PycharmProjects\__\venv\lib\site-packages\sklearn\metrics\classification.py:1145: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no true samples.
'recall', 'true', average, warn_for)
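The `log_softmax` deprecation warning can be addressed by passing the dimension explicitly, as the message suggests. A minimal sketch with a hypothetical batch of logits:

```python
import torch
import torch.nn as nn

x = torch.randn(4, 2)  # hypothetical batch of raw logits

# Deprecated: nn.LogSoftmax()(x) with an implicit dimension.
# Explicit-dim version recommended by the warning:
y = nn.LogSoftmax(dim=1)(x)

# Equivalent functional form: torch.log_softmax(x, dim=1)
```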
Epoch 000: | Train Loss: 0.66666 | Val Loss: 0.68134 | Train Acc: 0.835| Val Acc: 0.498 | Train F_score: 4.892| Val F_score: 7.191
2020-07-14 08:47:07
Epoch 001: | Train Loss: 0.48715 | Val Loss: 0.73657 | Train Acc: 0.850| Val Acc: 0.040 | Train F_score: 18.034| Val F_score: 4.398
Traceback (most recent call last):
  File "C:/Users/__/PycharmProjects/__/complete_proccess.py", line 239, in <module>
    val_epoch_f_score += val_f_score.item()
AttributeError: 'float' object has no attribute 'item'
Why do I get this error after two epochs? My F1 scoring function:
def binary_f1_score(y_pred, y_test):
    y_pred_tag = torch.log_softmax(y_pred, dim=1)
    _, y_pred_tags = torch.max(y_pred_tag, dim=1)
    f_result = f1_score(y_test.cpu().numpy(), y_pred_tags.cpu().numpy())
    f_result = f_result * 100
    return f_result
If I pass `average='macro'` to `f1_score`, it works fine, but I don't understand why. Thanks.
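For reference, this is the variant of the scoring function with the explicit `average='macro'` argument, which runs without the error (the tensors in the example are hypothetical stand-ins for real batches):

```python
import torch
from sklearn.metrics import f1_score

def binary_f1_score(y_pred, y_test):
    # Convert logits to predicted class indices.
    y_pred_tag = torch.log_softmax(y_pred, dim=1)
    _, y_pred_tags = torch.max(y_pred_tag, dim=1)
    # 'macro' averages per-class F1 scores, so the result is
    # defined even when one class is absent from the batch.
    f_result = f1_score(y_test.cpu().numpy(), y_pred_tags.cpu().numpy(),
                        average='macro')
    return f_result * 100
```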