FastAI throws a runtime error when using custom train and test sets

I'm working with the Food-101 dataset and, as you may know, it ships with both a train part and a test part. Since the dataset can no longer be found at the ETH Zurich link, I had to split it into parts of under 1 GB each, clone them into Colab, and reassemble them there. It was very tedious work, but I got it done. I'll spare you my actual Python code (a purely illustrative sketch of the reassembly step follows the tree below); the file structure ends up looking like this:

Food-101
      images
            train
               ...75750 train images
            test
               ...25250 test images
      meta
            classes.txt
            labels.txt
            test.json
            test.txt
            train.json
            train.txt
      README.txt
      license_agreement.txt
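For completeness, here is a rough, purely illustrative sketch of the reassembly step. This is not my actual code: the part names are hypothetical and assume the archive was split into chunks with something like the Unix split command before being cloned into the Colab working directory.

# Illustrative only -- not my actual code. Assumes the archive was split into
# <1 GB chunks (e.g. with `split`) and the parts sit in the working directory;
# all file names here are hypothetical.
import glob
import shutil
import tarfile

parts = sorted(glob.glob('food-101.tar.gz.part*'))     # hypothetical chunk names
with open('food-101.tar.gz', 'wb') as out:
    for part in parts:
        with open(part, 'rb') as chunk:
            shutil.copyfileobj(chunk, out)              # stitch the chunks back together in order

with tarfile.open('food-101.tar.gz', 'r:gz') as tar:
    tar.extractall('..')                                # rearranging images/ into train/ and test/ is not shown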

The following code is what produces the runtime error:

from fastai.vision import *   # fastai v1: ImageDataBunch, get_image_files, get_transforms, imagenet_stats
from pathlib import Path

train_image_path = Path('images/train/')
test_image_path = Path('images/test/')
path = Path('../Food-101')

food_names = get_image_files(train_image_path)

file_parse = r'/([^/]+)_\d+\.(png|jpg|jpeg)'  # regex for pulling labels out of file names (unused in this snippet)

data = ImageDataBunch.from_folder(train_image_path, test_image_path, valid_pct=0.2, ds_tfms=get_transforms(), size=224)
data.normalize(imagenet_stats)

My guess is that ImageDataBunch.from_folder() is what throws the error, but I don't understand why it trips over data types, since (as far as I can tell) I'm not feeding it any data of that type.
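For reference, my understanding of the fastai v1 signature is ImageDataBunch.from_folder(path, train='train', valid='valid', valid_pct=None, ...), i.e. the first argument is a dataset root and the following positional parameters are subfolder names, which would mean the test_image_path in my call is being read as the train folder name. Below is a minimal sketch of that calling convention; it assumes images/train/ holds one subfolder per class (which from_folder needs for labelling) and is only how I read the docs, not a confirmed cause of the error.

# Sketch of what I *think* the intended call looks like in fastai v1 -- not a verified fix.
# from_folder's second positional parameter is the *name* of the train subfolder, so in my
# call above test_image_path is being interpreted as that name rather than as a test set.
data = ImageDataBunch.from_folder(train_image_path,          # root assumed to contain one subfolder per class
                                  valid_pct=0.2,             # random 20% validation split
                                  ds_tfms=get_transforms(),
                                  size=224)
data.normalize(imagenet_stats)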

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2854: UserWarning: The default behavior for interpolate/upsample with float scale_factor will change in 1.6.0 to align with other frameworks/libraries, and use scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. 
  warnings.warn("The default behavior for interpolate/upsample with float scale_factor will change "
(the same UserWarning is printed several more times)
You can deactivate this warning by passing `no_check=True`.
/usr/local/lib/python3.6/dist-packages/fastai/basic_data.py:262: UserWarning: There seems to be something wrong with your dataset, for example, in the first batch can't access these elements in self.train_ds: 9600,37233,16116,38249,1826...
  warn(warn_msg)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/IPython/core/formatters.py in __call__(self, obj)
    697                 type_pprinters=self.type_printers,
    698                 deferred_pprinters=self.deferred_printers)
--> 699             printer.pretty(obj)
    700             printer.flush()
    701             return stream.getvalue()

11 frames
/usr/local/lib/python3.6/dist-packages/fastai/vision/image.py in affine(self, func, *args, **kwargs)
    181         "Equivalent to `image.affine_mat = image.affine_mat @ func()`."
    182         m = tensor(func(*args, **kwargs)).to(self.device)
--> 183         self.affine_mat = self.affine_mat @ m
    184         return self
    185 

RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #3 'mat2' in call to _th_addmm_out
...