How do I use fread in R on *.csv files exported from BigQuery? - PullRequest
0 votes
/ April 17, 2019

I exported a very large dataset from Google BigQuery:

  1. I saved my query result to a (new) BigQuery table
  2. then exported that table as sharded *.csv files (gzip-compressed) to a bucket in GCS
  3. finally downloaded those files locally using gsutil -m cp -R gs://bucket-name.
  4. ... and now I want to read these *.csv files into R (RStudio)!
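Once the shards are downloaded, they can be collected in R for the steps below (a minimal sketch; the directory path is a placeholder, not from the question):

```r
# BigQuery names sharded exports 000000000000.csv, 000000000001.csv, ...
download_dir <- "path/to/download_dir"  # hypothetical local directory
all_csvs <- list.files(download_dir, pattern = "\\.csv$", full.names = TRUE)
```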

This works when I use read.csv:

tmp_file <- read.csv(path_to_csv_file)
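Looping over all shards with base R looks roughly like this (a sketch; `all_csvs` is an assumed vector of the downloaded shard paths):

```r
# Read each shard and stack them; correct, but slow for hundreds of MB
tabs <- lapply(all_csvs, read.csv, stringsAsFactors = FALSE)
dat  <- do.call(rbind, tabs)
```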

Unfortunately, as we all know, this is very slow - so I want(ed) to use fread():

tmp_file <- fread(path_to_csv_file, verbose = TRUE)

But then it fails! The error message:

omp_get_num_procs()==12
R_DATATABLE_NUM_PROCS_PERCENT=="" (default 50)
R_DATATABLE_NUM_THREADS==""
omp_get_thread_limit()==2147483647
omp_get_max_threads()==12
OMP_THREAD_LIMIT==""
OMP_NUM_THREADS==""
data.table is using 6 threads. This is set on startup, and by setDTthreads(). See ?setDTthreads.
RestoreAfterFork==true
Input contains no \n. Taking this to be a filename to open
[01] Check arguments
  Using 6 threads (omp_get_max_threads()=12, nth=6)
  NAstrings = [<<NA>>]
  None of the NAstrings look like numbers.
  show progress = 1
  0/1 column will be read as integer
[02] Opening the file
  Opening file /000000000007.csv
  File opened, size = 377.0MB (395347735 bytes).
  Memory mapped ok
[03] Detect and skip BOM
[04] Arrange mmap to be \0 terminated
  \n has been found in the input and different lines can end with different line endings (e.g. mixed \n and \r\n in one file). This is common and ideal.
  File ends abruptly with 'O'. Final end-of-line is missing. Using cow page to write 0 to the last byte.
[05] Skipping initial rows if needed
  Positioned on line 1 starting: <<>>
[06] Detect separator, quoting rule, and ncolumns
  Detecting sep automatically ...
  No sep and quote rule found a block of 2x2 or greater. Single column input.
  Detected 1 columns on line 1. This line is either column names or first data row. Line starts as: <<>>
  Quote rule picked = 0
  fill=false and the most number of columns found is 1
[07] Detect column types, good nrow estimate and whether first row is column names
  Number of sampling jump points = 100 because (395347735 bytes from row 1 to eof) / (2 * 3 jump0size) == 65891289
  Type codes (jump 000)    : 2  Quote rule 0
  A line with too-many fields (1/1) was found on line 4 of sample jump 2. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 2 of sample jump 4. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 2 of sample jump 7. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 10. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 12. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 14. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 2 of sample jump 16. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 18. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 2 of sample jump 20. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 23. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 25. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 3 of sample jump 28. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 4 of sample jump 30. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 33. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 41. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 3 of sample jump 48. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 4 of sample jump 57. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 58. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 59. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 65. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 2 of sample jump 69. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 5 of sample jump 70. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 2 of sample jump 72. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 74. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 2 of sample jump 75. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 79. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 80. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 83. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 85. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 86. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 3 of sample jump 89. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 94. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 96. Most likely this jump landed awkwardly so type bumps here will be skipped.
  A line with too-many fields (1/1) was found on line 1 of sample jump 98. Most likely this jump landed awkwardly so type bumps here will be skipped.
  'header' determined to be true due to column 1 containing a string on row 1 and a lower type (bool8) in the rest of the 6626 sample rows
  =====
  Sampled 6626 rows (handled \n inside quoted fields) at 101 jump points
  Bytes from first data row on line 2 to the end of last row: 395347732
  Line length: mean=1.30 sd=17.01 min=0 max=639
  Estimated number of rows: 395347732 / 1.30 = 304460027
  Initial alloc = 334906029 rows (304460027 + 9%) using bytes/max(mean-2*sd,min) clamped between [1.1*estn, 2.0*estn]
  =====
[08] Assign column names
[09] Apply user overrides on column types
  After 0 type and 0 drop user overrides : 2
[10] Allocate memory for the datatable
  Allocating 1 column slots (1 - 0 dropped) with 334906029 rows
[11] Read the data
  jumps=[0..378), chunk_size=1045893, total_size=395347732
Error in fread(all_csvs[i], integer64 = "character", verbose = TRUE) : 
  Internal error: invalid head position. jump=1, headPos=0000000188EA0003, thisJumpStart=0000000188F9F5EA, sof=0000000188EA0000

When I open one of the *.csv files, it looks like hex-encoded binary (if that helps). (How) can I use fread for this task - or is there a (fast) alternative for importing these *.csv files (compared to read.csv)?
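The "hex" appearance suggests the shards may still be gzip-compressed despite the .csv name, which would also explain fread failing on memory-mapped compressed bytes. One way to check, with a possible workaround via fread's `cmd` argument, assuming a Unix-like system with gzip on the PATH (a sketch, not a confirmed diagnosis):

```r
# gzip files start with the magic bytes 0x1f 0x8b
con   <- file(path_to_csv_file, "rb")
magic <- readBin(con, "raw", n = 2)
close(con)

if (identical(magic, as.raw(c(0x1f, 0x8b)))) {
  # fread can read from a decompressing shell command instead of a file
  tmp_file <- data.table::fread(cmd = paste("gzip -dc", shQuote(path_to_csv_file)))
}
```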

Best regards, David

1 Answer

0 votes
/ May 10, 2019

The recently released vroom package handles this much better. vroom does not read the whole file at once; it uses the ALTREP framework to load data lazily. It also uses multiple threads for indexing, for materializing non-character columns, and when writing, to further improve performance.

See the vroom benchmarks for a comparison. It can read files at up to 900 MB/sec.

vroom uses the same interface as readr for specifying column types.

vroom::vroom("mtcars.tsv",
  col_types = list(cyl = "i", gear = "f", hp = "i", disp = "_",
                   drat = "_", vs = "l", am = "l", carb = "i")
)
#> # A tibble: 32 x 10
#>   model           mpg   cyl    hp    wt  qsec vs    am    gear   carb
#>   <chr>         <dbl> <int> <int> <dbl> <dbl> <lgl> <lgl> <fct> <int>
#> 1 Mazda RX4      21       6   110  2.62  16.5 FALSE TRUE  4         4
#> 2 Mazda RX4 Wag  21       6   110  2.88  17.0 FALSE TRUE  4         4
#> 3 Datsun 710     22.8     4    93  2.32  18.6 TRUE  TRUE  4         1
#> # … with 29 more rows
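For the sharded export in the question, vroom also accepts a vector of file paths and row-binds the shards in one call, since they all share the same header (a sketch; `all_csvs` is the assumed vector of downloaded shard paths):

```r
library(vroom)

# One call reads and stacks every shard, guessing column types from a sample
all_csvs <- list.files("path/to/download_dir", pattern = "\\.csv$", full.names = TRUE)
dat <- vroom(all_csvs)
```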