# select version();
> PostgreSQL 11.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 8.3.0-3ubuntu1) 8.3.0, 64-bit
I broke archive_command for a while. As expected, PostgreSQL kept the WAL files until they were archived. Then I killed the postgres process and started it again, and I noticed that all the WAL files that had been ready for archiving were deleted. They were not in the Google bucket either.
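For reference, this is roughly how I simulate the broken archiving and confirm that segments are waiting to be archived (the failing command is only an illustration, and pg_ls_dir requires superuser):

-- make archiving fail (any command that exits non-zero will do)
ALTER SYSTEM SET archive_command = '/bin/false';
SELECT pg_reload_conf();
-- force a segment switch so a finished WAL file becomes eligible for archiving
SELECT pg_switch_wal();
-- segments still waiting to be archived are marked by *.ready status files
SELECT * FROM pg_ls_dir('pg_wal/archive_status') AS f WHERE f LIKE '%.ready';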
PostgreSQL logs after the restart:
2020-04-22 14:27:23.702 UTC [7] LOG: database system was interrupted; last known up at 2020-04-22 14:27:08 UTC
2020-04-22 14:27:24.819 UTC [7] LOG: database system was not properly shut down; automatic recovery in progress
2020-04-22 14:27:24.848 UTC [7] LOG: redo starts at 4D/BCEF6BA8
2020-04-22 14:27:24.848 UTC [7] LOG: invalid record length at 4D/BCEFF0C0: wanted 24, got 0
2020-04-22 14:27:24.848 UTC [7] LOG: redo done at 4D/BCEFF050
2020-04-22 14:27:25.286 UTC [1] LOG: database system is ready to accept connections
I repeated the scenario with log_min_messages=DEBUG5 in the config (how I set it is sketched after the log) and saw that PostgreSQL removed the old WAL files, ignoring the fact that they were still waiting to be archived:
2020-04-23 14:55:42.819 UTC [6] LOG: redo starts at 0/22000098
2020-04-23 14:55:50.138 UTC [6] LOG: redo done at 0/22193FB0
2020-04-23 14:55:50.138 UTC [6] DEBUG: resetting unlogged relations: cleanup 0 init 1
2020-04-23 14:55:50.266 UTC [6] DEBUG: performing replication slot checkpoint
2020-04-23 14:55:50.336 UTC [6] DEBUG: attempting to remove WAL segments older than log file 000000000000000000000021
2020-04-23 14:55:50.349 UTC [6] DEBUG: recycled write-ahead log file "000000010000000000000015"
2020-04-23 14:55:50.365 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000012"
2020-04-23 14:55:50.372 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001B"
2020-04-23 14:55:50.382 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001E"
2020-04-23 14:55:50.390 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000013"
2020-04-23 14:55:50.402 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000014"
2020-04-23 14:55:50.412 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001D"
2020-04-23 14:55:50.424 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001C"
2020-04-23 14:55:50.433 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000000F"
2020-04-23 14:55:50.442 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001F"
2020-04-23 14:55:50.455 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001A"
2020-04-23 14:55:50.471 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000020"
2020-04-23 14:55:50.480 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000018"
2020-04-23 14:55:50.489 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000011"
2020-04-23 14:55:50.502 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000016"
2020-04-23 14:55:50.518 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000017"
2020-04-23 14:55:50.529 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000010"
2020-04-23 14:55:50.536 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000019"
2020-04-23 14:55:50.547 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000021"
2020-04-23 14:55:50.559 UTC [6] DEBUG: MultiXactId wrap limit is 2147483648, limited by database with OID 1
2020-04-23 14:55:50.559 UTC [6] DEBUG: MultiXact member stop limit is now 4294914944 based on MultiXact 1
2020-04-23 14:55:50.566 UTC [6] DEBUG: shmem_exit(0): 1 before_shmem_exit callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: shmem_exit(0): 4 on_shmem_exit callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: proc_exit(0): 2 callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: exit(0)
2020-04-23 14:55:50.566 UTC [6] DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: proc_exit(-1): 0 callbacks to make
2020-04-23 14:55:50.571 UTC [1] DEBUG: reaping dead processes
2020-04-23 14:55:50.572 UTC [10] DEBUG: autovacuum launcher started
2020-04-23 14:55:50.572 UTC [1] DEBUG: starting background worker process "logical replication launcher"
2020-04-23 14:55:50.572 UTC [10] DEBUG: InitPostgres
2020-04-23 14:55:50.572 UTC [10] DEBUG: my backend ID is 1
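For completeness, enabling the verbose logging only needs a reload, and querying pg_stat_archiver is one way to confirm before the restart that the archiver keeps failing (just a sketch, not part of the original setup):

-- DEBUG5 is extremely verbose; log_min_messages can be changed with a reload
ALTER SYSTEM SET log_min_messages = 'debug5';
SELECT pg_reload_conf();
-- failed_count and last_failed_wal keep growing while archive_command is broken
SELECT archived_count, last_archived_wal, failed_count, last_failed_wal, last_failed_time
FROM pg_stat_archiver;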
Is there a way to prevent PostgreSQL from deleting WAL that has not yet been archived?