My slurmctld does not save jobs that are still queued when it exits (via Ctrl+C).
I submit about 1000 jobs to it, exit (Ctrl+C), and on restart it reports every job (754 in this example) as defunct and purges it:
slurmctld: Purged files for defunct batch JobId=754
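The reproduction is just a bulk submission followed by a Ctrl+C of the foreground controller. A minimal sketch of the submission step (the `--wrap 'sleep 60'` job body is hypothetical; any trivial batch job shows the same behavior):

```shell
#!/bin/sh
# Submit ~1000 trivial jobs, then Ctrl+C the foreground slurmctld (-D).
if command -v sbatch >/dev/null 2>&1; then
    for i in $(seq 1 1000); do
        sbatch --wrap 'sleep 60'    # each job just sleeps for a minute
    done
    msg="submitted"
else
    # Not on the Slurm head node: nothing to submit.
    msg="sbatch not found; run this on the Slurm head node"
    echo "$msg"
fi
```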
Here is the standard output on exit:
slurmctld: _job_complete: JobId=22 WEXITSTATUS 0
slurmctld: _job_complete: JobId=22 done
^Cslurmctld: Terminate signal (SIGINT or SIGTERM) received
slurmctld: Saving all slurm state
slurmctld: layouts: all layouts are now unloaded.
Here is the standard output when restarting the service:
jonathan@jonathan-ubuntudesktop:~$ sudo slurmctld -Dcv
slurmctld: slurmctld version 18.08.3 started on cluster jonathan-inspiron-13-7378
slurmctld: Munge cryptographic signature plugin loaded
slurmctld: Consumable Resources (CR) Node Selection plugin loaded with argument 4
slurmctld: preempt/none loaded
slurmctld: ExtSensors NONE plugin loaded
slurmctld: Accounting storage NOT INVOKED plugin loaded
slurmctld: No memory enforcing mechanism configured.
slurmctld: layouts: no layout to initialize
slurmctld: topology NONE plugin loaded
slurmctld: sched: Backfill scheduler plugin loaded
slurmctld: route default plugin loaded
slurmctld: layouts: loading entities/relations information
slurmctld: cons_res: select_p_node_init
slurmctld: cons_res: preparing for 1 partitions
slurmctld: Purged files for defunct batch JobId=1183
slurmctld: Purged files for defunct batch JobId=1023
...
slurmctld: Purged files for defunct batch JobId=1384
slurmctld: Recovered state of 0 reservations
slurmctld: _preserve_plugins: backup_controller not specified
slurmctld: cons_res: select_p_reconfigure
slurmctld: cons_res: select_p_node_init
slurmctld: cons_res: preparing for 1 partitions
slurmctld: Running as primary controller
slurmctld: No parameter for mcs plugin, default values set
slurmctld: mcs: MCSParameters = (null). ondemand set.
slurmctld: job_complete: invalid JobId=986
slurmctld: job_complete: invalid JobId=988
slurmctld: job_complete: invalid JobId=989
slurmctld: job_complete: invalid JobId=987
slurm.conf:
ControlAddr=192.168.1.2
AuthType=auth/munge
CryptoType=crypto/munge
MaxJobCount=1000000
MpiDefault=none
ProctrackType=proctrack/pgid
ReturnToService=1
SlurmctldPidFile=/home/jonathan/Documents/COMPANYNAME/slurmctl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/home/jonathan/Documents/COMPANYNAME/slurmctl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/home/jonathan/Documents/COMPANYNAME/slurmctl/save_state/slurmd
SlurmUser=jonathan
SlurmdUser=jonathan
StateSaveLocation=/home/jonathan/Documents/COMPANYNAME/slurmctl/save_state
SwitchType=switch/none
TaskPlugin=task/none
TaskPluginParam=Sched
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
FastSchedule=1
SchedulerType=sched/backfill
SelectType=select/cons_res
SelectTypeParameters=CR_Core
SchedulerPort=7321
AccountingStorageType=accounting_storage/none
AccountingStoreJobComment=YES
ClusterName=jonathan-Inspiron-13-7378
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmdDebug=3
NodeName=jonathan-Inspiron-13-7378 NodeAddr=192.168.1.4 CPUs=4 State=UNKNOWN
PartitionName=Grid0 Nodes=jonathan-Inspiron-13-7378 Default=YES MaxTime=INFINITE State=UP
"/ home / jonathan / Documents / COMPANYNAME / slurmctl / save_state" Владелец jonathan: jonathan и имеет 750 прав доступа.
The Slurm 18.08.3 installation was just a basic ./configure, make, and make install.
What am I doing wrong? Thanks for the help, it is much appreciated!