Slurm User Guide for Great Lakes
Slurm is a combined batch scheduler and resource manager that allows users to run their jobs on the University of Michigan’s high performance computing (HPC) clusters. This document describes the process for submitting and running jobs under the Slurm Workload Manager on the Great Lakes cluster.
The Batch Scheduler and Resource Manager
The batch scheduler and resource manager work together to run jobs on an HPC cluster. The batch scheduler, sometimes called a workload manager, is responsible for finding and allocating the resources that fulfill the job’s request at the soonest available time. When a job is scheduled to run, the scheduler instructs the resource manager to launch the application(s) across the job’s allocated resources. This is also known as “running the job”.
The user can specify conditions for scheduling the job. One condition is the completion (successful or unsuccessful) of an earlier submitted job. Other conditions include the availability of a specific license or access to a specific hardware accelerator.
Computing Resources
An HPC cluster is made up of a number of compute nodes, each with a complement of processors, memory and GPUs. The user submits jobs that specify the application(s) they want to run along with a description of the computing resources needed to run the application(s).
Login Resources
Users interact with an HPC cluster through login nodes. Login nodes are a place where users can login, edit files, view job results and submit new jobs. Login nodes are a shared resource and should not be used to run application workloads.
Jobs and Job Steps
A job is an allocation of resources assigned to an individual user for a specified amount of time. Job steps are sets of (possibly parallel) tasks within a job. When a job runs, the scheduler selects and allocates resources to the job. The invocation of the application happens within the batch script, or at the command line for interactive jobs.
When an application is launched using srun, it runs within a “job step”. The srun command causes the simultaneous launching of multiple tasks of a single application. Arguments to srun specify the number of tasks to launch as well as the resources (nodes, CPUs, memory, and GPUs) on which to launch the tasks.
Multiple srun commands can be invoked sequentially, or in parallel by backgrounding them. Furthermore, the resources specified on any srun command can be less than the total resources that were allocated to the job, but the total resources of all concurrently executing srun commands cannot exceed that total.
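For example, a minimal sketch of a batch script that runs two job steps concurrently within a single allocation (the application names app_a and app_b are placeholders):

#!/bin/bash
#SBATCH --job-name=two_steps
#SBATCH --account=test
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=30:00

# Two concurrent job steps, each using 2 of the 4 allocated tasks
srun --ntasks=2 ./app_a &
srun --ntasks=2 ./app_b &
wait    # do not exit the batch script until both job steps finish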
Batch Jobs
The sbatch command is used to submit a batch script to Slurm. It is designed to reject the job at submission time if the request contains options or constraints that Slurm cannot fulfill as specified. This gives the user the opportunity to examine the job request and resubmit it with the necessary corrections. To submit a batch script, simply run sbatch <scriptName>:
$ sbatch myJob.sh
Submitting a Job in One Line
If you wish to submit a job without needing a separate script, you can use sbatch --wrap=<command string>. This will wrap the specified command in a simple “sh” shell script, which is then submitted to the Slurm controller.
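For example, a one-line submission that runs hostname under the illustrative account test:

$ sbatch --job-name=quick_test --account=test --time=5:00 --wrap="hostname"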
Anatomy of a Batch Job
The batch job script is composed of three main components:
- The interpreter used to execute the script
- #SBATCH directives that convey submission options
- The application(s) to execute along with its input arguments and options
An Example Slurm job
#!/bin/bash
# The interpreter used to execute the script

# "#SBATCH" directives that convey submission options:

#SBATCH --job-name=example_job
#SBATCH --mail-user=uniqname@umich.edu
#SBATCH --mail-type=BEGIN,END
#SBATCH --cpus-per-task=1
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem-per-cpu=1000m
#SBATCH --time=10:00
#SBATCH --account=test
#SBATCH --partition=standard
#SBATCH --output=/home/%u/%x-%j.log

# The application(s) to execute along with its input arguments and options:

/bin/hostname
sleep 60
Common Job Submission Options
Option | Slurm Command (#SBATCH) | Great Lakes Usage |
Job name | --job-name=<name> | --job-name=gljob1 |
Account | --account=<account> | --account=test |
Queue | --partition=<name> | --partition=<partitionname> Available partitions: standard (default), gpu (GPU jobs only), largemem (large memory jobs only), viz, debug, standard-oc (on-campus software only) |
Wall time limit | --time=<dd-hh:mm:ss> | --time=01-02:00:00 |
Node count | --nodes=<count> | --nodes=2 |
Process count per node | --ntasks-per-node=<count> | --ntasks-per-node=1 |
Core count (per process) | --cpus-per-task=<cores> | --cpus-per-task=1 |
Memory limit | --mem=<limit> (Memory per node in MB) | --mem=12000m If not set, the scheduler defaults will be applied |
Minimum memory per processor | --mem-per-cpu=<memory> | --mem-per-cpu=1000m If not set, the scheduler defaults will be applied |
Request GPUs | --gres=gpu:<count> | --gres=gpu:2 |
Process count per GPU | --ntasks-per-gpu=<count> Must be used with --ntasks or --gres=gpu: | --ntasks-per-gpu=2 --gres=gpu:4 (8 total tasks) |
Job array | --array=<array indices> | --array=0-15 |
Standard output file | --output=<file path> (path must exist) | --output=/home/%u/%x-%j.log %u = username %x = job name %j = job ID |
Standard error file | --error=<file path> (path must exist) | --error=/home/%u/error-%x-%j.log |
Combine stdout/stderr to stdout | --output=<combined out and err file path> | --output=/home/%u/%x-%j.log |
Copy environment | --export=ALL (default) or --export=NONE (to not export environment) | --export=ALL |
Copy environment variable | --export=<variable=value,var2=val2> | --export=EDITOR=/bin/vim |
Job dependency | --dependency=after:jobID[:jobID...] --dependency=afterok:jobID[:jobID...] --dependency=afternotok:jobID[:jobID...] --dependency=afterany:jobID[:jobID...] | --dependency=after:1234[:1233] |
Request software license(s) | --licenses=<application>@slurmdb:<N> | --licenses=stata@slurmdb:1 |
Request event notification | --mail-type=<events> Note: multiple mail-type requests may be specified in a comma separated list: --mail-type=BEGIN,END,NONE,FAIL,REQUEUE,ARRAY_TASKS Note: available mail-type options are: NONE, BEGIN, END, FAIL, REQUEUE, ALL, INVALID_DEPEND, STAGE_OUT, TIME_LIMIT, TIME_LIMIT_90, TIME_LIMIT_80, TIME_LIMIT_50, ARRAY_TASKS | --mail-type=BEGIN,END,FAIL |
Email address | --mail-user=<email address> | --mail-user=uniqname@umich.edu |
Defer job until the specified time | --begin=<date/time> | --begin=2020-12-25T12:30:00 |
Please note that if your job uses more than one node, your code must be MPI-enabled in order to run across these nodes. You can use srun, mpirun, or mpiexec to start your MPI job.
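As an illustrative sketch only (the executable name my_mpi_program is a placeholder, and you may need to load the appropriate compiler and MPI modules for your code), a two-node MPI batch script could look like:

#!/bin/bash
#SBATCH --job-name=mpi_example
#SBATCH --account=test
#SBATCH --partition=standard
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --mem-per-cpu=1g
#SBATCH --time=01:00:00

# srun launches one MPI rank per task across both allocated nodes
srun ./my_mpi_program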
Interactive Jobs
An interactive job is a job that returns a command line prompt (instead of running a script) when the job runs. Interactive jobs are useful when debugging or interacting with an application. The salloc command is used to submit an interactive job to Slurm. When the job starts, a command line prompt will appear on one of the compute nodes assigned to the job. From here commands can be executed using the resources allocated on the local node.
[user@gl-login1 ~]$ salloc --account=test
salloc: Granted job allocation 23764789
salloc: Waiting for resource configuration
salloc: Nodes gl3032 are ready for job
[user@gl3032 ~ 3]$ hostname
gl3032.arc-ts.umich.edu
[user@gl3032 ~]$
Jobs submitted with salloc will be assigned the cluster default values of 1 CPU and 768 MB of memory. If the account is not specified, your default account will be used. If additional resources are required, they can be requested as options to the salloc command. The following example job is assigned 2 nodes with 4 CPUs and 4 GB of memory each:
[user@gl-login1 ~ 1]$ salloc --account=test --nodes=2 --ntasks-per-node=4 --mem-per-cpu=1GB --cpus-per-task=1
salloc: Granted job allocation 23765418
salloc: Waiting for resource configuration
salloc: Nodes gl[3041-3042] are ready for job
[user@gl3041 ~ 3]$ srun hostname
gl3041.arc-ts.umich.edu
gl3041.arc-ts.umich.edu
gl3041.arc-ts.umich.edu
gl3041.arc-ts.umich.edu
gl3042.arc-ts.umich.edu
gl3042.arc-ts.umich.edu
gl3042.arc-ts.umich.edu
gl3042.arc-ts.umich.edu
In the above example srun is used within the job from the first compute node to run a command once for every task in the job on the assigned resources. srun can be used to run on a subset of the resources assigned to the job. See the srun man page for more details.
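For example, within the allocation above, a command could be limited to a subset of the allocated resources (here, two tasks on a single node):

[user@gl3041 ~]$ srun --nodes=1 --ntasks=2 hostname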
GPU and Large Memory Jobs
Jobs can request GPUs with the job submission options --partition=gpu and a count option from the table below. All counts can be represented by gputype:number or just a number (default type will be used). Available GPU types can be found with the command sinfo -O gres -p <partition>. GPUs can be requested in both Batch and Interactive jobs.
Description | Slurm directive (#SBATCH or srun option) | Example |
GPUs per node | --gpus-per-node=<gputype:number> | --gpus-per-node=2 or --gpus-per-node=v100:2 |
GPUs per job | --gpus=<gputype:number> | --gpus=2 or --gpus=v100:2 |
GPUs per socket | --gpus-per-socket=<gputype:number> | --gpus-per-socket=2 or --gpus-per-socket=v100:2 |
GPUs per task | --gpus-per-task=<gputype:number> | --gpus-per-task=2 or --gpus-per-task=v100:2 |
CPUs required per GPU | --cpus-per-gpu=<number> | --cpus-per-gpu=4 |
Memory per GPU | --mem-per-gpu=<number> | --mem-per-gpu=1000m |
Jobs can request nodes with large amounts of RAM with --partition=largemem.
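As a hedged sketch of a batch GPU job (the workload shown is a placeholder; adjust the GPU type, counts, and memory to your application):

#!/bin/bash
#SBATCH --job-name=gpu_example
#SBATCH --account=test
#SBATCH --partition=gpu
#SBATCH --gpus-per-node=1
#SBATCH --cpus-per-gpu=4
#SBATCH --mem-per-gpu=16g
#SBATCH --time=02:00:00

# Placeholder workload: report the GPU(s) visible to the job
nvidia-smi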
Job Dependencies
You may want to run a set of jobs sequentially, so that the second job runs only after the first one has completed. This can be accomplished using Slurm’s job dependencies options. For example, if you have two jobs, Job1.sh and Job2.sh, you can utilize job dependencies as in the example below.
[user@gl-login1]$ sbatch Job1.sh
123213
[user@gl-login1]$ sbatch --dependency=afterany:123213 Job2.sh
123214
The flag --dependency=afterany:123213 tells the batch system to start the second job only after completion of the first job. afterany indicates that Job2 will run regardless of the exit status of Job1, i.e. regardless of whether the batch system thinks Job1 completed successfully or unsuccessfully.
Once job 123213 completes, job 123214 will be released by the batch system and then will run as the appropriate nodes become available.
Exit status: The exit status of a job is the exit status of the last command that was run in the batch script. An exit status of ‘0’ means that the batch system thinks the job completed successfully. It does not necessarily mean that all commands in the batch script completed successfully.
There are several options for the --dependency flag that depend on the status of Job1:
--dependency=afterany:Job1 | Job2 will start after Job1 completes with any exit status |
--dependency=after:Job1 | Job2 will start any time after Job1 starts |
--dependency=afterok:Job1 | Job2 will run only if Job1 completed with an exit status of 0 |
--dependency=afternotok:Job1 | Job2 will run only if Job1 completed with a non-zero exit status |
The example below makes several jobs depend on the completion of a single job:
[user@gl-login1]$ sbatch Job1.sh
13205
[user@gl-login1]$ sbatch --dependency=afterany:13205 Job2.sh
13206
[user@gl-login1]$ sbatch --dependency=afterany:13205 Job3.sh
13207
[user@gl-login1]$ squeue -u $USER -S S,i,M -o "%12i %15j %4t %30E"
JOBID        NAME            ST   DEPENDENCY
13205        Job1.bat        R
13206        Job2.bat        PD   afterany:13205
13207        Job3.bat        PD   afterany:13205
The example below makes a job depend on the completion of several other jobs:
[user@gl-login1]$ sbatch Job1.sh
13201
[user@gl-login1]$ sbatch Job2.sh
13202
[user@gl-login1]$ sbatch --dependency=afterany:13201,13202 Job3.sh
13203
[user@gl-login1]$ squeue -u $USER -S S,i,M -o "%12i %15j %4t %30E"
JOBID        NAME            ST   DEPENDENCY
13201        Job1.sh         R
13202        Job2.sh         R
13203        Job3.sh         PD   afterany:13201,afterany:13202
Chaining jobs is most easily done by submitting the second dependent job from within the first job. Example batch script:
#!/bin/bash
cd /data/mydir
run_some_command
sbatch --dependency=afterany:$SLURM_JOB_ID my_second_job
Job dependencies documentation adapted from https://hpc.nih.gov/docs/userguide.html#depend
Job Arrays
Job arrays are multiple jobs to be executed with identical parameters. Job arrays are submitted with -a <indices> or --array=<indices>. The indices specification identifies what array index values should be used. Multiple values may be specified using a comma separated list and/or a range of values with a “-” separator: --array=0-15 or --array=0,6,16-32.
A step function can also be specified with a suffix containing a colon and number. For example, --array=0-15:4 is equivalent to --array=0,4,8,12.
A maximum number of simultaneously running tasks from the job array may be specified using a “%” separator. For example --array=0-15%4 will limit the number of simultaneously running tasks from this job array to 4. The minimum index value is 0. The maximum value is 499999.
To receive mail alerts for each individual array task, --mail-type=ARRAY_TASKS should be added to the Slurm job script. Unless this option is specified, mail notifications on job BEGIN, END and FAIL apply to a job array as a whole rather than generating individual email messages for each task in the job array.
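For example, a minimal job array sketch in which each array task processes a different input file (the input file naming scheme and program name are placeholders):

#!/bin/bash
#SBATCH --job-name=array_example
#SBATCH --account=test
#SBATCH --partition=standard
#SBATCH --array=0-15
#SBATCH --time=10:00
#SBATCH --mem-per-cpu=1000m

# Each array task receives its own value of SLURM_ARRAY_TASK_ID (0 through 15 here)
./my_program input_${SLURM_ARRAY_TASK_ID}.txt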
Execution Environment
For each job type above, the user has the ability to define the execution environment. This includes environment variable definitions as well as shell limits (bash ulimit or csh limit). sbatch and salloc provide the --export option to convey specific environment variables to the execution environment. sbatch and salloc provide the --propagate option to convey specific shell limits to the execution environment. By default, Slurm does not source the files ~/.bashrc or ~/.profile when requesting resources via sbatch (although it does when running srun/salloc). So, if you have a standard environment that you have set in either of these files or your current shell, you can do one of the following:
- Add the command #SBATCH --get-user-env to your job script (i.e. the module environment is propagated).
- Source the configuration file in your job script:
< #SBATCH statements >
source ~/.bashrc
Note: You may want to remove the influence of any other current environment variables by adding #SBATCH --export=NONE to the script. This removes all set/exported variables and then acts as if #SBATCH --get-user-env has been added (module environment is propagated).
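For instance, a minimal sketch of a job script header that starts from a clean environment and then rebuilds it (my_program is a placeholder; whether you also source ~/.bashrc depends on how your environment is set up):

#!/bin/bash
#SBATCH --export=NONE        # drop variables exported from the submission shell
# ... other #SBATCH directives ...

source ~/.bashrc             # rebuild the standard environment inside the job
./my_program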
Environment Variables
Slurm recognizes and provides a number of environment variables.
The first category of environment variables are those that Slurm inserts into the job’s execution environment. These convey to the job script and application information such as job ID (SLURM_JOB_ID) and task ID (SLURM_PROCID). For the complete list, see the “OUTPUT ENVIRONMENT VARIABLES” section under the sbatch, salloc, and srun man pages.
The next category of environment variables are those the user can set in their environment to convey default options for every job they submit. These include options such as the wall clock limit. For the complete list, see the “INPUT ENVIRONMENT VARIABLES” section under the sbatch, salloc, and srun man pages.
Finally, Slurm allows the user to customize the behavior and output of some commands using environment variables. For example, one can specify certain fields for the squeue command to display by setting the SQUEUE_FORMAT variable in the environment from which you invoke squeue.
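For example, one could select an illustrative set of fields for squeue with something like:

$ export SQUEUE_FORMAT="%.10i %.12j %.8u %.2t %.10M %R"
$ squeue -u $USER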
Commonly Used Environment Variables
Info | Slurm | Notes |
Job name | $SLURM_JOB_NAME | |
Job ID | $SLURM_JOB_ID | |
Submit directory | $SLURM_SUBMIT_DIR | Slurm jobs start from the submit directory by default. |
Submit host | $SLURM_SUBMIT_HOST | |
Node list | $SLURM_JOB_NODELIST | The Slurm variable has a different format to the PBS one. To get a list of nodes use: scontrol show hostnames $SLURM_JOB_NODELIST |
Job array index | $SLURM_ARRAY_TASK_ID | |
Queue name | $SLURM_JOB_PARTITION | |
Number of nodes allocated | $SLURM_JOB_NUM_NODES or $SLURM_NNODES | |
Number of processes | $SLURM_NTASKS | |
Number of processes per node | $SLURM_TASKS_PER_NODE | |
Requested tasks per node | $SLURM_NTASKS_PER_NODE | |
Requested CPUs per task | $SLURM_CPUS_PER_TASK | |
Scheduling priority | $SLURM_PRIO_PROCESS | |
Job user | $SLURM_JOB_USER | |
Hostname | $HOSTNAME == $SLURM_SUBMIT_HOST | Unless a shell is invoked on an allocated resource, the HOSTNAME variable is propagated (copied) from the submit machine, so it will be the same on all allocated nodes. |
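As a small illustrative sketch, a batch script can record some of these variables in its output:

#!/bin/bash
#SBATCH --job-name=env_example
#SBATCH --account=test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --time=5:00

echo "Job ${SLURM_JOB_ID} (${SLURM_JOB_NAME}) submitted from ${SLURM_SUBMIT_DIR}"
echo "Allocated nodes:"
scontrol show hostnames $SLURM_JOB_NODELIST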
Job Output
Slurm merges the job’s standard error and output by default and saves it to an output file with a name that includes the job ID (slurm-<job_ID>.out for normal jobs and slurm-<job_ID>_<array_index>.out for job arrays). You can specify your own output and error files to the sbatch command using the -o /file/to/output and -e /file/to/error options respectively. If both standard out and error should go to the same file, only specify -o /file/to/output. Slurm will append the job’s output to the specified file(s). If you want the output to overwrite any existing files, add the --open-mode=truncate option. The files are written as soon as output is created; output does not spool on the compute node and then get copied to the final location after the job ends. If not specified in the job submission, standard output and error are combined and written into a file in the working directory from which the job was submitted.
For example, if I submit job 93 from my home directory, the job output and error will be written to my home directory in a file called slurm-93.out. The file appears while the job is still running.
[user@gl-login1 ~]$ sbatch test.sh
Submitted batch job 93
[user@gl-login1 ~]$ ll slurm-93.out
-rw-r--r-- 1 user hpcstaff 122 Jun 7 15:28 slurm-93.out
[user@gl-login1 ~]$ squeue
JOBID PARTITION     NAME     USER ST  TIME  NODES NODELIST(REASON)
   93  standard  example     user  R  0:04      1 gl3160
If you submit from a working directory which is not a shared filesystem, your output will only be available locally on the compute node and will need to be copied to another location after the job completes. /home, /scratch, and /nfs are all networked filesystems which are available on the login nodes and all compute nodes.
For example, if I submit a job from /tmp on the login node, the output will be in /tmp on the compute node.
[user@gl-login1 tmp]$ pwd
/tmp
[user@gl-login1 tmp]$ sbatch /home/user/test.sh
Submitted batch job 98
[user@gl-login1 tmp]$ squeue
JOBID PARTITION     NAME     USER ST  TIME  NODES NODELIST(REASON)
   98  standard  example     user  R  0:03      1 gl3160
[user@gl-login1 tmp]$ ssh gl3160
[user@gl3160 ~]$ ll /tmp/slurm-98.out
-rw-r--r-- 1 user hpcstaff 78 Jun 7 15:46 /tmp/slurm-98.out
Serial vs. Parallel jobs
Parallel jobs launch applications that are composed of many processes (also known as tasks) that communicate with each other, typically over a high-speed switch. Serial jobs launch one or more tasks that work independently on separate problems.
Parallel applications must be launched by the srun command. Serial applications can use srun to launch them, but it is not required in one node allocations.
Job Partitions
A cluster is often highly utilized and may not be able to run a job when it is submitted. When this occurs, the job waits in a partition until resources become available. Specific compute node resources are defined for every job partition. In Slurm, the term partition is synonymous with queue.
Each partition can be configured with a set of limits which specify the requirements for every job that can run in that partition. These limits include job size, wall clock limits, and the users who are allowed to run in that partition.
The Great Lakes cluster has the “standard” (used for most production jobs, 14 day max walltime), “largemem” (used for jobs that require large amounts of RAM, 14 day max), “gpu” (used for GPU-intensive tasks, 14 day max), “debug” (only to verify/debug jobs, 1 day max) and “viz” (used for visualization jobs, 1 day max) partitions.
Commands related to partitions include:
sinfo | Lists all partitions currently configured |
scontrol show partition <name> | Provides details about each partition |
squeue | Lists all jobs currently on the system, one line per job |
Job Status
Most of a job’s specifications can be seen by invoking scontrol show job <jobID>. The job’s batch script can be written to a file using scontrol write batch_script <jobID> output.txt. If no output file is specified, the script will be written to slurm-<jobID>.sh.
Slurm captures and reports the exit code of the job script (for sbatch jobs), as well as the signal that caused the job’s termination if the job was terminated by a signal.
A job’s record remains in Slurm’s memory for 30 minutes after it completes. scontrol show job will return “Invalid job id specified” for a job that completed more than 30 minutes ago. At that point, one must invoke the sacct command to retrieve the job’s record from the Slurm database.
Modifying a Batch Job
Many of the batch job specifications can be modified after a batch job is submitted and before it runs. Typical fields that can be modified include the job size (number of nodes), partition (queue), and wall clock limit. Job specifications cannot be modified by the user once the job enters the Running state.
Beside displaying a job’s specifications, the scontrol command is used to modify them. Examples:
scontrol -dd show job <jobID> | Displays all of a job’s characteristics |
scontrol write batch_script <jobID> | Retrieve the batch script for a given job |
scontrol update JobId=<jobID> Account=science | Change the job’s account to the “science” account |
scontrol update JobId=<jobID> Partition=priority | Changes the job’s partition to the priority partition |
Holding and Releasing a Batch Job
If a user’s job is in the pending state waiting to be scheduled, the user can prevent the job from being scheduled by invoking the scontrol hold <jobID> command to place the job into a Held state. Jobs in the held state do not accrue any job priority based on queue wait time. Once the user is ready for the job to become a candidate for scheduling once again, they can release the job using the scontrol release <jobID> command.
Signalling and Cancelling a Batch Job
Pending jobs can be cancelled (withdrawn from the queue) using the scancel command (scancel <jobID>). The scancel command can also be used to terminate a running job. The default behavior is to issue the job a SIGTERM, wait 30 seconds, and if processes from the job continue to run, issue a SIGKILL.
The -s option of the scancel command (scancel -s <signal> <jobID>) allows the user to issue any signal to a running job.
Common Job Commands
Command | Slurm |
Submit a job | sbatch <job script> |
Delete a job | scancel <job ID> |
Job status (all) | squeue |
Job status (by job) | squeue -j <job ID> |
Job status (by user) | sq (equivalent of: squeue -u <your_uniqname>) or sq <user> (equivalent of: squeue -u <user>) |
Job status (detailed) | scontrol show job -dd <job ID> |
Show expected start time | squeue -j <job ID> --start |
Queue list / info | scontrol show partition <name> |
Node list | scontrol show nodes |
Node details | scontrol show node <node> |
Hold a job | scontrol hold <job ID> |
Release a job | scontrol release <job ID> |
Cluster status | sinfo |
Start an interactive job | salloc <args> |
X forwarding | salloc <args> --x11 |
Read stdout messages at runtime | No equivalent command / not needed. Use the --output option instead. |
Monitor or review a job’s resource usage | sacct -j <job_num> --format JobID,jobname,NTasks,nodelist,CPUTime,ReqMem,Elapsed (see sacct for all format options) |
View job batch script | scontrol write batch_script <jobID> [filename] |
View accounts you can submit to | sacctmgr show assoc user=$USER |
View users with access to an account | sacctmgr show assoc account=<account> |
View default submission account and wckey | sacctmgr show User <user> |
Job States
The basic job states are these:
- Pending – the job is in the queue, waiting to be scheduled
- Held – the job was submitted, but was put in the held state (ineligible to run)
- Running – the job has been granted an allocation. If it’s a batch job, the batch script has been run
- Complete – the job has completed successfully
- Timeout – the job was terminated for running longer than its wall clock limit
- Preempted – the running job was terminated to reassign its resources to a higher QoS job
- Failed – the job terminated with a non-zero status
- Node Fail – the job terminated after a compute node reported a problem
For the complete list, see the “JOB STATE CODES” section under the squeue man page.
Pending Reasons
A pending job can remain pending for a number of reasons:
- Dependency – the pending job is waiting for another job to complete
- Priority – the job is not high enough in the queue
- Resources – the job is high in the queue, but there are not enough resources to satisfy the job’s request
- Partition Down – the queue is currently closed to running any new jobs
For the complete list, see the “JOB REASON CODES” section under the squeue man page.
Displaying Computing Resources
As stated above, computing resources are nodes, CPUs, memory, and generic resources like GPUs. The resources of each compute node can be seen by running the scontrol show node command. The characteristics of each partition can be seen by running the scontrol show partition command. Finally, a load summary report for each partition can be seen by running sinfo.
To show a summary of cluster resources on a per partition basis:
[user@gl-login1 ~]$ sinfo
PARTITION AVAIL  TIMELIMIT   NODES  STATE  NODELIST
standard     up 14-00:00:00     24   idle  gl31[60-83]
gpu          up 14-00:00:00      2   idle  gl10[18-19]
largemem     up 14-00:00:00      3   idle  gl000[0-3]
[user@gl-login1 ~]$ sstate
--------------------------------------------------------------------------------------------------
Node    AllocCPU  TotalCPU  PercentUsedCPU  CPULoad  AllocMem  TotalMem  PercentUsedMem  NodeState
--------------------------------------------------------------------------------------------------
gl3160         0        36            0.00     0.03         0    192000            0.00  IDLE
gl3160         0        36            0.00     0.04         0    192000            0.00  IDLE
...
In this example the user “user” has access to submit workloads to the accounts support and hpcstaff on the Great Lakes cluster. To show associations for the current user:
[user@gl-login1 ~]$ sacctmgr show assoc user=$USER
   Cluster    Account       User  Partition ...
------------------------------------------------
greatlakes    support       user          1
greatlakes   hpcstaff       user          1
Job Statistics and Accounting
The sreport command provides aggregated usage reports by user and account over a specified period. Examples:
By user: sreport -T billing cluster AccountUtilizationByUser Start=2017-01-01 End=2017-12-31
By account: sreport -T billing cluster UserUtilizationByAccount Start=2017-01-01 End=2017-12-31
For all of the sreport options see the sreport man page.
Time Remaining in an Application
If a running application overruns its wall clock limit, all its work could be lost. To prevent such an outcome, applications have two means for discovering the time remaining in the application.
The first means is to use the sbatch --signal=<sig_num>[@<sig_time>] option to request a signal (like USR1 or USR2) sig_time seconds before the allocation expires. The application must register a signal handler for the requested signal in order to receive it. The handler takes the necessary steps to write a checkpoint file and terminate gracefully.
The second means is for the application to issue a library call to retrieve its remaining time periodically. When the library call returns a remaining time below a certain threshold, the application can take the necessary steps to write a checkpoint file and terminate gracefully.
Slurm offers the slurm_get_rem_time() library call that returns the time remaining. On some systems, the yogrt library (man yogrt) is also available to provide the time remaining.
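As a hedged sketch of the signal-based approach in a bash batch script (the checkpoint step is a placeholder for whatever your application needs):

#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --signal=B:USR1@300   # deliver SIGUSR1 to the batch shell 300 seconds before the time limit

checkpoint_and_exit() {
    echo "Time limit approaching: writing checkpoint"
    # ./write_checkpoint       # placeholder for an application-specific checkpoint command
    exit 0
}
trap checkpoint_and_exit USR1

./my_long_running_program &    # run in the background so the shell can handle the signal
wait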