Quick reference: HTCondor submit file options

Introduction

This page lists common HTCondor submit file options as a quick reference. If you are just starting out, read the linked guides to understand the full context of each command or option.

Please note the following!

  • Bracketed items (<>) denote where to place your input. Do not include the brackets in your command.
  • All commands should be entered on a single line.

Commands to submit jobs

See job submission basics.

condor_submit <submit_file>
    Submits the job(s) specified by <submit_file>.
    Example: condor_submit helloWorld.sub

condor_submit -i <submit_file>
    Submits an interactive job as specified by <submit_file>.
    Example: condor_submit -i helloWorld.sub

Basic submit file options

See job submission basics.

executable = <script_or_binary>
    Path to the executable script or binary. Cannot be used with shell.
    The executable is automatically transferred to the Execution Point (EP) by HTCondor.
    Example: executable = helloWorld.py

arguments = "<args>"
    Lists arguments to be passed to the executable on the command line.
    Arguments are space separated. To embed spaces in an argument, use single quotes.
    Example: arguments = "--print 'hello world'"

shell = <command>
    The command and arguments to execute. Cannot be used with executable.
    You may need to transfer your executable script via transfer_input_files.
    Example: shell = python3 code.py

log = <job.log>
    Path to the job's log file. We recommend always specifying log to help with
    troubleshooting; if log is not provided, no log file is written.
    Example: log = log_files/job1.log

output = <job.out>
    Path to the file capturing stdout screen output. Can be merged with stderr by
    specifying the same path in error = <path>.
    Example: output = log_files/job1.out

error = <job.err>
    Path to the file capturing stderr screen output.
    Example: error = log_files/job1.err

batch_name = <name>
    Optional user-defined name for the job. Defaults to the job ID if not specified.
    Example: batch_name = train_$(Cluster)_$(Process)

Note: If log, error, or output is not defined, troubleshooting jobs will be significantly more difficult.
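
Putting the basic options together, a minimal submit file might look like the following sketch (file and directory names are illustrative; the resource request lines are covered in the sections below):

```
# hello.sub -- minimal example submit file
executable = helloWorld.py
arguments = "--print 'hello world'"

log    = log_files/job1.log
output = log_files/job1.out
error  = log_files/job1.err

request_cpus   = 1
request_memory = 1GB
request_disk   = 1GB

queue
```

Submit it with condor_submit hello.sub.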

Transfer files

Visit our file transfer guide for more details.

transfer_input_files = <file1>, <file2>
    Comma-separated list of input files to be transferred to the Execution Point (EP).
    Various file transfer protocols can be used, including file:///, osdf:///, and pelican:///.
    Example: transfer_input_files = osdf:///chtc/staging/...

transfer_output_files = <file1>, <file2>
    Explicitly lists the paths of files on the EP to be returned to the working
    directory on the AP. If this is not specified, HTCondor will only transfer new
    and changed files in the top-level directory of the Execution Point;
    subdirectories are not transferred.
    Example: transfer_output_files = results.txt

transfer_output_remaps = "<file1>=<new_path>; <file2>=<new_path>"
    Remaps output files to a specified path; can be used to rename files. Entries
    are delimited by semicolons. Can be used in conjunction with various file
    transfer protocols.
    Example: transfer_output_remaps = "results.txt=osdf:///chtc/staging/<user>/job1_results.txt"
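
The transfer options above can be combined. For example, to stage an input archive from the OSDF and rename the result file on the way back (file names and the staging path are illustrative):

```
# transfer a local script plus a staged archive to the EP
transfer_input_files = code.py, osdf:///chtc/staging/<user>/data.tar.gz

# bring back only the named result, renamed into staging
transfer_output_files  = results.txt
transfer_output_remaps = "results.txt=osdf:///chtc/staging/<user>/job1_results.txt"
```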

Request resources

request_cpus = <integer>
    Requests the number of CPUs (cores).
    Example: request_cpus = 4

request_disk = <quantity>
    Requests disk space (default unit: KiB). Units such as K, M, G, or T can be used.
    Example: request_disk = 40GB

request_memory = <quantity>
    Requests memory for the job (default unit: MB).
    Example: request_memory = 250GB

request_gpus = <integer>
    Requests the number of GPUs. See our GPU jobs guide.

gpus_minimum_capability = <version>
    Sets the minimum GPU capability.
    Example: gpus_minimum_capability = 8.5

gpus_maximum_capability = <version>
    Sets the maximum GPU capability.
    Example: gpus_maximum_capability = 9.0

gpus_minimum_memory = <quantity>
    Requests minimum GPU VRAM (default unit: MB).
    Example: gpus_minimum_memory = 3200

requirements = <ClassAd Boolean>
    Specifies job requirements to restrict jobs to certain Execution Points.
    Example: requirements = (HasCHTCStaging == true)

For more information on submitting GPU jobs, please visit our Using GPUs guide.
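
As a sketch, a GPU job that needs a reasonably recent card with at least 16 GB of VRAM might combine these options as follows (all values are illustrative; adjust to your workload):

```
request_cpus   = 4
request_memory = 16GB
request_disk   = 40GB

request_gpus            = 1
gpus_minimum_capability = 8.0
gpus_minimum_memory     = 16000
```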

Software environment

container_image = <image_file>
    Defines the container image path. Can pull from Docker Hub or use a local .sif file.
    Example: container_image = docker://pytorch/pytorch:latest

environment = <parameter_list>
    Lists environment variables for use in your jobs. The list is wrapped in double
    quotes (") and entries are space separated.
    Example: environment = "VARIABLE1=Value1 VAR2='hello world'"

Note: For more information on using containers in your jobs, please visit our Using Apptainer Containers or Running HTC Jobs Using Docker Containers guide.
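
These two options are often used together. A job that runs inside a container and passes settings to the code via environment variables might include (image and variable names are illustrative):

```
container_image = docker://pytorch/pytorch:latest
environment     = "OMP_NUM_THREADS=4 DATA_DIR=inputs"
```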

Submit multiple jobs

See our multiple jobs guide.

queue
    Submits a single job.

queue <N>
    Submits <N> jobs.
    Example: queue 10

queue <var> from <list>
    Submits jobs using variables from a list. The <var> value(s) can be used
    elsewhere in the submit file using $(var) syntax.
    Example: queue name from listOfEmployeeNames.txt

queue <var1>,<var2> from <list>
    Submits jobs using multiple variables from a list.
    Example: queue first, last from listOfEmployeeNames.txt

queue <var> in [slice] <list>
    Submits jobs using Python-style slicing of a list.
    Example: queue name in [5:18] listOfEmployeeNames.txt

queue <var> matching <globbing_string>
    Submits one job per file matching the pattern.
    Example: queue sampleID matching samples/sampleID_*
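
As a sketch of the from syntax: if a file contains one sample ID per line, each line becomes one job, and the variable can be reused anywhere in the submit file via $(var). File and variable names here are illustrative:

```
executable = process.sh
arguments  = "$(sampleID)"
output     = log_files/$(sampleID).out

# one job per line of listOfSamples.txt
queue sampleID from listOfSamples.txt
```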

HTCondor default variables

$(Cluster)
    The unique cluster ID generated for each job submission; shared by all jobs
    queued by the same condor_submit.
    Example: log = job_$(Cluster).log generates a unique log file, i.e., job_12345.log

$(Process)
    The process ID generated for each job in a cluster of submissions. Starts at 0.
    Example: output = $(Cluster)_$(Process).out generates a unique stdout file for
    each job, i.e., 12345_0.out, 12345_1.out, 12345_2.out
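
Combining both variables gives every job in a cluster its own set of files, which keeps troubleshooting simple when submitting many jobs at once:

```
log    = log_files/job_$(Cluster)_$(Process).log
output = log_files/job_$(Cluster)_$(Process).out
error  = log_files/job_$(Cluster)_$(Process).err
```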

Scale beyond local capacity

These options are best for short (<8 hr) or checkpointable jobs and expand matching opportunities across campus and beyond. Read more about scaling beyond local capacity. If you are using these options, do not include HasCHTCStaging in the requirements.

want_campus_pools = true
    Allows jobs to run on other HTCondor pools on campus, scaling beyond CHTC’s
    capacity using UW’s shared compute capacity.

want_ospool = true
    Allows jobs to match to the national Open Science Pool (OSPool), scaling beyond
    CHTC’s capacity.

Note: While scaling up, we recommend reaching out to the Research Computing Facilitation team for more information about running jobs using the Campus or Open Science Pool.
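
In a submit file these are simple boolean lines added alongside the usual options, e.g. for a short, checkpointable job:

```
# allow this job to run beyond CHTC's local pool
want_campus_pools = true
want_ospool       = true
```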

Glossary

access point
    The machine you log into for submitting jobs (e.g., a CHTC login node).
error file / standard error
    The file where your job writes error messages.
execution point
    The machine where your job actually runs.
held / hold
    The job has encountered an issue and is paused.
idle
    The job has not yet matched to an execution point.
job ID
    Unique ID of the form ClusterID.ProcID, e.g., 12345.0.
log file
    Tracks job events and resource usage.
output file / standard out
    The file where your job writes standard output (e.g., print statements).
process ID
    The ID of an individual job within a job cluster.
running
    The job has matched and is currently running.
submit file
    The file specifying everything needed to run your job.