Distributed computing with Julia and Slurm
Recently I had a project where I had to run several thousand different tests of an algorithm over ranges of different parameter values. Each of these tests takes between 5 and 40 minutes to run on a single CPU. This is a fairly common scenario, and one that can be tackled quite easily with the massive parallelism offered by distributed computing clusters.
These clusters are often run by universities or labs and are usually shared between many hundreds of users across multiple institutions. To manage compute resources and handle job submission and queueing, workload managers such as Slurm and PBS are used. Users submit jobs to the job queue, and these are then allocated compute nodes and memory to execute their work.
There are two main methods that I have seen for running a large number of tests over a range of parameter tuples on high performance clusters:
- Submit many individual jobs, each for a single node. Each job essentially acts as a single program on a single CPU and runs a single parameter tuple, which is often defined through shell scripting and command line arguments. In this case the distribution of computation is handled by the HPC's queueing system.
- Submit a single job spanning many nodes that runs all the required parameter sets. Distribution of computation is handled by the job submitter rather than the queueing system.
The first method is often simpler to set up, but can add manual complexity by requiring more shell scripting and result collection. The second method, by unifying all the computation into a single job, greatly simplifies the required batch script and allows all the results to be processed in the main language (in my case, Julia).
My solution to this problem is a combination of the RemoteChannel example from the Julia Manual and an example on github by magerton. A full example of my solution can be found in this gist.
The code
Most of the necessary functionality is built into the standard library in the Distributed module for Julia and is detailed in the manual. Another key component is the ClusterManagers library, which adds support for the job queue systems typically used on shared high performance compute clusters.
The first step in distributed computing in Julia is adding the worker processes. When working locally this is done with the addprocs function, which starts a Julia worker process for each individual CPU core that can independently run code. The ClusterManagers library adds the ability to start the worker processes on the compute resources allocated through Slurm (essentially one worker per CPU core), and supports many other job management systems used on HPCs. The number of workers is equal to the number of tasks requested through sbatch, which we can read directly from the environment variable.
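A minimal sketch, assuming the task count is read from the standard SLURM_NTASKS environment variable:

```julia
# Start one Julia worker per Slurm task.
using Distributed, ClusterManagers

ntasks = parse(Int, ENV["SLURM_NTASKS"])  # set by Slurm inside the job allocation
addprocs(SlurmManager(ntasks))
```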
Because all the individual workers are independent Julia processes, we need to ensure that all functions that will be doing work are defined and compiled on all workers. This is done with the @everywhere macro. In this case I'm just declaring a single function with @everywhere; however, this could also be using a module with @everywhere using JuMP or including a file with @everywhere include("workfunctions.jl").¹
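A minimal illustration, where the work function and its arguments are placeholders rather than the real tests:

```julia
# Define a work function on every worker. This stands in for the real test,
# which in my case took 5-40 minutes per parameter tuple.
@everywhere function work(a, b)
    sleep(1)                           # placeholder for the actual computation
    return (a = a, b = b, result = a + b)
end
```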
To coordinate our workers, we need two queues: one for jobs that still need to be done, and one for results that are ready to be processed. A RemoteChannel is a queue-like FIFO data structure. Elements can be added to the queue with put! and removed with take!.
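For example, a pair of channels hosted on the main process (the element types and the buffer size of 32 are choices, not requirements):

```julia
# One channel for pending jobs and one for finished results.
# Jobs are (function-name, arguments) tuples; results are NamedTuples (see below).
jobqueue = RemoteChannel(() -> Channel{Tuple}(32))
results  = RemoteChannel(() -> Channel{NamedTuple}(32))
```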
Our workers will repeatedly remove a job from the job queue, execute it, and put the result in the result queue. Calls to take!
on a channel will block if the channel is empty, and calls to put!
block if the channel is full.
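A sketch of that worker loop; the do_work name and the :stop sentinel are my own conventions for this example:

```julia
@everywhere function do_work(jobqueue, results)
    while true
        fname, args = take!(jobqueue)   # blocks while the job queue is empty
        fname == :stop && break         # sentinel used later to shut the worker down
        f = getfield(Main, fname)       # look the work function up by name
        put!(results, f(args...))       # blocks while the results queue is full
    end
end
```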
Next we need to start the main loop on every worker. Right now they won’t actually be doing any work, because we haven’t submitted anything to the queue.
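One way to launch the loop is with remote_do, which fires off the call on each worker without waiting for it to return:

```julia
# Start the worker loop on every worker; each one blocks on take! until jobs arrive.
for p in workers()
    remote_do(do_work, p, jobqueue, results)
end
```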
In this set-up, a job consists of a tuple of the function name and the arguments for that function. A nicer alternative would be to have each job be a closure with the function to be executed; however, I found that sending closures through remote channels did not work.²
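For example, a job running the hypothetical work function from earlier with the arguments 1 and 2 would look like:

```julia
job = (:work, (1, 2))
```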
Jobs can be submitted for execution by adding them to the jobqueue channel with put!
. Because put!
blocks if the channel is full, it is important to add jobs to the queue asynchronously to avoid deadlocks.
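For example, assuming a hypothetical parameter_sets collection of argument tuples:

```julia
parameter_sets = [(a, b) for a in 1:10 for b in 1:10]

# put! blocks when the channel is full, so submit inside an @async task
# to keep the main task free for processing results.
@async for params in parameter_sets
    put!(jobqueue, (:work, params))
end
```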
Once the jobs are submitted, each of our workers will continuously take a job from the queue, execute it, and put the result into the results queue. I’ve found the most convenient form for the results is a NamedTuple
, as it is directly compatible with the Tables.jl
interface. This means you can directly write it to a CSV file or push it to a DataFrame
or database as needed.
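A sketch of the results loop, assuming the DataFrames and CSV packages are available:

```julia
using DataFrames, CSV

df = DataFrame()
for _ in 1:length(parameter_sets)
    push!(df, take!(results))   # each NamedTuple result becomes a row
end
CSV.write("results.csv", df)
```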
Finally, the workers need to be stopped gracefully.
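With the :stop sentinel from the worker loop above, this can be as simple as:

```julia
# Send one stop job per worker so each loop exits, then remove the worker processes.
for _ in workers()
    put!(jobqueue, (:stop, ()))
end
rmprocs(workers())
```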
Submitting the job
The Slurm batch script for this job is very simple, because all the inputs, outputs, and data processing is contained within the Julia script. The following is an example of a run script.
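A minimal sketch, with placeholder resource values, a placeholder main.jl filename, and -O3 as one way of enabling optimisation:

```bash
#!/bin/bash
#SBATCH --ntasks=48           # one Slurm task per Julia worker (example value)
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=2G      # example value; adjust to the workload
#SBATCH --time=12:00:00       # example value

julia -O3 main.jl             # run the main script with optimisation enabled
```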
The number of tasks determines the number of worker processes. Optimisation is enabled when running the script, as this can improve runtime at the cost of higher initial compilation time.
The above script (saved as run.sh
) can be submitted to Slurm with sbatch run.sh
.
Conclusions and future improvements
Using this method I was able to run jobs with up to 48 CPUs (although there is no reason more would not work) over several compute nodes. These jobs maintained a very high average CPU utilisation (~95%), with load balanced over all the CPU cores throughout the whole duration of the Slurm job.
There are a few improvements that can be made:
- All jobs right now are expected to return the same type of results. If different types of jobs are submitted, the results would need to be handled separately in the results processing loop.
- I have arbitrarily set the length of the jobs and results queue to 32. There may be fewer interruptions if this is increased, especially if the number of workers is large.
- Error handling within jobs is currently not really supported. In my jobs I simply have a try ... catch block within my job functions and the results are set to contain NaN values. This isn't ideal, and a better system of reporting errors would be useful.
Full example scripts can be found in this gist.
1. Due to world age issues, any work functions (or functions that are called from work functions) need to be defined/included before the main loop for the workers is started.
2. I'm not quite sure why, but I would guess it has something to do with compiled code from one worker not being compatible with the other. I also have no idea how closures would be stored in the channel. I wasn't able to get any meaningful error messages, so I settled on sending the function name and arguments. It works fine, although it looks a little uglier.