

Smallvoice - the Language and Voice Lab computing cluster


Smallvoice uses the Slurm workload manager to create a computing cluster.

When logged on to the cluster, users always land on the login node, called freedom, and should do all of their work from there.
/home is hosted on an NFS server, so every node sees the same “physical” disks.
All user jobs should be submitted through Slurm (sbatch job.sh); please do not run jobs locally on the login node.
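A minimal job script might look like the sketch below. The job name, output file, time limit, and train.py are purely illustrative, and the --gres=gpu:1 line assumes GPUs are exposed as a generic resource; adjust everything to your own job.

  #!/bin/bash
  #SBATCH --job-name=my_experiment     # illustrative job name
  #SBATCH --output=my_experiment.log   # file that collects stdout/stderr
  #SBATCH --gres=gpu:1                 # request one GPU (assumes GPUs are configured as a gres)
  #SBATCH --time=02:00:00              # example wall-clock limit

  # everything below runs on a compute node, not on freedom
  python3 train.py                     # train.py stands in for your own workload

Submit it from the login node with sbatch job.sh and Slurm will schedule it on one of the compute nodes.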

The computing (Slurm) environment

There are four partitions/queues available:

Name      Nodes   GPU              Time limit   Usage
doTrain   4       Nvidia A100 GPU  no limit     staff only
basic     2       Nvidia A100 GPU  36 hours     for students
bigVoice  1       Nvidia A100 GPU  no limit     staff
lvlWork   4       Nvidia A100 GPU  7 days       staff only

The default queue for staff is doTrain (and basic for students), so it is not necessary to choose a queue, but it is possible to specify a different one.
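To pick a different partition explicitly, name it on the command line or in the job script header; a brief sketch using the basic partition from the table above:

  # on the command line, overriding the default partition
  sbatch --partition=basic job.sh

  # or as a directive inside job.sh
  #SBATCH --partition=basic

  # list the partitions and their current state
  sinfo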

Installed software and drivers

* NVIDIA A100 GPU drivers
* CUDA toolkit (version 11.7)
* Intel oneAPI Math Kernel Library
* Python 3.9.7
* pip 20.3.4
* ffmpeg + sox
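To check what is actually available, the usual version queries work; a quick sketch (run them on a compute node, for instance inside a batch or interactive job, since the login node may differ):

  nvidia-smi                   # GPU driver version and GPU status
  nvcc --version               # CUDA toolkit version (if nvcc is on the PATH)
  python3 --version            # Python interpreter version
  pip --version                # pip version
  ffmpeg -version | head -n 1  # ffmpeg version
  sox --version                # sox version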

If additional software or a different version is needed, you can ask the sysadmin (compute@ru.is) for assistance.
