How do I connect to the cluster?

If you are currently a Cosine customer, you already have access to our cluster. To access the server, SSH into submit.hpc.cosine.oregonstate.edu using your cosine/science username and password.
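For example, from a terminal (replace your_username with your own cosine/science username):

ssh your_username@submit.hpc.cosine.oregonstate.edu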

If connecting from off campus, please refer to "I can't connect to the cluster from off campus" below.

Join the Cosine HPC mailing list for notifications about software updates and maintenance. Visit http://lists.science.oregonstate.edu/mailman/listinfo/cosine-hpc

 

How is storage handled on the Cluster?

User home directories on the cluster are provided by a dedicated server with 26TB of disk space. For performance reasons, this space is not backed up.

How can I bring my science home (Z:) to the cluster?

  1. Create a directory in your home for the Z drive mount point
  2. sshfs shell.cosine.oregonstate.edu: <your mount point>
  3. Copy files from the Z drive to your cluster home before execution (see the copy example below)

Example

mkdir ~/zdrive

sshfs shell.cosine.oregonstate.edu: ~/zdrive
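Once the Z drive is mounted, copy what you need into your cluster home before submitting jobs; the directory name my_project below is only a placeholder:

cp -r ~/zdrive/my_project ~/my_project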


How do I view and set my environment?

The cluster uses environment modules to provide an easy way to switch between software revisions. These modules configure environment variables such as PATH for each piece of software.

To get a list of available modules to load, execute:

module avail

To get a list of what modules are currently loaded, execute:

module list 

To load the Matlab 2014b module, execute

module load matlab/R2014b 

To display modules set to load during login:

module initlist

To set a module to automatically load during login:

module initadd matlab/R2014b

To remove a module from loading during login:

module initrm matlab/R2014b

 

How do I submit a job?

Jobs should be submitted using a special sh script which tells the scheduler how to handle the job.

An example with common options can be seen below: 

 submit.sh

#!/bin/sh

# Give the job a name
#$ -N example_job
# set the shell
#$ -S /bin/sh
# set working directory on all hosts to
# directory where the job was started
#$ -cwd
# send all process STDOUT (fd 1) to this file
#$ -o job_output.txt
# send all process STDERR (fd 2) to this file
#$ -e job_output.err
# email information
#$ -m e
# Just change the email address. You will be emailed when the job has finished.
#$ -M myusername@science.oregonstate.edu
# generic parallel environment with 2 cores requested
#$ -pe orte 2
# Load a module, if needed
module load sprng/5
# Commands
./my_program


If necessary, make my_program executable

chmod +x my_program

Submit the job

qsub submit.sh
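If the submission succeeds, qsub prints the ID assigned to the job; the output looks something like this (the job number here is only illustrative):

Your job 123456 ("example_job") has been submitted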

How do I check the status of the queue?

qstat is used to check the status of jobs on the cluster. By itself, it will show a brief overview of your own jobs:

qstat

To show the status of all nodes and queued processes, execute

qstat -u '*'

 

The state codes displayed in the last column of qstat are as follows:

Category    State                                            SGE Letter Code
Pending     pending                                          qw
            pending, user hold                               qw
            pending, system hold                             hqw
            pending, user and system hold                    hqw
            pending, user hold, re-queue                     hRwq
            pending, system hold, re-queue                   hRwq
            pending, user and system hold, re-queue          hRwq
Running     running                                          r
            transferring                                     t
            running, re-submit                               Rr
            transferring, re-submit                          Rt
Suspended   job suspended                                    s, ts
            queue suspended                                  S, tS
            queue suspended by alarm                         T, tT
            all suspended with re-submit                     Rs, Rts, RS, RtS, RT, RtT
Error       all pending states with error                    Eqw, Ehqw, EhRqw
Deleted     all running and suspended states with deletion   dr, dt, dRr, dRt, ds, dS, dT, dRs, dRS, dRT

How do I get the status of a job?

If a job is currently running:

qstat -j <jobId>

After a job has been executed:

qacct -j <jobId>

How do I delete a job?

The command:

qdel <job id of process>

is used to remove a job from the queue. If the job is in a dr state, the -f flag must be used to force the job to stop. The job ID is supplied as an argument to qdel.
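For example, to force removal of a job stuck in the dr state:

qdel -f <jobId>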

How do I make sure a node has enough memory?

Nodes in the all.q have mixed memory sizes. To ensure that a job lands on a node with enough memory, the mem_free resource can be used.

For example, to execute on nodes with at least 60GB of RAM available:

qsub -l mem_free=60G submit.sh
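The same resource request can also be placed inside the submission script as an SGE directive, alongside the other #$ options:

#$ -l mem_free=60G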

How do I use R on the cluster?

Create a Submission File

Place the following code into a .sh file (e.g. submit.sh):

#!/bin/sh
# Give the job a name
#$ -N JOB_NAME
# set the shell
#$ -S /bin/sh
# set working directory on all hosts to directory where the job was started
#$ -cwd
# send all ERROR messages to this file
#$ -e errors.txt
# Change the email address to YOUR email, and you will be emailed when the job has finished.
#$ -m e
#$ -M email_address@oregonstate.edu
# Ask for 1 core, as R can only use 1 core for processing
#$ -pe orte 1
# Load the R Module
module load R
# Commands to run job
R --no-save < inputFile.r > outputFile.out

 

Submit the Job to the Cluster

Type the following commands, replacing "submit.sh" with the name of your .sh file:

qsub submit.sh

R Examples

Examples for R can be found on the cluster inside the /cm/shared/examples/R folder.

How do I Install R libraries?

You can download and install any R Libraries which you might need to run on the cluster into your home directory and simply use them from there. These instructions give you the steps to accomplish this.

SSH to the cluster, then:

  1. Load the R module
    module load R
  2. Launch R
    R
  3. Type the command to install the desired package
    install.packages("package_name")

 

If this is the first time you have run the install.packages() command, you will be asked if you want to create a personal library. Answer 'y'.

Follow the prompts to pick a mirror, etc.


R will download and install the library into the newly created personal library (in your home directory).

To use this library, use the library command (like any other installed library):

library(package_name)

How do I use Gaussian on the Cluster?

Create a Submission File

Place the following code into a .sh file (e.g. submit.sh):

#!/bin/sh
# Give the job a name
#$ -N JOB_NAME
# set the shell
#$ -S /bin/sh
# set working directory on all hosts to directory where the job was started
#$ -cwd
# send all ERROR messages to this file
#$ -e errors.txt
# Change the email address to YOUR email, and you will be emailed when the job has finished.
#$ -m e
#$ -M email_address@oregonstate.edu
# Use 4 cores for processing
#$ -pe orte 4
# Load the Gaussian Module
module load gaussian/g16
# Commands to run job
g16 < inputFile.com > outputFile.out

 

Submit the Job to the Cluster

Type the following commands, replacing "submit.sh" with the name of your .sh file:

qsub submit.sh

Gaussian Examples

Examples for Gaussian can be found on the cluster inside the /cm/shared/examples/g09 folder

How do I use MATLAB on the cluster?

Thanks to the new campus agreement with Mathworks, Matlab Distributed Computing Server is available and installed on the Cosine cluster. 

In order to use Matlab, the module must be loaded:

$ module avail matlab

$ module load matlab/R2014b

Interactive Matlab

Interactive Matlab sessions can be run in text-only mode or using the full Matlab GUI:

  • the matlab command will try to start the Matlab desktop GUI using X-Windows; if X is not available, a text-only session will be started
  • to specify a text-only interactive Matlab session, use matlab -nodisplay.

If you want to run a text-only Matlab session, you should (a complete example session follows this list):

  • log in to the cluster e.g. using ssh ...
  • start a session on a node on the cluster using qlogin -pe orte <numberOfCoresRequested>
  • from this session, you should:
    • load the appropriate Matlab module e.g. module load matlab
    • start Matlab using matlab -nodisplay
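Putting the steps together, a typical text-only session might look like the following (the username, core count, and module version are only examples):

ssh myusername@submit.hpc.cosine.oregonstate.edu
qlogin -pe orte 2
module load matlab/R2014b
matlab -nodisplay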

 

Non-Interactive Matlab Jobs

There are two main categories of non-interactive Matlab SGE jobs that you can run on the cluster:

  • array jobs run multiple copies of a job across the cluster differentiated by a task ID;
  • distributed jobs use Matlab Distributed Computing Server (MDCS) to run across nodes on the cluster and allow communication between tasks.

 

Array jobs

Array jobs should be used when the job does not require any synchronisation between tasks. The script will be launched multiple times, with a varying index. The index is accessible via the environment variable SGE_TASK_ID.

Typical uses of array jobs would include:

  • processing a set of input files with each job processing a different file;
  • processing a single large file using multiple jobs each of which processes a section of the file;
  • examining the performance of a model using multiple sets of model parameters.

An example can be found on the cluster in:

/cm/shared/examples/matlab/array
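As an illustration of the mechanism (this sketch is not the cluster example; the script name process_one_input.m and the output file names are placeholders), an array job submission script might look like:

#!/bin/sh
#$ -N matlab_array_example
#$ -S /bin/sh
#$ -cwd
# run 10 copies of this job, with SGE_TASK_ID set to 1 through 10
#$ -t 1-10
module load matlab/R2014b
# the Matlab script can read the task index with getenv('SGE_TASK_ID')
matlab -nodisplay < process_one_input.m > output_${SGE_TASK_ID}.txt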

 

Distributed Jobs

Rather than submitting SGE jobs that execute Matlab scripts on the cluster nodes, distributed jobs launch tasks on cluster nodes from within Matlab. Distributed jobs require the cluster to be configured within Matlab, and submission scripts which define how tasks should be launched on cluster nodes. The submission of the jobs is performed via Matlab GUI or command line interface.

In order to run distributed jobs, you should:

  1. Configure Matlab to use the cluster, either using a cluster profile or programmatically
  2. Create an independent and/or communicating job submission script
  3. Submit (run) your job

 

Matlab cluster profiles

Using GUI configuration utility

In order to configure it, start the Matlab GUI and then go to Parallel -> Manage Cluster Profiles.

A new window will pop up. In the new window, click Add -> Custom -> Generic.

A new profile will be created. Rename it to something sensible (you will refer to it from your code). Let's call it Cosine.

Next, make sure you have provided the following information in the Properties tab (leaving all of the other options at their defaults):

 

Main Properties
  Description of this cluster: Cosine HPC
  Folder where cluster stores job data: use default (unless you want to specify an alternative location)
  Number of workers available to the cluster: 32
  Root folder of MATLAB installation for workers: use default
  Cluster uses MathWorks hosted licensing: false

Submit Functions
  Function called when submitting independent jobs: @independentSubmitFcn
  Function called when submitting communicating jobs: @communicatingSubmitFcn

Cluster Environment
  Cluster nodes' operating system: Unix
  Job storage location is accessible from client and cluster nodes: yes

Workers
  Range of number of workers to run the job: [1 32]

Jobs and task functions
  Function to query cluster about the job state: @getJobStateFcn
  Function to manage cluster when you call delete on a job: @deleteJobFcn


Note that once the profile has been loaded, you can override these settings from the submission script.

Once the profile has been set up, click OK. Next, select the newly created profile and validate the configuration.

 
Importing Cluster Profiles

You can import a profile using either the Cluster Profile Manager or the Matlab parallel.importProfile(filename) command.

parallel.importProfile('/cm/shared/examples/matlab/distributed/Cosine.settings');

To import settings from the Cluster Profile Manager, use:

  • Parallel -> Manage Cluster Profiles
  • Add -> Import
  • and select the appropriate settings file.
 
Programmatically

Rather than using a previously defined cluster profile, the cluster details can be configured ad-hoc in a .m script file:

cluster = parallel.cluster.Generic();
cluster.NumWorkers = 32;
cluster.JobStorageLocation = '/homes/cosine/helpdesk/matlab/';
cluster.IndependentSubmitFcn = @independentSubmitFcn;
cluster.CommunicatingSubmitFcn = @communicatingSubmitFcn;
cluster.OperatingSystem = 'unix';
cluster.HasSharedFilesystem = true;
cluster.GetJobStateFcn = @getJobStateFcn;
cluster.DeleteJobFcn = @deleteJobFcn;
cluster.RequiresMathWorksHostedLicensing = false;

To save the cluster definition as a profile for later re-use, use:

cluster.saveAsProfile('Cosine')

To load a previously saved cluster definition, use:

cluster = parcluster('Cosine')
 
Passing Additional Parameters to SGE

If you want to pass additional arguments to SGE, specify the submit function as {@communicatingSubmitFcn, 'list_of_additional_qsub_parameters'}

e.g. to specify that 5GB of memory should be requested, and that emails should be sent to name@domain.name at the beginning and end of the job, the submit function should be specified as:

cluster = parcluster('Cosine');
cluster.CommunicatingSubmitFcn = {@communicatingSubmitFcn, '-l h_vmem=5G -m be -M name@domain.name'};
pp = parpool(cluster);
parfor i=1:10
        [~, hn] = system('hostname');
        disp(hn);
end
...
delete(pp)

Make sure that the options you pass to the qsub command are syntactically correct, otherwise the job will fail (see the qsub man page for the list of available options).

 

Independent Jobs

An independent job is defined as follows (from http://www.mathworks.co.uk/help/distcomp/program-independent-jobs.html):

 

An Independent job is one whose tasks do not directly communicate with each other, that is, the tasks are independent of each other. The tasks do not need to run simultaneously, and a worker might run several tasks of the same job in succession. Typically, all tasks perform the same or similar functions on different data sets in an embarrassingly parallel configuration.

Independent jobs are created using the Matlab createJob() function.
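A minimal sketch of an independent job, following the same blocking pattern used in the examples later in this section (the task function and arguments are only illustrative):

cluster = parcluster('Cosine');
ijob = createJob(cluster);
% two independent tasks, each returning one 3x3 matrix of random numbers
createTask(ijob, @rand, 1, {{3,3}, {3,3}});
submit(ijob);
wait(ijob, 'finished');
results = getAllOutputArguments(ijob);
destroy(ijob);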

 

An independent job example:

 /cm/shared/examples/matlab/distributed/independent

Note: Matlab will submit the job to SGE without the need to write a submission script:

$ matlab -nodisplay < independent.m

Communicating Jobs

A communicating job is defined as follows (from http://www.mathworks.co.uk/help/distcomp/introduction.html):

Communicating jobs are those in which the workers can communicate with each other during the evaluation of their tasks. A communicating job consists of only a single task that runs simultaneously on several workers, usually with different data. More specifically, the task is duplicated on each worker, so each worker can perform the task on a different set of data, or on a particular segment of a large data set. The workers can communicate with each other as each executes its task. The function that the task runs can take advantage of a worker's awareness of how many workers are running the job, which worker this is among those running the job, and the features that allow workers to communicate with each other.

Communicating jobs are required for:

  • parfor loops which allow multiple loop iterations to be executed in parallel;
  • spmd blocks which run a single program on multiple data - i.e. the same program runs on all workers with behaviour determined by the varying data on each worker (see here).

Communicating jobs are created using the Matlab createCommunicatingJob() function and can have a Type of either pool or spmd:

  • pool job runs the supplied task on one worker and uses the remaining workers as a pool to execute parfor loops, spmd blocks, etc.; the total number of workers available for parallel code is therefore one less than the total number of workers;
  • spmd job runs the supplied task on all the workers, with no task fundamentally in control - effectively, an spmd job acts as if the entire task is within an spmd block.

Communication between spmd workers (whether in an spmd job or spmd block) occurs using the lab* functions (see Matlab help). Control of spmd workers is usually exerted by message passing and testing data values (e.g. using the worker with labindex of 1 to control the other workers).
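A minimal sketch of a communicating job of Type spmd (the task function is only illustrative; see the cluster example referenced below):

cluster = parcluster('Cosine');
cjob = createCommunicatingJob(cluster, 'Type', 'spmd');
cjob.NumWorkersRange = [4 4];        % run the task on exactly 4 workers
% each worker returns its own index (1..4)
createTask(cjob, @labindex, 1, {});
submit(cjob);
wait(cjob, 'finished');
results = getAllOutputArguments(cjob);
destroy(cjob);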

A communicating job example:

 /cm/shared/examples/matlab/distributed/communication

 

Note: Matlab will submit the job to SGE without the need to write a submission script:

$ matlab -nodisplay < communication.m

 

Non-blocking jobs

The examples in both the communicating and independent jobs sections submit the job, then wait (block) until the job is complete, subsequently extracting the results and deleting the job, i.e.

 

cluster = parcluster('Cosine');
ijob = createJob(cluster);
....
submit(ijob);
wait(ijob, 'finished');  %Wait for the job to finish
results = getAllOutputArguments(ijob); %retrieve results
...
destroy(ijob); %destroy the job

In some situations this might not be desired, e.g. where the client is not allowed to run for a long time on the submit host. In such cases a non-blocking submit script should be used instead. The only difference from the communicating and independent scripts defined earlier is that a non-blocking job doesn't have the wait and destroy calls.
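A minimal sketch of the non-blocking pattern (the task is illustrative; the wait and destroy calls are simply omitted, and the Matlab job ID is displayed so the results can be fetched later):

cluster = parcluster('Cosine');
ijob = createJob(cluster);
createTask(ijob, @rand, 1, {{3,3}});
submit(ijob);
% no wait()/destroy() here: the script returns as soon as the job is submitted
disp(ijob.ID);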

 

A non-blocking independent submit script:

/cm/shared/examples/matlab/distributed/independent/independent_noblock.m 

A non-blocking communicating submit script:

/cm/shared/examples/matlab/distributed/communication/communication_noblock.m 

 

Once the job has been completed, the results can be fetched programmatically:

cluster = parcluster('Cosine');
job = cluster.findJob('ID',1);
job_output = fetchOutputs(job);

 The ID used in cluster.findJob('ID', ...) above is the internal Matlab job ID as displayed at the end of the example non-blocking submit scripts, not the SGE job ID.

Once you have finished with the job, you can delete it using:

destroy(job);

 

How do I use GPU/CUDA Resources?

The node cosine004 has two NVIDIA Tesla K40m GPU Computing Accelerators. Each card provides 2880 cores and 12GB of RAM. Each card has been set to exclusive mode, meaning only one process can access the GPU at a time.

The device names of these cards are /dev/nvidia0 and /dev/nvidia1. 

A dedicated queue, gpu.q has been created for these resources. 

For interactive use, use qlogin and specify the queue:

 qlogin -q gpu.q

For batch use, use qsub in the standard fashion, but specify the queue:

 qsub -q gpu.q submit.sh

 

CUDA 7.5 tools are installed but must be loaded with the modules system. Typically you will include the toolkit and the gdk:

module load cuda75/toolkit/7.5.18
module load cuda75/gdk/352.79

NOTE: when compiling software with nvcc, there is a module conflict with gcc/5.1.0; remove this module to use the system gcc 4.8.5:

module unload gcc/5.1.0
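Putting the pieces together, a batch submission script for the GPU queue might look like the following sketch (my_cuda_program is only a placeholder for your own executable):

#!/bin/sh
#$ -N gpu_example
#$ -S /bin/sh
#$ -cwd
# run in the GPU queue
#$ -q gpu.q
# load the CUDA tools
module load cuda75/toolkit/7.5.18
module load cuda75/gdk/352.79
# run the GPU program
./my_cuda_program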

CUDA Example

A simple CUDA example can be found in the directory: /cm/shared/examples/cuda

I can't connect to the cluster from off campus

The Cosine cluster is protected by a firewall which blocks access from off campus. Users need to establish a VPN connection or first ssh into a campus server before connecting to the submit node.

We recommend connecting using the VPN. However, if you cannot use the OSU VPN but are able to remote into a campus PC, you can SSH from the campus PC to access the cluster.
If you prefer to work locally when submitting jobs and accessing the cluster, you can also set up SSH multi-hopping.

 

I get font errors running qmon, how do I fix it?

If warnings and an error like the following appear when attempting to run qmon:

[root@cluster-submit ~]# qmon
Warning: Cannot convert string "-adobe-helvetica-medium-r-*--14-*-*-*-p-*-*-*" to type FontStruct
Warning: Cannot convert string "-adobe-helvetica-bold-r-*--14-*-*-*-p-*-*-*" to type FontStruct
Warning: Cannot convert string "-adobe-helvetica-medium-r-*--20-*-*-*-p-*-*-*" to type FontStruct
Warning: Cannot convert string "-adobe-helvetica-medium-r-*--12-*-*-*-p-*-*-*" to type FontStruct
Warning: Cannot convert string "-adobe-helvetica-medium-r-*--24-*-*-*-p-*-*-*" to type FontStruct
Warning: Cannot convert string "-adobe-courier-medium-r-*--14-*-*-*-m-*-*-*" to type FontStruct
Warning: Cannot convert string "-adobe-courier-bold-r-*--14-*-*-*-m-*-*-*" to type FontStruct
Warning: Cannot convert string "-adobe-courier-medium-r-*--12-*-*-*-m-*-*-*" to type FontStruct
Warning: Cannot convert string "-adobe-helvetica-medium-r-*--10-*-*-*-p-*-*-*" to type FontStruct
X Error of failed request: BadName (named color or font does not exist)
Major opcode of failed request: 45 (X_OpenFont)
Serial number of failed request: 525
Current serial number in output stream: 536

then you will need to install a font package on your machine.

For machines running Ubuntu:

apt-get install xfstt
service xfstt start
apt-get install xfonts-75dpi
xset +fp /usr/share/fonts/X11/75dpi
xset fp rehash

What are the specs of the cluster?

 

NODE CPU CORES RAM (GB) Network
TOTALS:   2616 11952  
cosine001 E5-2620 v3 @ 2.40GHz 24 64 1 Gbps
cosine002 E5-2680 v3 @ 2.50GHz 48 256 1 Gbps
cosine003 E5-2697 v2 @ 2.70GHz 48 256 1 Gbps
cosine004 E5-2620 v4 @ 2.10GHz 32 64 1 Gbps
cosine005 E5-2695 v4 @ 2.10GHz 72 256 1 Gbps
cosine006 E5-2620 v4 @ 2.10GHz 16 64 1 Gbps
cosine007 E5-2620 v4 @ 2.10GHz 16 64 1 Gbps
cosine008 E5-2620 v4 @ 2.10GHz 16 64 1 Gbps
cosine009 Silver 4216 CPU @ 2.10GHz 64 96 1 Gbps
cosine010 Silver 4216 CPU @ 2.10GHz 64 96 1 Gbps
di001 E5-2680 v3 @ 2.50GHz 48 256 1 Gbps
di002 E5-2680 v3 @ 2.50GHz 48 256 1 Gbps
di003 E5-2680 v3 @ 2.50GHz 48 256 1 Gbps
finch001 E5-2630 v2 @ 2.60GHz 24 128 1 Gbps
lazzati001 E5-2695 v3 @ 2.30GHz 48 128 FDR
lazzati002 E5-2695 v3 @ 2.30GHz 48 128 FDR
lazzati003 E5-2695 v3 @ 2.30GHz 48 128 FDR
lazzati004 E5-2695 v3 @ 2.30GHz 48 128 FDR
lazzati005 Gold 5218 CPU @ 2.30GHz 64 192 FDR
lazzati006 Gold 5218 CPU @ 2.30GHz 64 192 FDR
lazzati007 Gold 5218 CPU @ 2.30GHz 64 192 FDR
lazzati008 Gold 5218 CPU @ 2.30GHz 64 192 FDR
schneider001 E5-2620 @ 2.00 GHz 24 128 1 Gbps
schneider002 E5-2630 v2 @ 2.60 GHz 24 128 1 Gbps
nmr001 Silver 4214R CPU @ 2.40GHz 48 192 1 Gbps
spp001 Gold 6334 CPU @ 3.60GHz 32 512 1 Gbps
hazoun001 AMD EPYC 7543 32-Core 64 512 1 Gbps
zuehlsdorff001 Xeon(R) Gold 6230R CPU @ 2.10GHz 104 376 1 Gbps
zuehlsdorff002 Xeon(R) Gold 6230R CPU @ 2.10GHz 104 376 1 Gbps
zuehlsdorff003 Gold 6230 CPU @ 2.10GHz 80 192 1 Gbps
zuehlsdorff004 Gold 6230 CPU @ 2.10GHz 80 192 1 Gbps
zuehlsdorff005 Gold 6230 CPU @ 2.10GHz 80 192 1 Gbps
zuehlsdorff006 Gold 6230 CPU @ 2.10GHz 80 192 1 Gbps
zuehlsdorff007 Gold 6230 CPU @ 2.10GHz 80 192 1 Gbps
zuehlsdorff008 Gold 6230 CPU @ 2.10GHz 80 192 1 Gbps
dri001 Gold 5118 CPU @ 2.30GHz 48 384 1 Gbps
dri002 Gold 5118 CPU @ 2.30GHz 48 384 1 Gbps
dri003 Gold 5118 CPU @ 2.30GHz 48 384 1 Gbps
dri004 Gold 5118 CPU @ 2.30GHz 48 384 1 Gbps
dri005 Gold 5118 CPU @ 2.30GHz 48 384 1 Gbps
dri006 Gold 5118 CPU @ 2.30GHz 48 384 1 Gbps
dri007 Gold 5118 CPU @ 2.30GHz 48 384 1 Gbps
dri008 Gold 5118 CPU @ 2.30GHz 48 384 1 Gbps
dri009 Gold 5118 CPU @ 2.30GHz 48 384 1 Gbps
dri010 Gold 5118 CPU @ 2.30GHz 48 384 1 Gbps
sun01 E5-2697 v2 @ 2.70GHz 48 256 1 Gbps
sun02 E5-2697 v2 @ 2.70GHz 48 256 1 Gbps
sun03 E5-2697 v2 @ 2.70GHz 48 256 1 Gbps
sun04 E5-2697 v2 @ 2.70GHz 48 256 1 Gbps
sun05 E5-2697 v2 @ 2.70GHz 48 256 1 Gbps
 

 

- Where cores are allocated to more than one queue, the investor queues take precedence during scheduling.

GPU Resources

cosine004 (gpu.q): 2 NVIDIA Tesla K40m GPUs, each with 2880 cores and 12GB of RAM.

cosine004 (hendrix-gpu.q): 2 NVIDIA Tesla K40c GPUs, each with 2880 cores and 12GB of RAM.

cosine001 (sun-gpu.q): 1 NVIDIA Tesla K40m GPU with 2880 cores and 12GB of RAM.

zuehlsdorff001-002: 8x GeForce RTX 2080, 12GB RAM (each)
zuehlsdorff004-008: 4x GeForce RTX 3080, 12GB RAM (each)

I am receiving an error: /lib64/libstdc++.so.6: version `GLIBCXX_3.X.XX' not found

If this error occurs, you need to load a newer version of gcc that has an updated libstdc++.so.6 library. In your submit script, add the following lines to switch from gcc 5.1.0 to 9.2.0:

module unload gcc/5.1.0
module load gcc/9.2.0