
All of the compute nodes have moved to the new cluster (October 2018); see the instructions there.

Please Contact Us with any question you don't find answered, or if you have suggestions or corrections.

This old cluster is being decommissioned

The old cluster described below is only available for transferring files. Please move your files by Jan 15, 2019. Contact Us with questions.

System Overview

Hpc64 is a centrally administered computing cluster dedicated to research computing, composed of servers owned by various labs in the Division of Science.

The system is accessible to users with accounts. Upon login, users land on the "login node"; from there they can organize their files, compile software, prepare their runs, and then interact with the job scheduler to submit calculations to the compute nodes.

NOTE: the login node must not be used to run calculations directly; it should only be used to submit calculations to the scheduler.
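The Rocks distribution this cluster runs typically ships with the SGE scheduler; assuming SGE is in use here (an assumption — check with support), a minimal batch-script fragment might look like the sketch below. The job name, parallel environment, and executable are hypothetical placeholders.

```shell
#!/bin/bash
# Minimal SGE batch-script sketch (assumes an SGE scheduler;
# the job name, "smp" parallel environment, and executable are hypothetical).
#$ -N my_job          # job name shown in the queue
#$ -cwd               # run the job from the submission directory
#$ -pe smp 8          # request 8 slots on one node
#$ -l h_rt=04:00:00   # wall-clock limit of 4 hours

./my_simulation       # hypothetical executable prepared on the login node
```

A script like this would be submitted from the login node with `qsub job.sh` and monitored with `qstat`, so the computation itself runs on a compute node rather than the login node.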

There is an HPC Advisory Committee that oversees the cluster. System management, application and user support, and user training fall to Arash Nemati Hayati.

If you have a technical question or problem that cannot be addressed by the documentation below, or you need HPC-related advice, please open a ticket using the form at this link: Open a Ticket


Old technical information

The cluster runs 64-bit Rocks/RHEL Linux and is composed of a heterogeneous collection of Intel Xeon based hardware, ranging from 2-socket (8 cores/node) Dell PowerEdge R410 servers to more recent 2-socket (16 cores/node) Supermicro servers and 4-socket (32 cores/node) Dell M820 blade servers.

The system currently comprises 1900 physical CPU cores and 168 GPUs across 145 computational nodes, each with 8 to 32 physical cores and clock speeds ranging from 2.20 GHz to 2.80 GHz.
Four nodes are connected to 12 NVIDIA Tesla M2050 GPUs, 11 nodes to 52 NVIDIA GeForce GTX 780 Ti GPUs, and 17 nodes to 104 NVIDIA GeForce Titan X GPUs.

The RAM on the nodes ranges from 8 GB (1 GB/CPU core) on many of the older Dell R410 servers to 128 GB (4 GB/CPU core) on the more recent Dell M820 blade servers. The storage amounts to about 34 TB.

The system also features one node with 512 GB RAM and 32 Haswell cores, dedicated to large-memory jobs, and one node with two NVIDIA GPUs dedicated to remote visualization, which allows remote accelerated rendering using VirtualGL.


The system is structured as a 'condo model', and access to resources is connected to hardware ownership as described in the section Policy and queues.