
New SMP+GPU system: Executor

** Forward this to any group members who may be interested in new HPC systems **

Executor:

A new large-memory SMP+GPU system is now available: executor.structbio.pitt.edu. This type of system is useful when a job needs more cores than a single cluster node provides, all with access to one large pool of shared memory.

Executor is similar to Archer in its usage scenarios, but it has more (and faster) cores, more RAM, and a larger high-performance local scratch space. It also has two of the latest-generation GPUs for parallel co-processing.

The specs:

64 AMD EPYC CPU cores running at 3.0 GHz (with boost)
512 GB RAM
20 TB scratch volume (mounted at /executor)
2 NVIDIA RTX 2080 Ti GPUs (11 GB GDDR6 RAM per GPU)
8,704 total CUDA compute cores
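
Once you have a login, you can confirm these specs yourself from a shell with standard tools:

    nproc               # count of usable CPU cores
    free -h             # total and available RAM
    df -h /executor     # size and usage of the scratch volume
    nvidia-smi          # the two RTX 2080 Ti cards and their memory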

Executor uses the SLURM scheduler for job submission and environment modules for loading software. Log in via SSH with your Structbio ID, just as on the other HPC systems. SLURM's syntax differs slightly from PBS, but converting existing scripts is not difficult; a minimal example follows. Read more about that here.
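
As a starting point, here is a minimal SLURM batch script, with comments noting rough PBS equivalents for the common directives. The module name and resource numbers are placeholders, so adjust them to your job:

    #!/bin/bash
    #SBATCH --job-name=myjob          # PBS equivalent: #PBS -N myjob
    #SBATCH --ntasks=16               # PBS equivalent: #PBS -l nodes=1:ppn=16
    #SBATCH --mem=64G                 # PBS equivalent: #PBS -l mem=64gb
    #SBATCH --time=24:00:00           # PBS equivalent: #PBS -l walltime=24:00:00
    #SBATCH --output=myjob.%j.out     # PBS equivalent: #PBS -o myjob.out

    # Load your software environment ("module avail" lists what is installed;
    # the module name here is only an example)
    module load relion

    # Work from the fast local scratch volume
    cd /executor/$USER/myproject
    srun ./my_program

Log in with "ssh <your Structbio ID>@executor.structbio.pitt.edu", then submit the script with "sbatch myjob.sh" and check on it with "squeue -u $USER".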

So far, this system has been tested with cryo-EM software including RELION 3, MotionCor, EMAN2, CTFFIND, and Auto3dem. If you would like additional software or modules installed, send me an email request.
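
If you are new to environment modules, the basic workflow looks like this (the module name is illustrative; "module avail" shows what is actually installed):

    module avail          # list every installed software environment
    module load relion    # load one package into your shell environment
    module list           # confirm which modules are currently loaded

For the GPU-accelerated packages, your batch script will also need to request a card, e.g. with "#SBATCH --gres=gpu:1" (this assumes the GPUs are configured as a SLURM generic resource on Executor; "scontrol show node" will confirm).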

Happy Computing,

Doug

Ultron storage expanded

I recently completed upgrades to the Ultron storage space, doubling it from 11 TB to 22 TB. This is all-flash storage in RAID 10, so it offers very high throughput for HPC jobs.

Note that this space is still for processing only. Once data is processed, it should be moved to one of the file servers, since that storage is much less expensive. If you need any assistance with this, email the systems administrator.
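
A typical way to do the move is rsync; in this sketch the file-server hostname and both paths are placeholders for your group's actual locations:

    # Copy a finished project from the flash space to long-term storage
    # ("fileserver" and both paths are illustrative; substitute your own)
    rsync -avh --progress /ultron/$USER/myproject/ fileserver:/data/$USER/myproject/

    # Only after verifying the copy arrived intact, free up the fast space
    rm -rf /ultron/$USER/myproject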