Tizard Machine

eResearch SA’s new supercomputer, the Tizard machine, is the state’s most powerful high-performance computing (HPC) system. Unlike our previous supercomputers, Tizard is a powerful mix of different computing systems, all optimised to complete specific tasks faster than ever before.

The compute servers making up the Tizard HPC system provide an aggregate total of 40 TFlops of compute power, making it 6 times more powerful than our previous supercomputer.

The $700,000 machine was purchased with funds from an ARC Linkage Infrastructure, Equipment and Facilities grant. Tizard represents a big win for the South Australian research community.

Tizard will be available to all researchers at the University of Adelaide, University of South Australia and Flinders University. The machine will also be used to support some State Government research facilities and groups.

The Tizard machine is named in memory of James Tizard, the founding CEO of SABRENet (2007-2011) and Director of eResearch SA (2009-10), who passed away in 2011.

The mix of computing systems that makes up the Tizard machine includes:
    • General purpose CPU Cluster
    • Big memory nodes
    • Tesla GPU nodes
    • Consumer GPU nodes
    • Virtualisation server

Each of these is targeted at supporting different types of computational tasks, as described below:

CPU Cluster

  • 48 SGI compute nodes connected by a high-speed QDR InfiniBand network
  • Each node has 48 cores (4 AMD Opteron 6238 12-core 2.6 GHz CPUs) and 128 GB of memory (2.7 GB per core)
  • A total of 2304 cores with a peak performance of 24 TFlops

This cluster handles general purpose computing: single-processor jobs, multi-core applications that need to run on a single node, and parallel programs that can use many cores across multiple compute nodes. If you require more than 4 GB of memory per core you should use the big memory nodes. If you require only 8 cores or fewer, the Australian Research Cloud is your best option.
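
As an illustrative sketch (not a Tizard-specific recipe), the short C program below shows the distributed-memory MPI style of parallelism the cluster targets: one process per core, spread across the nodes by the batch system, each reporting where it runs. The compiler wrapper and job submission commands are site-specific and are not shown here; see the user guide linked at the bottom of this page.

    /* Minimal MPI sketch: each rank reports which compute node it is running on.
     * A real parallel application would distribute work and exchange data with
     * point-to-point messages or collectives rather than just printing. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id         */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
        MPI_Get_processor_name(host, &len);     /* hostname of the node      */

        printf("Rank %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }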

 

Big memory nodes 

  • 1 Dell R910 server with 4 Intel Xeon E7-8837 8-core 2.66 GHz processors, 1 TB memory, 3 TB of local scratch disk
  • 1 Dell R810 server with 4 Intel Xeon E7-4830 8-core 2.13 GHz processors, 512 GB memory, 1.7 TB of local scratch disk

For applications that require relatively small numbers of cores and large memory per core.
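
As a rough sizing illustration (the problem size is hypothetical), the C sketch below shows the sort of job that belongs here: a dense double-precision matrix of 200,000 x 200,000 elements needs about 320 GB of memory, far more than the 128 GB on a CPU cluster node but within the 512 GB and 1 TB of these servers.

    /* Hypothetical sizing check for a big-memory job: a dense n x n matrix of
     * doubles.  With n = 200000 the footprint is about 320 GB, so the job needs
     * one of the big memory nodes rather than a 128 GB CPU cluster node. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t n = 200000;                      /* hypothetical problem size */
        size_t bytes = n * n * sizeof(double);  /* total memory required     */

        printf("Matrix footprint: %.1f GB\n", bytes / 1e9);

        /* An allocation this large is practical only on a node with enough
         * physical memory to back it. */
        double *a = malloc(bytes);
        if (a == NULL) {
            fprintf(stderr, "Allocation failed: use a big memory node\n");
            return 1;
        }
        free(a);
        return 0;
    }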

 

Tesla GPU nodes 

  • Each node has 4 NVIDIA Tesla M2090 GPUs (6 GB GPU memory per card), 2 x Intel Xeon L5640 6-core CPUs @ 2.26 GHz, and 96 GB memory
  • Each node provides 2.7 TFlops (single precision) from the GPUs (half of this for double precision)
  • 5 nodes giving 13.5 TFlops total single precision (7 TFlops double precision).

For applications that have been ported to run on GPUs and need good double-precision performance, large GPU memory, or error-correcting (ECC) GPU memory.
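
As a hedged illustration (not a supported workflow), the host-side C program below uses the CUDA runtime API to query each card on a node and report the properties mentioned above: total memory, whether ECC is enabled, and the compute capability. It assumes a node with the CUDA toolkit available.

    /* Query the GPUs visible on a node via the CUDA runtime API and report
     * memory size, ECC status and compute capability for each device. */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "No CUDA-capable devices found\n");
            return 1;
        }

        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s, %.1f GB memory, ECC %s, compute capability %d.%d\n",
                   i, prop.name, prop.totalGlobalMem / 1e9,
                   prop.ECCEnabled ? "on" : "off", prop.major, prop.minor);
        }
        return 0;
    }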

 

Consumer (GTX580) GPU nodes

  • Each node has 4 GeForce GTX580 GPUs (3 GB GPU memory per card), 2 x Intel Xeon L5630 4-core CPUs @ 2.13 GHz, and 24 GB memory
  • Each node provides 2.7 TFlops (single precision) from the GPUs (a quarter of this for double precision)
  • 12 nodes giving 32 TFlops total single precision (8 TFlops double precision).

For applications that have been ported to run on GPUs, perform mostly single-precision calculations, and do not need large GPU memory or error-correcting (ECC) GPU memory.

 

Virtualisation server

  • 1 Dell R815 server with 4 AMD Opteron 6128 8-core processors, 256 GB memory and 3.6 TB of disk.

For hosting virtual machines supporting applications that require interactive access (e.g. using a GUI) and/or do not run on the operating systems used on the eRSA HPC systems.

Tizard User Guide

https://www.ersa.edu.au/hpctwiki/bin/view/Main/TizardUserGuide