To explore innovative computer architectures for HPC, the Center for Computing and Communication installed a GPU cluster in July 2011. Because of its experimental character, the cluster does not yet run in full production mode; nevertheless, we aim to keep it as stable and reliable as possible.
Access to the GPU cluster is open to all cluster users but requires additional registration. If you are interested in using GPUs, send a request to firstname.lastname@example.org. We will grant access to the GPU cluster (or the Windows GPU machines) and to the GPGPU-Wiki, which contains detailed documentation about the systems and how to program them.
The GPU cluster comprises 28 nodes with two GPUs each, plus one head node with a single GPU, for a total of 57 NVIDIA Quadro 6000 GPUs based on NVIDIA’s Fermi architecture. Each node is a two-socket Intel Xeon “Westmere” EP (X5650) server with a total of twelve cores running at 2.66 GHz and 24 GB of DDR3 memory. All nodes are connected by QDR InfiniBand. The head node and 24 of the double-GPU nodes are used on weekdays during the daytime for interactive visualizations by the Virtual Reality Group of the Center for Computing and Communication; at night and on weekends, they are available for GPU compute batch jobs. The four remaining nodes serve GPU batch computing around the clock and additionally offer interactive access to GPU hardware for preparing GPU compute batch jobs and for testing and debugging GPU applications.
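As a quick sanity check during an interactive session, a small device query can verify the GPUs visible on a node; the following minimal CUDA sketch (file and program names are illustrative, not site defaults) should list the two Fermi-based Quadro 6000 cards on a double-GPU node.

// devicequery.cu -- illustrative sketch; build e.g. with: nvcc devicequery.cu -o devicequery
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // On a Fermi-based Quadro 6000, the compute capability should report as 2.0.
        printf("GPU %d: %s, compute capability %d.%d, %.1f GB global memory\n",
               i, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}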
The software environment on the GPU cluster is kept as similar as possible to that of the RWTH Compute Cluster (Linux part). In addition, GPU-related software (such as NVIDIA’s CUDA Toolkit, PGI’s Accelerator Model, and a CUDA debugger) is provided. In the future, the software stacks (including the Linux version) may drift apart due to the experimental status of the GPGPU cluster.
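With the CUDA Toolkit available, a first GPU program can be compiled with its nvcc compiler. The following self-contained sketch (file name, vector size, and launch configuration are illustrative choices, not site defaults) adds two vectors on the GPU and copies the result back to the host.

// vecadd.cu -- illustrative sketch; build e.g. with: nvcc vecadd.cu -o vecadd
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread computes one element of the result vector.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // 1M elements (illustrative size)
    const size_t bytes = n * sizeof(float);

    // Allocate and initialize host data.
    float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes), *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device buffers and copy the inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and check one element.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}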
Furthermore, a small number of high-end GPUs is also available under Windows.