High Performance Computing
Making Data-Intensive Research Possible
HPC powers research, turning massive datasets and complex problems into actionable insights.

The High-Performance Computing Initiative
The Cal Poly Pomona HPC Initiative supports research and instruction across colleges by providing access to high-performance computing resources. Whether you’re working with large datasets, running complex calculations, or just getting started, we’re here to help.
Hardware Specifications
The high-performance computing (HPC) cluster consists of multiple dedicated processor nodes connected by a specialized high-speed network and managed by job-scheduling software. The Cal Poly Pomona HPC software management suite is built on the Hewlett Packard Enterprise (HPE) HPC Software Stack, which includes the open-source Slurm job scheduler, Insight CMU for cluster management, and other HPE software for node deployment and configuration. The Anaconda package management system allows users to install and manage dedicated libraries and external software packages.
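For example, before dispatching work to the cluster, a user can confirm which Anaconda environment and package versions a job will see. The sketch below is a minimal example using only the Python standard library; the package names it checks are illustrative assumptions, not a list of software installed on the cluster.

```python
# Minimal sketch: confirm the active Anaconda environment and the versions of a
# few packages before submitting a job. The packages checked here ("numpy",
# "scipy") are assumptions chosen for illustration.
import sys
from importlib import metadata

print(f"Python executable: {sys.executable}")   # path inside the active conda env
print(f"Python version:    {sys.version.split()[0]}")

for pkg in ("numpy", "scipy"):                  # hypothetical packages of interest
    try:
        print(f"{pkg}: {metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed in this environment")
```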
The Slurm scheduler manages the allocation, dispatching, and execution of jobs. Slurm is a well-documented, open-source resource manager used by many campus HPC systems, and it supports several dispatch modes, including running jobs interactively in real time or in batch mode.
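As an illustration of batch mode, the sketch below writes a small Slurm job script and submits it with sbatch from Python. The partition name, resource requests, and the user script it runs are assumptions made for the example; actual values should come from the cluster's own documentation.

```python
# Minimal sketch of batch-mode submission: write a Slurm job script and hand it
# to sbatch. Partition name, time limit, and the payload command are assumptions,
# not site-specific values.
import subprocess
from pathlib import Path

job_script = """#!/bin/bash
#SBATCH --job-name=example_job
#SBATCH --partition=compute        # assumed name for the General Compute Partition
#SBATCH --ntasks=4                 # four parallel tasks
#SBATCH --time=00:30:00            # 30-minute wall-clock limit
#SBATCH --output=example_%j.log    # %j expands to the Slurm job ID

srun python my_analysis.py         # hypothetical user script
"""

script_path = Path("example_job.sbatch")
script_path.write_text(job_script)

# On success, sbatch prints something like "Submitted batch job 12345".
result = subprocess.run(["sbatch", str(script_path)], capture_output=True, text=True)
print(result.stdout.strip() or result.stderr.strip())
```

Interactive (real-time) work follows the same pattern, with srun or salloc used in place of sbatch.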
Cluster nodes are organized into partitions so that jobs are dispatched to nodes suited to the computational task. The "General Compute Partition" handles general-purpose jobs that benefit from running many computing tasks in parallel, while the "GPU Partition" gives tasks access to dedicated GPU processors when they would benefit from additional numerical processing capability.
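When a job requests GPUs from the GPU partition (for example with a --gres=gpu:N request), Slurm typically exposes the allocated devices to the job through the CUDA_VISIBLE_DEVICES environment variable. The sketch below is a minimal check a user's script might run to confirm which GPUs it has been granted; the partition name used in the comment is an assumption.

```python
# Minimal sketch: confirm GPU allocation from inside a job dispatched to the GPU
# partition, e.g. via `srun --partition=gpu --gres=gpu:1 python check_gpus.py`
# (the partition name "gpu" is an assumption). Slurm typically restricts the job
# to its allocated devices through CUDA_VISIBLE_DEVICES.
import os

visible = os.environ.get("CUDA_VISIBLE_DEVICES")
if visible:
    print(f"GPU(s) allocated to this job: {visible}")
else:
    print("No GPUs visible; this task is running on a general compute node "
          "or was submitted without a --gres=gpu request.")
```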
The new Cal Poly Pomona HPC cluster is built on the HPE ProLiant server platform and currently includes two DL360 management nodes, 20 DL160 compute nodes, and four GPU nodes with eight NVIDIA Tesla P100 GPUs. The cluster contains 3.3 TB of RAM and is connected through a dedicated internal 40 Gbit/s InfiniBand switching fabric and 10 Gbit/s external Ethernet links. Overall system throughput is approximately 36.6 TFLOPS in double precision or 149.6 TFLOPS in half precision. This configuration is expected to grow as researchers identify collaborative research initiatives and develop future funding for the system's expansion through external grants and donations.
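Users can inspect the partition layout, node counts, and per-node resources described above directly from a login node with Slurm's sinfo command. The sketch below wraps that query in Python; the output format string uses standard sinfo fields, and the exact partition names shown will depend on the cluster's configuration.

```python
# Minimal sketch: list each partition with its node count, CPUs per node, memory
# per node (in MB), and generic resources (e.g. GPUs) using standard sinfo fields.
import subprocess

fmt = "%P %D %c %m %G"   # partition, node count, CPUs/node, memory/node, GRES
result = subprocess.run(["sinfo", "-o", fmt], capture_output=True, text=True, check=True)
print(result.stdout)
```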