The National Computational Infrastructure (NCI) has signed a $2 million contract with Dell to supply a 3200-core high-performance compute Cloud.
The agreement also involves establishing a node of the National eResearch Collaboration Tools and Resources (NeCTAR) Research Cloud.
NCI director, Professor Lindsay Botten, said the node builds on NCI’s existing portfolio of Cloud services and digital laboratories, creating an integrated platform to support its research goals.
“The establishment of a Cloud alongside the NCI petascale supercomputer and the National High-Performance Data Node of the Research Data Storage Infrastructure (RDSI) initiative will enhance the scale of data-intensive science, leveraging the impact and value of each infrastructure component,” Botten said.
“As the nature of research becomes increasingly collaborative, the cloud will support users with self-service abilities to publish research data, share knowledge and rapidly access software applications.”
The node’s capability will be enhanced by NCI’s investment in high-performance hardware: Infiniband interconnect, large memory, and accelerators. It will also provide broader access to Cloud-appropriate applications from NCI’s extensive software library via an implementation of the NCI operating environment in a virtual machine.
The node will also support comprehensive digital laboratories that will advance research in climate change, earth system science, the environment and the geosciences, while simultaneously providing computational services.
The Cloud will complement the compute power of Australia's highest-performance supercomputer, Raijin, a 1.2-petaflop Fujitsu Primergy cluster with 57,472 cores, 160 terabytes of memory, 10 petabytes of storage, and a Mellanox FDR Infiniband interconnect delivering 9TBps of bandwidth.
NCI Cloud services manager, Dr Joseph Antony, said the new science cloud will, for the first time, provide Australian researchers with on-demand access to high-performance compute and storage resources.
“The distinguishing and innovative characteristics of the NCI node are the use of floating-point optimised Intel CPUs, high performance Intel SSDs for demanding high-IOPS science workloads and a 56Gbps Mellanox Ethernet interconnect – all of which are not the mainstay of commercial or academic Cloud offerings,” Antony said. “The multiplier effect from hosting the node at the NCI comes from holistic access to research artefacts generated on Raijin, deep integration with both archival storage and online fast disk and demonstrated fast 10GigE wide-area network access to international and local research network backbones using SXTransPORT and AARNet.”