NCI spends big on Fujitsu/NetApp and HPE systems in supercomputing deal


Purchases new systems to augment its supercomputer and storage capabilities

The National Computational Infrastructure (NCI) has spent millions on high-performance storage systems from Fujitsu/NetApp and Hewlett Packard Enterprise (HPE) in a bid to augment its supercomputing and storage capabilities.

The data storage revamp for Australia's high-performance supercomputer, cloud and data repository follows a $7 million boost from the Australian Government’s NCRIS Agility Fund in 2016.

This was then matched dollar-for-dollar by the NCI Collaboration, funded by the Australian National University, CSIRO, the Bureau of Meteorology, Geoscience Australia and the Australian Research Council.

The new storage systems, purchased from Fujitsu/NetApp and HPE, will replace NCI’s original 8-petabyte Lustre filesystem, named gdata1, which was purchased in 2011 and has reached its operational end of life.

The first stage of the gdata1 replacement, from Fujitsu, will use NetApp E-series storage arrays to provide a Lustre file system with a capacity in excess of 10PB.

The second stage of the gdata1 replacement will come from HPE, utilising HPE Apollo 4520 high-performance computing storage systems, and will provide a ZFS-based Lustre file system with about 12PB of usable storage.

ZFS is a combined file system and logical volume manager designed by Sun Microsystems.
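As an illustrative sketch only (not NCI's actual configuration), a ZFS pool backing a Lustre object storage target might be set up as follows; the device names, filesystem name and management server address are hypothetical:

```shell
# Create a double-parity RAID-Z2 pool from six disks (hypothetical device names)
zpool create ostpool raidz2 sda sdb sdc sdd sde sdf

# Format the pool as a Lustre object storage target (OST) using the
# ZFS back-end instead of the traditional ldiskfs/hardware-RAID approach
mkfs.lustre --ost --backfstype=zfs --fsname=gdata --index=0 \
    --mgsnode=mgs@o2ib ostpool/ost0
```

Because ZFS handles redundancy and volume management in software, a setup like this avoids dependence on hardware RAID controllers, which is the "open software defined architecture" shift HPE refers to below.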

“HPE supports the movement to open storage using industry standard servers and storage for Lustre filesystem solutions," HPE Asia-Pacific and Japan servers and HPC chief technologist, Steve Tolnai, said.

"By choosing HPE to provide this next generation data storage system, NCI has recognised this shift to an open software defined architecture, which reduces the reliance on expensive hardware based RAID arrays for Lustre solutions.”

The new gdata1 system will also use Mellanox EDR InfiniBand, aimed at providing about 70GB/sec of bandwidth to NCI’s 83,068-core Raijin supercomputer. According to NCI, the additional systems will take its total data storage capacity to more than 36 petabytes.

According to NCI, construction of its global filesystems began in 2013 to meet researcher demands for a large, fast, persistent filesystem to support growing data sets required in high-performance computing and high-performance data analysis.

In April 2015, it started work with Fujitsu and NetApp in a $2 million deal to supply and install NetApp FAS, E-series, and EF all-flash storage arrays with a raw storage capacity of 11 petabytes on premises.

NCI also signed a $2 million contract with Dell in July 2013 to supply a 3,200-core high-performance compute cloud.

NCI said the notion of building a global filesystem was, and continues to be, part of its integrated research environment strategy to enable data to be accessed both on the high-performance supercomputer and by researchers on NCI’s high-performance data intensive cloud environment.

It added that the move enables it to continue to meet the demand for Australia’s rapidly expanding data collections.

“This integrated environment has delivered efficiency gains to researchers by negating the time-consuming process of copying data from one computer system to another, and has enabled multiple research groups on different systems to access and work concurrently on the same shared data with the appropriate security permissions,” the organisation said in a statement.

