Squiz taps Techflow for overhauled data centre

Digital platform provider turns to data centre services provider for assistance

Global Switch data centre, Sydney

Credit: Techflow

Digital platform provider Squiz has tapped data centre services provider Techflow to overhaul its data centre, increasing its transit capacity tenfold and its network bandwidth 25-fold.

Looking to take advantage of advances in data centre technology, Squiz wanted to upgrade its systems, but found it needed three times the power supply its existing colocation provider was delivering.

Data centre services provider Techflow was on hand to help the company reassess its options; Squiz was introduced to the Sydney-headquartered Techflow in 2017.

The two then began work on modernising Squiz's data centre through a colocation solution in a large Sydney data centre in late 2018.

During the project, Techflow utilised a number of vendor technologies, such as those from APC, ServerTech and Juniper, as well as some in-house technology, according to Techflow director Shah Hardik.

It took the services provider just over six months to see the project through to its conclusion, finishing ahead of schedule on 1 September in a Global Switch data centre.

“There were a few teething issues along the way (as always) however nothing critical. I don’t think customers were exposed to any impact,” Hardik said.

The headline result, according to Justin Higgins, CIO of Squiz, “is undoubtedly cost reduction”, but the improved equipment also means Squiz can be more nimble in reacting to customer requests and market changes. Higgins said the overhaul gives Squiz 10 times the transit capacity and 25 times the network bandwidth.

An example of where the increased speed comes into play for Squiz, Higgins added, is the time it takes to bring a single storage device back up to speed after a failure.

“Previously device failure in the storage subsystem took six to 24 hours for online RAID rebuild to bring the system back up to full resiliency,” he said.

“By rebuilding data redundancy in parallel across multiple chassis connected via the upgraded network spine, full resilience is achieved in the worst case in less than seven minutes after device failure (with no downtime).”
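The scale of that improvement follows from simple arithmetic: rebuild time is roughly the data to reconstruct divided by the aggregate rebuild bandwidth, and a parallel rebuild across many chassis multiplies that bandwidth by the number of participating devices. A minimal back-of-envelope sketch, using entirely assumed figures (drive size, per-device throughput, and device count are illustrative, not Squiz's actual hardware):

```python
# Rough model: rebuild time ≈ data to reconstruct / aggregate rebuild bandwidth.
# All hardware figures below are assumptions for illustration only.

def rebuild_seconds(data_bytes: float, aggregate_bw_bytes_per_s: float) -> float:
    """Time to restore full redundancy, ignoring protocol and CPU overheads."""
    return data_bytes / aggregate_bw_bytes_per_s

TB = 10**12
MB = 10**6

# Traditional RAID rebuild: one spare disk absorbs the entire rebuild,
# bottlenecked by a single drive's write speed (~150 MB/s assumed).
classic = rebuild_seconds(10 * TB, 150 * MB)

# Parallel rebuild: say 60 devices each contribute ~400 MB/s over the
# network spine, so rebuild bandwidth scales with the size of the pool.
parallel = rebuild_seconds(10 * TB, 60 * 400 * MB)

print(f"single-disk rebuild: {classic / 3600:.1f} h")   # ~18.5 h
print(f"parallel rebuild:    {parallel / 60:.1f} min")  # ~6.9 min
```

With these assumed numbers the single-disk rebuild lands in the middle of the six-to-24-hour window Higgins describes, while the parallel rebuild comes in under seven minutes, matching the worst case he quotes.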
