Pure Storage CEO on all-flash data centres and the cloud

Former Cisco exec Charlie Giancarlo talks about how data-centric storage can boost application performance, and how Pure's subscription agreements promise infrastructure upgrades without downtime

Charlie Giancarlo (Pure Storage)

Credit: Network World

One year ago Charlie Giancarlo took the helm of Pure Storage, which in fiscal year 2018 reported its first billion-dollar year.

Giancarlo was a managing director and senior advisor at Silver Lake Partners before joining Pure Storage. Prior to that, he held multiple executive positions at Cisco, where he helped steer the company into markets such as Ethernet switching, VoIP, Wi-Fi and telepresence.

Giancarlo talked with Network World's Ann Bednarz about what Pure is doing to keep the storage industry moving forward, and how the experience he gained during Cisco’s growth spurt is helping. 

He described Pure's vision for a data-centric architecture – an approach that combines the simplicity of direct-attached storage with the scalability and reliability of network storage – and how it will lead to the eventual collapse of storage tiers. 

Giancarlo also talked about the fate of magnetic disk drives (only for cold storage); why NVMe is important (enables even greater efficiency in flash); and what’s distinctive about the company's pay-per-use Evergreen storage services (no rip-and-replace upgrades).

Here is an edited transcript of that conversation.

Enterprise storage has been stuck with the perception that it’s boring. Is that changing? Is the storage industry becoming more innovative?

There’s always a bottleneck to the progress in computation, and I think the bottleneck for the last decade, with the growth of data, has been how to handle all the data.

Frankly, I think the technology has been behind. Now we’re finally starting to see some real advances in storage. That’s what makes it exciting. When it becomes a bottleneck, that also means there’s a lot of opportunity.

What stands out to you after your first year at Pure Storage? How did the company perform?

I think our performance speaks for itself. If you look over the last year, we’ve grown an average of 40 per cent year-over-year. We’ve come out with some great new products that are growing very well. And we continue to lead the market in advancing new technologies. It speaks to the quality of the company overall.

Of course, a lot of that was in place before I came on board. It’s a bit too early for me to talk about any real accomplishments.

But I do think that what I saw here was a company that had great potential, that was transitioning from being a midsize company to a large company, and that needed to change some of the ways in which it did business. I think I’ve been able to help them start to advance to the next stage.

That has to do with the way we work with our partners in the field, the way we scale our sales force and our development organisation, and the way we look at new opportunities for the business.

In May, Pure Storage outlined its vision for a data-centric architecture that delivers on the need for agility and performance in enterprise settings. Can you explain the data-centric architecture? What does it involve from a technical standpoint?

I’ll go back a little bit, in terms of the way that customers design their environment, and I’ll talk about why we now have an opportunity to modify that, and why we should modify that.

If you think about what an ideal situation would be, if you could snap your fingers and make magic happen, you’d have one super powerful processor that could address storage located right next to it, at the speed of light. That would be the ultimate, easy, very straightforward architecture.

Now going back 10 or 15 years, the fastest connection that people had was 1 Gigabit Ethernet. They had disks that were maybe 1 terabyte at most, and we had distributed processors.

In order to handle the world’s largest computation problems, we still need lots of processors – that hasn’t changed. But other things have changed.

For one, networking speeds are at 100 Gigabit Ethernet and even moving to 400 Gigabit Ethernet. Data has continued to grow explosively, to the point where you can’t just fit it on one disk or SSD. So we need to scale that.

But with the very high networking speed and with the density we’re able to get with solid-state storage now, we're able to make it look as if all the data you want is right next to the processors.

Another thing that has changed is that many years ago, the application stack was very heavy. It was difficult to construct. It was customised to the specific application environment.

You had an operating system, you had security software associated with that operating system, you had remote management software associated with that operating system, and then you had an application that was tuned to it.

And it was stuck there. And yet it needed access to lots of data. The only way to do that was to spread the data out – what was known as scale-out architecture for the data.

But today, applications are very lightweight. They’re virtualised and increasingly containerised. They can be placed anywhere. But the data itself is heavy. When you have petabytes of data – even in an array such as ours, which can fit a petabyte in about five inches on a rack – moving all that data would take a long time.

It’s far better to move the application to the data than to move the data to the application. Now, with 100 Gigabit Ethernet interfaces, we can do that.
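To put rough numbers on why "the data is heavy", here is a back-of-the-envelope sketch of how long it would take to move a petabyte over the link speeds mentioned above. This assumes an ideal, sustained line rate with no protocol overhead – a best case, not a real-world benchmark:

```python
# Rough transfer-time estimate for moving bulk data over a network link.
# Assumes ideal sustained line rate and no protocol overhead (best case).

def transfer_hours(data_bytes: float, link_gbps: float) -> float:
    """Hours to move data_bytes over a link running at link_gbps gigabits/sec."""
    bits = data_bytes * 8
    seconds = bits / (link_gbps * 1e9)
    return seconds / 3600

PETABYTE = 1e15  # decimal petabyte, in bytes

for gbps in (1, 100, 400):
    print(f"{gbps:>3} GbE: {transfer_hours(PETABYTE, gbps):,.1f} hours")
```

Even at 400 Gigabit Ethernet, moving a petabyte takes several hours of saturated bandwidth, whereas a virtualised or containerised application image is typically gigabytes at most – which is the arithmetic behind moving the application to the data rather than the reverse.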

So that’s what we mean by data-centric architecture. It’s designing the architecture for your data processing around the data, rather than designing the data around the application.

The other thing that used to happen years ago, and even happens today, is that data was constantly replicated because every application wanted its own copy of the data. Part of the reason for that was performance. They didn’t want to have to share the limited performance available with other applications.

Today with solid-state storage, we can have multiple applications access the data with the full performance they need, with quality-of-service guarantees, so one application’s access doesn’t affect the others.

That's another thing we mean by data-centric architecture: reducing the number of copies of your data, and making it easier for all the applications that need it to get access – which reduces costs and increases performance. It also improves security and compliance, because you’ve reduced the number of copies of your data across the enterprise.

A differentiator for Pure is its storage-as-a-service pricing model. Can you talk about the Evergreen Storage Service (ES2)?

Our competitors would view it as just pricing. But it’s a lot more than that. We promise our customers that if they’re on our evergreen model, which is a subscription model, we will keep their storage system constantly updated to the latest hardware and software – meaning that they never have to migrate their data off the system.

Our competitors can’t do that because they can’t do what is called a non-disruptive upgrade. They can’t replace the hardware and the software without downtime.

When our competitor goes to a new product model and obsoletes the old model, they force the customer to migrate the data. They can’t upgrade the old model.

So this gives assurance that if a customer is buying now, they won’t need to change out an array in a few years?

Exactly. We do it all in place. If they’re paying the subscription, they don’t pay any more money. We upgrade the system for them for the life of the subscription. Another benefit is that we don’t charge them again for the same storage.

Let me give you an example. Let’s say they buy a system with 50 terabytes of storage in it. A few years later, if they want to upgrade that to 250 terabytes, they only pay for 200 terabytes. They don’t need to pay for the first 50 terabytes over again.
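The billing arithmetic in that example can be sketched as follows. The per-terabyte price is a hypothetical illustrative figure, not Pure's actual pricing:

```python
# Sketch of incremental-capacity billing: the customer pays only for the
# added terabytes when expanding, never for already-purchased capacity again.

def upgrade_charge(current_tb: int, target_tb: int, price_per_tb: float) -> float:
    """Charge for expanding from current_tb to target_tb of capacity."""
    added_tb = max(0, target_tb - current_tb)  # only the delta is billed
    return added_tb * price_per_tb

# Example from the interview: 50 TB expanded to 250 TB - billed for 200 TB only.
# price_per_tb=1.0 is a placeholder so the result reads as billed terabytes.
print(upgrade_charge(50, 250, price_per_tb=1.0))
```

Contrast this with a rip-and-replace upgrade, where the customer effectively buys (and migrates) the full 250 terabytes again.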

Is that a common scenario? Are customers typically making the shift to flash storage in increments? What’s a typical adoption path?

We do see that. Among our top 25 customers, we see between 10x and 12x growth over four years from their first purchase. Across all of our customers, on average, we see on the order of 4x to 5x over the first two or three years.

Are we going to see disk drives disappear?

We’ll still see disk drives, but they’ll start to migrate to cold storage. We believe that tier one and tier two will collapse.


