Pimp your apps
- 17 June, 2008 10:13
Tools for speeding up sluggish applications traditionally fall into two categories: application-delivery controllers designed to ease the load on Web servers, and WAN-optimization devices aimed at mitigating network latency and bandwidth constraints. Some say it's time for these two to consolidate.
"I'd like to see convergence of traditional data-center load-balancers and general WAN-optimization devices. It has always confused me that a convergence of those boxes has not occurred," says Michael Morris, network architect at a US$3 billion high-tech company.
The two product categories tackle different performance-related problems. Companies deploy load-balancers and traffic-management devices in the data center primarily to improve the performance of Web applications that users access over the Internet. WAN devices, on the other hand, are deployed symmetrically (at both ends of WAN links) and generally use such techniques as caching, compression and protocol acceleration to improve the performance of business applications that internal users access over dedicated WAN links.
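To make those WAN-side techniques concrete, here is a toy sketch of how a symmetric pair of appliances can combine data caching with compression so a repeat transfer barely touches the link. The framing, token format and shared dictionary are invented for illustration and bear no relation to any vendor's actual implementation:

```python
import zlib

# Toy illustration of two techniques WAN appliances combine:
# dictionary caching (send a short reference instead of data both
# ends have seen) and compression (shrink what must still cross).
cache = {}  # stands in for the dictionaries at both ends of the link

def wan_send(payload: bytes) -> bytes:
    """Branch-side appliance: replace previously seen data with a token."""
    key = hash(payload)
    if key in cache:
        return b"REF:" + str(key).encode()   # cache hit: tiny reference
    cache[key] = payload
    return b"RAW:" + zlib.compress(payload)  # cache miss: compressed data

def wan_receive(frame: bytes) -> bytes:
    """Data-center-side appliance: expand frames back to full data."""
    tag, body = frame[:4], frame[4:]
    if tag == b"REF:":
        return cache[int(body)]
    payload = zlib.decompress(body)
    cache[hash(payload)] = payload
    return payload

doc = b"quarterly report " * 1000
first = wan_send(doc)    # compressed copy crosses the WAN
second = wan_send(doc)   # the repeat transfer becomes a short reference
assert wan_receive(first) == doc and wan_receive(second) == doc
assert len(second) < len(first) < len(doc)
```

Real appliances work on streams and disk-backed dictionaries rather than whole payloads, but the payoff is the same: the second time data crosses the link, almost none of it does.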
Over time, however, the lines have blurred, and users are accessing business-critical applications - Microsoft SharePoint and SAP software, for example - across public and private networks. In addition, data-center gear and WAN appliances have grown to include some common features, such as compression and SSL optimization.
So, should the two categories be merged into a single product? Or if not merged, should they at least be better integrated so IT staff could take advantage of their respective acceleration talents to optimize applications from the data center to the desktop?
Morris makes a case for merging them. "It makes perfect sense that the same device that is essentially handing out the connections from the servers holds the data and then does everything it can to optimize that traffic down to the clients, which are generally around the world," he says.
At a minimum, if the devices remain separate edge and data-center boxes, Morris would like to see them share information about application and network conditions. "They could at least have some sort of communication going on, saying 'this is what I'm seeing, this is what you're seeing,' and optimize traffic that way," he says.
Choose your platform
At a high level, setting application-delivery policies that span data-center and network devices has merit, as does taking into account where a request is coming from, says Rob Whiteley, principal analyst and research director at Forrester Research.
"It makes sense to be able to control a policy that says, 'OK, do as much as you can in the load-balancer, especially if the endpoint I'm serving this to is across an extranet or across some kind of public link where I don't own the endpoint. And if it's going out across my private network, then turn off whatever feature I would use on the load-balancer and turn on a more robust version at the data-center perimeter in the WAN-optimization box,'" Whiteley says.
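As a rough sketch, the policy Whiteley describes could be written down like this; the feature names and the 10.0.0.0/8 corporate range are illustrative assumptions, not any product's configuration syntax:

```python
import ipaddress

PRIVATE_WAN = ipaddress.ip_network("10.0.0.0/8")  # assumed corporate WAN

def delivery_policy(client_ip: str) -> dict:
    """Decide which box does the heavy lifting for a given endpoint."""
    addr = ipaddress.ip_address(client_ip)
    if addr in PRIVATE_WAN:
        # Endpoint is on our own network: hand off to the symmetric
        # WAN-optimization appliance, which can use richer techniques.
        return {"load_balancer": ["ssl_offload"],
                "wan_optimizer": ["dedup", "protocol_acceleration", "qos"]}
    # Public or extranet endpoint we don't control: the asymmetric
    # load-balancer does everything it can on its own.
    return {"load_balancer": ["ssl_offload", "compression", "caching"],
            "wan_optimizer": []}

assert "compression" in delivery_policy("203.0.113.7")["load_balancer"]
assert "dedup" in delivery_policy("10.2.3.4")["wan_optimizer"]
```

The point of the sketch is the branch, not the feature list: one policy, evaluated per connection, decides which tier of the infrastructure optimizes the traffic.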
Nonetheless, the question of where WAN-optimization features physically belong isn't easy to answer.
Over the next few years, Whiteley expects to see WAN-optimization technology shift from being deployed as a dedicated hardware device to being integrated as a feature on a more universal platform. "WAN optimization should not be viewed as a solution unto itself," he says. "In the long term, it's going to be built into part of the network infrastructure."
Three architectural scenarios are possible, Whiteley says. First, WAN optimization could wind up in the router or packet-layer infrastructure, an approach that such vendors as Cisco and Juniper Networks are putting their weight behind. In other cases it could become part of the application-layer infrastructure, along with load-balancers and other application-oriented technology; he expects such vendors as F5 Networks and Citrix Systems to advance this option. Third, enterprises could buy a services platform wherein WAN optimization becomes one of many services (print, file, DHCP, DNS) running on an appliance. Microsoft and Riverbed Technology are going in this direction, he says.
For enterprises, committing to an architectural model is no small decision. "Large companies must think long and hard, from an architectural perspective, about how they want to do this," says Jim Metzler, a principal at Ashton, Metzler & Associates.
Making a case for disaggregation
For Joe Skorupa, a research vice president at Gartner, the key issue is less about where optimization features will reside and more about how they can be deployed in a flexible, manageable way to accommodate different enterprise priorities.
Skorupa once thought application-delivery controllers and WAN-optimization devices would merge, "but in fact it hasn't happened to any significant degree," he says. What he is seeing instead is a trend toward disaggregation, separating WAN devices into component parts that can be deployed - in the data center, branch offices and the network - and reused as necessary. "You can place a particular function where it happens to make the most sense. And if it makes sense to have the same function, such as QoS, in two different places, then the nice thing is that you get consistent behavior in both locations," he says.
F5 is the furthest along this path, Skorupa says, citing as an example the vendor's WebAccelerator module, which is built to accelerate dynamic Web pages. "It can run on the Big-IP application-delivery platform, it can run as a stand-alone device, and with an additional piece of software, you'll actually be able to put it on one of F5's WANJets in a branch office. It brings different value depending on where it's placed," he says.
F5's TMOS common operating system unites the vendor's platform elements. Similarly, Blue Coat Systems has engineered a common operating system and platform for its acceleration devices and security-gateway products. A single box runs all Blue Coat functions, so enterprises can turn on or off the features they need, including Web filtering, logging, antivirus software and peer-to-peer blocking.
What's in demand?
Evolutionary predictions aside, there's ample demand for WAN-optimization gear as it exists today. Some of today's most ambitious IT projects - including server and storage virtualization, data-center consolidation and Web-services deployments - have one big thing in common: They take a toll on application performance.
"We want all our stuff in the data center because we want it where we can keep an eye on it and where it has our best power, our best cooling," says Rich De Brino, CIO at Advances in Technology (AiT), an IT services company that consolidated its business-critical applications under one roof. "The problem is, unless we have fast - and I mean really fast - links to all of our locations, users hate us," he says.
AiT employees are heavy users of unified-communications tools, desktop video applications and other collaboration technologies, De Brino says. For adequate WAN performance, particularly for video applications, AiT invested in network-optimization gear from Talari Networks (see "Four cool network-optimization start-ups").
"We want our apps to perform well enough that nobody says they're not using something because it's too slow. I don't ever want to hear that," De Brino says.
Similarly, Concord Hospital invested in Juniper's WAN-optimization gear after consolidating a suite of clinical and administrative applications in the data center on its main campus. The applications had been running in its many healthcare centers, clinics and physician offices. The consolidation project resulted in delays for users trying to access those applications across the WAN. "People would complain things were slow, but network utilization was not that high," recalls Mark Starry, manager of IT infrastructure and security at the hospital. "Most of the delay was due to latency," he says.
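A back-of-the-envelope model explains the symptom Starry describes, where links look underutilized while users wait: chatty application protocols pay the round-trip time once per request, so latency, not bandwidth, dominates. The protocol and link numbers below are illustrative assumptions, not the hospital's figures:

```python
def transfer_seconds(size_bytes, round_trips, rtt_s, bandwidth_bps):
    """Total time = per-request latency cost + raw transmission time."""
    return round_trips * rtt_s + (size_bytes * 8) / bandwidth_bps

size = 1_000_000   # a 1MB file
rtt = 0.05         # 50ms WAN round trip

# A chatty file protocol making 400 round trips over a 10Mbps link:
t_chatty = transfer_seconds(size, 400, rtt, 10_000_000)
# Same protocol on a link with 10x the bandwidth:
t_fatter_pipe = transfer_seconds(size, 400, rtt, 100_000_000)
# Same link, but protocol acceleration cuts the round trips to 20:
t_accelerated = transfer_seconds(size, 20, rtt, 10_000_000)

# Buying 10x more bandwidth barely helps; cutting round trips does.
assert t_fatter_pipe > 0.9 * t_chatty
assert t_accelerated < 0.1 * t_chatty
```

This is why protocol acceleration, which batches or eliminates round trips, is the feature that matters on a latency-bound link, and why utilization graphs alone can mislead.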
The hospital deployed Juniper's WXC 590 appliance at the data center and installed WX 500s and WX 250s at 10 remote sites. "The difference has been unbelievable," Starry says.
Stories like these, detailing how an enterprise deployed network-optimization technology to solve a particular problem, are in ready supply. Deployments like these made the market what it is today. "WAN optimization is very popular because it allows me to overcome a particular problem with a relatively small investment," Forrester's Whiteley says. "I could have a multimillion [dollar] consolidation initiative under way that isn't working well because the WAN is too bumpy. With a $50,000 to $100,000 investment, I can make that work really well," he says.
Most of the success stories, however, represent tactical deployments of application-acceleration and WAN-optimization technologies. Now, as application environments and network conditions become more complex, enterprises must begin thinking more strategically about optimization.
Fast-forward to a time when service-oriented architecture (SOA) deployments are more widespread and applications consist of multiple services supplied by many providers. "In some cases a small increase in network delay has a very big increase in application delay," Metzler says. "With SOA, when you have the WAN coming into play three or five or seven times [in a single transaction], you've got potential for significant delay," he says.
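Metzler's arithmetic is easy to sketch: when one user transaction fans out into several WAN-crossing service calls made in sequence, a modest per-hop delay multiplies. The 80-millisecond round trip below is an illustrative assumption:

```python
RTT = 0.080  # assumed 80ms per WAN round trip

def transaction_delay(wan_hops: int, per_hop_rtt: float = RTT) -> float:
    """Network delay added to a transaction whose service calls cross
    the WAN serially: each crossing pays the full round trip."""
    return wan_hops * per_hop_rtt

for hops in (1, 3, 5, 7):
    print(f"{hops} WAN crossings -> {transaction_delay(hops) * 1000:.0f} ms added")

# A 10ms rise in RTT costs a single call 10ms, but costs a
# seven-crossing SOA transaction 70ms.
assert transaction_delay(7, 0.090) - transaction_delay(7, 0.080) > 6 * (0.090 - 0.080)
```

The multiplication is the point: the same WAN that was fine for one round trip per transaction becomes the bottleneck when composed services cross it repeatedly.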
Greater use of virtualization technologies also will complicate things: Imagine a branch-office user on a virtualized desktop accessing a branch-office router over a virtual LAN to get to applications running on virtual servers in the data center, consultant Metzler posits. With so many systems and configuration scenarios, how does IT troubleshoot a performance problem?
It comes down to stellar management capabilities and fine-grained visibility into network applications and traffic, industry watchers say. These are works in progress for most network-optimization vendors.
To improve monitoring and visibility, some vendors have been working on integrating their technologies tightly. Cisco and NetQoS, for example, last summer announced plans to embed the performance-management vendor's monitoring and reporting technology in Cisco's Wide Area Application Services (WAAS) gear.
Another development trend that's upping the complexity quotient is the addition of third-party products to WAN-acceleration devices. Riverbed customers, for example, can run DNS, DHCP, IP address management and other network services from Infoblox on their Steelhead appliances thanks to a technology partnership the two vendors struck in April. Cisco, too, plans to let customers run a stripped-down version of Microsoft Windows for DNS, DHCP and print services on its WAAS gear, Gartner's Skorupa says.
These pairings can help enterprises reduce the number of physical appliances running in branch offices, but they raise more management issues - particularly concerning IT personnel. For example, adding a Web-application firewall to an acceleration device makes it something an IT security team wants to control. Adding dynamic Web caching to an appliance brings application developers into the mix.
Vendors then have to win over not only network buyers but also, perhaps, storage staff, server teams, security specialists or application developers. "One of the challenges for application-delivery controller vendors, in particular, is that as they develop these more advanced features, they may wind up having to sell the same box to three different people in the company," Skorupa says.
In addition, roles-based access becomes critical. "When you aggregate functions, you need to make sure that you still can disaggregate the management functions so that you can have the appropriate separation of management," Skorupa says.
That's not unprecedented; Cisco's Application Control Engine devices can be deployed by a network team and the applications fine-tuned by specialists, Network World blogger Morris points out. "The underlying blade itself and the basic construct of the load-balancer are controlled by the network team, but then each application's load-balancing can be virtualized all the way into configuration and given to an application team."
Morris sees WAN-acceleration boxes also heading in that direction, whereby application and infrastructure teams share configuration responsibilities, with application specialists making the more detailed, protocol-specific optimization decisions. (See story, "Dear IT: Forget the technology")
For IT departments, the trend provides one more pressing reason to open up the lines of communication among application, data-center and network teams. The sooner the better; plenty is at stake.
Data-center consolidation projects won't be successful if application performance over the WAN is insufferable. No one will applaud network teams if an SOA deployment intended to conserve development resources falls flat because the Web services run too slowly. It's time to start thinking strategically.