Loose Cables is an irreverent look behind the scenes at testing computer products, in particular at IDG's InfoWorld lab in the US. Our insights are gleaned during the long hours spent testing, and the even longer hours spent sorting through outrageous vendor claims and press releases. Some of the insights are technical, some are political, and some are just fun.
The rush we get after finishing a Test Centre comparison is the closest thing we have to school summer holidays in the working world. After wrapping up testing of intranet load balancers recently, we threw back a few silver cans in celebration and thought about the pack of rats among us who don't get to experience the joy of putting these pages to bed. (Ouch -- it's not a good idea to chew bubble gum with tongue in cheek.)
The other rats
Although technical director Laura Wonnacott writes the Test Centre Rx column, the infrastructure group she heads works mostly behind the scenes.
Besides Wonnacott, the group includes test manager Brooks Talley, senior software engineer Yun Wang, enterprise platforms manager Stuart McClure, test platforms manager Rod Chapin and inventory controller Ronald Paulino.
Their days aren't driven by the whip of a weekly deadline, but the work they do now affects testing for the rest of us next month, next quarter and next year.
Simply put (but not simply done), the infrastructure group's job is to expand and evolve the lab so it is increasingly capable of enterprise-level testing.
The group's other ongoing projects include expanding server platforms, stringing new network topologies, adding more WAN connectivity and enhancing existing workloads so they can realistically stress the products we test.
For the load-balancing comparison mentioned earlier, we built and put to use an IP-based workload that simulated 100 simultaneous users on a 1000-user intranet.
This workload is the first piece of the Test Centre's year-long plan to develop numerous enterprise-level benchmarks.
A benchmark needs solid definitions, strong execution tools and thorough reporting metrics -- but all of these must be in the service of simulating in-the-field conditions. Our goal is to design workloads that accurately reflect the tasks performed in real-world networks.
We want our benchmarks to be as pragmatic as they are hermetic.
More to come
Of the workloads we're planning, the first we expect to fully complete is a security suite that comprises the 16 most popular groups of attacks on operating systems, networks and firewalls. After that comes online transaction processing and decision support systems workloads. Also this year, the infrastructure group will be building workload shells for Windows 95, Windows 98 and Windows NT, updating our 10-application file-and-print workload suite, and finishing the rebuild of our notebook battery tester.
Last but not least is the IP-based workload we initiated for the load-balancing comparison. In a few weeks, we'll give you a peek behind the benchmark, a Test Centre work-in-progress that will grow during the coming months to include most anything served over IP. And we'll be reporting on our progress with the other workloads throughout the coming months. It's going to be a busy year.
Future benchmark efforts
Above we have spotlighted the InfoWorld Test Centre's infrastructure group and described its plans for the coming year. Much of the effort will involve rewriting existing benchmarks and workloads (including database, IP, and file/print services) to simulate the network demands of real-world, growing enterprises.
We completed a portion of the IP benchmark suite earlier this year for the intranet load-balancing Test Centre comparison. The full suite will eventually include a range of static and dynamic content to represent both traditional Web servers and newer Web-based application servers, but for load-balancing testing we built a benchmark that generated HTML files and CGI scripts only. It was a good place to start -- sort of a lowest common denominator supported by all Web servers.
The benchmark generates HTML files configurable by number and size; for the load-balancing comparison, we created 100 pages ranging from 3KB to 12KB. In terms of CGI content, the benchmark includes 33 server-side executables written in C. Each executable queries a file and returns results formatted in HTML. The benchmark allows the ratio of HTML files served to CGI requests processed to be adjusted. To simulate the intranet scenario the comparison assumed, we skewed the ratio heavily (90 to 10 per cent) toward static content.
We are also concerned with reporting client-side interaction in terms of response time and latency. This is how users perceive real-world performance.
Our client model is a C++ program using Microsoft Foundation Classes to implement socket communication. The client supports a configurable run time and a configurable wait time within which the server must respond to each request. The reporting metrics supported by the client include server response time, average response time and error codes.
We'd like to briefly mention an electronic data interchange (EDI) solutions supplier whose products didn't make it into a recent comparison. Sterling Commerce (www.stercomm.com) impressed us during a visit to the lab with multiple options for businesses seeking to extend their trading-partner networks via EDI.
Sterling's Commerce:Webforms is an Internet-based product similar to the three we reviewed. Its Commerce Now associate programme provides trading partners with access to a customised Web site, where they can familiarise themselves with the benefits of doing business electronically with your company.
Unfortunately, the Weblink product requires a Gentran server, and Sterling was unable to allocate the resources needed to set one up for our testing.