Hong Kong industry observers have said that the performance benchmarks publicised by some CPU vendors, in particular Intel's iCOMP Index 2.0, are neither completely objective nor very useful for end users considering the purchase of a new PC.
The iCOMP Index is a graphical comparison of the relative performance of Intel microprocessors.
"From the end-user point of view, I don't think benchmarks really mean a lot. Consumers actually don't pay much attention to technical issues," said Cherry Velarde, a PC industry analyst at Dataquest Asia-Pacific.
"End users in the Asia-Pacific region are more concerned with the issue of price," she added.
"Benchmarks are good for the vendors to support their marketing claims," Velarde said. "As far as the end users are concerned, what are SPEC, P-Rating and all those other things? What do they really mean?"
Observers also questioned the accuracy of benchmark testing carried out by vendors.
"Benchmarking is confusing because one benchmark can say that this CPU is better and another benchmark can say that another CPU is better. I think at the end of the day, people tend to fall back on just that one number, the 200, the 166, or whatever else," said Valdis Dunis, managing director of semiconductor distributor Serial System Hong Kong.
Dunis said that having a common metric to compare CPUs from all of the major vendors would be "really good".
For many observers, objectivity is a critical component of accurate benchmarking.
"What made me quite sceptical about the iCOMP was that as soon as they announced the MMX features, Intel said, 'Oh, we're going to redefine iCOMP and redo all the weightings and everything else,'" said Dunis. "iCOMP is just something to make particular CPUs look good."
A spokesman for Intel said that the company uses the iCOMP Index 2.0 as part of its efforts to ensure that end users have access to the right information before buying a Pentium-based PC.
"We definitely encourage customers to look at benchmarks published by other people," said Tom Garfinkel, CPU pricing manager at Intel's Microprocessor Marketing and Business Planning division.
Other sources of benchmarking information include Intel's main competitors, AMD and Cyrix. Both Cyrix and AMD use the WinStone 97 benchmark, a Windows 95 application-based benchmark developed by Ziff-Davis Labs, which determines a processor's performance according to the P-Rating system.
The P-Rating system compares the performance of a processor to its approximate Pentium equivalent. The scale assigns ratings to each processor based on the clock speed of its Pentium equivalent.
"End users are most concerned with how their PC will run their applications," said Ben Lam, country manager of Cyrix China/Hong Kong. "The WinStone 97 is a pretty good benchmark because it is relevant to the needs of the end user."
Garfinkel dismissed the WinStone 97-based P-Rating, saying it was not a comprehensive measurement of a processor's performance. The five benchmarks used to compile the iCOMP Index were more representative of a processor's true performance, Garfinkel claimed.
Lam, however, questioned the relevance of the iCOMP Index to end users. "From my experience, I have found that end users don't really trust the iCOMP benchmarks," said Lam. "End users put more trust in those third-party benchmarks."
Garfinkel acknowledged that, because 20 per cent of the iCOMP Index is generated using a benchmark application written by Intel, some observers may not necessarily perceive the iCOMP as an objective rating system. However, the iCOMP was not intended to be used as an objective benchmark, he said. "The iCOMP is there to help end users compare the relative performance of Pentium, Pentium Pro and Pentium II processors," he explained.
"CPU vendors should only design the product, not benchmark it," Lam said. "We should let the real world benchmark our products."