
The long arm of Moore's Law

In 1965, an engineer at Fairchild Semiconductor named Gordon Moore noted that the number of transistors on a chip doubled every 18 to 24 months. A corollary to "Moore's Law", as that observation came to be known, is that the speed of microprocessors, at a constant cost, also doubles every 18 to 24 months.

Moore's Law has held up for more than 30 years. It worked in 1971, when Moore's startup, Intel, put its first processor chip - the 4-bit, 108kHz 4004 - into a Japanese calculator. And it still works today for Intel's 32-bit, 450MHz Pentium II processor, which has 7.5 million transistors and is 233,000 times faster than the 2300-transistor 4004.

Intel says it will have 100-million-transistor chips on the market in 2001 and a 1-billion-transistor powerhouse performing at 100,000 mips in 2011.
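
A minimal back-of-the-envelope sketch of the exponential behind those figures, using only the transistor counts quoted above (the doubling-period arithmetic is illustrative, not Intel's roadmap data):

```python
import math

# Illustrative arithmetic only: given two transistor counts quoted in the
# article, how many doublings separate them and what doubling period
# (in years) that implies.

def implied_doubling_period(count_a, year_a, count_b, year_b):
    doublings = math.log2(count_b / count_a)
    return (year_b - year_a) / doublings

# 2,300 transistors on the 4004 (1971) versus 7.5 million on the
# Pentium II (1998, when this article was written).
period = implied_doubling_period(2_300, 1971, 7_500_000, 1998)
print(f"about {period:.1f} years per doubling")   # roughly 2.3 years
```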

For your customers, it's been a fast, fun and mostly free ride. But can it last?

Exponential gains

Although observers have been saying for decades that exponential gains in chip performance would slow in a few years, experts today generally agree that Moore's Law will continue to govern the industry for another 10 years, at least. Nevertheless, it does face two other formidable sets of laws: those of physics and economics.

A mind-numbing variety of things get exponentially harder as the density of circuits on a silicon wafer increases. The Semiconductor Industry Association's (SIA) 1997 Technology Roadmap identified a number of "grand challenges" as the width of individual circuits on a semiconductor chip shrinks from today's 250 nanometres (or billionths of a metre) to 100 nanometres in 2006, four product cycles later. One hundred nanometres is seen as a particularly challenging hurdle because conventional manufacturing techniques begin to fail as chip features approach that size.

Bad publicity

And it isn't just making the chips that's getting more difficult - as Intel discovered in 1994 when an obscure flaw in its then-new Pentium processor triggered a firestorm of bad publicity that cost the company $US475 million. Modern chips are so complex that it's impossible, as a practical matter, to test them exhaustively. Increasingly, chip manufacturers rely on incomplete testing combined with statistical analysis. The same methods are used to test very complex software, such as operating systems - but for whatever reason, users who are willing to put up with software bugs are intolerant of flaws in hardware.

At the present rate of improvement in test equipment, the factory yield of good chips will plummet from 90 per cent today to an unacceptable 52 per cent in 2012. At that point, it will cost more to test chips than to make them, the SIA says.

Chip manufacturers are hustling to improve testing equipment - and are extremely reluctant to discuss the matter, which they see as vital to their future competitiveness.

More costly than nukes

Although the cost of a chip on a per-transistor or per-unit-of-performance basis continues to fall smartly, it masks a grim reality for chip manufacturers. A fabrication plant costs about $US2 billion today, and the price is expected to zoom to $US10 billion - more than a nuclear power plant - as circuit widths shrink below 100 nanometres. Significantly, "scaling" isn't one of the SIA's grand challenges. "Affordable scaling" is.

Indeed, the industry's progress may eventually be slowed by a lack of capital, said James Clemens, head of very large-scale integration research at Bell Laboratories, a pioneering IT research organisation. "Social and financial issues, not technical issues, may ultimately limit the widespread application of advanced [sub-100-nanometre] integrated circuit technology," he said.

As an analogy, Clemens pointed to the airline industry, which knows how to routinely fly passengers faster than sound but, due to the cost and technical complexity, doesn't do it.

"A lot of people are worried about cost," said John Shen, a professor of electrical and computer engineering at Carnegie Mellon University in the US. "You see more and more companies bailing out."

Optical lithography

Transistors are etched onto silicon by optical lithography, a process in which ultraviolet light is beamed through a mask to print a pattern of interconnecting lines on a chemically sensitive surface. The conventional approaches that work at 250 nanometres can probably be refined to etch features as small as 130 nanometres - about 400 atoms wide, or roughly a thousandth the thickness of a human hair. But at 100 nanometres and below, where the wavelength of UV light exceeds the size of the smallest features, entirely new methods will be needed.

An Intel-led consortium is working on "extreme ultraviolet" lithography, which uses xenon gas to produce wavelengths down to 10 nanometres. An approach favoured by IBM uses X-rays with a wavelength of 5 nanometres. Meanwhile, Bell Labs is developing lithography that uses a beam of electrons. These and other alternatives are complex, costly and still unproven.
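
The article doesn't spell out the optics, but the link between wavelength and printable feature size is commonly estimated with the textbook Rayleigh criterion; the sketch below uses assumed, purely illustrative values for the process factor and lens aperture.

```python
# Background sketch, not from the article: lithography resolution is commonly
# estimated with the Rayleigh criterion,
#     smallest feature ~= k1 * wavelength / numerical_aperture
# where k1 is a process factor and the numerical aperture describes the lens.
# Every k1 and aperture value below is an illustrative assumption.

def min_feature_nm(wavelength_nm, k1, numerical_aperture):
    return k1 * wavelength_nm / numerical_aperture

# 248nm deep-UV light with modest optics lands near today's 250-nanometre features:
print(round(min_feature_nm(248, k1=0.6, numerical_aperture=0.6)))    # ~248
# Pushing the optics hard takes deep UV down toward 130 nanometres ...
print(round(min_feature_nm(248, k1=0.35, numerical_aperture=0.68)))  # ~128
# ... but 100 nanometres and below comes much more easily with the far shorter
# wavelengths of the extreme-ultraviolet and X-ray approaches described above:
print(round(min_feature_nm(10, k1=0.6, numerical_aperture=0.6)))     # ~10
```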

Continued progress

Continued progress in processor speeds will require better ways of designing and making chips, but the biggest obstacles to higher performance may currently lie just off the chip: in the motherboard and in the logic that connects the chip to cache memory, graphics ports and other things.

"We do not have the design or manufacturing capabilities in those off-chip structures to keep up with the rapid growth in processor clock speeds," said Bruce Shriver, a computer science professor at the University of Tromso in Norway. "Unless the design and implementation capabilities in those areas catch up, then they will be a critical limiting point."

Temporary problem

But Albert Yu, general manager of Intel's Microprocessor Products Group, said Shriver is worried about a "very temporary problem". Increasingly, off-chip units such as cache will become integrated onto the processor chip, allowing them to work at the same high frequencies as the processor and eliminating the bus between them, he said.

In just the past few months, a number of promising announcements have come out of US research labs:

Last month, IBM began shipping 400MHz PowerPC chips that use copper wiring in place of conventional aluminium; aluminium is easier to work with but doesn't conduct as well, and as circuits shrink, the performance and cost advantages of copper grow.

IBM announced last month that it could boost transistor switching speeds by 25 to 35 per cent by putting an insulating layer of silicon dioxide between each transistor and its silicon substrate - a technique called "silicon-on-insulator". Refinements of the technology, which reduces distortion and current drain, could push feature widths down to 50 nanometres, IBM said.

In February, a graduate student research team at the University of Texas, working with the industry consortium Sematech, printed 80-nanometre features (one-third the size of today's) on a semiconductor wafer. Remarkably, the tiny features were produced with conventional deep ultraviolet light. The advance was due to a special etched-quartz mask developed by DuPont Photomasks.

None of these is the breakthrough that will buy another decade for Moore's Law. But they illustrate the kinds of advances that chip away at the brick wall toward which Moore's Law is habitually said to be headed.

Said Carnegie Mellon's Shen: "We've always said there's this wall out there, but when you get closer to it, it sort of fades away or gets pushed back."

Many hands make light work

Ultimately, users don't care about transistor counts, clock speeds or even mips. They care how much real work their computers get done. One way to make the processor do more work is to move some of the work from hardware to software.

Today's microprocessors achieve "superscalar" performance by executing several instructions simultaneously. Intel's Pentium II - which can execute up to five instructions at a time - predicts the flow of a program through several branches by looking ahead in the program. It analyses program flow and schedules execution in the most efficient sequence. It also executes instructions "speculatively" - before it is known whether they will be needed - and holds the results in reserve until the predicted branches are confirmed.

But there's a law of diminishing returns for this technique because the chip must devote more and more of its circuitry to management of the complex processes.
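
Neither the article nor Intel describes the Pentium II's predictor in detail; the toy two-bit saturating counter below is only a textbook illustration of the branch-prediction idea - guess each branch from its recent history, run ahead speculatively, and correct the guess when the branch is finally resolved.

```python
# Illustrative sketch of dynamic branch prediction (a textbook two-bit
# saturating counter, not a description of the Pentium II's actual hardware).

class TwoBitPredictor:
    def __init__(self):
        # States 0-1 predict "not taken", states 2-3 predict "taken".
        self.state = 2

    def predict(self):
        return self.state >= 2  # True means "branch taken"

    def update(self, taken):
        # Nudge the counter toward the actual outcome, saturating at 0 and 3.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

predictor = TwoBitPredictor()
outcomes = [True] * 8 + [False] + [True] * 8   # a loop branch that exits once
correct = 0
for taken in outcomes:
    correct += predictor.predict() == taken
    predictor.update(taken)
print(f"{correct}/{len(outcomes)} branches predicted correctly")   # 16/17
```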

Now an old concept - the very long instruction word (VLIW) processor - is making a comeback, notably in the new 64-bit Merced chip, part of the Explicitly Parallel Instruction Computing (EPIC) family of processors being developed by Intel and Hewlett-Packard. VLIW counts on the compiler, and to some extent the programmer, to specify where parallel execution of code is possible, relieving the processor of that burden.
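
As a toy illustration of that division of labour - with every operation and name invented for the example, not drawn from Merced or EPIC - a VLIW-style "compiler" pass might bundle independent operations into wide instruction words like this:

```python
# Toy sketch of the VLIW idea: the compiler groups operations that don't
# depend on each other into one wide instruction word, so the hardware can
# issue the whole bundle at once without analysing dependences itself.

# Each op is (destination, source operands); all names are invented.
ops = [
    ("a", ()),          # a = load ...
    ("b", ()),          # b = load ...
    ("c", ("a", "b")),  # c = a + b
    ("d", ("a",)),      # d = a * 2
    ("e", ("c", "d")),  # e = c - d
]

bundles, ready, pending = [], set(), list(ops)
while pending:
    # Everything whose inputs have already been computed fits in the same word.
    bundle = [op for op in pending if set(op[1]) <= ready]
    bundles.append([dest for dest, _ in bundle])
    ready |= {dest for dest, _ in bundle}
    pending = [op for op in pending if op not in bundle]

print(bundles)   # [['a', 'b'], ['c', 'd'], ['e']] -- three wide words
```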

VLIW has some pitfalls, said Carnegie Mellon's Shen. "Merced is hoping that, by moving the work to the compiler, you can make your hardware very clean and fast," he said. But complexity in software has traditionally been harder to manage than complexity in hardware, he said, and it takes longer to develop new compilers than new microprocessors.

Intel senior vice president Albert Yu won't reveal how EPIC works, but he said labelling it a VLIW architecture is a "misinterpretation". But, he said, "We rely on the compiler to do a lot of stuff."

Bruce Shriver said improvements in hardware-based branch prediction algorithms will allow superscalar processors to execute a dozen or more instructions simultaneously, twice what is possible today. And he said compilers will be created that do a better job of optimising code for more efficient execution.

