The case for 16 bits

We've had PCs capable of 32-bit computing ever since the introduction of Intel's 80386 chip almost a decade ago. OS/2, Windows NT, and Windows 95 now provide operating systems that are ready to run 32-bit software. Although vendors suggest this has opened up vast new computing vistas, it has in fact added to the "bloatware" problem, making many small, fast applications bigger and slower.

If you're doing any PC-based development, you'll be facing this issue. If you've already embraced a 32-bit operating system, you've probably found that recompiling your 16-bit applications can make them fatter and slower. If you have a mixed environment, where work gets done in every operating system from DOS to Windows NT, you may be surprised to find that both 16- and 32-bit applications can be options on all of your platforms.

There are advantages to both 16- and 32-bit computing. Each is better some of the time, and both are viable choices almost all of the time.

Some applications demand the power of 32-bit computing. For example, 3-D virtual reality modelling (VRM) and CAD benefit from 32-bit computing. The game Doom is a widely known example of VRM, but the technology has much broader potential than just game playing.

In 32-bit computing, the basic unit of data is 32 bits wide. This means that character data (all SQL database operations are character-based) can be processed four characters at a time, and integers ranging from roughly negative two billion to positive two billion can be handled as a single unit.
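To make that concrete, here is a minimal C sketch (assuming, as on the PC compilers of the day, that long is 32 bits wide) showing the roughly plus-or-minus two billion range of a 32-bit integer and four one-byte characters packed into a single 32-bit word:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* A 32-bit signed integer spans roughly -2 billion to +2 billion. */
    long big = LONG_MAX;        /* 2,147,483,647 where long is 32 bits */

    /* Four one-byte characters packed into one 32-bit word, so the CPU
       can move or compare them in a single operation. */
    unsigned long word = ((unsigned long)'A' << 24) |
                         ((unsigned long)'B' << 16) |
                         ((unsigned long)'C' << 8)  |
                          (unsigned long)'D';

    printf("largest 32-bit value: %ld\n", big);
    printf("packed characters:   %08lX\n", word);
    return 0;
}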

The CPU can manipulate 32-bit values in a single instruction and often in a single clock cycle. Given a large enough cache, for example, a 100MHz Pentium-class computer can perform about 100 million 32-bit additions in a single second.

In addition, addresses are 32 bits wide in 32-bit computing, which expands the amount of directly addressable RAM per segment from 64K (with 16-bit addresses) to 4Gb (with 32-bit addresses).

Powersoft prefers the "flat" memory model provided in 32-bit systems because of the increased amount of addressable RAM, according to Allan Dull, the company's project manager for emerging technologies. In 16-bit DOS, applications can manage multiple segments, each one restricted to 64K, to get as much as 500K of addressable memory. By contrast, under Windows NT's 32-bit flat model, an application can address 500Mb in a single segment.
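For illustration only, and assuming a 32-bit target such as Win32 or a 32-bit DOS extender, the C fragment below declares a single object larger than 64K - routine under the flat model, but impossible within any one 16-bit segment without the DOS compilers' non-standard far and huge pointer extensions:

#include <stdio.h>

/* 256K in a single object: routine in a flat 32-bit segment, but too
   large to fit in any one 64K segment of the 16-bit memory models. */
static char buffer[256L * 1024L];

int main(void)
{
    buffer[sizeof buffer - 1] = 'x';    /* one plain, linear address */
    printf("buffer holds %lu bytes\n", (unsigned long)sizeof buffer);
    return 0;
}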

With all these advantages, why not do all application development in 32 bits?

Because nothing in computing ever comes without a price.

Bloat with no benefit

In the case of 32-bit computing, the cost is giving up the compactness of 16-bit applications. In many cases, 16 bits are wide enough for an application's dominant data values. Such applications (in fact, some portions of all applications) simply expand and slow down when they are converted to 32-bit code.

Symantec provides 16- and 32-bit versions of its development tools by running its own 16-bit and 32-bit compilers on a common code base.

"The 32-bit version is usually bigger," says Mansour Safai, Symantec's general manager of development tools.

Consider a workgroup scheduling application that adds 7 to a date when it's ready to progress from one week to the next. In 16-bit code, the instruction that adds 7 takes three bytes in the executable - one byte for the machine instruction that says "add" and two bytes for the numeric constant 7. In 32-bit code, the same operation takes five bytes - the same byte says "add" but the numeric constant fills four bytes.
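The C source is identical either way; only the generated instruction changes. A minimal sketch (the variable names are invented for illustration):

#include <stdio.h>

int main(void)
{
    unsigned int day = 100;     /* day number within the schedule */

    /* In 16-bit code the add-immediate below carries a two-byte
       constant; recompiled as 32-bit code, the same add carries a
       four-byte constant and the instruction grows accordingly. */
    day += 7;

    printf("next week starts on day %u\n", day);
    return 0;
}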

In the Intel architecture, 16- and 32-bit instructions do not differ in timing. An integer add, for example, takes the same number of clock cycles regardless of the data's width. However, program size is critically important to overall speed. The CPU handles data almost 10 times faster than the bus transfers data from RAM to CPU. The smaller the program, the larger the probability that its instructions and data are in the CPU's cache or in an intermediate cache.

Similarly, 32-bit addresses are a mixed blessing. For example, a 100K program using 32-bit addresses doubles the size of each address without any offsetting improvement. With both data size and address size doubling, many programs that do not require 32-bit computing get fatter and, consequently, slow down. If the application is larger than 1Mb, 32-bit code will probably be the better choice. One way to test this is to generate both the 16-bit and 32-bit executables, then compare sizes and times of typical operations.
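A rough harness for that comparison, in standard C so it can be built with both compilers (the loop body is only a stand-in for the application's real work):

#include <stdio.h>
#include <time.h>

#define ITERATIONS 1000000L

int main(void)
{
    volatile long sum = 0;      /* volatile keeps the loop from being
                                   optimised away entirely            */
    long i;
    clock_t start = clock();

    for (i = 0; i < ITERATIONS; i++)
        sum += i;               /* stand-in for a typical operation */

    printf("%ld iterations took %.2f seconds\n", ITERATIONS,
           (double)(clock() - start) / CLOCKS_PER_SEC);
    return 0;
}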

Theoretical flexibility

Intel's microprocessor architecture lets programmers use 16-bit data and addresses in 32-bit code, and it permits 32-bit data and addresses in 16-bit code. In the earlier example of adding the numeric constant 7, the operation could have been done in four bytes, not the five suggested above, if the programmer had specified that the value was 16 bits. The compiler or assembler generates an additional prefix byte that tells the CPU to override the default size (16 or 32 bits) for a single instruction.

This lets a program that works primarily with 16-bit values take advantage of 32-bit instructions when they are helpful. Alternatively, it would let a 32-bit program use 16-bit addresses if the large address space were not required. C and C++ programmers can specify the size of their data values. Some compilers take advantage of these specifications to optimise the output code via a mix of instruction sizes.
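A hedged sketch of the idea in C: declaring each value at its natural width gives such a compiler the option of emitting 16-bit operations inside a 32-bit program, or 32-bit operations inside a 16-bit one. Whether it actually does so depends on the compiler.

#include <stdio.h>

int main(void)
{
    short day        = 100;     /* 16 bits is ample for a day count    */
    short week       = 7;
    long  byte_total = 0L;      /* 32 bits for values up to ~2 billion */

    day        += week;         /* a candidate for a 16-bit add        */
    byte_total += 100000L;      /* needs a full 32-bit add             */

    printf("day %d, running total %ld\n", day, byte_total);
    return 0;
}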

In an .EXE or other executable file, code is stored in related groups known as segments. The actual setting of 16- or 32-bit size is specified in the segment's header. The CPU can switch between 16- and 32-bit code when it switches from one segment to another.

Unfortunately, current operating systems tend to ignore this capability, so it is more theoretical than practical. Windows 3.x and Windows 95, for example, require a "thunking" layer to resolve the difference in 16- and 32-bit addresses. The thunking process is slow enough to lose whatever advantage might be realised from optimising segment types to fit the data needs of the program. Thunking is primarily useful for using older 16-bit library code with newer 32-bit programs.

A heterogeneous universe

Most organisations, large ones in particular, have a mix of PC computing environments. Although 16-bit Windows 3.x is still the most popular desktop operating system, it is being replaced by 32-bit Windows 95 and Windows NT. OS/2 and Windows NT Server, both 32-bit systems, are commonly found on application servers and are used for some high-end, end-user applications. Good old 16-bit DOS is small, but it is still an almost universal presence on the desktop.

The exact speed with which 32-bit computing is replacing 16-bit computing remains open to debate.

Many companies have opted to bypass Windows 95 altogether and are waiting for NT 4.0 to arrive. Others have slowed their migration to Windows 95. And still others have opted to stick with Windows 3.1 for the foreseeable future.

What does this mean if you are developing applications that will be deployed across the enterprise? The most important point to keep in mind is that an operating system's "bitness" does not strictly determine the size of the code it can run.

Remember that VRM applications, such as Doom, require 32-bit computing power. But Doom is a DOS-based application. According to Geno Coschi, development manager for Powersoft's Watcom, Doom runs in a DOS-extender environment that switches to 32-bit mode for the application and drops back to 16-bit mode for DOS's hardware support services, such as disk reads. The game was written using the 32-bit Watcom DOS compiler. Windows 95 uses a similar hybrid technology.

Windows 3.x applications are commonly 16-bit programs, but using Microsoft's Win32s API you can run 32-bit programs under Windows 3.x. Even Windows NT, an almost purely 32-bit system, runs 16-bit DOS applications.

In practice, you have a choice of 16- or 32-bit computing regardless of the platform. Fortunately, most development tools vendors support both 16- and 32-bit compiling from a single code base. In many cases, the programmer simply selects the output target from a menu and clicks on a Build button to generate the finished executable.

Using the Watcom compilers you simply pick the compiler and run it, Coschi says. Watcom will support both 16- and 32-bit code as long as customers demand it, he adds. Borland International and Symantec have similar policies. (Microsoft is minimising support for 16-bit development - you have to use Visual C++ 1.52 for 16-bit code.) Using many tools, your programmers can generate both 16- and 32-bit output in a matter of seconds or, for very large systems, within a few minutes. Although there is no substitute for actual testing, you'll usually find that the version with the smallest executable size is also the fastest version of your application.

Some development tools, such as Powersoft's PowerBuilder, produce byte code output that is independent of the underlying 16- or 32-bit architecture. The byte code is processed by a small interpreter that is custom written for each operating platform.

Powersoft's Dull said simply, "We're a bit neutral."

Byte code systems free developers from any concern about the underlying code size, but, once again, there is a price. Interpreted byte code cannot run at the same speed as, for example, compiled C++. Visual Basic and Java applications also use byte code technology.
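The interpreter at the heart of a byte code system is conceptually a simple fetch-decode-execute loop. The toy C sketch below invents its own four-instruction byte code purely for illustration; real products such as PowerBuilder define far richer instruction sets.

#include <stdio.h>

/* Toy byte-code interpreter: fetch, decode, and execute a small,
   invented instruction set.  Only the shape of the technique matters;
   the same byte code runs wherever an interpreter has been written. */
enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *code)
{
    int stack[32];
    int sp = 0;                 /* stack pointer   */
    int pc = 0;                 /* program counter */

    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];          break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
        case OP_PRINT: printf("%d\n", stack[--sp]);       break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* "push 2, push 3, add, print" */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}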

Achieving the 32-bit blessing

Recompiling 16-bit programs as 32-bit programs will frequently prove disappointing. Applications originally written for 16-bit environments may be simple to recompile as 32-bit applications, but chances are the result will be a fatter, slower program. Large programs will often need a thorough rewrite to benefit from the potential of 32-bit code. Many smaller programs will perform better as 16-bit programs.

Although the promotions of some vendors would have you believe that 32-bit software will quickly replace all the "old-fashioned" 16-bit code, the truth is that 32-bit software is just another arrow in the quiver.

Sometimes it will be a dramatic improvement over the old software it replaces. It will certainly enable new applications that weren't possible in a 16-bit world. But if you're not careful, it can change small, fast code into bigger, slower code.

If developers bear in mind that their work will probably be used in both 16- and 32-bit environments, they can enjoy the best of both worlds.

