When a software company won't make the source code for a product available, one must put one's faith in something called "security through obscurity". The argument for security through obscurity is simple: if crackers can get to the source code, they can easily find ways to exploit weaknesses in the product.
That sounds logical, but the premise is easily refuted. If you are not convinced by the numerous Windows, Internet Explorer, and Microsoft Outlook exploits, visit Game Copy World (www.gamecopyworld.com) to see how easily people can break the copy protection for games without looking at the source code. The site often publishes work-arounds the same day a game is released.
Ironically, we open-source advocates base our confidence in the security of open source on the very premise behind security through obscurity. It is true that having the source code makes it easier to spot a product's weaknesses. We simply take the logic one step further: if source code makes weaknesses easy to spot, then the best way to find and plug security holes is to make the source code as widely available as possible and solicit the scrutiny of everyone who uses it.
But a greater security risk than system cracking concerns me, and it is one that is only made possible through obscurity: intentional back doors.
A few years ago, researchers discovered that Windows 95/98/2000 and Windows NT included two cryptographic keys. When Microsoft shipped a service pack without stripping the debugging symbols that normally cloak the identity of those keys, someone discovered that the second key was labelled _NSAKEY. The implication is that Microsoft provides the National Security Agency (NSA) with a way to crack into any Windows box for surveillance or data recovery purposes.
Microsoft denies this, saying the NSA label indicates only that the key meets the agency's cryptographic requirements. But how do you know whether Microsoft is telling the truth? After all, if the NSA really did have a back door into every Windows system, what else could Microsoft say? An admission might as well be followed by instructions on how to remove Windows and replace it with just about anything else.
It's bad enough Microsoft and the NSA may have peepholes into our desktops and servers, but what about the crackers who broke into Microsoft recently? Did they modify any Microsoft source code to introduce or expand existing back doors into Windows? Did they steal any of the source code? Who are the crackers? One person operating alone or with the aid of a rival company or nation? With whom are these people going to share this source code, assuming it has been stolen?
Depending on the answers, which we may never know, this recent security breach could lead to some alarming possibilities. Given enough information, unknown people, companies, or even nations could soon have the ability to crack into our systems with ease and decrypt the information stored there. Even if Microsoft knows this, it is in no position to admit it. To do so would compromise its alleged relationship with the NSA and trigger a switch to another operating system practically overnight.
It's time for Microsoft to face reality and open source Windows. Short of encouraging customers to remain in denial, that is the only way Microsoft can restore confidence in the security of Windows. People must be able to examine, modify, recompile, and reinstall the code on their own. Then and only then can Windows customers have any assurance that their systems are safe from prying eyes.
Nicholas Petreley is editorial director of LinuxWorld. Reach him at firstname.lastname@example.org