Since its inception, code and state sharing has been a key component of Windows. In the early days it was also a requirement, as Microsoft's engineers fought to overcome the limited memory and disk space found on early PCs. In fact, Microsoft's first two versions of Windows were tuned to run on floppy-based systems; it wasn't until Windows 3.0 in 1990 that hard drives started to become a de facto component of a Windows system. It's somewhat ironic, given the size and complexity of today's Windows, to consider that the original design of this operating system was inherently thrifty with system resources. To combat the relatively tiny amount of disk space that would likely be available, the early designers of Windows invented DLLs, or dynamic link libraries: files containing code that can be shared simultaneously between multiple applications and the operating system, so that the same code doesn't need to exist on the system in numerous locations.

DLLs were a good idea for the time. But as Windows grew in complexity and size, DLLs started to wear out their welcome and have increasingly become a key source of system instability. The problem has to do with version conflicts: Each DLL is updated with new functionality over time, with newer versions providing (hopefully) a superset of the functionality of earlier versions. To ensure that their applications have the correct versions of DLLs, software developers ship these DLLs with their applications and install them when the application is installed. Properly written applications will, of course, check to ensure that they are not overwriting a newer version, but this doesn't always happen and, perhaps more disconcertingly, newer versions of DLLs aren't always completely backwards compatible with earlier versions.

The end result of this mess is "DLL hell," a condition all Windows users are familiar with though they might not know it by name: You install an application that causes Windows not to boot, perhaps, or a previously working application suddenly refuses to run. Microsoft acknowledges the problem and has been working to fix it since Windows 3.1, but with the runaway success of 32-bit versions of Windows such as Windows 9x and NT/2000, and the complexity that these systems bring to the table, DLL hell has only gotten worse over the years.

Today, the fragility of a Windows system can be tied directly to problems associated with DLL hell and the fact that Windows systems are inherently complex at the code level. But DLL hell isn't the only issue, though it's a convenient name for a boogeyman we all know and understand. Sharing at other levels--such as COM objects and device drivers--also contributes to the underlying fragility of Windows. Internally, Microsoft describes the problem like so:

DLL Hell is not just about DLLs. It's about sharing. It's about sharing code in the form of DLLs, COM objects, device drivers. It's about sharing state in the form of the registry or ad-hoc stores built on top of the file-system. It's about the complexity of a system that shares almost everything and can't truly be tested the way that we used to test it. It's about a market that leaves little for second place products and forces companies to ship as rapidly as possible, before complexity is understood and before problems can be found and fixed. Sharing is the glue that binds the various dimensions of complexity together. We need to make it possible to selectively break the bonds.

After almost a decade of study, Microsoft has come to the conclusion that sharing is a double-edged sword, with benefits and associated problems that are difficult to reconcile. And though it might be easier to simply fix this problem in tandem with a future platform shift, the problem is severe enough that customers need a fix now, on the current Windows platform. So the company began implementing a series of technologies, over time, that address this problem. Those technologies are lumped under the name "Fusion," an internal code-name that Microsoft has never publicly discussed. What's amazing about this technology is that we've already seen three basic examples that were developed under the Fusion umbrella; we just didn't realize at the time that that's what they were. But for Fusion, as Microsoft likes to say, the best is yet to come, and future versions of Fusion may actually solve the DLL hell problem once and for all. But before we get to that, let's take a look at the evolution of Fusion and see where it's taken us so far.

Fusion 1.0: Side-by-side DLLs and Windows File Protection
The first Fusion technology that saw the light of day is "side-by-side DLLs," which debuted with Windows 98 Second Edition in mid-1999. This essentially allows an application developer to rename system DLLs and make them available in a separate, private location. For example, a developer may need a particular version of COMCTL32.DLL, which typically exists in the SYSTEM or SYSTEM32 folder of a 32-bit Windows system. Rather than replace the version of this DLL that's found in Windows, and foist potential incompatibilities on other applications or even Windows itself, the developer could simply rename the correct version of this DLL and provide it somewhere under the directory structure created to house his application. Then, when the application needed the services of the DLL, it would call the private version. But the operating system and other applications could continue to use the "public" version and there wouldn't be any conflicts.
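The effect is easiest to see as a search-order problem. Here's a simplified model of the idea, not actual Windows loader code: the paths, version numbers, and function names below are all invented for illustration. The private copy shields one application from the shared copy without disturbing anyone else.

```python
# A toy model of side-by-side DLL resolution: the application's private
# directory is consulted before the shared system directory. All paths
# and version numbers here are hypothetical.

def resolve_dll(name, app_dir, system_dir, installed):
    """Return (directory, version) for the copy an application sees:
    its private copy if one exists, otherwise the shared system copy."""
    for directory in (app_dir, system_dir):  # private location wins
        version = installed.get((directory, name))
        if version is not None:
            return directory, version
    raise FileNotFoundError(name)

installed = {
    ("C:/Windows/System", "COMCTL32.DLL"): "5.80",  # shared copy
    ("C:/MyApp", "COMCTL32.DLL"): "4.72",           # private copy
}

# The app that ships a private copy gets its own 4.72 version...
print(resolve_dll("COMCTL32.DLL", "C:/MyApp", "C:/Windows/System", installed))

# ...while every other application still sees the shared 5.80 copy.
print(resolve_dll("COMCTL32.DLL", "C:/OtherApp", "C:/Windows/System", installed))
```

The point of the sketch is that neither copy overwrites the other; the two versions coexist, which is exactly what "side-by-side" means.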

This is a decent solution to the problem, with one major caveat: It's optional, and the applications developer must manually provide this feature. Still, this itself wouldn't be a kiss of death but for one other issue: It only works on Windows 98 SE (not 95 or the original version of 98) or newer (and on Windows 2000 or newer, but not NT). So anyone implementing this feature would be ignoring the vast majority of the market, as most Windows users aren't yet using the latest and greatest versions. Of course developers could opt to include this feature on systems that were running 98 SE or newer and simply behave normally on legacy Windows systems, but that's a lot of extra work for a relatively small segment of the user base. And it ignores the very basic fact that DLL hell is a problem with Windows and something that should logically be fixed by Microsoft, not by third-party software developers on an application-by-application basis.

In Windows 2000, Microsoft released a second technology that might be considered part of Fusion 1.0, Windows File Protection (WFP). In Windows 2000, Windows File Protection prevents the replacement of protected system files (specifically, certain .sys, .dll, .ocx, .ttf, .fon, and .exe files) by errant applications. So it's impossible for a poorly written application to install an older version of a key system file on Windows 2000. However, service packs, hot fixes, Windows Update, and other operating system updates can overwrite key system files with newer versions. This feature was also made available in Windows Millennium Edition ("Windows Me"), where it is known as System File Protection (SFP).

Windows File Protection is a good feature and it does work to increase reliability and stability on the Windows 2000 platform. But Windows File Protection does so at the risk of breaking application compatibility: There are numerous instances of newer DLLs not working with older applications that expect a different version. This was a conscious trade-off, of course, and one that Microsoft was quite open about during the development of Windows 2000. But many applications simply won't run on Windows 2000 until they are upgraded to work with the updated DLLs and other system files that are found on that OS. And WFP has another liability: Microsoft has given away the "keys" to WFP by supporting something called "exception packs" so that updated DLLs that are installed on an NT 4.0 system which has been upgraded to Windows 2000 can be migrated to the new OS. This means that a system upgraded to Windows 2000 may be inherently broken from day one, despite WFP.

Going forward, Microsoft wants to create an environment in which legacy applications will continue to install and run correctly, while providing a more elegant platform for new applications that will adhere to new installation and co-existence guidelines. Future versions of Fusion enable just this.

Fusion 2.0: Protecting the Server
First up is Fusion 2.0, technologies that will first be made available with Visual Studio 7 and COM+ 2.0 (both expected in Q1 2001). And as you might expect, given the products with which this technology will become available, Fusion 2.0 will deal largely with protecting COM+ 2.0 components, on both Web browser clients and the server, from each other, and with protecting the operating system from these COM+ components. Microsoft shipped COM+ 1.0 with Windows 2000 in early 2000; COM+ 2.0 will be available as an add-on to Windows 2000 and it will also ship with Whistler, the follow-up to Windows 2000 due in the first half of 2001. Improvements in this release are geared largely toward Web servers.

Fusion 2.5: Protecting Consumers
Later in 2001, Microsoft will ship its Fusion 2.5 technologies in "Whistler," the follow-up to Windows 2000. Whistler is what Microsoft calls a "Fusion-enabled componentized OS that will provide device driver and OS install/update reliability." With the release of Fusion 2.5 in Whistler, Microsoft will support protected and isolated COM+ 2.0, COM classic, and Win32 components. The focus with this release will be consumer needs, and it's expected that Whistler will usher in a more transparent form of protection for components, including legacy components that don't know about the presence of the feature.

How Fusion works
To counter the problematic side effects of sharing, Microsoft will use Fusion technologies to introduce isolation to Windows. Applications and components that were written with the assumption that they would share system resources, for example, will be transparently isolated from the rest of the system. That is, they will install as normal, and to the user, nothing changes. But the OS will actually be copying the shared DLLs and other files that the application installs into a private location, so that the application will "think" that it is running normally, as it would on a legacy Windows system. But on Whistler, that application will really be running in a pseudo-clean virtual machine, one in which it is the only installed application. It will be unable to break other applications or the OS.
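Conceptually, the OS is intercepting writes aimed at shared locations and redirecting them into a per-application store. The following is an assumption-laden sketch of that idea, not Whistler's real implementation; the directory names and function are invented for illustration.

```python
# A toy model of transparent install-time isolation: writes that a
# legacy installer aims at the shared system directory are silently
# redirected into a per-application private store, so the shared copy
# is never overwritten. All paths here are hypothetical.

SYSTEM_DIR = "C:/Windows/System"

def isolated_install(app, target_dir, dll, version, store):
    """Record an installed file; redirect shared-directory writes
    into the application's private location."""
    if target_dir == SYSTEM_DIR:
        target_dir = f"C:/IsolatedApps/{app}"
    store[(target_dir, dll)] = version
    return target_dir

# The system already has a newer shared copy of a runtime DLL.
store = {(SYSTEM_DIR, "MSVCRT.DLL"): "6.0"}

# A legacy installer tries to stomp it with an older version...
where = isolated_install("LegacyApp", SYSTEM_DIR, "MSVCRT.DLL", "5.0", store)
print(where)  # ...but the write landed in the app's private store.

# The shared copy is untouched, so other applications keep working.
print(store[(SYSTEM_DIR, "MSVCRT.DLL")])
```

The installer believes it wrote to the system directory, which is why legacy applications that know nothing about the feature still install and run.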

For new applications, the picture is even brighter. Application installation, maintenance, and removal will be simpler and largely automatic. For the user, setup, concepts like drives and directories, and reboots because of application install will disappear. New installs won't break other applications or the OS, something that will be equally important in the new service-based application model that Microsoft is pushing with Next Generation Windows Services (NGWS). When Whistler is updated with new features and bug fixes, these changes to the system will be safe and incremental. And though this is still up in the air, these features are technically possible to implement on existing Win32 platforms, so there's a chance that Microsoft will retroactively add some Fusion 2.5 technologies to Windows 2000 or even Windows Me. I won't be holding my breath, of course.

To describe the physical makeup, exposed entry points, and dependencies of an application or component, Microsoft is introducing the concept of a manifest that will make these bits self-documenting to the underlying OS. So the manifest is a blueprint, or database, that describes the application or component. This information, which is typically lost during today's installation phase, will be stored by the system to ensure that the application or component always executes correctly. For example, when an application requires a particular version of a dependency (typically a DLL), the system will ensure that the application always uses that version of the DLL when it runs. So an application (or component) will always run in a known correct state. The manifest provides a number of other benefits, however. Setup will be simplified and the system can proactively cache code on demand and delete code that is never used. And the repair of broken applications will be dramatically simplified because the system will always know what the application needs to run correctly.

Incidentally, Microsoft is going to use XML to represent its application manifests. A working group within the company made up of members of the Fusion, IE, COM+, IIS and WBEM teams is currently determining the structure of the manifest and the way that this information will be made available. For example, one question regards the physical location of an application's manifest: Should it somehow be incorporated in binary form into the application's executable file? Or should it be entered into the Registry or other disk-based database?
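To make the idea concrete, a manifest might look something like the fragment below. Since the working group hadn't settled the schema at the time of writing, every element and attribute name here is invented for illustration; only the general shape (an application identity plus pinned dependency versions) reflects what's described above.

```xml
<!-- A hypothetical application manifest. The element names are
     invented; Microsoft's actual schema was still undecided. -->
<application name="MyApp" version="1.0.0.0">
  <dependencies>
    <!-- Pin the exact DLL versions the application was tested with,
         so the system can always run it in a known correct state. -->
    <dependency name="COMCTL32.DLL" version="4.72.0.0" />
    <dependency name="MSVCRT.DLL" version="6.0.0.0" />
  </dependencies>
  <entryPoints>
    <executable file="MyApp.exe" />
  </entryPoints>
</application>
```

Whatever the final format, the key is that this dependency information survives installation instead of being thrown away, which is what makes self-repair possible.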

Conclusion
Windows is fragile today largely because of the code and state sharing features that are fundamental to the architecture of the OS. Microsoft is working to solve this problem by adding code and state isolation to the underlying operating system in a form that will work with legacy applications as well as new applications that have a better understanding of this feature. So in Whistler, sharing will become the exception rather than the default, as it's been in Windows since 1985. Features such as self-repair and self-diagnosis, provided by a new manifest feature, will enable applications, and thus Windows itself, to be more reliable. And installation and maintenance will be much simpler as a welcome side benefit of this new architecture. So, with apologies to Sun Microsystems, Whistler will enable applications that run once to run forever.