By Gael Core

Some users hope to regain control and reduce costs by reining in their distributed systems.
When client/server computing first drew wide notice at the end of the 1980s, many companies were eager to venture into the promised land of distributed systems. Building these systems, they believed, would provide greater IT functionality to end users than centralized mainframe computing. Business users would be able to make quicker, better informed decisions by accessing various kinds of legacy data and processing it at the desktop, through graphical user interfaces (GUIs) and productivity applications. It was also expected that new business applications could be written in a few months to address changing business needs instead of taking the typical year or more for development on a mainframe.
But it turned out that the new technology was not all upside. Today, businesses face many of the same management issues and problems they had to deal with in the days of mainframe computing, among them budgetary constraints, conflicting user needs, communications within the organization and dealing with IT vendors. IS professionals say the problems haven't changed, only the names of the hats people wear. "The major problems in an IT organization concerning managing systems haven't gone away; in fact, they have gotten worse," says Mike Kennedy, program director for advanced information management with the Meta Group in Burlingame, CA.
These ongoing pressures are forcing IS professionals to rethink the benefits of distributed computing and weigh them against the downside, which includes the cost of administering these complex systems, training users in the technologies, and finding and retaining experienced technical staff capable of maintaining them. Increasingly, user organizations are reining in some of the distributed functions in areas where a centralized computing model can save money and shrink administrative overhead.
Among their strategies are more central management of server code; corporate-wide agreement on IS architectures, including standards and specific technologies; and purchase of integrated management packages. "The trend toward centralization has been going on for a long time," says Amy Wohl, president of Wohl and Associates Consulting in Narberth, PA. "People are trying to do more complicated things than they were before, and the more complicated things require more centralization."
IS staff in many large enterprises have been working with distributed systems for at least several years. The architecture generally has been well-received by end users, who like the autonomy of having their own desktop computers. But now other considerations are leading to a backlash in some organizations. "We already have a standardized desktop for PCs, which limits what a user can do but reduces support costs," says a systems analyst with a large oil company, who requested anonymity. "A current push [in our environment] is to remove computers and disks from the desktops to reduce costs. [The IS department is] putting in 'compute nodes' for use by X terminals or low-end computers that act as X terminals. This lets our less computation-intensive users have access to a reasonably sized computer without the related cost of capital assets."
The analyst says recentralization may be catching on. Some departments want IS to build a large, centralized disk farm in the data center and remove disks from desktops and compute servers. "This is to help relieve the problem of users not backing up their data and insufficient network strength to handle full remote backups on a regular basis," he explains.
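As a rough illustration of what such a centralized backup routine might look like, here is a minimal Python sketch. The share paths, mount point and schedule are hypothetical assumptions, and a production disk farm would sit behind dedicated backup software rather than a script like this.

import shutil
from datetime import date
from pathlib import Path

# Hypothetical paths: departmental user shares to protect, and the mount point
# of the central disk farm in the data center.
USER_SHARES = [Path("/export/home/dept_a"), Path("/export/home/dept_b")]
DISK_FARM = Path("/mnt/central_disk_farm/backups")

def nightly_backup():
    """Copy each user share into a date-stamped directory on the central disk farm."""
    target_root = DISK_FARM / date.today().isoformat()
    target_root.mkdir(parents=True, exist_ok=True)
    for share in USER_SHARES:
        if not share.exists():
            print("skipping missing share:", share)
            continue
        destination = target_root / share.name
        shutil.copytree(share, destination, dirs_exist_ok=True)
        print("backed up", share, "->", destination)

if __name__ == "__main__":
    nightly_backup()   # in practice this would run from a scheduler such as cron

Centralizing the copies this way keeps the backup traffic and the recovery point inside the data center, which is the point the analyst is making about network strength.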
Moving servers into the glass house is one way to trim the high administrative cost of maintaining them throughout an enterprise. IS departments also gain by being able to provide more consistent support. However, centralization can mean longer response times to help requests from end users, and slow service turnarounds can tax their patience.
"Recentralization is happening when you get into complex areas," says William Rosser, vice president and research director in management of technology at the Gartner Group in Stamford, CT. "We see the issue emerging as more and more is asked of distributed systems. People are spending money as if they were going to gain control [of their distributed systems], but it's a big waste. So they are pulling back and saying we have to cut [distributed systems] a different way."
IS professionals know not only that technologies change but that business requirements change around them. Originally, the compelling needs of end users drove the building of distributed systems, not a desire to lower costs and maintenance overhead. In fact, several studies indicate that distributed computing is more expensive to deploy and maintain than centralized computing. For example, a survey by International Data Corp. of Framingham, MA, found that the cost of a distributed network is more than twice that of a large centralized environment ($762 versus $319 per month per user).
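The arithmetic behind that comparison is worth making explicit. In the minimal Python sketch below, the two dollar figures come straight from the IDC survey cited above; the annualized gap and the ratio are simple arithmetic, and the variable names are ours.

# Per-user cost figures from the IDC survey cited above (dollars per month).
distributed_cost = 762
centralized_cost = 319

monthly_gap = distributed_cost - centralized_cost   # 443
annual_gap = monthly_gap * 12                       # 5,316
ratio = distributed_cost / centralized_cost         # about 2.39

print("Monthly premium per user: $%d" % monthly_gap)
print("Annual premium per user: $%d" % annual_gap)
print("Distributed-to-centralized cost ratio: %.2f" % ratio)

At roughly $5,300 more per user per year, even a modest population of a few thousand seats turns the per-user gap into a figure large enough to get a budget committee's attention.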
Achieving the delicate balance between building distributed systems that solve pressing business problems and trying to manage and predict costs for these systems is a key challenge, day in and day out, for IS organizations. "Generally, going to distributed client/server systems increases IT costs, so you have to get the payoff from a distributed system through benefits to the user community," says Rosser.
Even some benefits of a distributed architecture are proving to be mixed blessings. For example, desktop PCs, unlike host-dependent terminals, have the ability to do heavy data processing. Client/server applications can be designed to take advantage of this power on the desktop and, combined with GUIs, deliver productivity boosts to users. But those performance gains can also present management headaches. "If you can get the processing out nearer to the end user, your performance is probably going to be much better," says Ron Welf, senior technical lead at Charles Schwab Corp. in San Francisco, an investment brokerage that manages more than $122 billion in assets. Schwab uses more than 6,000 Windows NT workstations and several hundred Unix servers. "But it is harder to manage, especially if you've got an application where the user needs to tap into a variety of different data sources. They [the data sources] need to be managed and updated system-wide."
Another reason IS professionals have been pushed to support distributed computing is that executives believe it can help their company maneuver quickly in a highly competitive business climate. But that pressure to keep up with business needs can undermine the success of a distributed environment. "We've got the business driving us for new services, and it's a scramble to provide this stuff," says Welf. "We're using new techniques and organizations that are aligned very closely to business functions."
Furthermore, it is difficult to manage components strewn across different networks and time zones. "When things are centralized, you can measure them, whereas in a distributed environment one of the problems is being able to simply generate and collect measurement data used for predicting performance," Welf says. "In performance management, we're attempting wherever possible to automate routine functions, like data collection and routine analysis."
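To give a feel for the kind of routine collection Welf describes, here is a minimal Python sketch. It is not Schwab's tooling; the hostnames, the port and the threshold are hypothetical assumptions, and TCP connect time stands in for whatever metric a real shop would gather. The point is simply that collection and first-pass analysis can run unattended.

import socket
import statistics
import time

# Hypothetical inventory of distributed servers; none of these hostnames are real.
HOSTS = ["db-west.example.com", "app-east.example.com", "files-central.example.com"]
PORT = 80            # assume a service port reachable from the monitoring station
THRESHOLD_MS = 250   # flag anything slower than this for routine analysis

def connect_time_ms(host, port, timeout=3.0):
    """Measure TCP connect latency to a host in milliseconds, or None if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None

def collect_and_analyze():
    samples = {host: connect_time_ms(host, PORT) for host in HOSTS}
    reachable = {h: ms for h, ms in samples.items() if ms is not None}

    for host, ms in samples.items():
        print(host, "UNREACHABLE" if ms is None else "%.1f ms" % ms)

    if reachable:
        print("median latency: %.1f ms" % statistics.median(reachable.values()))
        slow = [h for h, ms in reachable.items() if ms > THRESHOLD_MS]
        if slow:
            print("flag for review:", ", ".join(slow))

if __name__ == "__main__":
    collect_and_analyze()   # a scheduler would run this at regular intervals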
These management tasks are made harder by the loudly lamented lack of distributed systems management tools. The mainframe tools available today took more than two decades to mature, and user organizations cannot wait for distributed tools to go through the same evolution. Among the tool categories IS professionals miss most are problem management, job scheduling, network management, disk management and file backup. "The whole array of typical kinds of systems management, tool functions, diagnosis, monitoring and control levels is not present, because you have such enormous variety of products and protocols," says Rosser of Gartner Group.
How do IS departments grapple with immature management tools and the high cost of making different products interoperate? When does a business recentralize its management tasks, and for which applications? The answers depend on the business and its goals, but companies are employing variations on a few basic strategies that help overcome these deficiencies.
For starters, a corporate-wide agreement on information systems architecture that includes important standards and technologies is essential. By having such a comprehensive plan, each business unit can create a new system that builds upon the expertise and knowledge already gained from previous projects. This architecture breaks down into applications, data, services, middleware, raw platforms, processors, routers and operating systems, as well as standards and repeatable processes. Such an architecture should not be rigid, however. "The architecture is constantly changing because there are new products being released, and you have to keep an architecture modern to continue to get people to be willing to comply with the standards," Rosser says.
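To make the idea concrete, a blueprint like this can even be captured as data that project teams check their designs against. The Python sketch below is purely illustrative: the layer names echo the breakdown above, while every entry is a placeholder rather than a product recommendation.

# An illustrative, machine-readable sketch of an IS architecture blueprint.
# The layer names follow the breakdown described above; the entries are
# placeholders, not endorsements of particular products.
BLUEPRINT = {
    "applications":      ["approved office suite", "approved client/server toolset"],
    "data":              ["corporate relational DBMS", "shared data dictionary"],
    "services":          ["directory service", "file and print service"],
    "middleware":        ["message-queuing standard", "RPC standard"],
    "platforms":         ["standard desktop PC build", "standard Unix server build"],
    "network":           ["approved router line", "TCP/IP as the transport"],
    "operating_systems": ["standard desktop OS", "standard server OS"],
}

def check_compliance(proposed_stack):
    """Return the layers in a proposed project stack that fall outside the blueprint."""
    exceptions = []
    for layer, choice in proposed_stack.items():
        if choice not in BLUEPRINT.get(layer, []):
            exceptions.append("%s: '%s' is not an approved standard" % (layer, choice))
    return exceptions

# Example: a departmental project proposes its own middleware.
project = {"middleware": "homegrown socket library", "data": "corporate relational DBMS"}
for issue in check_compliance(project):
    print(issue)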
When distributed systems are not well-thought-out, they can cause new business objectives to conflict with existing corporate structures. Businesses that have been successful in building and deploying a distributed system "have closely matched their process and the topology of the network against their organizational chart," says Kennedy of Meta Group. That's a far cry from what businesses moving toward distributed systems have done in the past. "Historically, businesses have designed their own sites and done their own thing, and through business process reengineering have found it inordinately expensive to tie together systems into cross-functional processes," he says.
Consequently, some of those organizations are looking at centralizing much of the IS function, especially the application function. "Some corporations are saying, 'Despite the fact that you have different businesses, we are going to establish some strong guidelines that define a minimal level of PC platform on the desktop,'" Rosser says. It boils down to asking a few simple questions, such as: Does the distributed system put the processing of data where the business actually needs it?
An information blueprint can save money, especially in areas such as support and maintenance, by reducing confusion about products and technologies. It also forces a business to identify why distributed systems are warranted and whether they will provide a substantial payoff. "There have been big failures where you've tried to use distributed systems and client/server systems that don't have any particular benefit in applications," says Rosser.
Coupled with the idea of creating a blueprint is the desirability of integrated management tools, so the user doesn't have to spend time and money making separate products work together. That goes against the early vision promoted in distributed computing circles, which had users buying "best of breed" products that would interoperate through support for standard interfaces. It was another promise that didn't pan out. Analysts now believe purchasing an integrated package is the best way to ensure a minimum level of interoperability. "The motivation has shifted somewhat, from buying packages that are best of breed, which you have to tie together. That gets into a lot of systems management problems," Rosser says. "We have this diversity of products and expectations, so the vendors are trying to respond and come up with more universal, more powerful approaches."
Keeping on track with a distributed computing plan also requires keeping engineers and programmers well-versed in the technologies, which can be difficult in a changing environment. Stanford University in Palo Alto, CA, has made it a point to avoid personnel turnover and the turmoil it causes in its systems. "We don't have a lot of new people cruise through with wholesale new ideas who want to change things every six months," says Milt Mallory, network specialist in distributed computing and communication services. Mallory manages more than 25,000 computers, including proprietary minicomputers, Unix servers and PCs.
Companies have poured money into distributed computing pilot projects and deployed systems throughout parts of their enterprises. They've done this even without mature systems management tools like those found in mainframe environments. In some instances, the push has run into resistance from IS departments. "Our decentralization plans were driven by non-IS departments trying to cut costs the best way they could find and to keep having the same computing resources they needed," says the oil company systems analyst. "Distributed systems are good for many uses but not all. Distributed systems seem to give power where it is needed in small, incremental chunks. Mainframes are great for massive number-crunching where large databases and printing reports are involved."
Stanford has had problems in moving to distributed systems, but those problems haven't forced the university to throw in the towel. In fact, distributed computing fits nicely in that the campus is a collection of fiefdoms, each of which is funded independently by the provost. Nevertheless, a lack of standard tools is felt. "There isn't any standard way to manage the stuff we have, and that's certainly an issue, but it's not a big enough issue for people to stop and say we shouldn't do this," Mallory says. Stanford's IS group continues to grow its mainframe support, recently having upgraded its mainframe in parallel with the growth of its distributed computing infrastructure.
But companies and organizations are having to deal with much more than just mainframes and distributed computing networks. Intranet technologies have caught on, and desktops are being equipped with Web browsers from which end users can access legacy data and Web sites on the Internet. Proponents of a recentralized environment may find the network computer (NC) attractive. These stripped-down PCs will be low-cost desktop devices (though how low is in question) designed for running networked applications, including Java applets. One of their key advantages is low administrative cost: without a hard disk drive and resident software to upgrade and maintain, PC administrators and network managers may be less burdened. (For more on the NC, see "Desktop on a Diet.")
Again, management may play a major role in the adoption of these emerging technologies. Companies want to leverage hot new technology while keeping the cost of maintaining it predictable, which further complicates an IS group's management tasks. "What is missing, even across the Internet, is end-to-end monitoring, analysis and other tools," Mallory says. "All of this has to be in one tool, so there is no ambiguity or confusion about where a particular problem is and who has the responsibility to fix it."
From an observer's perspective, there is a sort of helter-skelter absurdity in racing back to old models of computing when new ones run into trouble. But smart organizations know there is no competitive advantage in turning back the clock. As the pendulum swings, some are looking at combining the best aspects of both styles.
At the University of Pittsburgh Medical Center, IS staff runs a distributed model alongside a centralized mainframe environment. Access to information resources, such as patient records, lab tests and pharmacy information, is maintained centrally but distributed to several thousand employees on 3,000 Intel processor-based PCs and 100 Unix workstations. IS has accomplished this without having to recentralize the computing infrastructure, which is unlikely to happen. "There is no movement back to the glass-house model. We're just keeping the greenhouse heated and adding on as needed," says Chuck Nagy, an IS administrator. "Mainframe-based transaction systems are still the most cost-effective and reliable solutions for large networks," by which he means over 4,000 devices.
Clearly, most large computing environments exhibit a variety of computing needs. IS departments can run into problems with centralized systems when a business has computing needs that fall outside of those architectures. "Putting the data in a glass-house style of centralized environment would be nice, but we can't seem to get the bandwidth required for visualization at a price we can afford," says the oil company systems analyst. On the other hand, he adds, "Centralized systems, mainframes or even 'servers' collected into a central location do seem to make sense if the bandwidth is sufficient for the business to be done."
Most large corporate IS departments are only beginning to rethink their distributed computing infrastructures. They're being driven to rein in costs while maintaining control of complex, distributed resources. Eventually a middle ground will be reached, although few people can say what it will look like. It's likely that distributed systems will remain in businesses that need to empower their end users at the desktop, yet in those same networks, server resources and large disk farms will grow to reduce administrative overhead. The next generation of the enterprise computing infrastructure could resemble several of its ancestors.
Gael Core is a free-lance writer based in San Francisco. She can be reached at gcoresf@aol.com.