By Philip J. Gill
An established technology, once taken for granted, moves from transaction management to providing enterprise middleware services.
[Chart]: Open TP Monitor Product Market Share, 1995
[Chart]: Sears' SPeRS Data Warehouse
[Chart]: FundServ's Unix Server
[Chart]: First Chicago Mercantile
In the world of legacy systems, the transaction processing (TP) monitor
was an easy piece of software to define. Although it could do other things
as well, the TP monitor's main function was to manage the interaction between
a collection of remote dumb terminals and a core application on a proprietary
mainframe host. That simple definition held sway for about 20 years and
was typified by IBM's Customer Information Control System (CICS), a product
virtually synonymous with TP monitor technology even today.
An old mainframe hand, however, might have a hard time recognizing some of today's open systems-based TP monitors, usually referred to as open or distributed TP monitors. For one thing, although these newer TP monitors still manage access to host resources, it's an intelligent desktop client such as a PC or a Macintosh, not a dumb terminal, that asks for access to the host. And that host, more often than not, is a high-end Unix symmetric multiprocessing (SMP) or massively parallel processing (MPP) server, not a mainframe.
Perhaps most importantly, what a TP monitor can do is also changing. Once largely procedural in nature and limited to transaction management, these products are beginning to incorporate other types of middleware services, such as messaging, "publish and subscribe," remote procedure calls (RPCs), and even interfaces to object request brokers (ORBs) built on the Object Management Group's Common Object Request Broker Architecture (CORBA) specification.
All the leading open TP monitors are also moving to embrace technologies for the Internet and the World Wide Web, including Sun Microsystems' Java programming language and Microsoft's ActiveX software component technology. In short, open TP monitors are emerging as a key middleware layer in implementing enterprise-wide information infrastructures.
"Open TP monitors are starting to catch on more, because they are expanding their capabilities, the environments they run on and the types of activities they support," says Mitch Kramer, consulting editor on middleware and application development technologies at the Patricia Seybold Group of Boston.
Open TP monitors have been around for some time. The oldest is Tuxedo, which was developed at AT&T's Bell Laboratories about 20 years ago to help manage Unix-based telephone switching equipment. When AT&T sold the Unix operating system and related technologies to Novell, Inc., three years ago, Tuxedo went with the package. Early this year, a group of investors bought Tuxedo from Novell and formed BEA Systems. This San Jose, CA, firm was the market share leader in 1995, capturing 36 percent of the open TP monitor market, according to the Standish Group International of Dennis, MA. (For a comparison of products' market shares, see the accompanying chart.)
In all, open TP monitor products accounted for a $130 million market in 1995, up about 30 percent over the year before, says Jim Johnson, Standish president and CEO. That growth runs counter to the popular misconception that open TP monitors have been much talked about but little deployed, says Johnson. "They've gotten bad PR," he says.
A main reason for this ill repute, Johnson says, is that implementing an open TP monitor takes time; a site may need to use it to integrate hundreds of legacy and open applications into a cohesive environment. "Anybody who buys an open TP monitor today won't finish deploying it until two or three years," says Johnson. "And many of the companies that use them don't want to talk about them, because they believe they give them competitive advantage."
Still, Johnson notes that the open TP monitor market is just a fraction of the IBM mainframe CICS market's $700 million in 1995. But that market is growing at a slower rate--12 percent over the year before--than its open equivalent.
Johnson and others point out that the arguments for the open TP monitor become more compelling over time. Like its legacy predecessors, an open TP monitor helps users to manage available resources. For instance, it can support multiple users per connection to the server--for example, doubling the number of supported users from 500 to 1,000 or more. Also like the proprietary predecessors, it provides traditional TP management services, such as system backup and recovery, transaction integrity and recovery, and fail-over capabilities.
The open TP monitor, in particular, provides client/server applications with a high degree of scalability. Alfred Spector, CEO and president of Transarc, a Pittsburgh-based subsidiary of IBM that makes the open TP monitor Encina, says he believes the use of open TP monitors will increase as more companies move their client/server applications from two-tier to three-tier environments.
A survey from the Gartner Group of Stamford, CT, estimates that more than 90 percent of all existing client/server applications are two-tier. That is, they have a bottom tier of desktop clients and a second-tier server. Those servers can reliably support workgroups or small departments of 100 or so users, but scaling such an application to support 1,000 users can't be done efficiently without adding a middle tier: the business logic moves to the second tier and the database to the third. "A TP monitor makes it possible to implement that second tier reliably," says Spector.
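In code, the difference is where the database work lives. The fragment below is a minimal sketch of a three-tier client, written against Tuxedo's ATMI interface purely because it is a widely known notation; the DEBIT_ACCT service name and the request payload are invented, and a real deployment would differ.

    #include <stdio.h>
    #include <string.h>
    #include <atmi.h>                 /* Tuxedo ATMI client interface */

    int main(void)
    {
        char *buf;
        long len = 64;

        if (tpinit(NULL) == -1)       /* join the TP monitor application */
            return 1;

        buf = tpalloc("STRING", NULL, len);        /* typed request buffer */
        strcpy(buf, "ACCT=1001,AMT=250.00");       /* hypothetical payload */

        /* The client never issues SQL. It calls a named middle-tier
           service, where the business logic and the database access live. */
        if (tpcall("DEBIT_ACCT", buf, 0, &buf, &len, 0) == -1)
            fprintf(stderr, "DEBIT_ACCT failed: %s\n", tpstrerror(tperrno));
        else
            printf("reply: %s\n", buf);

        tpfree(buf);
        tpterm();                     /* leave the application */
        return 0;
    }

Because clients in this arrangement hold no database connections of their own, the middle tier can funnel thousands of them onto a much smaller pool of server resources.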
Others point out that open TP monitors also help scale applications across heterogeneous computing environments. Because a TP monitor isolates the various elements of a client/server environment, the same application can access or write to multiple applications and databases running on multiple hardware platforms with multiple operating systems, all linked through diverse protocols across a single network.
Today's client/server applications need to scale in a number of ways, says Bill Coleman, chairman and CEO of BEA Systems. These include the number of users, the number of transactions and the number of systems across which the application is distributed. "When any combination of those factors increases, you get to the point where you need an infrastructure," says Coleman. That infrastructure, he argues, should include an open TP monitor.
Isolating the different elements in a client/server application also cuts the maintenance burden, says MaryAnn Anderson, vice president of technology at First Chicago Mercantile Bank. Her Chicago-based firm operates a new electronic tax collection and payment processing system for the United States Internal Revenue Service (IRS), using multiple Unix systems and the Top End open TP monitor from NCR of Dayton, OH.
"If I need to add a new database to the environment, I just change a table in the TP monitor," Anderson explains. "I don't have to go into the application and change code."
Middleware comes in six flavors, of which the open TP monitor is one. In the past, each has tended to offer a narrow range of capabilities. For instance, database connectivity middleware connects desktop clients to one or more remote databases on a server. Messaging or message-oriented middleware (MOM) places incoming requests for services or data in a message queue and processes them accordingly. RPCs, on the other hand, maintain the connection between client and server until the client has received a response to its request and the transaction is complete. Publish and subscribe mechanisms, as the name indicates, publish specific information, such as query results, to a specific list of subscribers. ORBs, the newest form of middleware, allow objects to exchange messages or data.
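The practical difference between the RPC and MOM styles shows up in the calling code. The hypothetical fragment below contrasts the two, using Tuxedo's ATMI and /Q calls only as a familiar notation; the service, queue space and queue names are invented.

    #include <string.h>
    #include <atmi.h>

    /* RPC style: hold the connection and block until the reply comes back. */
    void rpc_style(char *reqbuf, long len)
    {
        long olen = len;
        tpcall("PRICE_QUOTE", reqbuf, len, &reqbuf, &olen, 0);
        /* the reply is now in reqbuf */
    }

    /* MOM style: drop the request on a named queue and return at once;
       a server dequeues and processes it whenever it is ready. */
    void mom_style(char *reqbuf, long len)
    {
        TPQCTL qctl;
        memset(&qctl, 0, sizeof(qctl));
        tpenqueue("ORDERQSPACE", "NEW_ORDERS", &qctl, reqbuf, len, 0);
    }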
In the past, the TP monitor's forte was its transaction management services, such as allocating resources on the host, sharing connections between multiple clients and ensuring transaction integrity through rollback in the event of failure. Lately, however, TP monitors have been taking on some of the characteristics and capabilities of rival forms of middleware. For instance, all the leading open TP monitors support some kind of messaging and message queuing. Encina has what Transarc calls a "transactional RPC," which combines RPC and transaction management into one middleware service. Transarc also plans to tie Encina to Orbix, an ORB from Iona Technologies of Dublin, Ireland, which is based on the CORBA specification, so Encina can pass objects between different systems on a network. NCR has what it calls a "transaction request broker," which links its Top End to CORBA-compliant ORBs.
In the realm of the Internet and Web, last year UniKix Technologies of Billerica, MA (a subsidiary of Bull Information Systems), came out with a family of three products called WebKix, which Web-enable existing legacy CICS applications and CICS applications ported to a UniKix client/server TP environment. More recently, BEA Systems introduced BEA Jolt, which allows users of Java-enabled Web browsers to access enterprise applications behind corporate firewalls. BEA Jolt translates between Java applets and the Tuxedo TP monitor, which in turn passes information or queries to corporate resources and applications.
As these products increase their capabilities, users are deploying them in environments where a monitor's traditional transaction management services are minor functions. "Only five to eight percent [of our customers] use Top End only for its transaction management services," says Dave Findlay, director of middleware marketing for NCR. "The rest use it primarily for its communications services."
Dave Matthews, vice president of marketing for UniKix Technologies, puts open TP monitor vendors into one of two camps: Those who target new application development tend to stress the TP monitor's communications services, and those who target existing legacy applications for rehosting on Unix and open systems servers emphasize transaction services. In UniKix's case, about 80 percent of its customers use the software for its TP capabilities and about 20 percent for communications, says Matthews.
The lines may be blurring between open TP monitors and other forms of middleware, but Ed Acley, director of middleware research for International Data Corp. (IDC) in Framingham, MA, cautions users against thinking that any one type of middleware will solve all their enterprise needs. "TP monitors are adding messaging support, but their messaging support is not as robust as MOM products," Acley says. "The chief value-add of TP monitors is their transaction management services, but not every application requires transaction management."
The largest single segment of the middleware market remains products for database connectivity. IDC estimates that sales of database connectivity products, such as Information Builders' EDA/SQL, Sybase's Open Client/Open Server and others, were $448 million in 1995. That market should grow to nearly $2.5 billion by the year 2000, when open TP monitor sales will only reach about $0.5 billion. "They've not invented an application yet that doesn't need to access data," says Acley. Rather than settle for a one-size-fits-all middleware approach, users are better off realizing that they'll need multiple kinds of services and products to provide them.
The following case studies show some of the ways in which open TP monitors are ranging beyond their traditional uses.
From its headquarters in Hoffman Estates, IL, Sears Roebuck & Co. runs the nation's largest chain of department stores, more than 2,800 in total. Keeping up with the ongoing operations of such a vast, geographically dispersed business can be a problem for upper management, who may need to respond to changing consumer tastes and market demands on short notice.
That's why Sears built the Sales Performance Reporting System (SPeRS). This 1.5-terabyte data warehouse currently holds two years of sales information, and more information is added daily from Sears' operational systems. To control SPeRS, Sears chose NCR's Top End.
Over 2,500 users have access to SPeRS, including regional managers, headquarters clerical staff and all senior executives. "They can run reports and queries against sales data to get some picture of the business," says Tom Sletten, a senior systems analyst. "They can view that sales information by store; by district, which is a group of stores; by region, which is a group of districts. They can also view each department or get a global view of the business, or just a specific SKU [stock-keeping unit], such as how many blue wrenches the hardware department in store X sold last month."
SPeRS uses a three-tier client/server architecture. The desktop client is a PowerBuilder application that provides the presentation services. Sears' desktop standard is IBM's OS/2 operating system. The top layer is the data warehouse, which resides on a Teradata system, a dedicated database machine also from NCR.
The middle layer in that three-tier environment is an NCR WorldMark 5100 MPP server running NCR's Unix System V.4-based operating system and the Top End TP monitor. This "Top End node," as Sletten calls it, provides three basic services on Unix: security, validation and reporting. When a user goes to access the SPeRS data warehouse, the security and validation services on the Top End node make sure the user is authentic and is authorized to access that data. Once those checks are complete, the data is downloaded to the Top End node, which also runs the report so the main database machine doesn't get bogged down.
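Stripped to its essentials, the flow through that node resembles the hypothetical middle-tier service below. It is written as a Tuxedo-style ATMI service only because that notation is widely known; Top End's programming interface differs, and check_user(), check_entitlement() and run_report() are placeholders for Sears' own security, validation and reporting logic.

    #include <atmi.h>

    extern int  check_user(const char *req);          /* placeholder: authenticate */
    extern int  check_entitlement(const char *req);   /* placeholder: authorize */
    extern void run_report(char *req);                /* placeholder: pull data, build report */

    /* Hypothetical middle-tier service: authenticate and authorize the
       request, then build the report on this node so the Teradata
       database machine is not tied up doing presentation work. */
    void SPERS_QUERY(TPSVCINFO *rqst)
    {
        if (!check_user(rqst->data) || !check_entitlement(rqst->data))
            tpreturn(TPFAIL, 0, rqst->data, rqst->len, 0);   /* reject the request */

        run_report(rqst->data);
        tpreturn(TPSUCCESS, 0, rqst->data, rqst->len, 0);    /* hand the report back */
    }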
When Sears began building SPeRS, it looked to Top End to provide access from virtually any system or any location. Like other open TP monitors, Top End can isolate layers of a client/server application from differences in hardware, operating systems, databases and network protocols. "Originally, we were using Top End to provide access to SPeRS from anywhere," explains Sletten. "We wanted to use PowerBuilder to provide access to the server and use Top End to connect to the mainframe. With Top End, you can write CICS applications that send Top End transactions to the warehouse server. That way we would need only one copy of the application logic on the Top End server."
As time went on, however, Sears realized it could also use Top End and its various communications and transaction services to get control over access to SPeRS as well. "We use Top End to monitor who's getting in and who's getting resources," says Sletten. "If we need to, we can also use it to monitor requests and provide queuing services. That way we can prioritize the queries for those who need quicker response. This provides us a single point of control."
Sletten says Sears doesn't use any of Top End's transaction management services, such as transaction integrity and rollback. All those functions have been pushed out to the client. "We use PowerBuilder to control database functions," he says.
Sletten says the only alternative to using an open TP monitor such as Top End was to go "straight OLTP." By that, he means that Sears could have constructed a two-tier, client/server OLTP environment that had a direct connection between client and the data warehouse. But that wouldn't have provided the control Sears was after.
Under straight OLTP, the desktop workstations would have had to wait for responses from the host. "With a TP monitor, we can put in requests and queue them," Sletten says. "This prevents the users from getting locked up, as they don't have to wait for a response. They can do useful work and check back later to see if their report has been processed and is ready. That way we can give control back to the workstation."
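The "check back later" pattern Sletten describes might look like the hypothetical fragment below, sketched with Tuxedo-style asynchronous ATMI calls as a stand-in for Top End's own interface; the REPORT service name is invented.

    #include <atmi.h>

    static int pending_cd = -1;        /* descriptor for the outstanding request */

    /* Fire off the report request and hand control straight back to the user. */
    int submit_report(char *reqbuf, long len)
    {
        pending_cd = tpacall("REPORT", reqbuf, len, 0);   /* asynchronous: no waiting */
        return pending_cd;
    }

    /* Poll later; TPNOBLOCK makes the call return at once if nothing is ready. */
    int report_ready(char **replybuf, long *len)
    {
        if (tpgetrply(&pending_cd, replybuf, len, TPNOBLOCK) == -1)
            return (tperrno == TPEBLOCK) ? 0 : -1;   /* 0 = still running, -1 = error */
        return 1;                                    /* the report is back */
    }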
In this way, Sears is able to get more than it originally bargained for out of its SPeRS data warehouse.
For a growing number of companies, information technology not only supports the business; IT is the business. In the case of FundServ, Inc., of Toronto, its business is middleware. FundServ's sole service is to provide a common clearinghouse and settlement service for some of Canada's largest mutual fund companies and their dealers and brokers, as well as independent financial advisors who are authorized to sell the funds. "We're the middle tier between the dealers and brokers below us, and the mutual fund companies above us," says Gordon Divitt, company president.
Five years ago, some of the largest mutual fund companies in Canada--including the local operations of such international concerns as Fidelity Investments and Templeton Funds--pooled resources to build a common clearinghouse and settlement service. The brokers, dealers and financial advisors take orders to purchase shares in the mutual funds of participating companies and submit those orders through a TCP/IP network backbone.
Those purchase orders pass through FundServ's data center in Toronto, which operates on a six-CPU Sun Microsystems SPARCcenter 2000 SMP server running the Solaris 2.4 version of Unix. This server runs two applications, the clearinghouse and the settlement system, which includes electronic funds transfers (EFTs). Both are written in Oracle's Pro*C and run atop an Oracle 7.1 relational database management system (RDBMS), which keeps a record of all transactions.
The purchase orders get passed on to the participating mutual fund companies. When they acknowledge those orders, that information in turn passes back across the TCP/IP backbone to the FundServ Sparccenter 2000, which routes them to the appropriate dealer, broker or financial advisor.
Bruce Pinn is vice president of Corellan Communications, a Toronto-based computing services company that operates the FundServ network and data center under contract. He says FundServ relies on the Tuxedo open TP monitor from BEA Systems to keep purchase orders and acknowledgments routed to the appropriate place.
FundServ originally had intended to use Tuxedo as a traditional TP monitor, to ensure transaction integrity, provide rollbacks in the event of a system failure and so forth. It didn't work out that way, and instead its primary use is for communications. "We currently use Tuxedo as a transaction routing mechanism, through our applications to the dealers, brokers and financial consultants on one side, and the mutual fund companies on the other," explains Pinn.
The business and application logic of the two applications is encoded in the TP monitor. This isolates the various components of the system from one another and from the underlying Oracle RDBMS. It also keeps information flowing: the application logic determines on the fly whether incoming information needs to go to the database or can simply be passed through to the other end.
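In outline, such a service examines each incoming message and decides whether it must touch the database before being passed along. The sketch below is hypothetical: FundServ's actual service names are not public, and needs_settlement_record() and record_order() stand in for the business rules and the Pro*C database code.

    #include <atmi.h>

    extern int  needs_settlement_record(const char *msg);   /* placeholder business rule */
    extern void record_order(const char *msg);              /* placeholder Oracle write */

    /* Hypothetical Tuxedo service: decide on the fly whether an order must
       be recorded in the settlement database or can pass straight through. */
    void ROUTE_ORDER(TPSVCINFO *rqst)
    {
        if (needs_settlement_record(rqst->data))
            record_order(rqst->data);

        /* Hand the message on to the service that talks to the fund company;
           tpforward passes the reply obligation along with it. */
        tpforward("TO_FUND_CO", rqst->data, rqst->len, 0);
    }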
FundServ's busiest time of the year is the first three to four months. As in the U.S., that's tax season for Canadians, who make the bulk of their contributions to tax-deductible retirement accounts early in the year. With Tuxedo routing transactions and balancing the processing load, "the Sun system has never even broken a sweat," says Pinn.
The decision to deploy an open TP monitor as part of the new IRS tax collection system at First Chicago NBD's First Chicago Mercantile Bank subsidiary grew out of a desire for high availability. Operational since June, the system electronically collects and processes business tax payments for the IRS, which has contracted out the development and operation of the new congressionally mandated system to the First Chicago Mercantile operation, a joint venture with another major bank. The IRS specifications require that the tax payment system be up and running every day.
"Our use of a TP monitor evolved out of our need for increased availability," says MaryAnn Anderson, the subsidiary's vice president of technology. Before realizing it might want a TP monitor, First Chicago Mercantile decided to start building the IRS system on a hardware architecture that would maximize availability. The hardware component of that architecture is an MPP environment built on a cluster of four NCR 3500 Series boxes that share the same Oracle 7.1 RDBMS. Each 3500 box contains four Intel Pentium CPUs. If any of the four boxes should fail, the other systems in the cluster will kick in and take over before any transactions are interrupted. The software to control the hardware is Top End, which manages the messages going back and forth between the different CPUs on the MPP system.
That achieved, First Chicago Mercantile also decided to use Top End for other purposes. The bank's main data center consists of a Hewlett-Packard HP 9000 and the four NCR 3500 systems. "Each serves a different function in the tax payment collection system," Anderson says.
The bank runs the Informix RDBMS on the HP 9000; that database keeps track of all tax payments received. The Oracle database, meanwhile, validates incoming transactions in real time. With Top End, the tax payment system can automatically write the same transaction to both databases.
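The effect Anderson describes is that of a single global transaction spanning two resource managers. Here is a rough sketch of the pattern, written in Tuxedo-style ATMI as a generic stand-in for Top End's interface; the ORACLE_VALIDATE and INFORMIX_LOG service names are invented.

    #include <atmi.h>

    int post_payment(char *payment, long len)
    {
        long olen = len;

        if (tpbegin(60, 0) == -1)          /* one global transaction, 60-second timeout */
            return -1;

        /* Each service is bound to a different resource manager; the TP
           monitor coordinates the commit across Oracle and Informix. */
        if (tpcall("ORACLE_VALIDATE", payment, len, &payment, &olen, 0) == -1 ||
            tpcall("INFORMIX_LOG",    payment, len, &payment, &olen, 0) == -1) {
            tpabort(0);                    /* neither database keeps the payment */
            return -1;
        }
        return tpcommit(0);                /* both databases record it, or neither does */
    }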
The bank's disaster recovery plan also depends on Top End. The bank operates a data center in Chicago and a duplicate "hot standby" site in another city. Top End keeps the bank's primary data center complex and its backup in sync. "We use Top End to simultaneously update multiple systems at multiple sites," she says. "Top End does data replication. It goes out and writes transactions to both our primary systems and the hot standby. That way I don't have to worry about how many sites I have to write to. I write to all the systems I need to in one transaction."
Philip J. Gill is a free-lance writer and editor based in San Diego. He can be reached at philipgill@aol.com.