THE LONG AND SHORT OF HORROR STORIES

by Wayne M. Krakau - Chicago Computer Guide, November 1993

You've all heard about them. You've probably read about them quite frequently (including in the pages of this publication). They are downsizing and client-server horror stories.

The story lines usually run along similar tracks. A large firm with a well-run mainframe (or minicomputer) installation needs to cut costs or reduce the typical two-year backlog of mainframe applications. The potential solution is downsizing - using PCs and networks to replace functions previously performed by larger computers.

These days, the use of client-server technology is often a major part of the solution. This involves separating the information requesting and displaying functions of a database from the searching and supplying functions. The workstation retains only the requesting and displaying functions, while the server takes over the searching and supplying.

(Note to techies: Stop shaking your heads and clucking your tongues - this simplistic definition is good enough for the sake of this discussion.)
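
For readers who would rather see that division of labor than just read about it, here is a minimal sketch in Python (a language chosen purely for brevity). The customer data, the record layout, and the function names are all invented for illustration, and the network plumbing that a real client-server product would provide is reduced to an ordinary function call.

    # Minimal client-server sketch. All names and data are hypothetical.
    # The "server" side owns the data and does the searching and supplying;
    # the "client" side only requests records and displays the results.

    CUSTOMERS = {                  # the data lives with the server
        101: {"name": "Acme Tool & Die", "balance": 1250.00},
        102: {"name": "Lakeshore Printing", "balance": 87.50},
    }

    def server_search(customer_id):
        """Server side: search the database and supply the matching record."""
        return CUSTOMERS.get(customer_id)

    def client_request_and_display(customer_id):
        """Client side: request a record and display it to the user."""
        record = server_search(customer_id)  # in a real system this call crosses the network
        if record is None:
            print(f"Customer {customer_id} not found.")
        else:
            print(f"{record['name']}: balance ${record['balance']:.2f}")

    client_request_and_display(101)   # found and displayed
    client_request_and_display(999)   # not found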

Sometimes outside "consultants" are called in. Other times the in-house staff tries to "wing it" (usually good for a few laughs).

After the transition to PCs and LANs is made, all hell breaks loose. Those nasty ol' PCs and associated client-server databases fail the company in a big way. The applications don't work right. The computers break down. Users have nervous breakdowns. Thousands (millions?) of dollars are lost. Dynasties fall. Peace talks fail. Talk shows are canceled. (You get the picture.)

What you don't see in these articles are the facts on the conditions prior to the downsizing adventure. The most important overriding fact: Implementing any new technology is a potential managerial nightmare. This applies to new mainframe and minicomputer systems (both hardware and software) just as much as it applies to PCs and networks.

Many mainframe computers crash (suffer a complete stoppage) on a regular basis. Mainframe users often state that they expect this! It has been this way for so long - since long before PCs existed - that they think this is standard operating procedure!

Also, mainframe applications are often so unreliable that I have encountered users who would actually pre-calculate the results of equations by hand so that they wouldn't accidentally enter a combination of data that would lock up an individual application and corrupt the underlying database. Convincing them that they can enter data at full speed, once an application has been redesigned to work reliably, can be very difficult. They are so used to mistrusting computer people that they simply won't believe that a reliable application can be created.

It is not uncommon for mainframe programming projects to progress so slowly that they are declared obsolete and canceled prior to completion - with write-offs that can run into the millions of dollars. That, of course, assumes that the programs would have filled the users' needs in the first place. Many wouldn't have. Users are normally the lowest personnel on the programming-design totem pole, so their opinions and true business needs get filtered out of the equation.

This is not to say that the data processing professionals involved are incompetent or that they just don't care. In fact, my personal belief is that a prime reason for the high turnover rate among data processing staff is their emotional and ethical dissatisfaction over being involved in inadequate and buggy computer systems.

The problem is that large, complicated computer projects (regardless of computer size) require teams of people from diverse backgrounds - users, applications programmers, systems programmers, systems analysts, business analysts, assorted managers, etc. - to work in concert despite limited resources. These limits include too few personnel, inadequate training, unrealistic budgets, and - my own favorite - artificial deadlines made up to fulfill political, as opposed to business, needs. These factors work together to chip away at the reliability and practicality of computer systems.

Another factor is the overall size of the team. Research into software engineering has shown that problems in large computer projects are usually "solved" by throwing "warm bodies" (I am not making this term up - it's really used!) at the project. This same research has also shown, conclusively, that the more people involved in a project, the lower its productivity, accuracy, usability, and reliability. There is even a point beyond which adding people makes the project impossible to complete!
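
One common way to make that finding concrete - borrowed from the general software engineering literature rather than from any study cited here - is to count communication paths. With n people on a team there are n(n-1)/2 possible pairs who may need to coordinate, so the coordination burden grows far faster than the headcount. A quick, hypothetical calculation:

    # Hypothetical illustration: communication paths grow much faster than headcount.
    # With n team members there are n * (n - 1) / 2 distinct pairs to keep in sync.

    def communication_paths(team_size):
        """Number of distinct pairs of people who may need to coordinate."""
        return team_size * (team_size - 1) // 2

    for team_size in (3, 10, 25, 50):
        print(f"{team_size:2d} people -> {communication_paths(team_size):4d} communication paths")

Going from 10 people to 50 multiplies the headcount by five, but it multiplies the number of paths by more than twenty-five - one plausible reading of why the "warm bodies" approach backfires.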

The task at hand in this article is not to find a cure for these problems. Successful projects do occur, so we know that the problems are inherently curable. The real task is to step back and look at the reports of failed downsizing and figure out what the track record was before the downsizing took place. How effective was the company involved when it did its last mainframe (or mini) upgrade or system redesign? If they had problems in a field where they had major in-house expertise, they could easily have problems with a technology in which they had little or no experience.

Also, find out why the company is downsizing. Often, the reason is that the existing system is an unmanageable mess. Again, unless a major methodology change is undertaken, future projects, whether downsizing or not, are likely to end up in even worse shape.

©1993, Wayne M. Krakau