Magazine article Mortgage Banking

Client/server Q&A

Article excerpt


Even the most casual of technology conversations these days is unlikely to progress far without the term "client/server" appearing. Yet even several years after the term was introduced, there is still confusion about the concept, particularly among business managers. Therefore, this month I start a series of columns on client/server, using as a framework those questions I am most frequently asked about this topic.

Q: Just what is the difference between client/server and traditional computing?

A: There are many definitions of "client/server," but my preferred one is "an architecture for computing in which applications and/or data are split between intelligent client workstations and their servers (which can be other workstations, midrange computers or mainframes), interconnected by means of a network."
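To make that definition concrete, here is a minimal sketch in Python of the split the column describes: an intelligent client that formats a request and displays the reply, and a server that holds the processing logic and responds over the network. The names, the port handling and the trivial "application" (upper-casing a request) are invented for illustration; they are not from the article.

```python
import socket
import threading

ready = threading.Event()
addr = {}  # the server records its port here once it is listening

def server():
    # Server side: owns the processing, waits for requests on the network.
    with socket.create_server(("127.0.0.1", 0)) as srv:
        addr["port"] = srv.getsockname()[1]
        ready.set()
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            # The "application" lives on the server; here it just upper-cases.
            conn.sendall(request.upper().encode())

def client(text):
    # Client side: an intelligent workstation that sends a request
    # and presents the server's reply to the user.
    ready.wait()
    with socket.create_connection(("127.0.0.1", addr["port"])) as sock:
        sock.sendall(text.encode())
        return sock.recv(1024).decode()

threading.Thread(target=server, daemon=True).start()
reply = client("loan balance inquiry")
print(reply)
```

The point of the sketch is the division of labor: the client is not a dumb terminal echoing keystrokes; it runs its own code and talks to the server only when it needs the server's data or processing.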

Traditional computing systems consist of a central processing unit (CPU) - a mainframe or minicomputer - and dumb terminals. "Dumb" means that they have no means of processing or storing data. Instead, they consist of a keyboard, a screen capable of displaying text characters and a mechanism for communication with the CPU.

Since all computing is done on the central machine, the CPU has to be all things to all people, capable of handling all tasks - applications as varied as batch reporting, on-line transaction processing, word processing, statistical analysis and tasks as varied as storage management, data base management, network management and printing. Teams of computing specialists are required to configure and monitor these systems so they run efficiently.

To keep up with demand, machine capacity was constantly upgraded. By the end of the 1980s, top-of-the-line mainframes cost in excess of $20 million. These jumps in capacity were heavy hits on the bottom line of companies.

To handle the diversity of applications, information/technology specialists began to logically partition the CPU, literally allocating sections of processing, memory and storage to batch, inquiry and reporting, data base and so on. This is a very complex environment to manage. Moreover, in the interests of reliability, this configuration has to be replicated elsewhere on another machine, so that in the event of failure the computing load can still be handled.

Client/server, by putting intelligence on the desktop (or now, for mobile users, the laptop), allows computing to be split between the client workstation and the central machine (server). This also allows for deployment of graphical user interface (GUI) technology, popularized by Microsoft's Windows.

Instead of a single server, we now have multiple servers, each configured to handle particular tasks in an optimal fashion (e.g., a data base server, application server, print server). Capacity can be added in smaller increments ($20,000 to $100,000), which is much more attractive from an accounting standpoint. Tying everything together is the network, which is now the real core of the computing architecture.
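The idea of specialized servers can be sketched as simple request routing: each server handles the one kind of task it is configured for, and the network layer dispatches work to the right machine. This is a hypothetical illustration (the server names and replies are invented, and real dispatching happens over a network, not a dictionary), but the structure is the same.

```python
# Each specialized server is modeled here as a handler function.
def database_server(req):
    return f"rows matching '{req}'"

def application_server(req):
    return f"result of computing '{req}'"

def print_server(req):
    return f"'{req}' queued for printing"

# The network's routing role, reduced to a lookup table.
SERVERS = {
    "query": database_server,
    "compute": application_server,
    "print": print_server,
}

def route(kind, payload):
    # Send the request to whichever server is configured for this task.
    return SERVERS[kind](payload)

print(route("print", "monthly delinquency report"))
```

Because each task type has its own server, capacity is added where the load actually is: a busier data base means a bigger (or an additional) data base server, not a forklift upgrade of one central machine.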

Q: Client/server just sounds like the latest technology buzzword. Is this for real?

A: Yes. Companies large and small are making huge investments in retooling their information systems (and retraining the people who build and maintain them). More and more technology vendors are building systems based on the client/server architecture. The technology has been evolving rapidly, and there are hundreds of client/server systems in place. …
