Introduction to Client/Server


Client/Server and Multi-tier Design

After years of attending seminars and reading articles you have been brow-beaten into believing client/server design has advantages over the monoliths companies typically create. Now what? How do you design a server? How do you design its interface and how do clients communicate with it?

Client/server systems are not just for databases. Client/server design has real and immediate benefits to software developers, regardless of the size of your system. This paper's purpose is to demystify client/server by examining the design and technological considerations of building your own client/server systems. This paper is not theory based, but instead is the result of the development and implementation of over 20 multipurpose client/server systems.

Client/Server Evolution

First, it's important to realize client/server is not a new software paradigm, as much as consultants and trade magazines would like you to believe it is. It is one step on an evolutionary path to distributed computing. Understanding where it came from will explain its purpose and increase your ability to exploit it.

To see why it exists, let's tour its ancestry first: structured programming, modular design, static libraries, runtime libraries, and inter-process communications and networks.

Structured Programming

Structured programming, thought by many to be the sole province of Pascal, was one of the first steps to organize programs into discrete blocks of code that flowed from one to another, eliminating what is still referred to as spaghetti code (a single noodle winding itself around an algorithm for which debugging is analogous to sucking one end of the noodle until the other is liberated, but not nearly as gourmet). To encourage structured programming, many third-generation languages like BASIC built control-of-flow statements into the language itself, making constructs like loops, ifs, and if-elses part of the language grammar. Even so, computer programs were still written as self-contained entities responsible for all their own services, basically everything between the user's monitor and the computer hardware.

Modular Design

It wasn't long (read: hours) before programmers discovered many of the functions required in one program were needed in many, and rather than re-code the function each time, it was easier to include the templates of the function in the source code. It made more sense to use Joe's code for calculating SINs and ARCs, because Joe's code worked, than it did to write your own.

Eventually, computer hardware manufacturers added instructions to computer chips to simplify jumping from one code segment to another and returning to where you started; functions invoked this way are often called subroutines. Fortran, another computer language, builds many mathematical functions into the language itself; these are called intrinsic functions.

The whole modularization of code continued by leaving each function in its own file and letting another program link them all into the final program. This process is called linking or binding.
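As a minimal sketch of the idea (the file names, function, and compiler commands are hypothetical, assuming a C toolchain), a reusable function lives in its own source file and the linker binds it to the program that calls it:

    /* half.c - a reusable function kept in its own source file */
    double half(double x) { return x / 2.0; }

    /* main.c - the application that calls it */
    extern double half(double);
    int main(void) { return (int)half(8.0); }

Each file is compiled separately, then the linker binds the object files into the final program:

    cc -c half.c               (produces half.o)
    cc -c main.c               (produces main.o)
    cc -o app main.o half.o    (links them into the executable app)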

Static Libraries

Static libraries are collections of functions with a similar theme, like math or screen I/O, bundled in a single file called a library. This made linking faster and easier. For developers, distributing (publishing) and controlling libraries was easier than distributing and controlling hundreds of subroutines. Unfortunately, fixing or improving a library started a chain reaction: the library had to be rebuilt, every application that used it had to be relinked, and every relinked application had to be redistributed to every machine it ran on.
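Continuing the hypothetical sketch above, the individual object files can be bundled into a single library file and the application linked against the bundle instead of against dozens of separate objects:

    ar rcs libutil.a half.o            (collect the objects into one library file)
    cc -o app main.o -L. -lutil        (link the application against the library)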

Although libraries improved the process, there was still much to be desired.

Runtime and Dynamic Libraries

Static libraries were linked with the program before the program was executed (run) by the user. Dynamic libraries, also called runtime libraries, are linked with the program immediately after (or simultaneously with) the program being executed. Now, the code in the library may be modified independently of the application as long as the interface between them remains unchanged. This means libraries can be repaired or improved without modifying or relinking (statically) the application. In fact, once they were separated from the application, major changes could be made.

Although there was still a chain reaction, it was smaller and included fewer files: only the library itself had to be rebuilt and redistributed; the applications that used it were left untouched.
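As a minimal sketch of runtime linking, assuming a POSIX system and continuing the hypothetical library above (now built as a shared library, libutil.so), the application loads the library and resolves the function by name while it is running:

    /* runtime.c - load a library and resolve a function after the program starts */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Link with the library at run time rather than at build time. */
        void *lib = dlopen("./libutil.so", RTLD_NOW);
        if (lib == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Only the interface (the name and signature of half) must stay stable;
           the code behind it can be repaired or improved independently. */
        double (*half)(double) = (double (*)(double))dlsym(lib, "half");
        if (half != NULL)
            printf("half(8.0) = %f\n", half(8.0));

        dlclose(lib);
        return 0;
    }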

IPCs, RPCs, and Networks

When modules are linked together, either statically or dynamically, they share the same logical address space. The logical address space is the memory programs use to store code and data. For one module to call another, only the address of the next piece of code needs to be changed. Interprocess communications (IPCs) and remote procedure calls (RPCs) are mechanisms that allow a program in one address space to invoke functions from another in a separate address space. Since IPCs and RPCs are messages (similar to e-mail) and messages can travel over networks, the second program doesn't even need to be on the same computer as the first; it only has to be on the same network. This is the cornerstone of client/server computing. Client/server is the natural, evolutionary maturation of dynamic libraries.

The exciting potential of client/server programming is just now being realized as businesses begin to install networks. Libraries, running as processes (servers) anywhere on the network, have become specialized into database, transaction, and output services, to name a few. Now, instead of simply reusing code we reuse entire processes.
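As a minimal sketch of the idea, here is a client making a "call" on a server elsewhere on the network using BSD sockets, one common IPC mechanism; the address, port, and request text are invented for illustration:

    /* client.c - send a request message to a server process over TCP */
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in server;
        memset(&server, 0, sizeof(server));
        server.sin_family = AF_INET;
        server.sin_port   = htons(5000);                       /* server's port    */
        inet_pton(AF_INET, "192.168.1.10", &server.sin_addr);  /* server's address */

        if (connect(sock, (struct sockaddr *)&server, sizeof(server)) == 0) {
            /* The "call" is just a message; the reply is just another message. */
            const char *request = "GET-BALANCE 1234\n";
            write(sock, request, strlen(request));

            char reply[256];
            ssize_t n = read(sock, reply, sizeof(reply) - 1);
            if (n > 0) {
                reply[n] = '\0';
                printf("server replied: %s", reply);
            }
        }
        close(sock);
        return 0;
    }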

In the Real World

Windows and OS/2 both use the filename extensions .LIB and .DLL to denote static and dynamic libraries respectively. Both also have several flavors of IPCs. The main limitation of Windows (prior to Windows 95 and NT) is the absence of truly separate address spaces for programs (a DOS limitation). This allows an errant word processor or spreadsheet to pollute the entire machine, forcing the user to reboot Windows and sometimes the whole computer. Because OS/2, Windows 95, and NT actually use true logical address spaces, a program's attempt to address memory it doesn't own (memory outside its logical address space) causes a trap that is handled by the operating system, and only the offending program is terminated. The rest of the machine remains unaffected.

UNIX has several IPC mechanisms as well as protected logical address spaces for programs. It is difficult for an unprivileged program to crash the operating system. Because of these features, client/server systems are typically more reliable on OS/2, Windows NT, and UNIX than on Windows. Another advantage UNIX (and other more robust operating systems) has is its maturity. Flavors of UNIX have been around since the early '70s and have been steadily improved since.

Benefits to Developers

The benefits of client/server design to end-users have been trumpeted for years. But the benefits aren't lopsided in end-users' favor. There are benefits to developers as well.

Reliability

Because server routines are separated (physically) from application code, there is a reduced chance of one corrupting the other. This is not the case with static or runtime libraries. Libraries ultimately share the same logical address space with the application, risking the traumas of wayward pointer manipulation. Servers, however, are insulated, running as their own process in their own address space and, quite frequently, on their own machine. In the case of machine failure, redundant servers elsewhere on the network (along with the necessary error-recovery code in the RPC library) can continue processing client requests, all transparent to end-users.

In many software projects, bugs are found at a rate proportional to their use. The more a function is used, the more likely each of its logic branches will be taken. Additionally, the longer a process runs, the more likely memory leaks and other wear-items are to be discovered. Server processes epitomize both scenarios: they are frequently used and run for a long time. Both of these features commonly yield more bugs in the development and testing phase than in typical, library-only systems, saving developers grief and agony by finding bugs early, before customers do.

Security

RPCs provide as much protection to servers as they do to clients. More so, in fact, if access security is implemented. Logon security is an important firewall for database, transaction, and mail servers.

Mobility/Porting

Instead of porting your entire system, you can port just the client software or the server software. To get into new markets or new platforms your whole application doesn't have to move, just the part you want to move. This is where the concept of software plumbing originates.

Competitive Strategy

To penetrate markets typically inaccessible to you because of incompatible hardware, it's much easier to introduce a small, self-contained black box than it is to convert a prospect's accounting system to your technology or your software to theirs. Making the leap onto your software is now far less of a financial risk (in terms of time and money) than would normally be the case.

Server software also allows aggressive software companies to develop interfaces that resemble their competitors' interfaces, but use their own back-end.

Organizations are typically less opposed to new hardware being introduced into their shops than they are to converting their database.

Standard Interface

Once a standard interface is created, any number of client applications can be connected to it. If the developer wanted, and if the interface was designed well enough, the interface could be published, inviting third parties to develop front-ends for your server. A thriving third-party market will give your system the staying power and momentum your competitors don't have.

If you develop multiple servers with identical interfaces (e.g., ASCII-based), a single front-end application could attach to each of them, providing an immediate, interactive test harness.
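A hypothetical exchange over such an ASCII interface, with the commands and replies invented purely for illustration, might look like this, whether typed by hand or driven by the shared front-end against any server that honors the interface:

    > STATUS
    < OK 14 clients connected
    > GET-BALANCE 1234
    < OK 152.33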

Easier Software Distribution

If your company distributes both client and server software to networked environments, imagine the ease of upgrading customers' systems by simply loading the server onto a single machine. This isn't unlike the benefits of using runtime libraries, except that runtimes must be loaded onto each machine where the software that uses them will run.

Easier to Debug

Servers are incredibly easy to debug. Because they are independent of applications, they can be started under a debugger (or attached to by a debugger) without changing the client's environment.

Because of the way servers are traditionally designed, a breakpoint can be set at the entry to the server to catch all requests from all clients. This is also a great place to put logging routines.
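As a rough sketch of that tradition, assuming a C server whose request and reply structures and handler names are invented for illustration, a single dispatch routine gives you one place to break and one place to log:

    /* dispatch.c - the single entry point every client request passes through */
    #include <stdio.h>
    #include <time.h>

    typedef struct { int opcode; char payload[128]; } Request;
    typedef struct { int status; char payload[128]; } Reply;

    Reply dispatch(const Request *req)
    {
        /* A breakpoint here catches every request from every client; it is
           also the natural spot for a logging routine. */
        fprintf(stderr, "[%ld] request opcode=%d\n", (long)time(NULL), req->opcode);

        Reply reply = { 0, "" };
        switch (req->opcode) {
            case 1:  /* reply = handle_query(req);  */ break;
            case 2:  /* reply = handle_update(req); */ break;
            default: reply.status = -1; break;
        }
        return reply;
    }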

Easier to Support

Because the server is separated from applications, support personnel don't have to be as concerned about the applications. They can focus on the server and the diagnostics available and ignore the application (until the process of elimination proves the problem is not the server's).

Metrics

Because servers are typically an end-point to multiple clients, they are also the best place to collect statistical information about the system. How many clients have been on? How many are on now? What is the average response time? What's been the slowest response time? What are the clients doing now? Since you can't tune what you can't measure, servers are a great place to start measuring, because everything can be collected in one spot.
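Continuing the hypothetical dispatch sketch above, a handful of counters maintained at the server's single entry point answers most of these questions; the field names are invented for illustration:

    /* metrics.c - simple statistics kept where every request already passes */
    typedef struct {
        long   requests_total;    /* how many requests have been served       */
        long   clients_connected; /* how many clients are attached right now  */
        double slowest_seconds;   /* worst response time observed so far      */
    } Metrics;

    static Metrics stats;

    void record_request(double elapsed_seconds)
    {
        stats.requests_total++;
        if (elapsed_seconds > stats.slowest_seconds)
            stats.slowest_seconds = elapsed_seconds;
    }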

Economics

Real business reasons are needed to fund new development. There are more than a few economic reasons to begin implementing client/server designs, and they are more than superficial.

Two main financial reasons to develop client/server systems follow a similar theme: first, for new systems, lowering the cost of entry-level client machines; second, preserving investments in equipment already purchased.

Desktop Workstations: By moving much of the heavier processing to servers that have the capacity to support it, client machines don't require as powerful a CPU, as much memory or disk space, or the special hardware that is available on the server. It is also quite possible that the client machines may be dumb terminals or PC-like devices on which developers don't care to, or cannot, develop sophisticated software.

If there already exists a large inventory of older PCs (either yours or your client's), for the cost of a network card these boxes may benefit from the processing power/functionality of the server.

Alternatively, if they're still too cheap to invest in network cards, some software for communicating over the serial or parallel ports could do the trick. Even at the falling prices of newer hardware, some controllers may still be reluctant to give up the old. A cost justification for developing serial communications vs. installing network cards would be prudent.

Client/Server Casualties

According to most publications (technical and non-technical), client/server success stories are far outnumbered by client/server failures. This is not surprising considering two things: the common belief that client/server applies strictly to databases and GUIs, and the fact that when it does work it provides a decided advantage to the succeeding company, which is not about to share its strategy with competitors. This also accounts for many consulting organizations' announcements of client/server's low acceptance and premature death. Interestingly, the second point accounts for the myth that client/server is new: companies employing it back in the '70s weren't running ads or writing articles about their designs for fear they'd lose the advantage they'd just created.

Consider a typical scenario: a company is trying to downsize by porting applications from the mainframe to "client/server." However they go about it and whatever tools they end up using (CASE or otherwise), two tasks are initiated: converting the database into a relational model and developing user-friendly front-ends for end-users. If they're assisted by an "experienced" consultant, they may even have used separate hardware architectures for the two sides, thereby implementing textbook client/server at both the hardware and software levels. Right?

When it's boiled down to that, it's easy to see why the expected benefits were rarely achieved. They basically re-created the same programs, but now constructed them on two machines, often using two operating systems, and with new database technology. Even if these aren't the only reasons, they certainly contribute to potential failures.

Demystification

Clearly, client/server programming is not rocket science. Reports of its sophistication and complexity have been exaggerated. Considering where client/server came from, it's easy to understand what it is and how it's used.

The next step to client/server's demystification is the realization that client-server relationships and interfaces exist all around you. The more easily you recognize these relationships in your everyday life the easier it is to recognize them in software.

Consider:
A man goes to a local diner and sits at his favorite booth. He picks up the menu and selects his lunch choice. The waiter writes down his order and retreats to the kitchen, leaving the order with the cook. The cook prepares the meal, the waiter picks it up and delivers it to the man. After completing his meal he leaves a tip, pays the cashier, and exits the diner.

In this simple and ubiquitous example, there are multiple clients, multiple servers, and multiple interfaces, some so obvious we overlook them:

    Client     Server      Interface
    man        waiter      menu
    waiter     cook        order
    man        cashier     bill

It is interesting to note the waiter was both a server and a client. Below are some things to consider along with the computer/OS/network features that facilitate them:

What might a non-client/server example look like?

Extra Credit question:


Copyright © 1993, 1997, Isect

For more information, contact Isect.