NETWORK COMPONENTS
The client/server model of computing
The client/server architecture consists of client computers such as PCs sharing resources such as a database stored on more powerful server computers. Processing can be shared between the clients and the servers.
Client/server architecture is significant since most modern networked information systems are based on this structure. The client/server model involves a series of clients, typically desktop PCs, which are the access points for end-user applications. As shown in Figure 5.2, the clients are connected to a more powerful PC or server computer via a local-area network within one site of a company, or a wide-area network connecting different sites and/or companies. The network is made up of both telecommunications processors, which help route the information, and the channels and media which carry the information.
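The structure just described can be illustrated with a minimal sketch in Python. This is not from the text: the address, port and messages are invented for illustration, and the standard socket library stands in for the telecommunications processors and media. One process plays the server, holding a shared resource; the other plays the client PC, connecting over the network to request it.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050  # hypothetical address and port for this sketch

def server():
    """The server: listens on the network and holds the shared resource."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _addr = srv.accept()      # a client PC connects via the network
        with conn:
            conn.recv(1024)             # the client's request, e.g. b"GET price-list"
            conn.sendall(b"shared data held centrally on the server")

def client():
    """The client: the end-user's access point, requesting data from the server."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"GET price-list")
        print(cli.recv(1024).decode())  # shared data held centrally on the server

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)                         # give the server a moment to start listening
client()
```

In a real local- or wide-area network the two processes would run on separate machines; running them in one process here simply keeps the sketch self-contained.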
The server is a more powerful computer that is usually used to store the application and the data shared by the users. When a user wants to run a program on a PC in a client/server system, the applications, such as a word processor, will usually be stored on the hard disk of the server and then loaded into the memory of the client PC, running or ‘executing’ on the processor of the client. The document the user creates would be saved back to the hard disk of the server. This is only one alternative. One of the benefits of client/server is that there are many choices for sharing the workload between resources. The system designers can decide to distribute data and processing across both servers and client computers, as described in Chapter 11. There we also explain how different functions can be partitioned between client and server and the merits of using ‘thin’ or ‘fat’ clients, applications on the former being smaller and easier to maintain.
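This division of labour can be shown as a short hypothetical sketch (not the book's example): the dictionary server_disk and the three function names are invented, and the comments indicate where each step would run in a real system.

```python
# Hypothetical sketch of the workflow above: storage on the server,
# execution on the client. In a real system the dictionary below would be
# the server's hard disk and each call would travel over the network.

server_disk = {"report.txt": "draft text held on the server"}

def load_from_server(filename: str) -> str:
    """Runs on the server: read the shared file and send it to the client."""
    return server_disk[filename]

def edit_on_client(text: str) -> str:
    """Runs on the client: the application executes on the client's processor."""
    return text.upper()  # stands in for the user editing the document

def save_to_server(filename: str, text: str) -> None:
    """Runs on the server: write the updated document back to the server's disk."""
    server_disk[filename] = text

document = load_from_server("report.txt")  # loaded into the client PC's memory
document = edit_on_client(document)        # runs on the client's processor
save_to_server("report.txt", document)     # saved back to the server's hard disk
```

Moving edit_on_client onto the server would give the ‘thin’ client arrangement mentioned above; keeping it on the client gives the ‘fat’ one.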
To summarise, the main components of a client/server system shown in Figure 5.2 can be defined as follows:
- Client software is the interface by which the end-user accesses the software. It includes both the operating system, such as Windows 8, and the applications software, such as word processors. Increasingly, web-based browsers are being used as clients on a company intranet (a minimal sketch of the matching server software follows this list).
- Server software is used to store information, administer the system and provide links to other company systems. Again, this may be a web server or a database server.
- The application development environment provides interactive programming tools to develop applications through the application programming interface (API) of the package.
- The infrastructure or plumbing of the system. This is based on local- and wide-area networking techniques and consists of the telecommunication processors and media.
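As noted in the first bullet, a web browser increasingly plays the client role on a company intranet. Below is a minimal sketch of the matching server-software component, using Python's standard http.server module; the port number and page content are invented for illustration.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class IntranetHandler(BaseHTTPRequestHandler):
    """Server software: answers requests from browser clients on the intranet."""

    def do_GET(self):
        body = b"<html><body><h1>Company intranet home page</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any browser client on the local-area network can now visit
    # http://<server-address>:8080/ ; the browser is the client software.
    HTTPServer(("0.0.0.0", 8080), IntranetHandler).serve_forever()
```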
Why use client/server?
The adoption of the client/server architecture was part of a trend to ‘downsize’ from large mainframes with arrays of user terminals which had limited functionality. This latter type of architecture was widespread in businesses during the 1970s and 1980s. The client/server model represented a radically new architecture compared to the traditional, centralised method of a mainframe, with its character-based ‘dumb terminals’ which dated back nearly to the birth of computers. Rather than all the tasks involved in program execution (other than display) occurring on the mainframe, client/server gives the opportunity for them to be shared between a central server and clients. This gives the potential for faster execution, as processing is distributed across many clients.
Cost savings originally drove the introduction of client/server. PC-based servers were much cheaper than mainframes and, although the client PCs were more expensive than dumb terminals, the overall savings were dramatic. These savings were coupled with the greater ease of use of the new clients compared with the older terminals: the clients used graphical user interfaces which were easier to use thanks to a mouse, and the graphics could improve analysis of business data. Customisation of the client is also possible – the end-user is empowered through being able to develop their own applications and view data to their preference. Because queries are processed on the back end, only the results need to cross the network, reducing traffic. Centralised control of user administration, data security and archiving can still be retained.
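The point about back-end queries can be made concrete with a hypothetical sketch. Here Python's built-in sqlite3 module stands in for the server's database; the table, columns and query are invented. The client ships only a short query string, and only the matching rows come back, rather than the whole table crossing the network.

```python
import sqlite3

# The database lives on the server; an in-memory table stands in for it here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, value REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "North", 120.0), (2, "South", 80.0), (3, "North", 95.5)])

def run_on_server(sql: str, params: tuple) -> list:
    """Executes on the back end: only the result rows travel to the client."""
    return conn.execute(sql, params).fetchall()

# The client sends a short query string, not a request for the whole table.
rows = run_on_server("SELECT id, value FROM orders WHERE region = ?", ("North",))
print(rows)  # [(1, 120.0), (3, 95.5)]
```

If the filtering were done on the client instead, every row would have to be transferred across the network before any could be discarded.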
Alongside these advantages, there are also a host of system management problems which were not envisaged when client/server was first adopted. These have been partly responsible for the cost savings promised by ‘downsizing’ failing to materialise. To some extent there is now a backlash, in which the new ‘network-centric’ model is being suggested as a means of reducing these management problems. These disbenefits include:
- High cost of ownership. Although the purchase price of a PC is relatively low, the extra potential for running different applications, and for modification by end-users, means that much more can go wrong in comparison with a dumb terminal. More support staff are required to solve problems arising from the complex hardware and software. The Gartner Group estimates the annual cost of owning a PC using its total cost of ownership (TCO) measure.
- Instability. Client/server technology is often complex and involves integrating different hardware and software components from many different companies. Given this, client/server systems may be less reliable than mainframe systems.
- Performance. For some mission-critical applications, a smaller server cannot deliver the power required. In a travel agency business, for example, this will give rise to longer queues and poorer customer service. For this reason, many banks and travel agents have retained their mainframe-based systems where performance is critical. The use of a PC can also cause delays at the client end, as the screen takes a long time to redraw graphics compared to a teletext terminal.
- Lack of worker focus. Although PCs can potentially empower end-users, the freedom of choice can also lead to non-productive time-wasting, as users rearrange the colours and wallpaper on their desktop rather than performing their regular tasks!
Despite these difficulties, the compelling arguments of ease of use and flexibility of client/server still remain. The empowerment of the end-user to develop their own applications and to use and share the data as they see fit is now considered to be the main benefit of client/server.