The remaining steps in the systems development process translate the solution specifications established during systems analysis and design into a fully operational information system. These concluding steps consist of programming, testing, conversion, production, and maintenance.
Programming
During the programming stage, system specifications that were prepared during the design stage are translated into software program code. Today, many organizations no longer do their own programming for new systems. Instead, they purchase the software that meets the requirements for a new system from external sources such as software packages from a commercial software vendor, software services from an application service provider, or outsourcing firms that develop custom application software for their clients.
Testing
Exhaustive and thorough testing must be conducted to ascertain whether the system produces the right results. Testing answers the question, “Will the system produce the desired results under known conditions?” The amount of time needed to answer this question has traditionally been underrated in systems project planning. Testing is time-consuming: Test data must be carefully prepared, results reviewed, and corrections made in the system. In some instances, parts of the system may have to be redesigned, and some companies are starting to use cloud computing services for this work. The risks resulting from glossing over this step are enormous.

Testing an information system can be broken down into three types of activities: unit testing, system testing, and acceptance testing. Unit testing, or program testing, consists of testing each program in the system separately. It is widely believed that the purpose of such testing is to guarantee that programs are error-free, but this goal is realistically impossible. Testing should be viewed instead as a means of locating errors in programs, focusing on finding all the ways to make a program fail. Once they are pinpointed, problems can be corrected.
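As a rough illustration of unit testing in this spirit (the payroll function and test cases below are invented for this sketch, not taken from the text), a small Python test suite probes a single program with inputs chosen to make it fail, such as overtime hours and invalid values:

    import unittest

    def compute_gross_pay(hours, rate):
        """Hypothetical payroll routine under test: time-and-a-half beyond 40 hours."""
        if hours < 0 or rate < 0:
            raise ValueError("hours and rate must be non-negative")
        overtime = max(0, hours - 40)
        regular = hours - overtime
        return regular * rate + overtime * rate * 1.5

    class TestComputeGrossPay(unittest.TestCase):
        def test_regular_hours(self):
            self.assertEqual(compute_gross_pay(40, 10.0), 400.0)

        def test_overtime_hours(self):
            # 40 regular hours plus 5 overtime hours at 1.5x: 400 + 75
            self.assertEqual(compute_gross_pay(45, 10.0), 475.0)

        def test_negative_hours_rejected(self):
            # Deliberately try to make the program fail on invalid input
            with self.assertRaises(ValueError):
                compute_gross_pay(-1, 10.0)

    if __name__ == "__main__":
        unittest.main()

Each test documents a condition under which the program could fail; passing tests do not prove the program error-free, they simply show that no error was found for those conditions.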
System testing tests the functioning of the information system as a whole. It tries to determine whether discrete modules will function together as planned and whether discrepancies exist between the way the system actually works and the way it was conceived. Among the areas examined are performance time, capacity for file storage and handling peak loads, recovery and restart capabilities, and manual procedures. Acceptance testing provides the final certification that the system is ready to be used in a production setting. Systems tests are evaluated by users and reviewed by management. When all parties are satisfied that the new system meets their standards, the system is formally accepted for installation. The systems development team works with users to devise a systematic test plan. The test plan includes all of the preparations for the series of tests we have just described.
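To make the idea of a test plan concrete, its entries can be pictured as structured records, each naming the condition being tested, the procedure, and the expected result. The sketch below is purely illustrative; the field names and cases are invented and it is not the book's Figure 13.5:

    # Hypothetical test plan entries; field names and cases are illustrative only.
    test_plan = [
        {
            "test_no": "2.1",
            "condition": "Change employee record: valid address change",
            "procedure": "Key in the new address and save the record",
            "expected_result": "Record is updated and the change appears on the inquiry screen",
        },
        {
            "test_no": "2.2",
            "condition": "Change employee record: invalid date entered",
            "procedure": "Key in a date of 13/45/2025 and attempt to save",
            "expected_result": "System rejects the entry and displays an error message",
        },
    ]

    for case in test_plan:
        print(f"{case['test_no']}: {case['condition']} -> expected: {case['expected_result']}")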
Figure 13.5 shows an example of a test plan. The general condition being tested is a record change. The documentation consists of a series of test plan screens maintained on a database (perhaps a PC database) that is ideally suited to this kind of application.

Conversion

Conversion is the process of changing from the old system to the new system. Four main conversion strategies can be employed: the parallel strategy, the direct cutover strategy, the pilot study strategy, and the phased approach strategy.

In a parallel strategy, both the old system and its potential replacement are run together for a time until everyone is assured that the new one functions correctly. This is the safest conversion approach because, in the event of errors or processing disruptions, the old system can still be used as a backup. However, this approach is very expensive, and additional staff or resources may be required to run the extra system.
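In practice, a parallel run means feeding the same transactions to both systems and comparing their outputs before the old system is retired. The following sketch is illustrative only; the two payroll functions stand in for whatever the old and new systems actually compute:

    # Minimal sketch of a parallel-run comparison; both system interfaces are hypothetical.

    def old_system_payroll(employee):
        """Stand-in for the legacy system's calculation."""
        return round(employee["hours"] * employee["rate"], 2)

    def new_system_payroll(employee):
        """Stand-in for the replacement system's calculation."""
        return round(employee["hours"] * employee["rate"], 2)

    def compare_parallel_runs(employees):
        """Run both systems on the same input and report any discrepancies."""
        discrepancies = []
        for emp in employees:
            old_result = old_system_payroll(emp)
            new_result = new_system_payroll(emp)
            if old_result != new_result:
                discrepancies.append((emp["id"], old_result, new_result))
        return discrepancies

    employees = [
        {"id": 101, "hours": 40, "rate": 15.0},
        {"id": 102, "hours": 38, "rate": 22.5},
    ]

    if not compare_parallel_runs(employees):
        print("Outputs match for this batch; confidence in the new system grows.")

Discrepancies are not failures in themselves; they tell the team exactly which transactions to investigate before cutting over.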
The direct cutover strategy replaces the old system entirely with the new system on an appointed day. It is a very risky approach that can potentially be more costly than running two systems in parallel if serious problems with the new system are found. There is no other system to fall back on. Dislocations, disruptions, and the cost of corrections may be enormous. The pilot study strategy introduces the new system to only a limited area of the organization, such as a single department or operating unit. When this pilot version is complete and working smoothly, it is installed throughout the rest of the organization, either simultaneously or in stages.
The phased approach strategy introduces the new system in stages, either by functions or by organizational units. If, for example, the system is introduced by function, a new payroll system might begin with hourly workers who are paid weekly, followed six months later by adding salaried employees (who are paid monthly) to the system. If the system is introduced by organizational unit, corporate headquarters might be converted first, followed by outlying operating units four months later.
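A phased conversion by organizational unit can be pictured as a rollout schedule that decides, for any unit on any date, which system should handle its work. The sketch below uses invented unit names and dates purely for illustration:

    from datetime import date

    # Hypothetical rollout schedule: each unit switches to the new system on its cutover date.
    ROLLOUT_SCHEDULE = {
        "corporate_hq": date(2025, 1, 1),
        "plant_east": date(2025, 5, 1),
        "plant_west": date(2025, 9, 1),
    }

    def system_for(unit, today):
        """Route a unit's processing to the new system only on or after its cutover date."""
        cutover = ROLLOUT_SCHEDULE.get(unit)
        if cutover is not None and today >= cutover:
            return "new_system"
        return "old_system"

    print(system_for("corporate_hq", date(2025, 3, 15)))  # new_system
    print(system_for("plant_west", date(2025, 3, 15)))    # old_system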
Moving from an old system to a new one requires that end users be trained to use the new system. Detailed documentation showing how the system works from both a technical and end-user standpoint is finalized during conversion for use in training and everyday operations. Lack of proper training and documentation contributes to system failure, so this portion of the systems development process is very important.
Production and Maintenance
After the new system is installed and conversion is complete, the system is said to be in production. During this stage, the system will be reviewed by both users and technical specialists to determine how well it has met its original objectives and to decide whether any revisions or modifications are in order. In some instances, a formal postimplementation audit document is prepared. After the system has been fine-tuned, it must be maintained while it is in production. Changes in hardware, software, documentation, or procedures to a production system to correct errors, meet new requirements, or improve processing efficiency are termed maintenance.
Approximately 20 percent of the time devoted to maintenance is used for debugging or correcting emergency production problems. Another 20 percent is concerned with changes in data, files, reports, hardware, or system software. But 60 percent of all maintenance work consists of making user enhancements, improving documentation, and recoding system components for greater processing efficiency. The amount of work in the third category of maintenance problems could be reduced significantly through better systems analysis and design practices. Table 13.2 summarizes the systems development activities.