

PERFORMANCE EVALUATION OF TIME SHARING SYSTEMS

T. W. Potter

Bell Telephone Labs

Introduction

One of the most severe problems we face today with a new in-house time-sharing system is that it can become overloaded and degraded almost overnight. If it were possible to determine under what load conditions time-sharing performance degrades beyond acceptable limits, then one could estimate when to upgrade. More important than knowing when to upgrade is knowing what to upgrade in order to obtain the most cost-effective performance possible. This paper presents techniques that have helped to solve these problems. It begins by describing methods of analyzing how users actually use the time-sharing system. Through these methods one can create reasonable synthetic jobs that represent the user load. These synthetic jobs can be run in benchmarks under a careful experimental design to obtain response curves. The response curves help determine under what load conditions the time-sharing system will degrade beyond acceptable limits, and which method of upgrading is most cost-effective. This information, together with a reasonable growth study, which is also discussed, will keep the time-sharing system from severe performance degradation.

The response data obtained from benchmarks can also be used to create an analytical model of the time-sharing system, as well as to validate other models. Such models and their usefulness are discussed below.
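To make the modeling step concrete, the following is a minimal sketch of one standard analytical form for a closed interactive system: exact mean value analysis of a queueing network driven by terminal users with a fixed think time. The station demands and think time below are hypothetical illustrations, not measurements from this paper.

# A minimal sketch of an analytical model for a closed interactive system:
# exact mean value analysis (MVA) with n_users terminals cycling between
# thinking and service. All numeric inputs below are hypothetical.

def mva_response_time(demands, think_time, n_users):
    """Return mean response time (seconds) for n_users terminals.

    demands    -- total service demand (sec/interaction) at each queueing
                  station, e.g. [CPU, disk channel]
    think_time -- mean user think time between interactions (sec)
    """
    queue = [0.0] * len(demands)      # mean queue length at each station
    resp = 0.0
    for n in range(1, n_users + 1):
        # residence time at each station with n customers in the network
        station_resp = [d * (1.0 + q) for d, q in zip(demands, queue)]
        resp = sum(station_resp)
        throughput = n / (think_time + resp)   # interactions per second
        queue = [throughput * r for r in station_resp]
    return resp

# Hypothetical profile: 0.2 s CPU and 0.5 s disk per interaction, 15 s think time.
for users in (10, 20, 40, 60):
    print(users, round(mva_response_time([0.2, 0.5], 15.0, users), 2))

A model of this kind, once validated against benchmark response data, lets one explore loads and configurations without rerunning the benchmark.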

Some new in-house computer systems can provide batch, remote batch, transaction processing, and time-sharing services simultaneously. These sophisticated systems do not lend themselves readily to performance analysis, so the task of keeping the time-sharing system tuned for maximum performance, or minimum response time, is difficult. To make the techniques for keeping time-sharing response minimal more rigorous, this paper separates time-sharing from the other environments, or dimensions, and presents the resulting techniques; they can then be extended to the multi-dimensional case.

Techniques

Let us assume we are given the task of creating a five-year growth plan for an existing in-house time-sharing system. If we knew what kinds of users are on the system and how they are using it, we would know the usage characteristics, or user profiles, of the time-sharing system. Given the estimated growth of time-sharing, we could then determine the system's reaction to these users.

Therefore, if we know the growth (Figure 1) and we know how the time-sharing users are using the system at present (the user profile), then by benchmarking we can determine the system's reaction at higher loads and obtain a response curve (Figure 2). This assumes, of course, that we can define response time. If we re-benchmark using different configurations, we can then see the effect of alternate configurations on performance.

[Figure 1 (time-sharing growth) and Figure 2 (response curve) lost in scanning.]
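As a hypothetical illustration of how the two curves combine, the sketch below interpolates a benchmark-derived response curve R(n) and a growth forecast n(t) to estimate when response time will first exceed an acceptable limit. All numbers, including the 3-second limit, are invented.

# Hypothetical sketch: combine a benchmark response curve with a growth
# forecast to estimate when to upgrade. All data below are invented.

def interpolate(xs, ys, x):
    """Piecewise-linear interpolation of y at x (xs ascending)."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return ys[-1]

# Benchmark results: concurrent users vs. mean response time (seconds).
bench_users = [10, 20, 30, 40, 50]
bench_resp  = [1.1, 1.4, 2.0, 3.5, 7.0]

# Growth study: forecast of peak-hour concurrent users, by quarter.
quarters = [0, 4, 8, 12, 16, 20]          # quarters from now (5 years)
forecast = [18, 24, 30, 37, 45, 55]

LIMIT = 3.0                                # acceptable response time (s)
for q, n in zip(quarters, forecast):
    r = interpolate(bench_users, bench_resp, n)
    flag = "  <-- upgrade before this point" if r > LIMIT else ""
    print(f"quarter {q:2d}: {n:2d} users, est. response {r:4.1f} s{flag}")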

A hardware monitor can collect information on the instruction mix, which can be very helpful in reproducing the processor requirement. The monitor can also collect information about the terminal user, in terms of the average number of characters received and transmitted per user interaction. Since the monitor can also identify the peak periods of utilization, we can determine when to study the log files. To guarantee performance one needs to reproduce the peak-period loads; information on non-peak periods therefore need not be used in most cases.
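As a hypothetical illustration, the sketch below reduces raw monitor samples to the two profile figures just mentioned: the instruction mix and the average characters received or transmitted per interaction. The record layouts and values are invented.

# Hypothetical reduction of hardware-monitor samples into profile figures.

from collections import Counter

# Sampled op codes from the processor probe (normally millions of samples).
opcode_samples = ["LOAD", "ADD", "STORE", "BRANCH", "LOAD", "ADD", "LOAD",
                  "MUL", "BRANCH", "LOAD", "STORE", "ADD"]

mix = Counter(opcode_samples)
total = sum(mix.values())
print("instruction mix:")
for op, n in mix.most_common():
    print(f"  {op:6s} {100.0 * n / total:5.1f} %")

# Terminal records: (terminal_no, direction, characters) per transfer.
terminal_records = [(3, "recv", 12), (3, "xmit", 310), (7, "recv", 8),
                    (7, "xmit", 95), (3, "recv", 20), (3, "xmit", 412)]

recv = [c for _, d, c in terminal_records if d == "recv"]
xmit = [c for _, d, c in terminal_records if d == "xmit"]
print(f"avg chars received per interaction:    {sum(recv)/len(recv):.0f}")
print(f"avg chars transmitted per interaction: {sum(xmit)/len(xmit):.0f}")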

Typically the log files give information on the average memory requirement, average processor time, and average disk and terminal I/O. Most log files account for the time-sharing processor time, disk and terminal I/O, and the number of commands or languages requested. For example, if Fortran was used, then those resources used by Fortran during the sample period would be recorded. Typically, by studying the commands or languages we can find a small number of them that account for the majority of the resource utilization (a reduction sketched below). A user survey is very helpful in determining the most frequently used programming languages, the program size in arrays versus code, and the types and sizes of disk files. The console log usually gives an indication of the number of concurrent users during different periods of an entire day. Console log information can be used to make sure that the time-sharing load is relatively consistent from day to day. This can be done by graphing the maximum number of concurrent users during each hour of a day and comparing the daily graphs (see Figure 4C). If large fluctuations do occur, then one must compare the user profiles created during the different peak periods and determine which profile to use as a pessimistic representation of the user load (a harder load on the system).
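The sketch below illustrates the log-file reduction just described: rank the commands or languages by resource use and keep the few that account for the bulk of it, here 80 percent of processor time. The usage figures are invented.

# Hypothetical sketch of the log-file analysis: find the small set of
# commands/languages covering most of the resource utilization.

usage = {  # command -> processor seconds during the peak sample period
    "FORTRAN": 4200, "BASIC": 3100, "EDIT": 900,
    "SORT": 450, "LIST": 200, "MISC": 150,
}

total = sum(usage.values())
cumulative = 0.0
dominant = []
for cmd, secs in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += secs
    dominant.append(cmd)
    if cumulative / total >= 0.80:
        break

print(f"{len(dominant)} of {len(usage)} commands cover "
      f"{100 * cumulative / total:.0f}% of processor time: {dominant}")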

Let us assume that a hardware monitor has been connected to the processor(s), channel(s), and the communication controller or processor. This monitor should at least collect utilization data to determine peak periods of resource utilization. The monitor should also collect or sample the instructions (op codes) executing within the processor(s). The monitor could also collect terminal-related data by sampling a register within the communication processor which contains information on each terminal I/O. This information could consist of the terminal number, whether the terminal was presently sending or receiving data, and the number of characters transferred. The terminal data could also be collected via a software monitor. Now let us assume that there is only one synthetic job which will represent the entire load. What might we observe as typical hardware monitor data?

[Table of typical hardware monitor data lost in scanning.]

These four major tools (monitors, log files, surveys, and console logs) now allow us to create one or more synthetic jobs that reasonably represent the user load. A synthetic job is like any other job in that it requires a programming language, processor time, disk and terminal I/O, and memory. If more than one programming language sees heavy usage, then more than one synthetic job is required. Let us work through an example of how these tools are used to create synthetic jobs; a sketch of the resulting job specification follows. The data contained within this example are purely hypothetical.

[Hypothetical example data lost in scanning.]
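Along the lines of such an example, the sketch below collects the profile figures into a synthetic-job specification, one per heavily used language. Every number is invented; in practice each field would come from the monitor, log files, survey, or console log as described above.

# Hypothetical synthetic-job specification assembled from the four tools.

from dataclasses import dataclass

@dataclass
class SyntheticJob:
    language: str                 # from the user survey / log files
    cpu_per_interaction: float    # seconds, from log files and monitor
    disk_ios_per_interaction: int
    chars_received: int           # from the monitor's terminal probe
    chars_transmitted: int
    memory_kb: int                # working-set size, from log files / survey
    think_time: float             # seconds between interactions

# One job per heavily used language; here FORTRAN dominates the profile.
fortran_job = SyntheticJob("FORTRAN", cpu_per_interaction=0.35,
                           disk_ios_per_interaction=4,
                           chars_received=15, chars_transmitted=300,
                           memory_kb=24, think_time=15.0)
print(fortran_job)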