by Foto-Mem Inc.6.156 and the use of thin dielectric films at Hughes Research Laboratories.6.157 At Stanford Research Institute, a program for the U.S. Army Electronics Command is concerned with investigations of high-density arrays of micron-size storage elements, which are addressed by electron beam. The goal is a digital storage density of 10⁸ bits per square centimeter.6.158

Still another development is the NCR heat-mode recording technique (Carlson and Ives, 1968). This involves the use of relatively low-power CW lasers to achieve real-time, high-resolution (150:1) recording on a variety of thin films on suitable substrates.6.159 In particular, microimage recordings can be achieved directly from electronic character-generation devices.6.160 Newberry of General Electric has described an electron optical data storage technique involving a 'fly's eye' lens system for which "a packing density of 10⁸ bits per square inch has already been demonstrated with 1 micron beam diameter." (1966, pp. 727-728).
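
As a rough arithmetic check of what a 1-micron spot size allows (our own illustration, not a figure from the source), one square inch contains

$$ \left(\frac{2.54 \times 10^{4}\ \mu\text{m}}{1\ \mu\text{m}}\right)^{2} \approx 6.5 \times 10^{8} $$

resolvable 1-micron positions, so the demonstrated packing density of 10⁸ bits per square inch uses roughly one-sixth of the available spot positions.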

Then there is a new recording-coding system from Kodak that uses fine-grained photographic media, diffraction grating patterns, and laser light sources.6.161 As a final example of recent recording developments we note that Gross (1967) has described a variety of investigations at Ampex, including color video recordings on magnetic film plated discs, silver halide film for both digital and analog recordings, and use of magneto-optic effects for reading digital recordings.6.162

Areas where continuing R & D efforts appear to be indicated include questions of read-out from highly compact data storage,6.163 of vacuum equipment in the case of electron beam recording,6.164 and of noise in some of the reversible media.6.165 Then it is noted that "at present it is not at all clear what compromises between direct image recording and holographic image recording will best preserve high information density with adequate redundancy, but the subject is one that attracts considerable research interest." (Smith, 1966, p. 1298).

Materials and media for storage are also subjects of continuing R & D concern, both in the achievement of higher packing densities with fast direct access and in the exploration of prospects for storage of multivalued data at a single physical location. For example: "A frontal attack on new materials for storage is crucial if we are to use the inherent capability of the transducers now at our disposal to write and read more than 1 bit of data at 1 location. . .

"One novel approach for a multilevel photographic store now being studied is the use of color photography techniques to achieve multibit storage at each physical memory location. . . Color film can store multilevels at the same point because both intensity and frequency can be detected." (Hoagland, 1965, p. 58).

“An experimental device which changes the color of a laser beam at electronic speeds has been developed... IBM scientists believe it could lead to the development of color-coded computer memories with up to a hundred million bits of information stored on one square inch of photographic film." (Commun. ACM 9, 707 (1966).)

Such components and materials would have extremely high density, high resolution characteristics. One example of intriguing technical possibilities is reported by Fleisher et al. (1965) in terms of a standing-wave, read-only memory where n color sources might provide n information bits, one for each color, at each storage location.6.166 These authors claim that an apparently unique feature of this memory would be a capability for storing both digital and analog (video) information,6.167 and that parallel word selection, accomplished by fiber-optic light splitting or other means, would be useful in associative selection and retrieval.6.168
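
A minimal sketch of the multibit-per-location idea may help fix it: if n distinguishable colors can each be recorded at one of two intensity levels (present or absent) at a single spot, that spot carries n bits. The color names and the two-level intensity assumption below are our own illustrative choices, not details from Fleisher et al. or Hoagland.

```python
# Illustrative sketch (not from the source): n bits stored at one physical
# location by recording each of n distinguishable colors at one of two
# intensity levels (present/absent).

COLORS = ["red", "green", "blue", "yellow"]  # n = 4 hypothetical color channels

def write_location(bits):
    """Map an n-bit value to the set of colors exposed at one spot."""
    assert len(bits) == len(COLORS)
    return {color for color, bit in zip(COLORS, bits) if bit}

def read_location(exposed_colors):
    """Recover the n-bit value by detecting which colors are present."""
    return [1 if color in exposed_colors else 0 for color in COLORS]

spot = write_location([1, 0, 1, 1])
assert read_location(spot) == [1, 0, 1, 1]   # 4 bits held at a single location
```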

7. Debugging, On-Line Diagnosis, Instrumentation, and Problems of Simulation

Beyond the problems of initial design of information processing systems are those involved in the provision of suitable and effective debugging, self-monitoring, self-diagnosis, and self-repair facilities in such systems. Overall system design R & D requirements are, finally, epitomized in increased concern over the needs for on-line instrumentation, simulation, and formal modelling of information flows and information handling processes, and with the difficulties so far encountered in achieving solutions to these problems. In turn, many of these problems are precisely involved in questions of systems evaluation.

It has been cogently suggested that the area of aids to debugging "has been given more lip service and less attention than any other"7.1 in considerations of information processing systems design.

Special, continuing R & D requirements are raised in the situations, first, of checking out very large programs and, secondly, of carrying out checkout operations under multiple-access, effectually on-line, conditions.7.2 In particular, the checkout of very large programs presents special problems.7.3

7.1. Debugging Problems

Program checkout and debugging are also problems of increasing severity in terms of multiple-access systems. Head states that "testing of many non-real-time systems - even large ones - has all too often been ill-planned and haphazard with numerous errors discovered only after cutover. In most real-time systems, the prevalence of errors after cutover, any one of which could force the system to go down, is intolerable." (1963, p. 41.) Bernstein and Owens (1968) suggest that conventional debugging tools are almost worthless in the time-sharing situation and propose requirements for an improved debugging support system.7.4

On-line debugging provides particular challenges to the user, the programmer and the system designer.7.5 It is important that the console provide versatile means of accomplishing system and program self-diagnosis, to determine what instruction caused a hang-up, to inspect appropriate registers in a conflict situation, and to display anticipated results of a next instruction before it is executed. A major consideration is the ability to provide interpretation and substitution of instructions, with traps, from the console. A recent system for on-line debugging, EXDAMS (EXtendable Debugging and Monitoring System), is described by Balzer (1969).7.6

Aids to debugging and performance evaluation provided by a specific system design should therefore include versatile features for address traps, instruction traps, and other traps specified by the programmer. For example, if SIMSCRIPT programs are to be run, a serious debugging problem arises because of the dynamic storage allocation situation, where the client needs to find out where he is and provide dynamic dumping, e.g., by panel interrupt without halting the machine. Programmers checking out a complex program need an interrupt-and-trap-to-a-fixed-location system, the ability to bounce out of a conflict without being trapped in a halt, to jump if a program accesses a particular address, to take special action if a data channel is tied up for expected input not yet received, or to jump somewhere else on a given breakpoint and then come back to the scheduled address, e.g., on emergence of an overflow condition.7.7
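
The address-trap idea can be shown with a minimal sketch: a toy interpreter that, whenever the program under test touches a watched address, jumps to a trap routine, records the event, and then resumes without halting. The instruction set, addresses, and handler are our own hypothetical illustration, not the facility of any system cited above.

```python
# Minimal sketch (our own illustration) of a programmer-specified address trap:
# trap to a routine when a watched address is accessed, then come back to the
# scheduled instruction without halting the run.

memory = {100: 7, 101: 0}
watched_address = 101          # hypothetical address trap set by the programmer
trap_log = []

def trap_handler(pc, addr):
    # Record a small "dynamic dump" of the event instead of stopping the machine.
    trap_log.append(f"instruction {pc} accessed watched address {addr}")

program = [
    ("LOAD", 100),   # read memory[100]
    ("STORE", 101),  # write memory[101]  -> fires the address trap
    ("LOAD", 101),   # read memory[101]   -> fires the address trap again
]

accumulator = 0
for pc, (op, addr) in enumerate(program):
    if addr == watched_address:
        trap_handler(pc, addr)          # jump to the trap routine ...
    if op == "LOAD":
        accumulator = memory[addr]
    elif op == "STORE":
        memory[addr] = accumulator      # ... and return without a halt
print(trap_log)
```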

Problems of effective debugging, diagnostic, and simulation languages are necessarily raised.7.8 For example, McCarthy et al. report: "In our opinion the reduction in debugging time made possible by good typewriter debugging languages and adequate access to the machine is comparable to that provided by the use of ALGOL type languages for numerical calculation." (McCarthy et al., 1963, p. 55). Still another debugging and diagnostic R & D requirement is raised with respect to reconfigurations of available installations and tentative evaluations of the likely success of the substitution of one configuration for another.7.9

In at least one case, a combined hardware-software approach has been used to tackle another special problem of time-shared, multiple-user systems, that of machine maintenance with minimum interference to ongoing client programs. The STROBES technique (for Shared-Time Repair Of Big Electronic Systems) has been developed at the Computation Center of the Carnegie Institute of Technology.7.10 This type of development is of significance because, as Schwartz and his co-authors report (1965, p. 16): "Unlike more traditional systems, a time-sharing system cannot stop and start over when a hardware error occurs. During time-sharing, the error must be analyzed, corrected if possible, and the user or users affected must be notified. For all those users not affected, no significant interruption should take place."

7.2. On-Line Diagnosis and Instrumentation

Self-diagnosis is an important area of R & D concern with respect both to the design and the utilization of computer systems.7.11 In terms of potentials for automatic machine self-repair, it is noted that "a self-diagnosable computer is a computer which has the capabilities of automatically detecting and isolating a fault (within itself) to a small number of replaceable modules." (Forbes et al., 1965, p. 1073).7.12 To what extent can the machine itself be used to generate its own programs and procedures? Forbes et al. suggest that: "If the theory of self-diagnosing computers is to become practical for a family of machines, further study and development of machine generation of diagnostic procedures is necessary." (1965, p. 1085).
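
The fault-isolation idea behind such self-diagnosable machines can be sketched simply: each diagnostic test exercises a known subset of replaceable modules, and the pattern of failing tests narrows the fault to a small candidate set. The test names, module names, and coverage table below are our own hypothetical illustration, not the Forbes et al. procedure.

```python
# Sketch (our own illustration) of isolating a fault to a small set of
# replaceable modules by intersecting the coverage of all failing tests.

tests = {                       # hypothetical test -> modules exercised
    "t1": {"adder", "register_file"},
    "t2": {"adder", "memory_bus"},
    "t3": {"register_file", "io_channel"},
}

def isolate(failing_tests):
    """Intersect the module sets of all failing tests."""
    candidates = None
    for name in failing_tests:
        covered = tests[name]
        candidates = covered if candidates is None else candidates & covered
    return candidates or set()

print(isolate(["t1", "t2"]))    # -> {'adder'}: fault isolated to one module
```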

Several different on-line instrumentation* techniques have been experimentally investigated by Estrin and associates (1967), by Hoffman (1965), Scherr (1965) and by Sutherland (1965), among others.7.13 Monitoring systems for hardware, software, or both are described, for example, by Avižienis (1967, 1968),7.14 Jacoby (1959),7.15 and Wetherfield (1966),7.16 while a monitoring system for the multiplexing of slow-speed peripheral equipment at the Commonwealth Scientific and Industrial Research Organization in Australia is described by Abraham et al. (1966). Moulton and Muller (1967) describe DITRAN (Diagnostic FORTRAN), a compiler with extensive error checking capabilities that can be applied both at compilation time and during program execution, and Whiteman (1966) discusses "computer hypochondria".7.17

Fine et al. (1966) have developed an interpreter program to analyze running programs with respect to determining sequences of instructions between page calls, page demands by time intervals, and page demands by programs. In relatively early work in this area, Licklider and Clark report that "Program Graph and Memory Course are but two of many possible schemes for displaying the internal processes of the computer. We are working on others that combine graphical presentation with symbolic representation . . . By combining graphical with symbolic presentation, and putting the mode of combination under the operator's control via light pen, we hope to achieve both good speed and good discrimination of detailed information." (1962, p. 120). However, Sutherland comments that: "The information processing industry is uniquely wanting in good instrumentation; every other industry has meters, gauges, magnifiers - instruments to measure and record the performance of machines appropriate to that industry." (Sutherland, 1965, p. 12). More effective on-line instrumentation techniques are thus urgently required, especially for the multiple-access processing system.

*"Instrumentation" in this context means diagnostic and monitoring procedures which are applied to operating programs in a "subject" computer as they are being executed, in order to assemble records of workload, system utilization, and other similar data.

Huskey supports the contentions of Sutherland and of Amdahl that: "Much more instrumentation of on-line systems is needed so that we know what is going on, what the typical user does, and what the variations are from the norms. It is only with this information that systems can be 'trimmed' so as to optimize usefulness to the customer array." (Huskey, 1965, p. 141).

Sutherland in particular points out that plots of the times spent by the program in doing various subtasks can be used to tighten up frequently used program and subroutine loops and thus save significant amounts of processor running-time costs.7.18 He also refers to a system developed by Kinslow which provides a pictorial representation of "which parts of memory were 'occupied' as a function of time for his time-sharing system. The result shows clearly the small spaces which develop in memory and must remain unused because no program is short enough to fit into them." (Sutherland, 1965, p. 13). In general, it is hoped that such on-line instrumentation techniques will bring about better understanding of the interactions of programs and data within the processing system.7.19
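
A minimal sketch of the measurement Sutherland describes: tally how much processor time a run spends in each subtask, so the most expensive loops can be identified and tightened. The trace, subtask names, and timings are our own hypothetical illustration.

```python
# Sketch (our own illustration): tallying processor time per subtask from a
# hypothetical trace of (subtask, milliseconds) samples taken during one run.

from collections import defaultdict

trace = [("parse", 12), ("search_loop", 110), ("parse", 9),
         ("search_loop", 97), ("output", 15)]

totals = defaultdict(int)
for subtask, ms in trace:
    totals[subtask] += ms

grand_total = sum(totals.values())
for subtask, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{subtask:12s} {ms:5d} ms  {ms / grand_total:6.1%}")
```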

Improved techniques for the systematic analysis of multiple-access systems are also needed. As Brown points out: "The feasibility of time-sharing depends quite strongly upon not only the time-sharing procedures, but also upon. . . the following properties, characteristic of each program when it is run alone:

(1) The percentage of time actually required for execution of the program.

(2) The spectrum of delay times during which the program awaits a human response.

(3) A spectrum of program execution burst lengths.

A direct measurement of these properties is difficult; a reasonable estimate of them is important, however, in determining the time-sharing feasibility of any given program." (1965, p. 82). However, most of the analyses implied are significantly lacking to date, although some examples of benefits to be anticipated are given by Cantrell and Ellison (1968) and by Campbell and Heffner (1968).
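
The kind of estimate Brown calls for can be sketched from an activity trace of a single program. The trace below, with alternating execution bursts and waits for a human response, is our own hypothetical illustration of how the three properties might be tabulated.

```python
# Sketch (our own illustration) of estimating Brown's three properties from a
# hypothetical trace of one program: alternating execution bursts and waits
# for a human response, with durations in seconds.

trace = [("execute", 0.8), ("wait", 5.0), ("execute", 1.5),
         ("wait", 12.0), ("execute", 0.3)]

total = sum(duration for _, duration in trace)
exec_bursts = [d for kind, d in trace if kind == "execute"]
wait_times  = [d for kind, d in trace if kind == "wait"]

print(f"(1) execution fraction: {sum(exec_bursts) / total:.1%}")
print("(2) wait-time spectrum:", sorted(wait_times))
print("(3) burst-length spectrum:", sorted(exec_bursts))
```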

Schwartz et al. emphasize that "another researchable area of importance to proper design is the mathematical analysis of time-shared computer operation. The object in such an analysis is to provide solutions to problems of determining the user capacity of a given system, the optimum values for the scheduling parameters (such as quantum size) to be used by the executive system, and, in general, the most efficient techniques for sequencing the object programs." (Schwartz et al., 1965, p. 21).

Continuing, they point to the use of simulation techniques as an alternative. "Because of the large number of random variables - many of which are interdependent - that must be taken into account in a completely general treatment of time-sharing operation, one cannot expect to proceed very far with analyses of the above nature. Thus, it seems clear that simulation must also be used to study time-shared computer operation." (Schwartz et al., 1965, p. 21). A 1967 review by Borko reaches similar conclusions.7.20
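
A minimal sketch of the sort of simulation meant: round-robin service of several object programs under a fixed quantum, so that the effect of quantum size on completion times can be observed. The job mix, quantum values, and the decision to ignore swap overhead are our own assumptions, not parameters from Schwartz et al.

```python
# Minimal round-robin sketch (our own illustration, arbitrary parameters,
# swap overhead ignored): how quantum size affects mean completion time when
# several object programs share one processor.

def round_robin(service_times, quantum):
    """Return mean completion time under round-robin with the given quantum."""
    remaining = list(service_times)
    finish = [None] * len(remaining)
    clock = 0.0
    while any(f is None for f in finish):
        for i, left in enumerate(remaining):
            if finish[i] is not None:
                continue
            slice_ = min(quantum, left)
            clock += slice_
            remaining[i] -= slice_
            if remaining[i] <= 1e-9:
                finish[i] = clock
    return sum(finish) / len(finish)

jobs = [0.5, 2.0, 8.0]                      # hypothetical service demands (sec)
for q in (0.1, 0.5, 2.0):
    print(f"quantum {q}: mean completion {round_robin(jobs, q):.2f} sec")
```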

7.3. Simulation

The on-going analysis and evaluation of information processing systems will clearly require the further development of more sophisticated and more accurate simulation models than are available today.7.21 Special difficulties are to be noted in the case of models of multiple-access systems, where "the addition of pre-emptive scheduling complicates the mathematics beyond the point where models can even be formulated" (Scherr, 1965, p. 32), and in that of information selection and retrieval applications where, as has been frequently charged, "no accurate models exist". (Hayes, 1963, p. 284).

In these and other areas, then, a major factor is the inadequacy of present-day mathematical techniques.7.22 In particular, Scherr asserts that "simulation models are required because the level of detail necessary to handle some of the features studied is beyond the scope of mathematically tractable models." (Scherr, 1965, p. 32). The importance of continuing R & D efforts in this area, even if they should have only negative results, has, however, been emphasized by workers in the field.7.23

Thus, for example, at the 1966 ACM-SUNY Conference, "Professor C. West Churchman pointed to the very large [computer] models that can now be built, and the very much larger models that we will soon be able to build, and stated that the models are not realistic because the quality of information is not adequate and because the right questions remain unasked. Yet he strongly favored the building of models, and suggested that much information could be obtained from attempts to build several different and perhaps inconsistent models of the same system." (Commun. ACM 9, 645 (1966).)

We are led next, then, to problems of simulation. There are obvious problems in this area also. First there is the difficulty of "determining and building meaningful models" (Davis, 1965, p. 82), especially where a high degree of selectivity must be imposed upon the collection of data appropriately representative of the highly complex real-life environments and processes that are to be simulated.7.24

Beyond the questions of adequate selectivity in simulation-representation of the phenomena, operations, and possible system capabilities being modelled are those of the adequacy of the simulation languages as discussed by Scherr, Steel, and others.7.25 Teichroew and Lubin present a comprehensive survey of computer simulation languages and applications, with tables of comparative characteristics, as of 1966.7.26 In addition, IBM has provided a bibliography on simulation, also as of 1966.

Again, as in the area of graphic input manipulation and output, the field of effective simulation has specific R & D requirements for improved and more versatile machine models and programming languages. Clancy and Fineberg suggest that "the very number and diversity of languages suggests that the field [of digital simulation languages] suffers from a lack of perspective and direction." (1965, p. 23).

The area of improved simulation languages is one that has a multiple interaction between software and hardware, especially where a computer is to be used to simulate another computer, perhaps one whose design is not yet complete,7.27 or to simulate many different scheduling, queuing and storage allocation alternatives in time-shared systems (see, for example, Blunt, 1965). Such problems are also discussed by Scherr (1965) and by Larsen and Mano (1965), among others, while Parnas (1966) describes a modification of ALGOL (SFD-ALGOL, for "System Function Description") applicable to the simulation of synchronous systems.

However, there are difficult current problems in that languages such as SIMSCRIPT do not take advantage of the modularity of many processing systems, that conditional scheduling of sequences of events is extremely difficult,7.28 and that "we are still plagued by our inability to program for simultaneous action, even for the scheduling of large units in a computing system." (Gorn, 1966, p. 232).

In addition, for simulation and similar applications, heuristic or ad hoc programming facilities may be required. Thus, "a computer program which is to serve as a model must be able to have well-organized yet manipulatable data storage, easily augmentable and modifiable. The program must be self-modifying in a similarly organized way. It should be able to handle large blocks of data or program routines by specification of merely a name." (Strom, 1965, p. 114.)

For simulations or testings with controls, and without discernible interruption or reallocation of normal servicing of client processing requests, compilers must be available that will transform queries expressed in one or more commonly available customer languages to the language(s) most effectively used by the substituted experimental system and to the format(s) available in a master data base.

Then there are problems in the development of an appropriate "scenario", or sequence of events to be simulated.7.29 Burdick and Naylor (1966) provide a survey account of the problems of design and analysis of computer simulation experiments.

The problems of effective simulation of complex, interdependent processes are another area of increasing concern. Suppose, for example, that we are seeking to simulate a process in which many separate operations are carried out concurrently or in parallel, and that the simulation technique requires a serial sequencing of these operations. Depending upon the choice of which one of the theoretically concurrent operations is processed first in the sequentializing procedure, the results of the simulation may be significantly different in one case than in another.7.30
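
The order-dependence problem can be made concrete with a minimal sketch: two nominally concurrent operations on a shared value, serialized in different orders by the simulator, yield different final states. The operations chosen are our own hypothetical illustration.

```python
# Sketch (our own illustration) of sequentialization order-dependence: two
# "concurrent" operations on a shared cell give different results depending
# on the serial order the simulator happens to impose.

def run(ops, state):
    for op in ops:
        state = op(state)
    return state

double    = lambda x: x * 2      # concurrent operation A
increment = lambda x: x + 1      # concurrent operation B

print(run([double, increment], state=3))   # A then B -> 7
print(run([increment, double], state=3))   # B then A -> 8
```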

For example, the SL/1 language being developed at the University of Pisa under Caracciolo di Forino (1965) is based in part on SOL (Simulation-Oriented Language, see Knuth and McNeley, 1964) and in part on SIMULA (the ALGOL extension developed by O. J. Dahl, of the Norwegian Computing Center, Oslo).7.31 A second version, SL/2, now under development, will provide self-adapting features to optimize the system. Caracciolo emphasizes that, for any set of deterministic processes that are to be applied simultaneously, but where problems of incompatibility may arise, the problems can be reduced to a set of probabilistic processes. Otherwise, if one sequentializes parallel, concurrent processes actually dependent upon the order of sequentialization, then hidden problems of incompatibility may vitiate the results obtained.

Despite difficulties, however, progress has been and is being made. Thus computer simulation has been investigated as a means of system simulation for determination of probable costs and benefits in advance of major investments in equipment or procedures.7.32 Then, as reported by Gibson (1967), simulation studies have been used to determine that block transfers of 4 to 16 words will facilitate reduction of effective internal access times to a few nanoseconds. Other programs to simulate digital data processing, time-shared system performance, and the like, are discussed by Larsen and Mano (1965) and by Scherr (1965). Simulation studies in terms of multiprocessor systems are represented by Lindquist et al. (1966)7.33 and by Huesmann and Goldberg (1967).7.34
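
One way to see why larger block transfers help (an illustrative amortization formula of our own, not Gibson's analysis): if each storage access carries a fixed latency $t_a$ and each additional word transferred costs $t_w$, then moving $n$ words per access gives an effective per-word time of

$$ t_{\text{eff}} = \frac{t_a}{n} + t_w, $$

so raising the block size from 4 to 16 words divides the fixed-latency contribution by a further factor of four.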

Other advantages from research and development efforts to be anticipated from computer simulation experiments are those of transfer of applications from a given computer to another not yet installed or available,7.35 advancements in techniques of pictorial data processing and transmission,7.36 advance appraisals of performance of time-shared systems,7.37 and investigations of probable performance of adaptive recognition systems.7.38

Finally, we note prospects for system simulation as a means of evaluation and of redesign, including the alteration of scheduling priorities to meet changing requirements of the system's clientele. Three examples from the literature are as follows:

(1) "Use of a simulator permits the installation to continue running its programs as reprogramming proceeds on a reasonable schedule." (Trimble, 1965, p. 18).

(2) "Effective response time simulation can be easily modified to provide operating costs of retrieval." (Blunt, 1965, p. 9).

(3) "When large systems are being developed another set of programs is involved to perform a function not required for simpler situations. These are the simulation and analysis programs for system evaluation and - for semiautomated systems having a human component - system training." (Steel, 1965, p. 232).

On the other hand, as Davis warns: "It is obvious that there is some threshold beyond which the real environment is too complex to permit meaningful simulation." (1965, p. 82). For the future, therefore, a system of multiple-working-hypotheses might well be developed: "The benefits and drawbacks of empirical data gathering vs. simulation vs. mathematical analysis are well documented. What we would really like to be able to do is a little of all three, back and forth, until our gradually increasing comprehension of the problem becomes the desired solution." (Greenberger, 1966, p. 347). Similarly, it may be claimed that simulation models ". . . are often cumbersome and difficult to adapt to new configurations, with results of somewhat uncertain interpretation due to statistical sampling variability. Ideally, simulation and analytic techniques should supplement each other, for each approach has its advantages." (Gaver, 1967, p. 423).

8. Conclusions

As we have seen, major trends in input/output, storage, processor, and programming design relate to multiple access, multiprogrammed, and multiprocessor systems. On-line simulation, instrumentation, and performance evaluation capabilities are necessary in order to effectively measure and test proposed techniques, systems, and networks of broad future significance to improved utilization of automatic data processing techniques.

We may therefore close this report on overall system design considerations with the following quotations:

(1) "In rating the completeness, clarity, and simplicity of the system diagnostics, command language and keyboard procedures, we found their 'goodness' was inversely related to the running efficiency of the system System developers should examine this condition to determine whether inefficient execution is an inherent feature of system[s] supplying complete and easily understood diagnostics, or a function of the specific interests and prejudices of the developers." (O'Sullivan, 1967, p. 170).

(2) "An engineer who wishes to concern himself with performance criteria in the synthesis of new systems is frustrated by the weakness of measurement of computer system behavior." (Estrin et al., 1967, p. 645.)

(3) "The setting up of criteria of evaluation

demands user participation and provides an indication of whether the user understands the reason for the system, the role of the system and his responsibilities as a prospective system user." (Davis, 1965, p. 82.)

(4) "Today, and to an even greater extent tomorrow, the use of multiple functional units within the information processing system, the multiplexing of input and output messages, and the increased use of software to permit multiprogramming will require more subtle measures to evaluate a particular system's performance." (Nisenoff, 1966, p. 1828.)

(5) "Broad areas for further research are indicated... Comparative experimental studies of computer facility performance, such as online, offline, and hybrid installations, systematically permuted against broad classes of program languages (machine-oriented, procedure-oriented, and problem-oriented languages), and representative classes of programming tasks." (Sackman et al., 1968, p. 10), and

(6) "Improved methods of simulation, optimizing techniques, scheduling algorithms, methods of dealing with stochastic variables, these are the important developments that are pushing back the limits of our ability to deal with very large systems." (Harder, 1968, p. 233.)

Finally we note that the problems of the information processing system designer are today aggravated not only by networking, time-sharing, time-slicing, multiprocessor and multiprogramming potentialities, but also by critical questions involving the values and the costs of maintaining the integrity of privileged files. By the term "privileged files" we mean all data stored in a machine-useful system that may have varying degrees of privacy, confidentiality, or security restrictions placed upon unauthorized access. Some of the background considerations affecting both policy and design factors will be discussed in the next report in this series.
