
cessed by a single word drive line without need for more than one set of sense amplifiers and bit current drivers. The set contains only the number of amplifiers needed to process the bits of one word (or byte) in parallel." (Fedde, 1967, p. 595). Simpson (1968) discusses the thin film memory developed at Texas Instruments.6.103

Nevertheless, the storage elements known to be capable of matching ultrafast processing and control cycle times (100 nanoseconds or less) are relatively few,6.104 and there are many difficulties to be encountered in currently available advanced techniques.6.105 Some specific R & D requirements indicated in the literature include materials research to lower the high voltages presently required for light-switching in optically addressed memories (Kohn, 1965),6.106 attacks on noise problems in integrated circuit techniques (Merryman, 1965),6.107 and the provision of built-in redundancy against element failures encountered in batch fabrication techniques (Kohn, 1965). In the case of cryotrons used for memory design, Rajchman (1965) notes that the "cost and relative inconvenience of the necessary cooling equipment is justified only for extremely large storage capacities" (p. 126), such as those extending beyond 10 million bits, and Van Dam and Michener (1967) concur.6.108 Considerations of "break-even" economics for cryogenic-element memories, balancing high-density storage and high-speed access against the "cooling" costs, have set the minimum worthwhile random-access memory capacity at 10⁷ bits.6.109 As of 1967-68, however, practical realizations of such techniques have been largely limited to small-scale, special-purpose auxiliary and content-addressable memories, to be discussed next.
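The form of such a break-even comparison can be sketched in a few lines. The cost figures below are purely hypothetical, chosen only to show how a fixed cooling-plant cost and a per-bit saving combine to yield a break-even capacity on the order of the 10⁷ bits cited above.

    # Illustrative break-even calculation for a cryogenic-element memory.
    # All cost figures are hypothetical; the point is the form of the
    # comparison, not the values.

    def breakeven_capacity(fixed_cooling_cost, cryo_cost_per_bit, conventional_cost_per_bit):
        """Smallest capacity (bits) at which the cheaper per-bit cost of a
        cryogenic store offsets the fixed cost of its cooling plant."""
        saving_per_bit = conventional_cost_per_bit - cryo_cost_per_bit
        if saving_per_bit <= 0:
            raise ValueError("cryogenic storage must be cheaper per bit to break even")
        return fixed_cooling_cost / saving_per_bit

    # Hypothetical figures: a $50,000 cooling plant and a half-cent-per-bit
    # saving imply break-even near 10**7 bits.
    print(breakeven_capacity(50_000.0, 0.005, 0.010))  # -> 10,000,000.0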

6.3.2. High-Speed, Special-Purpose, and Associative or Content-Addressable Memories

Small, high-speed, special-purpose memories have been used as adjuncts to main memories in computer design for some years.6.110 One major purpose is to provide increased speed of instruction access or address translation, or both. The "read-only stores" (ROS) in particular represent relatively recent advances in "firmware," or built-in microprogramming.6.111

It is noted that "the mode of implementing ROM's spans the art, from capacitor and resistor arrays and magnetic core ropes and snakes to selectively deposited magnetic film arrays." (Nisenoff, 1966, p. 1826.) An Israeli entry involves a two-level memory system with a microprogrammed "Read Only Store" having an access time of 400 nanoseconds. (Dreyer, 1968.) A variation for instruction-access processes is the MYRA (MYRi Aperture) ferrite disk described by Briley (1965). This, when accessed, produces pulses in sequential trains on 64 or more wires. A macro instruction is addressed to an element in the MYRA memory, which then produces gating signals for the arithmetic unit and signals for fetching both the operands and the next macro instructions. Further, "Picoinstructions are stored at constant radii upon a MYRA disk, in the proper order to perform the desired task. The advantages of the MYRA element are that the picoinstructions are automatically accessed in sequence."6.112 Holographic ROM possibilities are also under consideration.6.113
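The control principle involved, a macro instruction indexing a fixed store whose entry is an ordered train of control signals, can be illustrated with a minimal sketch; the opcodes and signal names below are invented for the purpose.

    # A minimal sketch of read-only-store microprogramming: each macro
    # instruction indexes a fixed table (the "firmware") whose entry is an
    # ordered train of gating signals, issued in sequence. Opcode and
    # signal names are invented for illustration.

    READ_ONLY_STORE = {
        "ADD":  ["FETCH_OPERAND_A", "FETCH_OPERAND_B", "GATE_ADDER", "STORE_RESULT"],
        "LOAD": ["FETCH_ADDRESS", "READ_MEMORY", "GATE_ACCUMULATOR"],
    }

    def execute_macro(opcode):
        # Addressing the store yields the picoinstructions in sequence,
        # much as a constant-radius track on the MYRA disk would.
        for signal in READ_ONLY_STORE[opcode]:
            print(signal)   # stand-in for raising a control line

    execute_macro("ADD")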

In the area of associative, or content-addressable, memories,6.114 advanced hardware developments to date have largely been involved in processor design and provision of small-scale auxiliary or "scratchpad" memories rather than for massive selection-retrieval and data bank management applications.6.115 "Scratchpad" memories, also referred to as "slave" memories, e.g., by Wilkes (1965),6.116 are defined by Gluck (1965) as "small uniform access memories with access and cycle times matched to the clock of the logic." They are used for such purposes as reducing instruction-access time, for microprogramming, for buffering of instructions or data that are transferable in small blocks (as in the "four-fetch" design of the B 8500),6.117 for storage of intermediate results, as table lookup devices,6.118 as index registers and, to a limited extent, for content addressing.6.119

Another example is the modified "interactive" cell assembly design of content-addressable memory, where entries are to be retrieved by coincidence of a part of an input or query pattern with a part of stored reference patterns, including other variations on particular match operations (Gaines and Lee, 1965).6.120 In addition, we note developments with respect to a solenoid array6.121 and stacks of plastic card resistor arrays,6.122 both usable for associative memory purposes; the GAP (Goodyear Associative Processor),6.123 the APP (Associative Parallel Processor) described by Fuller and Bird (1965),6.124 the ASP (Association-Storing Processor) machine organization,6.125 and various approaches which compromise somewhat on speed, including bit- rather than word-parallel searching6.126 or the use of circulating memories such as glass delay lines.6.127
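The basic match operation common to these designs, comparing a query against all stored words at once and optionally restricting the comparison by a mask to part of each word, can be sketched as follows; a loop stands in for the parallel hardware, and the word patterns are arbitrary.

    # Sketch of content addressing: every stored word is compared against
    # the query in parallel (here, a loop standing in for parallel
    # hardware), and a mask confines the comparison to part of the word,
    # as in partial-match retrieval.

    def associative_search(store, query, mask):
        """Return indices of all words matching `query` on the bits set in `mask`."""
        return [i for i, word in enumerate(store) if (word ^ query) & mask == 0]

    store = [0b1010_0110, 0b1010_1111, 0b0001_0110]
    # Match on the high nibble only: selects words 0 and 1.
    print(associative_search(store, 0b1010_0000, 0b1111_0000))  # -> [0, 1]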

Cryogenic approaches to the hardware realization of associative memory concepts have been under investigation since at least the mid-1950's (Slade and McMahon, 1957), while McDermid and Peterson (1961) report work on a magnetic core technique as of 1960. However, the technology for developing high-speed reactivity in these special-purpose memories has been advanced in the past few years. On the basis of experimental demonstration, at least, there have been significant advances with respect to parallel-processing, associative-addressing, internal but auxiliary techniques in the form of memories built into some of the recently developed large computer systems.6.128

The actual incorporation of such devices, even if of somewhat limited scale, in operational computer system designs is of considerable interest, whether of 25- or 250-nanosecond performance. For example, Ammon and Neitzert report RCA experiments that "show the feasibility of a 256-word scratchpad memory with an access time of 30 nanoseconds. The read/write cycle time, however, will still be limited by the amplifier recovery so that with the best transistors available it appears that 60 nanoseconds are required". (1965, p. 659). RCA developments also include a sonic film memory in which thin magnetic films and scanning strain waves are combined for serial storage of digital information.6.129

Crawford et al. (1965) have claimed that an IBM tunnel diode memory of 64 48-bit words and a read/restore or clear/write cycle time of less than 25 nanoseconds was "the first complete memory system using any type of technology reported in this size and speed range". (p. 636).6.130 Then there is an IBM development of a read-only, deposited magnetic film memory, having high-speed read capability (i.e., 19-ns access time) and promising economics because the technique is amenable to batch fabrication (Matick et al., 1966).6.131

Catt and associates of Motorola describe "an integrated circuit memory containing 64 words of 8 bits per word, which is compatible in respect to both speed and signal level with high-speed current-mode gates. The memory has a nondestructive read cycle of 17 nanoseconds and a write cycle of 10 nanoseconds without cycle overlap." (Catt et al., 1966, p. 315).6.132 Anacker et al. (1966) discuss 1,000-bit film memories with 30 nanosecond access times.6.133 Kohn et al. (1967) have investigated a 140,000 bit, nondestructive read-out magnetic film memory that can be read with a 20-nanosecond read cycle time, a 30-nanosecond access time, and a 65-nanosecond write time. More recently, IBM has announced a semi-conductor memory with 40 nanosecond access.6.134

Memories of this type that are of somewhat larger capacity but somewhat less speed (in the 100-500 nanosecond range) are exemplified by such commercially-announced developments as those of Electronic Memories,6.135 Computer Control Company,6.136 and IBM.6.137 Thus, Werner et al. (1967) describe a 110-nanosecond ferrite core memory with a capacity of 8,192 words,6.138 while Pugh et al. (1967) report other IBM developments involving a 120-nanosecond film memory of 600,000-bit capacity. McCallister and Chong (1966) describe an experimental plated wire memory system of 150,000-bit capacity with a 500-nanosecond cycle time and a 300-nanosecond access time, developed at UNIVAC.6.139 Another UNIVAC development involves planar thin films.6.140 A 16,384-word, 52-bit, planar film memory with a half-microsecond or less (350-nanosecond) cycle time, under development at Burroughs laboratories for some years, has been described by Bittman (1964).6.141 Other recent developments have been discussed by Seitzer (1967)6.142 and Raffel et al. (1968),6.143 among others.

For million-bit and higher capacities, recent IBM investigations have been directed toward the use of "chain magnetic film storage elements"6.144 in both DRO and NDRO storage systems with 500-nanosecond cycle times.6.145 It is noted, however, that "a considerable amount of development work is still required to establish the handling, assembly, and packaging techniques." (Abbas et al., 1967, p. 311).


A plated wire random access memory is under development by UNIVAC for the Rome Air Development Center. "The basic memory module consists of 10⁷ bits; the mechanical package can hold 10 modules. The potential speed is a 1-to-2 microsecond word rate. Ease of fabrication has been emphasized in the memory stack design. These factors, together with the low plated wire element cost, make an inexpensive mass plated wire store a distinct possibility." (Chong et al., 1967, p. 363).6.146 RADC's interests in associative processing are also reflected in contracts with Goodyear Aerospace Corp., Akron, Ohio, for investigation and experimental fabrication of associative memories and processors. (See, for example, Gall, 1966).

6.3.3. High-Density Data Recording and Storage Techniques


Another important field of investigation with respect to advanced data recording, processing, and storage techniques is that of further development of high-density data recording media and methods and bulk storage techniques, including block-oriented random access memories.6.147 Magnetic techniques - cores, tapes, and cards - continue to be pushed toward multimillion-bit capacities.6.148 A single-wall domain magnetic memory system has recently been patented by Bell Telephone Laboratories.6.149 In terms of R & D requirements for these techniques, further development of magnetic heads, recording media, and means for track location has been indicated,6.150 as is also the case for electron or laser beam recording techniques.6.151 Videotape developments are also to be noted.6.152

In addition to the examples of laser, holographic, and photochromic technologies applied to high density data recording previously given, we may note some of the other advanced techniques that are being developed for large-capacity, compact storage. These developments include the materials and media as well as techniques for recording with light, heat, electrons, and laser beams. In particular, "a tremendous amount of research work is being undertaken in the area of photosensitive materials. Part of this has been sparked by the acute shortage of silver for conventional films and papers. In October, more than 800 people attended a symposium in Washington, D.C., on Unconventional Photographic Systems. Progress was described in a number of areas, including deformable films, electrophotography, photochromic systems, unconventional silver systems, and photopolymers." (Hartsuch, 1968, p. 56).

Examples include the General Electric Photocharge,6.153 the IBM Photo-Digital system,6.154 the UNICON mass memory,6.155 a system announced by Foto-Mem Inc.,6.156 and the use of thin dielectric films at Hughes Research Laboratories.6.157 At Stanford Research Institute, a program for the U.S. Army Electronics Command is concerned with investigations of high-density arrays of micron-size storage elements, which are addressed by electron beam. The goal is a digital storage density of 10⁸ bits per square centimeter.6.158

Still another development is the NCR heat-mode recording technique. (Carlson and Ives, 1968). This involves the use of relatively low-power CW lasers to achieve real-time, high-resolution (150:1) recording on a variety of thin films on suitable substrates.6.159 In particular, microimage recordings can be achieved directly from electronic character-generation devices.6.160 Newberry of General Electric has described an electron optical data storage technique involving a 'fly's eye' lens system for which "a packing density of 10⁸ bits per square inch has already been demonstrated with 1 micron beam diameter." (1966, pp. 727-728).

Then there is a new recording-coding system, from Kodak, that uses fine-grained photographic media, diffraction grating patterns, and laser light sources.6.161 As a final example of recent recording developments we note that Gross (1967) has described a variety of investigations at Ampex, including color video recordings on magnetic film plated discs, silver halide film for both digital and analog recordings, and use of magneto-optic effects for reading digital recordings.6.162

Areas where continuing R & D efforts appear to be indicated include questions of read-out from highly compact data storage,6.163 of vacuum equipment in the case of electron beam recording,6.164 and of noise in some of the reversible media.6.165 Then it is noted that "at present it is not at all clear what compromises between direct image recording and holographic image recording will best preserve high information density with adequate redundancy, but the subject is one that attracts considerable research interest." (Smith, 1966, p. 1298).

Materials and media for storage are also subjects of continuing R & D concern in both the achievement of higher packing densities with fast direct access and in the exploration of prospects for storage of multivalued data at a single physical location. For example: "A frontal attack on new materials for storage is crucial if we are to use the inherent capability of the transducers now at our disposal to write and read more than 1 bit of data at 1 location . . .

"One novel approach for a multilevel photographic store now being studied is the use of color photography techniques to achieve multibit storage at each physical memory location . . . Color film can store multilevels at the same point because both intensity and frequency can be detected." (Hoagland, 1965, p. 58).

"An experimental device which changes the color of a laser beam at electronic speeds has been developed . . . IBM scientists believe it could lead to the development of color-coded computer memories with up to a hundred million bits of information stored on one square inch of photographic film." (Commun. ACM 9, 707 (1966).)

Such components and materials would have extremely high density, high resolution characteristics. One example of intriguing technical possibilities is reported by Fleisher et al. (1965) in terms of a standing-wave, read-only memory where n color sources might provide n information bits, one for each color, at each storage location.6.166 These authors claim that an apparently unique feature of this memory would be a capability for storing both digital and analog (video) information,6.167 and that parallel word selection, accomplished by fiber-optic light splitting or other means, would be useful in associative selection and retrieval.6.168
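The arithmetic behind such multilevel storage is straightforward: n separately detectable colors multiply the bit capacity of each resolvable spot by n. The spot density and channel count below are assumptions chosen only to match the order of magnitude quoted earlier.

    # Back-of-envelope capacity of a multilevel color store: n separately
    # detectable colors give n bits per physical location. Figures are
    # hypothetical, chosen to match the order of magnitude quoted above.

    spots_per_sq_inch = 10**7      # resolvable spots on the film (assumed)
    bits_per_spot = 10             # n = 10 distinguishable color channels (assumed)

    total_bits = spots_per_sq_inch * bits_per_spot
    print(f"{total_bits:.0e} bits per square inch")   # -> 1e+08, i.e. the
    # "hundred million bits ... on one square inch" cited above.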

7. Debugging, On-Line Diagnosis, Instrumentation, and Problems of Simulation

Beyond the problems of initial design of information processing systems are those involved in the provision of suitable and effective debugging, self-monitoring, self-diagnosis, and self-repair facilities in such systems. Overall system design R & D requirements are, finally, epitomized in increased concern over the needs for on-line instrumentation, simulation, and formal modelling of information flows and information handling processes, and with the difficulties so far encountered in achieving solutions to these problems. In turn, many of these problems are precisely involved in questions of systems evaluation.

It has been cogently suggested that the area of aids to debugging "has been given more lip service and less attention than any other" 7.1 in considerations of information processing systems design.

Special, continuing, R & D requirements are raised in the situations, first, of checking out very large programs, and secondly, of carrying out checkout operations under multiple-access, effectually on-line, conditions.7.2 In particular, the checkout of very large programs presents special problems.7.3

7.1. Debugging Problems

Program checkout and debugging are also problems of increasing severity in terms of multiple-access systems. Head states that "testing of many non-real-time systems - even large ones - has all too often been ill-planned and haphazard with numerous errors discovered only after cutover. In most real-time systems, the prevalence of errors after cutover, any one of which could force the system to go down, is intolerable." (1963, p. 41.) Bernstein and Owens (1968) suggest that conventional debugging tools are almost worthless in the time-sharing situation and propose requirements for an improved debugging support system.7.4

On-line debugging provides particular challenges to the user, the programmer, and the system designer.7.5 It is important that the console provide versatile means of accomplishing system and program self-diagnosis, to determine what instruction caused a hang-up, to inspect appropriate registers in a conflict situation, and to display anticipated results of a next instruction before it is executed. A major consideration is the ability to provide interpretation and substitution of instructions, with traps, from the console. A recent system for on-line debugging, EXDAMS (EXtendable Debugging and Monitoring System), is described by Balzer (1969).7.6

Aids to debugging and performance evaluation provided by a specific system design should therefore include versatile features for address traps, instruction traps, and other traps specified by the programmer. For example, if SIMSCRIPT programs are to be run, a serious debugging problem arises because of the dynamic storage allocation situation, where the client needs to find out where he is and provide dynamic dumping, e.g., by panel interrupt without halting the machine. Programmers checking out a complex program need an interrupt-and-trap-to-a-fixed-location system, the ability to bounce out of a conflict without being trapped in a halt, to jump if a program accesses a particular address, to take special action if a data channel is tied up for expected input not yet received, or to jump somewhere else on a given breakpoint and then come back to the scheduled address, e.g., on emergence of an overflow condition.7.7
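A toy illustration of such an address trap, in which execution continues after the trap handler runs rather than halting, might look as follows; the miniature instruction set is invented for the purpose.

    # A toy machine illustrating the trap facilities described above: the
    # console registers an address trap, and control passes to a handler
    # when the program touches that address, without halting the machine.

    class ToyMachine:
        def __init__(self, program):
            self.program, self.pc = program, 0
            self.memory = [0] * 16
            self.address_traps = {}          # address -> handler

        def trap_on_access(self, address, handler):
            self.address_traps[address] = handler

        def run(self):
            while self.pc < len(self.program):
                op, addr, value = self.program[self.pc]
                if op == "STORE":
                    self.memory[addr] = value
                    if addr in self.address_traps:
                        self.address_traps[addr](self)   # trap, then continue
                self.pc += 1

    prog = [("STORE", 3, 42), ("STORE", 7, 9)]
    m = ToyMachine(prog)
    m.trap_on_access(7, lambda mach: print(f"trap: pc={mach.pc}, memory[7] written"))
    m.run()   # prints the trap message while the program keeps running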

Problems of effective debugging, diagnostic, and simulation languages are necessarily raised.7.8 For example, McCarthy et al. report: "In our opinion the reduction in debugging time made possible by good typewriter debugging languages and adequate access to the machine is comparable to that provided by the use of ALGOL type languages for numerical calculation." (McCarthy et al., 1963, p. 55). Still another debugging and diagnostic R & D requirement is raised with respect to reconfigurations of available installations and tentative evaluations of the likely success of the substitution of one configuration for another.7.9

In at least one case, a combined hardware-software approach has been used to tackle another special problem of time-shared, multiple-user systems, that of machine maintenance with minimum interference to ongoing client programs. The STROBES technique (for Shared-time-repair of big electronic systems) has been developed at the Computation Center of the Carnegie Institute of Technology.7.10 This type of development is of significance because, as Schwartz and his co-authors report (1965, p. 16): "Unlike more traditional systems, a time-sharing system cannot stop and start over when a hardware error occurs. During time-sharing, the error must be analyzed, corrected if possible, and the user or users affected must be notified. For all those users not affected, no significant interruption should take place."

7.2. On-Line Diagnosis and Instrumentation

Self-diagnosis is an important area of R & D concern with respect both to the design and the utilization of computer systems.7.11 In terms of potentials for automatic machine self-repair, it is noted that "a self-diagnosable computer is a computer which has the capabilities of automatically detecting and isolating a fault (within itself) to a small number of replaceable modules." (Forbes et al., 1965, p. 1073).7.12 To what extent can the machine itself be used to generate its own programs and procedures? Forbes et al. suggest that: "If the theory of self-diagnosing computers is to become practical for a family of machines, further study and development of machine generation of diagnostic procedures is necessary." (1965, p. 1085).
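One conventional route to such fault isolation is a fault dictionary: the outcomes of the diagnostic tests that fail are intersected to narrow the set of replaceable modules that could explain the observations. The sketch below uses invented module and test names.

    # Sketch of fault isolation by a fault dictionary: each failed test
    # narrows the set of replaceable modules that could explain the
    # observations. Module and test names are invented.

    FAULT_DICTIONARY = {
        # test name -> modules whose failure would make the test fail
        "adder_test":  {"ALU_CARD", "BUS_CARD"},
        "memory_test": {"MEMORY_CARD", "BUS_CARD"},
        "io_test":     {"IO_CARD", "BUS_CARD"},
    }

    def isolate(observed_failures):
        """Intersect the suspect sets of every failed test."""
        suspects = None
        for test in observed_failures:
            candidates = FAULT_DICTIONARY[test]
            suspects = candidates if suspects is None else suspects & candidates
        return suspects or set()

    # Both the adder and memory tests failing implicates the shared bus card.
    print(isolate(["adder_test", "memory_test"]))   # -> {'BUS_CARD'}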

Several different on-line instrumentation* techniques have been experimentally investigated by Estrin and associates (1967), by Hoffman (1965), by Scherr (1965), and by Sutherland (1965), among others.7.13 Monitoring systems for hardware, software, or both are described, for example, by Avižienis (1967, 1968),7.14 Jacoby (1959),7.15 and Wetherfield (1966),7.16 while a monitoring system for the multiplexing of slow-speed peripheral equipment at the Commonwealth Scientific and Industrial Research Organization in Australia is described by Abraham et al. (1966). Moulton and Muller (1967) describe DITRAN (DIagnostic FORTRAN), a compiler with extensive error checking capabilities that can be applied both at compilation time and during program execution, and Whiteman (1966) discusses "computer hypochondria".7.17

Fine et al. (1966) have developed an interpreter program to analyze running programs with respect to determining sequences of instructions between page calls, page demands by time intervals, and page demands by programs. In relatively early work in this area, Licklider and Clark report that "Program Graph and Memory Course are but two of many possible schemes for displaying the internal processes of the computer. We are working on others that combine graphical presentation with symbolic representation . . . By combining graphical with symbolic presentation, and putting the mode of combination under the operator's control via light pen, we hope to achieve both good speed and good discrimination of detailed information." (1962, p. 120). However, Sutherland comments that: "The information processing industry is uniquely wanting in good instrumentation; every other industry has meters, gauges, magnifiers - instruments to measure and record the performance of machines appropriate to that industry." (Sutherland, 1965, p. 12). More effective on-line instrumentation techniques are thus urgently required, especially for the multiple-access processing system.

*"Instrumentation" in this context means diagnostic and monitoring procedures which are applied to operating programs in a "subject" computer as they are being executed in order to assemble records of workload, system utilization, and other similar data.
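A sketch in the spirit of the Fine et al. interpreter, recording how many instructions execute between successive page demands, is given below; the trace format (one page number per instruction) is an assumption for illustration.

    # Step through a program trace and record how many instructions run
    # between successive page calls; the raw material for paging statistics.

    def page_call_intervals(trace):
        """trace: sequence of page numbers touched, one per instruction."""
        intervals, resident, since_last_fault = [], set(), 0
        for page in trace:
            if page not in resident:          # a page demand
                intervals.append(since_last_fault)
                resident.add(page)
                since_last_fault = 0
            else:
                since_last_fault += 1
        return intervals

    trace = [1, 1, 1, 2, 2, 1, 3, 3, 3, 3, 2]
    print(page_call_intervals(trace))   # -> [0, 2, 2]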

Huskey supports the contentions of Sutherland and of Amdahl that: "Much more instrumentation of on-line systems is needed so that we know what is going on, what the typical user does, and what the variations are from the norms. It is only with this information that systems can be 'trimmed' so as to optimize usefulness to the customer array." (Huskey, 1965, p. 141).

Sutherland in particular points out that plots of the time spent by a program in its various subtasks can be used to tighten up frequently executed program and subroutine loops and thus save significant amounts of processor running-time costs.7.18 He also refers to a display developed by Kinslow giving a pictorial representation of "which parts of memory were 'occupied' as a function of time for his timesharing system. The result shows clearly the small spaces which develop in memory and must remain unused because no program is short enough to fit into them." (Sutherland, 1965, p. 13). In general, it is hoped that such on-line instrumentation techniques will bring about better understanding of the interactions of programs and data within the processing system.7.19
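Kinslow's kind of occupancy picture can be mimicked in a few lines; the program sizes and placements below are invented, but the printed map makes the small unusable holes plainly visible.

    # Print a one-line memory map per time step, exposing the holes
    # between resident programs. Sizes and placements are invented.

    def occupancy_map(memory_size, snapshots):
        for t, programs in enumerate(snapshots):
            cells = ["." for _ in range(memory_size)]
            for start, length, name in programs:
                for i in range(start, start + length):
                    cells[i] = name
            print(f"t={t}: {''.join(cells)}")

    snapshots = [
        [(0, 10, "A"), (12, 6, "B")],                 # 2-cell hole at 10-11
        [(0, 10, "A"), (12, 6, "B"), (19, 4, "C")],   # 1-cell holes remain
    ]
    occupancy_map(24, snapshots)
    # t=0: AAAAAAAAAA..BBBBBB......
    # t=1: AAAAAAAAAA..BBBBBB.CCCC.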

Improved techniques for the systematic analysis of multiple-access systems are also needed. As Brown points out: "The feasibility of time-sharing depends quite strongly upon not only the time-sharing procedures, but also upon . . . the following properties, characteristic of each program when it is run alone:

(1) The percentage of time actually required for execution of the program


(2) The spectrum of delay times during which the program awaits a human response.

(3) A spectrum of program execution burst lengths.


A direct measurement of these properties is difficult; a reasonable estimate of them is important, however, in determining the time-sharing feasibility of any given program." (1965, p. 82). However, most of the analyses implied are significantly lacking to date, although some examples of benefits to be anticipated are given by Cantrell and Ellison (1968) and by Campbell and Heffner (1968).
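Given an event trace for a program run alone, estimates of Brown's three properties reduce to simple tabulation. The trace format below, alternating run and wait intervals, is an assumption for illustration.

    # Estimate Brown's three properties from an event trace of a single
    # program run alone: (1) execution fraction, (2) the spectrum of human
    # response delays, (3) the spectrum of execution burst lengths.

    def brown_properties(trace):
        run_bursts = [d for s, d in trace if s == "run"]
        wait_times = [d for s, d in trace if s == "wait"]
        total = sum(run_bursts) + sum(wait_times)
        return {
            "execution_fraction": sum(run_bursts) / total,   # property (1)
            "delay_spectrum":     sorted(wait_times),        # property (2)
            "burst_spectrum":     sorted(run_bursts),        # property (3)
        }

    trace = [("run", 40), ("wait", 2000), ("run", 15), ("wait", 500), ("run", 30)]
    print(brown_properties(trace))
    # Execution fraction ~0.033: the program computes 3% of the elapsed time.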

Schwartz et al. emphasize that "another researchable area of importance to proper design is the mathematical analysis of time-shared computer operation. The object in such an analysis is to provide solutions to problems of determining the user capacity of a given system, the optimum values for the scheduling parameters (such as quantum size) to be used by the executive system, and, in general, the most efficient techniques for sequencing the object programs." (Schwartz et al., 1965, p. 21).

Continuing, they point to the use of simulation techniques as an alternative. "Because of the large number of random variables - many of which are interdependent - that must be taken into account in a completely general treatment of time-sharing operation, one cannot expect to proceed very far with analyses of the above nature. Thus, it seems clear that simulation must also be used to study time-shared computer operation." (Schwartz et al., 1965, p. 21). A 1967 review by Borko reaches similar conclusions.7.20
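A deliberately small example of the kind of simulation being argued for is given below: a round-robin discipline with a variable quantum, with zero swap overhead assumed, showing how the quantum size affects mean completion time for a mix of short and long jobs.

    # Simulate round-robin time-sharing for a fixed job set and observe
    # mean completion time as the quantum varies. Service demands and the
    # zero-overhead swap are simplifying assumptions.

    from collections import deque

    def round_robin(service_demands, quantum):
        clock, finish = 0.0, {}
        remaining = list(service_demands)
        queue = deque(range(len(service_demands)))
        while queue:
            job = queue.popleft()
            slice_ = min(quantum, remaining[job])
            clock += slice_
            remaining[job] -= slice_
            if remaining[job] > 0:
                queue.append(job)
            else:
                finish[job] = clock
        return sum(finish.values()) / len(finish)   # mean completion time

    jobs = [50.0, 3.0, 3.0]                         # one long job, two short
    for q in (1.0, 10.0):
        print(f"quantum {q}: mean completion {round_robin(jobs, q):.1f}")
    # The small quantum lets the short jobs finish early despite the long
    # one (mean completion about 24.3 versus 28.3 here).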

7.3. Simulation

The on-going analysis and evaluation of information processing systems will clearly require the further development of more sophisticated and more accurate simulation models than are available today.7.21 Special difficulties are to be noted in the case of models of multiple-access systems, where "the addition of pre-emptive scheduling complicates the mathematics beyond the point where models can even be formulated" (Scherr, 1965, p. 32), and in that of information selection and retrieval applications where, as has been frequently charged, "no accurate models exist". (Hayes, 1963, p. 284).

In these and other areas, then, a major factor is the inadequacy of present-day mathematical techniques.7.22 In particular, Scherr asserts that "simulation models are required because the level of detail necessary to handle some of the features studied is beyond the scope of mathematically tractable models." (Scherr, 1965, p. 32). The importance of continuing R & D efforts in this area, even if they should have only negative results, has, however, been emphasized by workers in the field.7.23

Thus, for example, at the 1966 ACM-SUNY Conference, "Professor C. West Churchman . . . pointed to the very large [computer] models that can now be built, and the very much larger models that we will soon be able to build, and stated that the models are not realistic because the quality of information is not adequate and because the right questions remain unasked. Yet he strongly favored the building of models, and suggested that much information could be obtained from attempts to build several different and perhaps inconsistent models of the same system." (Commun. ACM 9, 645 (1966).)

We are led next, then, to problems of simulation. There are obvious problems in this area also. First there is the difficulty of "determining and building meaningful models" (Davis, 1965, p. 82), especially where a high degree of selectivity must be imposed upon the collection of data appropriately representative of the highly complex real-life environments and processes that are to be simulated.7.24

Beyond the questions of adequate selectivity in simulation-representation of the phenomena, operations, and possible system capabilities being modelled are those of the adequacy of the simulation languages as discussed by Scherr, Steel, and others.7.25 Teichroew and Lubin present a compre
