essentially the same thing at the same time. Whenever that is the case, efficiency in use of memory space would be gained if the several programs shared a common subprogram." (Licklider, 1965, p. 26).
5.59 "In artificial intelligence problems, this process [code, run & see, code] must often be prolonged beyond the debugging phase. It is important for the programmer to experiment with the working program, making alterations and seeing the effects of the changes. Only in this way can he evaluate it or extend it to cover more general cases." (Teitelman, 1966, p. 2).
"This appears to be the best way to use a truly interactive man-machine facility, i.e., not as a device for rapidly debugging a code representing a fully thought out solution to a problem, but rather as an aid for the exploration of problem solving strategies." (Weizenbaum, 1966, p. 42).
"If computers are to render frequent and intensive service to many people engaged in creative endeavors (i.e., working on new problems, not merely resolving old ones), an effective compromise between preprogramming and ad hoc programming is required." (Licklider, 1965, p. 181).
5.60 "Programs very likely to contain errors must be run but must not be permitted to interfere with the execution of other concurrent computations. Moreover, it is an extremely difficult task to determine when a program is completely free of errors. Thus, in a large operational computer system in which evolution of function is required, it is unlikely that the large amount of programming involved is at any time completely free from errors, and the possibility of system collapse through software failure is perpetually present. It is becoming clear that protection mechanisms are essential to any multiprogrammed computer system to reduce the chance of such failure producing catastrophic shutdown of a system." (Dennis, 1965, p. 590).
5.61 "The functional language makes no reference to the specific subject matter of the problem. . . The program must be organized to separate its general problem-solving procedures from the application of these to a specific task." (Newell and Simon, 1959, p. 22).
"There has been a shift away from a concern with difficulty and toward a concern with generality. This means both a concern that the problem solver accept a general language for the problem statement, and that the internal representation be very general." (Newell, 1965, p. 17).
5.62 For example, "Multilang is a problem-oriented language that translates the user's statement of the problem into requests for relevant programs and data in the system's memory. The language was designed specifically to assist in problem-solving and, in so doing, to 'accumulate knowledge'. For example, it may not recognize the term 'eligible voter', but it can be told that an eligible voter is a thing that is 'human', 'age over 21' and either 'born in the U.S.' or 'naturalized'. If these terms have been previously defined, the computer can find an answer to the question; additionally, the next time it is asked about eligible voters, it will know what is meant." (Carr and Prywes, 1965, pp. 88-89).
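The "accumulating knowledge" mechanism Carr and Prywes describe can be sketched as a dictionary of named predicates, with each new term defined as a Boolean combination of earlier ones. This is only an illustrative sketch, not Multilang itself; the record layout and every name below (`define`, `ask`, the attribute fields) are invented for the example.

```python
definitions = {}  # term -> predicate over a record (a dict of attributes)

def define(term, predicate):
    """Remember a new term; later queries can build on it."""
    definitions[term] = predicate

def ask(term, record):
    """Answer a question by looking up the accumulated definition."""
    return definitions[term](record)

# Primitive terms, assumed to have been "previously defined".
define("human",       lambda r: r.get("species") == "human")
define("age_over_21", lambda r: r.get("age", 0) > 21)
define("born_in_us",  lambda r: r.get("birthplace") == "US")
define("naturalized", lambda r: r.get("naturalized", False))

# 'eligible voter' is told to the system once, in terms of earlier terms ...
define("eligible_voter",
       lambda r: ask("human", r) and ask("age_over_21", r)
                 and (ask("born_in_us", r) or ask("naturalized", r)))

# ... and thereafter the system "knows what is meant".
voter = {"species": "human", "age": 34, "birthplace": "US"}
print(ask("eligible_voter", voter))  # True
```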
5.63 "Each user, and each user's program, must be restricted so that he and it can never access (read, write, or execute) unauthorized portions of the high-speed store, or of the auxiliary store. This is necessary (1) for privacy reasons, (2) to prevent a defective program from damaging the supervisor or another user's program, and (3) to make the operation of a defective program independent of the state of the rest of the store." (Samuel, 1965, p. 10).
5.64 "The TRAC (Text Reckoning And Compiling) language system is a user language for control of the computer and storage parts of a reactive typewriter system." (Mooers, 1966, p. 215).
"A solution to this problem is to use a machine-independent computer language, designed to operate with a reactive typewriter, to operate the local computer. With this method, the computer acts in place of the human controller to gain access to remote computer systems. This approach is possible only with an extremely versatile language, such as the TRAC language. . . . It is relatively easy to describe in TRAC the set of actions which must be taken in order to make the remote computer perform and bring forth the desired files." (Fox et al., 1966, p. 161).
5.65 "The basic property of symbolic languages is that they can make use in a text of a set of local symbols, whose meaning and form must be declared within the text (as in ALGOL) or is to be deduced by the context (as simple variables in FORTRAN)." (Caracciolo di Forino, 1965, p. 227). However, "it is ... regrettable from the standpoint of the emerging real-time systems that languages like COBOL are so heavily oriented toward processing of sequential tape file data." (Head, 1963, p. 40).
5.66 Some other recent examples include LECOM, L, LISP II, CORAL, and TREET, characterized briefly as follows:
"The compiler language, called LECOM, is a version of COMIT, and is especially designed for small (8K) computers. The microcategorization program was written in LECOM, and assigns an appropriate syntactic category to each word of an input sentence." (Reed and Hillman, 1966, p. 1).
"Bell Telephone Laboratories' Low-Level Linked List Language (L6, pronounced 'L-six') contains many of the facilities which underlie such list processors as IPL, LISP, COMIT and SNOBOL, but it permits the user to get much closer to machine code in order to write faster-running programs, to use storage more efficiently and to build a wider variety of linked data structures." (Knowlton, 1966, p. 616).
"L... is presently being used for a variety of purposes, including information retrieval, simulation of digital circuits, and automatic drawing of flowcharts and other graphical output." (Knowlton, 1966, p. 617).
"The most important features of L6 which distinguish it from other list processors such as IPL, LISP, COMIT and SNOBOL are the availability of several sizes of storage blocks and a flexible means of specifying within them fields containing data or pointers to other blocks. Data structures are built by appropriating blocks of various sizes, defining fields (simultaneously in all blocks) and filling these fields with data and pointers to other blocks. Available blocks are of lengths 2^n machine words where n is an integer in the range 0-7. The user may define up to 36 fields in blocks, which have as names single letters or digits. Thus the D field may be defined as bits 5 through 17 of the first word of any block. Any field which is long enough to store an address may contain a pointer to another block. The contents of a field are interpreted according to the context in which they are used." (Housden, 1969, p. 15).
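Housden's description of the block-and-field scheme can be approximated in a few lines. This is a hedged sketch, not L6 syntax: the field position (given here as a shift from the low end of the word rather than L6's bit numbering), the function names, and the example values are all assumptions made for illustration.

```python
fields = {}  # single-character name -> (word_index, shift, width)

def define_field(name, word_index, shift, width):
    """Define field NAME, simultaneously in all blocks, as WIDTH bits
    located SHIFT bits from the low end of word WORD_INDEX."""
    fields[name] = (word_index, shift, width)

def new_block(n):
    """Appropriate a zero-filled block of 2**n words, n in 0..7."""
    assert 0 <= n <= 7
    return [0] * (2 ** n)

def get(block, name):
    w, s, width = fields[name]
    return (block[w] >> s) & ((1 << width) - 1)

def put(block, name, value):
    w, s, width = fields[name]
    mask = ((1 << width) - 1) << s
    block[w] = (block[w] & ~mask) | ((value << s) & mask)

# An illustrative D field: 13 bits of the first word of any block.
define_field("D", word_index=0, shift=18, width=13)

b = new_block(2)     # a 4-word block
put(b, "D", 4095)
print(get(b, "D"))   # 4095
```

A field wide enough to hold an index could store a pointer to another block, which is how the linked structures of the quotation would be built in this sketch.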
"LISP 2 is a new programming language designed for use in problems that require manipulation of highly complex data structures as well as lengthy arithmetic operations. Presently implemented on the AN/FSQ-32V computer at the System Development Corporation... A particularly important part of the program library is a group of programs for bootstrapping LISP 2 onto a new machine. (Bootstrapping is the standard method for creating a LISP 2 system on a new machine). The bootstrapping capability is sufficiently powerful so that the new machine requires no resident programs other than the standard monitor system and a binary loader." (Abrahams et al., 1966, pp. 661-662).
"This list structure processing system and language being developed at Lincoln is called CORAL (Class Oriented Ring Association Language). The language consists of a set of operators for building, modifying, and manipulating a list structure as well as a set of algebraic and conditional forms." (Roberts, 1965, p. 212).
"TREET is a general-purpose list processing system written for the IBM 7030 computer at the MITRE Corporation. All programs in TREET are coded as functions. A function normally has a unique value (which may be an arbitrarily complex list structure), a unique name, and operates with zero or more arguments." (Bennett et al., 1965, pp. 452-453).
5.67 "The growing importance of the family concept accentuates the need for levels of software. These levels of software will be geared to configuration size instead of family member. In other words, the determining factor will be the amount of memory and the number of peripheral units associated . . ." (Clippinger, 1965, p. 210). "The advantages of high-level programming languages . . . [include] higher machine independence for transition to other computers, and otherwise for compatibility with hardware, [and] better documentation (compatibility among programs and different programmers)." (Burkhardt, 1965, p. 4).
"The user needs to employ data structures and processes that he defined in the past, or that were defined by colleagues, and he needs to refresh his understanding of those objects. The language must therefore have associated with it a metalanguage and a retrieval system. If there is more than one working language, the metalanguage should be common to all the languages of the system." (Licklider, 1965, p. 185).
"The over-all language will be a system because all the sublanguages will fall within the scope of one metalanguage. Knowing one sublanguage will make it easier to learn another. Some sublanguages will be subsets of others." (Licklider, 1965, p. 126).
5.68 "The most immediate need is for a general compiling system capable of implementing a variety of higher-level languages, including in particular, string manipulations, list processing facilities, and complete arithmetic capabilities." (Salton, 1966, p. 208).
5.69 Licklider, 1965, p. 119.
"It will be absolutely necessary, if an effective procognitive system is ever to be achieved, to have excellent languages with which to control processing and application of the body of knowledge. There must be at least one (and preferably there should be only one) general, procedure-oriented language for use by specialists. There must be a large number of convenient, compatible field-oriented languages for the substantive users." (Licklider, 1965, p. 67).
5.70 "There is, in fact, an applied scientific lag in the study of computer programmers and computer programming, a widening and critical lag that threatens the industry and the profession with the great waste that inevitably accompanies the absence of systematic and established methods and findings and their substitution by anecdotal opinion, vested interests, and provincialism." (Sackman et al., 1968, p. 3).
"Work on programming languages will continue to provide a basis for studies on languages in general, on the concept of grammar, on the relation between actions, objects and words, on the essence of imperative and declarative sentences, etc. Unfortunately we do not know yet how to achieve a definition of programming languages that covers both their syntactic and pragmatic aspects. To this goal a first step may be the thorough study of special languages, such as programming languages for machine tools, and simulation languages." (Caracciolo di Forino, 1965, p. 6).
5.71 "As Levien & Maron point out, and Bobrow analyzes in detail, natural language is much too syntactically complex and semantically ambiguous to be efficient for man-machine communication. An alternative is to develop formalized languages with
a simplified syntax and vocabulary. Examination of several query languages, for example, COLINGO and GIS, reveals a general (and natural) dependence on, and adaptation of, the rules of formal logic. However, even with English words for logical operations, relations, and object names, formal query languages have been a less-than-ideal solution to the man-machine communication problem. Except for the simplest queries, punctuating a nested Boolean logical statement can be tricky and can lead to errors. Furthermore, syntactic problems aside, a common difficulty arises when the user does not know the legal names of the data for which he is searching or the structural relationships among the data items in the data base, which may make one formulation of his query very difficult and expensive to answer whereas a slightly altered one may be simple to answer." (Minker and Sable, 1967, p. 136).
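The punctuation difficulty Minker and Sable mention is easy to reproduce: with the same three English-labelled terms, two placements of the parentheses answer the query differently. The attribute names and the sample record below are invented for illustration, not taken from COLINGO or GIS.

```python
# One record to query against; the truth values are chosen so that
# the two punctuations of "human OR over_21 AND born_us" disagree.
r = {"human": True, "over_21": False, "born_us": False}

one_way   = (r["human"] or r["over_21"]) and r["born_us"]   # False
other_way = r["human"] or (r["over_21"] and r["born_us"])   # True

print(one_way, other_way)
```

The same words, differently nested, select different record sets, which is exactly the kind of error the quotation warns of in hand-punctuated Boolean queries.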
"The possibility of user-guided natural-language programming offers a promise of bridging the man-machine communication gap that is today's greatest obstacle to wider enjoyment of the services of the computer." (Halpern, 1966, p. 649).
"Such a language would be largely built by the users themselves, the processor being designed to facilitate the admission of new functions and notation at any time. The user of such a system would begin by studying not a manual of a programming language, but a comparatively few pages outlining what the computer must be told about the location and format of data, the options it offers in output media and format, the functions already available in the system, and the way in which further functions and notation may be introduced. He would then describe the procedure he desired in terms natural to himself." (Halpern, 1967, p. 143).
5.72 "Further investigation is required in searching and maintaining relationships represented by graph structures, as in 'fact retrieval' systems. Problems in which parts of the graph exist in one store while other parts are in another store must be investigated, particularly when one breaks a link in the graphs. The coding of data and of relations also needs much work." (Minker and Sable, 1967, p. 151).
5.73 "This program package has been used in the analysis of several multivariate data bases, including sociological questionnaires, projective test responses, and a sociopolitical study of Colombia. It is anticipated that the program will also prove useful in pattern recognition, concept learning, medical diagnosis, and so on." (Press and Rogers, 1967, p. 39).
5.74 "The execution of programs at different installations whose total auxiliary storage capacities are made up of different amounts of random access storage media with different access characteristics can be facilitated by the organization of the auxiliary storage devices into a multilevel storage hierarchy and the application of level changing." (Morenoff and McLean, 1967, p. 1).
5.75 "A systems problem that has received considerable attention is how to determine which data should be in computer memory and which should be in the various members of the mass storage hierarchy." (Bonn, 1966, p. 1865).
"The key requirement in multiprogramming systems is that information structures be represented in a hardware-independent form until the moment of execution, rather than being converted to a hardware-dependent form at load time. This requirement leads directly to the concept of hardware-independent virtual address spaces, and to the concept of virtual processors which are linked to physical computer resources through address mapping tables." (Wegner, 1967, p. 135).
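Wegner's address-mapping tables can be sketched as a page table consulted at the moment of execution. The page size, table contents, and function name below are illustrative assumptions, not any particular machine's scheme.

```python
PAGE_SIZE = 1024  # an assumed page size for the sketch

def translate(page_table, virtual_addr):
    """Map a virtual address to a physical one through the per-process
    table; the program itself never holds a hardware address."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]          # raises KeyError on a missing page
    return frame * PAGE_SIZE + offset

# Two virtual processors use identical virtual addresses while being
# linked to different physical frames through their own tables.
table_a = {0: 7, 1: 3}
table_b = {0: 2, 1: 9}
print(translate(table_a, 100))   # 7*1024 + 100 = 7268
print(translate(table_b, 100))   # 2*1024 + 100 = 2148
```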
5.76 "With respect to the central processing unit, the major compromise of future needs with present economy is the limitation on addressing capacity." (Brooks, 1965, p. 90).
"Other major problems of large capacity memories are related to the tremendous amount of electronic circuitry required for addressing and sensing." (Kohn, 1965, p. 132).
5.77 For example, "the problem of assigning locations in name space for procedures that may be referenced by several system functions and may perhaps share references to other procedures, is not widely recognized and leads to severe complications when implementation is attempted in the context of conventional memory addressing." (Dennis and Glaser, 1965, p. 6).
5.78 "A particularly troublesome phenomenon, thrashing, may seriously interfere with the performance of paged memory systems, reducing computing giants (Multics, IBM System 360, and others not necessarily excepted) to computing dwarfs. The term thrashing denotes excessive overhead and severe performance degradation or collapse caused by too much paging. Thrashing inevitably turns a shortage of memory space into a surplus of processor time." (Denning, 1968, p. 915).
5.79 ". . . Global weather prediction. Here a three-dimensional grid covering the entire world must be stepped along through relatively short periods of simulated time to produce a forecast in a reasonable amount of time. This type of problem with its demand for increased speed in processing large arrays of data illustrates the applicability of a computer designed specifically for array processing." (Senzig and Smith, 1965, p. 117).
"Most engineering data is best represented in the computer in array form. To achieve optimum capability and remove the restrictions presently associated with normal FORTRAN DIMENSIONed array storage, arrays should be dynamically allocated. Dynamic allocation of data achieves the following:
"1. Arrays are allocated space at execution time rather than at compilation time. They are only allocated the amount of space needed for the problem being solved. The
size of the array (i.e., the amount of space used) may be changed at any time during program execution. If an array is not used during the execution of a particular problem, then no space will be allocated.
"2. Arrays are automatically shifted between primary and secondary storage to optimize the use of primary memory.
"Dynamic memory allocation is a necessary requirement for an engineering computer system capable of solving different problems with different data size requirements. A dynamic command structured language requires a dynamic internal data structure. The result of dynamic memory allocation is that the size of a problem that can be solved is virtually unlimited since secondary storage becomes a logical extension of primary storage." (Roos, 1965, p. 426).
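Roos's two numbered properties can be sketched in a few lines: an array that takes no space until used, can be resized at any time during execution, and is only ever as large as the problem requires. The class and method names are invented for illustration; real spilling to secondary storage is not modelled here.

```python
class DynamicArray:
    """A sketch of execution-time (rather than compile-time) allocation."""

    def __init__(self):
        self.data = []            # no space allocated until first use

    def resize(self, n, fill=0.0):
        """Grow or shrink at any time during program execution."""
        if n >= len(self.data):
            self.data.extend([fill] * (n - len(self.data)))
        else:
            del self.data[n:]

    def __setitem__(self, i, v):
        if i >= len(self.data):
            self.resize(i + 1)    # allocate exactly the space needed
        self.data[i] = v

    def __getitem__(self, i):
        return self.data[i]

a = DynamicArray()        # nothing allocated yet
a[9] = 2.5                # now exactly 10 elements exist
a.resize(4)               # shrink mid-execution
print(len(a.data))        # 4
```

In the quoted design the same interface would also migrate rarely-used arrays to secondary storage, making store capacity rather than primary memory the effective limit on problem size.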
5.80 "Any language which lacks provision for performing necessary operations, such as bit editing for telemetered data, forces the user to write segments in assembly language. This destroys the machine independence of the program and complicates the checkout." (Clippinger, 1965, p. 207).
5.81 "Thus one must consider not only whether the logical possibilities of a new device are ignored when one is restricted to a binary logic, but also whether one is sufficiently using the signals when only one of the parameters characterizing that signal is used." (Ring et al., 1965, p. 33).
5.82 "For a variety of reasons, not the least of which is maturing of integrated circuits with their low cost and high density, central processors are becoming more complex in their organization.” (Clippinger, 1965, p. 209).
5.83 "No large system is a static entity; it must be capable of expansion of capacity and alteration of function to meet new and unforeseen requirements." (Dennis and Glaser, 1965, p. 5).
"Changing objectives, increased demands for use, added functions, improved algorithms and new technologies all call for flexible evolution of the system, both as a configuration of equipment and as a collection of programs." (Dennis and Van Horn, 1965, p. 4).
"By pooling, the number of components provided need not be large enough to accommodate peak requirements occurring concurrently in each computer, but may instead accommodate a peak in one occurring at the same time as an average requirement in the other." (Amdahl, 1965, pp. 38-39).
5.84 "The use of modular configurations of components and the distributed executive principle. . . insures there are multiple components of each system resource." (Dennis and Glaser, 1965, p. 14).
"Computers must be designed which allow the incremental addition of modular components, the use by many processors of high speed random
access memory, and the use by many processors of peripheral and input/output equipment. This implies that high speed switching devices not now incorporated in conventional computers be developed and integrated with systems." (Bauer, 1965, p. 23). See also note 2.52.
5.85 "The actual execution of data movement commands should be asynchronous with the main processing operation. It should be an excellent use of parallel processing capability." (Opler, 1965, p. 276).
5.86 "Work currently in progress [at Western Data Processing Center, UCLA] includes: investigations of intra-job parallel processing which will attempt to produce quantitative evaluations of component utilization; the increase in complexity of the task of programming; and the feasibility of compilers which perform the analysis necessary to convert sequential programs into parallel path programs." (Digital Computer Newsletter 16, No. 4, 21 (1964)).
5.87 "The motivation for encouraging the use of parallelism in a computation is not so much to make a particular computation run more efficiently as it is to relax constraints on the order in which parts of a computation are carried out. A multiprogram scheduling algorithm should then be able to take advantage of this extra freedom to allocate system resources with greater efficiency." (Dennis and Van Horn, 1965, pp. 19-20).
5.88 "The parallel processing capability of an associative processor is well suited to the tasks of abstracting pattern properties and of pattern classification by linear threshold techniques." (Fuller and Bird, 1965, p. 112).
5.89 "The idea of DO TOGETHER was first mentioned (1959) by Mme. Jeanne Poyen in discussing the AP 3 compiler for the BULL Gamma 60 computer." (Opler, 1965, p. 307).
5.90 "To date, there have been relatively few attempts made to program problems for parallel processing. It is not known how efficient, for example, one can make a compiler to handle the parallel processing of mathematical problems. Furthermore, it is not known how one breaks down problems, such as mathematical differential equations, such that parts can be processed independently and then recombined. These tasks are quite formidable, but they must be undertaken to establish whether the future development lies in the area of parallel processing or not." (Fernbach, 1965, p. 82).
5.91 For example, in machine-aided simulations of nonsense syllable learning processes, Daly et al. comment: "Presuming that, for the parallel logic machine, the nonsense syllables were presented on an optical retina in a fixed point fixed position set-up, there would be a requirement for recognizing (26)^3 or about 10^4 different patterns. If three sequential classification decisions were performed on the three letters
of the nonsense word only 3(26) or 78 different patterns would be involved.
"In the above simple example converting from purely parallel logic to partially sequential processing reduced the machine complexity by two order(s) of magnitude. The trend is typical and may involve much larger numbers in a more complicated problem. Using both parallel and sequential logic as design tools the designer is able to trade-off time versus size and so has an extra degree of freedom in developing his system." (Daly et al., 1962, pp. 23-24).
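The arithmetic in the Daly et al. quotation checks out directly: recognizing whole three-letter syllables in parallel requires 26^3 patterns, while three sequential one-letter decisions need only 3 x 26.

```python
# Parallel recognition: one pattern per whole nonsense syllable.
parallel = 26 ** 3       # 17576, "about 10^4"

# Sequential recognition: one decision per letter position.
sequential = 3 * 26      # 78

print(parallel, sequential, parallel // sequential)
```

The ratio is roughly 225, i.e., the two orders of magnitude of complexity reduction the quotation claims.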
5.92 "The SOLOMON concept proposed by Slotnick at Westinghouse. Here it is planned that as many as a thousand independent simple processors be made to operate in parallel under an instruction from a network sequencer." (Fernbach, 1965, p. 82).
"Both the Solomon and Holland machines belong to a growing class of so-called 'iterative machines'. These machines are structured with many identical, and often interacting, elements.
"The Solomon machine resulted from the study of a number of problems whose solution procedures call for similar operations over many pieces of data. The Solomon system contains, essentially, a memory unit, an instruction unit, and an array of execution units. Each individual execution unit works on a small part of a large problem. All of the execution units are identical, so that all can operate simultaneously under control of the single instruction unit.
"Holland, on the other hand, has proposed a fully distributed network of processors. Each processor has its own local control, local storage, local processing ability, and local ability to control pathfinding to other processors in the network. Since all processors are capable of independent operation, the topology leads to the concept of 'programs floating in a sea of hardware'." (Hudson, 1968, p. 42).
"The SOLOMON (Simultaneous Operation Linked Ordinal MOdular Network), a parallel network computer, is a new system involving the interconnections and programming, under the supervision of a central control unit, of many identical processing elements (as few or as many as a given problem requires), in an arrangement that can simulate directly the problem being solved." (Slotnick et al., 1962, p. 97).
"Three features of the computer are:
(1) The structure of the computer is a 2-dimensional modular (or iterative) network so that, if it were constructed, efficient use could be made of the high element density and 'template' techniques now being considered in research on microminiature elements. (2) Sub-programs can be spatially organized and can act simultaneously, thus facilitating the simulation or direct control of 'highly-parallel' systems with many points or parts interacting
simultaneously (e.g., magneto-hydrodynamic problems or pattern recognition).
(3) The computer's structure and behavior can, with simple generalizations, be formulated in a way that provides a formal basis for theoretical study of automata with changing structure (cf. the relation between Turing machines and computable numbers)." (Holland, 1959, p. 108).
5.93 ". . . The development of the Illiac III computer, which incorporates a 'pattern articulation unit' (PAU) specifically designed for performing local operations in parallel, on pictures or similar arrays." (Pfaltz et al., 1968, p. 354).
"One of the modules of the proposed ILLIAC III will be designed as a list processor for interpreting the list structure representation of bubble chamber photographs." (Wigington, 1963, p. 707).
5.94 "I use this term ['firmware'] to designate microprograms resident in the computer's control memory, which specializes the logical design for a special purpose, e.g., the emulation of another computer. I project a tremendous expansion of firmware, obviously at the expense of hardware but also at the expense of software.
"Once the production of microprogrammed computers was commenced, a further area of hardware-software interaction was opened via microprogramming. For example, more than one set of microprograms can be supplied with one computer. A second set might provide for execution of the order set of a different computer, perhaps one of the second generation. Additional microprogram sets might take over certain functions of such software systems as simulators, compilers and control programs. Provided that the microsteps remain a small fraction of a main memory access cycle, microprogramming is certain to influence future software design." (Opler, 1966, p. 1759).
"Incompatibility between logic and memory speeds... has also led to the introduction of microprogramming, in which instruction execution is controlled by a read-only memory. The fast access time of this memory allows full use of the speed capabilities offered by the fast logic." (Pyke, 1967, p. 161).
"A microprogrammed control section utilizes a macroinstruction to address the first word of a series of microinstructions contained in an internal, comparatively fast, control memory. These microinstructions are then decoded much as normal instructions are in wired-in control machines . . ." (Briley, 1965, p. 93).
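The scheme Briley describes can be sketched as a control memory in which each macroinstruction names the first of a series of microsteps. The opcodes and microstep names below are invented for illustration; they implement no real machine's order set.

```python
# Control memory: macroinstruction -> its series of microinstructions.
# Supplying a different table is, in Opler's terms, a second microprogram
# set that makes the same hardware execute a different order set.
control_memory = {
    "ADD":  ["fetch_operand", "alu_add", "store_result"],
    "LOAD": ["fetch_operand", "store_result"],
}

def execute(macro_op, trace):
    """Run the microprogram implementing one macroinstruction; the
    trace list stands in for actually driving the data paths."""
    for micro in control_memory[macro_op]:
        trace.append(micro)
    return trace

steps = execute("ADD", [])
print(steps)  # ['fetch_operand', 'alu_add', 'store_result']
```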
"The microprogrammed controller concept has been used to implement the IBM 2841 Storage Control Unit, by means of which random access storage devices may be connected to a System/360 central processor. Because of its microprogram implementation, the 2841 can accommodate an unusually wide variety of devices, including