
to determine accuracy - a mismatch is printed out for human analysis since it is either a misspelled or a new word), and checking for illegitimate characters. The data is now on tape; any necessary correction changes or updating can be made directly." (Magnino, 1965, p. 204).

"Prior to constructing the name file, a ‘legitimate name' list and a 'common error' name list are tabulated . . . The latter list is formed by taking character error information compiled by the instrumentation system and thresholding it so only errors with significant probabilities remain; i.e., 'e' for 'a'. These are then substituted one character at a time in the names of the 'legitimate name' list to create a 'common error' name list. Knowing the probability of error and the frequency of occurrence of the 'legitimate name' permits the frequency of occurrence for the 'common error' name to be calculated." (Hennis, 1967, pp. 12-13).

2.35 "When a character recognition device errs in the course of reading meaningful English words it will usually result in a letter sequence that is itself not a valid word; i.e., a 'misspelling'," (Cornew, 1968, p. 79).

2.36 "Several possibilities exist for using the information the additional constraints provide. A particularly obvious one is to use special purpose dictionaries, one for physics texts, one for chemistry, one for novels, etc., with appropriate word lists and probabilities in each. . . ."

"Because of the tremendous amount of storage which would be required by such a 'word digram' method, an alternative might be to associate with each word its one or more parts of speech, and make use of conditional probabilities for the transition from one part of speech to another." (Vossler and Branston, 1964, p. D2.4-7).

2.37 "In determining whether or not to adopt an EDC system, the costliness and consequences of any error must be weighed against the cost of installing the error detection system. For example, in a simple telegram or teleprinter message, in which all the information appears in word form, an error in one or two letters usually does not prevent a reader from understanding the message. With training, the human mind can become an effective error detection and correction system; it can readily identify the letter in error and make corrections. Of course, the more unrelated the content of the message, the more difficult it is to detect a random mistake. In a list of unrelated numbers, for example, it is almost impossible to tell if one is incorrect." (Gentle, 1965, p. 70).

2.38 In addition to examples cited in a previous report in this series, we note the following:

"In the scheme used by McElwain and Evens, undisturbed digrams or trigrams in the garbled message were used to locate a list of candidate words each containing the digram or trigram. These were then matched against the garbled sequence taking into account various possible errors, such as a missing or extra dash, which might have occurred

in Morse Code transmission." (Vossler and Branston, 1964, p. D2.4−1).
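A minimal sketch of that candidate-lookup idea, with an invented word list; only same-length substitution errors are handled here, and the Morse-specific missing/extra-dash matching is omitted.

```python
# Sketch of locating candidate words through an undisturbed trigram of the
# garbled sequence, then matching candidates against it. Word list is invented;
# only same-length substitution errors are handled.

WORDS = ["REPORT", "RESORT", "REPEAT", "RECORD"]

def trigram_index(words):
    """Map each trigram to the words containing it."""
    index = {}
    for w in words:
        for i in range(len(w) - 2):
            index.setdefault(w[i:i + 3], set()).add(w)
    return index

def candidates(garbled, trusted_start, index):
    """Use a trigram believed to be undisturbed to fetch candidate words,
    then keep those differing from the garbled sequence in at most one place."""
    tri = garbled[trusted_start:trusted_start + 3]
    cands = []
    for w in index.get(tri, ()):
        if len(w) == len(garbled):
            mismatches = sum(a != b for a, b in zip(w, garbled))
            if mismatches <= 1:
                cands.append(w)
    return cands

INDEX = trigram_index(WORDS)
print(candidates("REPORX", trusted_start=1, index=INDEX))  # -> ['REPORT']
```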

"Harmon, in addition to using digram frequencies to detect errors, made use of a confusion matrix to determine the probabilities of various letter substitutions as an aid to correcting these errors." (Vossler and Branston, 1964, pp. D2.4-1 - D2.4-2).

"An interesting program written by McElwain and Evens was able to correct about 70% of the garbles in a message transmitted by Morse Code, when the received message contained garbling in 0-10% of the characters." (Vossler and Branston, 1964, p. D2.4—1).

"The design of the spoken speech output modality for the reading machine of the Cognitive Information Processing Group already calls for a large, disc-stored dictionary . . . The possibility of a dual use of this dictionary for both correct spelling and correct pronunciation prompted this study." (Cornew, 1968, p. 79).

"Our technique was first evaluated by a test performed on the 1000 most frequent words of English which, by usage, comprise 78% of the written language . . . For this, a computer program was written which first introduced into each of these words one randomly-selected, randomlyplaced letter substitution error, then applied this technique to correct it. This resulted in the following overall statistics 739 correct recoveries of the original word prior to any other; 241 incorrect recoveries in which another word appeared sooner; 20 cases where the misspelling created another valid word." (Cornew, 1968, p. 83).

"In operation, the word consisting of all first choice characters is looked up. If found, it is assumed correct; if not, the second choice characters are substituted one at a time until a matching word is found in the dictionary or until all second choice substitutions have been tried. In the latter case a multiple error has occurred (or the word read correctly is not in the dictionary)." (Andrews, 1962, p. 302).

2.39 "There are a number of different techniques for handling spelling problems having to do with names in general and names that are homonyms. Present solutions to the handling of name files are far from perfect." (Rothman, 1966, p. 13).

2.40 "The chief problem associated with . . large name files rests with the misspelling or misunderstanding of names at time of input and with possible variations in spelling at the time of search. In order to overcome such difficulties, various coding systems have been devised to permit filing and searching of large groups of names phonetically as well as alphabetically . . . A Remington Rand Univac computer program capable of performing the phonetic coding of input names has been prepared." (Becker and Hayes, 1963, p. 143).

"A particular technique used in the MGH [Massachusetts General Hospital] system is probably worth mentioning; this is the technique for phonetic indexing reported by Bolt et al. The use described

involves recognition of drug names that have been typed in, more or less phonetically, by doctors or nurses; in the longer view this one aspect of a large effort that must be expended to free the manmachine interface from the need for letter-perfect information representation by the man. People just don't work that way, and systems must be developed that can tolerate normal human imprecision without disaster." (Mills, 1967, p. 243).
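As an illustration of phonetic coding of the general kind these quotations describe (not the Remington Rand Univac program or the Bolt et al. technique), a classic Soundex-style code can be sketched as follows; variant spellings of a surname map to the same code:

```python
# Illustrative Soundex-style phonetic code, offered only as an example of the
# kind of coding the quotations describe; it is not the cited programs.

SOUNDEX_GROUPS = {
    "1": "BFPV", "2": "CGJKQSXZ", "3": "DT",
    "4": "L", "5": "MN", "6": "R",
}
CODE = {ch: digit for digit, letters in SOUNDEX_GROUPS.items() for ch in letters}

def soundex(name):
    """Classic Soundex: keep the first letter, encode the rest by consonant
    group, drop repeats and vowels, pad/truncate to four characters."""
    name = name.upper()
    out = name[0]
    prev = CODE.get(name[0], "")
    for ch in name[1:]:
        digit = CODE.get(ch, "")
        if digit and digit != prev:
            out += digit
        if ch not in "HW":              # H and W do not break a repeated group
            prev = digit
    return (out + "000")[:4]

# Variant spellings of the same surname receive the same code.
print(soundex("Smith"), soundex("Smyth"))          # S530 S530
print(soundex("Robinson"), soundex("Robertson"))   # R152 R163
```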

2.41 ". . . The object of the study is to determine if we can replace garbled characters in names. The basic plan was to develop the empirical frequency of occurrence of sets of characters in names and use these statistics to replace a missing character." (Carlson, 1966, p. 189).

"The specific effect on error reduction is impressive. If a scanner gives a 5% character error rate, the trigram replacement technique can correct approximately 95% of these errors. The remaining error is thus . . . 0.25% overall.

"A technique like this may, indeed, reduce the cost of verifying the mass of data input coming from scanners ... [and] reduce the cost of verifying massive data conversion coming from conventional data input devices like keyboards, remote terminals, etc." (Carlson, 1966, p. 191.)

2.42 "The rules established for coding structures are integrated in the program so that the computer is able to take a fairly sophisticated look at the chemist's coding and the keypunch operator's work. It will not allow any atom to have too many or too few bonds, nor is a '7' bond code permissible with atoms for which ionic bonds are not 'legal'. Improper atom and bond codes and misplaced characters are recognized by the computer, as are various other types of errors." (Waldo and De Backer, 1959, p. 720).


2.43 "Extensive automatic verification of the file data was achieved by a variety of techniques. As an example, extracts were made of principal lines plus the sequence number of the record: specifically, all corporate name lines were tracted and sorted; any variations on a given name were altered to conform to the standard. Similarly, all law firm citations were checked against each other. All city-and-state fields are uniform. A zipcode-and-place-name abstract was made, with the resultant file being sorted by zip code: errors were easy to sort and correct, as with Des Moines appearing in the Philadelphia listing." (North, 1968, p. 110).


"Then there is the even more sophisticated case where . . . An important input characteristic is that the data is not entirely developed for processing or retrieval purposes. It is thus necessary first to standardize and develop the data before manipulating it. Thus, to mention one descriptor, 'location', the desired machine input might be 'coordinate', 'city', and 'state', if a city is mentioned; and 'state' alone when no city is noted. However, inputs to the system might contain a coordinate and city without mention of a state. It is therefore necessary to develop the data and standardize before further processing commences.

"It is then possible to process the data against the existing file information . . . The objective of the processing is to categorize the information with respect to all other information within the files . . . To categorize the information, a substantial amount of retrieval and association of data is often required

"Many [data] contradictions are resolvable by the system." (Gurk and Minker, 1961, pp. 263-264).

2.44 "A number of new developments are based on the need for serving clustered environments. A cluster is defined as a geographic area of about three miles in diameter. The basic concept is that within a cluster of stations and computers, it is possible to provide communication capabilities at low cost. Further, it is possible to provide communication paths between clusters, as well as inputs to and outputs from other arrangements as optional features, and still maintain economies within each cluster. This leads to a very adaptable system. It is expected to find wide application on university campuses, in hospitals, within industrial complexes, etc." (Simms, 1968, p. 23).

2.45 "Among the key findings are the following: • Relative cost-effectiveness between timesharing and batch processing is very sensitive to and varies widely with the precise manmachine conditions under which experimental comparisons are made.

Time-sharing shows a tendency toward fewer man-hours and more computer time for experimental tasks than batch processing.

The controversy is showing signs of narrowing down to a competition between conversationally interactive time-sharing versus fast-turnaround batch systems.

• Individual differences in user performance are generally much larger and are probably more economically important than time-sharing/ batch-processing system differences.

Users consistently and increasingly prefer interactive time-sharing or fast turnaround batch over conventional batch systems.

⚫ Very little is known about individual performance differences, user learning, and human decision-making, the key elements underlying the general behavioral dynamics of man-computer communication.

Virtually no normative data are available on data-processing problems and tasks, nor on empirical use of computer languages and system support facilities - the kind of data necessary to permit representative sampling of problems, facilities and subjects for crucial experiments that warrant generalizable results." (Sackman, 1968, p. 350).

However, on at least some occasions, some clients of a multiple-access, time-shared system may be satisfied with, or actually prefer, operation in a batch or job-shop mode to extensive use of the conversational mode.

"Critics (see Patrick 1963, Emerson 1965, and MacDonald 1965) claim that the efficiency of timesharing systems is questionable when compared to modern closed-shop methods, or with economical small computers." (Sackman et al., 1968, p. 4).

Schatzoff et al. (1967) report on experimental comparisons of time-sharing operations (specifically, MIT's CTSS system) with batch processing as employed on IBM's IBSYS system.

". . . One must consider the total spectrum of tasks to which a system will be applied, and their relative importance to the total computing load." (Orchard-Hays, 1965, p. 239).

". . . A major factor to be considered in the design of an operating system is the expected job mix." (Morris et al., 1967, p. 74).

"In practice, a multiple system may contain both types of operation: a group of processors fed from a single queue, and many queues differentiated by the type of request being serviced by the attached processor group . ." (Scherr, 1965, p. 17).

2.46 "Normalization is a necessary preface to the merge or integration of our data. By merge, or integration, as I use the term here to represent the last stage in our processes, I am referring to a complex interfiling of segments of our data-the entries. In this 'interfiling,' we produce, for each article or book in our file, an entry which is a composite of information from our various sources. If one of our sources omits the name of the publisher of a book, but another gives it, the final entry will contain the publisher's name. If one source gives the volume of a journal in which an article appears, but not the month, and another gives the month, but not the volume, our final entry will contain both volume and month. And so on." (Sawin, 1965, p. 95).

"Normalize. Each individual printed source, which has been copied letter by letter, has features of typographical format and style, some of which are of no significance, others of which are the means by which a person consulting the work distinguishes the several 'elements' of the item. The family of programs for normalizing the several files of data will insert appropriate information separators to distinguish and identify the elements of each item and rearrange it according to a selected canonical style, which for the Pilot Study is one which conforms generally to that of the Modern Language Association." (Crosby, 1965, p. 43).

2.47 "Some degree of standardized processing and communication is at the heart of any information system, whether the system is the basis for mounting a major military effort, retrieving documents from a central library, updating the clerical and accounting records in a bank, assigning airline reservations, or maintaining a logistic inventory. There are two reasons for this. First, all information systems are formal schemes for handling the informational aspects of a formally specified venture.


Second, the job to be done always lies embedded within some formal organizational structure." (Bennett, 1964, p. 98).

"Formal organizing protocol exists relatively independently of an organization's purposes, origins, or methods. These established operating procedures of an organization impose constraints upon the available range of alternatives for individual behavior. In addition to such constraints upon the degrees of freedon within an organization as restrictions upon mode of dress, conduct, range of mobility, and style of performance, there are protocol constraints upon the format, mode, pattern, and sequence of information processing and information flow. It is this orderly constraint upon information processing and information flow that we call, for simplicity, the information system of an organization. The term 'system' implies little more than procedural restriction and orderliness. By 'information processing' we mean some actual change in the nature of data or documents. By 'information flow' we indicate a similar change in the location of these data or documents. Thus we may define an information system as simply that set of constraining specifications for the collection, storage, reduction, alteration, transfer, and display of organizational facts, opinions, and associated documentation which is established in order to manage, command if you will, and control the ultimate performance of an organization. . .

"With this in mind, it is possible to recognize the dangers associated with prematurely standardizing the information-processing tools, the forms, the data codes, the message layouts, the procedures for message sequencing, the file structures, the calculations, and especially the data-summary forms essential for automation. Standardization of these details of a system is relatively simple and can be accomplished by almost anyone familiar with the design of automatic procedures. However, if the precise nature of the job and its organizational implications are not understood in detail, it is not possible to know the exact influence that these standards will have on the performance of the system." (Bennett, 1964, pp. 99, 103).

2.48 "There is a need for design verification. That is, it is necessary to have some method for ensuring that the design is under control and that the nature of the resulting system can be predicted before the end of the design process. In commandand-control systems, the design cycle lasts from two to five years, the design evolving from a simple idea into complex organizations of hardware, software, computer programs, displays, human operations, training, and so forth. At all times during this cycle the design controller must be able to specify the status of the design, the impact that changes in the design will have on the command, and the probability that certain components of the system will work. Design verification is the process that gives the designer this control. The methods that

make up the design-verification process range from analysis and simulation on paper to full-scale system testing." (Jacobs, 1964, p. 44).

2.49 "Measurement of the system was a major area which was not initially recognized. It was necessary to develop the tools to gather data and introduce program changes to generate counts and parameters of importance. Future systems designers should give this area more attention in the design phase to permit more efficient data collection." (Evans, 1967, p. 83.)


2.51 "We will probably see a trend toward the concept of a computer as a collection of memories, buses and processors with distributed control of their assignments on a dynamic basis." (Clippinger, 1965, p. 209).

"Both Dr. Gilbert C. McCann of Cal. Tech and Dr. Edward E. David, Jr., of Bell Telephone Laboratories stressed the need for hierarchies of computers interconnected in large systems to perform the many tasks of a time-sharing system." (Commun. ACM 9, 645 (Aug. 1966).)

2.52 "Every part of the system should consist of a pool of functionally identical units (memories, processors and so on) that can operate independently and can be used interchangeably or simultaneously at all times


"Moreover, the availability of duplicate units would simplify the problem of queuing and the allocation of time and space to users." (Fano and Corbató, 1966, pp. 134-135).

"Time-sharing demands high system reliability and maintainability, encourages redundant, modular, system design, and emphasizes high-volume storage (both core and auxiliary) with highly parallel system operation." (Gallenson and Weissman, 1965, p. 14).

"A properly organized multiple processor system provides great reliability (and the prospect of continuous operation) since a processor may be trivially added to or removed from the system. A processor undergoing repair or preventive maintenance merely lowers the capacity of the system, rather than rendering the system useless." (Saltzer, 1966, p. 2).

"Greater modularity of the systems will mean easier, quicker diagnosis and replacement of faulty parts." (Pyke, 1967, p. 162).

"To meet the requirements of flexibility of capacity and of reliability, the most natural form . . . is

a modular multiprocessor system arranged so that processors, memory modules and file storage

units may be added, removed or replaced in accordance with changing requirements." (Dennis and Van Horn, 1965, p. 4). See also notes 5.83, 5.84.

2.53 "The actual execution of data movement commands should be asynchronous with the main processing operation. It should be an excellent use of parallel processing capability." (Opler, 1965, p. 276).

2.54 "Work currently in progress [at Western Data Processing Center, UCLA] includes: investigations of intra-job parallel processing which will attempt to produce quantititative evaluations of component utilization; the increase in complexity of the task of programming; and the feasibility of compilers which perform the analysis necessary to convert sequential programs into parallel-path programs." (Dig. Computer Newsletter 16, No. 4, 21 (1964).)

2.55 "The motivation for encouraging the use of parallelism in a computation is not so much to make a particular computation run more efficiently as it is to relax constraints on the order in which parts of a computation are carried out. A multi-program scheduling algorithm should then be able to take advantage of this extra freedom to allocate system resources with greater efficiency." (Dennis and Van Horn, 1965, pp. 19-20).

2.56 Amdahl remarks that "the principal motivations for multiplicity of components functioning in an on-line system are to provide increased capacity or increased availability or both." (1965, p. 38). He notes further that "by pooling, the number of components provided need not be large enough to accommodate peak requirements occurring concurrently in each computer, but may instead accommodate a peak in one occurring at the same time as an average requirement in the other." (Amdahl, 1965, pp. 38-39).
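A back-of-the-envelope illustration of the pooling argument, with invented load figures: sized separately, each computer must carry its own peak; pooled, the shared components need only cover a peak in one computer together with an average requirement in the other.

```python
# Illustrative arithmetic for Amdahl's pooling remark; the load figures are
# invented and stand for units of some shared component type.
peak_a, avg_a = 10, 4      # component units needed by computer A: peak / average
peak_b, avg_b = 8, 3       # component units needed by computer B: peak / average

separate = peak_a + peak_b                      # each sized for its own peak: 18
pooled = max(peak_a + avg_b, peak_b + avg_a)    # one peak plus the other's average: 13
print(separate, pooled)
```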

2.57 "No large system is a static entity-it must be capable of expansion of capacity and alteration of function to meet new and unforeseen requirements." (Dennis and Glaser, 1965, p. 5).

"Changing objectives, increased demands for use, added functions, improved algorithms and new technologies all call for flexible evolution of the system, both as a configuration of equipment and as a collection of programs." (Dennis and Van Horn, 1965, p. 4).

"A design problem of a slightly different character, but one that deserves considerable emphasis, is the development of a system that is 'open-ended'; i.e., one that is capable of expansion to handle new plants or offices, higher volumes of traffic, new applications, and other difficult-to-foresee developments associated with the growth of the business. The design and implementation of a data communications system is a major investment; proper planning at design time to provide for future growth will safeguard this investment." (Reagan, 1966, p. 24).

2.58 "Reconfiguration is used for two prime purposes: to remove a unit from the system for

service or because of malfunction, or to reconfigure the system either because of the malfunction of one of the units or to 'partition' the system so as to have two or more independent systems. In this last case, partitioning would be used either to debug a new system supervisor or perhaps to aid in the diagnostic analysis of a hardware malfunction where more than a single system component were needed." (Glaser et al., 1965, p. 202.)

"Often, failure of a portion of the system to provide services can entail serious consequences to the system users. Thus severe reliability standards are placed on the system hardware. Many of these systems must be capable of providing service to a range in the number of users and must be able to grow as the system finds more users. Thus, one finds the need for modularity to meet these demands. Finally, as these systems are used, they must be capable of change so that they can be adapted to the ever changing and wide variety of requirements, problems, formats, codes and other characteristics of their users. As a result general-purpose stored program computers should be used wherever possible." (Cohler and Rubenstein, 1964, p. 175).

2.59 "On-line systems are still in their early development stage, but now that systems are beginning to work, I think that it is obvious that more attention should be paid to the fail safe aspects of the problem." (Huskey, 1965, p. 141). "From our experience we have concluded that system reliability. . must provide for several levels of failure leading to the term 'fail-soft' rather than 'fail-safe"." (Baruch, 1967, p. 147).

Related terms are "graceful degradation" and "high availability", as follows:

"The military is becoming increasingly interested in multiprocessors organized to exhibit the property of graceful degradation. This means that when one of them fails, the others can recognize this and pick up the work load of the one that failed, continuing this process until all of them have failed." (Clippinger, 1965, p. 210).

"The term 'high availability' (like its synonym 'fail safe') has now become a cliche, and lacks any precise meaning. It connotes a system characteristic which permits recovery from all hardware errors. Specifically, it appears to promise that critical system and user data will not be destroyed, that system and job restarts will be minimized and that critical jobs can most surely be executed, despite failing hardware. If this is so, then multiprocessing per se aids in only one of the three characteristics of high availability." (Witt, 1968, p. 699). "The structure of a multi-computer system planned for high availability is principally determined by the permissible reconfiguration time and the ability to fail safely or softly. The multiplicity and modularity of system components should be chosen to provide the most economical realization of these requirements . . .

"A multi-computer system which can perform the full set of tasks in the presence of a single mal

function is fail-safe. Such a system requires at least one more unit of each type of system component, with the interconnection circuitry to permit it to replace any of its type in any configuration

"A multi-computer system which can perform a satisfactory subset of its tasks in the presence of a malfunction is fail-soft. The set of tasks which must still be performed to provide a satisfactory through degraded level of operation, determines the minimum number of each component required after a failure of one of its type." (Amdahl, 1965, p. 39).

"Systems are designed to provide either full service or graceful degradation in the face of failures that would normally cause operations to cease. A standby computer, extra mass storage devices, auxiliary power sources to protect against public utility failure, and extra peripherals and communication lines are sometimes used. Manual or automatic switching of spare peripherals between processors may also be provided." (Bonn, 1966, p. 1865).

2.60 "A third main feature of the communication system being described is high reliability. The emphasis here is not just on dependable hardware but on techniques to preserve the integrity of the data as it moves from entry device, through the temporary storage and data modes, over the transmission lines and eventually to computer tape or hard copy printer." (Hickey, 1966, p. 181.) 2.61 In addition to the examples cited in the discussion of client and system protection in the previous report in this series (on processing, storage, and output requirements, Section 2.2.4), we note the following:

"The primary objective of an evolving specialpurpose time-sharing system is to provide a real service for people who are generally not computer programmers and furthermore depend on the system to perform their duties. Therefore the biggest operational problem is reliability. Because the data attached to special-purpose system are important and also must be maintained for a long time, reliability is doubly crucial, since errors affecting the data base cannot only interrupt users' current procedures but also jeopardize past work." (Castleman, 1967, p. 17).

"If the system is designed to handle both specialpurpose functions and programming development, then why is reliability a problem? It is a problem because in a real operating environment some new 'dangerous' programs cannot be tested on the system at the same time that service is in effect. As a result, new software must be checked out during offhours, with two consequences. First, the system is not subjected to its usual daytime load during checkout time. It is a characteristic of time-shared programs that different 'bugs' may appear depending on the conditions of the overall system activity. For example, the time-sharing bug' of a program manipulating data incorrectly because another program processes the same data at virtually the same
