2.1.1. Job Assignment Module

Within a computational level, the Job Assignment module maintains a queue of commands received from the world model and the operator. It accepts all incoming commands and assigns them a position in the queue according to the priority level assigned to the command. The priority level is based on the requirements of the plan developed by the Planner at the next higher level. For example, when task decomposition requires information about an object's position, it activates a plan to detect the identifying features of that object and to update their positions. The activation of a plan raises the priority level assigned to the class of algorithms responsible for extracting the required information.

The use of a queue enables incoming commands to be prioritized as they are received. In this way, the information needed most immediately is always serviced first, but all information buffers are updated at specified time intervals. At the completion of execution, the Job Assignment module returns status to the requesting process.
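
As a concrete illustration, the following minimal sketch (in Python; the report does not specify an implementation, and all names here are assumptions) shows a queue that services the highest-priority command first while preserving arrival order among commands of equal priority:

    import heapq
    import itertools

    class JobQueue:
        """Sketch of the Job Assignment queue: commands are ordered by
        the priority assigned on arrival, ties broken by arrival order."""

        def __init__(self):
            self._heap = []
            self._arrival = itertools.count()

        def submit(self, command, priority):
            # Negate the priority so that higher values dequeue first.
            heapq.heappush(self._heap, (-priority, next(self._arrival), command))

        def next_command(self):
            # The most urgent (then oldest) command is serviced first.
            _, _, command = heapq.heappop(self._heap)
            return command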

At each level of the hierarchy, the operator interfaces only with the Job Assignment module. He/she may request a specific type of output, output from a particular algorithm, or termination of execution of an active process. He/she may also request a change of parameters for a specific algorithm or request processing in a special window of interest. The Job Assignment module writes parameters supplied by the operator into the world model global memory where they can be read by the Execution module at any level. The operator also specifies the mode of operation for each command he/she issues: either continuous operation until a "halt processing" command is received or execution for a fixed number of times. In all cases, an operator request is assigned the highest priority. Output from an operator's command is returned in the form of graphic displays, ASCII strings, or other easily understandable formats.
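
The operator interface described above can be pictured as a small command record plus a write into global memory. The sketch below is illustrative only; the field names, the priority ceiling, and the memory layout are assumptions, not part of the original design:

    from dataclasses import dataclass, field
    from typing import Optional

    OPERATOR_PRIORITY = 255  # assumed ceiling: operator requests always outrank others

    @dataclass
    class OperatorCommand:
        request: str                    # e.g. an output type, an algorithm, or "halt"
        mode: str = "fixed"             # "continuous" until halted, or "fixed"
        repetitions: int = 1            # number of executions when mode == "fixed"
        parameters: dict = field(default_factory=dict)
        window: Optional[tuple] = None  # optional window of interest (x0, y0, x1, y1)

    def post_operator_command(cmd: OperatorCommand, global_memory: dict) -> int:
        # Parameters are written into world model global memory, where the
        # Execution module at any level can read them.
        global_memory.setdefault("operator_parameters", {}).update(cmd.parameters)
        return OPERATOR_PRIORITY  # operator requests receive the highest priority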

2.1.2. Planner Module

The Planner module reads commands from the top of the Job Assignment queue. It distinguishes between commands that control hardware and commands that initiate a sensory processing algorithm. For the former, it interprets and passes activation commands to the Execution module. For the latter, it determines which algorithm, within the general class of algorithms that the sensory processing module at the given level can perform, is best suited to provide the results. Since each class of algorithms contains many methods of computing the required output (Appendices A and B describe the types of algorithms included in the class of filtering, enhancing, and segmentation techniques), the Planner module acts as a rule-based system to choose the most appropriate algorithm for a given situation. Decisions are based on criteria such as timing requirements, precision requirements, statistical analysis of sensed information, and knowledge about the environment (lighting conditions, power constraints, etc.). The world model global memory contains this information, and the Planner module reads and analyzes it as required. At completion of the command, the Planner returns status information to the Job Assignment module.
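
A hedged sketch of this rule-based choice follows; the criteria fields, the admissibility rules, and the scoring are illustrative assumptions standing in for the actual rule base:

    def select_algorithm(candidates, criteria):
        """Choose among one class of algorithms using decision criteria
        read from world model global memory (field names are assumptions)."""

        def admissible(alg):
            # Hard constraints: the algorithm must meet timing and precision.
            return (alg["max_time"] <= criteria["time_budget"]
                    and alg["precision"] >= criteria["required_precision"])

        def score(alg):
            # Soft preferences, e.g. past performance and suitability to the
            # current lighting conditions.
            bonus = 1.0 if criteria["lighting"] in alg["good_lighting"] else 0.0
            return alg["past_success_rate"] + bonus

        feasible = [a for a in candidates if admissible(a)]
        return max(feasible, key=score) if feasible else None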

2.1.3. Execution Module

The Execution module receives its commands from the Planner module in the same level of the hierarchy. It is responsible for issuing commands to control a physical device or sensor, or for passing algorithm parameters to sensory processing and activating the sensory processing system to execute the selected algorithm. When a particular algorithm is chosen for execution, the Execution module reads the parameters required for its execution from the world model global memory. The types of parameters stored in the world model include threshold values, histories of past performance for each algorithm, and sensor model information such as physical sensor parameters, initial conditions, etc. The Execution module then passes the algorithm command (or a pointer to the algorithm command) and all parameters needed for its execution to the world modeling module.
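
This hand-off can be sketched as below; the key names and the world modeling interface are assumptions made for illustration:

    def execute(command, global_memory, world_modeling):
        """Sketch of the Execution module: read the chosen algorithm's
        stored parameters and forward both to world modeling."""
        alg = command["algorithm"]
        params = {
            "thresholds": global_memory["thresholds"][alg],
            "history": global_memory["performance_history"][alg],
            # Sensor model information: physical parameters, initial conditions.
            "sensor_model": global_memory["sensor_model"],
        }
        # Pass the algorithm command (here, its name) plus its parameters onward.
        world_modeling.activate(alg, params)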

2.2. Sensory Processing

The sensory processing modules of the real-time control system compare incoming data with predicted information, integrate sensory data over space and time, and determine the detection of an event. At each level of the hierarchy, this information is used to update the world model. Each sensory processing module consists of four submodules: comparators, temporal integrators, a spatial integrator, and a detection threshold (fig. 5). A specific example of how these modules interact at a given level is given in section 3.2, where the sensory processing module at Level 1 is discussed.

[Figure 5: comparators, temporal integrators, spatial integrator, and detection threshold]

The order of the integrator modules can be reconfigured a priori depending on the algorithm applied. It may be appropriate for a specific application to perform temporal integration after spatial integration, such as when tracking a centroid of a moving object, or it may be unnecessary to do either spatial or temporal integration.
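
One way to picture this a priori reconfiguration is as pipeline composition; in the sketch below (names assumed), the integration stages may be reordered or omitted entirely:

    def build_pipeline(comparator, temporal=None, spatial=None, spatial_first=False):
        """Compose the sensory processing stages in a configurable order;
        either integrator may be omitted by passing None."""
        stages = [spatial, temporal] if spatial_first else [temporal, spatial]
        stages = [s for s in stages if s is not None]

        def run(observed, predicted):
            value = comparator(observed, predicted)
            for stage in stages:
                value = stage(value)
            return value

        return run

Tracking the centroid of a moving object, for example, would correspond to building the pipeline with spatial_first=True.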

2.2.1. Comparator Module

The comparator modules receive input from two sources: the world model and the sensory processing module at the next lower level. The input from the world model is a model of the expected output. The input from the level below in the sensory processing hierarchy consists of the results generated by that level. The comparator modules perform algorithm specific computations using these two inputs to generate values which are passed either to the temporal integrators or the spatial integrator.
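
Since the computation is algorithm specific, no single comparator is canonical; an element-wise difference between observation and model is one simple, illustrative choice:

    def difference_comparator(observed, predicted):
        """One possible comparator: element-wise differences between the
        sensed values and the world model's expected output."""
        return [o - p for o, p in zip(observed, predicted)]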

2.2.2. Temporal Integrators

Each temporal integrator combines its inputs over a given time window. The length of the time interval is supplied by the world model and depends upon factors such as timing and accuracy requirements. In addition, the window usually covers a shorter interval at lower levels of the control hierarchy and a longer interval at higher levels. The output from the temporal integrators is passed to both the world model and to the spatial integrator.
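
A sliding-window average is one simple realization of such an integrator; the sketch below assumes scalar inputs and takes the window length from the world model:

    from collections import deque

    class TemporalIntegrator:
        """Sketch of integration over a time window whose length is
        supplied by the world model (shorter at lower levels)."""

        def __init__(self, window_length):
            self._window = deque(maxlen=window_length)

        def integrate(self, value):
            self._window.append(value)
            # A running average is one simple integration rule.
            return sum(self._window) / len(self._window)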

2.2.3. Spatial Integrator

The spatial integrator module integrates values over space to produce a single response value. The range of the spatial integral is supplied by the world model, and the results of the spatial integration are sent to the world model to update confidence factors.
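
As an illustration (the integration rule and the representation of the range are assumptions), spatial integration can be as simple as summing the values inside the world-model-supplied region:

    def spatial_integrate(values, region):
        """Combine the values inside the given range into a single
        response value; a plain sum is one illustrative rule."""
        lo, hi = region
        return sum(values[lo:hi])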

2.2.4. Detection Module

The output from the spatio-temporal integration process is passed to the detection module for evaluation or event detection. When the output surpasses a prespecified threshold, indicating correspondence between observations and the prediction of the world model, event detection occurs. An event can be defined to be the detection of an edge point, the fit of a line, or the recognition of an object, depending on the level in the control hierarchy at which the detection is occurring. The correspondence of a prediction occurs when, for example, a moving object's centroid is within a small distance from its prediction based on a past centroid measurement and the object's velocity. The results of event detection are passed to the world model to update global memory.
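
The thresholding step and the centroid example can be sketched as follows (the tolerances and signatures are assumptions):

    import math

    def detect_event(response, threshold):
        # Event detection: the integrated response surpassing the
        # prespecified threshold signals correspondence with the model.
        return response > threshold

    def centroid_matches(measured, past_centroid, velocity, dt, tolerance):
        # The prediction extrapolates the past centroid measurement by
        # the object's velocity; detection requires the new measurement
        # to fall within a small distance of that prediction.
        expected = (past_centroid[0] + velocity[0] * dt,
                    past_centroid[1] + velocity[1] * dt)
        return math.dist(measured, expected) <= tolerance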

2.3. World Modeling

World modeling maintains the system's internal model of the world by continuously updating the model based upon sensory information. It consists of two components: support processes or functions which simultaneously and asynchronously support sensory processing and task decomposition, and the global data system which is updated by the world modeling support processes. The term world model refers to the two hierarchies of support processes together with the global data system. Throughout this document, the terms world model, world model support, and global database will be used interchangeably. Any of these terms implies the combined function of the world modeling Level 1 support module and the global data system.

2.3.1. World Modeling to Task Decomposition Interfaces

The interface with the world model provides decision-making criteria to the task decomposition system. It allows the Planner module to access global memory in order to select the optimal algorithm in a given situation. The Planner uses histories of performance, timing criteria, lighting conditions, expected range to the object, etc. to choose an algorithm or to manipulate hardware. This information is stored in the world model database. The Execution module selects the parameters or initialization conditions required for sensory processing or it actually executes the control algorithm. These parameters are also stored in the world model.
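
The shared store behind these interfaces can be pictured as below; the keys are illustrative assumptions, not the report's actual schema:

    # Sketch of world model global memory as a plain dictionary.
    global_memory = {
        "performance_history": {},   # histories of algorithm performance
        "timing_criteria": {},       # update-rate and deadline requirements
        "lighting": "ambient",       # current lighting conditions
        "expected_range": None,      # expected range to the object
        "algorithm_parameters": {},  # initialization data for sensory processing
    }

    def planner_criteria(memory):
        # The Planner reads only the decision-making fields it needs.
        return {key: memory[key] for key in
                ("performance_history", "timing_criteria",
                 "lighting", "expected_range")}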

2.3.2. World Modeling to Sensory Processing Interfaces

The interfaces from the world model to sensory processing allow sensory processing to read the algorithm selected by the task decomposition Planner, the parameters selected by the Execution module, and any additional command parameters, such as integral ranges. The world model support module analyzes the selected algorithm in order to provide the model required by the sensory processing comparator. In addition to providing sensory processing with an algorithm and its parameters, the world model also provides a prediction to the detection module. The prediction is a range of acceptable values that are used to determine whether an event has been successfully detected. A threshold value used in edge detection or a window for the centroid value of a moving object are two examples. The results of the sensory processing integration and detection processes are sent to the world model where they are used to update confidence factors and global memory.
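
A prediction expressed as a range of acceptable values might look like the following sketch (scalar values assumed for simplicity):

    def prediction_window(expected, tolerance):
        # The world model's prediction as a range of acceptable values,
        # e.g. a window around a moving object's expected centroid.
        return (expected - tolerance, expected + tolerance)

    def within_prediction(value, window):
        lo, hi = window
        return lo <= value <= hi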

3. Level 1 Interfaces and Operation

The following sections describe the functions of the task decomposition module, the sensory processing module, and the world model at Level 1 of the visual perception branch of the control system. Within the task decomposition system, the Job Assignment module accepts and queues commands from Level 2 and the human operator. The commands are passed to Planner modules, which plan to activate or deactivate the camera and select the most appropriate preprocessing and/or segmentation algorithm. Execution modules are responsible for sending current to the camera actuators, obtaining algorithm parameters, and writing the command, the selected algorithm, and its parameters into an area of the world memory. The sensory processing modules read the status of the camera and execute the selected algorithm on any incoming image data.

3.1. Level 1 Task Decomposition Module

Information that resides in the world model global data system is required by the task decomposition system to guide algorithm selection for the sensory processing system.

Figure 6 details the information requirements of Level 1 modules from the world model, from other processing levels, and from a human operator.

3.1.1. Level 1 Job Assignment Module

The Job Assignment module at Level 1 maintains a prioritized command queue for requests for processed data received from the Level 2 task decomposition module and/or the operator. A background or default algorithm associated with each class of processing is assigned a priority so that it is performed periodically for system reliability and is executed when no other requests are pressing. Commands received from the operator are always assigned highest priority, and the Job Assignment module places these commands at the top of the queue. In this way, operator commands are always acted upon immediately. When the Job Assignment module receives status information indicating the completion of an operator command, it reads the output information from a predefined buffer and displays it in an easily understandable manner such as a graphic display.
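
The scheduling policy described here, with background defaults filling idle time, can be sketched as follows (the structure and names are assumed):

    def next_job(queue, default_algorithms):
        """Serve the prioritized queue; when no requests are pending,
        rotate through the background/default algorithms so each class
        is still exercised periodically for system reliability."""
        if queue:
            # Operator commands were placed at the top on arrival.
            return queue.pop(0)
        alg = default_algorithms.pop(0)
        default_algorithms.append(alg)  # rotate for the next idle slot
        return alg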

3.1.1.1. Level 2 to Level 1 Job Assignment Module Interface

The Job Assignment module at Level 1 interfaces with Level 2 and an operator. It accepts requests to control the camera or to choose which operations are performed on brightness pixels. All incoming commands are coordinated through this module by prioritizing them in a single queue. The contents of these commands are described in the following sections and are detailed in figure 7.

The commands from Level 2 request that either the camera be activated and that preprocessing and/or segmentation be performed on pixel data or that the camera be turned off. Each command includes some or all of the following information:

Command number

A process requesting data must be able to track the status of its request. Level 1 associates the condition of a request with its unique command number.

Processing request

Level 2 or the operator requests the type of information to be extracted from the data; the request states which class of information to extract. For example, if edges are needed by Level 2, the directive sent may specify the need for an edge point image.

Timing requirements

The update rate of results is specified to keep current information supplied to the rest of the system. The mode may be specified as continuous so that information is processed without needing to be requested repeatedly. The results may also need to be supplied within a specified amount of time so that other processes can rely on their accuracy. The velocity and acceleration of the manipulator affect the amount of time available for locating image features: high rates of velocity and acceleration of the robot manipulator imply a high update rate.

Precision requirements

The distance between the manipulator and objects in its workspace dictates the amount of
