Rapidly Reconfigurable Event-Set Based Line-Oriented Evaluations Generator

About RRLOE

Line-Oriented Evaluation (LOE) is an evaluation methodology used in the Advanced Qualification Program (AQP) to evaluate trainee performance, and to validate trainee proficiency. LOEs consist of flight simulation scenarios that are developed by the training organization in accordance with a methodology approved by the Federal Aviation Administration's Advanced Qualification Program Branch Office (AFS-230). The scenarios themselves are approved by the local FAA Principal Operations Inspector's (POI) office.

In the past, LOEs were developed and approved individually. That required each LOE to be separately conceived, developed, and tested by the training organization, and then to be individually reviewed and approved by the FAA. Thus, the development of LOEs was both costly and time consuming. As a result, training organizations usually had only a limited number of LOEs available for evaluation, each of which was approved for only a limited time period. Repeated use of a small number of fixed LOE scenarios created the potential for the LOE events to become known in advance by the trainees in the organization, thus reducing the validity of both individual trainee evaluation and fleet proficiency assurance.

One way to improve the validity of the LOE methodology would be to develop a process that avoids the use of a small set of fixed scenarios over an extended period of training within an organization. If events which form the building blocks of a full LOE could be individually developed and approved by the FAA, then these events could be used as a database from which to assemble complete, unique LOEs. The training organization could rapidly build new LOE scenarios with desired events within them, without seeking FAA approval of each complete scenario. The specific content could be varied while controlling the general content and overall difficulty. This would lead to fair evaluations for all trainees on fresh, variable, and valid training scenarios.
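As a rough illustration of this idea, the sketch below assembles a scenario by drawing one pre-approved event set per flight phase from a small library and accepting only combinations whose summed difficulty falls in a target band. The event sets, ratings, and the Python representation are hypothetical; they merely stand in for the FAA-approved event-set database described above.

    import random
    from dataclasses import dataclass

    @dataclass
    class EventSet:
        name: str
        phase: str         # flight phase in which the event set occurs
        difficulty: float  # relative difficulty rating (illustrative scale)

    # Illustrative stand-in for a library of individually approved event sets.
    LIBRARY = [
        EventSet("engine anti-ice failure", "climb", 2.0),
        EventSet("hydraulic system caution", "cruise", 3.0),
        EventSet("runway change", "descent", 1.5),
        EventSet("flap malfunction", "approach", 2.5),
        EventSet("go-around", "approach", 2.0),
    ]

    def assemble_loe(phases, target=(6.0, 9.0), attempts=1000):
        """Draw one event set per flight phase until the summed difficulty
        of the scenario falls inside the target band."""
        for _ in range(attempts):
            scenario = [random.choice([e for e in LIBRARY if e.phase == p]) for p in phases]
            total = sum(e.difficulty for e in scenario)
            if target[0] <= total <= target[1]:
                return scenario, total
        raise RuntimeError("no acceptable combination found")

    scenario, total = assemble_loe(["climb", "cruise", "descent", "approach"])
    for event in scenario:
        print(f"{event.phase:>9}: {event.name} (difficulty {event.difficulty})")
    print("total difficulty:", total)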

A procedure like the one described above, however, poses numerous challenges to the human factors (HF) researcher. In particular, four areas require the integration of psychological research, HF principles, and software development. These are described below.

Skill-based Training

It is generally accepted that the most effective approach to enhancing teamwork in the cockpit is to engender specific knowledge and skills. Skill-based training, therefore, has become a central tenet of the Advanced Qualification Program (AQP). The advantages of skill-based training come at some cost, however. First, it requires the elaboration of both the technical and coordination skills that are to be trained. This step has been completed effectively by advanced participants in AQP. However, identifying the skills, in and of itself, is not enough to result in effective training. Thus, a second requirement of skill-based training is the creation of a mechanism to ensure appropriate practice and feedback. This need is one of the practical drivers of the RRLOE research and development program.

Event-based Assessment

A logical consequence of skill-based training is skill-based assessment, for which Line-Oriented Evaluation (LOE) is the mechanism under AQP. There is a clear advantage to a methodology that links the LOE development process, from the identification of skills and knowledges, through event sets and observable behaviors, to simulator scenario scripts. Another practical driver of the RRLOE effort was therefore to generate not only a theoretical procedure, but also a practical tool, to speed up the development of LOE scenarios.

Applied Research Issues in the Development of an RRLOE Generator

Realism. A first challenge in developing this type of methodology and associated tools has been to conduct a program of research to identify those aspects of LOE scenarios that are required to make LOEs a valid and realistic assessment situation. We approached this problem by integrating the results of modern knowledge elicitation techniques with innovative database development. The result of this process was a theoretical approach (i.e., the 'Domino-Method,' see Bowers, Jentsch, Baker, Prince, & Salas, 1997) that became the basis for the existing expert system that tracks realism and continuity in the RRLOE program. This novel approach allows individual event sets to be added to a library of event sets for LOEs without the need to check their compatibility with other event sets manually. In fact, the resulting system can track more parameters relevant to a flight than a human LOE developer could in the past.
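One plausible way to picture the domino analogy is that an event set may follow another only if the flight state it requires on entry matches the state the previous event set leaves behind. The sketch below implements that chaining check; the state parameters and event sets are invented for illustration, and the actual RRLOE expert system tracks far more parameters than this.

    # Hypothetical event sets: (required entry state, resulting exit state).
    EVENT_SETS = {
        "deferred APU":           ({"phase": "preflight"}, {"phase": "preflight", "apu": "inop"}),
        "single-engine taxi":     ({"phase": "preflight"}, {"phase": "taxi"}),
        "windshear on departure": ({"phase": "taxi"},      {"phase": "climb"}),
        "pressurization problem": ({"phase": "climb"},     {"phase": "cruise", "altitude": "low"}),
    }

    def compatible(exit_state, entry_state):
        """An event set can follow another if every parameter it requires
        on entry is satisfied by the previous event set's exit state."""
        return all(exit_state.get(k) == v for k, v in entry_state.items())

    def continuity_ok(sequence):
        """Check domino-style continuity across an ordered list of event sets."""
        for prev, nxt in zip(sequence, sequence[1:]):
            if not compatible(EVENT_SETS[prev][1], EVENT_SETS[nxt][0]):
                return False
        return True

    print(continuity_ok(["single-engine taxi", "windshear on departure"]))  # True
    print(continuity_ok(["deferred APU", "windshear on departure"]))        # False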

A second issue pursuant to realism is the impact of weather in LOEs. Recent research has demonstrated that weather is a prime consideration in aircrew decision making (e.g., Jentsch, Irvin, & Bowers, 1997). However, weather has traditionally been difficult to incorporate in LOE development. Consequently, weather reports given to pilots in LOEs often were developed on the basis of small changes to existing weather paperwork and did not capture the complexities of the operational environment. To remedy this situation, we have conducted extensive research into the variables that affect pilot decision making and have generated both a mechanism and a sample library of advanced weather scenarios that systematically vary those critical elements. It should be noted that this is the first time that theoretical research on the effects of weather on aviator decision making has been made available to aviation training organizations in the form of a practical tool.
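As a hedged illustration of what systematically varying the critical elements could look like, the sketch below enumerates combinations of a few weather variables commonly tied to approach and landing decisions. The specific variables and values are assumptions for this example, not the RRLOE weather library.

    import itertools

    CEILINGS_FT   = [200, 400, 800]   # ceiling above field elevation
    VISIBILITIES  = [0.5, 1.0, 3.0]   # visibility in statute miles
    CROSSWINDS_KT = [5, 15, 25]       # crosswind component at the runway

    def weather_variants():
        """Enumerate all combinations of the critical elements so a scenario
        builder can select, say, one marginal and one below-minimums case."""
        for ceiling, vis, wind in itertools.product(CEILINGS_FT, VISIBILITIES, CROSSWINDS_KT):
            yield {"ceiling_ft": ceiling, "visibility_sm": vis, "crosswind_kt": wind}

    for variant in list(weather_variants())[:3]:
        print(variant)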

Difficulty. Another major research thrust of the RRLOE project relates to the estimation and measurement of LOE difficulty. Every LOE, regardless of whether it is generated by a human or a computer, must fall within an acceptable range of difficulty. A human LOE developer achieves this largely through an experiential process that takes the combination of event sets, environmental conditions, and assessment expectations into account. The challenge for RRLOE, therefore, was to identify and describe the process used by humans and translate it into a mathematical model that the machine could execute. It was also important to validate the resulting model. We have done this through several studies at air carriers and have presented the results at international meetings (cf. Jentsch, Abbott, & Bowers, 1999). We are now confident that the proposed method leads to a range of equivalent and fair scenarios. Again, this is the first time that a method for LOE difficulty assessment has been developed, validated, and described for the operational community.
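The validated model is described in Jentsch, Abbott, and Bowers (1999); purely as a generic illustration of the idea, the sketch below combines event-set ratings, an environmental factor, and an assessment load into one score and accepts a scenario only if that score falls inside an acceptance band. All weights and ratings shown are hypothetical.

    def scenario_difficulty(event_ratings, weather_factor, assessment_load,
                            w_events=1.0, w_weather=0.5, w_assessment=0.3):
        """Combine component ratings into a single difficulty estimate
        (weights are illustrative, not the validated RRLOE coefficients)."""
        return (w_events * sum(event_ratings)
                + w_weather * weather_factor
                + w_assessment * assessment_load)

    def within_band(score, low=6.0, high=9.0):
        """Accept only scenarios whose estimated difficulty is comparable
        to other approved LOEs."""
        return low <= score <= high

    score = scenario_difficulty([2.0, 2.5, 1.5], weather_factor=3.0, assessment_load=2.0)
    print(score, within_band(score))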

HF Aspects of the Software

Finally, the results of the research and development effort described above needed to be translated into a user-friendly tool set, so as to facilitate transfer of the research results to the operational arena. In developing this tool set, we had to study and apply human factors principles related to airline culture, operational environment, and computer sophistication across the wide range of operators participating in AQP. The resulting software tool set includes tools both for operators that are just beginning the AQP process and for those that are far advanced within it. In addition, following the skill-based training approach described above, we had to integrate existing task and skill analyses with the goals and tools of RRLOE. This required the consistent application of HF guidelines for Computer-Human Interface (CHI) design in our program. Further, through research at various airlines, we were able to arrive at a standardized way to describe event sets that can be used by all participants in the AQP process.
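To illustrate what a standardized event-set description might contain, the sketch below defines a simple record linking an event set to its flight phase, trigger, observable behaviors, and skills. The field names are assumptions for illustration; the actual RRLOE format was derived from the research at participating airlines.

    from dataclasses import dataclass, field

    @dataclass
    class EventSetRecord:
        title: str
        flight_phase: str
        trigger: str                         # condition that introduces the event
        observable_behaviors: list = field(default_factory=list)
        technical_skills: list = field(default_factory=list)
        crm_skills: list = field(default_factory=list)
        difficulty_rating: float = 0.0

    example = EventSetRecord(
        title="Hydraulic system caution in cruise",
        flight_phase="cruise",
        trigger="HYD caution illuminates at top of climb",
        observable_behaviors=["runs appropriate checklist", "briefs approach implications"],
        technical_skills=["abnormal procedures"],
        crm_skills=["workload management", "communication"],
        difficulty_rating=3.0,
    )
    print(example.title, "-", example.flight_phase)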



References

Bowers, C., Jentsch, F., Baker, D., Prince, C., & Salas, E. (1997). Rapidly reconfigurable event-set based line operational evaluation scenarios. Proceedings of the Human Factors and Ergonomics Society 41st Annual Meeting, Albuquerque, NM (pp. 912-915). Santa Monica, CA: Human Factors and Ergonomics Society.

Jentsch, F., Abbott, D., & Bowers, C. (1999). Do three easy tasks make one difficult one? Studying the perceived difficulty of simulation scenarios. Proceedings of the Tenth International Symposium on Aviation Psychology. Columbus: The Ohio State University.

Jentsch, F., Irvin, J., & Bowers, C. (1997). Differences in situation assessment between experts and prospective first officers. Proceedings of the Ninth International Symposium on Aviation Psychology (pp. 1228-1232). Columbus: The Ohio State University.
NAWCTSD / FAA / UCF - The Partnership for Aviation Team Training Research