NDM Tools

Through its scientific and applied endeavors, the NDM community has generated many tools, including methods, models, guidance, and research supports. This list is not intended to be comprehensive; it is only a survey of the landscape.

These NDM community members contributed to the list: Cindy Dominguez, Julie Gore, Robert Hoffman, Devorah Klein, Gary Klein, Laura Militello, Brian Moon, Emilie Roth, Jan Maarten Schraagen, and Neelam Naikar. Adam Zaremsky added the short descriptors and references.

Knowledge Elicitation

a. Critical Decision Method (CDM): A knowledge elicitation technique that applies a set of cognitive probes over multiple retrospective sweeps of a nonroutine incident to determine the basis for situation assessment and decision making during the incident.

  • Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19(3), 462-472.
  • Hoffman, R. R., Crandall, B., & Shadbolt, N. (1998). Use of the critical decision method to elicit expert knowledge: A case study in the methodology of cognitive task analysis. Human Factors, 40(2), 254-276.

b. Situation Awareness Record: A record of the specific changes in cue usage and goals in an expert’s understanding of the dynamics of a particular case.

  • Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19(3), 462-472.

c. Applied Cognitive Task Analysis (ACTA): A streamlined set of interview methods for identifying cognitive demands and skills needed to perform a task proficiently.

  • Militello, L. G., & Hutton, R. J. (1998). Applied cognitive task analysis (ACTA): A practitioner’s toolkit for understanding cognitive task demands. Ergonomics, 41(11), 1618-1641.

d. Knowledge Audit (from ACTA): A set of probes organized to capture the most important knowledge categories recognized as characteristic of expertise.

  • Militello, L. G., & Hutton, R. J. (1998). Applied cognitive task analysis (ACTA): A practitioner’s toolkit for understanding cognitive task demands. Ergonomics, 41(11), 1618-1641.
  • Klein, G., & Militello, L. (2004). The knowledge audit as a method for cognitive task analysis. In How Professionals Make Decisions (pp. 335-342).

e. Cognitive Audit: A method for assessing and identifying which specifically trainable cognitive skills an instructor might want to address.

f. Concept Maps: A tool for organizing knowledge that shows the relationships among events and objects between which a perceived regularity exists, using circles or boxes connected by linking lines. (A minimal data-structure sketch follows the references below.)

  • Novak, J. D., & Cañas, A. J. (2006). The theory underlying concept maps and how to construct them. Florida Institute for Human and Machine Cognition, 1(1), 1-31.
  • Novak, J. D., & Cañas, A. J. (2007). Theoretical origins of concept maps, how to construct them, and uses in education. Reflecting Education, 3(1), 29-42.
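As a minimal data-structure sketch (an illustration only, not the IHMC software or any published tooling), a concept map can be represented as a set of propositions, each a concept-link-concept triple. The class name and example propositions below are assumptions for illustration.

```python
# A concept map as a set of propositions: (concept, linking phrase, concept).
from collections import defaultdict

class ConceptMap:
    def __init__(self):
        # concept -> list of (linking_phrase, target_concept)
        self.links = defaultdict(list)

    def add_proposition(self, source, linking_phrase, target):
        """Record a proposition such as ('clouds', 'produce', 'rain')."""
        self.links[source].append((linking_phrase, target))

    def propositions(self):
        """Yield every proposition as a readable sentence fragment."""
        for source, edges in self.links.items():
            for phrase, target in edges:
                yield f"{source} {phrase} {target}"

cmap = ConceptMap()
cmap.add_proposition("concept maps", "organize", "knowledge")
cmap.add_proposition("concepts", "are connected by", "linking phrases")
for p in cmap.propositions():
    print(p)
```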

Cognitive Specifications and Representation

a. Cognitive Requirements Table: A format that helps practitioners focus their analysis on the intended project goals, with headings based on the types of information needed to develop a new course or design a new system.

  • Militello, L. G., & Hutton, R. J. (1998). Applied cognitive task analysis (ACTA): A practitioner’s toolkit for understanding cognitive task demands. Ergonomics, 41(11), 1618-1641.

b. Use of Cognitive Requirements to guide training and design: Applying Naturalistic Decision Making cognitive frameworks to describe more accurately the processes involved in real-world decision making, so that training and system designs support what decision makers are actually trying to do.

  • Militello, L. G., & Klein, G. (2013). Decision-centered design. In The Oxford Handbook of Cognitive Engineering (pp. 261-271).
  • Militello, L. G., Dominguez, C. O., Lintern, G., & Klein, G. (2010). The role of cognitive systems engineering in the systems engineering design process. Systems Engineering, 13(3), 261-273.

c. Critical Cue Inventory: A collection of all of the informational and perceptual cues that are pinpointed in critical decision interviews.

  • Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19(3), 462-472.

d. Cognimeter: A system for visual monitoring of a user’s mental state, assessing human emotion, mental workload, and stress in real time.

  • Chrenka, J., Hutton, R. J. B., Klinger, D. W., & Anastasi, D. (2001). The cognimeter: Focusing cognitive task analysis in the cognitive function model. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 4, 1738-1742.

e. Integrated Cognitive Analyses for Human-Machine Teaming (ICA-HMT): A strategy for facilitating human-machine teaming consisting of a set of activities: analyzing the function allocation trade space, analyzing operational demands and work requirements, analyzing interdependencies between humans and automation, evaluating alternative options with human performance modeling and simulation, and identifying and evaluating alternative function allocation and crewing options.

  • Ernst, K., Roth, E., Militello, L., Sushereba, C., DiIulio, J., Wonderly, S., … & Taylor, G. (2019). A strategy for determining optimal crewing in future vertical lift: Human-automation function allocation. Proceedings of the Vertical Lift Society’s 75th Annual Forum & Technology Display, Philadelphia, PA.
  • Sushereba, C. E., DiIulio, J. B., Militello, L. G., & Roth, E. (2019, November). A tradespace framework for evaluating crewing configurations for future vertical lift. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 63, No. 1, pp. 352-356). Los Angeles, CA: SAGE Publications.

f. Contextual Activity Templates: A representation that characterizes activity in work systems decomposed into work situations and work functions, capturing all of the possible combinations of work situations, work functions, and control tasks. (A minimal sketch follows the reference below.)

  • Naikar, N., Moylan, A., & Pearce, B. (2006). Analyzing activity in complex systems with cognitive work analysis: Concepts, guidelines and case study for control task analysis. Theoretical Issues in Ergonomics Science, 7(4), 371-394.
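A minimal sketch of the template as a work-situation by work-function matrix; the situations, functions, and entries below are invented placeholders, not data from Naikar et al.'s case study.

```python
# A contextual activity template: mark where each work function can occur.
work_functions = ["navigation", "surveillance", "communication"]

# work situation -> set of work functions possible in that situation
template = {
    "pre-flight": {"navigation", "communication"},
    "transit": {"navigation", "communication"},
    "on-station": {"navigation", "surveillance", "communication"},
}

# Render the template as a simple table: 'x' marks a possible combination.
print(f"{'':>12}" + "".join(f"{f:>15}" for f in work_functions))
for situation, possible in template.items():
    cells = "".join(f"{'x' if f in possible else '-':>15}" for f in work_functions)
    print(f"{situation:>12}{cells}")
```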

g. Diagram of Work Organization Possibilities: A diagram for complex sociotechnical systems that shows which work demands each actor can be responsible for and depicts the fundamental boundaries on the allocation or distribution of work demands; from these boundaries, various work organization possibilities, regarded as emergent, may be derived.

  • Naikar, N., & Elix, B. (2016). Integrated system design: Promoting the capacity of sociotechnical systems for adaptation through extensions of cognitive work analysis. Frontiers in Psychology, 7, 962.

Training

a. ShadowBox: A training approach that allows trainees to learn how experts make sense of situations, what they pay attention to, why they make their choices, and what their mental models are, all without the expert having to be present.

  • Klein, G., & Borders, J. (2016). The ShadowBox approach to cognitive skills training: An empirical evaluation. Journal of Cognitive Engineering and Decision Making, 10(3), 268-280.
  • Klein, G., Hintze, N., & Saab, D. (2013, May). Thinking inside the box: The ShadowBox method for cognitive skill development. In Proceedings of the 11th International Conference on Naturalistic Decision Making (pp. 121-124).
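As a hedged illustration of how such training can be assessed, the sketch below compares a trainee's ranking of options at a decision point with an expert panel's ranking. The top-pick overlap score and the scenario options are assumptions for illustration, not the published ShadowBox scoring procedure.

```python
# Compare a trainee's option ranking against an expert panel's ranking.
def top_pick_overlap(trainee_ranking, expert_ranking, k=2):
    """Fraction of the experts' top-k options the trainee also ranked in
    their top k (1.0 = full agreement on the most important choices)."""
    trainee_top = set(trainee_ranking[:k])
    expert_top = set(expert_ranking[:k])
    return len(trainee_top & expert_top) / k

# Hypothetical decision point from a training scenario.
trainee = ["request backup", "evacuate", "wait", "shelter in place"]
experts = ["evacuate", "shelter in place", "request backup", "wait"]

print(f"Agreement with expert panel: {top_pick_overlap(trainee, experts):.0%}")
```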

b. Tactical Decision Games/Decision Making Exercises: Tactical Decision Games (TDGs) are simple, fun, and effective exercises for improving one’s decision-making ability and tactical acumen: by repeatedly playing through problems, participants learn to make better decisions and to make decisions better.

  • Schmitt, J. F. (1994). Mastering Tactics: A tactical decision games workbook. Marine Corps Association.

c. Artificial Intelligence Quotient (AIQ) for helping users operate AI systems: A set of tools for human-machine systems designed to increase human operators’ knowledge and understanding of the technology’s explainability.

d. On-the-Job Training: A cognitive model of training that has one primary function, learning management, and six subordinate functions of a provider: setting/clarifying goals, providing instruction, assessing trainee proficiency and diagnosing barriers to progress, sharing expertise, setting a climate conducive to learning, and promoting ownership of the learning process and performance of the trainee.

  • Zsambok, C. E., Kaempf, G. L., Crandall, B., & Kyne, M. (1997). A comprehensive program to deliver on-the-job training (OJT). Fairborn, OH: Klein Associates Inc.

e. CAARGO: A training tool designed to help plant operators in the oil, gas, and petrochemical industries build richer mental models and more effective mindsets, so that they make better decisions.

  • Borders, J., Klein, G., & Besuijen, R. (2020). Mental models: Cognitive After-Action Review Guide for Observers video final report. Center for Operator Performance (unpublished report).

Design

a. Decision-Centered Design: A framework that uses CTA methods to identify the tough, key decisions that performers face, and then to design technology, training, and processes around those identified requirements.

  • Militello, L. G., & Klein, G. (2013). Decision-centered design. In The Oxford Handbook of Cognitive Engineering (pp. 261-271).

b. Principles for Collaborative Automation: Design principles derived from a replanning tool built to support human operators collaborating with an automated planner; the planner was created to be observable and directable and to share a frame of reference with the human operator, allowing work to be completed iteratively and jointly.

  • Scott, R., Roth, E., Truxler, R., Ostwald, J., & Wampler, J. (2009, October). Techniques for effective collaborative automation for air mission replanning. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 53, No. 4, pp. 202-206). Los Angeles, CA: SAGE Publications.

c. Principles of Human-Centered Computing: A set of methodologies applicable in any context where humans interact directly with computing devices and systems. It attends to personal, social, and cultural aspects and addresses issues such as information design, human-information interaction, human-computer interaction, human-human interaction, and the relationships between computing technology and art, social, and cultural issues.

  • Hoffman, R. R., Roesler, A., & Moon, B. M. (2004). What is design in the context of human-centered computing? IEEE Intelligent Systems, 19(4), 89-95.
  • Zhang, J., Patel, V. L., Johnson, K. A., & Smith, J. W. (2002). Designing human-centered distributed information systems. IEEE Intelligent Systems, 17(5), 42-47.

Evaluation and Assessment

a. Sero!: A software platform for conducting concept mapping-based assessments of mental models.

  • www.serolearn.com
  • Moon, B., Johnston, C., & Moon, S. (2018). A case for the superiority of concept mapping-based assessments for assessing mental models. In Concept Mapping: Renewing Learning and Thinking: Proceedings of the 8th International Conference on Concept Mapping. Medellín, Colombia: Universidad EAFIT.

b. Concept Maps: A tool for organizing knowledge that shows the relationships among events and objects between which a perceived regularity exists, using circles or boxes connected by linking lines. (Concept Maps are also listed above as a knowledge elicitation method.)

  • Novak, J. D., & Cañas, A. J. (2006). The theory underlying concept maps and how to construct them. Florida Institute for Human and Machine Cognition, 1(1), 1-31.
  • Novak, J. D., & Cañas, A. J. (2007). Theoretical origins of concept maps, how to construct them, and uses in education. Reflecting Education, 3(1), 29-42.

c. Decision Making Record for Performance Reviews: An alternative to standard performance reviews in which the reviewer and reviewee independently draw up and evaluate a list of decisions made in the prior year, focusing attention on the quality of the decisions themselves, not on the decision maker.

d. Work-Centered Evaluation (method for technology evaluation): An evaluation framework that focuses on a support system’s usability, usefulness, and impact in supporting human performance in complex work environments.

  • Eggleston, R. G., Roth, E. M., & Scott, R. (2003, October). A framework for work-centered product evaluation. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 47, No. 3, pp. 503-507). Los Angeles, CA: SAGE Publications.

Teamwork

a. SA Calibration Questions: A process for team members to share situation awareness (SA) information; it may include a group exercise of questioning norms, checking for conflicting information, setting up coordination and prioritization of tasks, and establishing contingency planning.

  • Klinger, D. W., & Klein, G. (1999). Emergency response organizations: An accident waiting to happen. Ergonomics in Design, 7(3), 20-25.

b. Cultural Lens Model: A model that captures the nature and origins of cognitive differences among national and cultural groups, which usually arise from a group’s origins in a specific physical and social ecology, and that provides mechanisms for increasing comprehension and effectiveness in the face of these differences.

  • Klein, H. A. (2004). Cognition in natural settings: The cultural lens model. In Cultural Ergonomics (pp. 249-280).

Risk Assessment

a. Pre-mortem: A managerial strategy, used at the outset of a project, in which the team hypothetically presumes that the project has failed spectacularly and then works backward to identify the weaknesses or threats it can avoid.

  • Klein, G. (2007). Performing a project premortem. Harvard Business Review, 85(9), 18-19.

Measurement

a. Macrocognitive Measures: The six main macrocognitive functions are identified as decision making, sensemaking, problem detection, planning, adapting, and coordinating.

  • Klein, G. (2018). Macrocognitive measures for evaluating cognitive work. In Macrocognition Metrics and Scenarios (pp. 47-64). CRC Press.

b. Performance assessment by order statistics: A novel method of statistical data analysis for assessing the learnability of cognitive work methods.

  • Hoffman, R. R., Marx, M., Amin, R., & McDermott, P. L. (2010). Measurement for evaluating the learnability and resilience of methods of cognitive work. Theoretical Issues in Ergonomics Science, 11(6), 561-575.
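As a hedged illustration only, the sketch below computes a pairwise-dominance probability (closely related to the Mann-Whitney U statistic and grounded in order statistics), assuming the question is how often a performer trained with one method outscores a performer trained with another. It is a stand-in for the idea, not the specific procedure in Hoffman et al. (2010).

```python
# Order-statistics comparison of two groups of performance scores.
def dominance_probability(scores_a, scores_b):
    """Probability that a random score from A exceeds a random score from B,
    counting ties as half, estimated over all pairs."""
    wins = sum(
        1.0 if a > b else 0.5 if a == b else 0.0
        for a in scores_a for b in scores_b
    )
    return wins / (len(scores_a) * len(scores_b))

method_a = [78, 85, 90, 72, 88]  # hypothetical task scores, method A
method_b = [70, 75, 82, 68, 80]  # hypothetical task scores, method B

print(f"P(A outperforms B) = {dominance_probability(method_a, method_b):.2f}")
```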

c. Scales for Explainable Artificial Intelligence: A set of scales for assessing how well a computational system is explained to the decision makers who rely on Artificial Intelligence, so that they may judge the reasonableness of that system.

  1. Trust: A series of active exploration measures aimed at maintaining an appropriate context-dependent expectation, so that users know whether, when, and why to trust or mistrust an XAI system.

  2. Explanation Satisfaction: The degree to which users feel that they understand the AI system or process being explained to them.

  3. Explanation Goodness: A checklist, using factors such as clarity and precision, that researchers can use either to design goodness into the explanations their XAI system generates or to evaluate a priori the goodness of the explanations generated.

  4. Quality of Mental Models: A measure that assesses the “goodness” (i.e., correctness, comprehensiveness, coherence, usefulness) of a user’s mental model of an XAI system by calculating the percentage of concepts, relations, and propositions in the user’s explanation that are also in an expert’s explanation. (A minimal sketch follows the reference below.)

  • Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608.
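Item 4 reduces to a simple overlap calculation. A minimal sketch, assuming the explanations have already been coded into normalized (concept, relation, concept) triples; that coding step, and the example triples, are assumptions here.

```python
# Score a user's mental model against an expert's explanation.
def mental_model_score(user_propositions, expert_propositions):
    """Fraction of the user's propositions that also appear in the expert's."""
    user = {tuple(p) for p in user_propositions}
    expert = {tuple(p) for p in expert_propositions}
    return len(user & expert) / len(user) if user else 0.0

user = [("classifier", "uses", "pixel features"),
        ("classifier", "fails on", "occluded objects")]
expert = [("classifier", "uses", "pixel features"),
          ("classifier", "fails on", "occluded objects"),
          ("classifier", "ignores", "context")]

print(f"Overlap with expert model: {mental_model_score(user, expert):.0%}")
```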

Conceptual Descriptions

a. RPD model: A decision-making model that explains how people use situation assessment to generate plausible courses of action, while simultaneously using mental simulation to evaluate generated courses of action.

  • Klein, G., Calderwood, R., & Clinton-Cirocco, A. (2010). Rapid decision making on the fire ground: The original study plus a postscript. Journal of Cognitive Engineering and Decision Making, 4, 186-209. 

b. Computational model of RPD: A combination of the conceptual RPD model with computer programming to simulate realistic expert decision making.

  • Hutton, R. J., Warwick, W., Stanard, T., McDermott, P. L., & McIlwaine, S. (2001, October). Computational model of recognition-primed decisions (RPD): Improving realism in computer-generated forces (CGF). In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 45, No. 26, pp. 1833-1837). Los Angeles, CA: SAGE Publications.
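As a rough illustration of what such a simulation involves, here is a minimal sketch of the recognition-primed cycle: match cues to a stored pattern, retrieve the typical course of action, and screen it by mental simulation. The pattern store and the simulate() stub are illustrative assumptions, not the Hutton et al. implementation; a fuller model would modify a rejected action rather than discard it.

```python
# Toy recognition-primed decision loop for a firefighting-style domain.
PATTERNS = {
    frozenset({"smoke", "heat at door"}): "ventilate before entry",
    frozenset({"smoke", "victims reported"}): "search and rescue",
}

def simulate(action, cues):
    """Stub mental simulation: reject actions that conflict with a cue."""
    return not (action == "search and rescue" and "structural collapse" in cues)

def rpd_decide(cues):
    for pattern, action in PATTERNS.items():
        if pattern <= cues:                  # situation recognized as typical
            if simulate(action, cues):       # evaluate by mental simulation
                return action                # workable: act on it
    return "gather more information"         # no workable match: keep assessing

print(rpd_decide({"smoke", "heat at door"}))
print(rpd_decide({"smoke", "victims reported", "structural collapse"}))
```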

c. Data/Frame model of sensemaking: A description of the ever-changing and interconnected relationship between data (the available information) and cognitive frames (explanatory structures that define entities by explaining their relationships to other entities).

  • Klein, G., Phillips, J. K., Rall, E. L., & Peluso, D. A. (2007, May). A data-frame theory of sensemaking. In Expertise Out of Context: Proceedings of the Sixth International Conference on Naturalistic Decision Making (pp. 113-155). New York, NY: Lawrence Erlbaum Associates.
  • Moore, D. T., & Hoffman, R. R. (2011). Data-frame theory of sensemaking as a best model for intelligence. American Intelligence Journal, 29(2), 145-158.

d. Computational model of D/F: Use of the Data/Frame model of sensemaking to create computational simulations for machine systems that support human decision makers, drawing on key characteristics of computational cognition, including ontology representation, network theory, and reasoning processes with recursive feedback. (A minimal sketch follows the references below.)

  • Kodagoda, N., Pontis, S., Simmie, D., Attfield, S., Wong, B. W., Blandford, A., & Hankin, C. (2017). Using machine learning to infer reasoning provenance from user interaction log data: Based on the data/frame theory of sensemaking. Journal of Cognitive Engineering and Decision Making, 11(1), 23-41.
  • Codjoe, E. A. A., Ntuen, C., & Chenou, J. (2010). A case study in sensemaking using the Data/Frame Model. In IIE Annual Conference Proceedings (p. 1). Institute of Industrial and Systems Engineers (IISE).
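In the same spirit as the RPD sketch above, here is a minimal sketch of a single data/frame step: data that fits the current frame elaborates it, while contradictory data triggers a search for a better frame. The frames and the fit test are illustrative assumptions, not the cited implementations.

```python
# Toy data/frame step: elaborate the current frame or reframe.
FRAMES = {
    "equipment fault": {"alarm", "pressure drop"},   # frame -> expected data
    "sensor error": {"alarm", "stable pressure"},
}

def sensemaking_step(frame_name, observed):
    surprises = observed - FRAMES[frame_name]
    if not surprises:
        return frame_name, "frame elaborated"        # data fits: elaborate
    # Data questions the frame: look for a frame that explains all of it.
    for name, expected in FRAMES.items():
        if observed <= expected:
            return name, "reframed"                  # recursive feedback point
    return frame_name, "frame questioned"            # no better frame found

print(sensemaking_step("equipment fault", {"alarm", "stable pressure"}))
```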

e. Management-by-Discovery/Flexecution: Intertwined concepts, both holding that good managers change goals as they go, based on discoveries, learning more about those goals even as they pursue them.

  • Klein, G. (2007). Flexecution as a paradigm for replanning, part 1. IEEE Intelligent Systems, 22(5), 79-83.
  • Klein, G. A. (2011). Streetlights and shadows: Searching for the keys to adaptive decision making. MIT Press.

f. Triple Path Model of Insight: This model describes three paths that can lead people to having insights: contradictions, connections, and creative desperation.

  • Klein, G. (2013). Seeing what others don’t: The remarkable ways we gain insights. PublicAffairs.

g. Naturalistic Models of Explaining: Three models of how explanations are formed when a person tries to explain the reasons for decisions, actions, or the workings of a device to another person: local explaining, global explaining, and self-explaining.

  • Klein, G., Hoffman, R., & Mueller, S. (2019). Naturalistic psychological model of explanatory reasoning: How people explain things to others and to themselves. International Conference on Naturalistic Decision Making, San Francisco, CA.

h. Problem Detection Model: A model of sensemaking describing the process by which a person becomes aware of a problem, an unexpected or undesirable direction in a situation, and begins to reconceptualize that situation.

  • Klein, G., Pliske, R. M., Crandall, B., & Woods, D. (2005). Problem detection. Cognition, Technology & Work, 7, 14-28.
