Sam Hepenstal, firstname.lastname@example.org
This case study concerns the high-risk, high-consequence domain of Defence and Security. The CTA explores expert reasoning and hypothesis generation during investigations performed by criminal intelligence analysts.
Generic description of sponsoring organization or customer:
A national defense agency sponsored the research presented in this case study.
Cognitive Task Analysis Method(s):
This case study took a cognitive engineering approach to the design and evaluation of a transparent conversational AI system for intelligence analysis, where transparency is defined by the Algorithmic Transparency Framework (Hepenstal et al., 2019a).
Multiple applications of CTA are demonstrated. CTA was first performed with 4 analysts following the Critical Decision Method (CDM) (Klein et al., 1989; Wong, 2004) to elicit analyst expertise, cues, goals and decision-making across a memorable investigation in which they were involved from start to end (Hepenstal et al., 2019b). A further study was then conducted with 4 intelligence analysts from a national-level crime agency, who were taken through mock investigation tasks with examples of conversational system responses and asked a series of questions about each (Hepenstal et al., 2020a). The findings from these two studies informed the design and creation of Pan, a prototype conversational system that aimed to provide transparency of system processes (Hepenstal et al., 2020b; Hepenstal et al., 2021a).
Finally, a CTA approach was used to evaluate the system's provision of transparency with 10 expert criminal intelligence analysts. Each analyst performed an investigation with Pan and was asked to describe their thinking and reasoning throughout. In this evaluation study, half of the analysts were provided with transparency and the other half were not, to assess the impact of our design.
Number of Participants:
This case study covers three separate CTA studies used to design, build and evaluate a transparent conversational information retrieval system.
The first CTA study included 4 intelligence analysts, all of whom had more than 4 years of operational experience.
The second study included 4 analysts (different from those in the first study), each with at least 10 years of operational experience.
For the final study, the investigation exercise required expertise in criminal network analysis, so each of the 10 participants had at least 3 years of full-time experience in a role involving network analysis. Participants were split so that 5 had access to the transparency information and 5 did not.
Total Number = 18;
Total Number of Proficient Performers = 18;
Method for determining proficiency: length of relevant operational experience.
The research described by this case study took place over a three-year period.
In this case study CTA was used for various purposes. Firstly, it provided an understanding of the way that experts retrieved information and conducted lines of inquiry in investigations. This initial study informed the design of the intent architecture of a conversational AI system (Hepenstal et al. 2019b, 2020a, 2020b, 2021a). It also provided the foundations for a system that learned to predict and explore semantically relevant lines of inquiry (Hepenstal et al. 2020c).
A crucial feature of these systems was that they provided transparency through information granules aligned with the decision-making of experts (Hepenstal et al., 2021b). A second CTA was applied to determine which information the user interface should present to support system transparency and analyst decision-making, creating a framework that describes what users need to understand about different components of the system (Hepenstal et al., 2020a; Hepenstal & McNeish, 2020).
The third CTA provided an evaluation of the impact and value of our approach to delivering transparency of system intent. The data captured from the investigation exercises helped us to identify analyst insight-seeking behaviors, strategies and rules, and a prototype system was built to demonstrate the benefits of recommending lines of inquiry (Hepenstal et al. 2021c, 2022).
Demonstration of value:
The CTA studies aimed to inform the design of system transparency with an understanding of the expertise used by analysts to make information retrieval decisions.
In our evaluation, we found both qualitative and quantitative evidence that the design of the system provided effective transparency across all three levels of the Situation-Awareness Transparency model (Chen et al. 2014).
As a result, analysts were better able to construct explanatory hypotheses and direct their inquiries. All analysts with transparency completed their investigations in less time than the fastest analyst without transparency (Hepenstal et al., 2023).
Analysts without transparency tried to make sense of the system's processes, which arguably placed a cognitive burden on them because they had to continually reassess their interpretation with no means of verifying its accuracy. Those with transparency understood the system's intent and could recognize and interpret its goals and constraints. Furthermore, analysts did not appreciate the impact of transparency on their ability to reason about the data until they had experienced it (Hepenstal et al., 2021a).
The conversational system has been deployed for experimentation in numerous domains across UK Government and has acquired over £500k in development funding. We have gathered testimonials from customers, both end users and data scientists, that highlight the benefits of using the system and the necessity for system transparency in high-risk and high-consequence domains.
References:
Chen, J., Procci, K., Boyce, M., Wright, J., Garcia, A. & Barnes, M. (2014). Situation awareness-based agent transparency. US Army Research Laboratory.
Hepenstal, S., Wong, B. L. W., Zhang, L. & Kodagoda, N. (2019b). How analysts think: A preliminary study of human needs and demands for AI-based conversational agents. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 63(1), 178–182. https://doi.org/10.1177/1071181319631333
Hepenstal, S., Zhang, L., Kodagoda, N. & Wong, B. L. W. (2020a). What are you thinking? Explaining conversation agent responses for criminal investigations. In ExSS-ATEC@IUI.
Hepenstal, S., Zhang, L., Kodagoda, N. & Wong, B. L. W. (2020b). Pan: Conversational agent for criminal investigations. In Proceedings of the 25th International Conference on Intelligent User Interfaces Companion (pp. 134–135). New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3379336.3381463
Hepenstal, S., Zhang, L., Kodagoda, N. & Wong, B. L. W. (2020c). Providing a foundation for interpretable autonomous agents through elicitation and modeling of criminal investigation pathways. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 64(1).
Hepenstal, S. & McNeish, D. (2020). Explainable artificial intelligence: What do you need to know? In D. D. Schmorrow & C. M. Fidopiastis (Eds.), Augmented Cognition. Theoretical and Technological Approaches. HCII 2020. Lecture Notes in Computer Science, vol. 12196. Cham: Springer.
Hepenstal, S., Zhang, L., Kodagoda, N. & Wong, B. L. W. (2021a). Developing conversational agents for use in criminal investigations. ACM Transactions on Interactive Intelligent Systems, 11(3–4). https://doi.org/10.1145/3444369
Hepenstal, S., Zhang, L., Kodagoda, N. & Wong, B. L. W. (2021b). A granular computing
approach to provide transparency of intelligent systems for criminal investigations. In W.
Pedrycz & S.-M. Chen (Eds.), Interpretable artificial intelligence: A perspective of granular
computing (pp. 333–367). Cham: Springer International Publishing.
Hepenstal, S., Zhang, L. & Wong, B. L. W. (2021c). Automated identification of insight seeking behaviours, strategies and rules: A preliminary study. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 65(1), 1269–1273. https://doi.org/10.1177/1071181321651348
Hepenstal, S., Kodagoda, N., Zhang, L., Paudyal, P. & Wong, B. L. W. (2019a). Algorithmic transparency of conversational agents. In C. Trattner, D. Parra & N. Riche (Eds.), IUI 2019 Workshop on Intelligent User Interfaces for Algorithmic Transparency in Emerging Technologies (pp. 17–20). Los Angeles, CA, USA.
Hepenstal, S., Zhang, L. & Wong, B. L. W. (2022). Designing a system to mimic expert cognition: Prototype implementation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 66(1), 2057–2061. https://doi.org/10.1177/1071181322661092
Hepenstal, S., Zhang, L. & Wong, B. L. W. (2023). The impact of system transparency on
Klein, G., Calderwood, R. & MacGregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19(3), 462–472.
Wong, B. L. W. (2004). Data analysis for the critical decision method. In D. Diaper & N. A. Stanton (Eds.), The Handbook of Task Analysis for Human-Computer Interaction. Mahwah, NJ: Lawrence Erlbaum Associates.