Browsing by Author "Tomsett, R."
Conference paper | Metadata only
Interpretability of deep learning models: a survey of results (IEEE, 2018-06-26)
Chakraborty, S.; Tomsett, R.; Raghavendra, R.; Harborne, D.; Alzantot, M.; Cerutti, F.; Srivastava, M.; Preece, A.; Julier, S.; Rao, R. M.; Kelley, T. D.; Braines, D.; Şensoy, Murat; Willis, C. J.; Gurram, P. (Computer Science)

Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process - incorporating these networks into mission-critical processes such as medical diagnosis, planning and control - requires a level of trust to be associated with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the working of the machine. In other words, the neural network should provide human-understandable justifications for its output, leading to insights about its inner workings. We call such models interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters, or in terms of input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability, and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.

Conference paper | Metadata only
Uncertainty-aware situational understanding (SPIE, 2019)
Tomsett, R.; Kaplan, L.; Cerutti, F.; Sullivan, P.; Vente, D.; Vilamala, M. R.; Kimmig, A.; Preece, A.; Şensoy, Murat; Pham, T. (Computer Science)

Situational understanding is impossible without causal reasoning and reasoning under and about uncertainty, i.e., probabilistic reasoning and reasoning about the confidence in the uncertainty assessment. We therefore consider the case of subjective (uncertain) Bayesian networks. In previous work we noted that when observations are out of the ordinary, confidence decreases because the relevant training data (the effective instantiations used to determine the probabilities of the unobserved variables given the observed variables) is significantly smaller than the full training data (the total number of instantiations). It is therefore of primary importance for the ultimate goal of situational understanding to be able to efficiently determine the reasoning paths that lead to low confidence whenever and wherever it occurs: this can guide specific data collection exercises to reduce such uncertainty. We propose three methods to this end, and we evaluate them on the basis of a case study developed in collaboration with professional intelligence analysts.
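As a rough illustration of the idea described in this abstract (not the method proposed in the paper), the Python sketch below shows how a subjective-logic-style uncertainty mass shrinks as the number of training rows matching the observed evidence (the effective instantiations) grows. The function name, the toy data, and the prior weight W are assumptions made for this example; the paper's subjective Bayesian network machinery and its three proposed methods are not reproduced here.

```python
# Illustrative sketch only: how the number of training rows that match the
# observed evidence (the "effective instantiations") drives the confidence
# of an estimated conditional probability, in the spirit of subjective
# (uncertain) Bayesian networks. Names and the prior weight are assumptions.

def conditional_opinion(data, evidence, query_var, query_val, prior_weight=2.0):
    """Estimate P(query_var=query_val | evidence) together with an
    uncertainty mass based on how many rows match the evidence."""
    # Rows consistent with the observed evidence.
    matching = [row for row in data
                if all(row.get(k) == v for k, v in evidence.items())]
    n_eff = len(matching)                      # effective instantiations
    n_pos = sum(row.get(query_var) == query_val for row in matching)

    # Uncertainty shrinks as the effective evidence grows: u = W / (n_eff + W).
    uncertainty = prior_weight / (n_eff + prior_weight)
    # Expected probability with a uniform (0.5) prior base rate.
    prob = (n_pos + prior_weight * 0.5) / (n_eff + prior_weight)
    return prob, uncertainty, n_eff

# Toy data: ordinary observations are frequent, out-of-the-ordinary ones rare.
data = ([{"weather": "dry", "accident": False}] * 90
        + [{"weather": "icy", "accident": True}] * 3
        + [{"weather": "icy", "accident": False}] * 2)

for obs in ({"weather": "dry"}, {"weather": "icy"}):
    p, u, n = conditional_opinion(data, obs, "accident", True)
    print(f"evidence={obs}: P(accident)~{p:.2f}, uncertainty={u:.2f}, "
          f"effective instantiations={n}")
```

Running the sketch, the ordinary observation ("dry") matches many rows and yields a small uncertainty mass, while the out-of-the-ordinary one ("icy") matches only a few rows and therefore carries a much larger uncertainty mass, mirroring the confidence drop described above.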