Browsing by Author "Julier, S."
Now showing 1 - 3 of 3
Interpretability of deep learning models: a survey of results
Conference paper (metadata only), IEEE, 2018-06-26
Authors: Chakraborty, S.; Tomsett, R.; Raghavendra, R.; Harborne, D.; Alzantot, M.; Cerutti, F.; Srivastava, M.; Preece, A.; Julier, S.; Rao, R. M.; Kelley, T. D.; Braines, D.; Şensoy, Murat; Willis, C. J.; Gurram, P. (Computer Science)
Abstract: Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process - incorporating these networks into mission-critical processes such as medical diagnosis, planning and control - requires a level of trust to be associated with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the working of the machine. In other words, the neural network should provide human-understandable justifications for its output, leading to insights about its inner workings. We call such models interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters or in terms of the input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.

Misclassification risk and uncertainty quantification in deep classifiers
Conference paper (metadata only), IEEE, 2021
Authors: Şensoy, Murat; Saleki, Maryam; Julier, S.; Aydoğan, Reyhan; Reid, J. (Computer Science)
Abstract: In this paper, we propose risk-calibrated evidential deep classifiers to reduce the costs associated with classification errors. We use two main approaches. The first is to develop methods to quantify the uncertainty of a classifier's predictions and reduce the likelihood of acting on erroneous predictions. The second is a novel way to train the classifier such that erroneous classifications are biased towards less risky categories. We combine these two approaches in a principled way. While doing this, we extend evidential deep learning with pignistic probabilities, which are used to quantify the uncertainty of classification predictions and to model rational decision making under uncertainty. We evaluate the performance of our approach on several image classification tasks. We demonstrate that our approach allows us to (i) incorporate misclassification costs while training deep classifiers, (ii) accurately quantify the uncertainty of classification predictions, and (iii) simultaneously learn how to make classification decisions that minimize the expected cost of classification errors.

Not all mistakes are equal
Conference paper (metadata only), The ACM Digital Library, 2020
Authors: Şensoy, M.; Saleki, Maryam; Julier, S.; Aydoğan, Reyhan; Reid, J. (Computer Science)
Abstract: In many tasks, classifiers play a fundamental role in the way an agent behaves. Most rational agents collect sensor data from the environment, classify it, and act based on that classification. Recently, deep neural networks (DNNs) have become the dominant approach to developing classifiers due to their excellent performance. When training and evaluating DNNs, it is normally assumed that the costs of all misclassification errors are equal. However, this is unlikely to be true in practice. Incorrect classification predictions can cause an agent to take inappropriate actions. The costs of these actions can be asymmetric, vary from agent to agent, and depend on context. In this paper, we discuss the importance of considering risk and uncertainty quantification together to reduce the costs that agents incur from misclassifications made by deep classifiers.
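As a rough illustration of the ideas in the last two abstracts, the sketch below (not the authors' code) shows how pignistic probabilities derived from an evidential classifier's per-class evidence could feed a minimum-expected-cost decision with an asymmetric cost matrix. The function names, evidence values, and costs are hypothetical; the bookkeeping follows the standard evidential-deep-learning convention of Dirichlet parameters alpha_k = e_k + 1.

```python
# Hedged sketch, assuming a standard evidential-deep-learning evidence head.
import numpy as np

def pignistic_probs(evidence, base_rates=None):
    """Map non-negative per-class evidence to pignistic probabilities.

    With Dirichlet strength S = sum(e) + K, belief b_k = e_k / S and
    uncertainty mass u = K / S; the pignistic probability redistributes u
    according to the base rates: p_k = b_k + a_k * u (a_k = 1/K by default).
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.shape[-1]
    if base_rates is None:
        base_rates = np.full(K, 1.0 / K)
    S = evidence.sum(axis=-1, keepdims=True) + K   # Dirichlet strength
    belief = evidence / S
    uncertainty = K / S
    return belief + base_rates * uncertainty       # equals alpha / S for uniform base rates

def min_risk_decision(probs, cost_matrix):
    """Pick the class with the lowest expected misclassification cost.

    cost_matrix[k, j] is the cost of predicting class j when the truth is k.
    """
    expected_cost = probs @ cost_matrix
    return expected_cost.argmin(axis=-1), expected_cost

# Toy usage: mistakes on true class 1 are five times costlier than others.
evidence = np.array([4.0, 3.0, 0.5])               # e.g. from a ReLU evidence head
costs = np.array([[0.0, 1.0, 1.0],
                  [5.0, 0.0, 5.0],
                  [1.0, 1.0, 0.0]])
probs = pignistic_probs(evidence)
decision, risk = min_risk_decision(probs, costs)
```

In this toy setting the pignistic probabilities favour class 0, yet the decision rule selects class 1 because missing a true class-1 input is far more expensive, which is exactly the "not all mistakes are equal" point made in the third record.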