Browsing by Author "Cerutti, F."
Now showing 1 - 8 of 8
Article Publication, Metadata only
Handling epistemic and aleatory uncertainties in probabilistic circuits (Springer, 2022-04)
Cerutti, F.; Kaplan, L. M.; Kimmig, A.; Şensoy, Murat; Computer Science; ŞENSOY, Murat
When collaborating with an AI system, we need to assess when to trust its recommendations. If we mistakenly trust it in regions where it is likely to err, catastrophic failures may occur; hence the need for Bayesian approaches to probabilistic reasoning that determine the confidence (or epistemic uncertainty) in the probabilities in light of the training data. We propose an approach to Bayesian inference of posterior distributions that overcomes the independence assumption behind most approaches to a large class of probabilistic reasoning problems, including Bayesian networks as well as several instances of probabilistic logic. We provide an algorithm for Bayesian inference of posterior distributions from sparse, albeit complete, observations, and for deriving inferences and their confidences while keeping track of the dependencies between variables as they are manipulated within the unifying computational formalism provided by probabilistic circuits. Each leaf of such circuits is labelled with a beta-distributed random variable, which gives us an elegant framework for representing uncertain probabilities. We achieve better estimation of epistemic uncertainty than state-of-the-art approaches, including highly engineered ones, while handling general circuits with only a modest increase in computational effort compared to using point probabilities.
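A minimal sketch of the idea of Beta-labelled leaves described in the abstract above (a toy illustration, not the authors' algorithm: the function names are invented here, and the product node below uses the naive independence assumption that the paper is explicitly designed to avoid):

```python
# Toy sketch (not the authors' algorithm): a circuit leaf holds a Beta
# posterior over an unknown probability, learned from sparse counts; a
# product node is approximated by moment matching under a naive
# independence assumption (the paper instead tracks dependencies).
from scipy.stats import beta

def beta_leaf(successes, failures, prior=(1.0, 1.0)):
    """Beta posterior for a Bernoulli parameter given observed counts."""
    a0, b0 = prior
    return beta(a0 + successes, b0 + failures)

def product_node(d1, d2):
    """Mean and variance of the product of two independent Beta variables."""
    m1, v1 = d1.mean(), d1.var()
    m2, v2 = d2.mean(), d2.var()
    mean = m1 * m2
    var = (v1 + m1 ** 2) * (v2 + m2 ** 2) - mean ** 2
    return mean, var

sparse = beta_leaf(3, 1)    # 4 observations: same mean as below, wide posterior
dense = beta_leaf(30, 10)   # 40 observations: much narrower posterior
print(product_node(sparse, dense))  # point estimate plus its epistemic spread
```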
Conference paper Publication, Metadata only
Interpretability of deep learning models: a survey of results (IEEE, 2018-06-26)
Chakraborty, S.; Tomsett, R.; Raghavendra, R.; Harborne, D.; Alzantot, M.; Cerutti, F.; Srivastava, M.; Preece, A.; Julier, S.; Rao, R. M.; Kelley, T. D.; Braines, D.; Şensoy, Murat; Willis, C. J.; Gurram, P.; Computer Science; ŞENSOY, Murat
Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks, including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process, incorporating these networks into mission-critical processes such as medical diagnosis, planning, and control, requires a level of trust in the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the workings of the machine. In other words, the neural network should provide human-understandable justifications for its output, leading to insights about its inner workings. We call such models interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters or in terms of the input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.

Conference paper Publication, Metadata only
Learning and reasoning in complex coalition information environments: a critical analysis (IEEE, 2018-09-05)
Cerutti, F.; Alzantot, M.; Xing, T.; Harborne, D.; Bakdash, J. Z.; Braines, D.; Chakraborty, S.; Kaplan, L.; Kimmig, A.; Preece, A.; Raghavendra, R.; Şensoy, Murat; Srivastava, M.; Computer Science; ŞENSOY, Murat
In this paper we provide a critical analysis, with metrics, that will inform guidelines for designing distributed systems for Collective Situational Understanding (CSU). CSU requires both collective insight, i.e., accurate and deep understanding of a situation derived from uncertain and often sparse data, and collective foresight, i.e., the ability to predict what will happen in the future. When it comes to complex scenarios, the need for a distributed CSU naturally emerges, as a single monolithic approach is not only infeasible but also undesirable. We therefore propose a principled, critical analysis of AI techniques that can support specific tasks for CSU in order to derive guidelines for designing distributed systems for CSU.

Conference paper Publication, Metadata only
Privacy enforcement through policy extension (IEEE, 2016)
Arunkumar, S.; Srivatsa, M.; Soyluoglu, Berker; Şensoy, Murat; Cerutti, F.; Computer Science; ŞENSOY, Murat; Soyluoglu, Berker
Successful coalition operations require contributions from the coalition partners, which might have hidden goals and desiderata in addition to the shared coalition goals. Therefore, there is an inevitable risk-utility trade-off for information producers due to the need-to-know vs. need-to-hide tension, which must take into account the trustworthiness of the other coalition partners. A balance is often achieved by deliberate obfuscation of the shared information. In this paper, we show how to integrate obfuscation capabilities within the current OASIS standard for access control policies, namely XACML.

Conference paper Publication, Open Access
Probabilistic logic programming with beta-distributed random variables (Association for the Advancement of Artificial Intelligence, 2019-07-17)
Cerutti, F.; Kaplan, L.; Kimmig, A.; Şensoy, Murat; Computer Science; ŞENSOY, Murat
We enable aProbLog, a probabilistic logic programming approach, to reason in the presence of uncertain probabilities represented as Beta-distributed random variables. We achieve the same performance as state-of-the-art algorithms for highly specified and engineered domains, while maintaining the flexibility offered by aProbLog in handling complex relational domains. Our motivation is that faithfully capturing the distribution of probabilities is necessary to compute an expected utility for effective decision making under uncertainty; unfortunately, these probability distributions can be highly uncertain due to sparse data. To understand and accurately manipulate such probability distributions we need a well-defined theoretical framework, which is provided by the Beta distribution: it specifies a distribution over the possible values of a probability when the exact value is unknown.
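The decision-making motivation in the abstract above can be illustrated with a small Monte Carlo sketch (a hypothetical toy, not the aProbLog machinery; the utility values and counts are made up): when a probability is only known as a Beta random variable, the expected utility of a decision becomes a distribution whose spread reflects how sparse the data is.

```python
# Toy illustration (not aProbLog): propagate an uncertain probability,
# represented as a Beta random variable, into a distribution over the
# expected utility of a decision.
import numpy as np

rng = np.random.default_rng(0)

# p(success) estimated from only 3 successes and 1 failure
p_samples = rng.beta(1 + 3, 1 + 1, size=10_000)

utility_success, utility_failure = 100.0, -40.0
expected_utility = p_samples * utility_success + (1 - p_samples) * utility_failure

# Wide standard deviation signals that the decision rests on sparse data.
print(expected_utility.mean(), expected_utility.std())
```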
Conference paper Publication, Metadata only
Subjective Bayesian networks and human-in-the-loop situational understanding (Springer, 2018-03-21)
Braines, D.; Thomas, A.; Kaplan, L.; Şensoy, Murat; Bakdash, J. Z.; Ivanovska, M.; Preece, A.; Cerutti, F.; Computer Science; ŞENSOY, Murat
In this paper we present a methodology to exploit human-machine coalitions for situational understanding. Situational understanding refers to the ability to relate relevant information and form logical conclusions, as well as to identify gaps in information. This process of comprehending the meaning of information requires the ability to reason inductively, for which we exploit the machines' ability to 'learn' from data. However, important phenomena are often rare in occurrence and carry high degrees of uncertainty, severely limiting the availability of instance data for training and hence the applicability of many machine learning approaches. Therefore, we present the benefits of Subjective Bayesian Networks, i.e., Bayesian Networks with imprecise probabilities, for situational understanding, and the role of conversational interfaces in supporting decision makers as situational understanding evolves.

Conference paper Publication, Metadata only
Uncertainty-aware deep classifiers using generative models (Association for the Advancement of Artificial Intelligence, 2020)
Şensoy, Murat; Kaplan, L.; Cerutti, F.; Saleki, Maryam; Computer Science; ŞENSOY, Murat; Saleki, Maryam
Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions. Some recent approaches quantify classification uncertainty directly by training the model to output high uncertainty for data samples close to class boundaries or from outside the training distribution. These approaches use an auxiliary data set during training to represent out-of-distribution samples. However, the selection or creation of such an auxiliary data set is non-trivial, especially for high-dimensional data such as images. In this work we develop a novel neural network model that is able to express both aleatoric and epistemic uncertainty, distinguishing decision-boundary and out-of-distribution regions of the feature space. To this end, variational autoencoders and generative adversarial networks are incorporated to automatically generate out-of-distribution exemplars for training. Through extensive analysis, we demonstrate that the proposed approach provides better estimates of uncertainty for in-distribution, out-of-distribution, and adversarial samples on well-known data sets than state-of-the-art approaches, including recent Bayesian approaches for neural networks and anomaly detection methods.
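As a rough illustration of separating the two kinds of uncertainty mentioned above (a generic evidential, subjective-logic-style toy, not the model proposed in the paper; the evidence vectors below are invented):

```python
# Illustration only (not the paper's network): disentangling class conflict
# from lack of evidence when a classifier outputs per-class evidence that
# parameterizes a Dirichlet distribution.
import numpy as np

def uncertainties(evidence):
    """evidence: non-negative per-class evidence for a single input."""
    alpha = np.asarray(evidence, dtype=float) + 1.0   # Dirichlet parameters
    strength = alpha.sum()
    probs = alpha / strength                          # expected class probabilities
    conflict = -(probs * np.log(probs)).sum()         # entropy of expected prediction
    vacuity = len(alpha) / strength                   # low total evidence => high value
    return probs, conflict, vacuity

print(uncertainties([40.0, 40.0, 0.0]))  # near a decision boundary: high conflict, low vacuity
print(uncertainties([0.1, 0.1, 0.1]))    # almost no evidence (e.g., out-of-distribution): high vacuity
```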
Conference paper Publication, Metadata only
Uncertainty-aware situational understanding (SPIE, 2019)
Tomsett, R.; Kaplan, L.; Cerutti, F.; Sullivan, P.; Vente, D.; Vilamala, M. R.; Kimmig, A.; Preece, A.; Şensoy, Murat; Computer Science; Pham, T.; ŞENSOY, Murat
Situational understanding is impossible without causal reasoning and reasoning under and about uncertainty, i.e., probabilistic reasoning and reasoning about the confidence in the uncertainty assessment. We therefore consider the case of subjective (uncertain) Bayesian networks. In previous work we noticed that when observations are out of the ordinary, confidence decreases, because the relevant training data (the effective instantiations used to determine the probabilities of unobserved variables on the basis of the observed ones) is significantly smaller than the total number of instantiations in the training data. It is therefore of primary importance for the ultimate goal of situational understanding to be able to efficiently determine the reasoning paths that lead to low confidence, whenever and wherever it occurs: this can guide specific data collection exercises to reduce that uncertainty. We propose three methods to this end and evaluate them on a case study developed in collaboration with professional intelligence analysts.
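The notion of effective instantiations above can be made concrete with a small sketch (a hypothetical example with invented variable names, not the authors' three methods): the confidence in a conditional probability estimate depends on how many training rows actually match the observed evidence, not on the overall size of the training set.

```python
# Hypothetical sketch: confidence in P(target | evidence) depends on the
# number of "effective instantiations", i.e. training rows matching the
# observed evidence, rather than on the total size of the training set.
from scipy.stats import beta

def conditional_confidence(rows, evidence, target):
    """rows: list of dicts; evidence: dict of observed variable values."""
    matching = [r for r in rows if all(r[k] == v for k, v in evidence.items())]
    hits = sum(r[target] for r in matching)
    posterior = beta(1 + hits, 1 + len(matching) - hits)
    return posterior.mean(), posterior.std(), len(matching)

# Invented data: "storm" is rarely observed, so conditioning on it leaves
# only a handful of effective instantiations.
data = [{"storm": s, "outage": o} for s, o in
        [(0, 0)] * 80 + [(0, 1)] * 10 + [(1, 1)] * 3 + [(1, 0)] * 1]

print(conditional_confidence(data, {"storm": 0}, "outage"))  # 90 effective rows, tight posterior
print(conditional_confidence(data, {"storm": 1}, "outage"))  # 4 effective rows, wide posterior
```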