Browsing by Author "Chakraborty, S."
Now showing 1 - 5 of 5
Article (Metadata only)
Inference management, trust and obfuscation principles for quality of information in emerging pervasive environments (Elsevier, 2014-04)
Authors: Bisdikian, C.; Gibson, C.; Chakraborty, S.; Srivastava, M. B.; Şensoy, Murat; Norman, T. J. (Computer Science)
Abstract: The emergence of large-scale, distributed, sensor-enabled, machine-to-machine pervasive applications necessitates engaging information providers on demand to collect information, of varying quality levels, that is used to infer the state of the world and to decide on responses. In these highly fluid operational environments, involving information providers and consumers of varying degrees of trust and intent, information transformations such as obfuscation are used to manage the inferences that could be drawn, protecting providers from misuse of the information they share while still providing benefits to their information consumers. In this paper, we develop initial principles for inference management and the role that trust and obfuscation play in it within the context of this emerging breed of applications. We start by extending the definitions of trust and obfuscation into this emerging application space. We then highlight their role as we move from tightly coupled to loosely coupled sensory-inference systems and describe how quality, value and risk of information relate in collaborative and adversarial systems. Next, we discuss quality distortion, illustrated through a human activity recognition sensory system. Finally, we present a system architecture to support an inference-firewall capability in a publish/subscribe system for sensory information, and conclude with a discussion and closing remarks.

Conference paper (Metadata only)
Interpretability of deep learning models: a survey of results (IEEE, 2018-06-26)
Authors: Chakraborty, S.; Tomsett, R.; Raghavendra, R.; Harborne, D.; Alzantot, M.; Cerutti, F.; Srivastava, M.; Preece, A.; Julier, S.; Rao, R. M.; Kelley, T. D.; Braines, D.; Şensoy, Murat; Willis, C. J.; Gurram, P. (Computer Science)
Abstract: Deep neural networks have achieved near-human accuracy in various classification and prediction tasks involving image, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process, incorporating these networks into mission-critical processes such as medical diagnosis, planning and control, requires a level of trust in the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the workings of the machine. In other words, the neural network should provide human-understandable justifications for its output, leading to insights about its inner workings. We call such models interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of low-level network parameters or in terms of the input features used by the model.
In this paper, we outline some of the dimensions that are useful for model interpretability and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.

Conference paper (Metadata only)
Learning and reasoning in complex coalition information environments: a critical analysis (IEEE, 2018-09-05)
Authors: Cerutti, F.; Alzantot, M.; Xing, T.; Harborne, D.; Bakdash, J. Z.; Braines, D.; Chakraborty, S.; Kaplan, L.; Kimmig, A.; Preece, A.; Raghavendra, R.; Şensoy, Murat; Srivastava, M. (Computer Science)
Abstract: In this paper we provide a critical analysis, with metrics, that will inform guidelines for designing distributed systems for Collective Situational Understanding (CSU). CSU requires both collective insight, i.e., accurate and deep understanding of a situation derived from uncertain and often sparse data, and collective foresight, i.e., the ability to predict what will happen in the future. In complex scenarios the need for distributed CSU naturally emerges, since a single monolithic approach is not only unfeasible but also undesirable. We therefore propose a principled, critical analysis of AI techniques that can support specific CSU tasks, from which we derive guidelines for designing distributed systems for CSU.

Article (Open Access)
Partial observable update for subjective logic and its application for trust estimation (Elsevier, 2015)
Authors: Kaplan, L.; Şensoy, Murat; Chakraborty, S.; de Mel, G. (Computer Science)
Abstract: Subjective Logic (SL) is a type of probabilistic logic suitable for reasoning about situations with uncertainty and incomplete knowledge. In recent years, SL has drawn a significant amount of attention from the multi-agent systems community because it connects beliefs and uncertainty in propositions to a rigorous statistical characterization via Dirichlet distributions. However, one serious limitation of SL is that belief updates are performed only from completely observable evidence. This work extends SL to incorporate belief updates from partially observable evidence. Normally, a belief update in SL presumes that the current evidence for a proposition points to exactly one of its mutually exclusive attribute states. Instead, this work considers the case where the current attribute state is not completely observable and one only obtains a measurement that is statistically related to that state; in other words, the SL belief is updated according to the likelihood that each attribute state was observed. The paper then illustrates properties of the partially observable updates as a function of the state likelihoods and demonstrates the use of these likelihoods for a trust estimation application. Finally, the utility of the partially observable updates is demonstrated via various simulations, including the trust estimation case.
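To make the partially observable update described in the entry above concrete, the sketch below shows a binomial subjective-logic opinion backed by Beta evidence counts, with both the classical fully observed update and a likelihood-weighted update. It is a simplified illustration, not the moment-matching operator derived in the paper: the Opinion class, the prior weight W = 2, and the fractional-evidence scheme (splitting one unit of evidence by posterior responsibility) are assumptions made for exposition.

```python
# Minimal sketch (hypothetical, not the paper's operator): a binomial subjective-logic
# opinion stored as Beta evidence counts, with a classical fully observed update and a
# likelihood-weighted "partially observable" update that splits one unit of evidence
# between the two states according to their posterior responsibility.
from dataclasses import dataclass

W = 2.0  # non-informative prior weight commonly used in subjective logic


@dataclass
class Opinion:
    r: float = 0.0   # evidence for the proposition x
    s: float = 0.0   # evidence against x
    a: float = 0.5   # base rate (prior probability of x)

    @property
    def belief(self) -> float:
        return self.r / (self.r + self.s + W)

    @property
    def disbelief(self) -> float:
        return self.s / (self.r + self.s + W)

    @property
    def uncertainty(self) -> float:
        return W / (self.r + self.s + W)

    def projected_probability(self) -> float:
        # P(x) = belief + base_rate * uncertainty
        return self.belief + self.a * self.uncertainty

    def observable_update(self, x_observed: bool) -> None:
        # Classical SL update: the state is fully observed, so the whole unit
        # of evidence goes to r or to s.
        if x_observed:
            self.r += 1.0
        else:
            self.s += 1.0

    def partial_update(self, lik_x: float, lik_not_x: float) -> None:
        # Likelihood-weighted update: only a measurement z with known likelihoods
        # P(z | x) and P(z | not x) is seen; split the unit of evidence by the
        # posterior responsibility of each state under the current projection.
        p = self.projected_probability()
        norm = lik_x * p + lik_not_x * (1.0 - p)
        if norm == 0.0:
            return  # the measurement carries no information under this model
        self.r += lik_x * p / norm
        self.s += lik_not_x * (1.0 - p) / norm


if __name__ == "__main__":
    op = Opinion()
    # A noisy sensor that reports the true state with 80% reliability.
    for _ in range(10):
        op.partial_update(lik_x=0.8, lik_not_x=0.2)
    print(op.belief, op.disbelief, op.uncertainty, op.projected_probability())
```

Running the example shows belief rising more slowly than it would under repeated fully observed positive evidence, reflecting the lower information content of a noisy measurement in this simplified scheme.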
Conference paper (Metadata only)
Reasoning under uncertainty: variations of subjective logic deduction (IEEE, 2013)
Authors: Kaplan, L. M.; Şensoy, Murat; Tang, Y.; Chakraborty, S.; Bisdikian, C.; de Mel, G. (Computer Science)
Abstract: This work develops alternatives to the classical subjective logic deduction operator. Given antecedent and consequent propositions, the new operators form opinions of the consequent that match the variance of the consequent posterior distribution, given opinions on the antecedent and on the conditional rules connecting the antecedent with the consequent. As a result, the uncertainty of the consequent actually maps to the spread of the probability projection of the opinion. Monte Carlo simulations demonstrate this connection for the new operators. Finally, the work uses Monte Carlo simulations to evaluate the quality of fusing opinions from multiple agents before and after deduction.
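The connection the abstract draws between consequent uncertainty and the spread of its probability projection can be illustrated with a short Monte Carlo sketch, in the spirit of the simulations the paper mentions. The Beta parameterization (prior weight 2) is the standard subjective-logic mapping; the evidence counts below are made-up example values, and the code only estimates the variance of P(y) under the total-probability projection rather than implementing the deduction operators proposed in the paper.

```python
# Monte Carlo sketch of the "spread" of the consequent probability P(y) induced by
# uncertain opinions on the antecedent x and on the conditionals y|x and y|not-x.
# Uses the usual subjective-logic mapping to Beta distributions (prior weight 2);
# the evidence counts are made-up example values, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
W = 2.0  # prior weight


def beta_params(r, s, a):
    """Map SL evidence (r for, s against) and base rate a to Beta(alpha, beta)."""
    return r + W * a, s + W * (1.0 - a)


ax, bx = beta_params(r=4, s=1, a=0.5)      # opinion on the antecedent x
ayx, byx = beta_params(r=6, s=1, a=0.5)    # conditional opinion on y given x
aynx, bynx = beta_params(r=1, s=5, a=0.5)  # conditional opinion on y given not-x

n = 100_000
px = rng.beta(ax, bx, n)        # samples of P(x)
pyx = rng.beta(ayx, byx, n)     # samples of P(y | x)
pynx = rng.beta(aynx, bynx, n)  # samples of P(y | not-x)

# Total-probability projection for the consequent.
py = pyx * px + pynx * (1.0 - px)

print("mean P(y):    ", py.mean())
print("variance P(y):", py.var())  # the spread a deduced opinion should reflect
```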