Person:
ŞENSOY, Murat

Publication Search Results

Now showing 1 - 10 of 49
  • Conference paper · Publication · Open Access
    Evidential deep learning to quantify classification uncertainty
    (Neural Information Processing Systems Foundation, 2018) Şensoy, Murat; Kaplan, L.; Kandemir, M.; Computer Science; ŞENSOY, Murat
    Deterministic neural nets have been shown to learn effective predictors on a wide range of machine learning problems. However, as the standard approach is to train the network to minimize a prediction loss, the resultant model remains ignorant of its prediction confidence. Orthogonally to Bayesian neural nets, which indirectly infer prediction uncertainty through weight uncertainties, we propose explicit modeling of the same using the theory of subjective logic. By placing a Dirichlet distribution on the class probabilities, we treat predictions of a neural net as subjective opinions and learn the function that collects the evidence leading to these opinions from data with a deterministic neural net. The resultant predictor for a multi-class classification problem is another Dirichlet distribution whose parameters are set by the continuous output of a neural net. We provide a preliminary analysis of how the peculiarities of our new loss function drive improved uncertainty estimation. We observe that our method achieves unprecedented success in the detection of out-of-distribution queries and in withstanding adversarial perturbations.
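
    As a rough, unofficial illustration of the Dirichlet-based output described above, the sketch below (PyTorch, with made-up layer sizes and a softplus evidence function) maps a network's non-negative evidence to Dirichlet parameters and derives the expected class probabilities and a vacuity-style uncertainty. It is a minimal sketch of the general idea, not the authors' released implementation or their exact loss.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class EvidentialClassifier(nn.Module):
          """Minimal sketch: a deterministic net whose outputs are treated as
          non-negative Dirichlet evidence rather than softmax logits.
          Layer sizes and the softplus evidence function are illustrative choices."""

          def __init__(self, in_dim: int, num_classes: int, hidden: int = 64):
              super().__init__()
              self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, num_classes))
              self.num_classes = num_classes

          def forward(self, x):
              evidence = F.softplus(self.body(x))         # e_k >= 0
              alpha = evidence + 1.0                      # Dirichlet parameters
              strength = alpha.sum(dim=-1, keepdim=True)  # S = sum_k alpha_k
              prob = alpha / strength                     # expected class probabilities
              uncertainty = self.num_classes / strength   # subjective-logic vacuity u = K / S
              return alpha, prob, uncertainty

      # Usage on a random batch (shapes are illustrative):
      model = EvidentialClassifier(in_dim=20, num_classes=3)
      alpha, prob, u = model(torch.randn(5, 20))
      print(prob.sum(dim=-1))   # each row sums to 1
      print(u.squeeze())        # larger when less evidence has been collected
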
  • Conference paper · Publication
    FUSE-BEE: Fusion of subjective opinions through behavior estimation
    (IEEE, 2015) Şensoy, Murat; Kaplan, L.; Ayci, Gönül; de Mel, G.; Computer Science; ŞENSOY, Murat; Ayci, Gönül
    Information is critical in almost all decision-making processes. Therefore, it is important to get the right information at the right time from the right sources. However, information sources may behave differently while providing information; they may provide unreliable, erroneous, noisy, or misleading information, deliberately or unintentionally. Motivated by this observation, in this paper we propose a statistical information fusion approach based on behavior estimation. Our approach transforms the conveyed information into a more useful form by tempering it with the estimated behaviors of the sources. Through extensive simulations, we show that our approach has lower computational complexity and achieves significantly lower behavior-estimation and fusion errors.
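
    The abstract above describes tempering conveyed information with estimated source behaviors; the snippet below is only a generic illustration of that idea (discounting each source's reported Dirichlet evidence by an assumed reliability score before pooling), with made-up numbers. It is not the FUSE-BEE algorithm itself.

      def fuse_reports(reports, reliabilities):
          """Fuse per-source Dirichlet evidence after discounting each report by an
          estimated source reliability in [0, 1]. Purely illustrative."""
          num_classes = len(reports[0])
          fused = [0.0] * num_classes
          for evidence, reliability in zip(reports, reliabilities):
              for k, e in enumerate(evidence):
                  fused[k] += reliability * e      # unreliable sources contribute little evidence
          alpha = [e + 1.0 for e in fused]         # Dirichlet parameters with a uniform prior
          total = sum(alpha)
          return [a / total for a in alpha]        # expected fused probabilities

      # Three sources reporting evidence over {red, blue}; the third is estimated to be unreliable.
      reports = [[8.0, 2.0], [6.0, 4.0], [0.0, 30.0]]
      reliabilities = [0.9, 0.8, 0.1]
      print(fuse_reports(reports, reliabilities))  # dominated by the two reliable sources
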
  • Conference paper · Publication · Open Access
    Reasoning with uncertain information and trust
    (SPIE, 2013) Şensoy, Murat; Mel, G. de; Fokoue, A.; Norman, T. J.; Pan, J. Z.; Tang, Y.; Oren, N.; Sycara, K.; Kaplan, L.; Pham, T.; Computer Science; ŞENSOY, Murat
    A limitation of standard Description Logics (DLs) is their inability to reason with uncertain and vague knowledge. Although probabilistic and fuzzy extensions of DLs exist, which provide an explicit representation of uncertainty, they do not provide an explicit means for reasoning about second-order uncertainty. The Dempster-Shafer theory of evidence (DST) overcomes this weakness and provides the means to fuse and reason about uncertain information. In this paper, we combine DL-Lite with DST to allow scalable reasoning over uncertain semantic knowledge bases. Furthermore, our formalism allows for the detection of conflicts between the fused information and domain constraints. Finally, we propose methods to resolve such conflicts through trust revision by exploiting evidence regarding the information sources. The effectiveness of the proposed approaches is shown through simulations under various settings.
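
    The paper builds on the Dempster-Shafer theory of evidence; the self-contained snippet below implements the standard Dempster's rule of combination for mass functions over a small frame of discernment, included only to make the fusion step concrete. It does not reproduce the paper's DL-Lite formalism or its trust-revision method; the example frame and masses are made up.

      from itertools import product

      def dempster_combine(m1: dict, m2: dict) -> dict:
          """Combine two mass functions (focal set -> mass) with Dempster's rule.
          Focal elements are frozensets over the frame of discernment."""
          combined, conflict = {}, 0.0
          for (a, wa), (b, wb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + wa * wb
              else:
                  conflict += wa * wb          # mass that would go to the empty set
          if conflict >= 1.0:
              raise ValueError("totally conflicting sources cannot be combined")
          return {s: w / (1.0 - conflict) for s, w in combined.items()}

      # Two sources reporting on whether a target is friendly or hostile:
      F, H = frozenset({"friendly"}), frozenset({"hostile"})
      theta = F | H                                   # the whole frame (ignorance)
      m1 = {F: 0.6, theta: 0.4}
      m2 = {H: 0.3, theta: 0.7}
      print(dempster_combine(m1, m2))  # renormalised after discarding the 0.18 of conflicting mass
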
  • Article · Publication
    Handling epistemic and aleatory uncertainties in probabilistic circuits
    (Springer, 2022-04) Cerutti, F.; Kaplan, L. M.; Kimmig, A.; Şensoy, Murat; Computer Science; ŞENSOY, Murat
    When collaborating with an AI system, we need to assess when to trust its recommendations. If we mistakenly trust it in regions where it is likely to err, catastrophic failures may occur; hence the need for Bayesian approaches to probabilistic reasoning that determine the confidence (or epistemic uncertainty) in the probabilities in light of the training data. We propose an approach to Bayesian inference of posterior distributions that overcomes the independence assumption behind most approaches dealing with a large class of probabilistic reasoning problems that includes Bayesian networks as well as several instances of probabilistic logic. We provide an algorithm for Bayesian inference of posterior distributions from sparse, albeit complete, observations, and for deriving inferences and their confidences while keeping track of the dependencies between variables as they are manipulated within the unifying computational formalism provided by probabilistic circuits. Each leaf of such circuits is labelled with a beta-distributed random variable, which provides an elegant framework for representing uncertain probabilities. We achieve better estimation of epistemic uncertainty than state-of-the-art approaches, including highly engineered ones, while being able to handle general circuits and with only a modest increase in computational effort compared to using point probabilities.
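
    As a small, self-contained illustration of the beta-distributed leaves mentioned above: each probability is represented by a Beta distribution fitted from (possibly sparse) counts, so that both a point estimate and the epistemic confidence around it are available. The prior and the counts below are invented for illustration, and the paper's circuit-level propagation of these distributions is not reproduced here.

      from dataclasses import dataclass

      @dataclass
      class BetaLeaf:
          """A probability represented as Beta(alpha, beta) instead of a point value."""
          alpha: float
          beta: float

          @classmethod
          def from_counts(cls, successes: int, failures: int, prior: float = 1.0):
              # Uniform Beta(1, 1) prior by default; an illustrative choice.
              return cls(prior + successes, prior + failures)

          @property
          def mean(self) -> float:
              return self.alpha / (self.alpha + self.beta)

          @property
          def variance(self) -> float:
              s = self.alpha + self.beta
              return (self.alpha * self.beta) / (s * s * (s + 1.0))

      # Sparse vs. plentiful observations of roughly the same 3:1 frequency:
      sparse = BetaLeaf.from_counts(successes=3, failures=1)
      dense = BetaLeaf.from_counts(successes=300, failures=100)
      print(sparse.mean, sparse.variance)   # wide (low-confidence) estimate
      print(dense.mean, dense.variance)     # similar mean, far lower variance
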
  • Conference paper · Publication
    A knowledge driven policy framework for internet of things
    (ScitePress, 2017) Goynugur, Emre; De Mel, G.; Şensoy, Murat; Talamadupula, K.; Calo, S.; Computer Science; ŞENSOY, Murat; Goynugur, Emre
    With the proliferation of technology, connected and interconnected devices (henceforth referred to as IoT) are fast becoming a viable option for automating users' day-to-day interactions with their environment, be it manufacturing or home-care automation. However, with the explosion of IoT deployments we have observed in recent years, manually governing human-to-device, and especially device-to-device, interactions is an impractical, if not impossible, task. This is because devices have their own obligations and prohibitions in context, and humans are not equipped to maintain a bird's-eye view of the interaction space. Motivated by this observation, in this paper we propose an end-to-end framework that (a) automatically discovers devices, and their associated services and capabilities, w.r.t. an ontology; (b) supports the representation of high-level and expressive user policies to govern the devices and services in the environment; (c) provides efficient procedures to refine and reason about policies to automate the management of interactions; and (d) delegates to similarly capable devices to fulfill the interactions when conflicts occur. We then present our initial work on instrumenting the framework and discuss its details.
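
    To make the conflict handling described in point (d) concrete, here is a deliberately simplified, hypothetical sketch: policies are obligations or prohibitions over (device, capability) pairs, a conflict is an obligation whose target device is prohibited from acting, and resolution delegates the task to another device advertising the same capability. All names and the matching logic are illustrative assumptions, not the paper's ontology-based machinery.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Policy:
          modality: str      # "obligation" or "prohibition" (illustrative encoding)
          device: str
          capability: str

      def resolve(policies: list[Policy], devices: dict[str, set[str]]) -> list[str]:
          """Detect obligation/prohibition conflicts and delegate to a similarly capable device."""
          prohibited = {(p.device, p.capability) for p in policies if p.modality == "prohibition"}
          plan = []
          for p in policies:
              if p.modality != "obligation":
                  continue
              if (p.device, p.capability) not in prohibited:
                  plan.append(f"{p.device} performs {p.capability}")
                  continue
              # Conflict: look for another device that advertises the same capability.
              alt = next((d for d, caps in devices.items()
                          if d != p.device and p.capability in caps
                          and (d, p.capability) not in prohibited), None)
              plan.append(f"delegate {p.capability} to {alt}" if alt else f"unresolved: {p.capability}")
          return plan

      devices = {"thermostat_A": {"set_temperature"}, "thermostat_B": {"set_temperature"}}
      policies = [Policy("obligation", "thermostat_A", "set_temperature"),
                  Policy("prohibition", "thermostat_A", "set_temperature")]  # e.g. during maintenance
      print(resolve(policies, devices))   # ['delegate set_temperature to thermostat_B']
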
  • Editorial · Publication
    Preface
    (Springer International Publishing, 2016) Dignum, V.; Noriega, P.; Şensoy, Murat; Sichman, J. S.; Computer Science; ŞENSOY, Murat
    The pervasiveness of open systems raises a range of challenges and opportunities for research and technological development in the area of autonomous agents and multi-agent systems. Open systems comprise loosely coupled entities interacting within a social space. These entities join the social space in order to achieve goals that are unattainable by agents in isolation. However, when those entities are autonomous, they might misbehave; furthermore, in open systems one may not know beforehand which entities will be active, when they may become active, or when they may leave the system.
  • Article · Publication
    Agent-based semantic collaborative search
    (Executive Committee, Taiwan Academic Network, Ministry of Education, 2013) Şensoy, Murat; Computer Science; ŞENSOY, Murat
    The next generation of the Web builds upon technologies such as the Semantic Web and intelligent software agents. These technologies aim at knowledge representation that allows both humans and software agents to understand and reason about the content on the Web. In this paper, we propose an agent-based approach for collaborative, distributed semantic search of Web resources. Our approach enables a human user to semantically describe his or her search interest to an agent. Depending on the interests of their users, the agents evolve their ontologies and create search concepts. Based on these search concepts, the agents coordinate and compose virtual communities. Within these communities, agents with similar interests interact to locate and share URLs relevant to the search interests of their users. Through these interactions, shared vocabularies emerge cooperatively among the agents, allowing them to communicate properly within the communities. Our empirical evaluations and analysis show that the proposed approach combines Semantic Web technologies and multi-agent systems in a novel way, enabling users to find and share the URLs relevant to their search interests.
  • Conference paper · Publication
    SHACL constraints with inference rules
    (Springer Nature, 2019) Pareti, P.; Konstantinidis, G.; Norman, T. J.; Şensoy, Murat; Computer Science; Ghidini, C.; Hartig, O.; Maleshkova, M.; Svatek, V.; Cruz, I.; Hogan, A.; Song, J.; Lefrancois, M.; Gandon, F.; ŞENSOY, Murat
    The Shapes Constraint Language (SHACL) has recently been introduced as a W3C recommendation to define constraints that can be validated against RDF graphs. The interaction of SHACL with other Semantic Web technologies, such as ontologies or reasoners, is a matter of ongoing research. In this paper we study the interaction of a subset of SHACL with inference rules expressed in datalog. On the one hand, SHACL constraints can be used to define a "schema" for graph datasets. On the other hand, inference rules can lead to the discovery of new facts that do not match the original schema. Given a set of SHACL constraints and a set of datalog rules, we present a method to detect which constraints could be violated by the application of the inference rules on some graph instance of the schema, and to update the original schema, i.e., the set of SHACL constraints, in order to capture the new facts that can be inferred. We provide theoretical and experimental results for the various components of our approach.
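
    The toy example below (pure Python, no RDF libraries, made-up predicates) mirrors the core observation: a datalog-style rule adds facts, and a max-cardinality constraint that the original graph satisfied no longer holds on the inferred graph. It is only an illustration of the problem the paper addresses, not of its detection or schema-update method.

      # Triples are (subject, predicate, object); predicates and the rule are made up.
      graph = {("alice", "supervises", "bob"),
               ("alice", "supervises", "dave"),
               ("alice", "manages", "bob")}

      def apply_rule(triples):
          """Datalog-style rule: supervises(X, Y) -> manages(X, Y)."""
          inferred = {(s, "manages", o) for (s, p, o) in triples if p == "supervises"}
          return triples | inferred

      def max_one_managee(triples):
          """SHACL-like constraint: every subject has at most one 'manages' edge."""
          counts = {}
          for s, p, o in triples:
              if p == "manages":
                  counts[s] = counts.get(s, 0) + 1
          return all(c <= 1 for c in counts.values())

      print(max_one_managee(graph))              # True: the original data conforms
      print(max_one_managee(apply_rule(graph)))  # False: the inferred facts violate the constraint
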
  • Article · Publication
    A generalized stereotype learning approach and its instantiation in trust modeling
    (Elsevier, 2018-08) Fang, H.; Zhang, J.; Şensoy, Murat; Computer Science; ŞENSOY, Murat
    Owing to the lack of historical data regarding an entity in online communities, a user may rely on stereotyping to estimate its behavior based on historical data about others. However, these stereotypes cannot accurately reflect the user's evaluation if they are based on limited historical data about other entities. In view of this issue, we propose a novel generalized stereotype learning approach: the fuzzy semantic framework. Specifically, we propose a fuzzy semantic process that is combined with traditional machine-learning techniques to construct stereotypes. It consists of two sub-processes: a fuzzy process that generalizes over non-nominal attributes (e.g., price) by splitting their values in a fuzzy manner, and a semantic process that generalizes over nominal attributes (e.g., location) by replacing their specific values with more general terms according to a predefined ontology. We also implement the proposed framework on top of the traditional decision-tree method to learn users' stereotypes and validate the effectiveness of our framework for computing trust in e-marketplaces. Experiments on real data confirm that our proposed model can accurately measure the trustworthiness of sellers with which buyers have limited experience.
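
    As a generic illustration of the fuzzy sub-process over a non-nominal attribute such as price, the snippet below assigns graded memberships to overlapping bands instead of making a single hard split. The breakpoints and band names are invented for illustration; the integration with decision-tree learning and the ontology-based semantic sub-process are not shown.

      def fuzzy_price_memberships(price: float) -> dict[str, float]:
          """Graded membership of a price in overlapping cheap/moderate/expensive bands.
          Breakpoints (50, 100, 150, 200) are illustrative, not from the paper."""
          def falling(x, lo, hi):   # 1 below lo, linearly falling to 0 at hi
              return max(0.0, min(1.0, (hi - x) / (hi - lo)))
          def rising(x, lo, hi):    # 0 below lo, linearly rising to 1 at hi
              return max(0.0, min(1.0, (x - lo) / (hi - lo)))
          return {
              "cheap": falling(price, 50, 100),
              "moderate": min(rising(price, 50, 100), falling(price, 150, 200)),
              "expensive": rising(price, 150, 200),
          }

      # A price of 80 is partly 'cheap' and partly 'moderate' rather than falling on one side of a hard split.
      print(fuzzy_price_memberships(80))    # {'cheap': 0.4, 'moderate': 0.6, 'expensive': 0.0}
      print(fuzzy_price_memberships(170))   # {'cheap': 0.0, 'moderate': 0.6, 'expensive': 0.4}
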
  • Conference paper · Publication
    Cooperation and trust in the presence of bias
    (2014) Şensoy, Murat; Computer Science; Cohen, R.; Falcone, R.; Norman, T.; ŞENSOY, Murat
    Stereotypes may influence the attitudes that individuals have towards others; they therefore represent biases toward and against others. In this paper, we formalise stereotypical bias within trust evaluations. Then, using the iterated prisoner's dilemma game, we quantitatively analyse how cooperation and mutual trust between self-interested agents are affected by stereotypical bias. We present two key findings: (i) the stereotypical bias of one player may inhibit cooperation by creating incentives for others to defect; (ii) even if only one of the players has a stereotypical bias, the convergence of mutual trust between the players may be strictly determined by that bias.
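
    The toy simulation below is only a hypothetical reading of the setting: each agent holds a pseudo-count trust estimate of its opponent, seeded by a possibly biased stereotype, and cooperates in the iterated prisoner's dilemma only while that estimate stays above a threshold. All numbers and the update rule are invented; the paper's formalisation of stereotypical bias is not reproduced.

      def play_ipd(prior_a, prior_b, rounds=200, threshold=0.5):
          """Toy iterated prisoner's dilemma: each agent cooperates iff its pseudo-count
          trust estimate of the opponent exceeds the threshold. Priors encode
          (possibly biased) stereotypes; all parameters are illustrative."""
          a_coop, a_total = prior_a      # pseudo-counts of the opponent's cooperation
          b_coop, b_total = prior_b
          for _ in range(rounds):
              a_cooperates = (a_coop / a_total) > threshold
              b_cooperates = (b_coop / b_total) > threshold
              a_coop += 1 if b_cooperates else 0   # A updates its trust from B's move
              a_total += 1
              b_coop += 1 if a_cooperates else 0   # B updates its trust from A's move
              b_total += 1
          return round(a_coop / a_total, 3), round(b_coop / b_total, 3)

      # Unbiased priors settle into mutual cooperation and high mutual trust;
      # a strongly negative stereotype held by A alone drags both trust values down.
      print(play_ipd(prior_a=(2, 3), prior_b=(2, 3)))    # roughly (0.995, 0.995)
      print(play_ipd(prior_a=(1, 10), prior_b=(2, 3)))   # both estimates collapse toward 0
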