Browsing by Author "Sycara, K."
Now showing 1 - 5 of 5
The effect of culture on trust in automation: reliability and workload (ACM, 2018-11)
Article, metadata only. Chien, S.-Y.; Lewis, M.; Sycara, K.; Liu, J.-S.; Kumru, Asiye (Psychology)
Trust in automation has been a topic of intensive study since the late 1990s and is of increasing importance with the advent of intelligent interacting systems. While the earliest trust experiments involved human interventions to correct failures/errors in automated control systems, most subsequent studies have investigated information acquisition and analysis decision-aiding tasks, such as target detection, for which automation reliability is more easily manipulated. Despite the high level of international dependence on automation in industry, almost all studies to date have employed Western samples, primarily from the U.S. The present study addresses these gaps by running a large-sample experiment in three diverse cultures (U.S., Taiwan, and Turkey) using a "trust-sensitive task" consisting of both automated control and target detection subtasks. This article presents results for the target detection subtask, for which reliability and task load were manipulated. The experiments allow us to determine whether reported effects are universal or specific to Western culture, and whether they vary in baseline or magnitude across cultures. Results generally confirm consistent effects of the manipulations across the three cultures, as well as cultural differences in initial trust and variation in the effects of manipulations consistent with 10 cultural hypotheses based on Hofstede's Cultural Dimensions and Leung and Cohen's theory of Cultural Syndromes. These results provide critical implications and insights for correct trust calibration and for enhancing human trust in intelligent automation systems across cultures; they should also be useful in designing intelligent systems for users of different cultures. Our article presents the following contributions. First, to the best of our knowledge, this is the first set of studies to address cultural factors across all the cultural syndromes identified in the literature by comparing trust across Honor, Face, and Dignity cultures. Second, it is the first to use a validated cross-cultural trust measure for measuring trust in automation. Third, our experiments are the first to study the dynamics of trust across cultures.

Influence of cultural factors in dynamic trust in automation (IEEE, 2016)
Conference Object, metadata only. Chien, S.-Y.; Lewis, M.; Sycara, K.; Liu, J.-S.; Kumru, Asiye (Psychology)
The use of autonomous systems has increased rapidly in recent decades. To improve human-automation interaction, trust has been studied closely; research shows that trust is critical to the development of appropriate reliance on automation. To examine how trust mediates human-automation relationships across cultures, the present study investigated the influence of cultural factors on trust in automation. Theoretically guided empirical studies were conducted in the U.S., Taiwan, and Turkey to examine how cultural dynamics affect various aspects of trust in automation. The results revealed significant cultural differences in human trust attitudes toward automation.

Influence of culture, transparency, trust, and degree of automation on automation use (IEEE, 2020-06)
Article, metadata only. Chien, S. Y.; Lewis, M.; Sycara, K.; Kumru, Asiye (Psychology); Liu, J. S.
The reported study compares groups of 120 participants each, from the United States (U.S.), Taiwan (TW), and Turkey (TK), interacting with versions of an automated path planner that vary in transparency and degree of automation.
The nationalities were selected, in accordance with the theory of cultural syndromes, as representatives of Dignity (U.S.), Face (TW), and Honor (TK) cultures, and were predicted to differ in readiness to trust automation, in the degree of transparency required to use automation, and in willingness to use systems with high degrees of automation. Three experimental conditions were tested. In the first (highlight), path conflicts were highlighted, leaving rerouting to the participant. In the second (replanner), the system requested permission to reroute when a path conflict was detected. The third (combined) condition increased the transparency of the replanner by combining highlighting with rerouting, making the conflict on which each decision was based visible to the user. A novel framework relating transparency, stages of automation, and trust in automation is proposed, in which transparency plays the primary role in decisions to use automation but is supplemented by trust where information is otherwise insufficient. Hypothesized cultural effects and framework predictions were confirmed.

Reasoning about uncertain information and conflict resolution through trust revision (International Foundation for Autonomous Agents and Multiagent Systems, 2013)
Conference Object, metadata only. Şensoy, Murat (Computer Science); Fokoue, A.; Pan, J. Z.; Norman, T. J.; Tang, Y.; Oren, N.; Sycara, K.
In information-driven multi-agent systems (MAS), information consumers collect information about their environment from various sources such as sensors. However, there is no guarantee that a source will provide the requested information truthfully and correctly. Even when information is provided only by trustworthy sources, it can contain conflicts that hamper its usability. In this paper, we propose to exploit such conflicts to revise trust in information. This requires a reasoning mechanism that can accommodate domain constraints, uncertainty, and trust. Our formalism, SDL-Lite, is an extension of a tractable subset of Description Logics with the Dempster-Shafer theory of evidence. SDL-Lite allows reasoning about uncertain information and enables conflict detection. We then propose methods for conflict resolution through trust revision and analyse them through simulations. We show that the proposed methods allow reasonably accurate estimation of trust in information in realistic settings.

Reasoning with uncertain information and trust (SPIE, 2013)
Conference Object, open access. Şensoy, Murat (Computer Science); Mel, G. de; Fokoue, A.; Norman, T. J.; Pan, J. Z.; Tang, Y.; Oren, N.; Sycara, K.; Kaplan, L.; Pham, T.
A limitation of standard Description Logics is their inability to reason with uncertain and vague knowledge. Although probabilistic and fuzzy extensions of DLs exist and provide an explicit representation of uncertainty, they do not provide an explicit means for reasoning about second-order uncertainty. The Dempster-Shafer theory of evidence (DST) overcomes this weakness and provides the means to fuse and reason about uncertain information. In this paper, we combine DL-Lite with DST to allow scalable reasoning over uncertain semantic knowledge bases. Furthermore, our formalism allows for the detection of conflicts between the fused information and domain constraints. Finally, we propose methods to resolve such conflicts through trust revision by exploiting evidence regarding the information sources. The effectiveness of the proposed approaches is shown through simulations under various settings.
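The last two records fuse uncertain information from multiple sources using the Dempster-Shafer theory of evidence. The abstracts do not give the fusion rule itself; as a minimal illustrative sketch (standard Dempster's rule of combination, not the papers' SDL-Lite formalism; the sensor masses are invented for the example), the core computation looks like this:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to
    masses) with Dempster's rule. Returns (combined masses, conflict K)."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            # Product mass flows to the intersection of the two hypotheses.
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            # Empty intersection: the two pieces of evidence contradict.
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    # Normalise by 1 - K to redistribute the conflicting mass.
    return {h: m / (1.0 - conflict) for h, m in combined.items()}, conflict

# Two hypothetical sensors reporting on whether a source is trustworthy.
T, F = frozenset({"trust"}), frozenset({"distrust"})
FRAME = T | F  # mass on the whole frame represents ignorance
m1 = {T: 0.6, FRAME: 0.4}
m2 = {T: 0.5, F: 0.3, FRAME: 0.2}
fused, k = dempster_combine(m1, m2)
# k = 0.18; mass on "trust" becomes 0.62 / 0.82, roughly 0.76
```

The conflict mass K is exactly the quantity the first abstract proposes to exploit: a large K between sources is the signal that triggers conflict detection and, in turn, trust revision.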