Browsing by Author "Lewis, M."
Now showing 1 - 4 of 4
Article (Metadata only)
The effect of culture on trust in automation: reliability and workload (ACM, 2018-11)
Chien, S.-Y.; Lewis, M.; Sycara, K.; Liu, J.-S.; Kumru, Asiye
Trust in automation has become a topic of intensive study since the late 1990s and is of increasing importance with the advent of intelligent interacting systems. While the earliest trust experiments involved human interventions to correct failures or errors in automated control systems, most subsequent studies have investigated information-acquisition and analysis decision-aiding tasks, such as target detection, for which automation reliability is more easily manipulated. Despite the high level of international dependence on automation in industry, almost all studies to date have employed Western samples, primarily from the U.S. The present study addresses these gaps with a large-sample experiment in three diverse cultures (the U.S., Taiwan, and Turkey) using a "trust-sensitive task" consisting of both automated control and target detection subtasks. This article presents results for the target detection subtask, for which reliability and task load were manipulated. The experiments allow us to determine whether reported effects are universal or specific to Western culture and whether they vary in baseline or magnitude across cultures. Results generally confirm consistent effects of the manipulations across the three cultures, as well as cultural differences in initial trust and variation in the effects of the manipulations, consistent with 10 cultural hypotheses based on Hofstede's Cultural Dimensions and Leung and Cohen's theory of Cultural Syndromes. These results provide critical implications and insights for correct trust calibration and for enhancing human trust in intelligent automation systems across cultures. Additionally, our results should be useful in designing intelligent systems for users of different cultures.
Our article makes the following contributions. First, to the best of our knowledge, this is the first set of studies to address cultural factors across all the cultural syndromes identified in the literature by comparing trust in Honor, Face, and Dignity cultures. Second, it is the first to use a validated cross-cultural trust measure for measuring trust in automation. Third, our experiments are the first to study the dynamics of trust across cultures.

Conference Object (Metadata only)
Influence of cultural factors in dynamic trust in automation (IEEE, 2016)
Chien, S.-Y.; Lewis, M.; Sycara, K.; Liu, J.-S.; Kumru, Asiye
The use of autonomous systems has increased rapidly in recent decades. To improve human-automation interaction, trust has been closely studied: research shows trust is critical to the development of appropriate reliance on automation. To examine how trust mediates human-automation relationships across cultures, the present study investigated the influence of cultural factors on trust in automation. Theoretically guided empirical studies were conducted in the U.S., Taiwan, and Turkey to examine how cultural dynamics affect various aspects of trust in automation. The results revealed significant cultural differences in trust attitudes toward automation.

Article (Metadata only)
Influence of culture, transparency, trust, and degree of automation on automation use (IEEE, 2020-06)
Chien, S. Y.; Lewis, M.; Sycara, K.; Kumru, Asiye; Liu, J. S.
The reported study compares groups of 120 participants each, from the United States (U.S.), Taiwan (TW), and Turkey (TK), interacting with versions of an automated path planner that vary in transparency and degree of automation. The nationalities were selected, in accordance with the theory of cultural syndromes, as representatives of Dignity (U.S.), Face (TW), and Honor (TK) cultures, and were predicted to differ in readiness to trust automation, in the degree of transparency required to use automation, and in willingness to use systems with high degrees of automation. Three experimental conditions were tested. In the first, "highlight," path conflicts were highlighted, leaving rerouting to the participant. In the second, "replanner," the system requested permission to reroute when a path conflict was detected. The third, "combined," condition increased the transparency of the replanner by pairing highlighting with rerouting, making the conflict on which a rerouting decision was based visible to the user. A novel framework relating transparency, stages of automation, and trust in automation is proposed, in which transparency plays the primary role in decisions to use automation and is supplemented by trust where information is otherwise insufficient. The hypothesized cultural effects and framework predictions were confirmed.

Editorial (Metadata only)
Preface (2011)
Kim, S.; Uchitel, S.; Garbervetsky, D.; Aktemur, Tankut Barış; Kroening, D.; Orso, A.; Nagappan, N.; Xie, T.; Mueller, P.; Cataldo, M.; Tillmann, N.; Margaria-Steffen, T.; Tonetta, S.; Bradley, A.; Chen, N.; Caso, G. de; Ferrara, P.; He, N.; Kassios, I.; Kicillof, N.; Lewis, M.; Meyer, D.; Nagel, R.; Nimal, V.; Pandita, R.; Pavese, E.; Rajan, A.; Roveri, M.; Sawadsky, N.; Schapachnik, F.; Seo, H.; Shakya, K.; Song, Y.; Summers, A.; Xiao, X.; Yilmaz, Buse; Zhang, L.; Bishop, J.; Breitman, K.; Notkin, D.