Person: ÜNAL, Ercenur

First Name: Ercenur
Last Name: ÜNAL

Publication Search Results

Now showing 1 - 10 of 14
  • Article | Publication
    How children identify events from visual experience
    (Taylor & Francis, 2019) Ünal, Ercenur; Papafragou, A.; Psychology; ÜNAL, Ercenur
    Three experiments explored how well children recognize events from different types of visual experience: either by directly seeing an event or by indirectly experiencing it from post-event visual evidence. In Experiment 1, 4- and 5- to 6-year-old Turkish-speaking children (n = 32) successfully recognized events through either direct or indirect visual access. In Experiment 2, a new group of 4- and 5- to 6-year-olds (n = 37) reliably attributed event recognition to others who had direct or indirect visual access to events (even though performance was lower than Experiment 1). In both experiments, although children's accuracy improved with age, there was no difference between the two types of access. Experiment 3 replicated the findings from the youngest participants of Experiments 1 and 2 with a matched sample of English-speaking 4-year-olds (n = 37). Thus children can use different kinds of visual experience to support event representations in themselves and others.
  • Conference paper | Publication | Open Access
    Universality and diversity in event cognition and language
    (The Cognitive Science Society, 2022) Ji, Y.; Ünal, Ercenur; Papafragou, A.; Psychology; ÜNAL, Ercenur
    Humans are surprisingly adept at interpreting what is happening around them – they spontaneously and rapidly segment and organize their dynamic experience into coherent event construals. Such event construals may offer a starting point for assembling a linguistic description of the event during speaking (Levelt, 1989). However, the precise format of event representations and their mapping to language have remained elusive, partly because research on how people mentally segment and perceive events (see Radvansky & Zacks, 2014 for a review) has largely proceeded separately from analyses of how events are encoded in language (see Truswell, 2019 for a review).
  • Article | Publication | Open Access
    Speaking and gesturing guide event perception during message conceptualization: Evidence from eye movements
    (Elsevier, 2022-08) Ünal, Ercenur; Manhardt, F.; Özyürek, A.; Psychology; ÜNAL, Ercenur
    Speakers' visual attention to events is guided by linguistic conceptualization of information in spoken language production and in language-specific ways. Does production of language-specific co-speech gestures further guide speakers' visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language where speakers' speech and gesture show language specificity with path of motion mostly expressed within the main verb accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task compared to the non-linguistic task. Furthermore, the relative attention allocated to path over manner within the linguistic task was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. Results strongly suggest that speakers' visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production and suggests that the links between the eye and the mouth may be extended to the eye and the hand.
  • Article | Publication | Open Access
    Multimodal encoding of motion events in speech, gesture and cognition
    (Cambridge University Press, 2023-12) Ünal, Ercenur; Mamus, E.; Özyürek, A.; Psychology; ÜNAL, Ercenur
    How people communicate about motion events and how this is shaped by language typology are mostly studied with a focus on linguistic encoding in speech. Yet, human communication typically involves an interactional exchange of multimodal signals, such as hand gestures that have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, and how this changes across different sensory modalities of input and interacts with other aspects of cognition. Empirical evidence strongly suggests that Talmy's typology of event integration predicts multimodal event descriptions in speech and gesture and visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as type and modality of stimuli, may override the influence of language typology, especially for expression of manner.
  • Article | Publication
    Sign advantage: Both children and adults’ spatial expressions in sign are more informative than those in speech and gestures combined
    (Cambridge University Press, 2022-12) Karadöller, D. Z.; Sümer, B.; Ünal, Ercenur; Özyürek, A.; Psychology; ÜNAL, Ercenur
    Expressing Left-Right relations is challenging for speaking children. Yet, this challenge was absent for signing children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking children's co-speech gestures are considered. Eight-year-old children and adults who were hearing monolingual Turkish speakers or deaf signers of Turkish Sign Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers' spatial expressions compared to speech only, and this pattern was more prominent for children than adults. However, signing adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking and signing children benefit from iconic expressions in the visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development.
  • Conference paper | Publication | Open Access
    Linguistic encoding of inferential evidence for events
    (The Cognitive Science Society, 2022) Avcılar, Gökçen; Ünal, Ercenur; Psychology; ÜNAL, Ercenur; Avcılar, Gökçen
    How people learn about events often varies, with some events perceived in their entirety and others inferred from the available evidence. Here, we investigate how children and adults linguistically encode the sources of their event knowledge. We focus on Turkish, a language that obligatorily encodes the source of information for past events using two evidentiality markers. Children (4- to 5-year-olds and 6- to 7-year-olds) and adults watched and described events that they directly saw or inferred from visual cues with manipulated degrees of indirectness. Overall, participants modified the evidential marking in their descriptions depending on (a) whether they saw or inferred the event and (b) the indirectness of the visual cues giving rise to an inference. There were no differences across age groups. These findings suggest that Turkish-speaking adults' and children's use of evidential markers is sensitive to the indirectness of the inferential evidence for events.
  • Article | Publication | Open Access
    Zihinsel durumların dilde ve bilişte temsili [The representation of mental states in language and cognition]
    (Bogazici University Press, 2020) Ünal, Ercenur; Baturlar, Özge; Psychology; ÜNAL, Ercenur; Baturlar, Özge
    The ability to understand others' mental states develops rapidly during the preschool years. This article addresses the relation between language and how concepts are represented in children's minds. To evaluate this relation, we examine the ability to attribute mental states such as beliefs, desires, and intentions to one's own behavior and to that of others (Theory of Mind). The focus is in particular on the question of the extent to which language provides the resources necessary for representing mental states. Empirical findings in this area indicate that language serves as a facilitating tool in the representation and processing of mental states, but is not a requirement for representing them.
  • Article | Publication
    From event representation to linguistic meaning
    (Wiley, 2021-01) Ünal, Ercenur; Ji, Y.; Papafragou, A.; Psychology; ÜNAL, Ercenur
    A fundamental aspect of human cognition is the ability to parse our constantly unfolding experience into meaningful representations of dynamic events and to communicate about these events with others. How do we communicate about events we have experienced? Influential theories of language production assume that the formulation and articulation of a linguistic message is preceded by preverbal apprehension that captures core aspects of the event. Yet the nature of these preverbal event representations and the way they are mapped onto language are currently not well understood. Here, we review recent evidence on the link between event conceptualization and language, focusing on two core aspects of event representation: event roles and event boundaries. Empirical evidence in both domains shows that the cognitive representation of events aligns with the way these aspects of events are encoded in language, providing support for the presence of deep homologies between linguistic and cognitive event structure.
  • Conference paper | Publication | Open Access
    Spatial language use predicts spatial memory of children: evidence from sign, speech, and speech-plus-gesture
    (The Cognitive Science Society, 2021) Karadöller, D. Z.; Sümer, B.; Ünal, Ercenur; Özyürek, A.; Psychology; ÜNAL, Ercenur
    There is a strong relation between children's exposure to spatial terms and their later memory accuracy. In the current study, we tested whether the production of spatial terms by children themselves predicts memory accuracy and whether and how the language modality of these encodings modulates memory accuracy. Hearing child speakers of Turkish and deaf child signers of Turkish Sign Language described pictures of objects in various spatial relations to each other and were later tested for their memory of these pictures in a surprise memory task. We found that having described the spatial relation between the objects predicted better memory accuracy. However, the modality of these descriptions (sign, speech, or speech-plus-gesture) did not reveal differences in memory accuracy. We discuss the implications of these findings for the relation between spatial language, memory, and the modality of encoding.
  • Article | Publication | Open Access
    Speaking but not gesturing predicts event memory: a cross-linguistic comparison
    (Cambridge University Press, 2022-09) Ter Bekke, M.; Özyürek, A.; Ünal, Ercenur; Psychology; ÜNAL, Ercenur
    Every day people see, describe, and remember motion events. However, the relation between multimodal encoding of motion events in speech and gesture, and memory is not yet fully understood. Moreover, whether language typology modulates this relation remains to be tested. This study investigates whether the type of motion event information (path or manner) mentioned in speech and gesture predicts which information is remembered, and whether this varies across speakers of typologically different languages. Dutch and Turkish speakers watched and described motion events and completed a surprise recognition memory task. For both Dutch and Turkish speakers, manner memory was at chance level. Participants who mentioned path in speech during encoding were more accurate at detecting changes to the path in the memory task. The relation between mentioning path in speech and path memory did not vary cross-linguistically. Finally, co-speech gesture did not predict memory over and above mentioning path in speech. These findings suggest that how speakers describe a motion event in speech is more important than the typology of the speakers' native language in predicting motion event memory. The motion event videos are available for download for future research at https://osf.io/p8cas/.