Person:
AÇIK, Alper


First Name: Alper
Last Name: AÇIK
Publication Search Results

Now showing 1 - 9 of 9
  • Article / Publication / Open Access
    Improvement of design of a surgical interface using an eye tracking device
    (Springer Science+Business Media, 2014-05) Erol Barkana, D.; Açık, Alper; Psychology; AÇIK, Alper
    Surgical interfaces help surgeons interpret and quantify patient information, and present an integrated workflow in which all available data are combined to enable optimal treatment. Human factors research provides a systematic approach to designing user interfaces for safety, accuracy, satisfaction and comfort. One such method, the user-centered design approach, was used to develop a surgical interface for kidney tumor cryoablation, and an eye tracking device was used to obtain the best configuration of the developed interface. The surgical interface was developed following the four phases of the user-centered design approach: analysis, design, implementation and deployment. Possible configurations of the surgical interface, comprising various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., were developed. Experiments on a simulated cryoablation of a tumor were performed with surgeons to evaluate the proposed surgical interface. Fixation durations and the number of fixations at informative regions of the surgical interface were analyzed, and these data were used to modify the interface. Eye movement data showed that participants concentrated their attention on informative regions more when the number of displayed Computed Tomography (CT) images was reduced. Additionally, the time participants required to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to those observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the interface.
    The National Aeronautics and Space Administration Task Load Index (NASA-TLX) and Short Post-Assessment Situational Awareness (SPASA) questionnaire results showed that the overall mental workload of surgeons associated with the surgical interface was low, as intended, and that their overall situational awareness scores were considerably high. This preliminary study highlights the improvement of a surgical interface using eye tracking technology to obtain its best configuration. The results presented here reveal that a visual surgical interface designed according to eye movement characteristics may lead to improved usability.
  • Article / Publication / Open Access
    Evaluation of a surgical interface for robotic cryoablation task using an eye-tracking system
    (Elsevier, 2016-11) Açık, Alper; Erol Barkana, D.; Akgün, G.; Yantaç, A. E.; Aydın, Ç.; Psychology; AÇIK, Alper
    Computer-assisted navigation systems coupled with surgical interfaces (SIs) provide doctors with tools that are safer for patients than traditional methods. Usability analysis of the SIs that guides their development is hence important. In this study, we record the eye movements of doctors and of people with no medical expertise during interaction with an SI that directs a simulated cryoablation task. There are two different arrangements of the layout of the same SI, and the goal is to evaluate whether one of these arrangements is ergonomically better than the other. We use several gaze-related statistics, some of which are employed in an SI design context for the first time. Even though the performance and gaze-related analyses reveal that the two arrangements are comparable in many respects, there are also differences. Specifically, one arrangement leads to more saccades along the vertical and horizontal directions, lower saccade amplitudes in the crucial phase of the task, and more locally clustered yet globally spread viewing. Accordingly, that arrangement is selected for future use. The present study provides a proof of concept for the integration of novel gaze analysis tools developed for scene perception studies into the interface development process.
  • Article / Publication / Open Access
    The contributions of image content and behavioral relevancy to overt attention
    (Plos, 2014-04-15) Onat, S.; Açık, Alper; Schumann, F.; König, P.; Psychology; AÇIK, Alper
    During free-viewing of natural scenes, eye movements are guided by bottom-up factors inherent to the stimulus, as well as top-down factors inherent to the observer. The question of how these two different sources of information interact and contribute to fixation behavior has recently received a lot of attention. Here, a battery of 15 visual stimulus features was used to quantify the contribution of stimulus properties during free-viewing of 4 different categories of images (Natural, Urban, Fractal and Pink Noise). Behaviorally relevant information was estimated in the form of topographical interestingness maps by asking an independent set of subjects to click at the image regions that they subjectively found most interesting. Using a Bayesian scheme, we computed saliency functions that described the probability of a given feature to be fixated. In the case of stimulus features, the precise shape of the saliency functions was strongly dependent upon image category, and overall the saliency associated with these features was generally weak. When testing multiple features jointly, a linear additive integration model of individual saliencies performed satisfactorily. We found that the saliency associated with interesting locations was much higher than that of any low-level image feature and any pairwise combination thereof. Furthermore, the low-level image features were found to be maximally salient at those locations that already had high interestingness ratings. Temporal analysis showed that regions with high interestingness ratings were fixated as early as the third fixation following stimulus onset. Paralleling these findings, fixation durations were found to depend mainly on interestingness ratings and to a lesser extent on the low-level image features. Our results suggest that both low- and high-level sources of information play a significant role during the exploration of complex scenes, with behaviorally relevant information being more effective than stimulus features.
  • Article / Publication
    Erratum to: improvement of design of a surgical interface using an eye tracking device
    (Springer Nature, 2014-11) Erol Barkana, D.; Açık, Alper; Duru, D. G.; Duru, A. D.; Psychology; AÇIK, Alper
  • Article / Publication / Open Access
    Development and evaluation of an interface for pre-operative planning of cryoablation of a kidney tumor
    (2013) Barkana, D. E.; Duru, D. G.; Duru, A. D.; Açık, Alper; Özkan, M.; Psychology; AÇIK, Alper
    Surgical interfaces are used for the interpretation and quantification of patient information, and for the presentation of an integrated workflow in which all available data are combined to enable optimal treatment. Human factors research provides a systematic approach to designing user interfaces for safety, accuracy, satisfaction and comfort. One such method, the user-centered design approach, was used to develop a surgical interface for pre-operative planning of cryoablation of a kidney tumor. Two experiments on a simulated cryoablation task were performed with surgeons to evaluate the proposed surgical interface, using subjective (questionnaires) and objective (eye tracking) methods to obtain the best interface configuration.
  • Conference paper / Publication / Restricted
    Exploratory multimodal data analysis with standard multimedia player – multimedia containers: a feasible solution to make multimodal research data accessible to the broad audience
    (SCITEPRESS, Science and Technology Publications, 2017) Schöning, J.; Gert, A. L.; Açık, Alper; Kietzmann, T. C.; Heidemann, G.; König, P.; Psychology; AÇIK, Alper
    The analysis of multimodal data comprised of images, videos and additional recordings, such as gaze trajectories, EEG, emotional states, and heart rate, is presently only feasible with custom applications. Even exploring such data requires the compilation of specific applications that suit a specific dataset only. This need for specific applications arises because all corresponding data are stored in separate files in custom-made, distinct data formats. Thus, accessing such datasets is cumbersome and time-consuming for experts and virtually impossible for non-experts. To make multimodal research data easily shareable and accessible to a broad audience, like researchers from diverse disciplines and all other interested people, we show how multimedia containers can support the visualization and sonification of scientific data. The use of a container format allows explorative multimodal data analyses with any multimedia player as well as streaming the data via the Internet. We prototyped this approach on two datasets, both with visualization of gaze data and one with additional sonification of EEG data. In a user study, we asked expert and non-expert users about their experience during an explorative investigation of the data. Based on their statements, our prototype implementation, and the datasets, we discuss the benefit of storing multimodal data, including the corresponding videos or images, in a single multimedia container. In conclusion, we summarize what is necessary for having multimedia containers as a standard for storing multimodal data and give an outlook on how artificial networks can be trained on such standardized containers.
  • Article / Publication / Open Access
    An extensive dataset of eye movements during viewing of complex images
    (Nature, 2017) Wilming, N.; Onat, S.; Ossandón, J. P.; Açık, Alper; Kietzmann, T. C.; Kaspar, K.; Gameiro, R. R.; Vormberg, A.; König, P.; Psychology; AÇIK, Alper
    We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 different studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed free eye movements and differed in the age range of participants (~7–80 years), stimulus sizes, stimulus modifications (phase scrambled, spatial filtering, mirrored), and stimulus categories (natural and urban scenes, web sites, fractal, pink-noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset present a strong opportunity for evaluating and comparing computational models of overt attention and, furthermore, for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups.
  • Article / Publication / Open Access
    Real and implied motion at the center of gaze
    (Association for Research in Vision and Ophthalmology, 2014-01) Açık, Alper; Bartel, A.; König, P.; Psychology; AÇIK, Alper
    Even though the dynamicity of our environment is a given, much of what we know about fixation selection comes from studies of static scene viewing. We performed a direct comparison of fixation selection on static and dynamic visual stimuli and investigated to what extent identical mechanisms drive the two. We recorded eye movements while participants viewed movie clips of natural scenery and static frames taken from the same movies. Both were presented at the same high spatial resolution (1080 × 1920 pixels). The static condition allowed us to check whether local movement features computed from the movies are salient even when presented as single frames. We observed that during the first second of viewing, movement and static features are equally salient in both conditions. Furthermore, the predictability of fixations based on movement features decreased faster when viewing static frames than when viewing movie clips. Yet even during the later portion of static-frame viewing, the predictive value of movement features remained well above chance. Moreover, we demonstrated that, whereas the movement and static feature sets were statistically dependent within each set, no dependence was observed between the two sets. Based on these results, we argue that implied motion predicts fixation similarly to real movement and that the onset of motion in natural stimuli is more salient than ongoing movement. The present results allow us to address to what extent and when static image viewing is similar to the perception of a dynamic environment.
  • Article / Publication
    The relationship between handedness and valence: A gesture study
    (Sage, 2018-12) Çatak, E. N.; Açık, Alper; Göksun, T.; Psychology; AÇIK, Alper
    People with different hand preferences assign positive and negative emotions to different sides of their bodies and produce co-speech gestures with their dominant hand when the content is positive. In this study, we investigated this side preference by handedness in both gesture comprehension and production. Participants watched faceless gesture videos with negative and positive content on an eye tracker and were asked to retell the stories after each video. Results indicated no difference in looking preferences between right- and left-handers. Yet, an effect of emotional valence was observed: participants spent more time looking to the right (the actor's left) when the information was positive and to the left (the actor's right) when the information was negative. Participants' retellings of the stories revealed a handedness effect only for different types of gestures (representational vs. beat). Individuals used their dominant hands for beat gestures. For representational gestures, the right-handers used their right hands more, while the left-handers gestured with both hands equally. Overall, the lack of a significant association between handedness and emotional content at both the comprehension and production levels suggests that body-specific mental representations may not extend to the conversational level.
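
The Bayesian scheme mentioned in the abstract of "The contributions of image content and behavioral relevancy to overt attention" (estimating the probability that a given feature value is fixated) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the feature, the sampled distributions, and the bin edges are invented for the example, and only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one low-level feature value (e.g., local
# contrast) sampled at fixated locations and at random control locations.
fixated_vals = rng.normal(0.6, 0.2, 5000)
control_vals = rng.normal(0.5, 0.2, 5000)

bins = np.linspace(-0.5, 1.5, 21)

# Empirical distributions p(feature | fixated) and p(feature) overall.
p_feat_given_fix, _ = np.histogram(fixated_vals, bins=bins, density=True)
p_feat, _ = np.histogram(control_vals, bins=bins, density=True)

# Bayes: p(fixated | f) / p(fixated) = p(f | fixated) / p(f),
# i.e., how much more often a feature value is fixated than chance predicts.
with np.errstate(divide="ignore", invalid="ignore"):
    saliency = np.where(p_feat > 0, p_feat_given_fix / p_feat, np.nan)

# Bins with saliency above 1 mark feature values that attract fixations.
```

A saliency function estimated this way can be compared across feature types (or against the interestingness maps) on a common scale, which is what makes the relative-contribution analysis in the abstract possible.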