Show simple item record

dc.contributor.author	Schöning, J.
dc.contributor.author	Gert, A. L.
dc.contributor.author	Açık, Alper
dc.contributor.author	Kietzmann, T. C.
dc.contributor.author	Heidemann, G.
dc.contributor.author	König, P.
dc.date.accessioned	2017-04-01T09:57:27Z
dc.date.available	2017-04-01T09:57:27Z
dc.date.issued	2017
dc.identifier.isbn	978-989-758-225-7	en_US
dc.identifier.uri	http://hdl.handle.net/10679/4880
dc.identifier.uri	http://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0006260202720279
dc.description.abstract	The analysis of multimodal data comprising images, videos, and additional recordings, such as gaze trajectories, EEG, emotional states, and heart rate, is presently feasible only with custom applications. Even exploring such data requires compiling a specific application that suits one specific dataset only. This need for specific applications arises because all corresponding data are stored in separate files in custom-made, distinct data formats. Accessing such datasets is therefore cumbersome and time-consuming for experts and virtually impossible for non-experts. To make multimodal research data easily shareable and accessible to a broad audience, such as researchers from diverse disciplines and other interested people, we show how multimedia containers can support the visualization and sonification of scientific data. The use of a container format allows exploratory multimodal data analyses with any multimedia player, as well as streaming the data via the Internet. We prototyped this approach on two datasets, both with visualization of gaze data and one with additional sonification of EEG data. In a user study, we asked expert and non-expert users about their experience during an exploratory investigation of the data. Based on their statements, our prototype implementation, and the datasets, we discuss the benefits of storing multimodal data, including the corresponding videos or images, in a single multimedia container. In conclusion, we summarize what is necessary to establish multimedia containers as a standard for storing multimodal data and give an outlook on how artificial neural networks can be trained on such standardized containers.	en_US
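
The container approach the abstract describes can be prototyped with off-the-shelf tools. Below is a minimal Python sketch, not the authors' implementation: it writes gaze samples as a subtitle track, renders a toy sonification of EEG values as an audio track, and muxes both with a stimulus video into a single Matroska file that any standard multimedia player can open. The file names (stimulus.mp4, multimodal_record.mkv), the 0.5 s gaze cue duration, and the linear value-to-pitch mapping are all illustrative assumptions, not details taken from the paper.

import subprocess
import wave
import math
import struct

def write_gaze_subtitles(gaze, path="gaze.srt", dt=0.5):
    """Write gaze samples [(x, y), ...] as SRT cues, one every dt seconds."""
    def stamp(t):
        ms = int(t * 1000)
        return "{:02d}:{:02d}:{:02d},{:03d}".format(
            ms // 3600000, ms // 60000 % 60, ms // 1000 % 60, ms % 1000)
    with open(path, "w") as f:
        for i, (x, y) in enumerate(gaze):
            t0, t1 = i * dt, (i + 1) * dt
            f.write(f"{i + 1}\n{stamp(t0)} --> {stamp(t1)}\n"
                    f"gaze: x={x} y={y}\n\n")

def sonify_eeg(samples, path="eeg.wav", rate=8000):
    """Toy sonification: map each EEG value to the pitch of a 100 ms tone."""
    frames = bytearray()
    for v in samples:
        freq = 200.0 + 20.0 * v            # assumed linear value-to-pitch map
        for n in range(rate // 10):        # 100 ms of mono 16-bit PCM
            amp = int(16000 * math.sin(2 * math.pi * freq * n / rate))
            frames += struct.pack("<h", amp)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

write_gaze_subtitles([(512, 384), (530, 360), (498, 402)])
sonify_eeg([0.1, 0.4, -0.2, 0.8])

# Mux the stimulus video, the sonified EEG audio, and the gaze subtitle
# track into one Matroska container, labelling each extra stream.
subprocess.run([
    "ffmpeg", "-y",
    "-i", "stimulus.mp4", "-i", "eeg.wav", "-i", "gaze.srt",
    "-map", "0:v", "-map", "1:a", "-map", "2:s",
    "-c:v", "copy", "-c:a", "aac", "-c:s", "srt",
    "-metadata:s:a:0", "title=EEG sonification",
    "-metadata:s:s:0", "title=Gaze trajectory",
    "multimodal_record.mkv",
], check=True)

Opened in a standard player such as VLC, the resulting file exposes the gaze track as selectable subtitles and the sonified EEG as a selectable audio track, which is the kind of exploration without custom software that the abstract argues for.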
dc.language.iso	eng	en_US
dc.publisher	SCITEPRESS, Science and Technology Publications	en_US
dc.relation.ispartof	Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017)	en_US
dc.rights	restrictedAccess
dc.rights	Attribution-NonCommercial-NoDerivs 4.0 International
dc.rights.uri	https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title	Exploratory multimodal data analysis with standard multimedia player - multimedia containers: a feasible solution to make multimodal research data accessible to the broad audience	en_US
dc.type	Conference paper	en_US
dc.publicationstatus	published	en_US
dc.contributor.department	Özyeğin University
dc.contributor.authorID	(ORCID 0000-0002-9706-4662 & YÖK ID 254804) Açık, Alper
dc.contributor.ozuauthor	Açık, Alper
dc.identifier.volume	4	en_US
dc.identifier.startpage	272	en_US
dc.identifier.endpage	279	en_US
dc.identifier.wos	WOS:000444907000033
dc.identifier.doi	10.5220/0006260202720279
dc.subject.keywords	Multimodal data analysis	en_US
dc.subject.keywords	Visualization	en_US
dc.subject.keywords	Sonification	en_US
dc.subject.keywords	Gaze data	en_US
dc.subject.keywords	EEG data	en_US
dc.identifier.scopus	SCOPUS:2-s2.0-85020199630
dc.request.email	alper.acik@ozyegin.edu.tr
dc.request.fullname	Alper Açık
dc.relation.publicationcategory	Conference Paper - International - Institutional Academic Staff

