Browsing by Author "Kammachi-Sreedhar, K."
Now showing 1 - 2 of 2
Benchmarking the second edition of the omnidirectional media format standard (IEEE, 2022)
Conference Object | Metadata only
Kara, Burak; Akçay, Mehmet Necmettin; Beğen, Ali Cengiz; Ahsan, S.; Curcio, I. D. D.; Kammachi-Sreedhar, K.; Aksu, E. (Computer Science)

Omnidirectional MediA Format (OMAF) is the first worldwide virtual reality (VR) standard for storing and distributing immersive media, completed in 2019. The second edition of the standard (OMAF v2) was published in 2021. It keeps all the features defined in the first edition while introducing new ones, such as overlays and multi-viewpoints. OMAF v2's Tile Index Segments, which carry track fragment metadata for each segment and quality level, introduce a bandwidth overhead. During the OMAF v2 standardization, several methods for representing track fragment runs were studied to mitigate this overhead. This paper presents the implementation of one of these methods, the compressed box method using the DEFLATE algorithm (OMAF v2*). It also provides comprehensive test results for OMAF v1, OMAF v2 and OMAF v2* with various combinations of three tile grids (6x4, 8x6 and 12x8), three segment durations (300 ms, 900 ms and 3 s), two videos (RollerCoaster and Timelapse), two bitrate groups (each with four different bitrates) and two HTTP versions (HTTP/1.1 and HTTP/2).

Quality upshifting with auxiliary I-Frame splicing (IEEE, 2023)
Conference Object | Metadata only
Akçay, Mehmet Necmettin; Kara, Burak; Beğen, Ali Cengiz; Ahsan, S.; Curcio, I. D. D.; Kammachi-Sreedhar, K.; Aksu, E. (Computer Science)

This paper introduces the Auxiliary I-Frame Splicing method to reduce bandwidth waste in adaptive streaming. The method fetches a high-quality I-frame and splices it into an already downloaded low-quality segment, yielding a higher-quality rendering at a lower overhead than replacing the entire low-quality segment. In experiments with three videos and four quantization parameters, the results show bandwidth savings of up to 87% while still increasing the peak signal-to-noise ratio (PSNR) score by 20% and the video multi-method assessment fusion (VMAF) score by 73%. The demo shows the visual differences between the original and spliced videos.
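For readers curious about the compressed box idea summarized in the first entry above, the following is a minimal, illustrative sketch of how a repetitive metadata payload (such as track fragment run records) shrinks under DEFLATE. The record layout and sizes are hypothetical; this is not the OMAF v2* implementation.

```python
import zlib

def deflate_payload(payload: bytes) -> bytes:
    """Compress a box payload as a raw DEFLATE stream (illustrative only)."""
    co = zlib.compressobj(level=9, wbits=-15)  # wbits=-15 -> raw DEFLATE, no zlib header
    return co.compress(payload) + co.flush()

if __name__ == "__main__":
    # Hypothetical per-sample records: 4-byte size + 4-byte duration, highly repetitive,
    # loosely mimicking the kind of indexing metadata that creates the overhead.
    records = b"".join(
        (8_000 + (i % 16) * 64).to_bytes(4, "big") + (1_000).to_bytes(4, "big")
        for i in range(2_000)
    )
    packed = deflate_payload(records)
    saving = 100 * (1 - len(packed) / len(records))
    print(f"uncompressed: {len(records)} B, DEFLATE: {len(packed)} B ({saving:.1f}% smaller)")
```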
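Similarly, the auxiliary I-frame splicing idea in the second entry can be pictured with the conceptual sketch below. The Frame class, the splice_aux_iframe helper and all sizes are hypothetical and only illustrate the bandwidth trade-off of fetching one high-quality I-frame instead of re-fetching a whole high-quality segment; a real player would operate on the container (sample tables, byte ranges), not a Python list.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    kind: str     # "I" or "P"
    size: int     # bytes
    quality: str  # "low" or "high"

def splice_aux_iframe(low_segment: list, hq_iframe: Frame) -> list:
    """Swap the segment's leading I-frame for the high-quality one (conceptual only)."""
    if not low_segment or low_segment[0].kind != "I":
        raise ValueError("segment must start with an I-frame")
    return [hq_iframe] + low_segment[1:]

# Hypothetical sizes, chosen only to show the bandwidth trade-off.
low_segment = [Frame("I", 40_000, "low")] + [Frame("P", 8_000, "low") for _ in range(29)]
hq_iframe = Frame("I", 120_000, "high")
full_hq_segment_size = 900_000  # assumed cost of re-fetching the whole HQ segment

spliced = splice_aux_iframe(low_segment, hq_iframe)
extra = hq_iframe.size  # only the auxiliary I-frame is fetched on top of the LQ segment
print(f"extra download: {extra} B vs. {full_hq_segment_size} B for a full HQ segment "
      f"({100 * (1 - extra / full_hq_segment_size):.0f}% less)")
```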