Person: KINLI, Osman Furkan
First Name: Osman Furkan
Last Name: KINLI

Publication Search Results
13 results (now showing 1 - 10 of 13)
Article | Metadata only
Generalization to unseen viewpoint images of objects via alleviated pose attentive capsule agreement (Springer, 2023-02)
Özcan, Barış; Kınlı, Osman Furkan; Kıraç, Mustafa Furkan; Computer Science
Despite their achievements in object recognition, Convolutional Neural Networks (CNNs) particularly fail to generalize to unseen viewpoints of a learned object, even with substantial samples. On the other hand, recently emerged capsule networks outperform CNNs in novel-viewpoint generalization tasks even with significantly fewer parameters. Capsule networks group neuron activations to represent higher-level attributes and their interactions, achieving equivariance to visual transformations. However, capsule networks have a high computational cost for learning the interactions of capsules in consecutive layers via the so-called routing algorithm. To address these issues, we propose a novel routing algorithm, Alleviated Pose Attentive Capsule Agreement (ALPACA), tailored for capsules that jointly contain pose, feature, and existence-probability information, to enhance novel-viewpoint generalization of capsules on 2D images. For this purpose, we have created the Novel ViewPoint Dataset (NVPD), a viewpoint-controlled, texture-free dataset with 8 different setups in which training and test samples are formed from different viewpoints. In addition to NVPD, we have conducted experiments on the iLab2M dataset, where the dataset is split in terms of object instances. Experimental results show that ALPACA outperforms its capsule-network counterparts and state-of-the-art CNNs on the iLab2M and NVPD datasets. Moreover, ALPACA is 10 times faster than routing-based capsule networks. It also outperforms attention-based routing algorithms of the domain while keeping the inference and training times comparable.
Lastly, our code, the NVPD dataset, test setups, and implemented models are freely available at https://github.com/Boazrciasn/ALPACA.

Conference Object | Metadata only
Modeling the lighting in scenes as style for auto white-balance correction (IEEE, 2023)
Kınlı, Osman Furkan; Yılmaz, Doğa; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
Style may refer to different concepts (e.g. painting style, hairstyle, texture, color, filter) depending on how the feature space is formed. In this work, we propose the novel idea of interpreting the lighting in single- and multi-illuminant scenes as the concept of style. To verify this idea, we introduce an enhanced auto white-balance (AWB) method that models the lighting in single- and mixed-illuminant scenes as the style factor. Our AWB method does not require any illumination estimation step, yet contains a network that learns to generate the weighting maps of the images with different WB settings. The proposed network utilizes the style information extracted from the scene by a multi-head style extraction module. AWB correction is completed by blending these weighting maps with the scene. Experiments on single- and mixed-illuminant datasets demonstrate that our proposed method achieves promising correction results compared to recent works. This shows that the lighting in scenes with multiple illuminations can be modeled by the concept of style. Source code and trained models are available at https://github.com/birdortyedi/lighting-as-style-awb-correction.

Conference Object | Metadata only
NTIRE 2023 challenge on night photography rendering (IEEE, 2023)
Shutova, A.; Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
This paper presents a review of the NTIRE 2023 challenge on night photography rendering.
The goal of the challenge was to find solutions that process raw camera images taken in nighttime conditions and thereby produce photo-quality output images in the standard RGB (sRGB) space. Unlike the previous year's competition, participants were not provided with a large training dataset for the target sensor. Instead, they were given images of a color checker illuminated by a known light source. To evaluate the results, a sufficient number of viewers were asked to assess the visual quality of the proposed solutions, considering the subjective nature of the task. The highest-ranking solutions were further ranked by Richard Collins, a renowned photographer. The top-ranking participants' solutions effectively represent the state of the art in nighttime photography rendering.

Conference Object | Metadata only
Reversing image signal processors by reverse style transferring (Springer, 2023)
Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
RAW image datasets are more suitable than standard RGB image datasets for the ill-posed inverse problems in low-level vision, but they are not common in the literature. Only a few studies focus on mapping sRGB images to RAW format. Mapping from sRGB to RAW could be a relevant domain for reverse style transfer, since the task is an ill-posed reversing problem. In this study, we seek an answer to the question: can the ISP operations be modeled as the style factor in an end-to-end learning pipeline? To investigate this idea, we propose a novel architecture, namely RST-ISP-Net, for learning to reverse the ISP operations with the help of adaptive feature normalization. We formulate this problem as reverse style transfer and mostly follow the practice used in prior work. We have participated in the AIM Reversed ISP challenge with our proposed architecture.
Results indicate that the idea of modeling disruptive or modifying factors as style is still valid, but further improvements are required to be competitive in such a challenge.

Conference Object | Metadata only
Fashion image retrieval with capsule networks (IEEE, 2019)
Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
In this study, we investigate the in-shop clothing retrieval performance of densely-connected Capsule Networks with dynamic routing. To achieve this, we propose a Triplet-based design of the Capsule Network architecture with two different feature extraction methods. In our design, Stacked-convolutional (SC) and Residual-connected (RC) blocks are used to form the input of the capsule layers. Experimental results show that both of our designs outperform all variants of the baseline study, namely FashionNet, without relying on landmark information. Moreover, when compared to the SOTA architectures on clothing retrieval, our proposed Triplet Capsule Networks achieve comparable recall rates with only half of the parameters used in the SOTA architectures.

Conference Object | Metadata only
Quaternion capsule networks (IEEE, 2021)
Özcan, Barış; Kınlı, Osman Furkan; Computer Science
Capsules are groupings of neurons that represent sophisticated information about a visual entity, such as pose and features. In view of this property, Capsule Networks outperform CNNs in challenging tasks like object recognition in unseen viewpoints, and this is achieved by learning the transformations between the object and its parts with the help of a high-dimensional representation of pose information. In this paper, we present Quaternion Capsules (QCN), where the pose information of capsules and their transformations are represented by quaternions.
Quaternions are immune to gimbal lock, allow straightforward regularization of the rotation representation for capsules, and require fewer parameters than matrices. The experimental results show that QCNs generalize better to novel viewpoints with fewer parameters, and also achieve on-par or better performance than the state-of-the-art Capsule architectures on well-known benchmark datasets. Our code is available.

Conference Object | Metadata only
Deterministic neural illumination mapping for efficient auto-white balance correction (IEEE, 2023)
Kınlı, Osman Furkan; Yılmaz, Doğa; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
Auto-white balance (AWB) correction is a critical operation in image signal processors for accurate and consistent color correction across various illumination scenarios. This paper presents a novel and efficient AWB correction method that achieves at least 35 times faster processing than the current state-of-the-art methods, with equivalent or superior performance on high-resolution images. Inspired by deterministic color style transfer, our approach introduces deterministic illumination color mapping, leveraging learnable projection matrices for both the canonical illumination form and the AWB-corrected output. It involves feeding high-resolution images and the corresponding latent representations into a mapping module to derive a canonical form, followed by another mapping module that maps the pixel values to those of the corrected version. This strategy is designed to be resolution-agnostic and also enables seamless integration of any pre-trained AWB network as the backbone. Experimental results confirm the effectiveness of our approach, revealing significant performance improvements and reduced time complexity compared to state-of-the-art methods.
Our method provides an efficient deep-learning-based AWB correction solution, promising real-time, high-quality color correction for digital imaging applications.

Conference Object | Metadata only
Instagram filter removal on fashionable images (IEEE, 2021-06)
Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
Social media images are generally transformed by filtering to obtain aesthetically more pleasing appearances. However, CNNs generally fail to interpret an image and its filtered version as the same in the visual analysis of social media images. We introduce the Instagram Filter Removal Network (IFRNet) to mitigate the effects of image filters for social media analysis applications. To achieve this, we assume that any filter applied to an image substantially injects a piece of additional style information into it, and we consider this problem as a reverse style transfer problem. The visual effects of filtering can be directly removed by adaptively normalizing external style information in each level of the encoder. Experiments demonstrate that IFRNet outperforms all compared methods in quantitative and qualitative comparisons and is able to remove the visual effects to a great extent. Additionally, we present the filter classification performance of our proposed model and analyze the dominant color estimation on the images unfiltered by all compared methods.

Conference Object | Metadata only
A benchmark for inpainting of clothing images with irregular holes (Springer, 2020)
Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
Fashion image understanding is an active research field with a large number of practical applications for the industry. Despite its practical impact on intelligent fashion analysis systems, clothing image inpainting has not been extensively examined yet.
For that matter, we present an extensive benchmark of clothing image inpainting on well-known fashion datasets. Furthermore, we introduce the use of a dilated version of partial convolutions, which efficiently derives the mask update step, and empirically show that the proposed method reduces the required number of layers to form fully-transparent masks. Experiments show that dilated partial convolutions (DPConv) improve the quantitative inpainting performance compared to the other inpainting strategies; in particular, DPConv performs better when the mask size is 20% or more of the image.

Conference Object | Metadata only
Patch-wise contrastive style learning for instagram filter removal (IEEE, 2022)
Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
Image-level corruptions and perturbations degrade the performance of CNNs on different downstream vision tasks. Social media filters are one of the most common sources of various corruptions and perturbations in real-world visual analysis applications. The negative effects of these distractive factors can be alleviated by recovering the original images with their pure style for the inference of the downstream vision tasks. Assuming these filters substantially inject a piece of additional style information into the social media images, we can formulate the problem of recovering the original versions as a reverse style transfer problem. We introduce the Contrastive Instagram Filter Removal Network (CIFR), which enhances this idea for Instagram filter removal by employing a novel multi-layer patch-wise contrastive style learning mechanism. Experiments show that our proposed strategy produces better qualitative and quantitative results than the previous studies. Moreover, we present the results of additional experiments on the proposed architecture within different settings.
Finally, we present the inference outputs and a quantitative comparison of filtered and recovered images on localization and segmentation tasks to support the main motivation for this problem.
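The "patch-wise contrastive style learning" named in the CIFR abstract above is not specified in detail on this page. As a rough illustration of the general idea only, a single-anchor InfoNCE-style contrastive loss over patch feature vectors is a common formulation of patch-wise contrastive learning; the sketch below is a generic version, not necessarily the exact loss used in CIFR, and the function name and temperature value are assumptions:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.07):
    """Generic single-anchor InfoNCE loss sketch (names/temperature assumed).

    anchor, positive: (d,) patch feature vectors; negatives: (n, d).
    Features are L2-normalized, so similarities are cosine similarities.
    The loss is small when the anchor matches the positive patch and is
    dissimilar to the negatives, pulling matching patches together.
    """
    def l2norm(v):
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-8)

    a, p, ns = l2norm(anchor), l2norm(positive), l2norm(negatives)
    # Positive logit first, then one logit per negative patch.
    logits = np.concatenate(([a @ p], ns @ a)) / tau
    logits -= logits.max()  # numerical stability before exponentiation
    # Cross-entropy with the positive as the correct "class".
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

In a multi-layer variant, a loss of this form would be computed on patch features taken from several encoder levels and summed; that aggregation is omitted here.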