Person:
KINLI, Osman Furkan


First Name: Osman Furkan
Last Name: KINLI

Publication Search Results

Now showing 1 - 10 of 13
  • Conference paper
    Fashion image retrieval with capsule networks
    (IEEE, 2019) Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
    In this study, we investigate the in-shop clothing retrieval performance of densely-connected Capsule Networks with dynamic routing. To achieve this, we propose a Triplet-based design of the Capsule Network architecture with two different feature extraction methods. In our design, Stacked-convolutional (SC) and Residual-connected (RC) blocks are used to form the input of the capsule layers. Experimental results show that both of our designs outperform all variants of the baseline study, namely FashionNet, without relying on landmark information. Moreover, when compared to the SOTA architectures on clothing retrieval, our proposed Triplet Capsule Networks achieve comparable recall rates with only half of the parameters used in the SOTA architectures.
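    The retrieval objective described above is triplet-based; purely as an illustration (none of the names, shapes, or the margin value below come from the paper's code), a minimal PyTorch sketch of a triplet margin loss over capsule embedding vectors might look like this:

        # Illustrative sketch: triplet margin loss over capsule embeddings.
        import torch
        import torch.nn.functional as F

        def triplet_capsule_loss(anchor, positive, negative, margin=0.2):
            """anchor/positive/negative: (batch, dim) capsule embedding vectors."""
            d_ap = F.pairwise_distance(anchor, positive)  # anchor-positive distance
            d_an = F.pairwise_distance(anchor, negative)  # anchor-negative distance
            return F.relu(d_ap - d_an + margin).mean()    # hinge on the margin

        # Usage with random stand-in embeddings:
        a, p, n = (torch.randn(8, 128) for _ in range(3))
        loss = triplet_capsule_loss(a, p, n)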
  • Conference paper
    NTIRE 2022 challenge on night photography rendering
    (IEEE, 2022) Ershov, E.; Kınlı, Osman Furkan; Menteş, Sami; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
    This paper reviews the NTIRE 2022 challenge on night photography rendering. The challenge solicited solutions that processed RAW camera images captured in night scenes to produce a photo-finished output image encoded in the standard RGB (sRGB) space. Given the subjective nature of this task, the proposed solutions were evaluated based on the mean opinions of viewers asked to judge the visual appearance of the results. Michael Freeman, a world-renowned photographer, further ranked the solutions with the highest mean opinion scores. A total of 13 teams competed in the final phase of the challenge. The proposed methods provided by the participating teams represent state-of-the-art performance in nighttime photography.
  • Conference paper
    Patch-wise contrastive style learning for Instagram filter removal
    (IEEE, 2022) Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
    Image-level corruptions and perturbations degrade the performance of CNNs on different downstream vision tasks. Social media filters are one of the most common sources of such corruptions and perturbations in real-world visual analysis applications. The negative effects of these distractive factors can be alleviated by recovering the original images with their pure style for the inference of the downstream vision tasks. Assuming these filters substantially inject a piece of additional style information into the social media images, we can formulate the problem of recovering the original versions as a reverse style transfer problem. We introduce the Contrastive Instagram Filter Removal Network (CIFR), which enhances this idea for Instagram filter removal by employing a novel multi-layer patch-wise contrastive style learning mechanism. Experiments show that our proposed strategy produces better qualitative and quantitative results than previous studies. Moreover, we present the results of additional experiments for the proposed architecture within different settings. Finally, we present the inference outputs and a quantitative comparison of filtered and recovered images on localization and segmentation tasks to support the main motivation for this problem.
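    To make the patch-wise contrastive idea concrete, here is a hedged sketch of an InfoNCE-style loss over matched feature patches; the patch sampling, temperature, and tensor shapes are assumptions for illustration, not CIFR's actual implementation:

        # Illustrative sketch: InfoNCE-style contrastive loss over feature patches.
        import torch
        import torch.nn.functional as F

        def patch_contrastive_loss(query, positive, negatives, tau=0.07):
            """query, positive: (N, D) patch features; negatives: (N, K, D)."""
            q = F.normalize(query, dim=-1)
            pos = F.normalize(positive, dim=-1)
            neg = F.normalize(negatives, dim=-1)
            l_pos = (q * pos).sum(-1, keepdim=True)         # (N, 1) positive logits
            l_neg = torch.einsum("nd,nkd->nk", q, neg)      # (N, K) negative logits
            logits = torch.cat([l_pos, l_neg], dim=1) / tau
            labels = torch.zeros(len(q), dtype=torch.long)  # positive sits at index 0
            return F.cross_entropy(logits, labels)

        q, p = torch.randn(16, 256), torch.randn(16, 256)
        negs = torch.randn(16, 32, 256)
        loss = patch_contrastive_loss(q, p, negs)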
  • Conference paper
    Quaternion capsule networks
    (IEEE, 2021) Özcan, Barış; Kınlı, Osman Furkan; Computer Science
    Capsules are groupings of neurons that can represent sophisticated information of a visual entity, such as pose and features. Owing to this property, Capsule Networks outperform CNNs in challenging tasks like object recognition from unseen viewpoints, which is achieved by learning the transformations between an object and its parts with the help of a high-dimensional representation of pose information. In this paper, we present Quaternion Capsules (QCN), where the pose information of capsules and their transformations are represented by quaternions. Quaternions are immune to gimbal lock, offer a straightforward regularization of the rotation representation for capsules, and require fewer parameters than matrices. The experimental results show that QCNs generalize better to novel viewpoints with fewer parameters, and also achieve on-par or better performance than the state-of-the-art Capsule architectures on well-known benchmarking datasets. Our code is available.
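    The parameter argument can be made concrete: a rotation stored as a unit quaternion needs 4 numbers instead of the 9 entries of a 3x3 rotation matrix, and rotations compose via the Hamilton product. The sketch below shows only that general quaternion algebra, not the QCN architecture itself:

        # Illustrative sketch: composing capsule pose rotations as unit quaternions.
        import torch

        def qmul(q, r):
            """Hamilton product of quaternions shaped (..., 4), (w, x, y, z) order."""
            w1, x1, y1, z1 = q.unbind(-1)
            w2, x2, y2, z2 = r.unbind(-1)
            return torch.stack([
                w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2,
            ], dim=-1)

        # A part-to-whole transform applied to a capsule pose: 4 parameters
        # per rotation rather than the 9 of a rotation matrix.
        pose = torch.nn.functional.normalize(torch.randn(32, 4), dim=-1)
        transform = torch.nn.functional.normalize(torch.randn(32, 4), dim=-1)
        predicted = qmul(transform, pose)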
  • Conference paper
    Modeling the lighting in scenes as style for auto white-balance correction
    (IEEE, 2023) Kınlı, Osman Furkan; Yılmaz, Doğa; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
    Style may refer to different concepts (e.g., painting style, hairstyle, texture, color, filter) depending on how the feature space is formed. In this work, we propose the novel idea of interpreting the lighting in single- and multi-illuminant scenes as a concept of style. To verify this idea, we introduce an enhanced auto white-balance (AWB) method that models the lighting in single- and mixed-illuminant scenes as the style factor. Our AWB method does not require any illumination estimation step, yet contains a network that learns to generate the weighting maps of the images rendered with different WB settings. The proposed network utilizes the style information extracted from the scene by a multi-head style extraction module. AWB correction is completed by blending these weighting maps with the scene. Experiments on single- and mixed-illuminant datasets demonstrate that our proposed method achieves promising correction results when compared to recent works. This shows that the lighting in scenes with multiple illuminations can be modeled by the concept of style. Source code and trained models are available at https://github.com/birdortyedi/lighting-as-style-awb-correction.
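    The final blending step admits a compact illustration: given renders of the same scene under N white-balance settings and per-pixel weighting maps, the corrected image is their weighted sum. The shapes and the softmax normalization below are assumptions for the sketch, not the paper's exact formulation:

        # Illustrative sketch: blending WB renders with learned weighting maps.
        import torch

        def blend_wb_renders(renders, weights):
            """renders: (B, N, 3, H, W) scene under N WB settings;
            weights: (B, N, H, W) per-pixel maps, normalized over N."""
            w = torch.softmax(weights, dim=1).unsqueeze(2)  # (B, N, 1, H, W)
            return (w * renders).sum(dim=1)                 # (B, 3, H, W)

        renders = torch.rand(1, 3, 3, 64, 64)  # e.g. tungsten / daylight / shade
        weights = torch.randn(1, 3, 64, 64)    # would come from the style network
        corrected = blend_wb_renders(renders, weights)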
  • Conference paper
    NTIRE 2023 challenge on night photography rendering
    (IEEE, 2023) Shutova, A.; Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
    This paper presents a review of the NTIRE 2023 challenge on night photography rendering. The goal of the challenge was to find solutions that process raw camera images taken in nighttime conditions and thereby produce photo-quality output images in the standard RGB (sRGB) space. Unlike the previous year's competition, participants were not provided with a large training dataset for the target sensor. Instead, they were given images of a color checker illuminated by a known light source. To evaluate the results, a sufficient number of viewers were asked to assess the visual quality of the proposed solutions, considering the subjective nature of the task. The highest-ranking solutions were further ranked by Richard Collins, a renowned photographer. The top-ranking participants' solutions effectively represent the state of the art in nighttime photography rendering.
  • Article
    Generalization to unseen viewpoint images of objects via alleviated pose attentive capsule agreement
    (Springer, 2023-02) Özcan, Barış; Kınlı, Osman Furkan; Kıraç, Mustafa Furkan; Computer Science
    Despite their achievements in object recognition, Convolutional Neural Networks (CNNs) fail to generalize to unseen viewpoints of a learned object, even with substantial training samples. On the other hand, recently emerged capsule networks outperform CNNs in novel viewpoint generalization tasks even with significantly fewer parameters. Capsule networks group neuron activations to represent higher-level attributes and their interactions, thereby achieving equivariance to visual transformations. However, capsule networks incur a high computational cost when learning the interactions of capsules in consecutive layers via the so-called routing algorithm. To address these issues, we propose a novel routing algorithm, Alleviated Pose Attentive Capsule Agreement (ALPACA), which is tailored for capsules that contain pose, feature, and existence probability information together, to enhance the novel viewpoint generalization of capsules on 2D images. For this purpose, we have created a Novel ViewPoint Dataset (NVPD), a viewpoint-controlled, texture-free dataset with 8 different setups in which training and test samples are formed from different viewpoints. In addition to NVPD, we have conducted experiments on the iLab2M dataset, where the data are split in terms of object instances. Experimental results show that ALPACA outperforms its capsule network counterparts and state-of-the-art CNNs on the iLab2M and NVPD datasets. Moreover, ALPACA is 10 times faster than routing-based capsule networks. It also outperforms the attention-based routing algorithms of the domain while keeping inference and training times comparable. Lastly, our code, the NVPD dataset, test setups, and implemented models are freely available at https://github.com/Boazrciasn/ALPACA.
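    ALPACA's routing is not reproduced here; purely as an orientation aid, the sketch below shows the general shape of an attention-style capsule agreement step, in which lower capsules whose votes agree with the consensus receive more weight. All names and shapes are illustrative assumptions:

        # Illustrative sketch: one generic attention-style agreement step
        # between capsule layers (not the ALPACA algorithm itself).
        import torch

        def attentive_agreement(votes):
            """votes: (B, n_lower, n_higher, D) predictions from lower capsules."""
            consensus = votes.mean(dim=1, keepdim=True)            # per higher capsule
            scores = (votes * consensus).sum(-1) / votes.shape[-1] ** 0.5
            attn = torch.softmax(scores, dim=1).unsqueeze(-1)      # weight lower capsules
            return (attn * votes).sum(dim=1)                       # (B, n_higher, D)

        higher = attentive_agreement(torch.randn(2, 64, 10, 16))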
  • Conference paper
    Instagram filter removal on fashionable images
    (IEEE, 2021-06) Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
    Social media images are generally transformed by filtering to obtain aesthetically more pleasing appearances. However, CNNs generally fail to interpret both an image and its filtered version as the same in the visual analysis of social media images. We introduce the Instagram Filter Removal Network (IFRNet) to mitigate the effects of image filters for social media analysis applications. To achieve this, we assume that any filter applied to an image substantially injects a piece of additional style information into it, and we consider this problem as a reverse style transfer problem. The visual effects of filtering can be directly removed by adaptively normalizing external style information in each level of the encoder. Experiments demonstrate that IFRNet outperforms all compared methods in quantitative and qualitative comparisons and is able to remove the visual effects to a great extent. Additionally, we present the filter classification performance of our proposed model and analyze dominant color estimation on the images unfiltered by all compared methods.
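    "Adaptively normalizing external style information in each level of the encoder" reads like an AdaIN-style operation; under that assumption, here is a minimal sketch (the statistics shapes and names are illustrative, not IFRNet's code):

        # Illustrative sketch: AdaIN-style normalization injecting external style.
        import torch

        def adaptive_instance_norm(content, style_mean, style_std, eps=1e-5):
            """content: (B, C, H, W); style_mean/style_std: (B, C) external stats."""
            mu = content.mean(dim=(2, 3), keepdim=True)
            sigma = content.std(dim=(2, 3), keepdim=True) + eps
            normalized = (content - mu) / sigma  # strip the content's own statistics
            return normalized * style_std[..., None, None] + style_mean[..., None, None]

        feat = torch.randn(1, 64, 32, 32)
        out = adaptive_instance_norm(feat, torch.zeros(1, 64), torch.ones(1, 64))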
  • Conference paper
    Description-aware fashion image inpainting with convolutional neural networks in coarse-to-fine manner
    (The ACM Digital Library, 2020-04-14) Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
    Inpainting a particular missing region in an image is a challenging vision task, and promising improvements on this task have been achieved with the help of recent developments in vision-related deep learning studies. Although it may have a direct impact on the decisions of AI-based fashion analysis systems, only a limited number of image inpainting studies have been done in the fashion domain so far. In this study, we propose a multi-modal generative deep learning approach for filling the missing parts in fashion images by constraining visual features with textual features extracted from image descriptions. Our model is composed of four main blocks: a textual feature extractor, a coarse image generator guided by textual features, a fine image generator enhancing the coarse output, and global and local discriminators improving the refined outputs. Several experiments conducted on the FashionGen dataset with different combinations of neural network components show that our multi-modal approach is able to generate visually plausible patches to fill the missing parts in the images.
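    The four-block flow can be sketched at the interface level; the module internals below are single-layer stand-ins and the discriminators are omitted, so this only illustrates how text features might condition a coarse-to-fine generator, not the paper's architecture:

        # Illustrative sketch: text-conditioned coarse-to-fine inpainting flow.
        import torch
        import torch.nn as nn

        class CoarseToFineInpainter(nn.Module):
            def __init__(self, text_dim=256):
                super().__init__()
                self.text_encoder = nn.Sequential(nn.LazyLinear(text_dim), nn.ReLU())
                self.coarse = nn.Conv2d(3 + 1 + text_dim, 3, 3, padding=1)  # stand-in
                self.fine = nn.Conv2d(3 + 3, 3, 3, padding=1)               # stand-in

            def forward(self, image, mask, text_feat):
                t = self.text_encoder(text_feat)                             # text features
                t_map = t[..., None, None].expand(-1, -1, *image.shape[2:])  # broadcast
                masked = image * (1 - mask)                                  # hole removed
                coarse = self.coarse(torch.cat([masked, mask, t_map], 1))    # guided fill
                fine = self.fine(torch.cat([coarse, masked], 1))             # refinement
                return masked + fine * mask                                  # composite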
  • Conference paper
    A benchmark for inpainting of clothing images with irregular holes
    (Springer, 2020) Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science
    Fashion image understanding is an active research field with a large number of practical applications for the industry. Despite its practical impact on intelligent fashion analysis systems, clothing image inpainting has not been extensively examined yet. For that matter, we present an extensive benchmark of clothing image inpainting on well-known fashion datasets. Furthermore, we introduce the use of a dilated version of partial convolutions, which efficiently derives the mask update step, and empirically show that the proposed method reduces the number of layers required to form fully-transparent masks. Experiments show that dilated partial convolutions (DPConv) improve the quantitative inpainting performance when compared to the other inpainting strategies; in particular, they perform better when the mask covers 20% or more of the image.
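    The mask update step mentioned above can be sketched: a partial convolution operates only on valid pixels, renormalizes by window coverage, and marks an output pixel valid if any input pixel in its window was valid; dilation widens the window, so masks become fully "transparent" after fewer layers. The code below is a simplified single-channel-mask variant for illustration, not the paper's DPConv implementation:

        # Illustrative sketch: dilated partial convolution with its mask update.
        import torch
        import torch.nn.functional as F

        def dilated_partial_conv(x, mask, weight, dilation=2):
            """x: (B, C, H, W); mask: (B, 1, H, W), 1 = valid; weight: (Cout, C, k, k)."""
            k = weight.shape[-1]
            pad = dilation * (k // 2)
            ones = torch.ones(1, 1, k, k)
            valid = F.conv2d(mask, ones, padding=pad, dilation=dilation)  # coverage count
            out = F.conv2d(x * mask, weight, padding=pad, dilation=dilation)
            out = out * (k * k / valid.clamp(min=1))  # renormalize by window coverage
            new_mask = (valid > 0).float()            # valid if any input pixel was valid
            return out * new_mask, new_mask

        x = torch.randn(1, 3, 64, 64)
        mask = (torch.rand(1, 1, 64, 64) > 0.3).float()
        y, m2 = dilated_partial_conv(x, mask, torch.randn(8, 3, 3, 3))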