Person:
KIRAÇ, Mustafa Furkan

Publication Search Results

Now showing 1 - 10 of 24
  • Article
    Automatically learning usage behavior and generating event sequences for black-box testing of reactive systems
    (The ACM Digital Library, 2019-06) Kıraç, Mustafa Furkan; Aktemur, Tankut Barış; Sözer, Hasan; Gebizli, C. Ş.; Computer Science; KIRAÇ, Mustafa Furkan; AKTEMUR, Tankut Bariş; SÖZER, Hasan
    We propose a novel technique based on recurrent artificial neural networks to generate test cases for black-box testing of reactive systems. We combine functional testing inputs that are automatically generated from a model together with manually-applied test cases for robustness testing. We use this combination to train a long short-term memory (LSTM) network. As a result, the network learns an implicit representation of the usage behavior that is liable to failures. We use this network to generate new event sequences as test cases. We applied our approach in the context of an industrial case study for the black-box testing of a digital TV system. LSTM-generated test cases were able to reveal several faults, including critical ones, that were not detected with existing automated or manual testing activities. Our approach is complementary to model-based and exploratory testing, and the combined approach outperforms random testing in terms of both fault coverage and execution time.
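A minimal sketch of the generation step described above, assuming the trained LSTM has been distilled into a next-event probability table. The table, the remote-control event names, and `generate_sequence` are all illustrative; the paper samples event sequences directly from the trained network rather than from an explicit table.

```python
import random

def generate_sequence(next_event_probs, start, length, rng):
    """Sample an event sequence by repeatedly drawing the next event
    from the learned conditional distribution."""
    seq = [start]
    for _ in range(length - 1):
        events, probs = zip(*next_event_probs[seq[-1]].items())
        seq.append(rng.choices(events, weights=probs, k=1)[0])
    return seq

# Toy stand-in for the learned usage model: P(next event | current event)
model = {
    "POWER": {"MENU": 0.7, "VOL_UP": 0.3},
    "MENU": {"OK": 0.6, "BACK": 0.4},
    "OK": {"MENU": 0.5, "POWER": 0.5},
    "BACK": {"POWER": 1.0},
    "VOL_UP": {"VOL_UP": 0.5, "POWER": 0.5},
}
rng = random.Random(42)  # fixed seed for reproducible test cases
test_case = generate_sequence(model, "POWER", 6, rng)
```

Each sampled sequence can then be replayed against the system under test as a black-box test case.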
  • Article
    Introduction of a spatio-temporal mapping based POE method for outdoor spaces: Suburban university campus as a case study
    (Elsevier, 2018-11) Göçer, Özgür; Göçer, K.; Başol, Altuğ Melik; Kıraç, Mustafa Furkan; Özbil, A.; Bakovic, M.; Siddiqui, Faizan Pervez; Özcan, Barış; Computer Science; Architecture; Mechanical Engineering; BAŞOL, Altuğ Melik; KIRAÇ, Mustafa Furkan; GÖÇER, Özgür; Siddiqui, Faizan Pervez; Özcan, Barış
    Outdoor spaces are important to sustainable cities because they establish a common identity for social life by improving the quality of urban living. The relations between outdoor spaces and building groups, competency, use period, and the interaction of micro-climatic factors need to be investigated through a holistic approach. Unfortunately, the limited and narrowly scoped POE studies on outdoor spaces make overall assessments without establishing causal relations. Other existing studies of outdoor spaces are mostly grouped under headings such as user satisfaction, space syntax and behavioral mapping, and biometeorological assessments. The intention of this paper is to introduce a new post-occupancy evaluation (POE) method that integrates these studies, focusing on various problems in outdoor spaces using spatio-temporal mapping. The comprehensive methodology applied in this research attempts to overcome some of the shortcomings of related studies by conducting a longitudinal study (over a year, as opposed to a few days) and by objectively analyzing the associations between user behavior and physical attributes as well as the configurational properties of the campus layout. With this method, outdoor spaces can be evaluated in the context of the interaction between the physical environment and its users' behavior and activities, level of satisfaction, and perceptions of comfort. The method has been applied on a suburban university campus in İstanbul, Turkey. The main courtyard of the campus served as the subject for map creation and the discussion of results.
  • Conference paper
    Fashion image retrieval with capsule networks
    (IEEE, 2019) Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science; KINLI, Osman Furkan; KIRAÇ, Mustafa Furkan; Özcan, Barış
    In this study, we investigate the in-shop clothing retrieval performance of densely-connected Capsule Networks with dynamic routing. To achieve this, we propose a Triplet-based design of the Capsule Network architecture with two different feature extraction methods. In our design, Stacked-convolutional (SC) and Residual-connected (RC) blocks are used to form the input of the capsule layers. Experimental results show that both of our designs outperform all variants of the baseline study, namely FashionNet, without relying on landmark information. Moreover, when compared to the SOTA architectures on clothing retrieval, our proposed Triplet Capsule Networks achieve comparable recall rates with only half the parameters used in the SOTA architectures.
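The Triplet-based design above rests on the standard triplet margin objective: pull an anchor embedding toward a positive (same garment) and push it away from a negative, up to a margin. A minimal sketch, with illustrative names and a toy margin value rather than the paper's actual training setup:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet margin loss: zero once the negative is
    farther from the anchor than the positive by at least the margin."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)
```

In the paper the embeddings would come from the capsule layers fed by the SC or RC blocks; here they are plain vectors so the objective itself stays visible.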
  • Conference paper
    NTIRE 2022 challenge on night photography rendering
    (IEEE, 2022) Ershov, E.; Kınlı, Osman Furkan; Menteş, Sami; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science; KINLI, Osman Furkan; KIRAÇ, Mustafa Furkan; Menteş, Sami; Özcan, Barış
    This paper reviews the NTIRE 2022 challenge on night photography rendering. The challenge solicited solutions that processed RAW camera images captured in night scenes to produce a photo-finished output image encoded in the standard RGB (sRGB) space. Given the subjective nature of this task, the proposed solutions were evaluated based on the mean opinions of viewers asked to judge the visual appearance of the results. Michael Freeman, a world-renowned photographer, further ranked the solutions with the highest mean opinion scores. A total of 13 teams competed in the final phase of the challenge. The proposed methods provided by the participating teams represent state-of-the-art performance in nighttime photography.
  • Article
    Autotuning runtime specialization for sparse matrix-vector multiplication
    (ACM, 2016-04) Yılmaz, Buse; Aktemur, Tankut Barış; Garzaran, M. J.; Kamin, S.; Kıraç, Mustafa Furkan; Computer Science; AKTEMUR, Tankut Bariş; KIRAÇ, Mustafa Furkan; Yılmaz, Buse
    Runtime specialization is used for optimizing programs based on partial information available only at runtime. In this paper we apply autotuning on runtime specialization of Sparse Matrix-Vector Multiplication to predict a best specialization method among several. In 91% to 96% of the predictions, either the best or the second-best method is chosen. Predictions achieve average speedups that are very close to the speedups achievable when only the best methods are used. By using an efficient code generator and a carefully designed set of matrix features, we show the runtime costs can be amortized to bring performance benefits for many real-world cases.
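The selection step described above can be pictured as: extract cheap features from the sparse matrix, then ask a trained predictor which specialization method to generate code for. The feature set and the rule-based predictor below are illustrative stand-ins, not the paper's actual model or method names:

```python
def matrix_features(rows, cols, nnz):
    """Cheap structural features of a sparse matrix, computable at runtime."""
    return {"density": nnz / (rows * cols), "nnz_per_row": nnz / rows}

def predict_method(features):
    """Toy predictor choosing a specialization method from matrix features."""
    if features["nnz_per_row"] > 32:
        return "unfold-row"    # long rows: unroll the per-row inner loop
    if features["density"] > 0.01:
        return "stencil"       # dense-ish structure: pattern-specialized code
    return "csr-baseline"      # otherwise fall back to generic CSR SpMV
```

The point of autotuning is that the predictor is trained offline from measured speedups, so the runtime pays only for feature extraction and one table/model lookup before generating specialized code.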
  • Article
    VISOR: A fast image processing pipeline with scaling and translation invariance for test oracle automation of visual output systems
    (The ACM Digital Library, 2018-02) Kıraç, Mustafa Furkan; Aktemur, Tankut Barış; Sözer, Hasan; Computer Science; KIRAÇ, Mustafa Furkan; AKTEMUR, Tankut Bariş; SÖZER, Hasan
    Highlights: a test oracle automation approach is proposed for systems that produce visual output; root causes of accuracy issues are analyzed for test oracles based on image comparison; image processing techniques are employed to improve the accuracy of test oracles; a fast image processing pipeline is developed as an automated test oracle; an industrial case study is performed for automated regression testing of Digital TVs. Test oracles differentiate between correct and incorrect system behavior. Hence, test oracle automation is essential to achieve overall test automation; otherwise, testers have to manually check the system behavior for all test cases. A common test oracle automation approach for testing systems with visual output is based on exact matching between a snapshot of the observed output and a previously taken reference image. However, images can be subject to scaling and translation variations. These variations lead to a high number of false positives, where an error is reported due to a mismatch between the compared images although no error exists. To address this problem, we introduce an automated test oracle, named VISOR, that employs a fast image processing pipeline. This pipeline includes a series of image filters that align the compared images and remove noise to eliminate differences caused by scaling and translation. We evaluated our approach in the context of an industrial case study for regression testing of Digital TVs. Results show that VISOR can avoid 90% of false positive cases after training the system for 4 hours. Following this one-time training, VISOR can compare thousands of image pairs within seconds on a laptop computer.
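The core idea of tolerating translation can be sketched in a few lines: instead of exact matching, search small x/y shifts of the snapshot against the reference and accept if any alignment is close enough. This is a self-contained toy on grayscale grids, not VISOR's actual filter pipeline (which also handles scaling and noise):

```python
def shifted_diff(ref, img, dx, dy):
    """Mean absolute difference between ref and img shifted by (dx, dy),
    averaged over the overlapping region only."""
    h, w = len(ref), len(ref[0])
    total = n = 0
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                total += abs(ref[y][x] - img[sy][sx])
                n += 1
    return total / n if n else float("inf")

def images_match(ref, img, max_shift=2, tol=1.0):
    """Accept the snapshot if some small shift aligns it with the reference."""
    return any(
        shifted_diff(ref, img, dx, dy) <= tol
        for dy in range(-max_shift, max_shift + 1)
        for dx in range(-max_shift, max_shift + 1)
    )
```

An oracle built this way stops reporting a failure merely because the rendered output drifted by a pixel or two, which is exactly the false-positive class the abstract describes.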
  • Conference paper
    Patch-wise contrastive style learning for instagram filter removal
    (IEEE, 2022) Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science; KINLI, Osman Furkan; KIRAÇ, Mustafa Furkan; Özcan, Barış
    Image-level corruptions and perturbations degrade the performance of CNNs on different downstream vision tasks. Social media filters are one of the most common sources of various corruptions and perturbations for real-world visual analysis applications. The negative effects of these distracting factors can be alleviated by recovering the original images with their pure style for the inference of the downstream vision tasks. Assuming these filters substantially inject a piece of additional style information into the social media images, we can formulate the problem of recovering the original versions as a reverse style transfer problem. We introduce the Contrastive Instagram Filter Removal Network (CIFR), which enhances this idea for Instagram filter removal by employing a novel multi-layer patch-wise contrastive style learning mechanism. Experiments show that our proposed strategy produces better qualitative and quantitative results than the previous studies. Moreover, we present the results of additional experiments for the proposed architecture in different settings. Finally, we present the inference outputs and a quantitative comparison of filtered and recovered images on localization and segmentation tasks to support the main motivation for this problem.
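A patch-wise contrastive objective of the kind named above can be sketched as a softmax over patch similarities: a query patch of the output should score high against the corresponding original patch (the positive) and low against other patches (negatives). The tiny vectors and function names below are illustrative; the actual method contrasts deep feature-map patches across multiple layers:

```python
import math

def dot(a, b):
    """Inner product of two patch feature vectors."""
    return sum(x * y for x, y in zip(a, b))

def patch_contrastive_loss(query, positive, negatives, tau=1.0):
    """InfoNCE-style loss: cross-entropy of picking the positive patch
    among positive + negatives, with temperature tau."""
    logits = [dot(query, positive) / tau] + [dot(query, n) / tau for n in negatives]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

Minimizing this per patch pushes the recovered image's local style toward the unfiltered original, which is the "reverse style transfer" framing of the abstract.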
  • Article
    Illumination-guided inverse rendering benchmark: Learning real objects with few cameras
    (Elsevier, 2023-10) Yılmaz, Doğa; Kıraç, Mustafa Furkan; Computer Science; KIRAÇ, Mustafa Furkan; Yılmaz, Doğa
    The realm of 3D computer vision and graphics has experienced exponential growth recently, enabling the creation of realistic virtual environments and digital representations of real-world objects. Central to this progression are 3D reconstruction methods that facilitate the virtualization of shape, color, and surface details of real objects. Current methods predominantly employ neural scene representations, which despite their efficacy, grapple with limitations such as necessitating a high number of captured images and the complexity of transforming these representations into explicit geometric forms. An alternative strategy that has gained traction is the deployment of methods such as physically-based differentiable rendering (PBDR) and inverse rendering. These approaches require fewer viewpoints, yield explicit format results, and ensure a smoother transition to other representation methods. To meaningfully assess the performance of different 3D reconstruction methods, it is imperative to utilize benchmark scenes for comparison. Despite the existence of standard objects and scenes within the literature, there is a noticeable deficiency in real-world benchmark data that concurrently captures camera, illumination, and scene parameters — all critical to high-fidelity 3D reconstructions using PBDR and inverse rendering-based methods. In this research, we introduce a methodology for capturing real-world scenes as virtual scenes, integrating illumination parameters alongside camera and scene parameters to enhance the veracity of virtual representations. In addition, we introduce a set of ten real-world scenes, along with their virtual counterparts, designed as benchmarks. These benchmarks encompass a fundamental variety of geometric constructs, including convex, concave, plain, and mixed surfaces. Additionally, we demonstrate the 3D reconstruction results of state-of-the-art 3D reconstruction methods employing PBDR in real-world scenes, using both established methodologies and our proposed one.
  • Conference paper
    Modeling the lighting in scenes as style for auto white-balance correction
    (IEEE, 2023) Kınlı, Osman Furkan; Yılmaz, Doğa; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science; KINLI, Osman Furkan; KIRAÇ, Mustafa Furkan; Yılmaz, Doğa; Özcan, Barış
    Style may refer to different concepts (e.g. painting style, hairstyle, texture, color, filter, etc.) depending on how the feature space is formed. In this work, we propose the novel idea of interpreting the lighting in single- and multi-illuminant scenes as the concept of style. To verify this idea, we introduce an enhanced auto white-balance (AWB) method that models the lighting in single- and mixed-illuminant scenes as the style factor. Our AWB method does not require any illumination estimation step, yet contains a network that learns to generate the weighting maps of the images with different WB settings. The proposed network utilizes style information extracted from the scene by a multi-head style extraction module. AWB correction is completed by blending these weighting maps and the scene. Experiments on single- and mixed-illuminant datasets demonstrate that our proposed method achieves promising correction results when compared to recent works. This shows that the lighting in scenes with multiple illuminants can be modeled by the concept of style. Source code and trained models are available at https://github.com/birdortyedi/lighting-as-style-awb-correction.
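The final blending step described above amounts to a per-pixel convex combination: the corrected image is a weighted sum of renderings of the scene under different WB presets, with the weights given by the predicted maps. A minimal sketch on single-channel list-of-lists "images" (the weight maps are hand-made here; the paper predicts them with the style-aware network):

```python
def blend(renderings, weight_maps):
    """Per-pixel weighted sum of WB-preset renderings.
    renderings[i][y][x] and weight_maps[i][y][x] are floats;
    weights at each pixel are assumed to sum to 1."""
    h, w = len(renderings[0]), len(renderings[0][0])
    out = [[0.0] * w for _ in range(h)]
    for img, wmap in zip(renderings, weight_maps):
        for y in range(h):
            for x in range(w):
                out[y][x] += wmap[y][x] * img[y][x]
    return out
```

Because the weights vary per pixel, a mixed-illuminant scene can take most of its correction from one WB preset in one region and from another preset elsewhere, which is what makes the method work without an explicit illumination estimation step.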
  • Conference paper
    NTIRE 2023 challenge on night photography rendering
    (IEEE, 2023) Shutova, A.; Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science; KINLI, Osman Furkan; KIRAÇ, Mustafa Furkan; Özcan, Barış
    This paper presents a review of the NTIRE 2023 challenge on night photography rendering. The goal of the challenge was to find solutions that process raw camera images taken in nighttime conditions and thereby produce photo-quality output images in the standard RGB (sRGB) space. Unlike the previous year's competition, participants were not provided with a large training dataset for the target sensor. Instead, this time they were given images of a color checker illuminated by a known light source. To evaluate the results, a sufficient number of viewers were asked to assess the visual quality of the proposed solutions, considering the subjective nature of the task. The highest-ranking solutions were further ranked by Richard Collins, a renowned photographer. The top-ranking participants' solutions effectively represent the state of the art in nighttime photography rendering.