Person:
KIRAÇ, Mustafa Furkan

First Name: Mustafa Furkan
Last Name: KIRAÇ
Publication Search Results

Now showing 1 - 10 of 24
  • Article Publication (Open Access)
    Üniversite dış mekânları için zaman-mekânsal haritalama yöntemine dayanan bir kullanım sonrası değerlendirme modeli [A post-occupancy evaluation model for university outdoor spaces based on a spatio-temporal mapping method]
    (Yildiz Technical University Faculty Of Architecture, 2020) Göçer, Ö.; Göçer, K.; Başol, Altuğ Melik; Kıraç, Mustafa Furkan; Torun, A. O.; Bakovic, M.; Siddiqui, F. P.; Özcan, Barış; Computer Science; Mechanical Engineering; BAŞOL, Altuğ Melik; KIRAÇ, Mustafa Furkan; Özcan, Barış
    University campuses consist not only of various social and educational buildings; together with their outdoor spaces and furnishings, recreation areas, and landscaped grounds, they form an integrated whole. Outdoor spaces offer campus users the potential for social interaction, rest and relaxation, recreation, exchanging ideas, and building a strong sense of ownership and belonging. The most important function of outdoor spaces is to create a shared identity for social life by enabling people to communicate and socialize with one another. However, no matter how rationally outdoor spaces are designed, in practice they may be used in unexpected ways. To identify the gap between expectation and practice, the relationship between outdoor spaces and building groups, their adequacy, duration of use, accessibility, and the interaction of the physical environment must be examined with a holistic approach. Post-occupancy evaluation (POE) is proposed as one of the best methods for assessing whether human use of outdoor spaces and the design intent have been successful. Unfortunately, although studies on POE methods for evaluating indoor spaces are steadily increasing, no comprehensive POE study for evaluating outdoor spaces, particularly one applicable to university campuses, is found in the literature. This study introduces a POE method that holistically addresses the variables affecting outdoor space use and the interactions among these variables. The proposed method was applied on a suburban university campus, and the results were evaluated in terms of outdoor space use value.
  • Article Publication
    Generalization to unseen viewpoint images of objects via alleviated pose attentive capsule agreement
    (Springer, 2023-02) Özcan, Barış; Kınlı, Osman Furkan; Kıraç, Mustafa Furkan; Computer Science; KINLI, Osman Furkan; KIRAÇ, Mustafa Furkan; Özcan, Barış
    Despite their achievements in object recognition, Convolutional Neural Networks (CNNs) particularly fail to generalize to unseen viewpoints of a learned object even with substantial samples. On the other hand, recently emerged capsule networks outperform CNNs in novel viewpoint generalization tasks even with significantly fewer parameters. Capsule networks group the neuron activations for representing higher-level attributes and their interactions for achieving equivariance to visual transformations. However, capsule networks have a high computational cost for learning the interactions of capsules in consecutive layers via the so-called routing algorithm. To address these issues, we propose a novel routing algorithm, Alleviated Pose Attentive Capsule Agreement (ALPACA), which is tailored for capsules that contain pose, feature, and existence probability information together to enhance novel viewpoint generalization of capsules on 2D images. For this purpose, we have created a Novel ViewPoint Dataset (NVPD), a viewpoint-controlled, texture-free dataset that has 8 different setups where training and test samples are formed by different viewpoints. In addition to NVPD, we have conducted experiments on the iLab2M dataset, where the dataset is split in terms of the object instances. Experimental results show that ALPACA outperforms its capsule network counterparts and state-of-the-art CNNs on the iLab2M and NVPD datasets. Moreover, ALPACA is 10 times faster than routing-based capsule networks. It also outperforms attention-based routing algorithms of the domain while keeping the inference and training times comparable. Lastly, our code, the NVPD dataset, test setups, and implemented models are freely available at https://github.com/Boazrciasn/ALPACA.
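    The published ALPACA routing is more involved than the abstract can convey; purely as intuition for attention-style capsule agreement, one weighting step can be sketched as follows (this is a generic single-step sketch, not the authors' algorithm; `attentive_agreement` and the toy 2-D poses are invented for illustration):

    ```python
    import math

    def softmax(xs):
        m = max(xs)
        es = [math.exp(x - m) for x in xs]
        s = sum(es)
        return [e / s for e in es]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def attentive_agreement(child_poses, query):
        """One attention-style agreement step: children whose pose vectors
        align with the parent query get larger routing weights, and the
        parent pose is their weighted average."""
        scores = [dot(p, query) for p in child_poses]
        weights = softmax(scores)
        dim = len(child_poses[0])
        parent = [sum(w * p[i] for w, p in zip(weights, child_poses))
                  for i in range(dim)]
        return parent, weights

    # Two children agree on a direction, one dissents; the agreeing pair dominates.
    children = [[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]]
    parent, w = attentive_agreement(children, query=[1.0, 0.0])
    ```

    A single pass like this replaces the iterative agreement loop of classical routing, which is where the speed-up of attention-based routing comes from.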
  • Article Publication
    Illumination-guided inverse rendering benchmark: Learning real objects with few cameras
    (Elsevier, 2023-10) Yılmaz, Doğa; Kıraç, Mustafa Furkan; Computer Science; KIRAÇ, Mustafa Furkan; Yılmaz, Doğa
    The realm of 3D computer vision and graphics has experienced exponential growth recently, enabling the creation of realistic virtual environments and digital representations of real-world objects. Central to this progression are 3D reconstruction methods that facilitate the virtualization of shape, color, and surface details of real objects. Current methods predominantly employ neural scene representations, which, despite their efficacy, grapple with limitations such as requiring a high number of captured images and the complexity of transforming these representations into explicit geometric forms. An alternative strategy that has gained traction is the deployment of methods such as physically-based differentiable rendering (PBDR) and inverse rendering. These approaches require fewer viewpoints, yield results in explicit formats, and ensure a smoother transition to other representation methods. To meaningfully assess the performance of different 3D reconstruction methods, it is imperative to use benchmark scenes for comparison. Despite the existence of standard objects and scenes in the literature, there is a noticeable deficiency in real-world benchmark data that concurrently captures camera, illumination, and scene parameters, all critical to high-fidelity 3D reconstructions using PBDR and inverse rendering-based methods. In this research, we introduce a methodology for capturing real-world scenes as virtual scenes, integrating illumination parameters alongside camera and scene parameters to enhance the veracity of virtual representations. In addition, we introduce a set of ten real-world scenes, along with their virtual counterparts, designed as benchmarks. These benchmarks encompass a fundamental variety of geometric constructs, including convex, concave, plain, and mixed surfaces. Additionally, we demonstrate the 3D reconstruction results of state-of-the-art 3D reconstruction methods employing PBDR in real-world scenes, using both established methodologies and our proposed one.
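    The optimization loop behind inverse rendering can be illustrated with a one-parameter toy: render a scene from a guessed parameter, compare against the captured image, and descend the gradient of the error. Everything here (`render`, the scalar light term, the single albedo value) is an assumption of this sketch, not the paper's pipeline:

    ```python
    def render(albedo, light=2.0):
        # Toy "renderer": pixel intensity is albedo times a known light term.
        return albedo * light

    def recover_albedo(target, steps=200, lr=0.05):
        """Recover the albedo by gradient descent on the squared image error,
        the basic loop behind physically-based differentiable rendering."""
        a = 0.0  # initial guess
        for _ in range(steps):
            residual = render(a) - target
            grad = 2.0 * residual * 2.0  # d/da of (2a - target)^2
            a -= lr * grad
        return a

    target_pixel = render(0.7)           # "captured" pixel of an object with albedo 0.7
    estimate = recover_albedo(target_pixel)
    ```

    Capturing illumination alongside camera parameters, as the benchmark does, matters precisely because the light term above must be known for the recovered parameter to be meaningful.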
  • Article Publication
    VISOR: A fast image processing pipeline with scaling and translation invariance for test oracle automation of visual output systems
    (The ACM Digital Library, 2018-02) Kıraç, Mustafa Furkan; Aktemur, Tankut Barış; Sözer, Hasan; Computer Science; KIRAÇ, Mustafa Furkan; AKTEMUR, Tankut Bariş; SÖZER, Hasan
    Highlights: a test oracle automation approach is proposed for systems that produce visual output; root causes of accuracy issues are analyzed for test oracles based on image comparison; image processing techniques are employed to improve the accuracy of test oracles; a fast image processing pipeline is developed as an automated test oracle; an industrial case study is performed for automated regression testing of digital TVs. Test oracles differentiate between correct and incorrect system behavior. Hence, test oracle automation is essential to achieve overall test automation; otherwise, testers have to manually check the system behavior for all test cases. A common test oracle automation approach for testing systems with visual output is based on exact matching between a snapshot of the observed output and a previously taken reference image. However, images can be subject to scaling and translation variations. These variations lead to a high number of false positives, where an error is reported due to a mismatch between the compared images although no error exists. To address this problem, we introduce an automated test oracle, named VISOR, that employs a fast image processing pipeline. This pipeline includes a series of image filters that align the compared images and remove noise to eliminate differences caused by scaling and translation. We evaluated our approach in the context of an industrial case study for regression testing of digital TVs. Results show that VISOR can avoid 90% of false positive cases after training the system for 4 hours. Following this one-time training, VISOR can compare thousands of image pairs within seconds on a laptop computer.
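    The individual VISOR filters are not spelled out in the abstract; as an illustrative stand-in for the translation-alignment idea, a snapshot can be registered onto its reference by searching for the shift that minimizes pixel difference (function names, the tiny images, and the exhaustive search are assumptions of this sketch; a production pipeline would use faster correlation-based registration):

    ```python
    def shift(img, dx, dy, fill=0):
        """Translate a 2D image by (dx, dy), padding exposed borders with fill."""
        h, w = len(img), len(img[0])
        return [[img[y - dy][x - dx] if 0 <= y - dy < h and 0 <= x - dx < w else fill
                 for x in range(w)] for y in range(h)]

    def diff(a, b):
        """Sum of absolute pixel differences between two same-sized images."""
        return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

    def align_translation(snapshot, reference, max_shift=2):
        """Find the (dx, dy) that best registers the snapshot onto the
        reference by exhaustive search over small shifts."""
        return min(((dx, dy) for dx in range(-max_shift, max_shift + 1)
                             for dy in range(-max_shift, max_shift + 1)),
                   key=lambda s: diff(shift(snapshot, *s), reference))

    ref = [[0, 0, 0, 0],
           [0, 9, 9, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
    snap = shift(ref, 1, 1)           # the same output captured one pixel off
    dx, dy = align_translation(snap, ref)
    ```

    After such an alignment step, exact or near-exact matching no longer reports a false positive for a merely shifted snapshot.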
  • Article Publication
    Automatically learning usage behavior and generating event sequences for black-box testing of reactive systems
    (The ACM Digital Library, 2019-06) Kıraç, Mustafa Furkan; Aktemur, Tankut Barış; Sözer, Hasan; Gebizli, C. Ş.; Computer Science; KIRAÇ, Mustafa Furkan; AKTEMUR, Tankut Bariş; SÖZER, Hasan
    We propose a novel technique based on recurrent artificial neural networks to generate test cases for black-box testing of reactive systems. We combine functional testing inputs that are automatically generated from a model together with manually-applied test cases for robustness testing. We use this combination to train a long short-term memory (LSTM) network. As a result, the network learns an implicit representation of the usage behavior that is liable to failures. We use this network to generate new event sequences as test cases. We applied our approach in the context of an industrial case study for the black-box testing of a digital TV system. LSTM-generated test cases were able to reveal several faults, including critical ones, that were not detected with existing automated or manual testing activities. Our approach is complementary to model-based and exploratory testing, and the combined approach outperforms random testing in terms of both fault coverage and execution time.
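    An LSTM needs a training framework to demonstrate; as a far simpler stand-in that conveys the same learn-usage-then-generate idea, a bigram Markov chain over event sequences can be sketched (this is explicitly not the paper's model, and the TV event names below are invented):

    ```python
    import random
    from collections import defaultdict

    def learn_transitions(sequences):
        """Count event bigrams from observed usage sessions."""
        counts = defaultdict(lambda: defaultdict(int))
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                counts[a][b] += 1
        return counts

    def generate(counts, start, length, rng):
        """Sample a new event sequence (a candidate test case) from the
        learned transition counts."""
        seq = [start]
        for _ in range(length - 1):
            nexts = counts.get(seq[-1])
            if not nexts:
                break  # no observed successor; end the sequence
            events, weights = zip(*nexts.items())
            seq.append(rng.choices(events, weights=weights)[0])
        return seq

    sessions = [["menu", "volume_up", "volume_up", "ok"],
                ["menu", "channel_up", "ok"]]
    model = learn_transitions(sessions)
    case = generate(model, "menu", 5, random.Random(0))
    ```

    An LSTM plays the same role with longer memory: instead of conditioning only on the previous event, it conditions on the whole prefix, which is what lets it capture the failure-prone usage patterns described above.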
  • Article Publication (Open Access)
    Image denoising using deep convolutional autoencoder with feature pyramids
    (TÜBİTAK, 2020) Çetinkaya, Ekrem; Kıraç, Mustafa Furkan; Computer Science; KIRAÇ, Mustafa Furkan; Çetinkaya, Ekrem
    Image denoising is one of the fundamental problems in the image processing field, since it is the preliminary step for many computer vision applications. Various approaches have been used for image denoising over the years, from spatial filtering to model-based approaches. Having outperformed all traditional methods, neural-network-based discriminative methods have gained popularity in recent years. However, most of these methods still struggle to achieve flexibility against various noise levels and types. In this paper, a deep convolutional autoencoder combined with a variant of feature pyramid network is proposed for image denoising. Simulated data generated by Blender software, along with corrupted natural images, are used during training to improve robustness against various noise levels. Experimental results show that the proposed method can achieve competitive performance in blind Gaussian denoising with significantly less training time than state-of-the-art methods require. Extensive experiments showed that the proposed method gives promising performance across a wide range of noise levels with a single network.
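    The convolutional architecture is in the paper; purely to illustrate the noisy-input/clean-target objective such a network is trained with, the idea can be collapsed to a single learned shrinkage weight (an assumption of this sketch, bearing no resemblance to the actual autoencoder):

    ```python
    import random

    def mse(w, noisy, clean):
        """Mean squared error of the denoised output w * noisy against clean."""
        return sum((w * x - s) ** 2 for x, s in zip(noisy, clean)) / len(clean)

    def train_shrinkage(noisy, clean, steps=500, lr=0.01):
        """Fit one weight w so that w * noisy approximates clean: the same
        corrupted-input / clean-target objective a denoising autoencoder
        minimizes, shrunk to a single parameter so the loop stays readable."""
        w = 1.0
        for _ in range(steps):
            grad = sum(2.0 * (w * x - s) * x for x, s in zip(noisy, clean)) / len(clean)
            w -= lr * grad
        return w

    rng = random.Random(0)
    clean = [1.0, -2.0, 0.5, 3.0]
    noisy = [s + rng.gauss(0.0, 0.5) for s in clean]   # simulated corruption
    w = train_shrinkage(noisy, clean)
    ```

    Pairing simulated corruptions (here Gaussian noise, in the paper Blender-rendered data) with clean targets is what lets a single trained model remain useful across noise levels.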
  • Article Publication
    Hierarchically constrained 3D hand pose estimation using regression forests from single frame depth data
    (Elsevier, 2014-12-01) Kıraç, Mustafa Furkan; Kara, Y. E.; Akarun, L.; Computer Science; KIRAÇ, Mustafa Furkan
    The emergence of inexpensive 2.5D depth cameras has enabled the extraction of the articulated human body pose. However, human hand skeleton extraction remains a challenging problem, since the hand contains as many joints as the human body model. The small size of the hand also makes the problem more challenging due to the resolution limits of depth cameras. Moreover, hand poses suffer from self-occlusion, which is considerably less likely in a body pose. This paper describes a scheme for extracting the hand skeleton using random regression forests in real time that is robust to self-occlusion and the low resolution of the depth camera. In addition, the proposed algorithm can estimate the joint positions even if all of the pixels related to a joint are out of the camera frame. The performance of the new method is compared to the random-classification-forests-based method in the literature. Moreover, the performance of the joint estimation is further improved using a novel hierarchical mode selection algorithm that makes use of constraints imposed by the skeleton geometry. The performance of the proposed algorithm is tested on datasets containing synthetic and real data, where self-occlusion is frequently encountered. The new algorithm, which runs in real time using a single depth image, is shown to outperform previous methods.
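    In regression-forest pose estimation, the forest predicts a per-pixel offset toward each joint and the joint location is aggregated from the resulting votes. A toy sketch of just that voting step (the pixel coordinates and offsets below are made up, and a simple mean stands in for the paper's hierarchical mode selection):

    ```python
    def estimate_joint(pixels, offsets):
        """Each pixel casts a vote (pixel + predicted offset) for the joint
        location; the estimate aggregates the votes with a mean. In the
        actual method, regression forests predict these offsets from depth
        features; here the offsets are simply given."""
        votes = [(px + ox, py + oy) for (px, py), (ox, oy) in zip(pixels, offsets)]
        n = len(votes)
        return (sum(v[0] for v in votes) / n, sum(v[1] for v in votes) / n)

    # Three hand pixels, each predicting the vector toward a fingertip at (5, 5).
    pixels  = [(2, 3), (4, 1), (6, 6)]
    offsets = [(3, 2), (1, 4), (-1, -1)]
    joint = estimate_joint(pixels, offsets)
    ```

    Because votes come from every visible pixel rather than from the joint's own pixels, the estimate can survive self-occlusion and even joints that fall outside the frame, as the abstract notes.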
  • Article Publication
    ADVISOR: An adjustable framework for test oracle automation of visual output systems
    (IEEE, 2020-09) Genç, A. E.; Sözer, Hasan; Kıraç, Mustafa Furkan; Aktemur, Tankut Barış; Computer Science; SÖZER, Hasan; KIRAÇ, Mustafa Furkan; AKTEMUR, Tankut Bariş
    Test oracles differentiate between correct and incorrect system behavior. Automation of test oracles for visual output systems mainly involves image comparison, where a snapshot of the output is compared against a reference image. Hereby, the captured snapshot can be subject to variations such as scaling and shifting. These variations lead to incorrect evaluations. Existing approaches employ computer vision techniques to address a specific set of variations. In this article, we introduce ADVISOR, an adjustable framework for test oracle automation of visual output systems. It allows the use of a flexible combination and configuration of computer vision techniques. We evaluated a set of valid configurations with a benchmark dataset collected during the tests of commercial digital TV systems. Some of these configurations achieved up to 3% better overall accuracy than state-of-the-art tools. Further, we observed that no single configuration reaches the best accuracy for all types of image variations. We also empirically investigated the impact of significant parameters. One of them is a threshold on the image matching score that determines the final verdict. This parameter is automatically tuned by offline training. We evaluated runtime performance as well. Results showed that differences among the ADVISOR configurations and state-of-the-art tools are on the order of seconds per image comparison.
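    Offline tuning of the match-score threshold can be pictured as a small search over labeled training pairs; a minimal sketch of that idea (exhaustive search over observed scores, with invented scores and labels; ADVISOR's actual tuning procedure is not specified in the abstract):

    ```python
    def tune_threshold(scores, labels):
        """Offline tuning of the match-score threshold: try each observed
        score as the cutoff and keep the one with the highest accuracy on
        the labeled training pairs."""
        def accuracy(t):
            return sum((s >= t) == lab for s, lab in zip(scores, labels)) / len(scores)
        return max(scores, key=accuracy)

    # Match scores for training image pairs, labeled True when the pair matches.
    scores = [0.95, 0.90, 0.72, 0.40, 0.35]
    labels = [True, True, True, False, False]
    t = tune_threshold(scores, labels)
    ```

    At test time, a snapshot is judged a pass whenever its matching score reaches the tuned threshold, which is the "final verdict" parameter the abstract refers to.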
  • Conference Object Publication
    Modeling the lighting in scenes as style for auto white-balance correction
    (IEEE, 2023) Kınlı, Osman Furkan; Yılmaz, Doğa; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science; KINLI, Osman Furkan; KIRAÇ, Mustafa Furkan; Yılmaz, Doğa; Özcan, Barış
    Style may refer to different concepts (e.g. painting style, hairstyle, texture, color, filter, etc.) depending on how the feature space is formed. In this work, we propose a novel idea of interpreting the lighting in single- and multi-illuminant scenes as the concept of style. To verify this idea, we introduce an enhanced auto white-balance (AWB) method that models the lighting in single- and mixed-illuminant scenes as the style factor. Our AWB method does not require any illumination estimation step, yet contains a network that learns to generate the weighting maps of the images with different WB settings. The proposed network utilizes the style information extracted from the scene by a multi-head style extraction module. AWB correction is completed after blending these weighting maps and the scene. Experiments on single- and mixed-illuminant datasets demonstrate that our proposed method achieves promising correction results compared to recent works. This shows that the lighting in scenes with multiple illuminations can be modeled by the concept of style. Source code and trained models are available at https://github.com/birdortyedi/lighting-as-style-awb-correction.
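    The final correction step, blending renderings under different WB settings with network-predicted weighting maps, reduces to a per-pixel weighted sum. A tiny sketch with made-up 1x2 images and maps (in the method itself, the maps are produced by the learned network, not written by hand):

    ```python
    def blend(renderings, weight_maps):
        """Per-pixel blend: each WB rendering contributes according to its
        weighting map; the maps are assumed to sum to one at every pixel."""
        h, w = len(renderings[0]), len(renderings[0][0])
        return [[sum(r[y][x] * m[y][x] for r, m in zip(renderings, weight_maps))
                 for x in range(w)] for y in range(h)]

    # Two 1x2 "images" rendered with different WB presets (e.g. tungsten, daylight).
    tungsten = [[0.8, 0.2]]
    daylight = [[0.4, 0.6]]
    # Hypothetical network output: left pixel trusts tungsten, right trusts daylight.
    maps = ([[1.0, 0.25]], [[0.0, 0.75]])
    out = blend([tungsten, daylight], maps)
    ```

    Because the maps vary per pixel, a single output can apply different effective white balances to differently lit regions, which is what makes the approach work in mixed-illuminant scenes.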
  • Conference Object Publication
    NTIRE 2023 challenge on night photography rendering
    (IEEE, 2023) Shutova, A.; Kınlı, Osman Furkan; Özcan, Barış; Kıraç, Mustafa Furkan; Computer Science; KINLI, Osman Furkan; KIRAÇ, Mustafa Furkan; Özcan, Barış
    This paper presents a review of the NTIRE 2023 challenge on night photography rendering. The goal of the challenge was to find solutions that process raw camera images taken in nighttime conditions and thereby produce photo-quality output images in the standard RGB (sRGB) space. Unlike the previous year's competition, participants were not provided with a large training dataset for the target sensor. Instead, this time they were given images of a color checker illuminated by a known light source. To evaluate the results, a sufficient number of viewers were asked to assess the visual quality of the proposed solutions, considering the subjective nature of the task. The highest-ranking solutions were further ranked by Richard Collins, a renowned photographer. The top-ranking participants' solutions effectively represent the state of the art in nighttime photography rendering.