Person: SÖZER, Hasan


First Name: Hasan
Last Name: SÖZER

Publication Search Results

Now showing 1 - 10 of 82
  • Article Publication
    Automated refinement of models for model-based testing using exploratory testing
    (Springer International Publishing, 2017-09) Şahin Gebizli, Ceren; Sözer, Hasan; Computer Science; SÖZER, Hasan; Şahin Gebizli, Ceren
    Model-based testing relies on models of the system under test to automatically generate test cases. Consequently, the effectiveness of the generated test cases depends on these models. In general, the models are created manually and, as such, they are subject to errors such as the omission of certain system usage behavior. Omitted behaviors are missing from the generated test cases as well. In practice, such faults are usually detected with exploratory testing. However, exploratory testing mainly relies on the knowledge and manual activities of experienced test engineers. In this paper, we introduce an approach and a toolset, ARME, for automatically refining system models based on the recorded testing activities of these engineers. ARME compares the recorded execution traces against the possible execution paths in the test models. The models are then automatically refined to incorporate any omitted system behavior and to update model parameters so that they focus on the most frequently executed scenarios. The refined models can be used for generating more effective test cases. We applied our approach in the context of three industrial case studies to improve the models for model-based testing of a digital TV system. In all of these case studies, several critical faults were detected after generating test cases based on the refined models. These faults were not detected by the initial set of test cases, and they were also missed during the exploratory testing activities.
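The refinement idea can be illustrated with a small sketch. The following Python example is illustrative only (it is not the ARME toolset); it assumes a test model kept as per-state transition counts, merges recorded traces into it so that omitted behavior appears as new transitions, and renormalizes the counts into the transition probabilities used for test case generation. The state names are hypothetical.

```python
# Minimal sketch of trace-driven model refinement (not the actual ARME tool).
# A test model is kept as {state: {next_state: count}}; probabilities are
# derived from counts, so frequently executed scenarios get higher weight.
from collections import defaultdict

def refine_model(model, traces):
    """Add transitions observed in recorded traces and update counts."""
    counts = defaultdict(lambda: defaultdict(int))
    # Start from the existing model so unexecuted behavior is preserved.
    for state, successors in model.items():
        for nxt, c in successors.items():
            counts[state][nxt] += c
    # Incorporate every recorded step, including previously omitted behavior.
    for trace in traces:
        for src, dst in zip(trace, trace[1:]):
            counts[src][dst] += 1
    return {s: dict(succ) for s, succ in counts.items()}

def transition_probabilities(model):
    """Normalize counts into the probabilities used for test generation."""
    probs = {}
    for state, successors in model.items():
        total = sum(successors.values())
        probs[state] = {nxt: c / total for nxt, c in successors.items()}
    return probs

# Example: a recorded trace exercises a transition missing from the model.
initial = {"Home": {"Menu": 3}, "Menu": {"Settings": 1, "Home": 2}}
traces = [["Home", "Menu", "ChannelList", "Home"]]   # "ChannelList" was omitted
refined = refine_model(initial, traces)
print(transition_probabilities(refined))
```

Keeping raw counts rather than probabilities makes the merge trivial: observed scenarios simply accumulate weight, and normalization is deferred to generation time.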
  • Article Publication
    Using artificial neural networks to provide guidance in extending PL/SQL programs
    (Springer, 2022-12) Ersoy, E.; Sözer, Hasan; Computer Science; SÖZER, Hasan
    Extending legacy systems with new objects for contemporary functionality or technology can lead to architecture erosion. Misplacement of these objects gradually degrades the modular structure, whose documentation is usually missing or outdated. In this work, we aim to address this problem for PL/SQL programs, which are highly coupled with databases. We propose a novel approach that employs artificial neural networks to automatically predict the correct placement of a new object among architectural modules. We train a network based on features extracted from the initial version of the source code, which is assumed to represent the intended architecture. We use dependencies among the software and database objects as features for this training. Then, given a new object and the list of other objects it uses, the network can predict the architectural module in which the object should be placed. We performed two industrial case studies with applications from the telecommunications domain, each of which involves thousands of procedures and database tables. We showed that the accuracy of our approach is 86.7% and 89% for these two applications. The baseline approach that uses coupling and cohesion metrics reaches 55.5% and 57.4% accuracy for the same applications, respectively.
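As an illustration of the placement-prediction idea (not the network or features used in the paper), the sketch below trains a small feed-forward classifier on hypothetical dependency sets, with module names as labels; the object names, modules, and hyperparameters are all assumptions.

```python
# Minimal sketch, assuming each object is described by the set of procedures
# and tables it depends on, and module names are the prediction targets.
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical training data from the initial (intended) architecture.
training = [
    ({"CUSTOMER", "INVOICE", "calc_tax"}, "billing"),
    ({"INVOICE", "PAYMENT", "post_payment"}, "billing"),
    ({"SUBSCRIBER", "SIM_CARD", "activate_sim"}, "provisioning"),
    ({"SUBSCRIBER", "TARIFF", "assign_tariff"}, "provisioning"),
]
deps, modules = zip(*training)

encoder = MultiLabelBinarizer()
X = encoder.fit_transform(deps)          # one column per known dependency
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, modules)

# Predict the module for a new PL/SQL object from the objects it uses.
new_object_deps = [{"INVOICE", "calc_tax"}]
print(clf.predict(encoder.transform(new_object_deps)))   # e.g. ['billing']
```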
  • Article Publication
    A longitudinal case study on Nexus transformation: Impact on productivity, quality, and motivation
    (Wiley, 2023-09) Ersoy, E.; Çallı, E.; Erdoğan, B.; Bağrıyanık, S.; Sözer, Hasan; Computer Science; SÖZER, Hasan
    Success stories have been reported regarding the adoption of agile software development methods in industry. There are also observations on their limitations. One of these limitations is scalability, since agile methods like Scrum were originally designed for small software teams. Scalable agile frameworks were introduced to address this limitation. We conducted an industrial case study on the adoption of one such framework, called Nexus. Our study involves quantitative and qualitative evaluation based on observations within a product development organization over a period of 12 months. Scrum was used for the development of a product during the first 6 months of this period, and Nexus was used in the remaining 6 months. Data were collected throughout the whole period for measuring productivity, quality, and team member motivation. Results suggest a significant increase in productivity and product quality after switching to Nexus. Team motivation was slightly improved as well.
  • Article Publication
    Increasing test efficiency by risk-driven model-based testing
    (Elsevier, 2018-10) Gebizli, C. Ş.; Kırkıcı, A.; Sözer, Hasan; Computer Science; SÖZER, Hasan
    We introduce an approach and a tool, RIMA, for adapting test models used for model-based testing so that they incorporate information regarding failure risk. We represent test models in the form of Markov chains. These models comprise a set of states and a set of state transitions that are annotated with probability values. These values steer the test case generation process, which aims at covering the most probable paths. RIMA refines these models in three steps. First, it updates transition probabilities based on a collected usage profile. Second, it updates the resulting models based on the fault likelihood at each state, which is estimated via static code analysis. Third, it performs updates based on the error likelihood at each state, which is estimated via dynamic analysis. The approach is evaluated with two industrial case studies for testing digital TVs and smartphones. Results show that the approach increases test efficiency by revealing more faults in less testing time.
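A minimal sketch of the risk-based reweighting idea is shown below; it is not the RIMA implementation, and the state names and risk values are hypothetical. Per-state factors derived from the usage profile, static analysis, and dynamic analysis scale the transition probabilities, which are then renormalized so that test generation favors the riskier paths.

```python
# Minimal sketch of risk-based reweighting of a Markov-chain test model
# (illustrative only; not the RIMA tool).

def reweight(transitions, usage, fault_likelihood, error_likelihood):
    """transitions: {state: {next_state: prob}}; the three dicts map each
    state to a factor in (0, 1] estimated from usage logs, static analysis,
    and dynamic analysis, respectively."""
    risky = {}
    for state, succ in transitions.items():
        scored = {
            nxt: p * usage.get(nxt, 1.0)
                   * fault_likelihood.get(nxt, 1.0)
                   * error_likelihood.get(nxt, 1.0)
            for nxt, p in succ.items()
        }
        total = sum(scored.values())
        risky[state] = {nxt: s / total for nxt, s in scored.items()}
    return risky

# Hypothetical example: "EPG" gets more weight due to high fault likelihood.
model = {"Home": {"Menu": 0.5, "EPG": 0.5}}
print(reweight(model,
               usage={"Menu": 0.9, "EPG": 0.3},
               fault_likelihood={"EPG": 0.8},
               error_likelihood={}))
```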
  • Article Publication
    VISOR: A fast image processing pipeline with scaling and translation invariance for test oracle automation of visual output systems
    (The ACM Digital Library, 2018-02) Kıraç, Mustafa Furkan; Aktemur, Tankut Barış; Sözer, Hasan; Computer Science; KIRAÇ, Mustafa Furkan; AKTEMUR, Tankut Barış; SÖZER, Hasan
    Highlights: a test oracle automation approach is proposed for systems that produce visual output; root causes of accuracy issues are analyzed for test oracles based on image comparison; image processing techniques are employed to improve the accuracy of test oracles; a fast image processing pipeline is developed as an automated test oracle; an industrial case study is performed for automated regression testing of digital TVs.
    Test oracles differentiate between correct and incorrect system behavior. Hence, test oracle automation is essential to achieve overall test automation; otherwise, testers have to manually check the system behavior for all test cases. A common test oracle automation approach for testing systems with visual output is based on exact matching between a snapshot of the observed output and a previously taken reference image. However, images can be subject to scaling and translation variations. These variations lead to a high number of false positives, where an error is reported due to a mismatch between the compared images although no error exists. To address this problem, we introduce an automated test oracle, named VISOR, that employs a fast image processing pipeline. This pipeline includes a series of image filters that align the compared images and remove noise to eliminate differences caused by scaling and translation. We evaluated our approach in the context of an industrial case study for regression testing of digital TVs. Results show that VISOR can avoid 90% of false positive cases after training the system for 4 hours. Following this one-time training, VISOR can compare thousands of image pairs within seconds on a laptop computer.
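A rough sketch of a scaling- and translation-tolerant comparison step is shown below, using OpenCV; it only illustrates the idea, not the VISOR pipeline, and the resolution, blur kernel, margin, threshold, and file names are arbitrary assumptions.

```python
# Minimal sketch of a scaling/translation-tolerant image comparison
# (illustrative only; not the VISOR pipeline). Both frames are resized to a
# common resolution to remove scaling differences, blurred to suppress noise,
# and the reference is matched inside the observed frame by normalized
# cross-correlation so that small translations do not trigger a mismatch.
import cv2

def frames_match(observed_path, reference_path, threshold=0.95):
    obs = cv2.imread(observed_path, cv2.IMREAD_GRAYSCALE)
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

    size = (640, 360)                          # common resolution (assumption)
    obs = cv2.GaussianBlur(cv2.resize(obs, size), (5, 5), 0)
    ref = cv2.GaussianBlur(cv2.resize(ref, size), (5, 5), 0)

    # Use the central region of the reference as a template, so the best
    # match location can absorb a few pixels of translation.
    margin = 16
    template = ref[margin:-margin, margin:-margin]
    scores = cv2.matchTemplate(obs, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, _ = cv2.minMaxLoc(scores)
    return best >= threshold

# Example (hypothetical file names):
# print(frames_match("observed_frame.png", "reference_frame.png"))
```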
  • Article Publication
    An effective formulation of the multi-criteria test suite minimization problem
    (Elsevier, 2020-10) Özener, Okan Örsan; Sözer, Hasan; Industrial Engineering; Computer Science; ÖZENER, Okan Örsan; SÖZER, Hasan
    The test suite minimization problem has mainly been addressed by employing heuristic techniques or integer linear programming focused on a single criterion or bi-criteria. These approaches fall short of computing optimal solutions, especially when there is overlap among test cases in terms of various criteria such as code coverage and the set of detected faults. Nonlinear formulations have also been proposed recently to address such cases. However, these formulations require significantly more computational resources than linear ones. Moreover, they are subject to shortcomings that may still lead to sub-optimal solutions. In this paper, we identify such shortcomings and propose an alternative formulation of the problem. We empirically evaluated the effectiveness of our approach on a publicly available dataset and compared it with the state of the art based on the same objective function and the same set of criteria, including statement coverage, fault-revealing capability, and test execution time. Results show that our formulation leads to better results, or to the same results when the previously obtained results were already optimal. In addition, our formulation is linear, so it can be solved much more efficiently than non-linear formulations.
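For reference, a classical linearly constrained formulation of test suite minimization over these criteria can be written as follows; this is the textbook baseline, not the alternative formulation proposed in the paper.

```latex
% Classical baseline ILP for test suite minimization (not the paper's model).
% x_j = 1 iff test case t_j is kept; c_j is its execution time;
% a_{ij} = 1 iff t_j covers statement s_i; b_{kj} = 1 iff t_j reveals fault f_k.
\begin{align*}
\min \quad & \sum_{j=1}^{n} c_j \, x_j \\
\text{s.t.} \quad
  & \sum_{j=1}^{n} a_{ij} \, x_j \ge 1 && \text{for every statement } s_i \text{ covered by the full suite},\\
  & \sum_{j=1}^{n} b_{kj} \, x_j \ge 1 && \text{for every fault } f_k \text{ detected by the full suite},\\
  & x_j \in \{0,1\} && j = 1,\dots,n.
\end{align*}
```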
  • Article Publication
    Automatically learning usage behavior and generating event sequences for black-box testing of reactive systems
    (The ACM Digital Library, 2019-06) Kıraç, Mustafa Furkan; Aktemur, Tankut Barış; Sözer, Hasan; Gebizli, C. Ş.; Computer Science; KIRAÇ, Mustafa Furkan; AKTEMUR, Tankut Barış; SÖZER, Hasan
    We propose a novel technique based on recurrent artificial neural networks to generate test cases for black-box testing of reactive systems. We combine functional testing inputs that are automatically generated from a model with manually applied test cases for robustness testing. We use this combination to train a long short-term memory (LSTM) network. As a result, the network learns an implicit representation of the usage behavior that is liable to failures. We use this network to generate new event sequences as test cases. We applied our approach in the context of an industrial case study for the black-box testing of a digital TV system. LSTM-generated test cases were able to reveal several faults, including critical ones, that were not detected with existing automated or manual testing activities. Our approach is complementary to model-based and exploratory testing, and the combined approach outperforms random testing in terms of both fault coverage and execution time.
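The sketch below shows, in broad strokes, how a next-event LSTM can be trained on recorded event sequences and then sampled to produce new sequences; it uses Keras for brevity, and the event vocabulary, window size, and network size are made-up assumptions rather than the study's actual setup.

```python
# Minimal sketch of learning a next-event model from event sequences and
# sampling new test sequences from it (illustrative; not the study's setup).
import numpy as np
import tensorflow as tf

events = ["POWER_ON", "MENU", "OK", "VOL_UP", "CH_UP", "POWER_OFF"]  # hypothetical
index = {e: i for i, e in enumerate(events)}
window = 4

# Recorded sequences (model-generated plus manual robustness tests) are
# encoded as sliding windows of event indices with the next event as label.
sequences = [["POWER_ON", "MENU", "OK", "CH_UP", "VOL_UP", "POWER_OFF"]]
X, y = [], []
for seq in sequences:
    ids = [index[e] for e in seq]
    for i in range(len(ids) - window):
        X.append(ids[i:i + window])
        y.append(ids[i + window])
X, y = np.array(X), np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(events), 16),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(len(events), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=50, verbose=0)

# Generate a new event sequence by sampling from the learned distribution.
seed = [index[e] for e in ["POWER_ON", "MENU", "OK", "CH_UP"]]
for _ in range(6):
    probs = model.predict(np.array([seed[-window:]]), verbose=0)[0]
    probs = probs.astype("float64")
    probs /= probs.sum()                 # guard against float32 rounding
    seed.append(int(np.random.choice(len(events), p=probs)))
print([events[i] for i in seed])
```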
  • Article Publication
    Reproducing failures based on semiformal failure scenario descriptions
    (Springer International Publishing, 2017) Karagöz, G.; Sözer, Hasan; Computer Science; SÖZER, Hasan
    Due to the increasing size and complexity of software systems, it becomes hard to test these systems exhaustively. As a result, some faults can be left undetected, and undetected faults can lead to failures in deployed systems. Such failures are usually reported back to developers by users in the field or by test engineers. It requires considerable time and effort to analyze and reproduce the reported failures because their descriptions are not always complete, structured, and formal. In this paper, we introduce a novel approach for automatically reproducing failures to aid their debugging. Our approach relies on semi-structured failure scenario descriptions that employ a set of keywords. These descriptions are preprocessed and mapped to a set of predefined test case templates with valid input sets. Then, test cases are generated and executed to reproduce the reported failure scenarios. The approach is evaluated with an industrial case study performed in a company from the telecommunications domain. Several failures were successfully reproduced, and the approach has been adopted in the quality assurance process of the company. After a one-time preparation of reusable test case templates and training of test engineers, 24.9% of the reported failures (and 40% of those that were manually reproducible) could be reproduced without any manual effort.
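A minimal sketch of the keyword-to-template mapping idea follows; the keywords, templates, parameter syntax, and scenario text are hypothetical and only illustrate how a semi-structured description could be turned into executable test steps.

```python
# Minimal sketch of mapping a keyword-based failure description to a
# predefined test case template (illustrative; all names are hypothetical).
import re

TEMPLATES = {
    "call_drop": ["register {caller}", "dial {callee}", "wait {seconds}s",
                  "force handover", "assert call still active"],
    "login_failure": ["open portal", "login as {user}", "assert dashboard shown"],
}

KEYWORD_TO_TEMPLATE = {"dropped": "call_drop", "cannot login": "login_failure"}

def reproduce(description):
    """Pick a template by keyword and fill its parameters from the text."""
    template = next((TEMPLATES[t] for kw, t in KEYWORD_TO_TEMPLATE.items()
                     if kw in description.lower()), None)
    if template is None:
        return None                      # falls back to manual analysis
    params = dict(re.findall(r"(\w+)\s*=\s*(\S+)", description))
    return [step.format(**params) for step in template]

scenario = "Call is dropped after handover: caller=+905550001 callee=+905550002 seconds=30"
print(reproduce(scenario))
```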
  • Article Publication
    DILAF: A framework for distributed analysis of large-scale system logs for anomaly detection
    (Wiley, 2019-02) Astekin, M.; Zengin, H.; Sözer, Hasan; Computer Science; SÖZER, Hasan
    System logs constitute a rich source of information for the detection and prediction of anomalies. However, they can include a huge volume of data, which is usually unstructured or semi-structured. We introduce DILAF, a framework for distributed analysis of large-scale system logs for anomaly detection. DILAF comprises several processes to facilitate log parsing, feature extraction, and machine learning activities. It has two distinguishing features with respect to existing tools. First, it does not require the availability of the source code of the analyzed system. Second, it is designed to perform all the processes in a distributed manner to support scalable analysis in the context of large-scale distributed systems. We discuss the software architecture of DILAF and introduce an implementation of it. We conducted controlled experiments based on two datasets to evaluate the effectiveness of the framework. In particular, we evaluated performance and scalability under various degrees of parallelism. Results showed that DILAF maintains the same accuracy levels while achieving more than 30% performance improvement on average as the system scales, compared to baseline approaches that do not employ fully distributed processing.
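The sketch below illustrates a distributed parse-and-count step in the spirit of such a pipeline, using PySpark; the log format, the sample lines, and the choice of per-component event counts as features are assumptions, not DILAF's actual design.

```python
# Minimal sketch of a distributed parse -> feature-extraction step
# (illustrative only; not the DILAF implementation).
import re
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("log-anomaly-sketch").getOrCreate()

LINE = re.compile(r"^(?P<ts>\S+ \S+) (?P<level>\w+) (?P<component>\S+): (?P<msg>.*)$")

def parse(line):
    m = LINE.match(line)
    return (m.group("component"), m.group("level")) if m else ("UNPARSED", "NA")

# Sample lines kept in-code; in practice the RDD would come from log files,
# e.g. spark.read.text(...).rdd.map(lambda r: r.value).
lines = spark.sparkContext.parallelize([
    "2024-01-01 12:00:01 INFO scheduler: task 17 finished",
    "2024-01-01 12:00:02 ERROR datanode: block checksum mismatch",
    "2024-01-01 12:00:03 INFO scheduler: task 18 finished",
])

# Parse lines in parallel and build per-(component, level) counts as features.
features = (lines.map(parse)
                 .map(lambda kv: (kv, 1))
                 .reduceByKey(lambda a, b: a + b))

# These counts could then be fed to an anomaly detector, e.g. flagging
# components whose ERROR rate deviates from the norm.
for (component, level), count in features.collect():
    print(component, level, count)
```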
  • Article Publication
    MOO: An architectural framework for runtime optimization of multiple system objectives in embedded control software
    (Elsevier, 2013-10) Roo, A. de; Sözer, Hasan; Bergmans, L.; Akşit, M.; Computer Science; SÖZER, Hasan
    Today's complex embedded systems function under varying operational conditions. The control software adapts several control variables to keep the operational state optimal with respect to multiple objectives. Well-known techniques exist for solving such optimization problems. However, current practice shows that the applied techniques, control variables, constraints, and related design decisions are not documented as part of the architecture description. Their implementation is implicit: it is tailored to specific characteristics of the embedded system and tightly integrated into and coupled with the control software, which hinders reusability, analyzability, and maintainability. This paper presents an architectural framework to design, document, and realize multi-objective optimization in embedded control software. The framework comprises an architectural style, together with its visual editor and domain-specific analysis tools, and a code generator. The code generator generates an optimizer module specific to the given architecture and employs aspect-oriented software development techniques to seamlessly integrate this module into the control software. The effectiveness of the framework is validated in the context of an industrial case study from the printing systems domain.
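As a toy illustration of runtime multi-objective optimization over control variables (not the generated optimizer described in the paper), the sketch below evaluates a weighted sum of made-up objective models over a small grid of candidate settings and re-optimizes whenever an operational condition changes.

```python
# Minimal sketch of runtime multi-objective optimization over control
# variables (illustrative only; objective models, variable ranges, and
# weights are made up).
from itertools import product

# Candidate settings for two hypothetical control variables of a printer.
SETTINGS = {"belt_speed": [0.8, 1.0, 1.2], "heater_power": [0.6, 0.8, 1.0]}

def objectives(cfg, paper_weight):
    """Return (productivity, -energy, -wear); higher is better for each."""
    productivity = cfg["belt_speed"] * cfg["heater_power"]
    energy = cfg["heater_power"] ** 2 + 0.1 * cfg["belt_speed"]
    wear = cfg["belt_speed"] * paper_weight
    return productivity, -energy, -wear

def optimize(paper_weight, weights=(0.5, 0.3, 0.2)):
    """Pick the setting maximizing a weighted sum of the objectives."""
    best, best_score = None, float("-inf")
    for values in product(*SETTINGS.values()):
        cfg = dict(zip(SETTINGS.keys(), values))
        score = sum(w * o for w, o in zip(weights, objectives(cfg, paper_weight)))
        if score > best_score:
            best, best_score = cfg, score
    return best

# Re-run whenever the operational condition (here, paper weight) changes.
print(optimize(paper_weight=80))
```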