Browsing by Author "Sunman, Nezih"
Now showing 1 - 2 of 2
Conference Object | Publication | Metadata only
AhBuNe agent: Winner of the eleventh international automated negotiating agent competition (ANAC 2020) (Springer, 2023)
Yıldırım, Ahmet Burak; Sunman, Nezih; Aydoğan, Reyhan
The International Automated Negotiating Agent Competition introduces a new challenge each year to facilitate research on agent-based negotiation and to provide a test benchmark. ANAC 2020 addressed the problem of designing effective agents that know neither their users' complete preferences nor their opponent's negotiation strategy. Accordingly, this paper presents the negotiation strategy of the winning agent, "AhBuNe Agent". The proposed heuristic-based bidding strategy checks whether it has sufficient orderings to reason about its complete preferences and accordingly decides whether to sacrifice some utility in return for preference elicitation. While making an offer, it uses the most-desired known outcome as a reference and modifies the content of the bid by adopting a concession-based strategy. The agent estimates the importance ranking of the issues by analyzing the content of the given ordered bids. Since it adopts a fixed time-based concession strategy and takes the estimated issue importance ranks into account, it determines to what extent the issues are to be modified. The ANAC 2020 evaluation results show that our agent outperforms the other participating agents in terms of the received individual score.

Article | Publication | Metadata only
Automated Web application testing driven by pre-recorded test cases (Elsevier, 2022-11)
Sunman, Nezih; Soydan, Yiğit; Sözer, Hasan
Fully automated approaches have been proposed for Web application testing. These approaches mainly rely on tools that explore an application by crawling it. The crawling process yields a state-transition model, which is used for generating test cases. Although these approaches are fully automated, they consume too much time and usually require manual configuration. This is due to the crawling tools' lack of insight and domain knowledge regarding the application under test. We propose a semi-automated approach instead. We introduce a tool that takes a set of recorded event sequences as input. These sequences can be captured during exploratory tests. They are replayed as pre-recorded test cases and are also exploited for steering the crawling and test case generation process. We performed a case study with 5 Web applications, which were also randomly tested with state-of-the-art tools. Our approach can reduce the crawling time by hours while compromising the achieved coverage by only 0.2% to 7.43%. In addition, our tool does not require manual configuration before crawling. The input for the tool was created within 15 minutes of exploratory testing.
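The fixed time-based concession over ranked issues described in the first abstract above can be sketched as follows. This is a hypothetical illustration, not the AhBuNe Agent's actual code: the function name, the threshold schedule, and the issue names are all assumptions made for the example. The idea it demonstrates is that, as the normalized negotiation time advances, progressively more of the least-important issues are allowed to deviate from the most-desired known outcome.

```python
# Hypothetical sketch of a fixed time-based concession over issues.
# All names and the threshold schedule are illustrative assumptions,
# not the published agent's implementation.

def issues_to_vary(importance_ranked, t, thresholds=(0.25, 0.5, 0.75)):
    """Return the issues whose values may deviate from the best-known bid.

    importance_ranked: issues sorted from most to least important.
    t: normalized negotiation time in [0, 1].
    thresholds: time points at which one more issue is conceded.
    """
    # Count how many concession thresholds have already passed.
    n_conceded = sum(1 for th in thresholds if t >= th)
    n_conceded = min(n_conceded, len(importance_ranked))
    # Concede on the least-important issues first.
    return importance_ranked[len(importance_ranked) - n_conceded:]

# Early in the session nothing is conceded; later the least-important
# issues open up one by one.
print(issues_to_vary(["price", "delivery", "color"], 0.1))  # []
print(issues_to_vary(["price", "delivery", "color"], 0.6))  # ['delivery', 'color']
```

A fixed schedule like this makes the agent's concession behavior predictable from time alone, which matches the abstract's description of combining a time-based strategy with the estimated issue importance ranks.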
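The second abstract's core idea, using recorded event sequences both as replayable test cases and to steer crawling, can be sketched as a state-transition model seeded from recordings. This is a minimal sketch under assumed names and a toy string-based state representation; the actual tool operates on real DOM states and browser events.

```python
# Hypothetical sketch: build a state-transition model from recorded
# exploratory-test sequences, then identify states the crawler should
# explore further. Names and data are illustrative assumptions.
from collections import defaultdict

def build_model(recorded_sequences):
    """Build a state-transition model from recorded (state, event, state) steps."""
    transitions = defaultdict(set)
    for seq in recorded_sequences:
        for src, event, dst in seq:
            transitions[src].add((event, dst))
    return transitions

def frontier(transitions, known_events_per_state):
    """States whose recorded transitions do not cover all known events:
    candidates for the crawler to explore beyond the recordings."""
    return {s for s, events in known_events_per_state.items()
            if {e for e, _ in transitions.get(s, set())} < set(events)}

# Two exploratory sessions on a toy login flow.
sessions = [
    [("login", "submit", "home"), ("home", "open_menu", "menu")],
    [("login", "submit", "home"), ("home", "logout", "login")],
]
model = build_model(sessions)
# The crawler starts from the recorded paths and only needs to explore
# events not already covered, e.g. a "help" click on the home page.
print(frontier(model, {"home": ["open_menu", "logout", "help"]}))  # {'home'}
```

Seeding the model from recordings is what lets the crawler skip exploration the tester already performed, which is consistent with the reported crawling-time savings at a small coverage cost.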