Organizational Unit: Department of Computer Science
Publication Search Results
Now showing 1 - 10 of 75
Master Thesis (Metadata only)
Uncertainty assessment for speaker verification systems using a Bayesian approach (2021-01-18)
Süslü, Çağıl; Demiroğlu, Cenk; Sözer, Hasan; Güz, Ü.; Department of Computer Science

Automatic Speaker Verification (ASV) systems are developed to discriminate genuine speakers from spoofing attacks, and they are also used as a security application in various industries (e.g., banking and telephone-based systems). Spoofing countermeasure systems (SCSs) are important for ASV systems to protect themselves against spoofing attacks. In general, SCSs are developed using the cross-entropy loss function and a softmax classification layer to achieve the best classification scores. Even though the softmax function is popular as a classification layer for deep neural networks, it increases the uncertainty of the estimated class probabilities by squashing the probabilistic predictions of the predictive models. The aim of this work was to decrease the uncertainty of the conventional cross-entropy and softmax SCS by using a Bayesian approach. To accomplish this, multiple SCSs were developed to outperform the baseline system of the Automatic Speaker Verification Spoofing and Countermeasures 2017 Challenge. The Bayesian approach was applied to the best model (i.e., the model with the lowest EER score) to decrease its uncertainty. The uncertainty of both systems was compared using the probability distribution function, the AUC value, and the ROC curve. As can be observed from the ROC curve, the Bayesian network decreased the uncertainty of the conventional cross-entropy and softmax SCS by increasing the AUC value by 14%.
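The intuition behind such a Bayesian treatment — that averaging several stochastic predictions yields less overconfident probabilities than a single softmax pass — can be sketched in a toy example (all numbers are illustrative; this is not the thesis's actual model):

```python
import math

def softmax(logits):
    """Convert raw scores into class probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    """Predictive entropy: larger values mean more uncertainty."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# A single deterministic pass commits to one set of probabilities.
single = softmax([2.1, 1.9])

# A Bayesian approximation (e.g. Monte Carlo dropout) instead averages
# the class probabilities over several stochastic forward passes.
passes = [[2.1, 1.9], [1.2, 2.4], [2.5, 1.0]]
averaged = [sum(ps) / len(passes)
            for ps in zip(*(softmax(l) for l in passes))]

# When the passes disagree, the averaged prediction is less extreme,
# so its entropy better reflects the model's actual uncertainty.
```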
Also, the Bayesian network provided the lowest EER score (16.79%), outperforming the baseline system of the ASVspoof 2017 challenge.

Master Thesis (Metadata only)
A self-establishing clustered network architecture for blockchain
Doğan, Orkun; Çakmakçı, Kübra Kalkan; Sözer, Hasan; Alagöz, F.; Department of Computer Science

Blockchain technology has branched out into many industries, such as healthcare, manufacturing, agriculture, and entertainment, in the shape of both its public and non-public variants. In principle, blockchain provides these industries with an immutable ledger, allowing the processes in its application environment to be handled in a decentralized manner. However, some long-standing challenges remain, such as the degree of its scalability, the level of security it provides, and the transparency of network transactions. In this thesis, a novel approach to a distributed, permissionless blockchain network is explored, using hierarchical clustering to group the nodes based on the latency of their connections to one another. These clusters of nodes work on their respective local chains and add the verified local chains to the global chain used by the entire system. The network's throughput performance and overall latency are evaluated and compared with other blockchain applications, namely a simulation of the Bitcoin network itself and another approach that uses a method called Community Clustering. We collected the data for the comparison in the same environment for our work, the Bitcoin network, and the Community Clustering network.
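The latency-based grouping at the heart of this architecture can be sketched with a toy single-linkage clustering routine (the node names, latencies, and threshold are all made up for illustration):

```python
def cluster_by_latency(nodes, latency, threshold):
    """Greedy single-linkage agglomerative clustering: start from
    singleton clusters and repeatedly merge any two clusters whose
    closest members are within the latency threshold (in ms)."""
    def dist(a, b):
        return latency[(min(a, b), max(a, b))]

    clusters = [{n} for n in nodes]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                closest = min(dist(a, b)
                              for a in clusters[i] for b in clusters[j])
                if closest <= threshold:
                    clusters[i] |= clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

# Two low-latency pairs of nodes separated by a slow cross link.
latency = {("A", "B"): 5, ("C", "D"): 7, ("A", "C"): 80,
           ("A", "D"): 85, ("B", "C"): 82, ("B", "D"): 80}
groups = cluster_by_latency(["A", "B", "C", "D"], latency, threshold=20)
# groups -> [{"A", "B"}, {"C", "D"}]
```

Each resulting group would then maintain its own local chain before merging verified blocks into the global chain.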
The comparison of the collected data shows that our clusters improve the transaction throughput of the network: an increase in average throughput and a drastic decrease in latency are observed.

Master Thesis (Metadata only)
Risk-driven model-based testing (2018-05)
Kırkıcı, Abdulhadi; Sözer, Hasan; Aktemur, Tankut Barış; Aktaş, M.; Department of Computer Science

Software is becoming larger and more complex in consumer electronics products. As a result, testing these products for reliability is becoming a major challenge. Traditional, manual testing activities are not effective and efficient in pinpointing faults. Consequently, manual testing activities are being replaced with automated techniques. Model-based testing is one of these techniques: it uses test models as input and automates test case generation. However, these models are very large for industry-scale systems, so the number of generated test cases can be very large as well. It is not feasible to test every functionality of the system exhaustively due to the extremely limited resources in the consumer electronics domain. Only those system usage scenarios that are associated with a high likelihood of failure should be tested. Therefore, we propose a risk-driven model-based testing approach in this thesis. Hereby, test models are augmented with information regarding failure risk. Markov chains, which are basically composed of states and transitions, are used for expressing these models. Each state transition is annotated with a probability, and the probability values are used for generating test cases that cover the transitions with the highest probabilities. The proposed approach updates transition probability values based on three types of analysis for risk estimation. First, a usage profile is used for determining the most frequently used features of the system. Second, static analysis is used for estimating the fault potential at each state.
Third, dynamic analysis is used for estimating the error likelihood at each state. Test models are updated iteratively based on these analyses and estimations. The approach is evaluated with three industrial case studies for testing digital TVs, smartphones, and washing machines. Results show that the approach increases test efficiency by revealing more faults in less testing time.

Master Thesis (Metadata only)
Tool support for model-based software product line testing (2018-01)
Ergun, Burcu; Sözer, Hasan; Aktemur, Tankut Barış; Alkaya, A. F.; Department of Computer Science

We introduce a tool for the automated adaptation of test models to be reused for a product family. Test models are specified in the form of hierarchical Markov chains. They represent possible usage behavior regarding the features of systems in the product family. A feature model documents the variability among these features. Optional and alternative features in this model are mapped to a set of states in the test models. These features are selected or deselected for each product to be tested, and the transition probabilities on the test model are updated by our tool according to these (de)selections. As a result, the test case generation process focuses only on the selected features. We conducted two controlled experiments, both in industrial settings, to evaluate the effectiveness of the tool, using digital TV (DTV) and wireless access point (WAP) systems. Ten participants were involved in testing the DTV systems and five in testing the WAPs. We measured the effort spent by each participant on the same set of tasks with and without our tool, and observed that the tool reduces costs significantly.
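The two theses above manipulate the same kind of Markov-chain test model. A minimal sketch of the two core operations — renormalizing transition probabilities after a feature is deselected, and generating a test case along the highest-probability transitions — might look like this (the usage model and its probabilities are invented for illustration):

```python
def deselect(chain, removed):
    """Drop transitions into states of deselected features and
    renormalize the remaining outgoing probabilities of each state."""
    adapted = {}
    for state, outs in chain.items():
        if state in removed:
            continue
        kept = {t: p for t, p in outs.items() if t not in removed}
        total = sum(kept.values())
        adapted[state] = {t: p / total for t, p in kept.items()}
    return adapted

def generate_test_case(chain, start, steps):
    """Greedily follow the most probable transition from each state,
    yielding a test case covering high-risk, high-usage behavior."""
    state = start
    path = [state]
    for _ in range(steps):
        state = max(chain[state], key=chain[state].get)
        path.append(state)
    return path

# Hypothetical usage model of a TV menu.
chain = {
    "idle":     {"menu": 0.7, "idle": 0.3},
    "menu":     {"channels": 0.6, "settings": 0.2, "idle": 0.2},
    "channels": {"idle": 1.0},
    "settings": {"idle": 1.0},
}
case = generate_test_case(chain, "idle", 3)   # idle -> menu -> channels -> idle
trimmed = deselect(chain, {"settings"})       # menu now splits 0.75 / 0.25
```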
We also observed that the initial cost of adopting product line testing is amortized even for small product families of 13 DTV and 11 WAP products, respectively.

Master Thesis (Metadata only)
FinSentiment: predicting financial sentiment and risk through transfer learning
Ergün, Zehra Erva; Sefer, Emre; Yıldız, Olcay Taner; Yeniterzi, R.; Department of Computer Science

There is an increasing interest in financial text mining tasks. Significant progress has been made by using deep learning-based models on generic corpora, which also show reasonable results on financial text mining tasks such as financial sentiment analysis. However, financial sentiment analysis is still a demanding task because of the insufficiency of labeled data for the financial domain and its specialized language. General-purpose deep learning methods are not as effective, mainly due to the specialized language used in the financial context. In this study, we focus on enhancing the performance of financial text mining tasks by improving existing pretrained language models via NLP transfer learning. Pretrained language models demand only a small quantity of labeled samples, and they can be enhanced to a greater extent by training them on domain-specific corpora instead. We propose an enhanced model, FinSentiment, which incorporates enhanced versions of a number of recently proposed pretrained models, such as BERT, XLNet, and RoBERTa, trained on financial domain corpora to perform better across NLP tasks in the financial domain. The corresponding finance-specific models in FinSentiment are called Fin-BERT, Fin-XLNet, and Fin-RoBERTa, respectively. We also propose variants of these models jointly trained over financial domain and general corpora. Our finance-specific FinSentiment models in general show the best performance across three financial sentiment analysis datasets, even when only a part of these models is fine-tuned with a smaller training set.
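The core transfer-learning recipe — keep a pretrained encoder frozen and fit only a small classification head on a handful of labeled examples — can be sketched as follows. The cue-word "encoder" here is a deliberately crude, hypothetical stand-in for the financial BERT/XLNet/RoBERTa encoders the thesis actually uses:

```python
import math

def frozen_encoder(text):
    """Toy stand-in for a pretrained encoder: a fixed mapping from
    text to features (here, presence of hypothetical cue words)."""
    cues = ["gain", "growth", "profit", "loss", "decline", "risk"]
    t = text.lower()
    return [float(w in t) for w in cues]

def train_head(texts, labels, lr=0.1, epochs=500):
    """Fit a logistic-regression head on the frozen features with
    full-batch gradient descent; the encoder is never updated."""
    feats = [frozen_encoder(t) for t in texts]
    w, b = [0.0] * len(feats[0]), 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for x, y in zip(feats, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))
            gw = [gi + (p - y) * xi for gi, xi in zip(gw, x)]
            gb += p - y
        w = [wi - lr * gi / len(feats) for wi, gi in zip(w, gw)]
        b -= lr * gb / len(feats)
    return w, b

def predict(w, b, text):
    x = frozen_encoder(text)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# A tiny labeled set (1 = positive sentiment) suffices for the head.
texts = ["profit and growth beat forecasts", "heavy loss and decline",
         "record gain this quarter", "rising risk of decline"]
w, b = train_head(texts, [1, 0, 1, 0])
```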
Our results improve on the existing results for these datasets for every tested performance criterion. Extensive experimental results demonstrate the effectiveness and robustness, especially of RoBERTa pretrained on financial corpora. Overall, we show that NLP transfer learning techniques are favorable solutions for financial sentiment analysis tasks. Financial risk is empirically quantified in terms of asset return volatility, the degree of deviation from the expected return of the asset. In financial risk management, predicting asset volatility is one of the most crucial problems because of its important role in making investment decisions. Even though a number of previous studies have investigated the role of natural language knowledge in enhancing the quality of volatility predictions, volatility estimation can still be enhanced via recent deep learning techniques. Specifically, extracting financial knowledge from text through transfer learning approaches such as BERT has not been used in risk prediction. Here, we come up with RiskBERT, the first BERT-based transfer learning method to predict asset volatility by simultaneously considering both a broad set of financial attributes and financial sentiment. As the language dataset, we use the text of the annual 10-K filings of publicly traded companies to train our model. Our proposed model, RiskBERT, uses an attention mechanism to model verbal context and performs remarkably better than state-of-the-art methods and baselines such as historical volatility. We observe this outperformance even when RiskBERT is fine-tuned with a smaller training set. We found RiskBERT to be more effective in risk prediction after the Sarbanes-Oxley Act of 2002 was passed, since this legislation made annual reports more informative. Overall, we show that NLP transfer learning techniques are favorable solutions for the financial risk prediction task.
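The volatility target that such a model predicts is conventionally computed as the standard deviation of log returns; a minimal sketch (the price series are invented):

```python
import math

def realized_volatility(prices):
    """Volatility as the sample standard deviation of log returns,
    i.e. the degree of deviation from the expected return."""
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(var)

calm = realized_volatility([100, 101, 100, 101, 100])
wild = realized_volatility([100, 115, 92, 110, 95])
# A choppy price path has a much higher volatility than a calm one.
```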
Our pretrained models and source code will be made publicly available once the review is finished.

Master Thesis (Metadata only)
A security protocol for IoT networks using blacklisting and trust scoring
Baykara, Cem Ata; Çakmakçı, Kübra Kalkan; Sözer, Hasan; Alagöz, F.; Department of Computer Science

There have been a number of high-profile incidents in which large networks of IoT devices were compromised and attacked, drawing attention to the need for IoT security. The purpose of IoT security is to ensure the availability, confidentiality, and integrity of IoT networks. However, due to the heterogeneity of IoT devices and the possibility of attacks from both inside and outside the network, securing an IoT network is a difficult task. Handshake protocols are useful for achieving mutual authentication, which allows the secure inclusion of devices into the network. However, they cannot prevent malicious network-based attacks once attackers enter the network. Autonomous anomaly detection and blacklisting prevent nodes with anomalous behavior from joining, re-joining, or remaining in the network, which is useful for securing an IoT network against insider network-based attacks. Similarly, trust scoring is another popular method that can be used to increase the resilience of the network against behavioral attacks. The contributions of this thesis are threefold. First, we propose a new handshake protocol that can be used in device discovery and mutual authentication to secure the IoT network against outsider attacks. In the proposed handshake protocol, a Physical Unclonable Function (PUF) is utilized for session key generation to reduce computational complexity. The proposed protocol is resilient to man-in-the-middle, replay, and reforge attacks, as proven in our security analysis. Secondly, we propose machine learning (ML) based intrusion and anomaly detection to prevent network-based attacks from insiders.
Finally, we propose a trust system that utilizes blockchain for managing the trust of a dynamic IoT network, increasing its resilience against behavioral attacks. Simulation results show that the proposed comprehensive security framework is capable of securing an IoT network against both inside and outside attackers.

Master Thesis (Metadata only)
Considering arguments in human-agent negotiations
Doğru, Anıl; Aydoğan, Reyhan; Yıldız, Olcay Taner; Özgür, A.; Department of Computer Science

Autonomous negotiating agents, which can interact with other agents, aim to solve decision-making problems involving participants with conflicting interests. Designing agents capable of negotiating with human partners requires considering human factors such as emotional states and arguments. For this purpose, we introduce an extended taxonomy of argument types capturing human speech acts during negotiation, and we propose an argument-based automated negotiating agent that can extract human arguments from a chat-based environment using a hierarchical classifier. Consequently, the proposed agent can understand the received arguments and adapt its strategy accordingly while negotiating with its human counterparts. We initially conducted human-agent negotiation experiments to construct a negotiation corpus for training our classifier. The experimental results show that the proposed hierarchical classifier successfully extracts the arguments from the given text. Moreover, we conducted a second experiment in which we tested the performance of the designed negotiation strategy, which takes the human opponent's arguments and emotions into account. Our results showed that the proposed agent beats the human negotiator and gains higher utility than the baseline agent.

Master Thesis (Metadata only)
HTTP adaptive streaming with advanced transport (2018-09)
Arısu, Şevket; Beğen, Ali Cengiz; Civanlar, Reha; Karalar, T. C.; Department of Computer Science

QUIC (Quick UDP Internet Connections) is an experimental, low-latency transport protocol proposed by Google, which is still being improved and specified in the IETF. The viewer's quality of experience (QoE) in HTTP adaptive streaming (HAS) applications may be improved with the help of QUIC's low latency, improved congestion control, and multiplexing features. In this master's thesis, we measured the streaming performance of QUIC on wireless and cellular networks in order to understand whether the problems that occur when running HTTP over TCP can be reduced by using HTTP over QUIC. The performance of QUIC was also tested in the presence of network interface changes caused by the mobility of the viewer. We observed that QUIC resulted in a quicker start of media streams and a better streaming and seeking experience, especially during higher levels of network congestion, and that it performed better than TCP when the viewer was mobile and switched between wireless networks. Furthermore, we investigated QUIC's performance in an emulated network with varying amounts of loss and delay to evaluate how QUIC's multiplexing feature would benefit HAS applications. We compared the performance of HAS applications multiplexing video streams over HTTP/1.1 with multiple TCP connections, over HTTP/2 with one TCP connection, and over QUIC with one UDP connection. We observed that QUIC provided better performance than TCP on a network with large delays. However, QUIC did not provide a significant improvement when the loss rate was large.

Master Thesis (Metadata only)
Generating runtime verification specifications based on static code analysis alerts (2017-12)
Kılıç, Yunus; Sözer, Hasan; Aktaş, M.; Aktemur, Tankut Barış; Department of Computer Science

There are various approaches for finding bugs in a software system.
One of these approaches is static code analysis, which analyzes code without executing it. Another, complementary approach is runtime verification, which verifies dynamic system behavior against a set of specifications at runtime. These specifications are often created manually based on system requirements and constraints. In this thesis, we propose a novel methodology and tool support for automatically generating runtime verification specifications based on the alerts reported by static code analysis tools. We introduce a domain-specific language for defining a set of rules to be checked for an alert type. Violations of the rules indicate either the absence or the existence of an actual bug designated by the instances of that alert type. Formal verification specifications are automatically generated for each reported alert instance based on the defined rules. Then, runtime monitors are automatically synthesized and integrated into the system. These monitors report detected errors or false-positive alerts during software execution. The set of rules can be reused across different projects; we performed case studies with two open-source software systems to illustrate this. Our tool currently supports two different static code analysis tools for generating runtime monitors in the Java language, and it is designed to be extensible to support other tools as well.

PhD Dissertation (Metadata only)
Enhancing deep learning models for campaign participation prediction (2019-07-31)
Ayvaz, Demet; Şensoy, Murat; Kıraç, Furkan; Akçura, Munir Tolga; Tek, B.; Alkaya, A. F.; Department of Computer Science

Companies engage with their customers in order to establish a long-term relationship. Targeting the right audience with the right product is crucial for providing better services to customers, increasing their loyalty to the company, and gaining high profit.
Therefore, companies make huge investments to build campaign management systems, which are mostly rule-based and depend highly on business insight and human expertise. In the last decade, recommendation systems have usually used modeling techniques such as deep learning to understand and predict the interests of customers. Classic deep neural networks are good at learning hidden relations within data (generalization); however, they have limited capability for memorization. The Wide & Deep network model, originally proposed for Google Play app recommendation, deals with this problem by combining a wide linear model and a deep network in a jointly trained network. However, this model requires domain expert knowledge and manually crafted features to benefit from memorization. In this thesis, we advocate using Wide & Deep network models for campaign participation prediction, particularly in the area of telecommunication. To deal with the aforementioned issue, this thesis introduces the idea of using decision trees for the automatic creation of combinatorial features (cross-product transformations of existing features) instead of demanding them from human experts. A set of comprehensive experiments on campaign participation data from a leading GSM provider has been conducted. The results have shown that the automatically crafted features significantly increase accuracy and outperform Deep and Wide & Deep models with manually crafted features. Furthermore, since only a limited number of contacts with customers is allowed, making well-targeted offers that are likely to be accepted by the customers plays a crucial role. Therefore, effective campaign participation prediction requires avoiding false-positive predictions. Accordingly, we extended our research towards classification uncertainty to build network models that can predict whether or not they will fail. Consequently, we adopt evidential deep learning models to capture the uncertainty in prediction.
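The cross-product transformation that the wide component memorizes can be sketched as follows (the feature names and the chosen pair are hypothetical; in the thesis the pairs would be read off decision-tree split paths rather than hand-picked):

```python
def cross_features(example, feature_pairs):
    """Cross-product transformation: fuse the values of two
    categorical features into one combinatorial feature that a
    wide linear model can memorize directly."""
    return {f"{a}_x_{b}": f"{example[a]}&{example[b]}"
            for a, b in feature_pairs}

# One customer record and one feature pair, e.g. extracted from a
# decision-tree path that splits on tariff and then on segment.
customer = {"tariff": "prepaid", "segment": "youth", "region": "east"}
crossed = cross_features(customer, [("tariff", "segment")])
# crossed -> {"tariff_x_segment": "prepaid&youth"}
```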
Our experimental evaluation of prediction uncertainty has shown that the proposed approach is more confident for correct predictions, while it is more uncertain for inaccurate predictions.
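Evidential deep learning replaces a softmax output with per-class evidence parameterizing a Dirichlet distribution; a minimal sketch of how prediction and uncertainty are read off the evidence (the evidence values are invented):

```python
def evidential_prediction(evidence):
    """Evidential deep learning: the network outputs non-negative
    evidence per class; Dirichlet parameters are alpha_k = e_k + 1.
    With S = sum(alpha), the expected class probabilities are
    alpha_k / S and the uncertainty mass is K / S, so a total lack
    of evidence yields maximal uncertainty."""
    k = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    s = sum(alpha)
    probs = [a / s for a in alpha]
    uncertainty = k / s
    return probs, uncertainty

# No evidence at all: uniform prediction, uncertainty 1.0.
p_none, u_none = evidential_prediction([0.0, 0.0])
# Strong evidence for class 0: confident prediction, low uncertainty.
p_strong, u_strong = evidential_prediction([18.0, 0.0])
```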