Organizational Unit:
Department of Computer Science


Publication Search Results

Now showing 1 - 10 of 75
  • Master Thesis
    A security protocol for IoT networks using blacklisting and trust scoring
    Baykara, Cem Ata; Çakmakçı, Kübra Kalkan; Sözer, Hasan; Alagöz, F.; Department of Computer Science
    There have been a number of high-profile incidents in which large networks of IoT devices were compromised and attacked, drawing attention to the need for IoT security. The purpose of IoT security is to ensure the availability, confidentiality, and integrity of IoT networks. However, due to the heterogeneity of IoT devices and the possibility of attacks from both inside and outside the network, securing an IoT network is a difficult task. Handshake protocols are useful for achieving mutual authentication, which allows secure inclusion of devices into the network. However, they cannot prevent malicious network-based attacks once attackers enter the network. Autonomous anomaly detection and blacklisting prevent nodes with anomalous behavior from joining, re-joining, or remaining in the network, which is useful for securing an IoT network against insider network-based attacks. Similarly, trust scoring is another popular method that can increase the resilience of the network against behavioral attacks. The contributions of this thesis are threefold. First, we propose a new handshake protocol that can be used in device discovery and mutual authentication to secure the IoT network against outsider attacks. In the proposed handshake protocol, a Physical Unclonable Function (PUF) is utilized for session key generation to reduce computational complexity. The proposed protocol is resilient to man-in-the-middle, replay, and reforge attacks, as proven in our security analysis. Second, we propose machine learning (ML) based intrusion and anomaly detection to prevent network-based attacks from insiders. Finally, we propose a trust system that utilizes blockchain to manage trust in a dynamic IoT network and increase resilience against behavioral attacks. Simulation results show that the proposed comprehensive security framework is capable of securing an IoT network against both inside and outside attackers.
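The blacklisting and trust-scoring idea above can be sketched as a toy in Python. This is illustrative only, not the thesis's actual scheme: the class name and all parameter values (initial score, reward, penalty, blacklist threshold) are assumptions.

```python
# Toy trust-score table (illustrative, not the thesis's scheme): anomalous
# behavior lowers a node's score, and a node falling below the threshold is
# blacklisted and barred from joining or re-joining the network.

class TrustManager:
    def __init__(self, initial=0.5, reward=0.05, penalty=0.2, threshold=0.2):
        self.scores = {}          # node id -> trust score in [0, 1]
        self.blacklist = set()    # nodes barred from the network
        self.initial, self.reward = initial, reward
        self.penalty, self.threshold = penalty, threshold

    def observe(self, node, anomalous):
        if node in self.blacklist:
            return
        s = self.scores.get(node, self.initial)
        s = max(0.0, s - self.penalty) if anomalous else min(1.0, s + self.reward)
        self.scores[node] = s
        if s < self.threshold:
            self.blacklist.add(node)  # node may no longer join or re-join

    def may_join(self, node):
        return node not in self.blacklist
```

Two anomalous observations with these example parameters drop a node from 0.5 to 0.1, below the threshold, so it is blacklisted.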
  • Master Thesis
    Black-box test case selection by relating code changes with previously fixed defects
    Çıngıl, Tutku; Sözer, Hasan; Aydoğan, Reyhan; Ovatman, T.; Department of Computer Science
    Software continuously changes to address new requirements and to fix defects. Regression testing is performed to ensure that the applied changes do not adversely affect existing functionality. The increasing number of test cases makes it infeasible to execute the whole regression test suite. Test case selection is adopted to select a subset of the test suite that is associated with the changed parts of the software, since these parts are assumed to be error-prone. We present and evaluate a test case selection approach in the context of black-box regression testing of embedded systems. In this context, it is challenging to relate test cases to a set of distinct source code elements so that the test cases associated with the modified parts of the source code can be selected. We analyze previously fixed defects for this purpose. We relate the test cases that detected these defects to the source files that were modified to fix them. Then, we select the test cases related to source files that are modified in the subsequent revision. The strength of this relation is determined by the number of changes associated with fixed defects previously detected by the same test cases. We conduct a case study on 3 real projects from the consumer electronics domain. Results show that it is possible to detect 65% to 85% of the defects detected by the whole test suite by selecting 30% to 70% of the test cases.
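A minimal sketch of this selection idea follows. The data shapes and function names are hypothetical, not the thesis's implementation: tests are linked to the files changed while fixing defects those tests detected, and link strength counts those fix changes.

```python
# Sketch of defect-history-based test case selection (hypothetical names):
# build (test, file) link strengths from past defect fixes, then select
# tests whose linked files changed in the new revision, ranked by strength.

from collections import defaultdict

def build_links(fixed_defects):
    """fixed_defects: iterable of (detecting_test, files_modified_to_fix)."""
    strength = defaultdict(int)  # (test, file) -> number of fix changes
    for test, files in fixed_defects:
        for f in files:
            strength[(test, f)] += 1
    return strength

def select_tests(strength, changed_files):
    """Return tests related to the changed files, strongest links first."""
    scores = defaultdict(int)
    for (test, f), n in strength.items():
        if f in changed_files:
            scores[test] += n
    return sorted(scores, key=scores.get, reverse=True)
```

For example, a test that twice detected defects fixed in `a.c` is selected ahead of tests with weaker links when `a.c` changes again.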
  • Master Thesis
    Image denoising using deep convolutional autoencoders
    (2019-08-19) Çetinkaya, Ekrem; Kıraç, Mustafa Furkan; Aydoğan, Reyhan; Akarun, L.; Department of Computer Science
    Image denoising is one of the fundamental problems in the image processing field, since it is required by many computer vision applications. Various approaches have been used for image denoising over the years, from spatial filtering to model-based approaches. Having outperformed all traditional methods, neural network based discriminative methods have gained popularity in recent years. However, most of these methods still struggle to achieve flexibility against various noise levels and types. In this thesis, we propose a deep convolutional autoencoder combined with a variant of feature pyramid network for image denoising. We use simulated data generated in the Blender software along with corrupted natural images during training to improve robustness against various noise levels and types. Our experimental results show that the proposed method can achieve competitive performance in blind Gaussian denoising with significantly less training time than state-of-the-art methods. Extensive experiments show that the proposed method gives promising performance over a wide range of noise levels with a single network.
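Denoising quality in this setting is commonly reported as PSNR. The sketch below is the generic definition of that metric, not code from the thesis, for 8-bit images flattened to lists of pixel values.

```python
# Generic PSNR metric (standard definition, not thesis code): higher is
# better; identical images give infinite PSNR.

import math

def psnr(reference, estimate, peak=255.0):
    mse = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```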
  • Master Thesis
    HTTP adaptive streaming with advanced transport
    (2018-09) Arısu, Şevket; Beğen, Ali Cengiz; Civanlar, Reha; Karalar, T. C.; Department of Computer Science
    QUIC (Quick UDP Internet Connections) is an experimental, low-latency transport network protocol proposed by Google, which is still being improved and specified in the IETF. The viewer's quality of experience (QoE) in HTTP adaptive streaming (HAS) applications may be improved with the help of QUIC's low latency, improved congestion control, and multiplexing features. In this master thesis, we measured the streaming performance of QUIC on wireless and cellular networks in order to understand whether the problems that occur when running HTTP over TCP can be reduced by using HTTP over QUIC. The performance of QUIC was tested in the presence of network interface changes caused by the mobility of the viewer. We observed that QUIC resulted in quicker start of media streams and a better streaming and seeking experience, especially during higher levels of network congestion, and that it performed better than TCP when the viewer was mobile and switched between wireless networks. Furthermore, we investigated QUIC's performance in an emulated network with various amounts of loss and delay to evaluate how QUIC's multiplexing feature would benefit HAS applications. We compared the performance of HAS applications that multiplex video streams using HTTP/1.1 over multiple TCP connections, HTTP/2 over one TCP connection, and QUIC over one UDP connection. We observed that QUIC provided better performance than TCP on a network with large delays. However, QUIC did not provide a significant improvement when the loss rate was large.
  • Master Thesis
    FinSentiment: predicting financial sentiment and risk through transfer learning
    Ergün, Zehra Erva; Sefer, Emre; Yıldız, Olcay Taner; Yeniterzi, R.; Department of Computer Science
    There is increasing interest in financial text mining tasks. Significant progress has been made by using deep learning-based models trained on generic corpora, which also show reasonable results on financial text mining tasks such as financial sentiment analysis. However, financial sentiment analysis is still a demanding task because of the scarcity of labeled data for the financial domain and its specialized language. General-purpose deep learning methods are not as effective, mainly due to the specialized language used in the financial context. In this study, we focus on enhancing the performance of financial text mining tasks by improving existing pretrained language models via NLP transfer learning. Pretrained language models demand a small quantity of labeled samples, and they can be enhanced further by training them on domain-specific corpora. We propose an enhanced model, FinSentiment, which incorporates enhanced versions of a number of recently proposed pretrained models, such as BERT, XLNet, and RoBERTa, to better perform across NLP tasks in the financial domain by training these models on financial domain corpora. The corresponding finance-specific models in FinSentiment are called Fin-BERT, Fin-XLNet, and Fin-RoBERTa, respectively. We also propose variants of these models jointly trained over financial domain and general corpora. Our finance-specific FinSentiment models in general show the best performance across 3 financial sentiment analysis datasets, even when only a part of these models is fine-tuned with a smaller training set. Our results improve on the existing results for these datasets under every tested performance criterion. Extensive experimental results demonstrate the effectiveness and robustness of especially RoBERTa pretrained on financial corpora. Overall, we show that NLP transfer learning techniques are favorable solutions to financial sentiment analysis tasks.
Financial risk is empirically quantified in terms of asset return volatility, which is the degree of deviation from the expected return of the asset. In financial risk management, predicting asset volatility is one of the most crucial problems because of its important role in making investment decisions. Even though a number of previous studies have investigated the role of natural language knowledge in enhancing the quality of volatility predictions, volatility estimation can still be improved via recent deep learning techniques. Specifically, extracting financial knowledge from text through transfer learning approaches such as BERT has not been used in risk prediction. Here, we come up with RiskBERT, the first BERT-based transfer learning method to predict asset volatility by simultaneously considering both a broad set of financial attributes and financial sentiment. As the language dataset, we utilize transcripts from the annual 10-K filings of publicly traded companies to train our model. Our proposed model, RiskBERT, uses an attention mechanism to model verbal context and performs remarkably better than state-of-the-art methods and baselines such as historical volatility. We observe such outperformance even when RiskBERT is fine-tuned with a smaller training set. We found RiskBERT to be more effective in risk prediction after the Sarbanes-Oxley Act of 2002 was passed, since this legislation has made the annual reports more effective. Overall, we show that NLP transfer learning techniques are favorable solutions to the financial risk prediction task. Our pretrained models and source code will be publicly available once the review is finished.
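The prediction target described above can be made concrete with a short sketch. This uses the common definition of realized volatility as the standard deviation of log returns; the thesis's exact windowing and scaling may differ.

```python
# Realized volatility sketch (common definition, not thesis code):
# standard deviation of log returns computed from a price series.

import math

def realized_volatility(prices):
    # Log returns between consecutive prices.
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / len(rets)
    return math.sqrt(var)
```

A flat price series has zero volatility; any price movement yields a positive value, which is what a model like the one above is trained to predict.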
  • Master Thesis
    Solving the 3-SAT problem using a quantum-simulated absorbing classical random walk approach
    Demirezen, Alp; Öztop, Erhan; Aydoğan, Reyhan; Say, C. C.; Department of Computer Science
    Quantum computing offers novel approaches for solving computationally hard problems. In this thesis, we present a quantum algorithm based on the quantum simulation of Schöning's algorithm for solving the 3-SAT problem. We first introduce the concept of a quantum-simulated classical absorbing random walk on a hypercube and illustrate the idea using Markov chains. We then describe the quantum algorithm built on this concept for solving the 3-SAT problem. The algorithm starts by creating the equal superposition of all assignments to the variables, representing the vertices of a hypercube. The next state is determined by querying an oracle that checks whether a clause is satisfied. Accordingly, one of the variables from an unsatisfied clause is flipped, as in Schöning's algorithm. The resulting algorithm finds the solution with a probability equivalent to the expected success probability of Schöning's algorithm averaged over all possible initial states. The algorithm uses a number of qubits linear in the number of variables, provided that reset is possible, and its performance is demonstrated on several 3-SAT instances. Compared to Grover's algorithm, the proposed algorithm performs better in most cases in terms of gate count and circuit depth.
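For reference, the classical procedure being quantum-simulated is Schöning's random walk. The sketch below is the standard textbook version of that classical algorithm, not the quantum construction; the clause encoding (signed 1-indexed literals) and restart count are my own choices.

```python
# Classical Schöning random walk for 3-SAT (textbook sketch): start from a
# random assignment; while some clause is unsatisfied, flip a random
# variable from a random unsatisfied clause; restart after 3n steps.

import random

def schoening(clauses, n_vars, tries=50, seed=0):
    rng = random.Random(seed)
    for _ in range(tries):
        assign = [rng.random() < 0.5 for _ in range(n_vars)]
        for _ in range(3 * n_vars):
            unsat = [c for c in clauses
                     if not any(assign[abs(l) - 1] == (l > 0) for l in c)]
            if not unsat:
                return assign  # satisfying assignment found
            lit = rng.choice(rng.choice(unsat))  # variable from an unsat clause
            assign[abs(lit) - 1] = not assign[abs(lit) - 1]
    return None
```

The quantum algorithm described above simulates exactly this walk, with an oracle playing the role of the clause check.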
  • Master Thesis
    Uncertainty assessment for speaker verification systems using a Bayesian approach
    (2021-01-18) Süslü, Çağıl; Demiroğlu, Cenk; Sözer, Hasan; Güz, Ü.; Department of Computer Science
    Automatic Speaker Verification (ASV) systems are developed to discriminate genuine speakers from spoofing attacks, and they are used as a security application in various industries (e.g., banking and telephone-based systems). Spoofing countermeasure systems (SCS) are important for ASV systems to protect themselves against spoofing attacks. In general, SCSs are developed using the cross-entropy loss function and a softmax classification layer to achieve the best classification scores. Even though the softmax function is popularly used as a classification layer in deep neural networks, it increases the uncertainty of the estimated class probabilities by squashing the probabilistic predictions of the predictive models. The aim of this work was to decrease the uncertainty of the conventional cross-entropy and softmax SCS by using a Bayesian approach. To accomplish this, multiple SCSs were developed to outperform the baseline system of the Automatic Speaker Verification Spoofing and Countermeasures 2017 Challenge. The Bayesian approach was applied to the best model (i.e., the model with the lowest EER) to decrease the uncertainty of the conventional cross-entropy and softmax SCS. The uncertainty of both systems was compared using the probability distribution function, the AUC value, and the ROC curve. As can be observed from the ROC curve, the Bayesian network decreased the uncertainty of the conventional cross-entropy and softmax SCS by increasing the AUC value by 14%. The Bayesian network also provided the lowest EER (16.79%), outperforming the baseline system of the ASVspoof 2017 challenge.
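The EER reported above has a standard definition: the operating point where the false-accept and false-reject rates are equal. The sketch below is a generic threshold-scanning approximation of that metric, not code from the thesis.

```python
# Generic EER sketch (standard definition, not thesis code): scan candidate
# thresholds over the pooled scores and return (FAR + FRR) / 2 at the
# threshold where the two rates are closest.

def eer(genuine_scores, spoof_scores):
    best = (1.0, None)  # (smallest |FAR - FRR| seen, EER estimate there)
    for t in sorted(genuine_scores + spoof_scores):
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        far = sum(s >= t for s in spoof_scores) / len(spoof_scores)
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]
```

Perfectly separable genuine and spoof scores give an EER of 0; overlapping distributions push it toward 0.5.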
  • Master Thesis
    A self-establishing clustered network architecture for blockchain
    Doğan, Orkun; Çakmakçı, Kübra Kalkan; Sözer, Hasan; Alagöz, F.; Department of Computer Science
    Blockchain technology has branched out into many industries, such as healthcare, manufacturing, agriculture, and entertainment, in both its public and non-public variants. In principle, blockchain provides these industries with an immutable ledger, allowing the processes in its application environment to be handled in a decentralized manner. However, some of the challenges blockchain has faced remain, such as the degree of its scalability, the level of security it provides, and the transparency of network transactions. In this thesis, a novel approach to a distributed, permissionless blockchain network is explored, using hierarchical clustering to group nodes based on the latency of their connections to one another. These clusters of nodes are allowed to work on their respective local chains and to add the verified local chains to the global chain used by the entire system. The network's throughput and overall latency are evaluated and compared with other blockchain applications, namely a simulation of the Bitcoin network itself and another approach that uses a method called Community Clustering. We collected the data for this comparison in the same environment for our approach, the Bitcoin network, and the Community Clustering network. The comparison of the collected data shows that our clusters improve the transaction throughput of the network: an increase in average throughput and a drastic decrease in latency are observed.
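The latency-based grouping idea above can be illustrated with a simplified sketch. The thesis uses hierarchical clustering; this greedy variant with a hypothetical latency bound only conveys the intent of grouping low-latency neighbors.

```python
# Simplified latency-based clustering (illustrative greedy variant, not the
# thesis's hierarchical algorithm): a node joins the first cluster whose
# head it can reach within the latency bound, else it starts a new cluster.

def cluster_by_latency(nodes, latency, bound):
    """latency: dict mapping frozenset({a, b}) -> measured latency in ms."""
    clusters = []  # each cluster is a list whose first element is the head
    for n in nodes:
        for c in clusters:
            if latency[frozenset((n, c[0]))] <= bound:
                c.append(n)
                break
        else:
            clusters.append([n])  # no nearby head: n becomes a new head
    return clusters
```

Each resulting cluster would then maintain its own local chain before merging into the global chain.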
  • Master Thesis
    An ecologically valid reference frame for perspective invariant action recognition
    Bayram, Berkay; Öztop, Erhan; Kıraç, Mustafa Furkan; Uğur, E.; Department of Computer Science
    In robotics, objects and body parts can be represented in various coordinate frames to ease computation. In biological systems, body or body part centered coordinate frames have been proposed as possible reference frames that the brain uses for interacting with the environment. Coordinate transformations are standard tools in robotics and can facilitate perspective invariant action recognition and action prediction based on observed actions of other agents. Although it is known that human adults can do explicit coordinate transformations, it is not clear whether this capability is used for recognizing and understanding the actions of others. Mirror neurons, found in the ventral premotor cortex of macaque monkeys, seem to undertake action understanding in a perspective invariant way, which may rely on lower level perceptual mechanisms. To this end, in this paper, we propose a novel reference frame that is ecologically plausible and can sustain basic action understanding and mirror function. We demonstrate the potential of this representation by simulation of an upper body humanoid robot with an action repertoire consisting of push, poke, move-away, bring-to-mouth, bring-left and bring-right actions. The simulation experiments indicate that the representation is suitable for action recognition and effect prediction in a perspective invariant way, and thus can be deployed as an artificial mirror system for robotic applications.
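The coordinate-transformation step underpinning the abstract above can be shown in a minimal 2-D sketch. This is illustrative only, not the thesis's proposed reference frame: an observed point is expressed in an agent-centered frame by translating to the agent's position and rotating by its heading, making the representation independent of the observer's viewpoint.

```python
# Minimal 2-D agent-centered frame transform (illustrative, not the
# thesis's frame): translate the point to the agent's origin, then rotate
# by the negative of the agent's heading so +x is "forward" for the agent.

import math

def to_body_frame(point, body_pos, body_heading):
    dx, dy = point[0] - body_pos[0], point[1] - body_pos[1]
    c, s = math.cos(-body_heading), math.sin(-body_heading)
    return (c * dx - s * dy, s * dx + c * dy)
```

A point one unit "ahead" of the agent maps to (1, 0) regardless of where the agent stands or which way it faces, which is the perspective invariance the abstract relies on.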
  • Master Thesis (Restricted)
    High-performance low-complexity near-lossless embedded memory compression for HDTV
    (2014-08) Palaz, Okan; Uğurdağ, H. Fatih; Sözer, Hasan; Erdem, Tanju; Soyak, E.; Gören Uğurdağ, S.; Department of Computer Science
    HDTV video processors must keep one or more past frames in memory in order to perform frame rate conversion, deinterlacing, and other video enhancement operations. Reading and writing uncompressed frames at HD resolutions requires high memory bandwidth, which can be reduced by compressing the frames. Ordinary video compression methods are not suitable for this use, because while they reduce network traffic they create additional memory traffic. A compression method for this purpose must be high-performance, low-complexity, and lossless (or near-lossless). Such compression methods are called Embedded Compression (EC). In this work, a novel end-to-end embedded compression method is proposed. The proposed method has sufficient performance to operate at 4K Ultra HD resolution at 30 Hz on a 180 nm ASIC, and it reaches a per-core clock frequency close to twice that of comparable works.
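The near-lossless property described above can be illustrated with a toy predictive coder. This is generic DPCM with bounded-error quantization, not the thesis's hardware method: each pixel is predicted from its left neighbor and the residual is quantized so the reconstruction error never exceeds a fixed bound.

```python
# Toy near-lossless DPCM (generic illustration, not the thesis's method):
# quantizing the left-neighbor prediction residual with step 2*max_err + 1
# guarantees |original - reconstructed| <= max_err for integer pixels.

def dpcm_encode(pixels, max_err=1):
    step = 2 * max_err + 1
    residuals, prev = [], 0
    for p in pixels:
        q = round((p - prev) / step)  # quantized prediction residual
        residuals.append(q)
        prev += q * step              # track the decoder's reconstruction
    return residuals

def dpcm_decode(residuals, max_err=1):
    step = 2 * max_err + 1
    out, prev = [], 0
    for q in residuals:
        prev += q * step
        out.append(prev)
    return out
```

Small residuals are cheap to entropy-code, which is where the memory-bandwidth saving comes from, while the error bound keeps the scheme near-lossless.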