Browsing by Author "Zengin, H."
Now showing 1 - 2 of 2
Article (metadata only)
DILAF: A framework for distributed analysis of large-scale system logs for anomaly detection (Wiley, 2019-02)
Authors: Astekin, M.; Zengin, H.; Sözer, Hasan (Computer Science)

System logs constitute a rich source of information for the detection and prediction of anomalies. However, they can include a huge volume of data, which is usually unstructured or semi-structured. We introduce DILAF, a framework for distributed analysis of large-scale system logs for anomaly detection. DILAF comprises several processes to facilitate log parsing, feature extraction, and machine learning activities. It has two distinguishing features with respect to existing tools. First, it does not require the availability of the source code of the analyzed system. Second, it is designed to perform all the processes in a distributed manner to support scalable analysis in the context of large-scale distributed systems. We discuss the software architecture of DILAF and introduce an implementation of it. We conducted controlled experiments based on two datasets to evaluate the effectiveness of the framework. In particular, we evaluated the performance and scalability attributes under various degrees of parallelism. Results showed that DILAF maintains the same accuracy levels while achieving more than 30% performance improvement on average as the system scales, compared to baseline approaches that do not employ fully distributed processing.

Conference paper (metadata only)
Evaluation of distributed machine learning algorithms for anomaly detection from large-scale system logs: a case study (IEEE, 2018)
Authors: Astekin, M.; Zengin, H.; Sözer, Hasan (Computer Science)

Anomaly detection is a valuable feature for detecting and diagnosing faults in large-scale, distributed systems. These systems usually produce tens of millions of lines of logs that can be exploited for this purpose. However, centralized implementations of traditional machine learning algorithms fall short of analyzing this data in a scalable manner. One way to address this challenge is to employ distributed systems to analyze the immense amount of logs generated by other distributed systems. We conducted a case study to evaluate two unsupervised machine learning algorithms for this purpose on a benchmark dataset. In particular, we evaluated distributed implementations of the PCA and K-means algorithms. We compared the accuracy and performance of these algorithms both with respect to each other and with respect to their centralized implementations. Results showed that the distributed versions can achieve the same accuracy and provide a performance improvement by orders of magnitude when compared to their centralized counterparts. The performance of PCA turns out to be better than that of K-means, although we observed that the difference between the two tends to decrease as the degree of parallelism increases.
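The DILAF abstract describes a pipeline of log parsing, feature extraction, and machine learning over unstructured logs, but this listing carries no implementation details. The sketch below is a minimal, hypothetical illustration of that kind of distributed pipeline, assuming an Apache Spark deployment; the input path, the block-id regex, and all column names are placeholders rather than details from the paper.

```python
# Minimal sketch of a distributed log-parsing and feature-extraction pipeline
# in the spirit of the DILAF abstract. Assumes an Apache Spark cluster; the
# input path, regexes, and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.ml.feature import CountVectorizer

spark = SparkSession.builder.appName("log-anomaly-sketch").getOrCreate()

# Log parsing: read raw, unstructured log lines in parallel and extract a
# session key (here an HDFS-style block id) plus a coarse event template,
# obtained by masking variable parts such as numbers. No source code of the
# logged system is needed for this step.
raw = spark.read.text("hdfs:///logs/*.log")  # hypothetical location
parsed = (
    raw.withColumn("block_id", F.regexp_extract("value", r"(blk_-?\d+)", 1))
       .withColumn("event", F.regexp_replace("value", r"\d+", "<*>"))
       .where(F.col("block_id") != "")
)

# Feature extraction: aggregate each session's events into an event-count
# vector that downstream (distributed) learners can consume.
sessions = parsed.groupBy("block_id").agg(F.collect_list("event").alias("events"))
features = (
    CountVectorizer(inputCol="events", outputCol="features")
    .fit(sessions)
    .transform(sessions)
    .select("block_id", "features")
)
features.show(5, truncate=False)
```

Because every step operates on Spark DataFrames, parsing and feature extraction scale out with the cluster, which is the property the abstract highlights.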
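For the case study on distributed PCA and K-means, the following is a minimal sketch of how the two unsupervised algorithms can be applied to log-derived feature vectors with Spark MLlib; using Spark here is an assumption, not the paper's reported setup. It expects a DataFrame `features` with a vector column named "features" (such as the one built in the previous sketch), and the k values and score definitions are illustrative.

```python
# Minimal sketch of distributed K-means- and PCA-based anomaly scoring with
# Spark MLlib. Assumes a DataFrame `features` with a vector column "features";
# k values and thresholds are illustrative, not taken from the paper.
import numpy as np
from pyspark.sql import functions as F
from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import PCA

# Distributed K-means: cluster the sessions, then score each one by its
# distance to the assigned centroid; far-away points are anomaly candidates.
km_model = KMeans(k=2, featuresCol="features", seed=42).fit(features)
centers = km_model.clusterCenters()
km_score = F.udf(
    lambda v, c: float(np.linalg.norm(v.toArray() - centers[c])), "double"
)
km_scored = km_model.transform(features).withColumn(
    "km_score", km_score("features", "prediction")
)

# Distributed PCA: fit the top principal components, then score each session
# by its squared reconstruction error (residual energy outside the subspace).
pca_model = PCA(k=3, inputCol="features", outputCol="pca").fit(features)
P = pca_model.pc.toArray()  # columns are principal components

def spe(v):
    x = v.toArray()
    return float(np.sum((x - P @ (P.T @ x)) ** 2))

pca_scored = features.withColumn("pca_score", F.udf(spe, "double")("features"))

# Sessions whose score exceeds a chosen threshold are flagged as anomalous.
km_scored.orderBy(F.desc("km_score")).show(5)
pca_scored.orderBy(F.desc("pca_score")).show(5)
```

Scoring by distance to the assigned centroid (K-means) and by residual reconstruction error (PCA) are common ways to turn these unsupervised models into anomaly detectors; both scores are computed in parallel across the cluster, which is what makes the distributed versions scale where centralized implementations do not.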