Authors: Astekin, M.; Zengin, H.; Sözer, Hasan
Date accessioned: 2020-06-10
Date available: 2020-06-10
Date issued: 2018
ISBN: 978-153865035-6
ISSN: 2639-1589
URI: http://hdl.handle.net/10679/6600
DOI: https://doi.org/10.1109/BigData.2018.8621967

Abstract: Anomaly detection is a valuable feature for detecting and diagnosing faults in large-scale, distributed systems. These systems usually produce tens of millions of lines of logs that can be exploited for this purpose. However, centralized implementations of traditional machine learning algorithms fall short of analyzing this data in a scalable manner. One way to address this challenge is to employ distributed systems to analyze the immense amount of logs generated by other distributed systems. We conducted a case study to evaluate two unsupervised machine learning algorithms for this purpose on a benchmark dataset. In particular, we evaluated distributed implementations of the PCA and K-means algorithms. We compared the accuracy and performance of these algorithms both with respect to each other and with respect to their centralized implementations. Results showed that the distributed versions can achieve the same accuracy and provide a performance improvement of orders of magnitude when compared to their centralized counterparts. The performance of PCA turns out to be better than that of K-means, although we observed that the difference between the two tends to decrease as the degree of parallelism increases.

Language: eng
Access rights: info:eu-repo/semantics/restrictedAccess
Title: Evaluation of distributed machine learning algorithms for anomaly detection from large-scale system logs: a case study
Type: Conference paper
Pages: 2071-2077
WOS ID: 000468499302022
DOI: 10.1109/BigData.2018.8621967
Keywords: Log analysis; Distributed systems; Parallel processing; Anomaly detection; Big data; Machine learning
Scopus ID: 2-s2.0-85062634825
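
Illustration: the abstract describes evaluating distributed implementations of PCA and K-means on log data. The sketch below is only a minimal illustration of that kind of pipeline, assuming Apache Spark MLlib as the distributed framework and assuming each log session has already been parsed into an event-count feature vector; the record above does not specify the framework, the feature representation, or the anomaly-scoring rule.

    # Minimal sketch, assuming Spark MLlib (not confirmed by the record above).
    # Toy event-count vectors stand in for features extracted from parsed logs.
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import PCA
    from pyspark.ml.clustering import KMeans
    from pyspark.ml.linalg import Vectors

    spark = SparkSession.builder.appName("log-anomaly-detection").getOrCreate()

    # Hypothetical per-session event-count vectors (one row per log session).
    data = [
        (Vectors.dense([2.0, 0.0, 1.0, 0.0]),),
        (Vectors.dense([2.0, 0.0, 1.0, 1.0]),),
        (Vectors.dense([0.0, 5.0, 0.0, 3.0]),),  # unusual event mix
    ]
    df = spark.createDataFrame(data, ["features"])

    # Distributed PCA: project the vectors onto principal components; sessions
    # that deviate strongly in the projected space can be flagged as anomalies.
    pca_model = PCA(k=2, inputCol="features", outputCol="pca_features").fit(df)
    projected = pca_model.transform(df)

    # Distributed K-means: cluster the vectors; sessions far from every centroid
    # or falling into very small clusters can be flagged as anomalies.
    kmeans_model = KMeans(k=2, seed=1, featuresCol="features",
                          predictionCol="cluster").fit(df)
    clustered = kmeans_model.transform(df)

    clustered.show()
    spark.stop()

Both fit/transform stages run in parallel across the cluster, which is the property the case study compares against centralized implementations; the anomaly-flagging thresholds themselves are left unspecified here.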