Browsing by Author "Kamin, S."
Now showing 1 - 3 of 3
Article · Publication · Metadata only
Autotuning runtime specialization for sparse matrix-vector multiplication (ACM, 2016-04)
Yılmaz, Buse; Aktemur, Tankut Barış; Garzaran, M. J.; Kamin, S.; Kıraç, Mustafa Furkan; Computer Science
Runtime specialization is used to optimize programs based on partial information that becomes available only at run time. In this paper we apply autotuning to the runtime specialization of sparse matrix-vector multiplication to predict the best specialization method among several. In 91% to 96% of the predictions, either the best or the second-best method is chosen. The predictions achieve average speedups very close to those achievable when only the best methods are used. Using an efficient code generator and a carefully designed set of matrix features, we show that the runtime costs can be amortized, bringing performance benefits for many real-world cases.

Conference paper · Publication · Metadata only
Optimization by runtime specialization for sparse matrix-vector multiplication (ACM, 2014)
Kamin, S.; Jesus Garzaran, M.; Aktemur, Tankut Barış; Xu, D.; Yılmaz, Buse; Chen, Z.; Computer Science
Runtime specialization optimizes programs based on partial information available only at run time. It is applicable when some input data is used repeatedly while other input data varies, and it has the potential to generate highly efficient code. In this paper, we explore the potential for obtaining speedups for sparse matrix-dense vector multiplication using runtime specialization, in the case where a single matrix is to be multiplied by many vectors. We experiment with five methods involving runtime specialization, comparing them to methods that do not (including Intel's MKL library). Our focus in this work is evaluating the speedups obtainable with runtime specialization, without considering the overheads of code generation.
Our experiments use 23 matrices from the Matrix Market and Florida collections and run on five different machines. In 94 of those 115 cases, the specialized code runs faster than any version without specialization. Using only specialization, the average speedup with respect to Intel's MKL library ranges from 1.44x to 1.77x, depending on the machine. We also found that the best method depends on both the matrix and the machine; no single method is best for all matrices and machines.

Conference paper · Publication · Metadata only
Seyrek matris-vektör çarpımı için koşut zamanda özelleşmiş kod üretimi ve deneysel optimizasyon [Runtime-specialized code generation and empirical optimization for sparse matrix-vector multiplication] (IEEE, 2012)
Aktemur, Tankut Barış; Yıldız, Asım; Kamin, S.; Computer Science
This work describes the design of a library that generates high-speed programs for sparse matrix-vector multiplication, specialized to the contents of the matrix. The library enables code generation for the large matrices used in engineering problems such as signal processing applications, scientific computing, and finite element analysis. The generated code is selected from among many alternatives by empirical optimization, with the aim of choosing the variant best suited to the machine on which the code runs.
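The runtime-specialization idea common to the publications above can be sketched in a few lines: when one sparse matrix is multiplied by many vectors, code can be generated at run time that hard-codes the matrix's nonzero positions and values, paying the generation cost once and reusing the specialized routine for every vector. The toy generator below is a hypothetical illustration only; the papers' actual code generators, matrix features, and tuning heuristics are far more elaborate, and the function and variable names here are invented for this sketch.

```python
def generic_spmv(rows, x):
    """Generic multiply: rows is a list of (column, value) pairs per row."""
    return [sum(val * x[col] for col, val in row) for row in rows]

def specialize_spmv(rows):
    """Generate, at run time, a multiply function unrolled for this matrix."""
    lines = ["def spmv(x):", "    return ["]
    for row in rows:
        # Hard-code each row's nonzeros as a flat arithmetic expression.
        terms = " + ".join(f"{val!r} * x[{col}]" for col, val in row) or "0"
        lines.append(f"        {terms},")
    lines.append("    ]")
    namespace = {}
    exec("\n".join(lines), namespace)  # compile the generated source
    return namespace["spmv"]

# A 3x3 sparse matrix stored as per-row (column, value) pairs.
rows = [[(0, 2.0), (2, 1.0)], [(1, 3.0)], [(0, 4.0), (2, 5.0)]]
fast_spmv = specialize_spmv(rows)  # pay code-generation cost once...
x = [1.0, 2.0, 3.0]
assert fast_spmv(x) == generic_spmv(rows, x)  # ...then reuse for many vectors
```

The specialized version removes the per-element loop and indexing overhead of the generic routine, which is what makes amortizing the generation cost worthwhile when the same matrix is reused across many vectors.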