Faculty of Engineering
Permanent URI for this community: https://hdl.handle.net/10679/10
Browsing by Institution Author "AKTÜRK, Ismail"
Now showing 1 - 3 of 3
Conference Object (Publication, Metadata only)
Do not predict – Recompute! How value recomputation can truly boost the performance of invisible speculation (IEEE, 2021)
Sakalis, C.; Chowdhury, Z.; Wadle, S.; Aktürk, İsmail; Ros, A.; Sjalander, M.; Kaxiras, S.; Karpuzcu, U.; Computer Science; AKTÜRK, Ismail

Recent architectural approaches that address speculative side-channel attacks aim to prevent software from exposing the microarchitectural state changes of transient execution. The Delay-on-Miss technique is one such approach: it simply delays loads that miss in the L1 cache until they become non-speculative, resulting in no transient changes in the memory hierarchy. However, this costs performance, prompting the use of value prediction (VP) to regain some of the lost performance. The problem, though, cannot be solved by simply introducing a new kind of speculation (value prediction). Value-predicted loads have to be validated, and validation cannot commence until the load becomes non-speculative. Thus, value-predicted loads occupy the same amount of precious core resources (e.g., reorder buffer entries) as Delay-on-Miss. The end result is that VP yields only marginal benefits over Delay-on-Miss. In this paper, our insight is that we can achieve the same goal as VP (increasing performance by providing the value of loads that miss) without incurring its negative side effect (delaying the release of precious resources) if we can safely and non-speculatively recompute a value in isolation (without it being observable from the outside), so that we do not expose any information by transferring such a value through the memory hierarchy. Value Recomputation, which trades computation for data transfer, was previously proposed in an entirely different context: to reduce energy-expensive data transfers in the memory hierarchy. In this paper, we demonstrate the potential of value recomputation in relation to the Delay-on-Miss approach of hiding speculation, discuss the trade-offs, and show that we can achieve the same level of security, reaching 93% of the unsecured baseline performance (5% higher than Delay-on-Miss) and exceeding (by 3%) what even an oracular (100% accuracy and coverage) value predictor could do.
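To make the contrast between the policies concrete, the following minimal Python sketch mimics the decision a core could make for a speculative load that misses in L1. It is an illustrative reading of the abstract, not the paper's mechanism: the Load class, the slice_ops field, and the handle_l1_miss function are invented names, and the "recomputation slice" is only a stand-in for whatever core-local information would actually be needed to recompute the value.

```python
# Hypothetical sketch (not the paper's implementation): contrasts how a
# speculative load that misses in L1 could be handled under Delay-on-Miss,
# value prediction, and value recomputation. All names are invented.

from dataclasses import dataclass, field


@dataclass
class Load:
    addr: int
    speculative: bool
    # Hypothetical "recomputation slice": the core-local operations that
    # could re-derive the loaded value without touching the memory hierarchy.
    slice_ops: list = field(default_factory=list)


def handle_l1_miss(load: Load, policy: str) -> str:
    """Decide what to do with a load that misses in the L1 cache."""
    if not load.speculative:
        return "issue_to_memory"                 # safe: nothing transient is exposed

    if policy == "delay_on_miss":
        return "delay_until_non_speculative"     # stalls, holding ROB entries

    if policy == "value_prediction":
        # Predicted loads still hold their resources until validated,
        # and validation must wait for non-speculative status.
        return "predict_then_validate_later"

    if policy == "value_recomputation":
        # If a recomputation slice is available, re-derive the value with
        # core-local computation only: nothing travels through the memory
        # hierarchy, so no microarchitectural state is exposed.
        if load.slice_ops:
            return "recompute_in_isolation"
        return "delay_until_non_speculative"     # fall back to Delay-on-Miss

    raise ValueError(f"unknown policy: {policy}")


if __name__ == "__main__":
    miss = Load(addr=0x1000, speculative=True, slice_ops=["mul r1, r2", "add r1, 4"])
    for p in ("delay_on_miss", "value_prediction", "value_recomputation"):
        print(p, "->", handle_l1_miss(miss, p))
```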
Conference Object (Publication, Metadata only)
Exploring scaling efficiency of Intel Loihi neuromorphic processor (IEEE, 2023)
Uludağ, Recep Buğra; Çağdaş, S.; Işler, Y. S.; Şengör, N. S.; Aktürk, İsmail; Computer Science; AKTÜRK, Ismail; Uludağ, Recep Buğra

In this paper, we examine how scaling efficiency evolves in winner-take-all (WTA) network models on the Intel Loihi neuromorphic processor as network-related features such as network size, neuron type, and connectivity scheme change. By analyzing these relationships, our study aims to shed light on the intricate interplay between spiking neural network (SNN) features and the efficiency of neuromorphic systems as they scale up. The findings presented in this paper are expected to enhance the comprehension of scaling efficiency in neuromorphic hardware, providing valuable insights for researchers and developers optimizing the performance of large-scale SNNs on neuromorphic architectures.

Article (Publication, Metadata only)
Weight update skipping: Reducing training time for artificial neural networks (IEEE, 2021-12)
Safayenikoo, P.; Aktürk, İsmail; Computer Science; AKTÜRK, Ismail

Artificial Neural Networks (ANNs) are known as state-of-the-art techniques in Machine Learning (ML) and have achieved outstanding results in data-intensive applications such as recognition, classification, and segmentation. These networks mostly use deep layers of convolution and/or fully connected layers with many filters in each layer, demanding a large amount of data and tunable hyperparameters to achieve competitive accuracy. As a result, the storage, communication, and computational costs of training (in particular, the time spent on training) become limiting factors to scaling them up. In this paper, we propose a new training methodology for ANNs that exploits the observation that the improvement in accuracy shows temporal variations, which allows us to skip updating weights when the variation is minuscule. During such time windows, we keep updating the bias, which ensures the network still trains and avoids overfitting; however, we selectively skip updating weights (and their time-consuming computations). This training approach achieves virtually the same accuracy with considerably less computational cost and reduces the time spent on training. We developed two variations of the proposed training method for selectively updating weights and call them i) Weight Update Skipping (WUS) and ii) Weight Update Skipping with Learning Rate Scheduler (WUS+LR). We evaluate these two approaches on state-of-the-art models, including AlexNet, VGG-11, VGG-16, and ResNet-18, on the CIFAR datasets; we also use the ImageNet dataset for AlexNet, VGG-16, and ResNet-18. On average, WUS and WUS+LR reduced the training time (compared to the baseline) by 54% and 50% on CPU and 22% and 21% on GPU, respectively, for CIFAR-10; by 43% and 35% on CPU and 22% and 21% on GPU, respectively, for CIFAR-100; and finally by 30% and 27% for ImageNet, respectively.
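As a rough illustration of the weight-update-skipping idea, the sketch below trains a toy logistic regression with plain NumPy and skips the (relatively expensive) weight update whenever accuracy has barely moved over a short window, while always updating the bias. The plateau window, threshold, and model are invented for the example; this is not the paper's WUS/WUS+LR implementation.

```python
# Minimal sketch of the weight-update-skipping idea on a toy logistic
# regression. Illustrative only: the skip criterion and hyperparameters
# below are assumptions made for this example.

import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data.
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(5)
b = 0.0
lr = 0.1
acc_history = []


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


for epoch in range(100):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)

    acc = np.mean((p > 0.5) == y)
    acc_history.append(acc)

    # Skip the weight update when accuracy has barely changed over the last
    # few epochs (hypothetical criterion); the bias is always updated so the
    # model keeps training.
    window = acc_history[-5:]
    skip_weights = len(window) == 5 and (max(window) - min(window)) < 1e-3

    if not skip_weights:
        w -= lr * grad_w          # full (expensive) weight update
    b -= lr * grad_b              # cheap bias update, never skipped

print(f"final accuracy: {acc_history[-1]:.3f}")
```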