Publication: Deep transformer-based asset price and direction prediction
Institution Authors
Authors
Journal Title
Journal ISSN
Volume Title
Type
article
Access
openAccess
Attribution-NonCommercial-NoDerivs 4.0 International
Publication Status
Published
Creative Commons license
Except where otherwise noted, this item's license is described as openAccess
Abstract
The field of algorithmic trading, driven by deep learning methodologies, has garnered substantial attention in recent years. Transformers, convolutional neural networks, and patch embedding-based techniques have emerged as popular choices in the computer vision community. Here, inspired by cutting-edge computer vision methodologies and existing work demonstrating that time-series datasets can be converted into image-like representations, we apply advanced transformer-based and patch-based approaches to predicting asset prices and directional price movements. The employed transformer models include Vision Transformer (ViT), Data Efficient Image Transformers (DeiT), and Swin Transformer. We use ConvMixer as a patch embedding-based convolutional neural network architecture without a transformer. These models predict asset prices and directional movements from historical price data by leveraging the inherent image-like properties of the historical time-series dataset. Before applying the attention-based architectures, the historical price time series is transformed into two-dimensional images. This transformation incorporates various common technical financial indicators, each computed over a fixed number of consecutive days. Consequently, a diverse set of two-dimensional images is constructed, reflecting various dimensions of the dataset. The resulting images are then annotated with Hold, Buy, or Sell labels according to market valleys and peaks. According to the experiments, trained attention-based models consistently outperform the baseline convolutional architectures, particularly when applied to a subset of frequently traded Exchange-Traded Funds (ETFs). The superior performance of attention-based architectures, especially ViT, is evident in both accuracy and other financial evaluation metrics, particularly during extended testing and holding periods. These findings underscore the potential of transformer-based approaches to enhance predictive capabilities in asset price and directional forecasting. Our code and processed datasets are available at https://github.com/seferlab/price_transformer.
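To illustrate the image construction described in the abstract, the following is a minimal, hypothetical Python sketch, not the authors' released code: the indicator choices, window length, and valley/peak labeling rule are assumptions for illustration only. It stacks technical-indicator rows over a fixed window of consecutive days into a two-dimensional array per sample and labels each window Buy, Sell, or Hold from local price minima and maxima.

# Hypothetical sketch (not the authors' implementation): build 2-D "images" from a
# 1-D price series by stacking indicator rows over a sliding window of consecutive
# days, then label each window Buy/Sell/Hold from local valleys and peaks.
import numpy as np

def sma(x, w):
    # Simple moving average, padded at the start so the output matches len(x).
    pad = np.concatenate([np.full(w - 1, x[0]), x])
    return np.convolve(pad, np.ones(w) / w, mode="valid")

def momentum(x, w):
    # Price change over w days, zero-padded at the start.
    return np.concatenate([np.zeros(w), x[w:] - x[:-w]])

def to_images(close, window=15):
    # Stack indicator rows into arrays of shape (n_samples, n_indicators, window).
    indicators = np.stack([
        close,
        sma(close, 5),
        sma(close, 10),
        momentum(close, 5),
    ])  # shape: (n_indicators, T)
    images, centers = [], []
    for t in range(window, len(close)):
        images.append(indicators[:, t - window:t])
        centers.append(t - 1)  # index of the last day in each window
    return np.array(images), np.array(centers)

def label_peaks_valleys(close, centers, lookaround=5):
    # Buy at a local minimum, Sell at a local maximum, Hold otherwise.
    labels = []
    for t in centers:
        lo, hi = max(0, t - lookaround), min(len(close), t + lookaround + 1)
        neighborhood = close[lo:hi]
        if close[t] == neighborhood.min():
            labels.append("Buy")
        elif close[t] == neighborhood.max():
            labels.append("Sell")
        else:
            labels.append("Hold")
    return np.array(labels)

# Toy usage with a synthetic random-walk price series
prices = np.cumsum(np.random.default_rng(0).normal(0, 1, 300)) + 100
X, centers = to_images(prices)
y = label_peaks_valleys(prices, centers)
print(X.shape, y[:10])

The processed datasets and the full training pipeline for the transformer and patch-based models are available at the repository linked in the abstract.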
Date
2024
Publisher
IEEE