Browsing by Author "Kabbani, Taylan"
Now showing 1 - 2 of 2
Article | Open Access
Deep reinforcement learning approach for trading automation in the stock market (IEEE, 2022)
Kabbani, Taylan; Duman, Ekrem; Industrial Engineering

Deep Reinforcement Learning (DRL) algorithms can scale to previously intractable problems. The automation of profit generation in the stock market is possible using DRL: the financial asset price "prediction" step and the portfolio "allocation" step are combined in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. This work presents a DRL model that generates profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem as a Partially Observed Markov Decision Process (POMDP), considering the constraints imposed by the stock market, such as liquidity and transaction costs. We then solve the formulated POMDP using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, reporting a 2.68 Sharpe ratio on the unseen test data set. From the point of view of stock market forecasting and intelligent decision-making, this paper demonstrates the superiority of DRL in financial markets over other types of machine learning and proves its credibility and advantages in strategic decision-making.

Master Thesis | Metadata only
Deep reinforcement learning approach for trading automation in the stock market
Kabbani, Taylan; Duman, Ekrem; Albey, Erinç; Alkaya, A. F.; Department of Data Science

Deep Reinforcement Learning (DRL) algorithms can scale to previously intractable problems.
The automation of profit generation in the stock market is possible using DRL: the financial asset price "prediction" step and the portfolio "allocation" step are combined in one unified process, producing a fully autonomous system capable of interacting with its environment to make optimal decisions through trial and error. In this study, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model that generates profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem as a Partially Observed Markov Decision Process (POMDP), considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, achieving a 2.68 Sharpe ratio on the test data set. From the point of view of stock market forecasting and intelligent decision-making, this study demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning such as supervised learning, and proves the credibility and advantages of strategic decision-making using DRL.
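The setup described in the abstracts — a continuous action that re-allocates the position at each step, a partially observed state, rewards net of transaction costs, and a Sharpe ratio as the evaluation metric — can be illustrated with a minimal sketch. This is not the thesis's actual environment (which uses ten technical indicators and news sentiment per stock); the `TradingEnv` class, its parameters, and the toy price series below are hypothetical simplifications for illustration only.

```python
import math

class TradingEnv:
    """Toy sketch of a continuous-action trading environment (POMDP-style).

    The agent observes only a short window of recent prices (partial
    observability); the action is a target position in [-1, 1] (short to
    long); the reward is the per-step return of the position net of a
    proportional transaction cost on the re-allocated amount.
    """

    def __init__(self, prices, cost=0.001, window=3):
        self.prices = prices
        self.cost = cost          # proportional transaction cost (assumed)
        self.window = window
        self.reset()

    def reset(self):
        self.t = self.window
        self.position = 0.0       # current fraction of capital invested
        self.value = 1.0          # normalized portfolio value
        return self._obs()

    def _obs(self):
        # Partial observation: only the recent price window is visible.
        return self.prices[self.t - self.window:self.t]

    def step(self, action):
        action = max(-1.0, min(1.0, action))        # clip to valid range
        trade_cost = self.cost * abs(action - self.position)
        ret = self.prices[self.t] / self.prices[self.t - 1] - 1.0
        reward = action * ret - trade_cost
        self.value *= 1.0 + reward
        self.position = action
        self.t += 1
        done = self.t >= len(self.prices)
        return (self._obs() if not done else None), reward, done


def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio of per-step returns (risk-free rate 0)."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    std = math.sqrt(var)
    return math.sqrt(periods_per_year) * mean / std if std > 0 else 0.0


# Roll a fixed half-long policy through a toy price series; a TD3 agent
# would instead learn the action from the observation.
prices = [100, 101, 103, 102, 104, 106, 105, 108]
env = TradingEnv(prices)
obs, rewards, done = env.reset(), [], False
while not done:
    obs, r, done = env.step(0.5)
    rewards.append(r)
print(round(env.value, 4), round(sharpe_ratio(rewards), 2))
```

In the actual work, the constant `0.5` action would be replaced by the TD3 actor network's output, and the observation would carry the indicator and sentiment features rather than raw prices.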