
dc.contributor.author: Kabbani, Taylan
dc.contributor.author: Duman, Ekrem
dc.date.accessioned: 2023-08-11T12:49:30Z
dc.date.available: 2023-08-11T12:49:30Z
dc.date.issued: 2022
dc.identifier.issn: 2169-3536 (en_US)
dc.identifier.uri: http://hdl.handle.net/10679/8644
dc.identifier.uri: https://ieeexplore.ieee.org/document/9877940
dc.description.abstract: Deep Reinforcement Learning (DRL) algorithms can scale to previously intractable problems. DRL makes it possible to automate profit generation in the stock market by combining the 'prediction' step for financial asset prices and the 'allocation' step for the portfolio into one unified process, producing fully autonomous systems that interact with their environment and learn optimal decisions through trial and error. This work presents a DRL model that generates profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem as a Partially Observed Markov Decision Process (POMDP), taking into account the constraints imposed by the stock market, such as liquidity and transaction costs. We then solve the formulated POMDP using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, reporting a Sharpe ratio of 2.68 on the unseen data set (test data). From the point of view of stock market forecasting and intelligent decision-making, this paper demonstrates the advantages of DRL over other types of machine learning in financial markets and its credibility in strategic decision-making. (en_US)
dc.language.iso: eng (en_US)
dc.publisher: IEEE (en_US)
dc.relation.ispartof: IEEE Access
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights: openAccess
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: Deep reinforcement learning approach for trading automation in the stock market (en_US)
dc.type: Article (en_US)
dc.description.version: Publisher version (en_US)
dc.peerreviewed: yes (en_US)
dc.publicationstatus: Published (en_US)
dc.contributor.department: Özyeğin University
dc.contributor.authorID: (ORCID 0000-0001-5176-6186 & YÖK ID 142351) Duman, Ekrem
dc.contributor.ozuauthor: Duman, Ekrem
dc.identifier.volume: 10 (en_US)
dc.identifier.startpage: 93564 (en_US)
dc.identifier.endpage: 93574 (en_US)
dc.identifier.wos: WOS:000853807800001
dc.identifier.doi: 10.1109/ACCESS.2022.3203697 (en_US)
dc.subject.keywords: Autonomous agent (en_US)
dc.subject.keywords: Deep reinforcement learning (en_US)
dc.subject.keywords: MDP (en_US)
dc.subject.keywords: Sentiment analysis (en_US)
dc.subject.keywords: Stock market (en_US)
dc.subject.keywords: Technical indicators (en_US)
dc.subject.keywords: Twin delayed deep deterministic policy gradient (en_US)
dc.identifier.scopus: SCOPUS:2-s2.0-85137848489
dc.contributor.ozugradstudent: Kabbani, Taylan
dc.relation.publicationcategory: Article - International Refereed Journal - Institutional Academic Staff and Graduate Student
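
As a rough illustration of the pipeline the abstract above describes (a minimal sketch, not the authors' code): a TD3 agent from stable-baselines3 is trained on a continuous-control environment standing in for the paper's custom POMDP trading environment, and the resulting strategy is scored by its annualized Sharpe ratio. The environment name, training budget, and return series below are all illustrative assumptions.

import numpy as np
import gymnasium as gym
from stable_baselines3 import TD3

# "Pendulum-v1" is a placeholder for the paper's trading environment,
# which is formulated as a POMDP with liquidity and transaction-cost
# constraints; any continuous-action environment works with TD3.
env = gym.make("Pendulum-v1")
model = TD3("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)  # toy training budget, illustration only

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    # Annualized Sharpe ratio of a series of per-period (e.g. daily) returns.
    excess = np.asarray(returns) - risk_free / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Hypothetical daily returns of the learned policy on held-out test data:
daily_returns = np.random.default_rng(0).normal(5e-4, 1e-2, size=252)
print(f"annualized Sharpe ratio: {sharpe_ratio(daily_returns):.2f}")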


