Publication: A revised approach to cryptocurrency portfolio optimization using advanced Q-learning and policy iteration frameworks
dc.contributor.advisor | Albey, Erinç | |
dc.contributor.author | Altok, Ceren | |
dc.contributor.committeeMember | Albey, Erinç | |
dc.contributor.committeeMember | Önal, Mehmet | |
dc.contributor.committeeMember | Güler, M. G. | |
dc.contributor.department | Department of Data Science | |
dc.date.accessioned | 2024-08-30T14:23:41Z | |
dc.date.available | 2024-08-30T14:23:41Z | |
dc.description.abstract | Despite all the factors that cause concern among investors, such as the volatility and decentralization of the crypto world, the popularity of cryptocurrencies continues to grow steadily. The cryptocurrency market still holds its allure for many investors due to the high profit levels it has experienced in the past. With numerous altcoins entering the market, portfolio management becomes much more challenging. In the literature, we come across numerous studies proposing efficient portfolio management techniques for cryptocurrencies. This study presents proposed models developed based on policy iteration and Q-learning algorithms. Under Q-learning, three distinct sub-models are introduced: Deep Q-Network (DQN), Double Deep Q-Network (DDQN), and Double Dueling Q-Network (DDDQN). All of these models are trained using 6-month training periods and compared across 10 different training and testing periods. Additionally, to evaluate both the proposed policy iteration and Q-learning models, baseline models were created for each algorithm, and the performance of the proposed models was assessed against these baselines. The results indicate that, among the Policy Iteration models, the proposed model achieves the highest average ROI of 3%, making it the top-performing model. Similarly, among the Q-learning models, the proposed DQN model surpasses both the baseline models and the other Q-learning models, with an average ROI of 2%. Considering all the models, the proposed Policy Iteration model achieves the highest average ROI, while the proposed DQN and DDDQN models demonstrate the lowest volatility in terms of ROI standard deviation. | |
dc.description.abstract | Yatırımcılar arasında volatilite ve merkezsizleşme gibi endişe yaratan tüm faktörlere rağmen, kripto para birimlerinin popülerliği istikrarlı bir şekilde artmaya devam etmektedir. Kripto para piyasası, geçmişte yaşadığı yüksek kar seviyeleri sebebiyle birçok yatırımcı için hala cazibesini korumaktadır. Günlük olarak birçok alternatif kripto para biriminin piyasaya girmesiyle, portföy yönetimi çok daha zorlu hale gelmektedir. Literatürde, kripto portföylerini verimli bir şekilde yönetmek için önerilen birçok çalışma bulunmaktadır. Bu çalışma, Policy Iteration ve Q-learning algoritmalarından türetilen yeni model önerileri sunmaktadır. Q-learning altında Deep Q-Network (DQN), Double Deep Q-Network (DDQN) ve Double Dueling Q-Network (DDDQN) olmak üzere üç farklı alt model tanıtılmaktadır. Bu modellerin hepsi, 6 aylık eğitim dönemleri kullanılarak eğitilmiştir ve 10 farklı eğitim ve test dönemi kullanılarak karşılaştırmalar yapılmıştır. Ayrıca, hem önerilen politika iterasyonu hem de önerilen Q-learning modellerini değerlendirebilmek için iki algoritma için de basit referans modelleri oluşturulmuş ve önerilen model değerlendirmeleri bu referans modelleri kullanılarak yapılmıştır. Sonuçlara göre, önerilen Policy Iteration modeli Policy Iteration modelleri arasında %3 ortalama ROI değeri ile en iyi modeldir. Benzer şekilde, önerilen Q-learning modelleri arasında DQN modeli, hem referans modelini hem de diğer Q-learning modellerini %2 ortalama ROI değeri ile geride bırakmaktadır. Tüm modelleri dikkate aldığımızda, önerilen Politika İterasyonu modelinin en yüksek ortalama ROI değerine sahip olduğunu, önerilen DQN ve DDDQN modellerinin ise ROI standart sapması açısından en düşük volatiliteye sahip olduğunu görmekteyiz. | |
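Editor's note on the Q-learning variants named in the abstract: the sketch below illustrates, in a hedged and simplified way, how the bootstrap target differs between vanilla DQN and Double DQN (the distinction that also underlies DDQN/DDDQN). It is not the thesis's implementation; the action space, reward, Q-value arrays, and the discount factor are all illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration of the bootstrap targets in DQN vs. Double DQN
# for a single transition (s, a, r, s'). The arrays stand in for the online
# and target networks' Q-value outputs over a small discrete action space
# (e.g. hold / buy / sell a coin); names, shapes, and values are assumptions.

gamma = 0.99                                   # discount factor (assumed)
r = 0.012                                      # example one-step reward, e.g. a portfolio return

q_online_next = np.array([0.10, 0.25, 0.18])   # online network's Q(s', .)
q_target_next = np.array([0.12, 0.20, 0.22])   # target network's Q(s', .)

# DQN: the target network both selects and evaluates the next action.
dqn_target = r + gamma * q_target_next.max()

# Double DQN: the online network selects the action, the target network
# evaluates it, which reduces the overestimation bias of the max operator.
a_star = q_online_next.argmax()
ddqn_target = r + gamma * q_target_next[a_star]

print(f"DQN target:        {dqn_target:.4f}")
print(f"Double DQN target: {ddqn_target:.4f}")
```

The dueling variant (DDDQN) changes the network architecture rather than this target, splitting the Q-value estimate into state-value and advantage streams; the target computation above would remain the same under the double-Q scheme.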
dc.identifier.uri | https://discover.ozyegin.edu.tr/iii/encore/record/C__Rb7037840 | |
dc.identifier.uri | https://hdl.handle.net/10679/10189 | |
dc.identifier.uri | https://tez.yok.gov.tr/ | |
dc.language.iso | eng | |
dc.publicationstatus | Unpublished | |
dc.rights | info:eu-repo/semantics/restrictedAccess | |
dc.subject.keywords | Portfolio management | |
dc.subject.keywords | Mathematical models | |
dc.subject.keywords | Financial services industry | |
dc.subject.keywords | Technological innovations | |
dc.subject.keywords | Cryptocurrencies | |
dc.subject.keywords | Machine learning | |
dc.subject.keywords | Reinforcement learning | |
dc.subject.keywords | Data science | |
dc.title | A revised approach to cryptocurrency portfolio optimization using advanced Q-learning and policy iteration frameworks | |
dc.title.alternative | Gelişmiş Q-öğrenme ve politika yineleme çerçeveleri kullanarak kripto para birimi portföyü optimizasyonuna revize edilmiş bir yaklaşım. | |
dc.type | Master's thesis | |
dspace.entity.type | Publication | |
relation.isOrgUnitOfPublication | 532ec7b7-12ad-4d22-8c4e-e0ecdafee80a | |
relation.isOrgUnitOfPublication.latestForDiscovery | 532ec7b7-12ad-4d22-8c4e-e0ecdafee80a |