Browsing by Author "Organ, Zeynel Batuhan"
Article (Metadata only)
An extension to the classical mean-variance portfolio optimization model (Taylor & Francis, 2019-07)
Ötken, Çelen Naz; Organ, Zeynel Batuhan; Yıldırım, Elif Ceren; Çamlıca, Mustafa; Cantürk, Volkan Selim; Duman, Ekrem; Teksan, Zehra Melis; Kayış, Enis (Industrial Engineering)
The purpose of this study is to find a portfolio that maximizes risk-adjusted returns subject to constraints frequently faced in portfolio management, by extending the classical Markowitz mean-variance portfolio optimization model. We propose a new two-step heuristic approach, GRASP & SOLVER, that evaluates the desirability of an asset by combining several of its properties into a single parameter. Using a real-life data set, we conduct a simulation study to compare our solution to a benchmark (the S&P 500 index). We find that our method generates solutions satisfying nearly all of the constraints within reasonable computational time (under an hour), at the expense of a 13% reduction in the portfolio's annual return, highlighting the effect of introducing these practice-based constraints.

Master Thesis (Metadata only)
Rolling look-ahead approaches for optimal classification trees
Organ, Zeynel Batuhan; Kayış, Enis; Danış, Dilek Günneç; Albey, Erinç; Hanalioğlu, T.; Güler, M. G. (Department of Industrial Engineering)
Classification trees have gained tremendous attention in machine learning applications due to their inherently interpretable nature. Current state-of-the-art formulations for learning optimal binary classification trees suffer from scalability issues at larger depths or on larger instances. Moreover, they mostly fail to prove optimality even after long run times, and by fitting the training data perfectly while minimizing misclassification error they are likely to fail to generalize to the test data. We present a simple but powerful new formulation, which we call the rolling look-ahead learning approach. By dropping tractability variables that depend on instance size, we obtain a novel depth-two optimal binary classification tree formulation whose objective minimizes Gini impurity or misclassification error. The approach can be thought of as a middle ground between myopic and global optimization methods. For larger depths, we developed a hybrid approach that learns over a rolling horizon, looking ahead two levels at a time. It is much faster than the fastest known global optimization methods: it can solve an instance with around 50K rows and 135 features in less than 4 minutes at depth 8. In the majority of cases, the proposed approach also outperforms global optimization methods and CART in win count, tested across 7 depths, 10 folds, and 19 benchmark datasets, with improvements in out-of-sample accuracy of up to 16.8% and 11.9% over global optimization methods and CART, respectively.
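The first entry's abstract builds on the classical Markowitz mean-variance model. As background, here is a minimal sketch of that classical model only — not the paper's GRASP & SOLVER heuristic or its practice-based constraints — using the closed-form solution of maximizing mu'w - lambda * w'Sigma*w under a budget constraint; the returns and covariance matrix below are hypothetical illustration values:

```python
import numpy as np

def mean_variance_weights(mu, sigma, risk_aversion=1.0):
    """Closed-form solution of: max_w mu'w - lambda * w'Sigma w, s.t. sum(w) = 1.
    This classical model allows short positions; the practice-based constraints
    the paper studies (e.g. limits on holdings) require an extended formulation."""
    inv = np.linalg.inv(sigma)
    ones = np.ones(len(mu))
    # Lagrange multiplier enforcing the budget constraint sum(w) = 1
    eta = (ones @ inv @ mu - 2.0 * risk_aversion) / (ones @ inv @ ones)
    return inv @ (mu - eta * ones) / (2.0 * risk_aversion)

# Hypothetical expected annual returns and covariance matrix for three assets
mu = np.array([0.08, 0.12, 0.10])
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])

w = mean_variance_weights(mu, sigma, risk_aversion=2.0)
print(w, w.sum())  # the weights sum to 1 by construction
```

Raising `risk_aversion` shifts weight toward the low-variance assets; the paper's extension then layers real-world portfolio constraints on top of this objective.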
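The thesis abstract contrasts its depth-two rolling formulation with myopic, depth-one splitting. For reference, this is a sketch of that myopic baseline — the CART-style criterion of exhaustively choosing one split that minimizes weighted Gini impurity — not the thesis's joint two-level optimization; the toy data set is invented for illustration:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array: 1 - sum_k p_k^2."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Exhaustively pick the (feature, threshold) pair minimizing the weighted
    Gini impurity of the two children. This is the myopic depth-one criterion;
    the thesis's formulation instead optimizes two tree levels jointly."""
    n, d = X.shape
    best = (None, None, float("inf"))
    for j in range(d):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (j, t, score)
    return best

# Toy data: feature 0 separates the classes perfectly at threshold 2.0
X = np.array([[1.0, 5.0], [2.0, 4.0], [3.0, 1.0], [4.0, 2.0]])
y = np.array([0, 0, 1, 1])
j, t, score = best_split(X, y)
print(j, t, score)  # -> 0 2.0 0.0 (a pure split)
```

A greedy tree repeats this search at each node independently, which is why it can miss splits that only pay off one level deeper — the gap the rolling two-level look-ahead is designed to close.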