Natural and Mathematical Sciences
Permanent URI for this collection: https://hdl.handle.net/10679/313
Browsing by Institution Author "KARIKSIZ, Can Deha"
Now showing 1 - 6 of 6
Article, Metadata only
Eigenvalues and dynamical properties of weighted backward shifts on the space of real analytic functions (Institute of Mathematics, Polish Academy of Sciences, 2018)
Domański, P.; Karıksız, Can Deha
Usually, the backward shift is neither chaotic nor hypercyclic. We show that on the space A(Ω) of real analytic functions on a connected set Ω ⊂ ℝ with 0 ∈ Ω, the backward shift operator is chaotic and sequentially hypercyclic. We give criteria for chaos and for many other dynamical properties of weighted backward shifts on A(Ω), and for special classes of them we give full characterizations. We also describe the point spectrum and eigenspaces of weighted backward shifts on A(Ω).

Article, Open Access
Frequently hypercyclic weighted backward shifts on spaces of real analytic functions (TÜBİTAK, 2018)
Anahtarcı, Berkay; Karıksız, Can Deha
We study frequent hypercyclicity of weighted backward shift operators acting on locally convex spaces of real analytic functions. We obtain conditions for frequent hypercyclicity and linear chaoticity of these operators using dynamical transference principles and the frequent hypercyclicity criterion.

Conference Object, Metadata only
Learning in discrete-time average-cost mean-field games (IEEE, 2021)
Anahtarcı, Berkay; Karıksız, Can Deha; Saldı, Naci
In this paper, we consider learning of discrete-time mean-field games under an average-cost criterion. We propose a Q-iteration algorithm, based on the Banach fixed-point theorem, to compute the mean-field equilibrium when the model is known. We then extend this algorithm to the learning setting using fitted Q-iteration and establish the probabilistic convergence of the proposed learning algorithm. Our work on learning in average-cost mean-field games appears to be the first in the literature.

Article, Open Access
Learning mean-field games with discounted and average costs (Microtome Publishing, 2023)
Anahtarcı, Berkay; Karıksız, Can Deha; Saldı, N.
We consider learning approximate Nash equilibria for discrete-time mean-field games with stochastic nonlinear state dynamics subject to both average and discounted costs. To this end, we introduce a mean-field equilibrium (MFE) operator whose fixed point is a mean-field equilibrium, i.e., an equilibrium in the infinite-population limit. We first prove that this operator is a contraction and propose a learning algorithm that computes an approximate mean-field equilibrium by approximating the MFE operator with a random one. Using the contraction property of the MFE operator, we then establish an error analysis of the proposed learning algorithm. Finally, we show that the learned mean-field equilibrium constitutes an approximate Nash equilibrium for finite-agent games.

Article, Metadata only
Q-learning in regularized mean-field games (Springer, 2023-03)
Anahtarcı, Berkay; Karıksız, Can Deha; Saldı, N.
In this paper, we introduce a regularized mean-field game and study learning of this game under an infinite-horizon discounted reward function. Regularization is introduced by adding a strongly concave regularization function to the one-stage reward function in the classical mean-field game model. We establish a value-iteration-based learning algorithm for this regularized mean-field game using fitted Q-learning. In general, the regularization term makes the reinforcement learning algorithm more robust to the system components. Moreover, it enables us to establish an error analysis of the learning algorithm without imposing restrictive convexity assumptions on the system components, which are needed in the absence of a regularization term.

Article, Metadata only
Value iteration algorithm for mean-field games (Elsevier, 2020-09)
Anahtarcı, Berkay; Karıksız, Can Deha; Saldı, Naci
In the literature, the existence of mean-field equilibria has been established for discrete-time mean-field games under both the discounted-cost and the average-cost optimality criteria. In this paper, we provide a value iteration algorithm to compute a stationary mean-field equilibrium for both criteria, whose existence was proved previously. We establish that the value iteration algorithm converges to the fixed point of a mean-field equilibrium operator. Then, using this fixed point, we construct a stationary mean-field equilibrium. In our value iteration algorithm, we use Q-functions instead of value functions.
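The backward-shift papers above study the operator that, on Taylor coefficients, maps (a₀, a₁, a₂, …) to (w₁a₁, w₂a₂, …). A minimal illustrative sketch on truncated coefficient sequences (not code from the papers; the function name and truncation are illustrative only):

```python
# Illustrative sketch: the weighted backward shift B_w acts on the Taylor
# coefficients (a_0, a_1, a_2, ...) of an analytic function by
# B_w: (a_0, a_1, ...) |-> (w_1 a_1, w_2 a_2, ...).
from math import factorial

def weighted_backward_shift(coeffs, weights):
    """Apply B_w to a truncated coefficient sequence.

    coeffs  -- [a_0, ..., a_n]
    weights -- [w_1, ..., w_n], one weight per shifted coefficient
    """
    return [w * a for w, a in zip(weights, coeffs[1:])]

# Example: the coefficients 1/k! of e^x, with unit weights (the plain
# backward shift), are sent to the left-shifted sequence 1/(k+1)!.
coeffs = [1 / factorial(k) for k in range(6)]
shifted = weighted_backward_shift(coeffs, [1.0] * 5)
# shifted[k] == 1/(k+1)!
```

The dynamical questions in the papers (chaos, hypercyclicity, eigenvalues) concern the behavior of iterates of this map on the full infinite-dimensional space A(Ω), which the truncation above cannot capture; the sketch only fixes the action of the operator itself.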
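Several of the mean-field-game abstracts above share one structure: a mean-field equilibrium is a fixed point of an operator, computed by alternating Q-iteration against a fixed population distribution with an update of that distribution. A toy sketch of that fixed-point loop, with an entirely hypothetical finite model (the transition kernel, cost coupling, and all constants below are invented for illustration and are not the models from the papers):

```python
import numpy as np

# Toy fixed-point view of a stationary mean-field equilibrium:
# (1) run discounted Q-iteration against a fixed population distribution mu,
# (2) update mu to the stationary distribution induced by the greedy policy,
# and repeat until mu stops changing.
rng = np.random.default_rng(0)
nS, nA, beta = 4, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] = next-state dist.
base_cost = rng.random((nS, nA))

def cost(mu):
    # Hypothetical mean-field coupling: a congestion penalty on crowded states.
    return base_cost + mu[:, None]

def q_iteration(mu, iters=500):
    # Standard discounted Q-iteration for the fixed-mu Markov decision process.
    Q = np.zeros((nS, nA))
    for _ in range(iters):
        Q = cost(mu) + beta * np.einsum('san,n->sa', P, Q.min(axis=1))
    return Q

def stationary_dist(policy, iters=500):
    # Distribution of the population when every agent follows `policy`.
    mu = np.full(nS, 1.0 / nS)
    for _ in range(iters):
        mu = np.einsum('s,sn->n', mu, P[np.arange(nS), policy])
    return mu

mu = np.full(nS, 1.0 / nS)
for _ in range(100):                      # fixed-point iteration on mu
    Q = q_iteration(mu)
    mu_next = stationary_dist(Q.argmin(axis=1))
    if np.abs(mu_next - mu).max() < 1e-10:
        break
    mu = mu_next
```

In the papers this loop is justified by proving the underlying equilibrium operator is a contraction, so Banach's fixed-point theorem gives a unique fixed point and geometric convergence; the learning variants replace the exact Q-iteration step with fitted Q-iteration from sampled transitions. The toy loop above carries no such guarantee.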
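The regularized mean-field-game abstract adds a strongly concave regularizer to the one-stage reward. One common instance of this idea (an assumption here, not stated in the abstract) is entropy regularization, which replaces the hard minimum over actions with a smooth "soft-min" and the greedy policy with a Boltzmann (softmax) policy. A small sketch with an illustrative temperature and toy Q-values:

```python
import numpy as np

# Hypothetical illustration of entropy regularization on the one-stage
# problem: hard min -> smooth soft-min, greedy policy -> Boltzmann policy.
# The temperature tau and the toy Q-values are illustrative only.

def soft_min(q, tau):
    # Smooth lower approximation of min(q); tends to min(q) as tau -> 0.
    return -tau * np.log(np.exp(-q / tau).sum())

def boltzmann_policy(q, tau):
    # Softmax over negated costs: smooth stand-in for argmin.
    p = np.exp(-q / tau)
    return p / p.sum()

q = np.array([1.0, 1.2, 3.0])
print(soft_min(q, tau=0.1))        # close to, and below, min(q) = 1.0
print(boltzmann_policy(q, 0.1))    # concentrates on the cheapest action
```

Smoothness of this kind is what makes the regularized operators in the paper better behaved, enabling the error analysis without the convexity assumptions needed in the unregularized case.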