Person: SALDI, Naci
First Name: Naci
Last Name: SALDI
Publication Search Results
Showing 1-10 of 27 results
1. Learning in discrete-time average-cost mean-field games
   Conference Object | IEEE, 2021 | Metadata only
   Authors: Anahtarcı, Berkay; Karıksız, Can Deha; Saldı, Naci (Natural and Mathematical Sciences)
   Abstract: In this paper, we consider learning of discrete-time mean-field games under an average cost criterion. We propose a Q-iteration algorithm, via the Banach fixed point theorem, to compute the mean-field equilibrium when the model is known. We then extend this algorithm to the learning setting by using fitted Q-iteration and establish the probabilistic convergence of the proposed learning algorithm. Our work on learning in average-cost mean-field games appears to be the first in the literature.

2. Regularized stochastic team problems
   Article | Elsevier, 2021-03 | Metadata only
   Authors: Saldı, Naci (Natural and Mathematical Sciences)
   Abstract: In this paper, we introduce regularized stochastic team problems. Under mild assumptions, we prove that there exists a unique fixed point of the best-response operator, and that this unique fixed point is the optimal regularized team decision rule. We then establish an asynchronous distributed algorithm to compute this optimal strategy. We also provide a bound that shows how the optimal regularized team decision rule performs in the original stochastic team problem.

3. Approximate Nash equilibria in partially observed stochastic games with mean-field interactions
   Article | INFORMS, 2019-08 | Metadata only
   Authors: Saldı, Naci; Başar, T.; Raginsky, M. (Natural and Mathematical Sciences)
   Abstract: Establishing the existence of Nash equilibria for partially observed stochastic dynamic games is known to be quite challenging, with the difficulties stemming from the noisy nature of the measurements available to individual players (agents) and the decentralized nature of this information. When the number of players is sufficiently large and the interactions among agents are of the mean-field type, one way to overcome this challenge is to investigate the infinite-population limit of the problem, which leads to a mean-field game. In this paper, we consider discrete-time partially observed mean-field games with infinite-horizon discounted-cost criteria. Using the technique of converting the original partially observed stochastic control problem to a fully observed one on the belief space, together with the dynamic programming principle, we establish the existence of Nash equilibria for these game models under very mild technical conditions. We then show that the mean-field equilibrium policy, when adopted by each agent, forms an approximate Nash equilibrium for games with sufficiently many agents.
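As a rough illustration of the fixed-point structure behind the Q-iteration scheme in entry 1, the Python sketch below alternates Q-iteration for a frozen mean-field term with a push-forward update of that term. It is a minimal sketch under strong simplifying assumptions: finitely many states and actions, a known model, and a discounted Bellman operator standing in for the paper's average-cost operator; all names (q_iteration, mean_field_fixed_point, the congestion-type cost) are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the fixed-point structure behind Q-iteration for a
# mean-field game with finitely many states and actions and a KNOWN model.
# NOTE: a discounted Bellman operator stands in for the paper's
# average-cost operator; every name here is illustrative.

def q_iteration(P, c, mu, gamma=0.95, n_iter=500):
    """Q-iteration for a frozen mean-field term mu."""
    n_x, n_a = c(mu).shape
    Q = np.zeros((n_x, n_a))
    for _ in range(n_iter):
        V = Q.min(axis=1)                 # value of the greedy policy
        Q = c(mu) + gamma * P(mu) @ V     # Bellman update; P(mu) has shape (n_x, n_a, n_x)
    return Q

def mean_field_fixed_point(mu0, P, c, n_outer=50):
    """Alternate Q-iteration with a push-forward update of the mean-field term."""
    mu = mu0
    for _ in range(n_outer):
        Q = q_iteration(P, c, mu)
        pi = Q.argmin(axis=1)                         # greedy policy
        P_pi = P(mu)[np.arange(len(pi)), pi]          # (n_x, n_x) transition matrix under pi
        mu = mu @ P_pi                                # distribution of the next state
    return mu, pi

# Tiny demo with a congestion-type cost: occupying a crowded state is costly.
rng = np.random.default_rng(0)
n_x, n_a = 3, 2
base_P = rng.random((n_x, n_a, n_x))
base_P /= base_P.sum(axis=-1, keepdims=True)          # normalize to a stochastic kernel
base_c = rng.random((n_x, n_a))
mu, pi = mean_field_fixed_point(np.ones(n_x) / n_x,
                                P=lambda mu: base_P,
                                c=lambda mu: base_c + mu[:, None])
```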
4. Finite approximations in discrete-time stochastic control: quantized models and asymptotic optimality. Introduction and summary
   Book Part | Birkhäuser Basel, 2018 | Metadata only
   Authors: Saldı, Naci; Linder, T.; Yüksel, S. (Natural and Mathematical Sciences)
   Abstract: Control and optimization of dynamical systems in the presence of stochastic uncertainty is a mature field with a large range of applications. A comprehensive treatment of such problems can be found in excellent books and other resources including [7, 16, 29, 68, 84, 95, 104] and [6]. To date, there exists a nearly complete theory regarding the existence and structure of optimal solutions under various formulations, as well as computational methods to obtain such optimal solutions for problems with finite state and control spaces. However, substantial computational challenges remain for problems with large state and action spaces, such as standard Borel spaces. For such state and action spaces, obtaining optimal policies is in general computationally infeasible.

5. Finite model approximations in decentralized stochastic control
   Book Part | Birkhäuser Basel, 2018 | Metadata only
   Authors: Saldı, Naci; Linder, T.; Yüksel, S. (Natural and Mathematical Sciences)
   Abstract: In this chapter, we study the approximation of static and dynamic team problems using finite models obtained through uniform discretization, on a finite grid, of the observation and action spaces of the agents. In particular, we are interested in the asymptotic optimality of quantized policies.

6. A topology for team policies and existence of optimal team policies in stochastic team theory
   Article | IEEE, 2020-01 | Metadata only
   Authors: Saldı, Naci (Natural and Mathematical Sciences)
   Abstract: In this paper, we establish the existence of team-optimal policies for static teams and a class of sequential dynamic teams. We first consider static team problems and show the existence of optimal policies under certain regularity conditions on the observation channels by introducing a topology on the set of policies. We then consider sequential dynamic teams and establish the existence of an optimal policy via the static reduction method of Witsenhausen. We apply our findings to the well-known counterexample of Witsenhausen and to the Gaussian relay channel problem.

7. Finite approximations in discrete-time stochastic control: quantized models and asymptotic optimality
   Book Part | Birkhäuser Basel, 2018 | Metadata only
   Authors: Saldı, Naci; Linder, T.; Yüksel, S. (Natural and Mathematical Sciences)
   Abstract: In a unified form, this monograph presents fundamental results on the approximation of centralized and decentralized stochastic control problems with uncountable state, measurement, and action spaces. It demonstrates how quantization provides a system-independent and constructive method for reducing a system with Borel spaces to one with finite state, measurement, and action spaces. In addition to this constructive view, the book considers both the information transmission approach for the discretization of actions and the computational approach for the discretization of states and actions. Part I of the text discusses Markov decision processes and their finite-state or finite-action approximations, while Part II builds from there to finite approximations in decentralized stochastic control problems. The volume is well suited to researchers and graduate students interested in stochastic control. With the tools presented, readers will be able to establish the convergence of approximation models to original models, and the methods are general enough that researchers can build corresponding approximation results, typically with no additional assumptions.
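Entries 4 and 5 both rest on uniform discretization of a continuous space on a finite grid. The following minimal Python sketch shows the nearest-neighbor quantizer that such constructions typically use on a bounded interval; the function name and parameters are illustrative, not taken from the monograph.

```python
import numpy as np

# Minimal sketch of uniform quantization of a bounded interval [lo, hi]
# onto n grid points via a nearest-neighbor quantizer, the basic
# construction behind the finite-model chapters above. Illustrative names.

def uniform_quantizer(lo, hi, n):
    grid = lo + (np.arange(n) + 0.5) * (hi - lo) / n       # bin centers
    def q(x):
        idx = np.floor((np.asarray(x) - lo) / (hi - lo) * n).astype(int)
        idx = np.clip(idx, 0, n - 1)                       # clamp boundary points
        return grid[idx], idx                              # quantized values, bin indices
    return grid, q

grid, q = uniform_quantizer(-1.0, 1.0, 8)
xq, idx = q([-0.95, 0.1, 0.99])   # each point maps to its nearest bin center
```

Refining the grid (larger n) drives the cost of quantized policies toward the optimal cost under the continuity conditions discussed in those chapters.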
8. Large deviations principle for discrete-time mean-field games
   Article | Springer, 2021-11 | Metadata only
   Authors: Saldı, Naci (Natural and Mathematical Sciences)
   Abstract: In this paper, we establish a large deviations principle (LDP) for interacting particle systems that arise from the state and action dynamics of discrete-time mean-field games under the equilibrium policy of the infinite-population limit. The LDP is proved under weak Feller continuity of the state and action dynamics. The proof is based on transferring the LDP for the empirical measures of initial states and noise variables, under the setwise topology, to the original game model via the contraction principle; this approach was first suggested by Delarue, Lacker, and Ramanan to establish an LDP for continuous-time mean-field games under common noise. We also compare our work with LDP results established in the prior literature for interacting particle systems, which are, in a sense, uncontrolled versions of mean-field games.

9. Finite-state approximation of Markov decision processes
   Book Part | Springer, 2018 | Metadata only
   Authors: Saldı, Naci; Linder, T.; Yüksel, S. (Natural and Mathematical Sciences)
   Abstract: In this chapter, we study the finite-state approximation problem of computing near-optimal policies for discrete-time MDPs with Borel state and action spaces, under the discounted and average cost criteria. Even though the existence and structural properties of optimal policies of MDPs have been studied extensively in the literature, computing such policies is generally a challenging problem for systems with uncountable state spaces. This situation also arises in the fully observed reduction of a partially observed Markov decision process, even when the original system has finite state and action spaces. Here we show that one way to compute approximately optimal solutions for such MDPs is to construct a reduced model, with a new transition probability and one-stage cost function, by quantizing the state space, i.e., by discretizing it on a finite grid. It is reasonable to expect that when the one-stage cost function and the transition probability of the original model have certain continuity properties, the cost of the optimal policy for the approximating finite model converges to the optimal cost of the original model as the discretization becomes finer. Moreover, under additional continuity conditions on the transition probability and the one-stage cost function, we also obtain bounds on the accuracy of the approximation in terms of the number of points used to discretize the state space, thereby providing a trade-off between the computational cost and the performance loss in the system. In particular, we study the following two problems.

10. Markov-Nash equilibria in mean-field games with discounted cost
    Article | Society for Industrial and Applied Mathematics Publications, 2018 | Metadata only
    Authors: Saldı, Naci; Başar, T.; Raginsky, M. (Natural and Mathematical Sciences)
    Abstract: In this paper, we consider discrete-time dynamic games of the mean-field type with a finite number N of agents, subject to an infinite-horizon discounted-cost optimality criterion. The state space of each agent is a Polish space. At each time, the agents are coupled through the empirical distribution of their states, which affects both the agents' individual costs and their state transition probabilities. We introduce a new solution concept, the Markov-Nash equilibrium, under which a policy is player-by-player optimal in the class of all Markov policies. Under mild assumptions, we demonstrate the existence of a mean-field equilibrium in the infinite-population limit N → ∞, and then show that the policy obtained from the mean-field equilibrium is approximately Markov-Nash when the number of agents N is sufficiently large.
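The reduced-model construction described in entry 9 (quantize the state space on a finite grid, build a new transition probability and one-stage cost, then solve the finite model) can be sketched end to end. The Python below is a toy illustration under assumed dynamics and costs, with the finite transition matrix estimated by sampling; none of the names or numerical choices come from the chapter.

```python
import numpy as np

# Toy end-to-end sketch of finite-state approximation: quantize a continuous
# state space on a grid, estimate the reduced model's transition matrix by
# sampling, then solve the finite MDP by discounted value iteration.
# Dynamics, costs, and all names are assumed for illustration only.

rng = np.random.default_rng(1)
n_x, n_a = 21, 3
grid = np.linspace(-1.0, 1.0, n_x)            # quantized state space (grid points)
actions = np.linspace(-0.2, 0.2, n_a)

def step(x, a):                               # toy Borel-state dynamics on [-1, 1]
    return np.clip(0.9 * x + a + 0.05 * rng.standard_normal(x.shape), -1.0, 1.0)

def nearest(xs):                              # nearest-grid-point quantizer
    return np.abs(xs[:, None] - grid[None, :]).argmin(axis=1)

# Empirical transition probabilities of the reduced model, P[i, k, j]
n_samples = 2000
P = np.zeros((n_x, n_a, n_x))
for i, x in enumerate(grid):
    for k, a in enumerate(actions):
        j = nearest(step(np.full(n_samples, x), a))
        P[i, k] = np.bincount(j, minlength=n_x) / n_samples

cost = grid[:, None] ** 2 + actions[None, :] ** 2   # one-stage cost on the grid
V = np.zeros(n_x)
for _ in range(500):                                # discounted value iteration
    V = (cost + 0.95 * P @ V).min(axis=1)
policy = (cost + 0.95 * P @ V).argmin(axis=1)       # near-optimal quantized policy
```

Under the continuity conditions the chapter discusses, the value of such a quantized policy approaches the optimal value of the original model as the grid is refined.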