Research Training Site GLoRiClass

GLoRiClass Seminar Abstracts

February 19, 2009
Julien Cristau, Paris: Graph Games of Ordinal Length

We consider two-player games on finite graphs, extended to plays of arbitrary ordinal length. Limit transitions are added to the arenas and allow the game to continue at limit ordinals, and the winning condition is defined as the reachability of a certain state.

We first consider limit transitions based on priorities of states, similar to the parity condition for infinite games. We provide an algorithm to compute the winning regions of both players, allowing us to show that these games are determined, and that the winner has a positional winning strategy.

The determinacy result can be extended to arenas with more general limit transitions, using a reduction to games with priority transitions, based on the LAR construct of Gurevich and Harrington.

January 29, 2009
Jonathan Zvesper, ILLC: Belief revision and backward induction

We give conditions on the beliefs of players which ensure that, in any finite perfect-information game, they will play according to the backward induction algorithm. We start by reviewing some of the literature on this topic, beginning with Aumann [1], Stalnaker's criticism of it [2, cf. 3], and the supposed "paradox" of backward induction [4]. Then we propose our own solution, which we formulate in a formal language based on conditional doxastic logic [5, 6]. To paraphrase the result: common stable belief in rationality entails backward induction. Time permitting, we will also discuss how we think of this in terms of belief revision 'policies'.

[1] R.J. Aumann, "Backward induction and common knowledge of rationality", GEB, 1995.
[2] R.C. Stalnaker, "Knowledge, beliefs and counterfactual reasoning in games", Economics and Philosophy, 1996.
[3] J.Y. Halpern, "Substantive rationality and backward induction", GEB, 1998.
[4] C. Bicchieri, "Self-refuting theories of strategic interaction: a paradox of common knowledge", Erkenntnis, 1989.
[5] A. Baltag and S. Smets, "A qualitative theory of dynamic interactive belief revision".
[6] J.F.A.K. van Benthem, "Dynamic logic for belief revision", JANCL, 2006.
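The backward induction algorithm that the talk's conditions are meant to deliver can be sketched in a few lines. This is a generic illustration of the algorithm on an invented two-stage game, not the formalism of the talk: leaves carry payoff vectors, and each internal node's owner picks the subgame outcome best for himself.

```python
# Backward induction on a finite perfect-information game tree.
# Each node is either a leaf {"payoffs": (...)} or an internal node
# {"player": i, "children": [...]} owned by player i.

def backward_induction(node):
    """Return the payoff profile reached under backward induction."""
    if "payoffs" in node:  # leaf: nothing left to decide
        return node["payoffs"]
    player = node["player"]
    # Solve each subgame, then let the owner pick his best outcome.
    outcomes = [backward_induction(child) for child in node["children"]]
    return max(outcomes, key=lambda p: p[player])

# A short centipede-like game: player 0 moves first, then player 1.
game = {
    "player": 0,
    "children": [
        {"payoffs": (1, 1)},                    # player 0 stops
        {"player": 1, "children": [
            {"payoffs": (0, 3)},                # player 1 stops
            {"payoffs": (2, 2)},                # player 1 continues
        ]},
    ],
}

print(backward_induction(game))  # prints (1, 1)
```

As in the classical centipede analysis, player 0 stops immediately: player 1 would defect in the subgame, so continuing yields player 0 nothing.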

January 15, 2009
Florian Horn, CWI: Stochastic games for verification

Graph games are a classical tool for the synthesis of controllers in reactive systems. In this setting, a game is defined by: an arena, which is a graph modelling the system and its evolution; and a winning condition, which models the specification that the controller must ensure. In each state, the outgoing transition is chosen either by the controller (Eve), a hostile environment (Adam), or a stochastic law (Random). This process is repeated infinitely often, generating an infinite play whose winner depends on the winning condition.

Our first object of study is the fundamental case of reachability games. We present a new effective approach to the computation of the values, based on permutations of random states. In terms of complexity, the resulting "permutation algorithm" is orthogonal to the classical, strategy-based algorithms: it is exponential in the number of random states, but not in the number of controlled states. We also present an improvement heuristic for this algorithm, inspired by the "strategy improvement" algorithm.

We turn next to the very general class of prefix-independent games. We prove the existence of optimal strategies in these games. We also show that our permutation algorithm can be extended into a "meta-algorithm", turning any qualitative algorithm into a quantitative algorithm.
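For readers unfamiliar with the values being computed, here is a minimal sketch, not of the permutation algorithm itself, but of the standard fixed-point view of a stochastic reachability game it competes with. The arena and all names are invented for illustration: Eve maximizes, Adam minimizes, and Random averages the probability of reaching the goal.

```python
# Iterative approximation of state values in a toy stochastic
# reachability game (illustrative only; not the permutation algorithm).

ARENA = {
    # state: (owner, successors); owners are "eve", "adam", "random"
    "s0": ("eve",    ["s1", "s2"]),
    "s1": ("adam",   ["goal", "s2"]),
    "s2": ("random", ["goal", "sink"]),
}

def values(arena, goal="goal", sink="sink", iterations=100):
    """Approximate each state's optimal probability of reaching `goal`."""
    v = {s: 0.0 for s in list(arena) + [goal, sink]}
    v[goal] = 1.0
    for _ in range(iterations):
        for s, (owner, succ) in arena.items():
            vals = [v[t] for t in succ]
            if owner == "eve":
                v[s] = max(vals)         # controller maximizes
            elif owner == "adam":
                v[s] = min(vals)         # environment minimizes
            else:
                v[s] = sum(vals) / len(vals)  # uniform random move
    return v

v = values(ARENA)
print(v["s0"])  # 0.5: Adam steers play into the fair coin flip at s2
```

The exponential dependence on random states mentioned in the abstract comes from the permutations of exactly these Random vertices.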

December 11, 2008
Martin Magnusson, Linköping: Logical Agents for Language and Action - Creating Smarter Computer Game Characters

When I began my PhD studies in knowledge representation I was haunted by a question. Why is everyone developing new logics and tools, while virtually no one actually uses them to represent knowledge?

Since then I've discovered just how hard a problem knowledge representation is. There's still plenty of work to do before we can formalize an encyclopedia entry. But I believe the advancements of the past 50 years of research enable promising (and fun!) applications in the relatively simple worlds of computer games.

In this talk I'll present some arguments for applying logic-based knowledge representation in computer games, set up some game scenarios that involve logical reasoning, and present my own attempts at solving them. The idea is to build an agent architecture based on automated theorem proving. By viewing different tasks, such as deductive planning and execution, as proof, we achieve a tight integration between them. This is necessary in games, where the knowledge is relatively simple but highly dynamic.

Read more about my work at

November 27, 2008
Mikaël Cozic, Paris: Probabilistic Unawareness

The modeling of unawareness has become a major topic in formal epistemology. Convincing models of unawareness are now available for full beliefs in the framework of epistemic logic. Our aim is to investigate the modeling of unawareness in the probabilistic model of partial beliefs. We will show how the use of impossible states allows us to model unawareness in the probabilistic case.

November 6, 2008
Leon van der Torre, University of Luxembourg: A normative framework for norm change

Work in progress

Joint work with Guido Boella and Gabriella Pigozzi

It is often observed that in a multiagent system, normative systems must be able to evolve over time. For example, normative system change may be due to actions of agents that create or remove norms in the system. However, less has been said about how normative systems should change. Also, there is no formal framework to evaluate normative system change methods, and there is no formal classification of normative system change. The theory of AGM belief revision was originally developed as a framework to describe and classify normative system change. However, in this theory norms are represented as propositional formulas. We show in this talk that if we replace propositional formulas by pairs of propositional formulas, representing the rule-based character of norms, then some of the properties cannot be expressed, and other properties are consistent only for some logics, but not for others. We therefore introduce a new formal framework for norm change, and we study contraction and revision of norms in this framework.

October 9, 2008
Pelle Guldborg Hansen, Roskilde University: Learning and the problem of the emergence of convention: do stationarity assumptions imply a vicious circularity or a key to salvation?

During the last three decades, theories of learning in games have increasingly been applied to the problem of the emergence of social convention as posed in social philosophy. However, besides sharing the main problems (the problems of rule interpretation and availability) that are taken to undermine the original Lewisian account of convention it is supposed to supersede, this approach may be argued to assume, through various versions of stationarity assumptions, exactly what is to be explained. In this talk I discuss how to interpret the stationarity assumption in the context of the problem of the emergence of convention. In particular, I try to develop a work-in-progress idea that this kind of assumption can be interpreted as holding part of the key to solving the problems of rule interpretation and projection, rather than as a vicious circularity. The talk will be accessible to those not fully familiar with the theory of convention, will be philosophical in nature, and is intended to encourage discussion.

Its idea springs in particular from reflections on Thomas Schelling's The Strategy of Conflict (1960:53-60), David Lewis's Convention (1969:10-12, 76), Margaret Gilbert's 'Some limitations of rationality', Journal of Philosophy (1983), Peter Vanderschraaf's Learning and Coordination (2001:99-212), Sugden and Zamarrón's 'Finding the key: the riddle of focal points', Journal of Economic Psychology (2006), and Hansen's 'Why Mixed Equilibria may not be Conventions', Danish Yearbook of Philosophy (2008).

September 18, 2008
Bernhard von Stengel, London School of Economics: Hard-to-Solve Bimatrix Games

A bimatrix game is a two-player game in strategic form, a basic model in game theory. A Nash equilibrium is a pair of (possibly randomized) strategies, one for each player, so that no player can do better by unilaterally changing his or her strategy. The exact computation time needed for finding one Nash equilibrium of a bimatrix game is open. Already a subexponential algorithm would be a major breakthrough in the field.

In this talk, which will introduce the main concepts and geometric tools, we show that the commonly used Lemke-Howson algorithm for finding one equilibrium of a bimatrix game is exponential in the worst case. This question had been open for some time. The algorithm is a pivoting method similar to the simplex algorithm for linear programming. We present a class of square bimatrix games for which the shortest Lemke-Howson path grows exponentially in the dimension d of the game. We construct the games using pairs of dual cyclic polytopes with 2d facets in d-space. The path lengths are described by a Fibonacci sequence. An extension of this work gives games where in addition to long Lemke-Howson paths, a trivial "support guessing" algorithm also takes exponential time on average. (Joint work with Rahul Savani.)

Paper: R. Savani and B. von Stengel (2006), Hard-to-Solve Bimatrix Games. Econometrica 74, 397-429.
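The "support guessing" baseline mentioned above is easy to see in the smallest case. The following is an invented illustration (not from the paper): for the full-support guess in a 2x2 bimatrix game, each player mixes so as to make the opponent indifferent between his two pure strategies.

```python
# Fully mixed equilibrium of a 2x2 bimatrix game via the
# indifference conditions of the full-support "guess".

def mixed_equilibrium_2x2(A, B):
    """A: row player's payoffs, B: column player's payoffs.
    Assumes a fully mixed equilibrium exists (denominators nonzero)."""
    # q = column player's weight on his first strategy; it must make
    # the row player indifferent: q*A[0][0] + (1-q)*A[0][1]
    #                           = q*A[1][0] + (1-q)*A[1][1].
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # p = row player's weight on his first strategy, making the
    # column player indifferent between his two columns.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    return p, q

# Matching pennies: row wants to match, column wants to mismatch.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(mixed_equilibrium_2x2(A, B))  # prints (0.5, 0.5)
```

The full algorithm repeats this indifference computation for every pair of supports, which is what makes it exponential in general, and what the Savani-von Stengel games defeat even on average.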

June 12, 2008
Eryk Kopczynski, Warsaw: Half-positional Determinacy of Infinite Games

We study infinite games where one of the players always has a positional (memory-less) winning strategy, while the other player may use a history-dependent strategy. We investigate winning conditions which guarantee such a property for all arenas, or all finite arenas. We establish some closure properties of such conditions, which give rise to the XPS class of winning conditions, and discover some common reasons behind several known and new positional determinacy results. We show that this property is decidable in single exponential time for a given prefix independent omega-regular winning condition. We exhibit several new classes of winning conditions having this property: the class of concave conditions (for finite arenas), the classes of monotonic conditions and geometrical conditions (for all arenas).


March 13, 2008
Theo Offerman, University of Amsterdam: Noisy Signaling: Theory and Experiment

We investigate a noisy signaling game, in which nature adds random noise to the message chosen. Theoretically, with an unfavorable prior the separating equilibrium vanishes for low noise. It reappears for intermediate and high noise, where messages increase with noise. A pooling equilibrium always exists. In our experiment, noise works as an empirical equilibrium selection device. When noise increases, the separating equilibrium loses ground to the pooling equilibrium. Subjects separate for low noise where no separating equilibrium exists. Conditional on aiming for separation, high-quality senders choose messages that increase monotonically with noise. A simple behavioral model organizes the data well.

Co-authored by Thomas de Haan and Randolph Sloof.

March 6, 2008
Werner Raub, Utrecht University: Trust in social and economic exchange: game-theoretic models and empirical evidence

The presentation focuses on the 'management' (governance) of trust problems in exchange through two mechanisms that derive from social embeddedness. Trust can be based on past experiences (learning) with a partner or trust can be built on possibilities for sanctioning an untrustworthy trustee through own or third-party sanctions (control). Learning and control can operate at different levels: at the dyadic level and at the network level. Complementary empirical evidence on learning and control through dyadic embeddedness and network embeddedness from a survey on buyer-supplier relations and from two vignette experiments will be presented.

Reference: Buskens, Vincent and Werner Raub (2007) "Rational Choice Research on Social Dilemmas", working paper, Utrecht University.

February 7, 2008
Jérôme Lang, Toulouse: From belief change to preference change

There is a huge literature on belief change. In contrast, preference change has been considered only in a few recent papers. There are reasons for that: while there is to some extent a general agreement about the very meaning of belief change, this is definitely not so for preference change. We discuss here the possible meanings of preference change, arguing that we should at least distinguish between four paradigms: preferences evolving after some new fact has been learned, preferences evolving as a result of an evolution of the world, preferences evolving after the rational agent itself evolves, and preferences evolving per se. We then develop in more detail the first of these four paradigms (which we think is the most natural). We give some natural properties that we think preference change should fulfill and define several families of preference change operators, parameterized by a revision function on epistemic states and a semantics for interpreting preferences over formulas.

Joint work with Leon van der Torre (Univ. Luxembourg)

December 20, 2007
Valentin Goranko, University of the Witwatersrand: Strategic commitment and revocability in logics for multi-agent systems

In logics for multi-agent systems such as the Alternating-time Temporal Logic (ATL), one can express statements about the strategic ability of an agent (or a coalition of agents) to achieve a goal, such as: "agent (or, coalition) A has a strategy such that, if A follows that strategy then, no matter what the other agents do, the goal G will be achieved".

However, because of the compositionality of the standard semantics of ATL, the agents' strategies are revocable, in the sense that, in the evaluation of the goal G the agent A is no longer bound by the strategy (s)he has chosen in order to reach the state where the goal is evaluated.

In many scenarios, including classical game theory, this revocability of strategies is unnatural and unrealistic. To address this inadequacy of the standard semantics of ATL, we consider alternative variants of ATL in which strategies are irrevocable, i.e., agents commit to them, at least until their goal is achieved (or has become untenable). The difference between revocable and irrevocable strategies shows up when we consider the ability to achieve a goal which itself involves (nested) strategic ability.

In this talk I will present and compare some variations of logical semantics for multi-agent systems, arising depending on the conditions of commitment or revocability, and whether strategies are memory-based or not. I will also discuss a number of technical and conceptual open problems associated with these logics.

This is joint work with Thomas Ågotnes and Wojtek Jamroga.

December 7, 2007
Willemien Kets, Tilburg University and Santa Fe Institute: Beliefs in Network Games

Networks can have an important effect on economic outcomes. Given the complexity of many of these networks, agents will generally not know their structure. We study the sensitivity of game-theoretical predictions to the specification of players' (common) prior on the network in a setting where players play a fixed game with their neighbors and only have local information on the network structure. We show that two priors are close in a strategic sense if and only if (1) the priors assign similar probabilities to all events that involve a player and his neighbors, and (2) with high probability, a player believes, given his type, that his neighbors' conditional beliefs are similar, and that his neighbors believe, given their type, that... the conditional beliefs of their neighbors are similar, for any number of iterations. Also, we show that the common but unrealistic assumptions that the size of the network is common knowledge or that the types of players are independent are far from innocuous: if these assumptions are violated, small probability events can have a large effect on outcomes through players' conditional beliefs.

November 22, 2007
Nicole Immorlica, CWI: The role of compatibility in the diffusion of technologies in social networks

In many settings, competing technologies - for example, operating systems, instant messenger systems, or document formats - can be seen adopting a limited amount of compatibility with one another; in other words, the difficulty in using multiple technologies is balanced somewhere between the two extremes of impossibility and effortless interoperability. There are a range of reasons why this phenomenon occurs, many of which - based on legal, social, or business considerations - seem to defy concise mathematical models. Despite this, we show that the advantages of limited compatibility can arise in a very simple model of diffusion in social networks, thus offering a basic explanation for this phenomenon in purely strategic terms.

Our approach builds on work on the diffusion of innovations in the economics literature, which seeks to model how a new technology A might spread through a social network of individuals who are currently users of technology B. We consider several ways of capturing the compatibility of A and B, focusing primarily on a model in which users can choose to adopt A, adopt B, or - at an extra cost - adopt both A and B. We characterize how the ability of A to spread depends on both its quality relative to B, and also this additional cost of adopting both, and find some surprising non-monotonicity properties in the dependence on these parameters: in some cases, for one technology to survive the introduction of another, the cost of adopting both technologies must be balanced within a narrow, intermediate range. We also extend the framework to the case of multiple technologies, where we find that a simple model captures the phenomenon of two firms adopting a limited "strategic alliance" to defend against a new, third technology.
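A single best-response step already exhibits the role of the extra cost of adopting both. The parameters and payoff rule below are my own reading of the setup, with made-up numbers: an edge pays off when the endpoints share a technology, and choosing "AB" incurs a one-off cost c.

```python
# One best-response computation in a toy version of the
# A / B / adopt-both compatibility model (illustrative parameters).

TECHS = {"A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def payoff(choice, neighbors, a=0.6, b=0.4, c=0.3):
    """Edge payoff a for coordinating on A, b for B; AB costs c extra."""
    total = 0.0
    for n in neighbors:
        common = TECHS[choice] & TECHS[n]
        if "A" in common:
            total += a      # coordinate on the better technology
        elif "B" in common:
            total += b
    return total - (c if choice == "AB" else 0.0)

def best_response(neighbors):
    return max(["A", "B", "AB"], key=lambda ch: payoff(ch, neighbors))

# A node caught between one A-neighbor and two B-neighbors:
print(best_response(["A", "B", "B"]))  # prints AB
```

With these numbers, adopting both (payoff 0.6 + 0.8 - 0.3 = 1.1) beats committing to either side, which is exactly the bilingual buffer behaviour whose cost sensitivity drives the non-monotonicity described above.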

Joint work with Jon Kleinberg, Mohammad Mahdian, and Tom Wexler.

October 11, 2007
Jouko Väänänen, ILLC: Dependence Logic

Dependence is a common phenomenon, wherever one looks: ecological systems, astronomy, human history, stock markets. But what is the logic of dependence? In this talk, based on my book (Dependence Logic, Cambridge University Press, 2007) I set out to make a systematic logical study of this important concept.

December 14, 2006
Ravishankar Sarma, ILLC: Contracting with Expert Global Software Outsourcing Firms: A Game Theoretic Model

Driven by cost cutting and gains in project efficiency, the trend in global software outsourcing continues to grow, both in the number of projects and value of outsourcing transactions and in the variety of services which are outsourced. As outsourcing activities become increasingly important, the focus is shifting towards increasing value appropriation through assessing talent and expertise at technology clusters. While outsourcing contracts are often of high value, they can have unwanted outcomes such as escalating costs, diminishing service levels, and loss of expertise, to name a few. Hence, a good contract is often the key to a successful IT outsourcing relationship. For a relatively long-term contract, persistent trust between the parties is essential.

The paper investigates the role of reputation effects (reliability, expertise, trust) on the nature and choice of outsourcing contracts. A formal model of a game between a principal (firm) and an agent (client firm) is presented. The talk focuses on (1) the relation between effort and expertise when the agent's level of effort cannot be observed by the principal, and (2) how an agent avoids the adverse selection problem while choosing the appropriate pricing scheme for an outsourcing contract: fixed-price (FP) or time-and-materials (TM). It is shown that TM contracts can incentivize firms with limited expertise, if the differential rates are spread between the industry base rate and up to twice its value. Beyond a threshold point, however, TM contracts can also fail with novice firms whose expertise cannot be assessed precisely; in this situation the client's strategy is to choose FP contracts. The model is validated empirically through a study of project selection in a single domain carried out by two expert firms.

October 5, 2006
Ramaswamy Ramanujam, Chennai: On strategies defined by properties

We consider infinite plays over finite game graphs where players have possibly overlapping objectives. When strategies are functions that map game positions or plays to moves, existence of bounded memory best response strategies can be easily established. However, when an opponent's strategy is known only by its properties, the notion of best response needs to be re-examined. We propose a structural definition of strategies built up from atomic decisions of the form "when condition x holds play a" and where a player's strategy may depend on properties of other players' strategies. These strategies can be represented by finite state automata.

We re-define the notion of best response, and study the following algorithmic questions: given a strategy π for player 2,

  1. check if a strategy σ achieves condition C for player 1;
  2. synthesize a strategy to achieve condition C for player 1; and
  3. find the best response strategy for player 1.

We also propose a simple logic to describe composite strategies and reason about how they ensure players' objectives. Checking an assertion in the logic on a game graph is decidable, and the logic admits a complete axiomatization.
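The atomic decisions "when condition x holds play a" compose naturally. Here is a minimal rendering of that idea in code, my own illustration rather than the paper's formalism, using first-match composition of atomic rules over integer game positions.

```python
# Structural strategies built from atomic "when condition holds,
# play move" rules, composed in first-match order (illustrative).

def atomic(condition, move):
    """Atomic strategy: play `move` whenever `condition` holds."""
    return lambda position: move if condition(position) else None

def compose(*strategies):
    """Try each component strategy in order; play the first defined move."""
    def strategy(position):
        for s in strategies:
            move = s(position)
            if move is not None:
                return move
        return None  # no rule applies: the strategy is undefined here
    return strategy

sigma = compose(
    atomic(lambda p: p % 2 == 0, "a"),  # when the position is even, play a
    atomic(lambda p: p > 10, "b"),      # otherwise, when above 10, play b
)

print(sigma(4), sigma(11), sigma(3))  # prints: a b None
```

Because each rule tests a condition and emits a fixed move, such composites translate directly into the finite-state automata the abstract mentions, which is what makes the model-checking questions above decidable.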

The work reported here is joint with Sunil Simon (IMSc, Chennai).