It is a standard assumption both in linguistic pragmatics and in the psycholinguistics of dialogue that coordination between interlocutors is achieved via mutual knowledge, i.e., a common ground of iteratively shared beliefs. One of the central tenets of the alignment model is that the assumption of full common ground in this sense is (a) psychologically unrealistic, because establishing this kind of mutual knowledge is highly resource-intensive, and (b) unnecessary, because coordination in dialogue can be achieved via alignment of situation models, leading to an implicit common ground of non-iteratively shared knowledge.
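
The contrast can be made precise in standard epistemic logic. Purely as an illustration (the notation below is the textbook formalization of common knowledge, not the project's own model), let $B_a$ and $B_b$ be belief operators for two interlocutors, and define shared belief $E$ and iterated common ground $C$ as

\[
E\varphi := B_a\varphi \wedge B_b\varphi, \qquad C\varphi := \bigwedge_{n \geq 1} E^n\varphi = E\varphi \wedge EE\varphi \wedge EEE\varphi \wedge \dots
\]

Full common ground corresponds to the infinite iteration $C\varphi$, whereas implicit common ground in the sense at issue here requires only the first level $E\varphi$, without the unbounded tower of mutual beliefs.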

The aim of the project is to identify factors relevant for the establishment of implicit common ground, and to gain insight into the process of building up aligned representations. To this end, insights from non-standard versions of game theory (bounded rationality, behavioral game theory) and from the theory of resource-bounded agents developed in logic and artificial intelligence will be employed to arrive at a more realistic picture of the inference processes that take place during alignment. Based on these findings, a theoretical model of implicit common ground will be developed that makes use of a hybrid system combining game-theoretical concepts with modal logics.
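
As a hedged sketch of what such a hybrid system might look like (the depth parameter $k$ and the bounded operator $E^{\leq k}$ are illustrative assumptions, not the project's committed formalization): resource-bounded agents, in the spirit of level-$k$ and cognitive-hierarchy models from behavioral game theory, could approximate common ground by a finite conjunction of belief iterations,

\[
E^{\leq k}\varphi := \bigwedge_{n=1}^{k} E^n\varphi,
\]

so that coordination succeeds whenever the reasoning depth a task demands does not exceed the depth $k$ the agents can afford. On such a picture, the bounded operator belongs to the modal-logical component of the model, while the choice of $k$ and the payoff structure of the coordination task are supplied by the game-theoretical component.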