How do brains support symbolic reasoning? How do we infer causes, and distinguish between correlation and causation? How do we reason and make valid (or faulty) inferences about the world?
Our aim is to bring together researchers on interactive communication to discuss recent successes in this area alongside the current challenges. Language does not happen in a vacuum, nor within single speakers. Researchers often make simplifying assumptions by analyzing the behavior of individuals and their linguistic capacities in isolation, but language as a communicative system involves multiple individuals who need to coordinate. Interacting speakers use language to coordinate the exchange of information, and in turn they rely on a coordinated, shared linguistic system. We highlight three strands of success and open questions:
A variety of models have been designed to represent the flow of information among communicating agents. On the one hand, we focus on the transmission of information in the context of the evolution of language: using a complex adaptive systems approach, models of populations of agents have been designed in which each agent adopts a specific learning mechanism. On the other hand, we focus on the logical methods designed to model how rational agents change their opinions and update their knowledge. Different learning methods can be represented in this way. Furthermore, one can focus on a group of communicating agents and model different group attitudes as well as their dynamics. A variety of aspects will be important, ranging from the effects of higher-order reasoning to the effect of reasoning on the basis of (semi-)private and/or public information.
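As an illustration of the complex adaptive systems approach, the sketch below implements a minimal "naming game": a population of agents repeatedly interacts in pairs, each agent applying the same simple learning rule (invent a word when you have none, adopt words you hear, and prune to the shared word on success). This is a standard toy model from the language-evolution literature, not a model proposed in this text; the agent class, parameters, and rounds are illustrative assumptions.

```python
import random

random.seed(0)  # fixed seed so the illustrative run is reproducible


class Agent:
    """A minimal naming-game agent (illustrative sketch, not from the text).

    Each agent keeps an inventory of candidate words; on a successful
    interaction, both parties align their inventories on the shared word.
    """

    def __init__(self):
        self.words = []

    def speak(self):
        # Invent a fresh word if the inventory is empty, then utter one.
        if not self.words:
            self.words.append(f"w{random.randrange(10**6)}")
        return random.choice(self.words)

    def hear(self, word):
        if word in self.words:
            self.words = [word]    # success: keep only the shared word
            return True
        self.words.append(word)    # failure: adopt the new word
        return False


def run_naming_game(n_agents=10, rounds=3000):
    """Repeated pairwise interactions among a population of agents."""
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = random.sample(agents, 2)
        word = speaker.speak()
        if hearer.hear(word):
            speaker.words = [word]  # speaker also aligns on success
    return agents


agents = run_naming_game()
vocabularies = {tuple(a.words) for a in agents}
print(len(vocabularies))
```

With these (small) settings the population reliably converges on a single shared word, so the script prints 1: a global convention emerges from purely local interactions, which is the sense in which language here is a complex adaptive system rather than a property of any individual speaker.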
How can we incorporate findings from cognitive science into natural language processing models, and artificial intelligence more generally? How can we build interpretable machine learning models? What tools can we design -- and borrow from cognitive science -- to analyze and interpret state-of-the-art deep learning models in NLP and beyond?