Keynote speakers

We are pleased to announce three ECSQARU 2021 keynote speakers:

Cassio de Campos

Cassio de Campos is an associate professor in the Uncertainty in Artificial Intelligence Group at Eindhoven University of Technology, NL. His main research interests are reliable machine learning and robust artificial intelligence, with a focus on theoretical and foundational developments for learning and reasoning with graphical models such as Bayesian networks, (hidden) Markov models, Markov random fields, influence diagrams, Markov decision processes, and sum-product networks, and on their use in applications.

Probabilistic Graphical Models: Tractability and Robustness

ABSTRACT: This talk will present a view on the theoretical and practical tractability of some probabilistic graphical models such as Bayesian and Markov networks. We will discuss relations among different graphical models, including models that represent computations explicitly, such as probabilistic circuits. We will also dive into ideas of cautiousness and robustness in AI based on credal graphical models. Finally, I will give a (considerably biased) opinion on how artificial intelligence (AI) is evolving and what we can expect regarding probabilistic graphical models in the next generation of AI.


Tomáš Kroupa

Tomáš Kroupa is a researcher with the Game Theory group at the Artificial Intelligence Center of the Czech Technical University in Prague, CZ. He is currently focused on coalitional games, strategic games with infinite action spaces, and their applications in computer science.

Computation of Equilibria in Infinite Strategic Games

ABSTRACT: The computation of Nash equilibria in large games is one of the central problems in non-cooperative game theory. The state-of-the-art applications combine various AI techniques to solve highly complicated games such as heads-up no-limit poker or StarCraft. Another recent direction is to develop game-theoretic models to improve machine learning algorithms (generative adversarial networks) or to provide performance guarantees in security applications (patrolling, intrusion detection, etc.). Many games appearing in such domains naturally have infinite action spaces. While many algorithms are known for finite games, the computation of equilibria for continuous games is a challenging problem. The main obstruction to computing equilibria in such games is that the equilibria are probability distributions whose support is unknown. In the talk, I will discuss the solution of games in which the utility functions are continuous and the strategy sets are compact. Advanced techniques from global polynomial optimization can be used to recover equilibria in multiplayer network games with polynomial utility functions. I will show that a simple extension of the iterative algorithm for a robot path planning problem yields a numerical method converging to a Nash equilibrium.
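As a toy illustration of the finite-game case the abstract contrasts with continuous games (this sketch is not drawn from the talk itself), fictitious play is a classic iterative method whose empirical action frequencies converge to a Nash equilibrium in two-player zero-sum games. For matching pennies, the unique equilibrium mixes both actions with probability 1/2:

```python
import numpy as np

# Row player's payoff matrix for matching pennies (zero-sum game);
# the unique Nash equilibrium plays each action with probability 1/2.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def fictitious_play(A, iterations=20000):
    """Each player repeatedly best-responds to the opponent's
    empirical action frequencies; the averages approach equilibrium."""
    row_counts = np.ones(A.shape[0])  # uniform pseudo-counts to start
    col_counts = np.ones(A.shape[1])
    for _ in range(iterations):
        row_br = np.argmax(A @ (col_counts / col_counts.sum()))
        col_br = np.argmin((row_counts / row_counts.sum()) @ A)
        row_counts[row_br] += 1
        col_counts[col_br] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

row_strategy, col_strategy = fictitious_play(A)
# Both strategies approach the uniform mixture (0.5, 0.5).
```

The difficulty the abstract points to is precisely what this sketch sidesteps: here the (finite) support of the equilibrium distributions is known in advance, whereas for continuous games it must be discovered.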


Steven Schockaert

Steven Schockaert is a professor at the School of Computer Science and Informatics at Cardiff University, UK. His research interests lie in the area of Artificial Intelligence. He focuses on reasoning with imperfect information, including various forms of commonsense reasoning. This involves combining techniques and ideas from the field of Knowledge Representation and Reasoning with methods from Machine Learning, with applications in Natural Language Processing and Information Retrieval.

Reasoning with Entity Embeddings

ABSTRACT: Many Natural Language Processing (NLP) systems rely on vectors for representing entities, and on vector manipulations for reasoning about these entities. Despite the popularity of such entity embeddings, and their widespread applicability across NLP, it turns out that many existing embedding models have important theoretical limitations on the kinds of reasoning they can perform, which I will discuss in the first part of the talk. Rather than using entity embedding models for reasoning directly, entity embeddings can also be used to develop strategies for flexible reasoning with symbolic rules. In particular, vector representations of concepts capture aspects of conceptual knowledge that are difficult to encode symbolically, including fine-grained knowledge about different facets of similarity. Such knowledge can be exploited, for instance, to design strategies for plausible reasoning with ontologies.