Invited Speakers

Costas Bekas

Principal Research Staff Member
Manager, Foundations of Cognitive Solutions
IBM Research - Zurich

Costas Bekas manages the Foundations of Cognitive Computing group at IBM Research - Zurich. He received his B.Eng., M.Sc., and Ph.D. degrees, all from the Computer Engineering & Informatics Department of the University of Patras, Greece, in 1998, 2001, and 2003, respectively. From 2003 to 2005 he worked as a postdoctoral associate with Prof. Yousef Saad at the Computer Science & Engineering Department of the University of Minnesota, USA. He has been with IBM since September 2005. Dr. Bekas' main research interests span cognitive computing, massive-scale analytics, and energy-aware algorithms and architectures. He is a recipient of the PRACE 2012 award and the ACM Gordon Bell 2013 and 2015 prizes.

Frontiers of Cognitive Computing


Cognitive computing is the new frontier of the information age. Computers have evolved into indispensable tools of modern society, transforming numerous aspects of our everyday lives. Since the very first general-purpose electronic machines of the 1940s, computers have facilitated the acquisition, storage, and access of huge amounts of data. Since then, we have learned to program computers for uses that even the wildest imagination of the pioneers of the 1950s and 1960s did not capture, such as the internet, social networks, and simulations of nature of incredible fidelity. Cognitive computing turns our trusted programmable machines into cognitive companions. These systems are not programmed simply to achieve a task; rather, they are developed to reason with us in ways that are natural to us. They can debate with us and test our ideas, expressed in natural language, against incredible volumes of data, giving us insights that ultimately free us to focus on our deepest human capabilities: intuition and intelligence. Cognitive systems mimic the way we humans reason, allowing us to express ourselves in unstructured ways, such as speech and vision, and to achieve, in a small fraction of the previously required time, feats such as pharmaceutical and materials discovery, attacking cancer, and understanding complex natural ecosystems as well as man-made ecosystems such as the economy and technology. We will discuss the remarkable progress of cognitive computing and give a glimpse of what the future may look like.

Tarek R. Besold, PhD

The KRDB Research Centre
Faculty of Computer Science
Free University of Bozen-Bolzano (Italy)

As of September 1, 2016:
Digital Media Lab
Center for Computing and Communication Technologies (TZI)
University of Bremen, Bremen, Germany

Dr. Tarek R. Besold is a postdoctoral researcher in artificial intelligence, computational creativity, and cognitive systems. He works at the KRDB Research Centre of the Free University of Bozen-Bolzano and, as of September 1, 2016, in the Digital Media Lab of the University of Bremen. He studied mathematics, computer science, and logic in Erlangen, Zaragoza, and Amsterdam, and conducted his PhD research in cognitive science at the Institute of Cognitive Science in Osnabrück, graduating summa cum laude with a thesis on "Cognitive Aspects of Human-Level Artificial Intelligence". Among other roles, he was the General Chair of the HLAI 2016 Joint Multi-Conference on Human-Level Artificial Intelligence, and founder, organizer, or scientific chair of several workshops, conferences, and educational events/schools on topics relating to cognitive AI and computational creativity. He was also co-editor of the Springer book "Computational Creativity Research: Towards Creative Machines", and currently serves as associate editor of the Journal of AI Research (JAIR) special track on "Deep Learning, Knowledge Representation, and Reasoning", as topic co-editor of the Frontiers in Psychology: Cognition research topic "Representation in the Brain", and on the editorial board of Elsevier's journal "Biologically Inspired Cognitive Architectures" (BICA).

To the extent that you are like a grape: Symbolic models of analogy and concept blending in cognitive AI


Analogy is one of the most studied representatives of a family of non-classical forms of reasoning that work across different domains and are usually taken to play a crucial role in creative thought and problem-solving. In the first part of the talk, I will briefly introduce general principles of computational analogy models (relying on a generalisation-based approach to analogy-making). We will then take a closer look at Heuristic-Driven Theory Projection (HDTP) as an example of a theoretical framework and implemented system: HDTP computes analogical relations and inferences for domains represented in many-sorted first-order logic languages, applying a restricted form of higher-order anti-unification to find structural elements common to both domains. The presentation of the framework will be followed by a few reflections on the "cognitive plausibility" of the approach, motivated by theoretical complexity and tractability considerations.
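The core operation underlying HDTP, anti-unification, can be illustrated at the first-order level. The sketch below is a hypothetical minimal Python illustration, not HDTP's restricted higher-order algorithm: it computes a generalization of two terms by replacing mismatching subterm pairs with shared variables.

```python
# Minimal sketch of first-order anti-unification: computing a least general
# generalization of two terms. HDTP itself uses a restricted *higher-order*
# variant; this only illustrates the basic first-order idea.
# Term encoding (assumed for this sketch): compound terms are tuples
# (functor, arg1, ..., argn); constants and variables are strings,
# with generalization variables written in uppercase.

def anti_unify(t1, t2, subst=None, counter=None):
    """Return a generalization of t1 and t2, replacing mismatches by variables."""
    if subst is None:
        subst = {}          # maps mismatching (t1, t2) pairs to variables
    if counter is None:
        counter = [0]
    if t1 == t2:
        return t1
    # Same functor and arity: keep the functor, descend into the arguments.
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(anti_unify(a, b, subst, counter)
                                for a, b in zip(t1[1:], t2[1:]))
    # Mismatch: introduce (or reuse) a variable for this pair of subterms,
    # so that repeated mismatches map to the *same* variable.
    if (t1, t2) not in subst:
        subst[(t1, t2)] = f"X{counter[0]}"
        counter[0] += 1
    return subst[(t1, t2)]

# Example: f(a, g(a)) and f(b, g(b)) generalize to f(X0, g(X0)).
print(anti_unify(("f", "a", ("g", "a")), ("f", "b", ("g", "b"))))
# ('f', 'X0', ('g', 'X0'))
```

Reusing the same variable for repeated mismatches is what preserves the shared structure, here the fact that both occurrences of the differing constant are the same within each term.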

In the second part of the talk I will discuss an application of HDTP to modeling essential parts of concept blending processes, a current "hot topic" in Cognitive Science. Here, I will sketch an analogy-inspired formal account of concept blending, developed in the European FP7-funded Concept Invention Theory (COINVENT) project, which, among other components, combines HDTP with mechanisms from Case-Based Reasoning.

Marc Denecker

Associate Research Professor
Head of the Knowledge Representation and Reasoning Group
Department of Computer Science
Catholic University of Leuven (KU Leuven), Leuven, Belgium



Prof. Dr. Marc Denecker studied at the Catholic University of Leuven (KU Leuven) in Belgium, where he also obtained his PhD and where he has worked ever since, with the exception of a two-year period at the Université Libre de Bruxelles. His current interests range from theoretical topics such as the foundations of knowledge representation, nonmonotonic reasoning, logic programming, classical logic, and fixpoint and modal logics, to building inference systems for integrations of these logics and the development of applications.

The FO(.) Knowledge Base System project


The goal of this project is to build a Knowledge Base System for an expressive knowledge representation language. Such systems make it possible to separate declarative knowledge from the problems that arise in the application domain, allowing the same knowledge base to be reused to solve different computational tasks by applying different forms of inference. On the logical level, we start from classical first-order logic (FO) (the notation FO(.) is used here as a generic term to denote extensions of classical first-order logic FO). Into this logic we integrate various language constructs from different computational logic paradigms: types, inductive definitions, aggregates, (bounded) arithmetic, ... The goal is to achieve an expressive, cleanly integrated knowledge representation language with possible-world semantics and a well-understood informal semantics of mathematical precision. On the computational level, the project aims to integrate and extend technologies developed in various computational logic fields to build a Knowledge Base System that supports various forms of inference.

Motivations, principles, and research questions raised by such a project will be discussed. I will give an overview and demonstration of the current IDP system and some applications. An application for interactive configuration will serve to highlight a principle that distinguishes declarative modelling from programming: the separation of knowledge from problems and the possibility of applying multiple forms of inference to the knowledge base to solve different computational tasks. We discuss how even interactive systems can be described and "run" within FO(.).
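The reuse of one knowledge base under several forms of inference can be illustrated with a hypothetical, purely propositional Python sketch (not the IDP system or the FO(.) language): the same declarative constraint serves both model checking and model expansion, here by brute-force enumeration.

```python
# Illustrative sketch (not IDP): one declarative "knowledge base" reused for
# two inference tasks, model checking and model expansion. The atom names
# and the toy KB are invented for the example.
from itertools import product

ATOMS = ["light_on", "switch_up", "power"]

def kb(w):
    """The knowledge base as a constraint on a world w (dict atom -> bool):
    the light is on iff the switch is up and there is power."""
    return w["light_on"] == (w["switch_up"] and w["power"])

def check(world):
    """Model checking: does this complete world satisfy the KB?"""
    return kb(world)

def expand(partial):
    """Model expansion: complete a partial world in all KB-satisfying ways."""
    free = [a for a in ATOMS if a not in partial]
    for bits in product([False, True], repeat=len(free)):
        world = dict(partial, **dict(zip(free, bits)))
        if kb(world):
            yield world

print(check({"light_on": True, "switch_up": True, "power": True}))  # True
for model in expand({"switch_up": True}):
    print(model)  # two completions: power absent and light off, or both on
```

The point of the sketch is that `kb` is written once, with no commitment to a task: `check` and `expand` are different inference procedures applied to the same declarative knowledge.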

Torsten Schaub

Institut für Informatik
University of Potsdam, Potsdam, Germany



Torsten Schaub received his diploma and dissertation in informatics in 1990 and 1992, respectively, from the Technical University of Darmstadt, Germany, and his habilitation in informatics in 1995 from the University of Rennes I, France. From 1990 to 1993 he was a research assistant at the Technical University of Darmstadt. From 1993 to 1995 he was a research associate at IRISA/INRIA in Rennes. In 1995 he became University Professor at the University of Angers, and since 1997 he has been University Professor for knowledge processing and information systems at the University of Potsdam. In 1999 he became Adjunct Professor at the School of Computing Science at Simon Fraser University, Canada, and since 2006 he has also been an Adjunct Professor in the Institute for Integrated and Intelligent Systems at Griffith University, Australia. Since 2014, Torsten Schaub has held an Inria International Chair at Inria Rennes - Bretagne Atlantique. He became a Fellow of ECCAI in 2012, and in 2014 he was elected President of the Association for Logic Programming. He served as program (co-)chair of LPNMR'09, ICLP'10, and ECAI'14. His research interests range from the theoretical foundations to the practical implementation of reasoning from incomplete, inconsistent, and evolving information. His current research focus lies on Answer Set Programming and materializes in the open source project Potassco, which bundles software for Answer Set Programming developed at the University of Potsdam.

Hybrid reasoning with Answer Set Programming


Answer Set Programming (ASP) provides an approach to declarative problem solving that combines a rich yet simple modeling language with effective Boolean constraint solving capacities. This makes ASP a model, ground, and solve paradigm, in which a problem is expressed as a set of first-order rules, which are subsequently turned into a propositional format by systematically replacing all variables, before finally the models of the resulting propositional rules are computed. Due to its non-monotonic semantic foundations, ASP is particularly suited for modeling problems in the area of Knowledge Representation and Reasoning involving incomplete, inconsistent, and changing information. From a formal perspective, ASP allows for solving all search problems in NP (and NP^NP) in a uniform way. Hence, more generally, ASP is well suited for solving hard combinatorial search (and optimization) problems. Interesting applications of ASP include decision support systems for NASA shuttle controllers, industrial team-building, music composition, natural language processing, package configuration, phylogenetics, robotics, systems biology, timetabling, and many more.
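The model, ground, and solve pipeline can be sketched in Python for the special case of a definite (negation-free) program. The program, constants, and helper functions below are invented for illustration; real ASP systems additionally handle negation, choice constructs, and optimization, and compute stable models rather than the least model derived here.

```python
# Sketch of "model, ground, solve" for a *definite* (negation-free) program.
# Atoms are tuples (predicate, arg1, ..., argn); variables are uppercase.
from itertools import product

# Modeling: facts plus the first-order rules
#   reach(X,Y) :- edge(X,Y).
#   reach(X,Z) :- reach(X,Y), edge(Y,Z).
facts = {("edge", "a", "b"), ("edge", "b", "c")}
rules = [
    (("reach", "X", "Y"), [("edge", "X", "Y")]),
    (("reach", "X", "Z"), [("reach", "X", "Y"), ("edge", "Y", "Z")]),
]
constants = ["a", "b", "c"]

def ground(rules, constants):
    """Grounding: systematically replace every variable by every constant."""
    ground_rules = []
    for head, body in rules:
        vars_ = sorted({t for atom in [head] + body for t in atom[1:] if t.isupper()})
        for vals in product(constants, repeat=len(vars_)):
            s = dict(zip(vars_, vals))
            inst = lambda atom: (atom[0],) + tuple(s.get(t, t) for t in atom[1:])
            ground_rules.append((inst(head), [inst(a) for a in body]))
    return ground_rules

def solve(facts, ground_rules):
    """Solving: least-model computation by forward chaining to a fixpoint."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in ground_rules:
            if head not in model and all(a in model for a in body):
                model.add(head)
                changed = True
    return model

model = solve(facts, ground(rules, constants))
print(sorted(a for a in model if a[0] == "reach"))
# [('reach', 'a', 'b'), ('reach', 'a', 'c'), ('reach', 'b', 'c')]
```

Even in this toy, grounding is the expensive step: each rule yields one propositional instance per assignment of constants to variables, which is exactly why large numeric domains, discussed below, are problematic for plain ASP.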

However, despite its growing popularity, ASP is not a silver bullet. For instance, it became clear early on that ASP fails to handle large numeric domains. This was addressed by Gelfond et al. in 2005 by proposing an integration of ASP and Constraint Processing (CP). This influential work has given rise to the subarea of Constraint ASP (CASP). Although this is an exemplar of hybridizing ASP, the need for integrating special-purpose reasoning is omnipresent when it comes to attacking real-world applications. This includes the integration of ASP with linear programming in bio-informatics, with geometrical reasoning in robotics, with simulation in hardware design, and many more. This reveals the need for a principled way of integrating ASP with dedicated reasoning formalisms, both at the semantic and the implementation level. Although this development has already been anticipated in the area of Satisfiability Testing (SAT), leading to the subfield of SAT Modulo Theories (SMT), it only serves as a limited blueprint for ASP. This is because (i) it only deals with solving and ignores modeling and grounding and (ii) it is monotonic and thus follows different semantic principles.

The talk will start with an introduction to CASP and sketch important aspects and insights gained in the development of the CASP solver clingcon. Building on this, we will describe the general framework for integrating theory reasoning into ASP offered by the fifth generation of the ASP system clingo. And finally we sketch a novel semantic approach to integrating ASP and CP, called the logic of Here-and-There with constraints.

Keith Stenning

Honorary Professor
The University of Edinburgh, Scotland, UK


Web: TBA

Keith Stenning's research interests centre on human reasoning and discourse processing. His two books are "Seeing Reason: Language and Image in Learning to Think" (2002) and, with Michiel van Lambalgen, "Human Reasoning and Cognitive Science". The latter laid out a multiple-logics program for the cognitive modelling of human reasoning which emphasised a distinction between reasoning "to" an interpretation and reasoning "from" one. This is essentially a unification of the fields of discourse processing and deductive reasoning/decision making. The book went on to show that experimental work in the psychology of reasoning had paid a high price for ignoring the former, accusing people of irrationality where the true fault is often the researchers' own failure to communicate what kind of reasoning their subjects are intended to do, and to appreciate how logically heterogeneous human reasoning has to be. More recently he has been engaged in work elucidating the cognitive foundations of reasoning, judgement, and decision making under different kinds of uncertainty. He was the founding director of the interdisciplinary Human Communication Research Centre at the University of Edinburgh. He is a Distinguished Fellow of the Cognitive Science Society and a Foreign Fellow of the Netherlands Academy of Sciences.

We reason in uncertainty, but of what kinds?


If logic is to be helpful in analysing human reasoning, we first need to acknowledge the heterogeneity of the kinds of reasoning that people do. There has been a strong shift in the study of human reasoning away from classical logic toward probability theory as the formal framework (Over, 2009), and for many researchers probability is all that is needed to analyse any human reasoning. Reasoning in this respect is held to be homogeneous. We have argued elsewhere that this move is from the frying pan of classical logic into the fire of probability, not because probability (or classical logic) cannot be useful, but because homogeneity is empirically and formally disastrous (Stenning et al., in press; Stenning and van Lambalgen, 2008; Besold et al., submitted). We take it that in AI this is commonplace. But some of the insights arising in cognition may be of interest to AI researchers. Engaging with logical multiplicity focusses attention on qualitatively different kinds of uncertainty and on how to characterise them. This talk will present some current thinking on that question. The idea is to use logics to individuate kinds of uncertainty and their twinned necessities. In particular, we contrast Logic Programming (LP) as a nonmonotonic logic, here specialised for analysing human discourse processing and with some track record in modelling discourse semantics, with classical logic on the one hand and probability on the other. When examined close up, it emerges just how different in kind the uncertainties of these three systems are.
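The nonmonotonicity that sets LP apart can be illustrated with the classic birds example under negation as failure. The sketch below is a hypothetical minimal Python rendering, not a general LP interpreter: adding information withdraws a previously licensed conclusion, which cannot happen under classical consequence.

```python
# Illustrative sketch of negation as failure for the rule
#   flies(X) :- bird(X), not penguin(X).
# over a fixed, finite set of individuals. "not penguin(X)" is read under
# the closed-world assumption: it holds whenever penguin(X) is not known.

def flies(individuals, birds, penguins):
    """Who flies, under the closed-world reading of 'not penguin(X)'."""
    return {x for x in individuals if x in birds and x not in penguins}

individuals = {"tweety", "polly"}

# With no penguin information, both birds are concluded to fly.
print(sorted(flies(individuals, birds={"tweety", "polly"}, penguins=set())))
# ['polly', 'tweety']

# Learning penguin(tweety) *withdraws* the earlier conclusion flies(tweety):
# the set of consequences shrinks as the set of premises grows.
print(sorted(flies(individuals, birds={"tweety", "polly"}, penguins={"tweety"})))
# ['polly']
```

The uncertainty captured here is of a different kind from probabilistic uncertainty: `flies(tweety)` is not held with some degree of confidence, but is a defeasible conclusion licensed only as long as nothing abnormal is known.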


Besold, T. R., Garcez, A., Stenning, K., & van der Torre, L. (submitted). Reasoning in non-probabilistic uncertainty: Logic programming and neural-symbolic computing as examples. Minds and Machines.

Stenning, K., Martignon, L., & Varga, A. (submitted). Adaptive reasoning: Integrating fast and frugal heuristics with a logic of interpretation. Decision.

Stenning, K., & van Lambalgen, M. (2008). Human reasoning and cognitive science. Cambridge, MA: MIT Press.

Over, D. E. (2009). New paradigm psychology of reasoning. Thinking & Reasoning, 15(4), 431-438.

JELIA 2016


Larnaca, Cyprus


09 - 11 November, 2016


Lordos Beach Hotel


For Academic Matters

Dr. Loizos Michael
Open University of Cyprus

Prof. Antonis C. Kakas
University of Cyprus
P: +357 22 892 700 or
    +357 22 892 706


For Local Arrangements

Easy Conferences

P: +357 22 591 900
F: +357 22 591 700