CAiSE’23 Welcome & Keynote 1 – Prof. Giancarlo Guizzardi: A Meaningful Road to Explanation

Cyber-human systems are systems formed by the coordinated interaction of human and computational components. The latter can only be justified in these systems to the extent that they are meaningful to humans – in both senses of ‘meaning’, i.e., in the sense of semantics as well as in the sense of purpose or significance. On the one hand, the data these components manipulate only acquire meaning when mapped to shared human conceptualizations of the world. On the other hand, these components can only be justified if ethically designed. Ultimately, we can only build trustworthy cyber-human systems if the interoperation of their components is meaning preserving, i.e., if we are able to semantically interoperate these components and to transparently demonstrate (i.e., explain) how their interoperation positively contributes to human values and goals. To meet these requirements, we must be able to explicitly reveal and safely relate the different theories of the world (i.e., ontologies) embedded in these components. In this talk, I discuss the strong relation that holds, under certain interpretations, between the notions of semantics, ontology, and explanation. Specifically, I will present a notion of explanation termed Ontological Unpacking, which aims at explaining symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications). I show that the models produced by Ontological Unpacking differ from their traditional counterparts not only in their expressivity but also in their nature: while the latter typically have a merely descriptive nature, the former have an explanatory one. Moreover, I show that it is exactly this explanatory nature that is required for semantic interoperability and, hence, trustworthiness. Finally, I discuss the relation between Ontological Unpacking and other forms of explanation in philosophy and science, as well as in Artificial Intelligence. I will argue that the current trend in XAI (Explainable AI), in which “to explain is to produce a symbolic artifact” (e.g., a decision tree or a counterfactual description), is an incomplete project resting on a false assumption: these artifacts are not “inherently interpretable”, and they should be taken as the beginning of the road to explanation, not its end.
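As a purely illustrative sketch (not taken from the keynote itself), the contrast between a descriptive and an explanatory model can be hinted at with a toy knowledge-graph fragment: a flat assertion such as marriedTo(John, Mary) merely describes a state of affairs, whereas an "unpacked" version, here modeled with a hypothetical Relator class loosely in the spirit of the relator pattern from Guizzardi's work, makes explicit the entity (the marriage) and the roles that ground, and thereby help explain, the relationship.

```python
from dataclasses import dataclass, field

# Descriptive model: a flat, unexplained assertion (a typical KG triple).
flat_triples = [("John", "marriedTo", "Mary")]

# Explanatory (ontologically "unpacked") model: the relationship is grounded
# in an explicit relator (the marriage itself) that binds role-playing parties.
# Class, field, and value names below are hypothetical, chosen for illustration only.
@dataclass
class Relator:
    kind: str                                   # e.g. "Marriage"
    founded_by: str                             # the event that brought it about
    roles: dict = field(default_factory=dict)   # role name -> bearer

marriage = Relator(
    kind="Marriage",
    founded_by="wedding-ceremony",
    roles={"husband": "John", "wife": "Mary"},
)

# The flat triple can now be derived from (and explained by) the relator,
# instead of being an opaque primitive of the model.
derived = [(marriage.roles["husband"], "marriedTo", marriage.roles["wife"])]

print(flat_triples == derived)  # True: same description, but grounded in an explanation
```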

