Keynote 3 – Prof. Pnina Soffer. Theory-guided information systems engineering.

Information systems are human-cyber systems, and their engineering should consider both the human and the technological side. The talk will focus on understanding the human side as a guiding principle for IS engineering (ISE). A multitude of theories concerning humans in the context of IS have been proposed, developed, adapted, or adopted in the related area of management information systems (MIS). These theories mostly serve to explain or predict human behavior in the context of IS. The talk will address cases where such theories instead serve to guide the development of IS artifacts (systems, processes, methods, models). As examples, I will discuss a number of long-established theories and assess their usefulness in guiding ISE and ISE research, in an attempt to indicate what it takes for a theory to be useful for ISE. I will also discuss the difference between bottom-up observational ISE studies and top-down theory-guided ones. Finally, I will describe some of my recent theory-guided work.

Report from the sustainability chairs & Keynote 2 – Bran Selic. Matching Software with Reality

The essential nature of modern software methods and corresponding technologies can be traced to the earliest applications of computers. The very term “computing” clearly reveals a foundation firmly grounded in mathematical logic and an algorithmic worldview. However, the range of applications of computers has grown immensely since those early days, and we are now in an era of what are rather euphemistically termed “smart” systems. Perhaps the most outstanding characteristic of such systems is that they are intended to do the “intelligent thing” when interacting with complex, idiosyncratic, and potentially unpredictable physical and/or social environments. In this talk, we first examine the primary shortcomings of our current computing methods and technologies in addressing such contexts, after which we identify some possible research directions that may hold the potential to deal more effectively with these circumstances.

CAiSE’23 Welcome & Keynote 1 – Prof. Giancarlo Guizzardi. A Meaningful Road to Explanation

Cyber-human systems are systems formed by the coordinated interaction of human and computational components. The latter can only be justified in these systems to the extent that they are meaningful to humans - in both senses of 'meaning', i.e., in the sense of semantics as well as in the sense of purpose or significance. On one hand, the data these components manipulate only acquire meaning when mapped to shared human conceptualizations of the world. On the other hand, they can only be justified if ethically designed. Ultimately, we can only build trustworthy cyber-human systems if the interoperation of their components is meaning-preserving, i.e., if we are able to semantically interoperate these components and transparently demonstrate (i.e., explain) how their interoperation positively contributes to human values and goals. To meet these requirements, we must be able to explicitly reveal and safely relate the different theories of the world (i.e., ontologies) embedded in these components. In this talk, I discuss the strong relation between the notions of semantics, ontology, and explanation under certain interpretations. Specifically, I will present a notion of explanation termed Ontological Unpacking, which aims at explaining symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications). I show that the models produced by Ontological Unpacking differ from their traditional counterparts not only in their expressivity but also in their nature: while the latter typically have a merely descriptive nature, the former have an explanatory one. Moreover, I show that it is exactly this explanatory nature that is required for semantic interoperability and, hence, trustworthiness. Finally, I discuss the relation between Ontological Unpacking and other forms of explanation in philosophy and science, as well as in Artificial Intelligence. I will argue that the current trend in XAI (Explainable AI), in which “to explain is to produce a symbolic artifact” (e.g., a decision tree or a counterfactual description), is an incomplete project resting on a false assumption: these artifacts are not “inherently interpretable”, and they should be taken as the beginning of the road to explanation, not the end.
