Please note: This website is no longer maintained. [Last update: January 2017]
Explanation, trust, and transparency are concepts that are strongly associated with information systems. The ability to explain reasoning processes and results can substantially affect the usability and acceptance of a software system.
Within the field of knowledge-based systems, explanations are an important link between humans and machines. Their main purpose there is to increase the user's confidence in the system's result (persuasion) or in the system as a whole (satisfaction) by providing evidence of how the solution was derived (transparency). In recommender systems, for example, good explanations can help inspire user trust and loyalty, and make it quicker and easier (efficiency) for users to find what they want (effectiveness).
Explanations are part of human understanding processes and part of dialogues. They need to be incorporated into system interactions to improve interactive decision-making processes and, ultimately, to make such systems more robust and dependable. As information systems grow ever more complex, it becomes increasingly important for them to have advanced explanation capabilities.
This website documents the quest for explanation-awareness in the design and implementation of computing systems.
Thomas Roth-Berghofer was Professor in Artificial Intelligence and Head of Research and Enterprise in the School of Computing and Engineering, University of West London, from 2011 to 2017. He led the Centre for Intelligent Computing.
This website was his personal scratchpad for documenting and exchanging research results on the topic of explanation, drawing on Artificial Intelligence, Philosophy, Psychology, and related fields, in the context of complex information systems.