Talk:Explainable artificial intelligence
WikiProject Articles for creation (Start-class) · WikiProject Technology (Start-class)
Recent edits
@AIxprt: This recent edit removed one of the references from this article. Why did you remove this reference? Jarble (talk) 13:10, 13 August 2019 (UTC)
Changes for History Section, Research in Explanation in the 70s, 80s, and 90s in Symbolic AI
The previous discussion of XAI did not address the large amount of work done on explanation in the 70s, 80s, and 90s. Although XAI is commonly used in the context of deep learning, restricting discussions of XAI to deep learning alone presupposes that XAI will only be developed in that context and not in the context of hybrid symbolic / deep learning systems. At a minimum, the earlier work needs to be covered to do justice to the historical record.
I just wanted to explain a bit more about the changes I added to flesh out the history of explanation in the 70s, 80s, and 90s. The previous coverage mentioned only MYCIN, ignoring the explanation research in intelligent tutoring systems, causal reasoning, explanation-based learning, and truth maintenance systems; my additions try to correct that omission.
Indeed, much more could also be said about the ability of current ontological, semantic web, knowledge-based, and intelligent tutoring systems to support explanation. I haven't pursued that at this point, nor the point that symbolic approaches primarily address what Daniel Kahneman calls System 2 (slow, deliberate) thinking, while deep learning approaches better address System 1 (fast, intuitive) thinking. I know that Yoshua Bengio and Gary Marcus have debated this, and that others such as Doug Lenat, Chris Re, and Oren Etzioni, plus others I am most likely missing, have also made this distinction. The point here is just to give prior work its due, without taking away from all the amazing accomplishments of deep learning.
Yet more could be said about the role of explanation in abduction, such as Jerry Hobbs' work on TACITUS, and about the role of explanation in explanation-based learning and reasoning by analogy, as covered by some of the chapters in Machine Learning, Volume III. Veritas Aeterna (talk) 04:12, 21 January 2020 (UTC)
Restoring Discussion of Work in 70s, 80s, and 90s
Mindpit deleted the discussion I added of the history of explanation-related work in symbolic reasoning during the 70s–90s. The edit summary was 'Removed an unnecessary line(and its references) that seems to have been put by one of the authors of the cited paper in order to promote his work.', but no explanation was provided on the talk page, and no mention was made of which work or reference was in question.
I am not the author or a co-author of ANY work cited in the text I added. If Mindpit has a specific objection to a reference, could he or she please let me know, and we can add alternatives that make the same point?
The point of the section is to show the nature and volume of the work in symbolic reasoning at that time relevant to explanation, and to provide concrete examples to illustrate what is being discussed. If there is an objection to a specific reference, I can find other references to make the same point.
I'd request that Mindpit remove only the line or reference they object to, or for which they desire additional references, rather than the whole section. I wanted to send a notice to Mindpit but could not find a user or talk page for him or her, so he or she is welcome to reply here or notify me of any comments or disagreements.
Thanks.
Veritas Aeterna (talk) 20:56, 17 February 2020 (UTC)
What is a white box?
The article has this line: "Nevertheless, genetic programming naturally works as a white box.[24][25]" What is a white box? It is not a common term in the industry, no definition is provided, and, most worryingly, the two cited sources [24] and [25] provide no indication of what a white box might be. What is this sentence trying to accomplish in the article?
Thank you. Populationecology (talk) 13:15, 30 April 2020 (UTC)
- Feel free to remove it right away if you like; otherwise, we can leave it a week to see if someone can supply a source. Rolf H Nelson (talk) 04:02, 1 May 2020 (UTC)
Sounds good. I waited 10 days after your comment and then removed the sentence, as it was still unsupported. Anyone who needs to clean up the talk page is welcome to delete this talk section, but for now I will leave it here.
Thank you. Populationecology (talk) 15:04, 11 May 2020 (UTC)
Proposing Split (Interpretable is not the same as explainable)
Interpretability and explainability are two related concepts that are often used interchangeably, but they have slightly different meanings in the context of machine learning and artificial intelligence. While both aim to provide understanding of, and insight into, how a machine learning model makes its predictions or decisions, they approach the problem from different perspectives.

Interpretability refers to the ability to understand or make sense of the internal workings of a machine learning model. It focuses on the relationships between the input features and the model's output. A model is considered interpretable if its inner workings can be easily understood by a human or represented in a simple, transparent manner. For example, a linear regression model is highly interpretable because the relationship between the input features and the output is explicitly expressed in the form of coefficients.

Explainability, on the other hand, goes beyond interpretability and aims to provide a more comprehensive understanding of the model's behavior by explaining why a particular prediction or decision was made. It focuses on human-understandable explanations that can justify or rationalize the model's output. Explainable AI techniques try to answer questions such as "Why did the model make this prediction?" or "What were the key factors that influenced the decision?". The goal is to provide insight into the model's decision-making process, often through visualization, natural language explanations, or highlighting important features.

In summary, interpretability is concerned with understanding the internal mechanics of a model, while explainability is concerned with providing understandable justifications for the model's predictions or decisions. Interpretability focuses on the model itself, while explainability focuses on the output and its reasoning. Both concepts are important in different contexts and have different techniques and tools associated with them. Geysirhead (talk) 11:38, 11 June 2023 (UTC)
- If interpretability is generally a subset of explainability in the literature, I have no problem with the status quo. IMHO we should leave it all in one article unless/until it grows too long and needs to be split. Rolf H Nelson (talk) 19:09, 11 June 2023 (UTC)
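As a rough illustration of the distinction discussed above, here is a minimal sketch, assuming scikit-learn and NumPy; the synthetic data and feature names are purely hypothetical. Printing a linear model's coefficients shows its interpretable internals, while attributing a single prediction to per-feature contributions is one simple way to "explain" an individual output.

```python
# Toy illustration of interpretability vs. explainability on a linear model.
# Assumes scikit-learn and NumPy; the synthetic data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
feature_names = ["age", "income", "tenure"]  # made-up feature names
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)

# Interpretability: the model's internals are directly readable as coefficients.
for name, coef in zip(feature_names, model.coef_):
    print(f"{name}: weight = {coef:+.3f}")

# Explainability: justify one particular prediction by attributing it to features.
x = X[0]
baseline = model.predict(X.mean(axis=0, keepdims=True))[0]  # prediction at the average input
contributions = model.coef_ * (x - X.mean(axis=0))          # how each feature shifts this output
print(f"baseline prediction: {baseline:.3f}")
for name, c in zip(feature_names, contributions):
    print(f"{name} moved this prediction by {c:+.3f}")
print(f"model output for this example: {model.predict(x.reshape(1, -1))[0]:.3f}")
```

For a linear model the two views coincide, since the per-feature contributions sum from the baseline to the prediction; the distinction matters more for black-box models, where explanation methods can only approximate such attributions after the fact.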