Talk:Right to explanation

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by First Comet (talk | contribs) at 09:14, 19 August 2023 (Criticism?: new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
WikiProject Human rights (Unassessed)
This article is within the scope of WikiProject Human rights, a collaborative effort to improve the coverage of Human rights on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has not yet received a rating on Wikipedia's content assessment scale.
This article has not yet received a rating on the project's importance scale.

Criticism?

From Right to explanation#Criticism:

More fundamentally, many algorithms used in machine learning are not easily explainable. For example, the output of a deep neural network depends on many layers of computations, connected in a complex way, and no one input or computation may be a dominant factor. The field of Explainable AI seeks to provide better explanations from existing algorithms, and algorithms that are more easily explainable, but it is a young and active field.

How is this criticism? The whole notion of the right to explanation is that one cannot rely on opaque algorithms, where the association between the inputs and the outputs cannot be easily explained, to make automated decisions about people, so the fact that "many algorithms used in machine learning are not easily explainable" does not seem to constitute a criticism of the concept. Rather, the paragraph is simply describing it. First Comet (talk) 09:14, 19 August 2023 (UTC)[reply]