Algorithmic transparency
Algorithmic transparency is the principle that the factors that influence the decisions made by algorithms should be visible, or transparent, to the people who use, regulate, and are affected by systems that employ those algorithms. Although the phrase was coined in 2016 by Nicholas Diakopoulos and Michael Koliska to describe the role of algorithms in deciding the content of digital journalism services,[1] the underlying principle dates back to the 1970s and the rise of automated systems for scoring consumer credit.
The phrases algorithmic transparency and algorithmic accountability[2] are sometimes used interchangeably, especially since they were coined by the same people, but they have subtly different meanings. Specifically, algorithmic transparency requires that the inputs to an algorithm and the algorithm's use itself be known, but it does not require that they be fair. Algorithmic accountability implies that the organizations that use algorithms must be accountable for the decisions those algorithms make, even though the decisions are made by a machine rather than by a human being.
Current research on algorithmic transparency addresses both the societal effects of accessing remote services that run algorithms[3] and the mathematical and computer science approaches that can be used to achieve algorithmic transparency in practice.[4]
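As a minimal illustration of one computer-science approach of this kind, the following Python sketch probes a black-box decision function by perturbing one input at a time and observing how the output changes, a simple form of sensitivity analysis. The `credit_score` function is a hypothetical stand-in for a proprietary model, not an example drawn from the cited research.

```python
# Sketch: estimate each input's influence on a black-box decision
# function by nudging that input and measuring the output change.

def credit_score(income: float, debt: float, age: float) -> float:
    # Hypothetical opaque model; in practice this would be a remote,
    # proprietary service whose internals are not visible.
    return 0.5 * income - 2.0 * debt + 0.1 * age

def sensitivity(fn, inputs: dict, key: str, delta: float = 1.0) -> float:
    """Return how much fn's output moves when one input is nudged by delta."""
    base = fn(**inputs)
    perturbed = dict(inputs, **{key: inputs[key] + delta})
    return fn(**perturbed) - base

applicant = {"income": 40.0, "debt": 10.0, "age": 35.0}
for factor in applicant:
    print(factor, sensitivity(credit_score, applicant, factor))
    # income 0.5, debt -2.0, age 0.1: debt dominates this model's decisions.
```

Probes of this sort treat the system as a black box, which makes them applicable even when the operator will not disclose the model itself.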
In 2017, the Association for Computing Machinery US Public Policy Council issued a Statement on Algorithmic Transparency and Accountability that lists seven principles intended to promote the societal benefits of algorithm use while minimizing its potential harms. The seven principles are:
1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.
6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.[5] A simple example of such a test is sketched below this list.
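As an illustration of the kind of test principle 7 calls for, the following sketch implements the "four-fifths rule", a rough screen for disparate impact drawn from US employment-discrimination practice: a decision process is flagged if any group's selection rate falls below 80% of the highest group's rate. The group labels and decision data are illustrative only; real audits require far more careful statistics.

```python
# Sketch: four-fifths (80%) rule as a simple disparate-impact screen.
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, accepted) pairs -> selection rate per group."""
    totals, accepted = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        accepted[group] += ok  # ok is 1 if accepted, 0 if rejected
    return {g: accepted[g] / totals[g] for g in totals}

def passes_four_fifths(decisions) -> bool:
    """True unless some group's rate is below 80% of the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

outcomes = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(selection_rates(outcomes))    # {'A': 0.667, 'B': 0.333}
print(passes_four_fifths(outcomes)) # False: B's rate < 0.8 * A's rate
```

A test like this requires only the algorithm's inputs and outputs, so it can be run by an external auditor without access to the model's internals.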
References
- ^ Diakopoulos, Nicholas; Koliska, Michael (2016). "Algorithmic Transparency in the News Media". Digital Journalism. doi:10.1080/21670811.2016.1208053.
- ^ Diakopoulos, Nicholas (2015). "Algorithmic Accountability: Journalistic Investigation of Computational Power Structures". Digital Journalism. 3 (3): 398–415.
- ^ "Workshop on Data and Algorithmic Transparency". 2015. Retrieved 4 January 2017.
- ^ "Fairness, Accountability, and Transparency in Machine Learning". 2015. Retrieved 29 May 2017.
- ^ "USACM Issues Statement on Algorithmic Transparency and Accountability". Association for Computing Machinery. January 12, 2017. https://www.acm.org/articles/bulletins/2017/january/usacm-statement-algorithmic-accountability