Algorithmic transparency
Algorithmic transparency is the principle that the factors influencing the decisions made by algorithms should be visible, or transparent, to the people who use, regulate, and are affected by the systems that employ those algorithms. Although the phrase was coined in 2016 by Nicholas Diakopoulos and Michael Koliska in the context of the role of algorithms in deciding the content of digital journalism services,[1] the underlying principle dates back to the 1970s and the rise of automated systems for scoring consumer credit.
The phrases "algorithmic transparency" and "algorithmic accountability"[2] are sometimes used interchangeably – especially since they were coined by the same people – but they have subtly different meanings. Specifically, "algorithmic transparency" states that the inputs to the algorithm and the algorithm's use itself must be known, but they need not be fair. "Algorithmic accountability" implies that the organizations that use algorithms must be accountable for the decisions made by those algorithms, even though the decisions are being made by a machine, and not by a human being.[3]
Current research around algorithmic transparency examines both the societal effects of accessing remote services that run algorithms,[4] and the mathematical and computer science approaches that can be used to achieve algorithmic transparency.[5] In the United States, the Federal Trade Commission's Bureau of Consumer Protection studies how algorithms are used by consumers, conducting its own research on algorithmic transparency and funding external research.[6]
References
- ^ Diakopoulos, Nicholas; Koliska, Michael (2016). "Algorithmic Transparency in the News Media". Digital Journalism. doi:10.1080/21670811.2016.1208053.
- ^ Diakopoulos, Nicholas (2015). "Algorithmic Accountability: Journalistic Investigation of Computational Power Structures". Digital Journalism. 3 (3): 398–415.
- ^ Dickey, Megan Rose (30 April 2017). "Algorithmic Accountability". TechCrunch. Retrieved 4 September 2017.
- ^ "Workshop on Data and Algorithmic Transparency". 2015. Retrieved 4 January 2017.
- ^ "Fairness, Accountability, and Transparency in Machine Learning". 2015. Retrieved 29 May 2017.
- ^ Noyes, Katherine (9 April 2015). "The FTC is worried about algorithmic transparency, and you should be too". PCWorld. Retrieved 4 September 2017.