Wikipedia:WikiProject AI Cleanup/Policies
At the onset of the 2020s AI boom, Wikipedia's existing content policies already addressed many of the emerging AI-related concerns that prompted other platforms and organizations to adopt dedicated new policies; consequently, Wikipedia has no single all-encompassing, detailed "AI use policy", "AI-generated content policy", "AI content guideline", or the like. The essay Wikipedia:Large language models § Risks and relevant policies aims to explain how the broad core content policies and the copyrights policy interact with the use of AI tools, mostly in the domain of text.
A dedicated guideline in this area does exist: Wikipedia:Writing articles with large language models (WP:NEWLLM). It is the closest thing on English Wikipedia to an explicit "AI policy" page, but it is intentionally spartan, comprising a single point: large language models should not be used to generate new Wikipedia articles from scratch. Still, disparate portions of other policies and guidelines contain provisions that specifically and explicitly address AI-generated content. The most important of these is the speedy deletion criterion Wikipedia:Speedy deletion § G15. LLM-generated pages without human review (WP:G15), which forms the policy basis for speedily deleting pages that can only plausibly have been generated by large language models and can be assumed not to have undergone reasonable human review. Taken together, NEWLLM and G15 reflect the project's expectations that large language models are not to be used to originate articles and that an editor who adds LLM-originated text anywhere on the site (not only in articles) should review it to ensure that it complies with all applicable policies and guidelines.
The remaining relevant (albeit non-dedicated) policies and guidelines are listed below (as of November 2025):
- Wikipedia:Image use policy § AI-generated images (WP:AIIMAGES), a policy section against the use of images wholly generated by AI
  - referred to in Wikipedia:No original research § Original images (WP:IMAGEOR)
- a portion of Wikipedia:Biographies of living persons § Images (WP:AIIMGBLP), a policy norm against the use of AI-generated images to depict subjects of BLPs
- a portion of Wikipedia:Manual of Style/Images § Editing images (MOS:AIUPSCALE), a MoS norm against the use of AI upscaling software
- a portion of Wikipedia:Public domain § Works ineligible for copyright protection (WP:NONCREATIVE), a guideline paragraph recounting the legal principle that works created by machines are not copyrightable (generally valid as of 2025[1])
- Wikipedia:Reliable sources § Sources produced by machine learning (WP:RSML), a guideline section against citing AI-generated content as sources
  - referred to in Wikipedia:Reliable sources/Perennial sources § Large language models (WP:RSPLLM), an information page
- Wikipedia:Talk page guidelines § LLM-generated (WP:AITALK), a guideline entry allowing the striking or collapsing of comments that are obviously generated by an LLM or similar AI technology
The following are not policies or guidelines, but still have some significance in this context:
- Wikipedia:Translation § Machine translation (WP:MACHINE), an information page section concerning neural machine translation (i.e., "AI translation")
- an entry in Wikipedia:Drafts § Reasons to move an article to draftspace (WP:DRAFTREASON), an explanatory essay about policies on editing and deletion
- a portion of Wikipedia:Guide to appealing blocks § Composing your request to be unblocked (WP:NICETRY), an explanatory essay stating that unblock requests that appear to be written using AI are likely to be summarily rejected
References
[edit]- ^ "Copyright and Artificial Intelligence". United States Copyright Office. Retrieved April 9, 2025.