Frontier AI Safety Institutes
| Abbreviation | AISIs |
|---|---|
| Formation | 2023 |
| Type | Government-backed technical bodies |
| Purpose | Research and evaluation of advanced artificial intelligence safety |
| Region served | Global |
| Main organ | International Network of AI Safety Institutes |
The Frontier AI Safety Institutes (AISIs) are a group of government-supported research bodies established to study and mitigate risks posed by advanced and general-purpose artificial intelligence (AI) systems, often referred to as frontier AI. These institutes operate independently from commercial developers and provide technical expertise to guide policy, regulation, and international cooperation on AI safety.[1][2]
Establishment by jurisdiction
Interest in creating government-backed AI safety institutes grew rapidly in 2023 following major advances in large-scale language and multimodal models.[3] Although the institutes share similar aims, each institute’s structure and authority depend on the host jurisdiction’s regulatory framework.
| Institute | Host Country/Region | Establishment Date | Core Mandate & Distinguishing Feature |
|---|---|---|---|
| AI Safety Institute (AISI) | United Kingdom | November 2023 | Evaluates advanced models for potentially harmful capabilities such as cyber misuse or biological threats; the first dedicated organisation of its kind.[1] |
| U.S. AI Safety Institute (U.S. AISI) | United States | November 2023 | Operates within the National Institute of Standards and Technology (NIST); develops testing methods and safety benchmarks for AI systems.[4] |
| EU AI Office | European Union | February 2024 | Established to implement and enforce the EU Artificial Intelligence Act; unique in combining both regulatory and enforcement authority over general-purpose AI models.[5] |
| Japan AI Safety Institute (J-AISI) | Japan | February 2024 | Functions within the Information-technology Promotion Agency (IPA); focuses on national AI safety standards and evaluation frameworks.[6] |
International network and cooperation
Because AI risks cross national boundaries, the institutes formed the International Network of AI Safety Institutes, a cooperative framework for research coordination and policy alignment.[2] The network works to harmonise testing approaches, share research, and strengthen global capacity for AI safety.
Key areas of collaboration include:
- Joint testing: Coordinated “red teaming” and model evaluations to identify shared vulnerabilities in advanced systems.
- Technical alignment: Developing interoperable benchmarks and testing standards for consistent assessment across countries.
- Information sharing: Facilitating secure exchange of research data, evaluation results, and analytical tools to support international oversight.[7]
Mission and core objectives
Each Frontier AI Safety Institute serves as a public-interest technical centre supporting responsible AI governance. Common objectives include:
- Technical evaluation: Conducting pre-deployment safety assessments of frontier models for potentially hazardous capabilities such as cyber intrusion, bioengineering misuse, or autonomous replication.[3]
- Foundational research: Advancing research in interpretability, alignment, and scalable oversight to improve understanding of complex AI systems.[7]
- Policy support: Providing evidence-based advice to policymakers to inform regulation, safety frameworks (such as the NIST AI Risk Management Framework), and international negotiations.[4]
Challenges and resources
Despite strong political backing, the Frontier AI Safety Institutes face several challenges in their development and operation.
- Rapid innovation: Technological progress in private AI development often outpaces the resources and capacity of public institutes.
- Funding limitations: Although national funding commitments are significant, they remain small compared with private R&D spending. For example, the UK AI Safety Institute received around £100 million for 2023–2025, a modest amount relative to industry budgets.[1]
- Talent competition: Attracting and retaining top AI researchers is difficult when private firms offer higher salaries, equity, and greater computing resources.
- Regulatory balance: Institutes must promote safety and accountability while avoiding policies that could slow innovation, particularly among smaller developers.[2]
See also
- Artificial intelligence safety
- AI alignment
- AI governance
- Ethics of artificial intelligence
- Risk management
References
1. "Prime Minister launches new AI Safety Institute." GOV.UK (2023).
2. "Global Network of AI Safety Institutes Announced." TIME Magazine (2024).
3. "AI Safety Institute: Approach to Evaluations." GOV.UK (2023).
4. "U.S. AI Safety Institute Established under NIST." U.S. Department of Commerce (2023).
5. "EU AI Office to Enforce AI Act." European Commission press release (2024).
6. "Japan Establishes National AI Safety Institute." IPA Japan (2024).
7. Campos, S., et al. "Frontier AI Risk Management Framework." arXiv (2025).
Category:Artificial intelligence safety Category:Artificial intelligence Category:Research institutes Category:Government agencies established in 2023 Category:Technology governance
