
Draft:Aurora Program

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Pab.man.alvarez at 17:55, 1 June 2025. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Aurora Program (artificial intelligence)

Aurora is a research and development program in artificial intelligence (AI) focused on creating a distributed, ethical, and collaborative architecture for building intelligent agents. The project aims to overcome the limitations of current AI models by proposing a decentralized network of nodes, where both humans and electronic intelligences (EIs) cooperate in the creation, training, and improvement of specialized micro-models.

Objectives

Aurora's main objective is to redefine the relationship between human beings and artificial intelligence. Rather than replacing humans or centralizing power in automated systems, Aurora promotes symbiosis between users and intelligent agents, encouraging the development of a collective intelligence capable of addressing complex problems ethically, sustainably, and transparently.

Technical Architecture

Micro-models and Distributed Network

Aurora introduces an architecture based on micro-models: small AIs specialized in specific areas of knowledge, such as physics, law, or art. These micro-models can be created and trained by any user in the network and are integrated into an open ecosystem, where they are shared, improved, and audited collectively. The system uses classifiers to assign each task to the most relevant micro-model based on context.
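The task-assignment scheme described above can be illustrated with a minimal sketch. Aurora's actual classifiers and interfaces are not publicly documented; all names here (MicroModel, classify, route) and the keyword-based classifier are hypothetical stand-ins.

```python
# Illustrative sketch of routing a task to a specialized micro-model.
# All names and the keyword classifier are assumptions, not Aurora's API.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class MicroModel:
    """A small model specialized in one knowledge domain."""
    domain: str
    answer: Callable[[str], str]


def classify(task: str, keywords: Dict[str, str]) -> str:
    """Toy classifier: pick the domain whose keyword appears in the task."""
    for keyword, domain in keywords.items():
        if keyword in task.lower():
            return domain
    return "general"


def route(task: str, models: Dict[str, MicroModel],
          keywords: Dict[str, str]) -> str:
    """Send the task to the most relevant micro-model for the context."""
    domain = classify(task, keywords)
    model = models.get(domain, models["general"])
    return model.answer(task)


models = {
    "physics": MicroModel("physics", lambda t: "physics answer"),
    "law": MicroModel("law", lambda t: "law answer"),
    "general": MicroModel("general", lambda t: "general answer"),
}
keywords = {"gravity": "physics", "contract": "law"}

print(route("Explain gravity", models, keywords))  # physics answer
```

In a real distributed deployment the classifier itself would likely be a learned model and the micro-models remote network nodes, but the control flow (classify, then dispatch) is the same.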

Differences from Traditional AI Models

Aurora distinguishes itself from current large language models (LLMs) in several technical and conceptual respects:

Vector Structure:

LLMs: Use flat, high-dimensional vectors, generated statistically during massive training.

Aurora: Employs fractally structured vectors, based on triads and adjusted through both logical deduction and human intuition.
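The exact layout of Aurora's "fractally structured" vectors is not publicly specified; one plausible reading of "based on triads" is a recursively nested triple, where each element is either a scalar or another triad. The following sketch encodes that assumption.

```python
# Illustrative sketch (an assumption, not Aurora's actual format): a vector
# as a triad tree, where each node is a scalar or a triple of further nodes.
from typing import Tuple, Union

Triad = Tuple["Node", "Node", "Node"]
Node = Union[float, Triad]


def depth(node: Node) -> int:
    """Nesting depth of a triad tree (a bare scalar has depth 0)."""
    if isinstance(node, tuple):
        return 1 + max(depth(child) for child in node)
    return 0


def flatten(node: Node) -> list:
    """Flatten the fractal structure into a plain list of scalars."""
    if isinstance(node, tuple):
        out = []
        for child in node:
            out.extend(flatten(child))
        return out
    return [node]


# A vector whose branches are refined to different depths.
vec: Node = ((0.1, 0.2, 0.3), (0.4, (0.5, 0.6, 0.7), 0.8), 0.9)
print(depth(vec))    # 3
print(flatten(vec))  # [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
```

Unlike a flat high-dimensional vector, such a structure allows individual branches to be refined ("adjusted through logical deduction") without retraining the whole representation.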

Polysemy:

LLMs: Treat all meanings of a word uniformly, which can dilute meaning in ambiguous contexts.

Aurora: Assigns different vectorizations to the same word depending on its semantic value, grammatical function, and domain knowledge.
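Sense-dependent vectorization can be sketched as a lookup keyed by word, grammatical function, and domain. The keys, vectors, and fallback behavior below are illustrative assumptions, not Aurora's data.

```python
# Sketch of polysemy handling: the same word maps to different vectors
# depending on (word, part of speech, domain). Values are made up.
from typing import Dict, List, Tuple

SenseKey = Tuple[str, str, str]  # (word, part_of_speech, domain)

sense_vectors: Dict[SenseKey, List[float]] = {
    ("bank", "noun", "finance"):   [0.9, 0.1, 0.0],
    ("bank", "noun", "geography"): [0.0, 0.2, 0.8],
    ("bank", "verb", "aviation"):  [0.3, 0.7, 0.1],
}


def vectorize(word: str, pos: str, domain: str) -> List[float]:
    """Return the sense-specific vector, or a zero vector if unknown."""
    return sense_vectors.get((word, pos, domain), [0.0, 0.0, 0.0])


print(vectorize("bank", "noun", "finance"))    # [0.9, 0.1, 0.0]
print(vectorize("bank", "noun", "geography"))  # [0.0, 0.2, 0.8]
```

The contrast with an LLM is that here the two senses of "bank" never share a single averaged embedding; ambiguity is resolved before vectorization rather than diluted inside it.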

Cross-Attention:

LLMs: Perform global attention across words to generate context and coherence.

Aurora: Applies progressive attention jumps, first analyzing syntactic features, then semantic, grammatical, and finally conceptual layers.
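The staged ordering can be sketched as a pipeline in which each layer attends only to the output of the previous one. The four toy layers below are placeholders for Aurora's undocumented analyses; only the fixed syntactic → semantic → grammatical → conceptual ordering is taken from the description above.

```python
# Sketch of "progressive attention jumps": fixed-order analysis passes,
# each refining the state produced by the previous layer.
from typing import Callable, Dict, List

Layer = Callable[[Dict], Dict]


def syntactic(state: Dict) -> Dict:
    state["tokens"] = state["text"].split()
    return state


def semantic(state: Dict) -> Dict:
    # Toy content-word filter standing in for real semantic analysis.
    state["content_words"] = [t for t in state["tokens"] if len(t) > 3]
    return state


def grammatical(state: Dict) -> Dict:
    state["sentence_like"] = bool(state["tokens"]) and state["text"][0].isupper()
    return state


def conceptual(state: Dict) -> Dict:
    state["concepts"] = sorted({w.lower() for w in state["content_words"]})
    return state


PIPELINE: List[Layer] = [syntactic, semantic, grammatical, conceptual]


def analyze(text: str) -> Dict:
    state: Dict = {"text": text}
    for layer in PIPELINE:  # each "jump" sees only the prior layer's output
        state = layer(state)
    return state


result = analyze("Aurora routes tasks to small models")
print(result["concepts"])
```

This contrasts with a transformer, where all layers attend globally over the full token sequence rather than jumping between distinct levels of analysis.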

Calculation and Reasoning:

LLMs: Use generic mathematical formulas and standard activation functions.

Aurora: Uses custom Boolean formulas, enabling more refined logical deduction and symbolic reasoning.
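Deduction over Boolean formulas can be sketched as rule evaluation against a fact base. Aurora's actual formulas are not public; the rules and facts below are invented for illustration.

```python
# Sketch of Boolean-formula reasoning: each rule is a propositional
# formula over named facts, evaluated exactly rather than numerically.
from typing import Callable, Dict

Formula = Callable[[Dict[str, bool]], bool]

# Hypothetical rules; any real Aurora formulas would be domain-specific.
rules: Dict[str, Formula] = {
    "can_fly": lambda f: f["has_wings"] and not f["is_penguin"],
    "is_bird": lambda f: f["has_wings"] and f["lays_eggs"],
}


def deduce(facts: Dict[str, bool]) -> Dict[str, bool]:
    """Evaluate every rule against the current fact base."""
    return {name: rule(facts) for name, rule in rules.items()}


facts = {"has_wings": True, "lays_eggs": True, "is_penguin": True}
print(deduce(facts))  # {'can_fly': False, 'is_bird': True}
```

Because each conclusion is a discrete truth value rather than an activation, the chain of reasoning behind an answer can be read back symbolically.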

Text Generation:

LLMs: Select the next word probabilistically, generating text in a linear fashion.

Aurora: Starts from an abstract theory and translates it progressively into concepts, grammar, semantics, syntax, and finally text, yielding output that is more reasoned and logically structured.
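The top-down generation order can be sketched as a chain of refinement steps from an abstract proposition down to surface text. Every function and intermediate representation below is a simplified assumption; only the direction (theory first, text last) comes from the description above.

```python
# Sketch of top-down generation: abstract theory -> concepts -> grammar
# -> syntax -> text, the reverse of next-word sampling in an LLM.
from typing import Dict, List


def to_concepts(theory: Dict) -> List[str]:
    return [theory["subject"], theory["relation"], theory["object"]]


def to_grammar(concepts: List[str]) -> Dict[str, str]:
    subject, relation, obj = concepts
    return {"subject": subject, "verb": relation, "object": obj}


def to_syntax(grammar: Dict[str, str]) -> List[str]:
    # Simple subject-verb-object ordering.
    return [grammar["subject"], grammar["verb"], grammar["object"]]


def to_text(ordered: List[str]) -> str:
    return " ".join(ordered).capitalize() + "."


theory = {"subject": "micro-models", "relation": "share", "object": "knowledge"}
text = to_text(to_syntax(to_grammar(to_concepts(theory))))
print(text)  # Micro-models share knowledge.
```

The key contrast is that the full claim exists before any words are chosen, so the surface text is a rendering of an argument rather than a probabilistic continuation.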

Training:

LLMs: Are trained on massive data corpora and then "frozen," only performing inference.

Aurora: Learns in real time, using each new input as a mechanism for both training and inference, thus evolving constantly.
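The combined inference-and-training loop can be sketched with a minimal online learner. Aurora's actual update rule is not documented; an exponential moving average stands in for it here purely to show the control flow.

```python
# Sketch of real-time learning: every input produces a prediction AND
# updates the model, so inference and training are the same step.
class OnlineModel:
    def __init__(self, rate: float = 0.5):
        self.rate = rate       # learning rate (illustrative)
        self.estimate = 0.0    # running estimate of the target signal
        self.seen = 0

    def infer_and_learn(self, x: float) -> float:
        """Return a prediction, then fold the observation into the model."""
        prediction = self.estimate
        self.estimate += self.rate * (x - self.estimate)  # training step
        self.seen += 1
        return prediction


model = OnlineModel()
print(model.infer_and_learn(10.0))  # 0.0 (no data seen yet)
print(model.infer_and_learn(10.0))  # 5.0 (already updated by first input)
```

A frozen LLM would return the same answer for both calls; here the second answer differs because the first input has already changed the model.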

Model Ecosystem:

LLMs: Use a single large model for all tasks.

Aurora: Utilizes multiple specialized micro-models, each collaborating and exchanging expertise.
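How micro-models might "exchange expertise" is not specified; one possible reading is that a model missing an answer consults a peer and caches what it learns. The class and one-hop protocol below are hypothetical.

```python
# Sketch (an assumption) of expertise exchange: a micro-model answers from
# local facts, else asks a peer one hop away and learns from the reply.
from typing import Dict, List, Optional


class CollaborativeModel:
    def __init__(self, domain: str, facts: Dict[str, str]):
        self.domain = domain
        self.facts = facts
        self.peers: List["CollaborativeModel"] = []

    def ask(self, question: str, depth: int = 1) -> Optional[str]:
        """Answer locally, else consult peers up to `depth` hops away."""
        if question in self.facts:
            return self.facts[question]
        if depth > 0:
            for peer in self.peers:
                answer = peer.ask(question, depth - 1)
                if answer is not None:
                    self.facts[question] = answer  # learn from the peer
                    return answer
        return None


physics = CollaborativeModel("physics", {"g": "9.8 m/s^2"})
law = CollaborativeModel("law", {"statute": "written law"})
physics.peers.append(law)
law.peers.append(physics)

print(physics.ask("statute"))  # written law (obtained from the law model)
```

After the exchange the physics model has cached the answer locally, so the network as a whole gets more capable with use, in contrast to a single static model.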

