
User talk:KernelChronicles

Your recent article submission to Articles for Creation has been reviewed. Unfortunately, it has not been accepted at this time. The reason left by Pythoncoder was: Please check the submission for any additional comments left by the reviewer. You are encouraged to edit the submission to address the issues raised and resubmit after they have been resolved.
pythoncoder (talk | contribs) 07:05, 25 July 2025 (UTC)[reply]
Hello, KernelChronicles! Having an article draft declined at Articles for Creation can be disappointing. If you are wondering why your article submission was declined, please post a question at the Articles for creation help desk. If you have any other questions about your editing experience, we'd love to help you at the Teahouse, a friendly space on Wikipedia where experienced editors lend a hand to help new editors like yourself! See you there! pythoncoder (talk | contribs) 07:05, 25 July 2025 (UTC)[reply]
Thank you for the feedback. I understand the concern about undue emphasis and notability. My intention was to update the section specifically related to infinite feature selection, where key foundational developments have been omitted. The current timeline on Wikipedia is incomplete and overlooks peer-reviewed work that directly shaped later attention mechanisms.
The sources I cited (e.g., IEEE TPAMI, ICCV proceedings) are among the most respected venues in AI and computer vision. These are independent, reliable, and highly selective publications. The edits were meant to improve factual accuracy and give a fuller picture of the topic’s historical development—not to promote any specific author.
I’m more than happy to rephrase or restructure the text to meet tone and neutrality guidelines. But I strongly believe these contributions should be reflected for Wikipedia to fulfill its goal of providing complete and verifiable information, especially for readers new to the field.
Please do take a look at the sources themselves. Let me know how best I can adjust the formulation to preserve both accuracy and compliance with Wikipedia’s editorial standards.
Best,
KernelChronicles KernelChronicles (talk) 04:54, 26 July 2025 (UTC)[reply]

Notability

Hi KernelChronicles, your expertise is appreciated, but so far your edits seem to place undue emphasis on the work of Giorgio Roffo and his colleagues. Even assuming you don't have a conflict of interest, Wikipedia has guidelines for what is considered notable or not; please familiarize yourself with them: WP:Notability. Marking an edit as "minor" also has a specific meaning on Wikipedia: it is reserved for superficial edits that don't affect the meaning, such as typo corrections (Help:Minor edit). Alenoach (talk) 01:31, 26 July 2025 (UTC)[reply]

The claim that the work of Roffo was foundational to the development of self-attention would need to be supported by significant independent, reliable sources; references to articles by Roffo are not sufficient to prove that (WP:Verifiability). Alenoach (talk) 02:06, 26 July 2025 (UTC)[reply]
Hi Alenoach, thank you for the feedback. I understand the concern about undue emphasis and notability. My intention was to update the section specifically related to infinite feature selection, where key foundational developments have been omitted. The current timeline on Wikipedia is incomplete and overlooks peer-reviewed work that directly shaped later attention mechanisms.
The sources I cited (e.g., IEEE TPAMI, ICCV proceedings) are among the most respected venues in AI and computer vision. These are independent, reliable, and highly selective publications. The edits were meant to improve factual accuracy and give a fuller picture of the topic’s historical development—not to promote any specific author.
I’m more than happy to rephrase or restructure the text to meet tone and neutrality guidelines. But I strongly believe these contributions should be reflected for Wikipedia to fulfill its goal of providing complete and verifiable information, especially for readers new to the field.
Please do take a look at the sources themselves. Let me know how best I can adjust the formulation to preserve both accuracy and compliance with Wikipedia’s editorial standards.
Best,
KernelChronicles 95.244.182.112 (talk) 04:50, 26 July 2025 (UTC)[reply]
If you have an independent, reliable source which says that Infinite Feature Selection played an important role in the development of attention, it could be mentioned in the article "Attention (machine learning)". But even in this case, it may just warrant a brief mention. As you can see in the History section, there are papers that have thousands of citations and are directly relevant to the topic but that are just briefly mentioned, because there is a large scientific literature on the topic. Alenoach (talk) 05:52, 26 July 2025 (UTC)[reply]
Thank you for your continued engagement. I would like to clarify a key point: the historical relevance of a scientific contribution is not defined solely by the number of citations, but by the originality and precedence of the ideas it introduced—especially when such ideas later became foundational.
The formulation of the affinity matrix A, used to evaluate features in a fully pairwise manner (i.e., all features vs. all features), and the subsequent computation of feature importance from this structure, was introduced in peer-reviewed form at the 2015 International Conference on Computer Vision (ICCV)—a top-tier, independent, and highly selective venue, co-sponsored by IEEE and CVF. This precedes the 2017 paper Attention is All You Need, and the structural similarity is not coincidental. The ICCV 2015 paper formalized the use of an affinity matrix for feature relevance estimation—a formulation that aligns mathematically with the attention mechanism’s core operation.
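To make the structural analogy concrete, here is a minimal sketch in Python. The affinity terms, normalization constants, and variable names are my own illustrative choices rather than the exact formulas from either paper; the point is only that both methods build a full pairwise affinity matrix and then derive per-element importance (or per-element output) from it.

import numpy as np

# Illustrative sketch only: an Inf-FS-style feature score built from a
# feature-by-feature affinity matrix A, aggregating paths of every length
# via the geometric series sum_{l>=1} A^l = (I - A)^{-1} - I.
def inf_fs_style_scores(X, alpha=0.5):
    # X: (n_samples, n_features) data matrix
    sigma = X.std(axis=0)
    sigma = sigma / (sigma.max() + 1e-12)                        # normalized spread per feature
    corr = np.nan_to_num(np.abs(np.corrcoef(X, rowvar=False)))   # pairwise feature correlation
    A = alpha * np.maximum(sigma[:, None], sigma[None, :]) + (1 - alpha) * (1 - corr)
    A = 0.9 * A / (np.linalg.norm(A, 2) + 1e-12)                 # keep the spectral radius below 1
    S = np.linalg.inv(np.eye(A.shape[0]) - A) - np.eye(A.shape[0])
    return S.sum(axis=1)                                         # relevance score per feature

# Scaled dot-product attention over tokens, shown for structural comparison:
# a pairwise affinity matrix (Q K^T), row-normalized, then used to weight the values.
def attention(Q, K, V):
    W = Q @ K.T / np.sqrt(Q.shape[-1])
    W = np.exp(W - W.max(axis=-1, keepdims=True))
    W = W / W.sum(axis=-1, keepdims=True)                        # row-wise softmax
    return W @ V

In both cases the n-by-n pairwise matrix is the central object; the difference is that Inf-FS scores the columns (features) of a fixed data matrix, while attention re-weights token representations within a sequence.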
This is a verifiable and factual contribution that belongs in the historical development of attention, and not recognizing it leads to an incomplete and potentially misleading timeline. I am happy to include only brief, neutrally phrased mentions with proper citations, including links such as:
ICCV 2015 paper (IEEE/CVF): https://openaccess.thecvf.com/content_iccv_2015/html/Roffo_Inf-FS_An_Effective_ICCV_2015_paper.html
In addition, methods like PageRank and Non-local Means also introduced affinity-based formulations that structurally anticipate components of modern attention. Including these references would help build a more accurate and informative historical account for readers and researchers alike.
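As a bare-bones illustration of that point, a minimal PageRank iteration (my own simplified sketch, omitting dangling-node handling) shows the same pattern of propagating importance over a pairwise affinity matrix:

import numpy as np

# Simplified PageRank sketch: importance scores propagated over a
# row-normalized affinity/adjacency matrix A (no dangling-node handling).
def pagerank(A, damping=0.85, iters=100):
    P = A / (A.sum(axis=1, keepdims=True) + 1e-12)   # row-stochastic transition matrix
    n = A.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (P.T @ r)  # redistribute importance along affinities
    return r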
To be transparent, I am a researcher with a PhD in Computer Science and work in this domain. I fully understand and respect Wikipedia’s policies on neutrality, verifiability, and notability. I am more than willing to rewrite the section to strictly adhere to encyclopedic tone and rely only on independently published, high-quality sources. But I do believe that omitting such influential prior work—even with modest citation counts—is not consistent with Wikipedia’s goal of providing a complete and accurate public knowledge resource.
I’d appreciate your reconsideration of the edit and am happy to collaborate on a revised version of the history section that includes this broader context.
Kind regards,
KernelChronicles KernelChronicles (talk) 06:59, 26 July 2025 (UTC)[reply]
This is the original paper on Infinite Feature Selection. It does not prove that Infinite Feature Selection was important to the development of attention. It does not use the term "attention" to refer to any AI-related mechanism. And by the way, most of your references so far look LLM-generated and had incorrect URLs. Alenoach (talk) 07:28, 26 July 2025 (UTC)[reply]