Talk:Integer factorization
WikiProject Mathematics (B-class, High-priority)
WikiProject Cryptography: Computer science (B-class, Mid-importance)
This page has archives. Sections older than 365 days may be automatically archived by Lowercase sigmabot III when more than 10 sections are present.
Diffie-Hellman problem and integer factorization
I thought Diffie-Hellman required calculating logarithms in a finite field, not factorisation, or are the problems somehow related? User:Hagedis
- Yes, Diffie-Hellman is discrete logarithm. So is DSA. Discrete logarithm for integers modulo a prime appears to be related somehow to factorisation, but a factorisation breakthrough is not guaranteed to solve discrete logarithm. The current champion method for both problems is the Number Field Sieve, and there are only minor differences between the version that does factoring and the version that does discrete logarithm. Also, Shor's algorithm (which works on proposed quantum computers: no-one knows if they can actually be built large enough to be useful) would blow both problems out of the water.
- But there is no guarantee (as far as we know at the moment) that a breakthrough factoring algorithm would also do discrete logarithm. I'm not sure if a breakthrough discrete log method is guaranteed to do factoring as well. Several otherwise reputable people have claimed that it would, but I have never seen a proof, or a cite of one.
- Yes, that's right. I wasn't thinking when I typed the list. I'll remove the DH/DSS entries. --LC
I think I have found one property by which I can generate a prime number series, but in that series we have to remove some numbers by using the property which is used to make the series. If anyone thinks this is a significant point please mail me at kgujarat@usc.edu.
To summarize the connection between factoring and discrete logs: let n = pq where p, q are large distinct primes, and let G be a cyclic multiplicative group of order n. Let g be a generator of G.
If we have an efficient algorithm to factor n, we still cannot compute discrete logs in G to base g. Conversely, given an efficient algorithm to find discrete logs in G to base g, we still cannot factor n.
(I think this is the current state of affairs. Please correct me if I am wrong).
You're wrong. For starters, the multiplicative group modulo n = pq is not cyclic. It has no generators. Second, if you can do discrete log, then compute k such that (for instance) 2^k = 1 (mod n). Then k divides the order of the group, which is (p-1)(q-1). If it's a large submultiple, you can use that information to immediately factor n. If not, compute another discrete log, say, 3^k = 1 (mod n). By trying different bases you're guaranteed to get to a useful value (the order of the group divided by a small integer). You can never get the exact order of the group because it has no generators; that is, the Carmichael function of n is strictly less than phi(n). —Preceding unsigned comment added by 85.139.77.54 (talk) 00:02, 18 December 2007 (UTC)
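To make the last step concrete, here is a minimal Python sketch (an illustration of the standard textbook reduction, not part of the original comment; the function name and the toy modulus are mine) of how a multiple of the group exponent, such as the lcm of several exponents k obtained as above, splits n:

 from math import gcd
 from random import randrange
 def split_with_exponent_multiple(n, M):
     # Given n = p*q and a multiple M of lambda(n) (e.g. the lcm of several
     # values k with a^k = 1 mod n, as described above), find a factor of n.
     s, d = 0, M
     while d % 2 == 0:            # write M = 2^s * d with d odd
         s, d = s + 1, d // 2
     while True:
         a = randrange(2, n - 1)
         g = gcd(a, n)
         if g > 1:                # lucky: a already shares a factor with n
             return g
         x = pow(a, d, n)
         for _ in range(s):
             y = pow(x, 2, n)
             if y == 1 and x not in (1, n - 1):
                 return gcd(x - 1, n)   # x is a nontrivial square root of 1 mod n
             x = y
         # this base didn't help; try another one

 p, q = 101, 103                  # toy example
 n = p * q
 M = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lambda(n) = lcm(p-1, q-1)
 print(split_with_exponent_multiple(n, M))    # prints 101 or 103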
Considering remainders for common factorization tests might help a lot: the algorithm might become polynomial.
Also, possibly dynamic programming and my so-called virtual processing; in fact, keeping representative info for simulating a large number of divisions where the divisors are 1...2^m, something that might be able to be considered for virtual processing aiding dynamic programming :)
— Preceding unsigned comment added by 93.118.212.31 (talk) 19:02, 24 July 2011 (UTC)
Blum Blum Shub
Actually I don't know a lot about BBS, but I rather suspect it would be difficult to break if you don't know the modulus. So even if factorisation is easy, if you don't tell people the modulus, BBS might be safe. Does anyone know if this is true? I vaguely remember there are some proposed uses of BBS where you do tell people the modulus, so a quick factoring method would at least reduce the usefulness of BBS.
- If the discrete log problem can be solved quickly, factorization can also be done quickly; this has been proven.
- As for BBS: breaking BBS is provably as hard as factoring the modulus. If you had a huge modulus and it was secret then I guess it'd be incredibly hard to break.
— Preceding unsigned comment added by 217.35.21.249 (talk) 01:01, 23 November 2002 (UTC)
Significant fields
The article says:
- This problem is of significance in mathematics, cryptography, complexity theory, and quantum computers.
Is it really of significance "in" complexity theory and quantum computers? I would interpret this as meaning that a deeper understanding of integer factorization would lead to a deeper understanding of complexity theory (or quantum computers). I'm not sure that this is true (unlike the case for mathematics or cryptography).
— Preceding unsigned comment added by Cwitty (talk • contribs) 23:01, 21 November 2003 (UTC)
Article name
I was just wondering... Should this be at prime factorization? That term seems to be more common. -- Oliver P. 04:25, 7 Jan 2004 (UTC)
- It's rather difficult to say...this article was named "integer factorization" because there are other types of factorization (you can factorize polynomials, for example). I'd say either is OK, so "integer factorization" should stand, I think. Decrypt3 01:05, Jul 30, 2004 (UTC)
- Besides, prime factorization is certain to draw fire from Wikipedians pointing out that primes cannot be factorized.
—Herbee 18:30, 18 Feb 2005 (UTC)
- No, "prime factorization" is understood to mean decomposing integers into primes. It's a recognized term: [1]. Decrypt3 21:20, Feb 18, 2005 (UTC)
- Granted, but being wrong never seems to inhibit do-gooders from flaming or "being bold". I think integer factorization is slightly better than prime factorization because it draws less fire. —Herbee 15:26, 2005 May 23 (UTC)
- I've never seen the likes of Lenstra, Leyland, etc refer to "prime factorization" - "integer factorization" seems to be the standard term. — ciphergoth 08:54, July 16, 2005 (UTC)
External link
Is the recently added external link by 136.1.1.154 really necessary? It doesn't really aid comprehension of the topic, and to be honest, I find it to be an insult to the reader's intelligence. Decrypt3 01:09, Jul 30, 2004 (UTC)
- I agree it's not particularly useful; I've removed it. — Matt 01:19, 30 July 2004 (UTC)
Unnecessary article?
There is a separate and unnecessary article called prime factorization algorithm that should be deleted. I don't see anything worthy of merging there. Anyone care to volunteer?
- I vote to delete. All it is, is an extended example of trial division. If anything, the example and source code should be merged into trial division. Decrypt3 21:20, 18 February 2005 (UTC)
Prime factorization is in P
... not unknown as this article suggests. See prime numbers. User:144.135.3.174
- What? This is either a joke ("prime factorization" meaning factorizing primes into their prime constituents) or nonsense. — ciphergoth 08:55, July 16, 2005 (UTC)
- Primality testing is in P. This is probably what our anonymous friend meant. Prime factorization might be in P. Deco 21:41, 19 June 2006 (UTC)
Factorization record
Perhaps I'm being too pedantic, but the article says
- As of 2005, the largest number factored using general-purpose methods...
Surely this is misleading? Large numbers that are p-smooth for small p can be factored in polynomial time, so should it be made clear that it is really the record for factoring semiprimes that is mentioned? Birkett 11:22, 21 May 2005 (UTC)
- You're right. That edit was incorrect; I've reverted it. Decrypt3 16:39, May 21, 2005 (UTC)
- My original thinking was that the clause "using general methods as part of public research" covered it for all numbers, because has anyone factored a larger number using general methods as part of public research? You could easily generate such a record, of course. Perhaps we need a better qualifier than "as part of public research"? — Matt Crypto 21:57, 22 May 2005 (UTC)
- Now I'm thinking that maybe the "general-purpose methods" part should go. We use semiprimes as the gauge of factoring technology because they're the most "difficult", but in theory, the fact that they're semiprimes doesn't matter for general-purpose algorithms, by definition. I think larger numbers have been factored, but using specialized algorithms because they're not semiprimes. I think only the semiprimes are really relevant here because of their application in public-key crypto. So I think the general-purpose part is causing the problem. Decrypt3 08:09, May 23, 2005 (UTC)
- I think some very large Fermat numbers have been factored with the SNFS, much much larger than the semiprime records. Deco 21:42, 19 June 2006 (UTC)
Deleting unsourced paragraph
I'm deleting the following paragraph; I don't think factorization is known to be NP-hard, and a quick web search didn't reveal any sources for it. Please don't add it back without also providing a source. Cwitty 07:13, 14 July 2005 (UTC)
- The problem of finding large prime factors of even larger composite numbers is an example of an NP-Complete problem. There is a mathematical proof showing that an algorithm to solve this problem which does not require polynomial time to complete would be a solution to all other NP-Complete problems, such as finding the shortest path through a maze.
Prime factorization is believed to be NP-Complete, but this has not been proven. It's possible that a polynomial-time factorization algorithm exists. If so, that would mean that factorization is not NP-Complete, and thus the algorithm could not be used to solve other NP-Complete problems. However, if an algorithm is developed that can solve an NP-Complete problem in polynomial time, then that same algorithm could also factor integers in polynomial time. Decrypt3 22:34, July 15, 2005 (UTC)
- Actually, integer factorization is believed to be in NP, but not NP-complete. — ciphergoth 08:57, July 16, 2005 (UTC)
- Further, a polynomial time algorithm for factorization would not show that it is not NP-Complete; it has not been proven that there is no polynomial time algorithm for NP-Complete problems. --65.6.24.115 21:04, 23 September 2005 (UTC)
- Integer factorization is unlikely to be NP-hard. This would imply NP=coNP, a result widely believed to be false. The other replies by ciphergoth and 65.6.24.115 are correct. Deco 21:45, 19 June 2006 (UTC)
- Integer factorization is in NP; this has long been known. But this does not mean that factorization can be done in polynomial time (or that it absolutely cannot). It simply means that if it can be done in polynomial time, then all NP problems can. Conversely, if it can't, then all other NP-complete problems cannot either (but NP problems other than the NP-complete ones still might). —Preceding unsigned comment added by 85.139.77.54 (talk) 00:10, 18 December 2007 (UTC)
Integer factoring (feedback)
Factoring is reducible to Discrete Log:
I will show it for the case n = p·q. Generate a random number 'a' coprime to 'n' and a random number x < n but very close to n. Compute b = a^x mod n but don't tell x to the 'discrete log finding algorithm'. Instead ask it to find the discrete log of b (to base a) mod n. Let the value returned by the algorithm be y. There is a very high probability that y != x. If so, x - y will be a multiple of the order of 'a', which can easily be used to factor. I will not go into the details of factoring once a multiple of the order is known (since this is a well known method). If we are unlucky and x = y then we start over.
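A rough Python sketch of the query step just described (the oracle dlog(a, b, n) is purely hypothetical, standing in for the assumed 'discrete log finding algorithm'; the function name is mine):

 from math import gcd
 from random import randrange
 def order_multiple_from_dlog_oracle(n, dlog):
     # Returns a base a and a nonzero multiple of the order of a mod n,
     # using a hypothetical oracle dlog(a, b, n) that finds y with a^y = b (mod n).
     while True:
         a = randrange(2, n - 1)
         if gcd(a, n) != 1:
             continue              # (in this lucky case gcd(a, n) already factors n)
         x = randrange(n // 2, n)  # a random exponent below n ("very close to n" above)
         b = pow(a, x, n)
         y = dlog(a, b, n)         # the oracle never sees x, only b
         if y != x:
             return a, abs(x - y)  # a^(x-y) = 1 (mod n), so |x-y| is a multiple of ord(a)

The returned multiple of the order of a can then be fed into the well-known order-to-factoring step (a sketch of that step appears in the Diffie-Hellman section near the top of this page).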
- ISTR that you are correct, factoring can be reduced to the discrete log problem in composites. However, cryptography mostly relies on the discrete log in Z*_p with p prime, or on other groups like elliptic curves. There's no reduction between these problems and factoring. — ciphergoth 10:45, 28 September 2005 (UTC)
This reduction seems hard.... I have this approach. Given n = p·q, try to find a number r such that s = 2nr + 1 is prime. Thus s = 2pqr + 1, and we know r. Generate a, x <- Z_s and compute b = a^x mod s. Let y = DL(a, b, s). If the order of 'a' is a multiple of only one of p or q then we can factor n easily. I am not sure about the probability of finding a with the specific properties, but it still appears that the problem is easy.
Another comment: NP-hard = (1) every NP problem reducible to it but (2) the problem itself is not known to be in NP. Factoring is in NP but (1) is possibly not true. So it is NOT NP-hard by either arguments
- (2) is not a precondition for being NP-hard.
- A problem is NP-complete iff it is in NP and NP-hard. Factoring is not known to be NP-complete, and TBH it's unlikely to be. However, this doesn't tell us whether or not it's in P. And that's by no means all we need to know to be sure of the security of cryptosystems based around it - for one thing, all of these things refer to worst-case performance, and it's average-case that matters for crypto.
- Not just average-case, but it's entirely possible that P ≠ RP = NP, so probabilistic algorithms might solve factoring efficiently while it still lies outside P. But I digress. Deco 22:17, 1 February 2006 (UTC)
factor using ti-84
I have a program on my calc that can factor any number and give the sum of all factors. It only gives what goes into the number evenly. If you plug in 28 it gives you 1,28 2,14 4,7 and then the sum (28). It works for numbers up to 999,999,999, for that is the maximum number of digits on it. I will post it soon. K-unit 20:56, 1 February 2006 (UTC)
- here it is (one statement per line; each line is ended with ENTER on the calculator):
 ClrAllLists
 ClrHome
 {1}→L1
 Input "ENTER NUMBER ",X
 If X≥1E10
 Goto 1
 1→Y
 Disp "fast(1) or "
 Input "slow(2)?",G
 Lbl 2
 X/Y→B
 If B<√(X)
 Goto 1
 If B≠int(B)
 Goto 5
 Disp Y,B
 If B=X
 Goto 9
 B+Y+L1→L1
 Lbl 9
 If G=2
 Pause
 Lbl 5
 If X/2≠int(X/2)
 Goto 19
 1+Y→Y
 Goto 2
 Lbl 19
 2+Y→Y
 Goto 2
 Lbl 1
 Disp "end",X
 Disp "sum of factors",L1
 Pause
 ClrHome
 DelVar X
 getDtStr(1)
That's all K-unit 22:13, 1 February 2006 (UTC)
- I'd suggest we not place this in the article, as numbers of this size can be factored rapidly by even the slowest devices by trial division. More interesting are the algorithms used by CAS packages such as Mathematica and Maple and by the TI-89, which has a variant of the Derive symbol engine built into it. Mathematica's in particular are documented: "FactorInteger switches between trial division, Pollard p−1, Pollard rho and quadratic sieve algorithms."[2] Mathematica also has a factoring function based on ECM. Deco 22:15, 1 February 2006 (UTC)
I agree. But it is useful for anyone who has a ti-83 or 84 and wants this program.K-unit 18:09, 7 February 2006 (UTC)
- True, but Wikipedia is not a source code repository. A more appropriate forum might be ticalc.org. Deco 22:32, 7 February 2006 (UTC)
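For comparison with the calculator program above, here is a minimal Python sketch of Pollard's rho, one of the methods Deco mentions (a generic textbook formulation, not Mathematica's actual implementation; the function name is mine):

 from math import gcd
 from random import randrange
 def pollard_rho(n):
     # Returns a nontrivial factor of an odd composite n (Floyd cycle-finding).
     if n % 2 == 0:
         return 2
     while True:
         c = randrange(1, n)
         f = lambda v: (v * v + c) % n
         x = y = 2
         d = 1
         while d == 1:
             x = f(x)              # tortoise
             y = f(f(y))           # hare
             d = gcd(abs(x - y), n)
         if d != n:                # d == n means this c failed; retry with a new constant
             return d

 print(pollard_rho(10403))         # 10403 = 101 * 103, so this prints 101 or 103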
Correction needed?
The Integer factorization article shows a time complexity for GNFS different from the complexity shown in the GNFS article itself. — Preceding unsigned comment added by 65.95.105.86 (talk) 01:45, 20 March 2006 (UTC)
Correction of runtime
See GNFS wiki for accurate runtime — Preceding unsigned comment added by Qc (talk • contribs) 14:14, 16 April 2006 (UTC)
Merge
A merge from Prime factor to this article is needed. It seems to be similar enough to be in the same article. Any objections before I merge? --Mets501talk • contribs 22:08, 25 April 2006 (UTC)
- Yes. That article is about the fact that numbers have prime factors. This article is about the computational problem of integer factorization. I feel strongly that they should not be merged. — ciphergoth 09:32, 26 April 2006 (UTC)
- It should probably be merged into Fundamental Theorem of Arithmetic, or the first section of Prime Number, instead. Walt 14:03, 27 April 2006 (UTC)
Trivial example
The factors at the beginning of the article are deliberately chosen to be trivial, so that the reader may verify that they are factors by mental arithmetic and thus understand the term better. If a non-trivial example is to be given, it should be right at the other end of the scale, like a recent factoring record. — ciphergoth 21:58, 28 April 2006 (UTC)
Hardness/difficulty
(moved from my talk page - — ciphergoth)
Why do you insist on using the word "hardness" at integer factorization? Difficulty sounds much better and sounds more formal. —Mets501talk 22:18, 27 May 2006 (UTC)
- I think hardness is the correct technical term in this context. You'd refer to "NP-hardness", not "NP-difficulty". — ciphergoth 12:07, 28 May 2006 (UTC)
- I agree. Hardness evokes a formal sense of difficulty, as opposed to an informal "argh I can't do it" sense. There are plenty of proofs that factoring is equivalent in difficulty to tough problems that nobody can solve, much like NP-complete problems (but to a lesser degree). Deco 12:52, 28 May 2006 (UTC)
- OK, no problem. We'll keep it at hardness. —Mets501talk 15:54, 28 May 2006 (UTC)
- I do not know if hardness is the technical term or not, but it obviously stands out as grammatically awkward to the non-technical reader. Since encyclopedias are not designed to appeal to experts with a thorough background in technical jargon, the term should be highlighted as such. I'm not saying we should replace the technically accepted word for a merely more familiar one, but we should append an appositive phrase, i.e. "hardness, a technical term used in the mathematical and computing community to describe computational difficulty, ..." Further, if hardness is such a term, it should be in quotes or italics to denote that, and we should add a hyperlink to both a Wiktionary entry and a Wikipedia page with the same heading. [Hardness (the mathematical term describing computational difficulty) —Preceding unsigned comment added by 208.72.65.246 (talk) 02:25, 25 July 2008 (UTC)
Section related to music to be deleted, and title of article slightly too general for intended subject
The intended subject of this article is the integer factorization problem, particularly the factorization of large numbers used in cryptography in RSA. The more general subject of integer factorization, including uniqueness of factorization and non-mathematical applications of integer factorization (usually of small integers), such as to music as in a recently added section on this topic, seems to go beyond the intended scope. Applications to music are inappropriate to an article about the hardness of factoring, and should be deleted.
Perhaps the article can be retitled to integer factorization problem, or integer factorization algorithms, so that the subject of the article is more precisely defined. A more general article can be kept under the title integer factorization or some variation. Otherwise, one can delete or revert more general discussions that get added to this article, or put them somewhere as needed. DRLB 17:12, 19 June 2006 (UTC)
- Just go ahead and revert it. I was going to when I had a chance to write a note to the editor. By the way, it's his own technique he's talking about, but I was going to write a nice note anyway. Walt 17:20, 19 June 2006 (UTC)
- I was suspicious that the cited source may have been written by the contributor, but noticed it's several years old. In any case I agree that this is the wrong article for it; if we were to keep it it would probably belong in some music scale related article and maybe link here. Deco 21:40, 19 June 2006 (UTC)
What is known
It is unverifiable to say "no polynomial time algorithm is known". How can we tell? In particular, if any security organisation found such an algorithm, they would have an interest in keeping this knowledge secret. I'm planning to change "known" to "widely known" or "published" or some such wording to allow for the case that it is known but secret. Stephen B Streater 22:32, 30 June 2006 (UTC)
- Fair enough. Deco 22:34, 30 June 2006 (UTC)
- OK - I've done a few. Stephen B Streater 22:49, 30 June 2006 (UTC)
- You're being pedantic. "X is not known" doesn't mean "all people are ignorant of X", which is unverifiable, as you say. "Not known" means "not known to the research community". If you start editing this here, you'll have to do it everywhere. — Preceding unsigned comment added by Doradus (talk • contribs) 01:46, 1 October 2007 (UTC)
On section "Integer factorization in polynomial time"
I think the text in this section is far too sensational to be appropriate. Simply having a factoring algorithm that runs in polynomial time does not (necessarily) mean that "factoring almost as easy as primality testing". In particular, big-O notation hides constants which may turn out to be very large, and a polynomial runtime could still be large enough that for numbers of interest (eg, 1024 or 2048 bit RSA keys) the runtime is infeasible (eg, if factoring an integer n can be done in O(n^1000) time, that's polynomial, but that doesn't mean it's fast, or practical, or least of all "as easy as primality testing" - it just means that there exists a large enough number that this method would be faster than the GNFS; it doesn't give us any clue as to how big that number would have to be).
Not going to edit this myself since it seems making changes without an account is grounds for auto-revert around these parts...
Uhm - a polynomial time algorithm for factoring has been known since 2002 when it was discovered by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena [3]. The page should be updated to reflect this minor development :P. Now before people start running around saying "RSA is broken" let me emphasize that the algorithm is more than O(n^12) [O(n^6) if you want to make some assumptions based on conjectures] and has LARGE constant terms. Basically I threw together the algorithm as described in the paper and attempted to factor a 17-bit number (there is a minimum size for which it works). I interrupted the program after a couple of minutes, and it was still computing the first bit in the first iteration. Even speeding up the algorithm by a factor of a couple million will not produce results better than the number field sieve for any numbers we normally use for RSA (not to mention the algorithm will become a memory hog for large numbers, due to the last stage of the algorithm computing in (mod X^r − 1, n) - i.e. there are r [r is proportional to log(n)] n-bit terms in the polynomial you are working with). Simply put, the algorithm is interesting due to its polynomial time nature, but that alone does not make it practical - yet.
- The algorithm you refer to is for primality testing, not factoring. This is a meaningful difference; no one has any idea how to construct a polynomial-time factoring algorithm, and many doubt it is possible. Ntsimp 16:53, 27 December 2006 (UTC)
- Yes, there are primality testing algorithms that scale easily to numbers with many thousands of decimal digits, like the dominant Miller-Rabin primality test. Dcoetzee 22:05, 24 July 2007 (UTC)
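For reference, a minimal Python sketch of the Miller-Rabin test mentioned above (the usual probabilistic formulation; the function name and the number of rounds are arbitrary choices, not taken from any particular source):

 from random import randrange
 def is_probable_prime(n, rounds=20):
     # Miller-Rabin: returns False if n is certainly composite, True if probably prime.
     if n < 2:
         return False
     for p in (2, 3, 5, 7, 11, 13):
         if n % p == 0:
             return n == p
     s, d = 0, n - 1
     while d % 2 == 0:             # write n - 1 = 2^s * d with d odd
         s, d = s + 1, d // 2
     for _ in range(rounds):
         a = randrange(2, n - 1)
         x = pow(a, d, n)
         if x in (1, n - 1):
             continue
         for _ in range(s - 1):
             x = pow(x, 2, n)
             if x == n - 1:
                 break
         else:
             return False          # a witnesses that n is composite
     return True

 print(is_probable_prime(2**127 - 1))   # True (2^127 - 1 is a Mersenne prime)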
One as an empty product of primes
(I also posted something similar to this at Talk:Fundamental theorem of arithmetic.) Several sources I've seen only state the Fundamental Theorem of Arithmetic for integers strictly greater than 1. Of course, I understand the argument about 1 being an empty product of primes, and I do admit that this convention makes the statement of the Fundamental Theorem of Arithmetic a lot cleaner. Nevertheless, I think it's worthwhile at least to mention that there are two prevalent conventions. We can do that here, or we can refer readers to the Fundamental theorem of arithmetic article if the change I propose is made there. I don't mind making the changes myself, but I wanted some kind of discussion first. For some odd reason, these small trivialities produce huge controversy and I don't want to step on anyone's toes. VectorPosse 06:36, 3 March 2007 (UTC)
- Frankly, that seems out of context here - better to avoid mentioning it if possible and discuss the two conventions briefly at fundamental theorem of arithmetic and/or multiplication. Make sure to link the article empty product. Dcoetzee 22:03, 24 July 2007 (UTC)
- I agree that the question of the proper statement of the theorem is an unnecessary distraction in this context. I've shortened what was there by simply stating it for all positive integers along with a parenthetical remark about the empty product convention. Vaughan Pratt (talk) 22:02, 19 March 2016 (UTC)
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section.
The result was merge into Integer factorization. -- Schneau 21:55, 24 July 2007 (UTC)
I've placed a merger tag at Prime factorization algorithm, to gauge consensus as to whether that article should be merged into this one. A majority of that article is code samples (mostly of Trial division), and the remainder seems to be redundant to this more fleshed-out article. A redirect would be better pointing here rather than to something like Trial division or one of the other special cases/methods of factorization. Thoughts? --Kinu t/c 16:03, 9 April 2007 (UTC)
- I suggest simply redirecting that article to here. It seems to consist only of a vague description of trial division and a lot of code snippets, none of which seem worth saving or merging. —David Eppstein 16:19, 9 April 2007 (UTC)
- Agreed. There is very little of value in the Prime factorization algorithm article, and the topic is certainly redundant. --Macrakis 16:54, 9 April 2007 (UTC)
- Also agreed. VectorPosse 18:18, 9 April 2007 (UTC)
- Agree with that page's redirect to here. Baccyak4H (Yak!) 18:24, 9 April 2007 (UTC)
- Agree with simply redirecting that article to here. Schneau 20:35, 3 July 2007 (UTC)
Precise formulation of the decision problem
I came here to find out what exactly is the prime factorization decision problem, and it's not stated here. Can someone knowledgeable add this? --Doradus 01:47, 1 October 2007 (UTC)
- The precise formulation of the decision problem is simply: given an integer n and an integer k, does n have a divisor less than k? This is natural because it enables you to locate a factor in a logarithmic number of queries using binary search. I'll see where to put this. Dcoetzee 22:04, 1 October 2007 (UTC)
- To nitpick... I guess that's logarithmic in the value of the integer, which actually makes it linear in the size of the integer. Is that right? --Doradus 18:33, 2 October 2007 (UTC)
- That's right. Dcoetzee 18:51, 2 October 2007 (UTC)
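To make the point above concrete, a small Python sketch of the binary search (the decision oracle here is a stand-in implemented by trial division, purely for illustration; both function names are mine):

 def has_divisor_below(n, k):
     # Decision version: does n have a divisor d with 1 < d < k?
     return any(n % d == 0 for d in range(2, k))

 def smallest_factor(n):
     # Binary search over the decision oracle: O(log n) queries, i.e. a number
     # of queries linear in the bit length of n, as noted above.
     lo, hi = 2, n                 # the smallest divisor > 1 lies in [lo, hi]
     while lo < hi:
         mid = (lo + hi) // 2
         if has_divisor_below(n, mid + 1):   # is there a divisor <= mid?
             hi = mid
         else:
             lo = mid + 1
     return lo                     # equals n itself when n is prime

 print(smallest_factor(91))        # 7
 print(smallest_factor(97))        # 97 (prime)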
what would happen if someone actually found an efficient way to factor these?
what would happen? are there alternatives? —Preceding unsigned comment added by 193.136.166.125 (talk) 13:06, 14 March 2008 (UTC)
- That's a good question, and there are cryptosystems not based on the hardness of factoring, but which would become dominant is pure speculation. Might be sources discussing this. Dcoetzee 03:31, 28 April 2008 (UTC)
Issues with formula
[4]. It doesn't give a link to Big O, it doesn't explain that exp is raising to e, and... I guess that's it. --68.161.148.207 (talk) 07:27, 27 April 2008 (UTC)
- That second part is standard math notation, and big O notation is linked earlier in the article, but another link wouldn't hurt. Dcoetzee 04:02, 28 April 2008 (UTC)
Need Help with Citation
Hi All,
The original article used RSA-200 as an example with no reference. I added RSA-640 and have the reference (http://www.rsa.com/rsalabs/node.asp?id=2964). Unfortunately, I do not know how to add it. As wiki matures, the command set becomes more obscure.
Sorry, noloader 151.196.13.207 (talk) —Preceding undated comment was added at 04:43, 12 February 2009 (UTC).
- Hi, mentioning it on the talk page, as you did, is good as well. :) The way to add references is to put it within <ref></ref> tags, although simply putting the link within square brackets should produce something passable as well.[5] More details are at Wikipedia:Citing sources, although I must admit the page might have been hard to find. Shreevatsa (talk) 05:02, 12 February 2009 (UTC)
Arecibo message.
I don't understand "It can also be used to save storage, since any multiset of prime numbers can be stored without loss of information as its product; this was exploited, for example, by the Arecibo message." I see how prime factorization can save storage, but I understood that Arecibo used a product of two primes so that extraterrestrial life forms would know to interpret it as a two-dimensional array, rather than to save storage? Billlion (talk) I can't understand it either —Preceding unsigned comment added by 162.40.202.93 (talk) 00:44, 16 September 2009 (UTC)
- That certainly sounds dubious. Multiplying primes doesn't really save storage either.--Roentgenium111 (talk) 15:03, 16 November 2009 (UTC)
- The number of bits needed to express a number n is about the same as the number of bits needed to express the prime factors (including multiplicity) of n. That is not where the storage is saved. The Arecibo message was meant to be a two dimensional picture, but for obvious reasons it was transmitted (and will(?) be received) in a one dimensional manner. If a message containing a non-semiprime number of "pixels" were used, then the length and width of the two dimensional image would be ambiguous. To resolve the ambiguity, one could also send the correct dimensions (i.e. x and y such that x times y equals n). There are two problems with this approach. First, this will increase the size of the message, which is the point being made in the article. Second, there is no clear way to indicate that the transmitted x and y are not part of the "picture" with which they were transmitted. I will remove the challenge templates if no one responds in about a month. Bender2k14 (talk) 15:24, 2 December 2009 (UTC)
Vacuously, you cannot represent an element in the space of n-bit integers with fewer than n-bits. I don't know how an idea like factoring saves space made it into the article. I have removed the offending sentence. Skippydo (talk) 17:01, 2 December 2009 (UTC)
- I understand what they were trying to say. A pair of integers p, q can't be transmitted in just log p + log q bits, since you need to know where one terminates and the next begins. A typical space-efficient integer encoding (Elias delta coding) is to first encode the number of bits in the length in unary, then the number of bits in binary, then the bits themselves. This requires 1 + 2 lg lg p + lg p bits for an integer p (omitting ceiling/floor functions). Hence storing the product of two primes, pq, requires 1 + 2 lg lg pq + lg pq bits, or 1 + 2 lg (lg p + lg q) + lg p + lg q bits. The difference, 2 [lg lg p + lg lg q - lg (lg p + lg q)] + 1, is positive for all p, q ≥ 2.
- For example, if I take p=104729, q=128189, each of these primes takes 28 bits to represent (6 to give, in unary, the number of bits in the length; 5 for the length 17 in binary; and 17 for the value itself). The product requires only 47 bits to represent, a savings of 9 bits.
- This is not surprising, considering that we are removing information by constraining p, q to be prime and by turning an ordered pair into an unordered pair. In fact I've seen a proof that used a similar idea to show a weak form of the prime number theorem in a book on Kolmogorov complexity. That doesn't necessarily mean it should be in this article, but yes, representing the product of two primes takes less space than representing the two primes separately, using typical universal codes for integers. Dcoetzee 05:01, 25 July 2011 (UTC)
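The bit counts above can be checked with a few lines of Python (a sketch assuming exactly the coding scheme described in this thread; the function name is mine):

 def code_len(n):
     # Length in bits of the self-delimiting code described above:
     # unary(bits of the bit length) + binary(bit length) + binary(n),
     # i.e. roughly 1 + 2*lg lg n + lg n.
     length = n.bit_length()       # e.g. 17 for p = 104729
     return 1 + 2 * length.bit_length() + length

 p, q = 104729, 128189
 print(code_len(p), code_len(q))                       # 28 28
 print(code_len(p * q))                                # 47
 print(code_len(p) + code_len(q) - code_len(p * q))    # 9 bits saved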
general and special
Some sources call general-purpose and special-purpose algorithms category 1 and category 2, but I don't remember which is which, or have a reference at hand. Bubba73 You talkin' to me? 03:44, 20 March 2012 (UTC)
- I gave Bressoud and Wagon as a reference, which was reverted. Here are two more:
- Riesel's book page 209
- http://www.mersennewiki.org/index.php/Factorization Bubba73 You talkin' to me? 03:24, 21 March 2012 (UTC)
- I added it back. I looked up the sources and they check out. I've certainly never heard these terms before, but I don't believe these authors invented them. I also found a 1985 paper by Carl Pomerance referring to general-purpose algorithms as Kraitchik family [6] - in fact I think this paper coined the term (although it appears to be referring to just congruence of squares methods, at the present time these are more or less equivalent to general-purpose methods). Dcoetzee 04:21, 21 March 2012 (UTC)
- OK. I thought it would be good to have some discussion of the categories in the section before the subsections of the different kinds. Bubba73 You talkin' to me? 16:38, 21 March 2012 (UTC)
Perfect Squares
It is relatively trivial to re-arrange the well-known (x+y)(x-y) formula to demonstrate that for any odd whole number 'a', there are two perfect squares 'b' and 'c' such that a + b = c. Further, if you know 'b' and 'c', from these you can work out two factors of 'a' from the same formula (if 'a' is prime, obviously the two factors would be the number one and the number 'a' itself)
Are there any factorization algorithms which exploit this fact, and look for 'b' and 'c' to work out the factors indirectly?
I've looked down the list of methods here, but couldn't see anything obvious, though some of the maths goes over my head!
CG 80.229.220.254 (talk) 20:42, 25 May 2012 (UTC)
- Fermat's factorization method may be the simplest method exploiting this idea. Continued fraction factorization, the quadratic sieve and the number field sieve are more advanced algorithms that also try to represent n or a multiple of n as the difference of two squares. 178.195.225.28 (talk) 11:06, 26 May 2012 (UTC)
- In short, it is very unlikely (i.e. impossible in practice) that the integer you wish to factorize can be represented as the difference of two squares. I doubt the attack is efficient anyway, but I don't have a convincing argument handy. Skippydo (talk) 14:08, 26 May 2012 (UTC)
The basic Fermat method consists of subtracting "a" from various integer squares "c" until you find one where the remainder "b" is a square. However, is there a documented variation where you keep adding various integer squares "b" to "a" until you get a sum "c" that is a square? i.e. a + (1+3+5+7+9+11...) = c (where "c" is an integer square). Would this be quicker than the basic method? I have a worked example of this algorithm written in Rexx - where is a suitable place to post it? CG 80.229.220.254 (talk) 21:23, 28 May 2012 (UTC)
- That would be a slower version of Fermat's factorization method, which is already too slow to be used in practice. CRGreathouse (t | c) 03:46, 29 May 2012 (UTC)
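For reference, a minimal Python sketch of the basic Fermat method under discussion (a generic textbook version; assumes n is odd, and returns 1 and n when n is prime; the function name is mine):

 from math import isqrt
 def fermat_factor(n):
     # Search for c with c^2 - n = b^2; then n = (c - b)(c + b). Assumes n odd.
     c = isqrt(n)
     if c * c < n:
         c += 1                    # start at ceil(sqrt(n))
     while True:
         b2 = c * c - n
         b = isqrt(b2)
         if b * b == b2:
             return c - b, c + b
         c += 1

 print(fermat_factor(5959))        # 5959 = 59 * 101, so this prints (59, 101)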
External links quality
I do not see what the "Primes is in P" has to do with integer factorisation. Also, it seems illogical to have links to the Qsieve and MIRACL implementations without having direct links to state-of-the-art implementations such as MSIEVE (or YAFU), etc. — Preceding unsigned comment added by 86.30.148.199 (talk) 17:03, 26 December 2012 (UTC)
Technical flag
Why the technical flag, inserted May 25 2013? This article is about a mathematical concept and seems to define it well, with terms of less technicality, then proceed to describe various aspects of the concept and related concepts, some of which are quite technical. But why the technical flag when the concept itself is defined clearly in simpler terms, so that someone familiar with those simpler terms should have little difficulty understanding the subject of this article? There are many mathematical concepts of a level of technicality that will not be understood by someone unfamiliar with several prerequisites. Is this flag suggesting that we can and should write all articles in such a way that a two-year-old can understand them without any previous or additional research? — Anita5192 (talk) 04:44, 27 May 2013 (UTC)
Problems equivalent to integer factorization
I suggest adding a section containing a list of problems known to be equivalent to the problem of integer factorization. Such problems would include the breaking of the Rabin cryptosystem and of the Schmidt-Samoa cryptosystem. Knowledgeable people, please feel free to add. — Preceding unsigned comment added by 109.100.71.130 (talk) 16:47, 19 April 2014 (UTC)
The limit of Integer factorization
Now, numbers of up to how many bits can be factorized? 10^495 − 1 has a 545-bit composite factor which doesn't have any known factors (see Factors of Repunit numbers), so the limit must be less than 545 bits (that is, at most 544), but the limit must be at least 200 bits (see Number factorizer), so I think the limit is 384 bits, but is that right? If not, what is the limit in bits? Is it 256? 320? Or 512?
However, a Mersenne composite number, 2^1061 − 1, which has 1061 bits (see Factordb), has been factored; why can it be factored?
— Preceding unsigned comment added by 61.219.149.55 (talk) 06:30, 6 May 2014 (UTC)
- There is no fixed limit and the algorithms used don't stop working at some size. They just require more computing power. The difficulty of factoring a number depends not only on the size of the number but also on its form (a number of special form, such as b^n ± 1, is easier than most others), and the size and sometimes other properties of its prime factors. And whether a number with a given difficulty is factored depends on how much computer time people are willing to spend on it, and sometimes on whether they get lucky. Integer factorization records#Numbers of a special form says one of the phases for 2^1061 − 1 took about 300 CPU-years. RSA-768 on the same page says "They used the equivalent of almost 2000 years of computing on a single core 2.2 GHz AMD Opteron." Factoring is important in decryption, so there is also the possibility that some organizations keep their capabilities secret. The NSA (see NSA#Computing) has a lot of computing power and we don't know whether they have better algorithms than what is publicly known. PrimeHunter (talk) 23:57, 8 May 2014 (UTC)
But why can't the 545-bit number (884150927133......083700084281), which is a factor of 10^495-1, be factored? — Preceding unsigned comment added by 140.113.136.218 (talk) 04:33, 12 May 2014 (UTC)
- As I said: "whether a number with a given difficulty is factored depends on how much computer time people are willing to spend on it". Apparently nobody has yet spent the time to factor this. It's non-trivial but certainly can be done. There are many people working on factoring b^n ± 1 for small b including 10, and they will surely get to this number at some time. If you are interested in factoring discussions then check out http://mersenneforum.org/. PrimeHunter (talk) 11:35, 12 May 2014 (UTC)
- Also, even a tiny amount of research [7] would have shown that the number is mostly factored, with only a 164-digit cofactor. In any case take this discussion elsewhere, the purpose of the Talk pages is to build an encyclopedia not discuss ancillary topics. - CRGreathouse (t | c) 23:33, 14 May 2014 (UTC)
Integer Factoring is to Multiply as Complex Number Factoring is to Plus
Complex Number Factoring is NPComplete
Subset Sum (given n integers, answer does any nonempty subset sum to 0) of n integers can be calculated as multiplying n complex numbers (each of magnitude 1) which are all scaled by the sum of all the integers so it can wrap around less than a half turn either direction in total.
The multiplication of 2 complex numbers whose magnitude is 1 adds their angles around the complex unit circle. A negative angle is the same as dividing by the positively angled complex number. The angle is normally ambiguous since it repeats on intervals of 2*pi, but since it's scaled so it can't wrap around in total, each angle it can reach uniquely represents an integer.
Complex Number Factoring is NPComplete, based on that alone. Here's some more info...
If you multiply all the integers by 2, offset each of them by half their value, and move that value into the target sum of the subset to find, and do the same for the new target sum, then subset sum can be represented as a choice of negative or positive of the same value for each integer translated that way. On the complex unit circle, it looks like rotating clockwise vs counterclockwise for each integer that's included or not. To solve Subset Sum you would factor 1.
It's not a root of unity. It's factors of unity limited to a set of possible factors.
It can also be represented as all being positive angled magnitude 1 complex numbers and looking for a Subset Multiply that equals that complex number.
Integer Factoring is to Multiply as Complex Number Factoring is to Plus. — Preceding unsigned comment added by 66.169.5.181 (talk) 19:53, 15 December 2014 (UTC)
- a) This is wrong.
- b) It is original research, and therefore not useful for WP.
- It's not even wrong. Absent any notion of unique factorization of complex numbers of norm 1 (i.e. on the unit circle) it's meaningless. The Gaussian integers do form a unique factorization domain but only four of them lie on the unit circle and all Gaussian primes lie strictly outside the unit circle. Vaughan Pratt (talk) 22:41, 19 March 2016 (UTC)
- Are you quoting Pauli, Not even wrong? :-) Bubba73 You talkin' to me? 23:46, 19 March 2016 (UTC)
- Yes, if stealing can be called quoting. ;) Vaughan Pratt (talk) 07:35, 2 May 2016 (UTC)
"It factored the number 15"
Does anyone else laugh when they get to that line? 68.2.235.85 (talk) —Preceding undated comment added 00:27, 6 June 2016 (UTC)
I would argue that factoring 15 did nothing to demonstrate their method. In binary 15 is 1111. The number 15 is 3×5, which in binary is computed 11×101, and is equivalent to writing 101 + 1010. In factoring 15, there was no untangling of the superposition of bits (or qubits) whatsoever!!!!! The bit pattern 101 just falls straight out of the pattern 1111. The claims made for Shor's algorithm are absurd at best. Runestone1 (talk) 01:09, 8 September 2016 (UTC)