Talk:Primality test
Naïve Methods and 4
When testing whether n is divisible by m, using values of m from 2 to n - 1, when n is 4, the method correctly determines that 4 is not prime, because it is divisible by 2. However, when using values of m from 2 to the square root of n, when n is 4, an error can occur. Since 2 is the square root of 4, the test range is 2 to 2, and, if the range is exclusive, like a for-loop "for (n = 2; n < sqrt(m); n++)", the whole test is skipped, and 4 is incorrectly determined to be a prime number.
68.13.47.75 (talk) 16:09, 10 January 2008 (UTC)
"The Miller-Rabin primality test and Solovay-Strassen primality test are more sophisticated variants which detect all composites"
- If the methods detect all composites, why are they categorized as probabilistic rather than deterministic? Joestynes 08:31, 2 Mar 2005 (UTC)
- These two categories are unrelated. An algorithm is probabilistic if it makes random "coin tosses" during the computation, and produces the correct answer with high probability. An algorithm is approximative if it produces the correct answer only for (an overwhelming) majority of inputs. You can have a deterministic approximative test for primality (e.g., base-2 Fermat test), and you can have a probabilistic exact test (e.g., Rabin-Miller). The "probability" in "probabilistic test" refers to the internal coin tosses, not to some probability distribution on the inputs. -- EJ 13:49, 2 Mar 2005 (UTC)
- Maybe the headings should be changed to "Approximative" and "Deterministic", rather than "Probabilistic" and "Fast Deterministic", to avoid creating a false dichotomy. Joestynes 01:28, 3 Mar 2005 (UTC)
- This would be wrong. As I explained above, most of the probabilistic tests mentioned in the article are NOT approximative, and approximative vs. deterministic is not a dichotomy at all. -- EJ 12:24, 3 Mar 2005 (UTC)
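(For readers skimming this thread: a minimal Python sketch, added for illustration and not taken from the article, of the base-2 Fermat test EJ mentions. It is deterministic, since it tosses no coins, yet only approximative, because some composites such as 341 = 11 × 31 pass it.)

# Base-2 Fermat test: declares n a probable prime when 2^(n-1) ≡ 1 (mod n).
def fermat_base2(n):
    if n < 3 or n % 2 == 0:
        return n == 2
    return pow(2, n - 1, n) == 1

print(fermat_base2(97))    # True: 97 is prime
print(fermat_base2(9))     # False: 9 is correctly rejected
print(fermat_base2(341))   # True, although 341 = 11 * 31 -- a base-2 Fermat pseudoprime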
"(...) if the range is exclusive, like a for-loop "for (n = 2; n < sqrt(m); n++)", the whole test is skipped, and 4 is incorrectly determined to be a prime number." - that's precisely the problem. The condition "n < sqrt(m)" is wrong, the square root of the tested number _must be included_ - otherwise tested numbers which are squares of prime numbers will always be incorrectly identified as primes (which they obviously aren't, being perfect squares). The test condition for this algorithm must be "n <= sqrt(m)". 89.66.239.49 talk) 20:33, 15 June 2015 (UTC)
The text states:
"A simple, but very inefficient primality test uses Wilson's theorem, which states that p is prime if and only if:
(p − 1)! ≡ −1 (mod p)
Although this method requires p modular multiplications, rendering it impractical, theorems about primes and modular residues form the basis of many more practical methods." (11/6/09)
Apart from the bottom sentence being a fragment, I don't think the test requires p modular multiplications. Take 7 for an example. 7-1 = 6, and 6! = 6*5*4*3*2. 6! seems to have 4 modular multiplications: (6*5) being 1, (30*4) being 2, (120*3) being 3, and (360*2) being 4.
—Preceding unsigned comment added by 76.16.199.113 (talk) 23:06, 11 June 2009 (UTC)
- Okay, actually it's p−2, but this makes no substantive difference. I'll just make it say "about p". Dcoetzee 23:16, 11 June 2009 (UTC)
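(A hedged illustration, mine rather than the article's: a short Python sketch of the Wilson test discussed above. The loop performs n − 2 modular multiplications, which is exactly what makes the method impractical for large n.)

def wilson_is_prime(n):
    # Wilson's theorem: n > 1 is prime iff (n-1)! ≡ -1 (mod n).
    if n < 2:
        return False
    fact = 1
    for k in range(2, n):        # n - 2 modular multiplications
        fact = (fact * k) % n
    return fact == n - 1         # -1 ≡ n - 1 (mod n)

print([n for n in range(2, 30) if wilson_is_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]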
Computer proof vs pen-and-paper proof
The article says the following:
- This is orders of magnitude less than the probability that the computation will be corrupted by hardware errors, software bugs, mistyping the input, some user's death due to heart attack during the computation, or an airplane crashing on your computer lab. All of these affect deterministic tests as well... If you want to know with certainty that a number is prime, you cannot use a computer at all; you have to find a mathematical proof of its primality, short and simple enough that you can write it down on a piece of paper and verify by hand that it is correct.
I disagree with this, or at least, I think it's expressing a POV. Surely a human prover can also make an error in his proof, and in his subsequent verification? I realise that this is a debatable (and philosophical) matter, but I think that the above, particularly the bold phrase, needs to be reworded. Personally, for various particular combinations of humans, computers and numbers, I would trust a computer proof of primality over a human proof ;-) — Matt Crypto 22:08, 22 May 2005 (UTC)
- I have removed the following section (between the rules). Even though it is useful information (pace NPOV concerns) and even though my queries above may have prompted its addition, it doesn't really belong on the primality testing page; it belongs on Randomized algorithm, Approximation algorithm and/or Deterministic algorithm. Primality tests are particular instances of these, so appropriate links and categorization should suffice. Of course, each listed method should state both whether it's deterministic or randomised and whether it's approximation or definite. I'm not qualified to do that. Joestynes 08:18, 3 Jun 2005 (UTC)
Popular misconceptions about randomized algorithms
- A randomized test is good for most numbers, but a small number of composites will be declared prime. — this is not the case. The test is only required to succeed with certain probability, but the probability is not taken over the inputs, but over additional auxiliary numbers used in the algorithm. The test must be correct with high probability for every input number n. Having said that, it may be also worthwhile to consider tests which are incorrect on a certain fraction of inputs; such algorithms are called heuristic or approximation algorithms. Notice that heuristic algorithms may be deterministic as well as randomized; it's a different kind of classification.
- A randomized test does not determine with certainty whether a number is prime or not. Deterministic tests are no better in this respect. If you make, say, 25 iterations of the Miller-Rabin test, the probability that the algorithm as such gives a wrong answer is smaller than 10^−15. This is orders of magnitude less than the probability that the computation will be corrupted by hardware errors, software bugs, mistyping the input, some user's death due to heart attack during the computation, or an airplane crashing on your computer lab. All of these affect deterministic tests as well (in fact, for general numbers, the deterministic test will take far longer than a reasonable number of randomized test iterations, and hence be more exposed to hardware errors). If you want to know with certainty that a number is prime, you cannot use a computer at all; you have to find a mathematical proof of its primality, short and simple enough that you can write it down on a piece of paper and verify by hand that it is correct. Of course, your proof may employ algorithms as mathematical entities; but then again, deterministic and randomized tests are no different: if you prove that the Miller-Rabin test declares a number n as prime with probability at least 1/2, it is a valid proof that n is prime.
- In a perfect world, appropriate links suffice. In reality, people do not follow links, especially if they have a false impression that the link explains something well-known or obvious. Moving the section to Randomized algorithm would be useless, as all the information is already there. EJ 12:14, 9 Jun 2005 (UTC)
- I think it wouldn't hurt to briefly summarize some of the concerns about probabilistic algorithms in this context, since it's where many people first encounter probabilistic algorithms - then we tell them to follow the link for more information. Deco 03:00, 21 June 2006 (UTC)
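(To make the figures in the removed text concrete, here is a standard Miller-Rabin sketch in Python, added only as an illustration. Each round with a random base wrongly passes a composite with probability at most 1/4, so 25 independent rounds give an error bound of 4^−25 ≈ 9 × 10^−16; the randomness is in the choice of bases, not in the input.)

import random

def miller_rabin(n, rounds=25):
    # Probabilistic test: for composite n, the error probability is at most 4**(-rounds).
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                    # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # the internal "coin tosses"
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a witnesses that n is certainly composite
    return True                          # n is prime with overwhelming probability

print(miller_rabin(2**61 - 1))           # True: a Mersenne prime
print(miller_rabin(252601))              # False: a Carmichael number, still detected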
"cyclotomy test"
There's no such link in Wikipedia, and I don't see anything substantive about a "cyclotomy test" on the web except the erroneous link in this article! For the time being I've linked to "Root of Unity", which is a redirect from cyclotomy. 14:25, 21 December 2006 (UTC)
- Search on "cyclotomy primality proving". PrimeHunter 20:30, 21 December 2006 (UTC)
Hardy-Littlewood conjecture
There are several statements which go under that name. The one relevant to AKS is about the distribution of Sophie Germain primes, whereas Hardy-Littlewood conjecture currently redirects to an unrelated statement about the distribution of twin primes. So, please, do not mess with the piped wikilink. -- EJ 14:34, 14 January 2007 (UTC)
Primality testing using Primorials
There is a straightforward, unsophisticated method of deterministic primality testing involving the primorials, though I have no idea whether it is "fast". It accommodates any and all prime numbers of any size without false positives or gaps when the method is done properly.--Billymac00 15:38, 2 June 2007 (UTC)
- Could you be thinking about Wilson's theorem which uses factorials (not primorials)? It is far too slow to be useful. PrimeHunter 16:08, 2 June 2007 (UTC)
- A number n is prime iff gcd(n, (n - 1)#) = 1. But that's every bit as bad as Wilson's theorem. CRGreathouse (t | c) 21:02, 29 January 2009 (UTC)
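(A small Python sketch of CRGreathouse's observation, added for illustration; note that building (n − 1)# here already requires finding all smaller primes, which underlines why this is every bit as slow as Wilson's theorem.)

from math import gcd, prod

def primorial(m):
    # Product of all primes <= m, found with a simple sieve.
    is_comp = [False] * (m + 1)
    primes = []
    for p in range(2, m + 1):
        if not is_comp[p]:
            primes.append(p)
            for q in range(p * p, m + 1, p):
                is_comp[q] = True
    return prod(primes)

def is_prime_primorial(n):
    # For n > 1: n is prime iff it shares no factor with (n - 1)#.
    return n > 1 and gcd(n, primorial(n - 1)) == 1

print([n for n in range(2, 30) if is_prime_primorial(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]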
Probabilistic testing
When I read about a percentage being associated with probabilistic testing being correct (99.xx%), this might imply there are known documented failures of this testing. Does anyone know if this is the case, and if so, what are a few specific examples, for say Miller-Rabin? Or, is the percentage purely theoretically based, and there are no known, documented false primes reported in published work?--Billymac00 01:35, 17 June 2007 (UTC)
- Yes, many tests can say "probable prime" about a composite and have known examples of that. See Category:Pseudoprimes and Miller-Rabin primality test. See also http://mathworld.wolfram.com/Rabin-MillerStrongPseudoprimeTest.html for some composites which are not detected by a few Miller-Rabin tests with small bases. PrimeHunter 01:52, 17 June 2007 (UTC)
- If you were to ask, has a failure ever occurred in practice that significantly impacted the world, such as damaging a cryptographic scheme, the answer is almost certainly no. They set the error extremely tiny for any case where it matters, and I'm not aware of any documented failure to this effect. The fact that such failures are possible is proven mathematically. Dcoetzee 08:08, 17 June 2007 (UTC)
Naïve Methods Generalization
[edit]"Generalising further, it can be seen that all primes are of the form c#k + i for i < c# where i represents the numbers that are coprime to c# and where c and k are integers. For example, let c = 6. Then c# = 2 \cdot 3 \cdot 5 = 30. All integers are of the form 30k + i for i = 0, 1, 2,...,29 and k an integer. However, 2 divides 0, 2, 4,...,28 and 3 divides 0, 3, 6,...,27 and 5 divides 0, 5, 10,...,25. So all prime numbers are of the form 30k + i for i = 1, 7, 11, 13, 17, 19, 23, 29 (i.e. for i < 30 such that gcd(i,30) = 1). Note that if i and 30 are not coprime, then 30k + i is divisible by a prime divisor of 30, namely 2, 3 or 5, and is therefore not prime."
This part doesn't seem to be correct. Given the example, if c=6, and the rule is 30k + i, with i=1, 7, 11, 13, 17, 19, 23, 29, then we see that for k=1 and i=19, 30k+i=49 = 7x7, which is not a prime.
Prajo (talk) 17:49, 30 June 2009 (UTC)
- You are misreading it. It says that all primes are of the given form, it does not say that all integers of that form are primes. That's the other implication. — Emil J. 17:57, 30 June 2009 (UTC)
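(For illustration only, a minimal Python sketch of how the quoted 30k + i observation is normally exploited: after checking 2, 3 and 5, trial division only needs candidates in the eight residue classes coprime to 30.)

import math

WHEEL = (1, 7, 11, 13, 17, 19, 23, 29)        # residues i < 30 with gcd(i, 30) = 1

def is_prime_wheel30(n):
    if n < 2:
        return False
    for p in (2, 3, 5):
        if n % p == 0:
            return n == p
    limit = math.isqrt(n)
    k = 0
    while True:
        for i in WHEEL:
            m = 30 * k + i
            if m > limit:
                return True
            if m > 1 and n % m == 0:          # m = 1 is skipped on the first turn
                return False
        k += 1

print([n for n in range(2, 60) if is_prime_wheel30(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]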
Introduction
[edit]"As of 2009[update], factorization is a computationally hard problem, whereas primality testing is comparatively easy (its running time is polynomial)." Factorization also has a polynomial running time. This sentence is misleading. I offer to delete or change "(its running time is polynomial)" part. —Preceding unsigned comment added by 188.3.128.114 (talk) 13:33, 10 July 2009 (UTC)
- No, there is no known factorization algorithm which can produce the factorization in polynomial time in the size of the input, meaning in polynomial time of log(n) where n is the number to be factored. See more at Integer factorization#Difficulty and complexity. Don't make this change. PrimeHunter (talk) 13:57, 10 July 2009 (UTC)
Really?
In the section Naive Methods "A good way to speed up these methods (and all the others mentioned below) is to pre-compute and store a list of all primes up to a certain bound, say all primes up to 200."
This is a small point, and I may be wrong about this, but I was under the impression that with a modern processor it took less time to compute this list during run time than to fetch it from a hard-disk. Unless you mean "pre-compute during the program and save into memory" instead of "read from a file"; but this sentence indicated the latter to me on first read. 128.120.218.130 (talk) —Preceding undated comment added 23:19, 16 July 2009 (UTC).
- It depends what you mean "from a file" - this may be programmed into the compiler and/or compiled into the program. A variety of common mathematical operations are often stored non-mathematically since they are repeated so many times in so many contexts. Moreover, even if prime number detection is polynomial, that doesn't beat constant time detection. Jztinfinity (talk) 13:21, 2 August 2011 (UTC)
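(A hedged Python sketch, mine rather than the article's, of the pattern the quoted sentence seems to describe: the small primes are computed once, kept in memory, and reused as the first stage of every later test; the bound 200 is just the article's example.)

import math

def primes_up_to(bound):
    # Sieve of Eratosthenes, run once at start-up.
    is_comp = [False] * (bound + 1)
    primes = []
    for p in range(2, bound + 1):
        if not is_comp[p]:
            primes.append(p)
            for q in range(p * p, bound + 1, p):
                is_comp[q] = True
    return primes

SMALL_PRIMES = primes_up_to(200)                 # precomputed and stored in memory

def is_prime(n):
    if n < 2:
        return False
    for p in SMALL_PRIMES:                       # cheap first stage, reused by every call
        if p * p > n:
            return True
        if n % p == 0:
            return n == p
    for m in range(211, math.isqrt(n) + 1, 2):   # no prime factor <= 200: plain trial division
        if n % m == 0:
            return False
    return True

print(is_prime(199), is_prime(9973), is_prime(10001))   # True True False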
History section?
I would find a history section of this article interesting and helpful. When learning something new, I find it helpful to study historical beginnings along with logical beginnings. I find this also lends insight into the "naive" ways of thinking and generally makes me feel less stupid starting out.
I don't really know anything about the history of primality testing, but I guess the written record begins with Euclid's treatment of the primes in Books VII–IX of the Elements. If a history section would be helpful to anyone else, then I'm all for working with people on it. —Preceding unsigned comment added by 67.164.32.152 (talk) 13:06, 23 April 2011 (UTC)
Trial division
I think the primality test should be anything from sqrt(n-1) to (1/2)*n. Anything else can be discarded. --Mnoon (talk) 20:59, 28 September 2011 (UTC)
- I guess you refer to trial division. sqrt(n-1) to (1/2)*n would be possible but extremely inefficient. 2 to sqrt(n-1) is far faster because there are far fewer numbers to test. For example, if n = 10^10 then sqrt(n-1) to (1/2)*n is 100,000 to 5,000,000,000 instead of 2 to 100,000. If m is an integer from sqrt(n-1) to (1/2)*n and m divides n, then n/m is an integer from 2 to sqrt(n) and n/m divides n, so the same factors can be found by testing the two intervals. PrimeHunter (talk) 22:50, 28 September 2011 (UTC)
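(A tiny Python illustration of the pairing PrimeHunter describes, added for clarity: every divisor of n above √n is n/m for some divisor m below √n, so scanning 2 to √n already finds every factor.)

import math

n = 5183                              # 71 * 73, a small composite chosen for the example

below = [m for m in range(2, math.isqrt(n) + 1) if n % m == 0]
above = [n // m for m in below]       # the matching divisors above sqrt(n)
print(below, above)                   # [71] [73]: the short interval already exposes the factor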
The Fastest Deterministic Primality Test
How do I introduce the fastest Deterministic Primality Test Method? I created a 1970's program which outlines the method in a mere 70 lines; it utilizes the method devised by John L. Selfridge without the need for a decision tree. It is by far the quickest and simplest test out there...
100 CLS : C = 1
102 DIM A(10001), LT(1), T(10001)
104 FOR N = 3 TO 7919 STEP 2
105 IF N MOD 3 <> 0 AND N MOD 5 <> 0 AND N <> 7 THEN
106 REM *** Compute the numbered terms of the L-Sequence ***
108 A(0) = N: A(1) = (N + 1) / 2: A(2) = A(1) - 1
110 FOR I = 3 TO 10001 STEP 2
112 IF A(I - 1) < 3 THEN
114 IF A(I - 1) = 2 THEN
116 A(I) = 1: A(I + 1) = 0
118 M = I + 1: I = 10001
120 ELSE
122 A(I) = 0: M = I: I = 10001
124 END IF
126 ELSE
128 IF A(I - 1) MOD 2 = 1 THEN
130 A(I) = A(I - 2) / 2
132 ELSE
134 A(I) = (A(I - 2) + 1) / 2
136 END IF
138 A(I + 1) = A(I) - 1
140 END IF
142 NEXT I
143 REM *** I think there's more parity change near, on, or after ((N-1)/4) ***
144 D = 0: L = INT((N - 1)/4): LL = L: LT(0) = 2: LT(1) = 2
146 WHILE (L < N - 2)
148 T(M) = 2: T(M - 1) = L: T(M - 2) = (T(M - 1) ^ 2 - 2) MOD N
150 IF T(M - 2) < 2 THEN T(M - 2) = T(M - 2) + N
152 IF T(M - 2) <> 2 THEN
154 K = 0: Z1 = 0: Z2 = 0
155 REM *** Compute the values of the terms in the L-Sequence ***
156 FOR J = M - 3 TO 0 STEP -1
158 IF A(J) MOD 2 = 1 THEN
160 IF A(J) = A(J + 1) + A(J + 2) THEN K = 0 ELSE K = 1
162 T(J) = (T(J + 1 + K) * T(J + 2 + K) - L) MOD N
164 ELSE
166 T(J) = (T(J + 2) ^ 2 - 2) MOD N
168 END IF
170 IF T(J) < 2 THEN T(J) = T(J) + N
172 NEXT J
174 Z1 = (T(2) ^ 2 - 2) MOD N
176 IF Z1 < 2 THEN Z1 = Z1 + N
178 Z2 = (T(1) ^ 2 - 2) MOD N
180 IF Z2 < 2 THEN Z2 = Z2 + N
181 REM *** Evaluate whether N is prime or composite ***
182 IF T(0) = L THEN
184 IF Z1 = 2 AND Z2 = T(M - 2) THEN
186 LT(D) = -1
188 ELSE
190 IF Z1 = T(M - 2) AND Z2 = 2 THEN
192 LT(D) = 1
194 ELSE
195 IF LMAX < L - LL THEN LMAX = L - LL
196 PRINT N; "is composite!": L = N - 2
198 END IF
200 END IF
202 ELSE
203 IF LMAX < L - LL THEN LMAX = L - LL
204 PRINT N; "is composite!": L = N - 2
206 END IF
208 IF LT(0) = -LT(1) THEN
209 IF LMAX < L - LL THEN LMAX = L - LL
210 PRINT N; L; "is prime!": L = N - 2: C = C + 1
212 END IF
214 D = 1
216 END IF
218 L = L + 1
220 WEND
222 ELSE
224 IF N = 3 OR N = 5 OR N = 7 THEN
226 PRINT N; "is prime!": C = C + 1
228 ELSE
230 PRINT N; "is composite!"
232 END IF
234 END IF
236 NEXT N
237 REM *** LMAX + 1 is the maximum number of trials in the range ***
238 PRINT C; LMAX + 1
240 END
Enjoy! An eighth-grader could verify this result, and I'm not violating any copyright. 99.142.30.54 (talk) 03:25, 29 August 2012 (UTC)
- Unexplained and uncommented code with illuminative variable names like L and A is human unreadable (at least for this human). Is it some variant of the Baillie–PSW primality test?—Emil J. 13:38, 30 August 2012 (UTC)
I can't help the fact that this editor doesn't keep track of {white space}, and my code does have comments in it. All you have to do to be human is to copy and paste my program into your own editor and force the {spaces} and {carriage returns}. Otherwise, examining my code would indeed be very simple!99.142.32.120 (talk) 19:46, 4 September 2012 (UTC)
- I have wrapped your code in
<pre>...</pre>
but it's still unreadable to me. Anyway, our verifiability policy means the test must be published in a reliable source stating it works correctly and is the fastest before it can be considered for inclusion in Wikipedia. If the source isn't a well-respected peer-reviewed math journal and the test hasn't been publicly supported by prominent number theorists then Wikipedia may still reject it per WP:REDFLAG. Lots of people have claimed to invent fast primality tests which never fail but they often turn out to be probable prime tests with no valid primality proof. And those who bother to examine them often find counter examples, meaning composite numbers which are allegedly prime according to the test. PrimeHunter (talk) 22:57, 4 September 2012 (UTC)
/2
I don't think you need to test numbers above n/2 because everything will give 0.x anyway. 71.204.170.66 (talk) 04:00, 25 September 2013 (UTC)
- Please clarify what you mean. If you are suggesting a change to the article then say exactly which part you want to change. PrimeHunter (talk) 11:56, 25 September 2013 (UTC)
Wrong information in Naive methods section.
That section states that "[A]ll prime numbers are of the form 30k + i for i = 1, 7, 11, 13, 17, 19, 23, 29." That's not true. 2, 3 and 5 are the only prime numbers that are not of that form. It didn't specify for some integer k either. Blackbombchu (talk) 02:30, 4 October 2013 (UTC)
Question on notation
[edit]Moved to its own section for clarity
I am not sure if it is a mistake or a notation that I don't know, so the question: Is the equivalence sign used in the math expression given a mistake or some special notation in number theory? If it is not a mistake then some indication of what kind of notation it is would enrich the text. Hebert Peró (talk) 16:54, 4 May 2014 (UTC)
- Do you mean the notation ≡? If so, this is the congruence relation in modular arithmetic. Deltahedron (talk) 17:10, 4 May 2014 (UTC)
Naïve Methods section sample implementations
The sample implementations may give a nice impression of how similar various languages are when used to express a certain problem, but they do not express the problem of determining whether a given number is prime. The loops all start at 5 and then take steps of 6, which means they skip the number 49, which is therefore falsely marked as prime. — Preceding unsigned comment added by 77.173.221.73 (talk) 08:01, 13 February 2015 (UTC)
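(For concreteness, a hedged Python sketch, not the article's code, of a 6k ± 1 loop that tests both members of each pair, so that 49 = 7 × 7 is rejected as it should be.)

import math

def is_prime_6k(n):
    if n < 2:
        return False
    if n % 2 == 0 or n % 3 == 0:
        return n in (2, 3)
    i = 5
    while i <= math.isqrt(n):
        if n % i == 0 or n % (i + 2) == 0:   # test both 6k - 1 and 6k + 1
            return False
        i += 6
    return True

print(is_prime_6k(49))                        # False: caught by i + 2 = 7
print([n for n in range(2, 60) if is_prime_6k(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]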
A Simpler & Shorter JAVA implementation
[edit]public class Primes { public static void main(String[] args){ int p = Integer.parseInt(args[0]); boolean NotPrime = true; boolean result = true; for(int i=2; i< p; i++){ NotPrime = (p % i != 0); result = (NotPrime && result); } if( result == true){ System.out.println(p + " is prime"); }else{ System.out.println(p + " is not prime"); } } }
Heuristic versus probabilistic primality tests
[edit]Under "Heuristic tests", the article says, "The Baillie-PSW primality test is another excellent heuristic, ..."
Actually, the Baillie-PSW primality test is a probabilistic primality test, in exactly the same sense that Fermat, Solovay-Strassen, Miller-Rabin, Lucas, and Frobenius tests are probabilistic. I recommend moving the description of B-PSW to a subsection under "Probabilistic tests", alongside the other probabilistic tests. Comments? MathPerson (talk) 14:34, 31 May 2019 (UTC)
- Seeing no objections, I moved the description to within "Probabilistic tests". MathPerson (talk) 01:22, 22 June 2019 (UTC)
Primality test algorithm
Found out that there are some algorithms to test for primes.
if . Else .
Same rules for three different algorithms.
These algorithms are based on the sieve of Eratosthenes,
where will be used to represent all numbers till ,
and the product which is built up to "withdraw" the numbers which are multiples of in .
The first derivative of the function at the value equals 3.62759872847, which seems, if I remember correctly, to be the sum of the Dedekind eta function.
Greetings Lincalm Lincalm (talk) 14:44, 22 October 2019 (UTC)
- It sounds like you are trying to describe very convoluted ways to do trial division, maybe original research by you. The normal simple way is already in Primality test#Simple methods. There is no reason to mention far more complicated and impractical ways in the article. PrimeHunter (talk) 00:48, 23 October 2019 (UTC)
I would wish that there were someone who tests whether the algorithm f(x) is better in comparison to other algorithms in terms of its runtime.
I've had some problems computing the algorithms h(x) and g(x) because the precision of the function sin(x) was too low. Even using in computing was bad because of the representation of decimal places; besides, using imaginary numbers when computing g(x) seems to be too difficult and impracticable, so I can suggest using f(x) for best computational success.
Would be nice if someone tries to handle it. Lincalm (talk) 06:52, 23 October 2019 (UTC)
On the one hand, I wanted to express the Sieve of Eratosthenes in a more mathematical way so that no withdrawing is needed and maybe the list disappears. Also in the hope that the computational output turns into a better solution, like better runtime when testing for primes, using less space, being more exact, etc. On the other hand, I was hoping to understand the properties which the primes come up with a little bit better. Lincalm (talk) 10:34, 23 October 2019 (UTC)
- Any direct implementation will be unnecessarily complicated and slow compared to normal trial division which just tests whether x is divisible by numbers from 2 to √x. Most programming languages have a simple loop structure which can easily do this, and stop as soon as a factor is found. Here is a primitive PARI/GP example you can run at https://pari.math.u-bordeaux.fr/gp.html without having programming software:
is_prime(x) = for(n=2, sqrtint(x), if(x%n == 0, return(0))); return(1)
for(i=2, 100, if(is_prime(i), print1(i," ")))
- PARI/GP already has an isprime function and hundreds of other mathematical functions. PrimeHunter (talk) 11:46, 23 October 2019 (UTC)
import math
import numpy
x=7.0   #insert number to test to, here
n=2.0
a=x%1
q=1
while(n<=numpy.sqrt(x)):
    q=q*(x%n)
    if (x%n==0):
        break
    n=n+1
try :
    y=a/q
except ZeroDivisionError:
    y=1
if(y==0):
    print("x value ",x,"y value ",y)
else:
    print("x value ",x,"y value ",y)
My code looks like that. It is written in Python. If you want to calculate high numbers I would suggest adding a decimal point and a zero to the value in variable x, because of some converting problems from int to float when high numbers are reached. Lincalm (talk) 09:20, 24 October 2019 (UTC)
01.11.2019
Tested my program vs another to check which runtime seems to be better.
Number: 10000000000003
tested with:
from time import *
import numpy
t1=clock()
n=10000000000003.0
prim=True
if n==1:
    prim= False
else:
    i=2
    while i<=numpy.sqrt(n):
        if n%i==0:
            prim=False
            break
        i=i+1
print("n= ", n,prim)
t2=clock()
t=t2-t1
print("Runtime in sec ", t)
Time needed: 0.000484
VS.
Number: 10000000000003 tested with:
from time import *
import math
import numpy
#if y=0 then is x in P
#else x is not in P
x=10000000000003.0   # insert here the number to test to
n=2.0
a=x%1
q=1
t1=clock()
while(n<=numpy.sqrt(x)):
    q=q*(x%n)
    if (x%n==0):
        break
    n=n+1   #calculates the finite product
try :
    y=a/q   #calculates the product
except ZeroDivisionError:
    y=1000  #set y to 1000 so the algorithm can continue calculating instead of crashing because of the ZeroDivisionError
if(y==0):
    print("x value ",x,"y value ",y)
else:
    print("x value ",x,"y value ",y)
t2=clock()
t=t2-t1
print("Runtime in sec ",t)
Time needed: 0.000470
System and environment used: Wiko View XL Processor Quad Core 1.4Ghz Pydroid3 — Preceding unsigned comment added by Lincalm (talk • contribs) 07:22, 2 November 2019 (UTC)
This shows that my program is a little bit faster than the other program.
- The first program loops to n-1 and the other to sqrt(x) so that's a meaningless comparison. Never continue above sqrt(n) in trial factoring. You should also stop when a factor is found like my PARI/GP code. PrimeHunter (talk) 21:23, 1 November 2019 (UTC)
Too bad! I thought I'd found something. I've tested the algorithms in comparison to each other once again, with the result that it seems to be meaningless too.
Runtime for the First algorithm: Number: 10000000000003 36.231738 sec.
Runtime for the other: Number: 10000000000003 31.9732 sec.
So all in all nothing really to mention.
Thank you for your good eye. Lincalm (talk) 07:51, 2 November 2019 (UTC)
Thank you for the correction. After that, my code works better than before.
Look above Lincalm (talk) 12:36, 2 November 2019 (UTC)
Fermat is listed under Heuristics and Probability
[edit]The Fermat test is listed in the Heuristics section AND the Probabilistic section. Which one is it? Are there two tests? My professor said Fermat test is probabilistic but that doesn't give me enough references to correct the content of this article so I thought I'd add a "talk section" instead.
Heuristic tests
→The Fermat test and the Fibonacci test are simple examples, and they are very effective when combined.
Probabilistic tests
→Fermat primality test
Prime Power Test
For anyone interested, here's a simple method of determining if a number is a prime power: http://www.abdulbaki.org/math/Value-Counting_Up_to_N.pdf.
The file is also on vixra (2207.0177), since I don't have access to arxiv. BAbdulBaki (talk) 21:31, 21 October 2022 (UTC)
Fermat Deterministic
There is no mention of a deterministic version of Fermat's test. And of course there is one. If you randomly choose 1/2 of all bases [2,n-1] OR if you use the a^n % n == a equation instead, you can come to the absolutely certain conclusion that n is either prime or Carmichael. There is no chance of some arbitrary pseudoprime accidentally passing. If on the other hand you use the a^(n-1) % n == 1 equation and you systematically check all bases [2, sqrt(n)], you get definitive primality results.
This is clearly less efficient than trial division, however. The computations are more complex, and you're still checking the same range [2, sqrt(n)], except you also have to check **all** values; you can't skip. It's not practical; just proof of concept that Fermat **can** be made deterministic (a sketch follows at the end of this section).
The reason it works is because a^(n-1) % n == 1 will fail for composites - including Carmichaels - whenever a is not coprime to n. Unfortunately a^n % n == a does not make this distinction for Carmichaels (arguably the defining trait). If n is a composite, whether or not it's Carmichael, you're guaranteed to have a factor in the interval [2, sqrt(n)], which is not coprime. So a^(n-1) % n == 1 will fail, and you will have found at least one witness.
Even with a randomized test, Carmichaels are theoretically possible to detect, if the right base a is chosen and the right equation is used.
I just think it's worthwhile to mention. 63.40.186.52 (talk) 01:00, 17 April 2024 (UTC)
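(A minimal Python sketch of the variant described in the comment above, added only as an illustration of the idea, not as an established named algorithm: check a^(n−1) ≡ 1 (mod n) for every base a up to √n. For composite n the smallest prime factor p ≤ √n shares a factor with n, so p^(n−1) ≢ 1 (mod n) and p is a witness; as the comment notes, this is slower than plain trial division.)

import math

def fermat_all_small_bases(n):
    # Declare n prime iff a^(n-1) ≡ 1 (mod n) for every base a in 2..isqrt(n).
    if n < 2:
        return False
    for a in range(2, math.isqrt(n) + 1):
        if pow(a, n - 1, n) != 1:    # fails in particular whenever gcd(a, n) > 1
            return False
    return True

print(fermat_all_small_bases(561))   # False: the Carmichael number 561 = 3*11*17 is caught at a = 3
print(fermat_all_small_bases(563))   # True: 563 is prime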