# Is it possible to write a C++ program that finds all prime numbers from 2 to 2 billion in less than a second on an average 500 Euro PC?

This problem looks big, but it is surprisingly easy to get to grips with.

I didn’t write a C program; I used Python’s sympy library. The problem is solved with the sieve of Eratosthenes, which is very fast. Printing out that many numbers would take far too long, so we limit ourselves to computing their count. Nevertheless, every prime number is touched along the way. The following “program” can be tried out by anyone:

```
$ python3
Python 3.7.0 (default, Jul 8 2018, 13:12:33)
[Clang 9.1.0 (clang-902.0.39.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sympy
>>> sympy.ntheory.generate.primepi(2000000000)
98222287
>>>
```

I then packed this into a small program so that we can measure the computation time exactly:

```python
from timeit import default_timer as timer
import sympy

n = 2000000000
start = timer()
result = sympy.ntheory.generate.primepi(n)
end = timer()
print(end - start, "sec")
print("Prime numbers up to", n, "=", result)
```

The result is surprisingly good:

```
$ python3 prim2gig.py
1.720185739 sec
Prime numbers up to 2000000000 = 98222287
```
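For comparison, the sieve of Eratosthenes itself can be sketched in a few lines of pure Python. This is only an illustration of the algorithm, not what sympy actually runs internally (its internals are compiled and far faster):

```python
# A plain sieve of Eratosthenes -- illustrative sketch only,
# not sympy's actual implementation.
def sieve_count(n):
    """Count the primes <= n."""
    if n < 2:
        return 0
    is_prime = bytearray([1]) * (n + 1)   # 1 = "still considered prime"
    is_prime[0:2] = b"\x00\x00"           # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # cross out all multiples of p, starting at p*p
            is_prime[p * p :: p] = bytes(len(range(p * p, n + 1, p)))
    return sum(is_prime)

print(sieve_count(2_000_000))  # 148933 primes up to two million
```

Even this naive version handles a few million numbers in well under a second; the constant factor, not the algorithm, is what the C implementations improve.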

It should be said:

- My computer is a fast iMac that cost over 2000 €
- The library is certainly implemented internally in C
- Presumably it runs without any parallelism

So I *suspect* the program can hardly be made faster, because people optimize the sympy library as best they can. Whether the algorithm could be sped up much further by parallelization is an open question, though.
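In principle the counting step does parallelize: the range can be split into independent segments, each sieved with the same set of small base primes. Here is a minimal single-threaded sketch of that decomposition (the helper names are my own; a real parallel version would hand the segments to worker processes):

```python
import math

def base_primes(limit):
    """Primes <= limit via a small sieve; these seed every segment."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            flags[p * p :: p] = bytes(len(range(p * p, limit + 1, p)))
    return [i for i, f in enumerate(flags) if f]

def count_segment(lo, hi, primes):
    """Count primes in the half-open range [lo, hi)."""
    flags = bytearray([1]) * (hi - lo)
    for i in range(lo, min(hi, 2)):   # 0 and 1 are not prime
        flags[i - lo] = 0
    for p in primes:
        # first multiple of p in [lo, hi) that is >= p*p
        start = max(p * p, -(-lo // p) * p)
        flags[start - lo :: p] = bytes(len(range(start, hi, p)))
    return sum(flags)

n = 2_000_000
primes = base_primes(math.isqrt(n))
segment = 250_000
# Each count_segment call is independent -- this sum is exactly
# the work that a pool of workers could share.
total = sum(count_segment(lo, min(lo + segment, n + 1), primes)
            for lo in range(0, n + 1, segment))
print(total)
```

Segmenting is also what makes the sieve cache-friendly, which is presumably a large part of what the optimized C libraries exploit.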

Update: there is a prime-optimized library, primesieve, with a Python interface.

The above program is easily adjusted:

```python
from timeit import default_timer as timer
import primesieve

n = 2000000000
start = timer()
result = primesieve.count_primes(n)
end = timer()
print(end - start, "sec")
print("Prime numbers up to", n, "=", result)
```

then the following stunning result appears:

```
$ python3 prim2gig.py
0.39011539700000003 sec
Prime numbers up to 2000000000 = 98222287
```

I did not examine in detail what was optimized. If you are interested, you can find out more here:

But I can say with certainty:

**A 500-€ PC can do that!**

Another addendum: I was asked whether this is really Eratosthenes, since the Atkin algorithm from 2003 is faster (complexity $O(n)$ instead of $O(n \log \log n)$).

The Atkin algorithm is indeed asymptotically faster, but its implementations are significantly slower in practice. I have no idea how large $n$ has to be before Atkin wins.
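To get a feel for how slowly the $\log \log n$ factor actually grows, it can be tabulated directly. A toy calculation (an operation-count model only; `rel_cost` is my own name, and real timings also include memory and cache effects):

```python
import math

def rel_cost(ngig, ref=2):
    """Per-number cost of an O(n log log n) sieve at ngig billion,
    relative to the cost at ref billion (operation counts only)."""
    n = ngig * 1_000_000_000
    r = ref * 1_000_000_000
    return math.log(math.log(n)) / math.log(math.log(r))

for g in (2, 3, 4, 8):
    print(g, "gig:", round(rel_cost(g), 4))
```

The factor barely moves in this range, which is exactly why an asymptotic advantage like Atkin's takes enormous $n$ to pay off against a smaller constant factor.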

To demonstrate that this really is Eratosthenes, here is a slightly modified program that measures how the running time scales:

```python
import argparse
from timeit import default_timer as timer

import primesieve

parser = argparse.ArgumentParser()
parser.add_argument('ngig')
args = parser.parse_args()
gig = 1000000000
ngig = int(args.ngig)
ref = 2

# reference run: count primes up to ref * gig
start = timer()
result = primesieve.count_primes(ref * gig)
end = timer()
time = reftime = end - start

if ngig != ref:
    start = timer()
    result = primesieve.count_primes(ngig * gig)
    end = timer()
    time = end - start

print(time, "sec")
print("Prime numbers up to", ngig * gig, "=", result)
# per-gig time relative to the reference run
print("Relative to 2 gig =", time * ref / (reftime * ngig))
```

The result:

```
$ python3 prim2gig.py 2
0.324274937 sec
Prime numbers up to 2000000000 = 98222287
Relative to 2 gig = 1.0
$ python3 prim2gig.py 3
0.521080162 sec
Prime numbers up to 3000000000 = 144449537
Relative to 2 gig = 1.0564853665007026
$ python3 prim2gig.py 4
0.718000245 sec
Prime numbers up to 4000000000 = 189961812
Relative to 2 gig = 1.102473408074085
$ python3 prim2gig.py 8
1.587797588 sec
Prime numbers up to 8000000000 = 367783654
Relative to 2 gig = 1.227956084486786
```

You can see the relative per-gig computing time increase with $n$, as expected for Eratosthenes. An $O(n)$ algorithm like Atkin should come out at about 1.0 here.